Love seeing people use my software in different ways! If this looks interesting to you, try it out:
Also, I made a Twitter bot that posts samples every 30 minutes based on "interesting" Flickr photos using randomized Primitive settings:
I did add one additional step (a large n like this generates lots of extra triangles): running the output through SVGOMG to optimize out some of those extra bits.
That cut the generated SVG image size about in half, which I think is an excellent amount of bytes saved for relatively little effort. You can see the result on my site linked in my profile, or compare the PNG and SVG files from the source on my GitHub in this commit.
If you GZIP compress an SVG to SVGZ then you are going to save way more bytes, and if you have a half-decent webserver and a browser this side of 1999 it should do that automatically - https://en.wikipedia.org/wiki/HTTP_compression#Compression_s...
> FLIF is lossless, but can still be used in low-bandwidth situations, since only the first part of a file is needed for a reasonable preview of the image.
> A FLIF image can be loaded in different ‘variations’ from the same source file, by loading the file only partially. This makes it a very appropriate file format for responsive web design.
The loading video on http://flif.info/index.html makes this clear.
In most cases, if you have extra bandwidth that you could spend on a lossless FLIF, you would get better subjective quality from a higher-resolution JPEG at the same file size.
Are you saying that for the same bandwidth usage, you could load two separate lossy JPGs and they'd look better?
What if loading the first 10% of the FLIF for the low-res version means you can reuse that data and resume from the 11th percent when loading the high-res version?
The tests I've done are along the lines of this (I forget the exact numbers I used):
* JPEG: 10kB + 90kB
* FLIF: 100kB
JPEG usually wins for photographic source material. I was unable to come up with “reasonable” parameters where FLIF would win, but perhaps someone creative can figure that part out.
Note that the FLIF home page doesn't even claim that FLIF is superior to normal JPEG images... it only claims that it's superior to other lossless formats. For lossless formats, you can compare your desired metric (e.g. file size) for your corpus. For lossy formats, you can either fix subjective quality and compare size, or fix size and compare subjective quality. (Or compare some other metric, but these two are more common.)
These are completely different ways of evaluating compression algorithms, and it intuitively makes sense that different algorithms will be better if you evaluate them differently.
Consider that FLAC gives the best bitrate for lossless audio, but Opus gives a far better bitrate when you fix the subjective quality or a better quality when you fix the bitrate at reasonable rates.
Or consider that there is a wide spectrum of data compression algorithms, each of which performs the best depending on how you assign weight to compression speed, decompression speed, and compression ratio, and what is in your corpus. There is a surprising variety of new compression algorithms out there, some of which may be the best for your use case even though their compression ratios are significantly worse than other well-known algorithms (LZ4, LZFSE, Snappy, for example).
JPEG is designed for best quality at reduced bit rates, so it should not be surprising that it is good at doing that, even though it is old and newer algorithms are better.
I'm not sure it would. The nice thing about a lot of these SVG options (or even the smaller images as well), is that they can be embedded into pages. So not only do you see the image faster, you also reduce the number of remote file fetches. FLIF sounds like you'd still have to hit another server to see anything, even if the thing you see would display before it finishes loading.
This will become less of an issue as HTTP/2 spreads though.
That's a good point. Though presumably you could embed FLIF data into HTML using a data URI - https://css-tricks.com/data-uris/
Maybe you could use something like `srcset` to say "and also load the higher-quality version from the server"? https://responsiveimages.org/
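Building the data URI itself is trivial; a Node sketch (the image bytes are fake, and `image/flif` is a hypothetical MIME type — browsers would also need to understand the format):

```javascript
// Fake "image" bytes standing in for a small placeholder file.
const bytes = Buffer.from([0x46, 0x4c, 0x49, 0x46, 0x00, 0x01, 0x02, 0x03]);

// data:<mime>;base64,<payload> — works for any format the browser can decode.
const dataUri = `data:image/flif;base64,${bytes.toString("base64")}`;

// With srcset, the inlined placeholder could sit next to server-hosted versions
// (file names here are made up):
const img = `<img src="${dataUri}" srcset="photo-800.flif 800w, photo-1600.flif 1600w">`;
console.log(img);
```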
- Sites which work perfectly fine, except that all the images are blurry, low-res placeholders. These are so annoying because they're actually serving images. Their developers know how to do the right thing, but obstinately refuse to do so.
- Sites where small things don't work. They are annoying, but at least one can read them. Ars Technica gets a special mention for the fact that its articles are perfectly readable but the comments are not. I chalk this up to incompetence and laziness.
- Single page apps which really need to be single-page apps. They're annoying because they would almost certainly be better as native apps, but they really are doing something that the static web can't do.
I definitely disagree. If something is truly an application and can also be done on the web, I vastly prefer that over native apps. Not having to worry about different OSes, different machines, updating, deployments, etc. Webapps have significant advantages over native here IMO.
Seeing that it's such a small change, it really makes me wonder why they don't already include it.
For reading articles composed of just pictures and text on random websites, running code isn't (or at least shouldn't be) necessary. In this light, lack of images is a big problem: the user is forced to choose between not seeing half the content and running untrusted code.
Wouldn't WebAssembly make waiting for browser support unnecessary?
Second Life uses it ubiquitously; its progressive loading of textures is distinctive.
No, it will not make much difference. You rarely need lossless for anything but stuff like UI elements, which shouldn't be drawn as PNGs in the first place if they are simple graphics that can be drawn as SVG.
Don't need the full resolution of the 40KB lossless file? OK, sure, stop loading after 20KB, or 10KB, or 1KB, or whatever seems like enough.
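Since "stop loading" is just "stop reading bytes", you can also ask for exactly that up front with an HTTP Range request; a sketch (the URL and byte budget are made up, and the server must support ranges):

```javascript
// Build a Range header asking for only the first `kb` kilobytes of a file.
function rangeHeader(kb) {
  return { Range: `bytes=0-${kb * 1024 - 1}` };
}

// Hypothetical browser/Node usage:
//   const res = await fetch("https://example.com/photo.flif", {
//     headers: rangeHeader(10),
//   });
//   // res.status === 206 (Partial Content); body is the first 10 KB,
//   // which a progressive decoder can render as a preview.
console.log(rangeHeader(10).Range);
```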
Yes, SVG is still better for things like icons, because you get clear, resizable images with a tiny data transfer. But for previewing a photograph, FLIF seems ideal, and doesn't require any additional tools.
> for every image, only one file is required, ever. The optimization can happen entirely on the client side.
> A FLIF image can be loaded in different ‘variations’ from the same source file, by loading the file only partially. This makes it a very appropriate file format for responsive web design. Since there is only one file, the browser can start downloading the beginning of that file immediately, even before it knows exactly how much detail will be needed. The download or file read operations can be stopped as soon as sufficient detail is available, and if needed, it can be resumed when for whatever reason more detail is needed — e.g. the user zooms in or decides to print the page.
Yes, progressive JPEG has that property: each "pass" in the file adds detail, and you can stop at any point. Progressive JPEGs are more expensive to process but compress better than non-progressive ones. PNG also has an interlaced mode, though it compresses less well than non-interlaced PNG. Because of the way JPEG works, it also looks better early on, especially when the PNG renderer does no interpolation during the early interlacing passes.
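For reference, PNG's interlacing (Adam7) is a fixed two-dimensional 8x8 pattern of seven passes; a tiny sketch of which pass delivers which pixel:

```javascript
// Adam7: pixel (x, y) belongs to pass ADAM7[y % 8][x % 8] (1 = first, 7 = last).
const ADAM7 = [
  [1, 6, 4, 6, 2, 6, 4, 6],
  [7, 7, 7, 7, 7, 7, 7, 7],
  [5, 6, 5, 6, 5, 6, 5, 6],
  [7, 7, 7, 7, 7, 7, 7, 7],
  [3, 6, 4, 6, 3, 6, 4, 6],
  [7, 7, 7, 7, 7, 7, 7, 7],
  [5, 6, 5, 6, 5, 6, 5, 6],
  [7, 7, 7, 7, 7, 7, 7, 7],
];

function pass(x, y) {
  return ADAM7[y % 8][x % 8];
}

// After pass 1 you have 1/64 of the pixels — a very coarse spatial preview.
// Progressive JPEG instead refines frequency data over the whole image,
// which is why its early renders look smoother.
console.log(pass(0, 0), pass(4, 0), pass(0, 4)); // 1 2 3
```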
Interlaced GIF is actually pretty crappy as it's one-dimensional, so it has a "blinds" effect both during initial rendering and during refresh phases: https://nuwen.net/png.html
Wouldn't gif have been a poor format for that even with progressive loading?
IIRC, watching a JPEG paint on a 386/25 or maybe an early 486 was something that could likely be measured in full seconds or tens of seconds, not tenths of seconds.
It doesn't need it, folks. It's a static blog page.
Instead, every x seconds it executes another POST request with pretty much all the details they can gather (scroll from top, scrollable height, referrer etc.).
As soon as you start moving your cursor, the new requests start adding up very quickly, with lots of new params such as "experimentName: readers.experimentShareWidget" or "key: post.streamScrolled".
It really is collecting every single interaction with this page. As it's provided by Medium I'm sure it's part of their data collection program.
Which kind of puts the opening paragraph into question:
> I’m passionate about image performance optimisation and making images load fast on the web.
I've noticed the word "passionate" gets thrown around a lot cheaply.
It's seriously starting to lose all meaning.
Just because it can be cached doesn't mean it's free or even necessary.
Maybe I am unusual, but I have never in my life been annoyed at image load time, even in the dial-up days. I simply open a new tab or window and continue reading an article or writing code if loading takes too long. But when websites add "progressive" placeholders that don't decrease the load time but do increase CPU use, that annoys me, because they prevent multitasked work from getting done. If I don't care about the images on the page, I'll keep scrolling, but when my scrolling becomes laggy while an image placeholder is being rendered, that's just unnecessary.
So think carefully if you want to take over the resources of your users' computers, which trades visuals for feel. "Feeling" your website (interacting with it) is more important than its appearance at all points in the loading sequence.
(Edit for clarity: my site currently generates <12 KB blurred placeholders from high-def photos and only loads visible images to be more mobile-friendly, and I worry about client-side rendering impacting battery life.)
Code is here:
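The core of such a placeholder generator can be a plain block-average downscale, with CSS blur hiding the blockiness on the client; a dependency-free sketch on a grayscale pixel array (a real pipeline would run this per channel on RGB data decoded by an image library):

```javascript
// Downscale a w×h grayscale pixel array by factor f, averaging each f×f block.
function downscale(pixels, w, h, f) {
  const ow = Math.floor(w / f), oh = Math.floor(h / f);
  const out = new Uint8Array(ow * oh);
  for (let y = 0; y < oh; y++) {
    for (let x = 0; x < ow; x++) {
      let sum = 0;
      for (let dy = 0; dy < f; dy++) {
        for (let dx = 0; dx < f; dx++) {
          sum += pixels[(y * f + dy) * w + (x * f + dx)];
        }
      }
      out[y * ow + x] = Math.round(sum / (f * f));
    }
  }
  return { pixels: out, w: ow, h: oh };
}

// A 1920px-wide photo downscaled ~96× becomes a ~20px-wide placeholder —
// a handful of bytes even as PNG, with `filter: blur(...)` applied client-side.
const tiny = downscale(new Uint8Array(16).fill(128), 4, 4, 2);
console.log(tiny.w, tiny.h); // 2 2
```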
The thumbnails you refer to are part of EXIF data, partially designed to allow digital cameras to render previews inside the camera itself in more resource-constrained times.
I wish there was a way to disable this functionality.
It’s even worse when combined with lazy scroll-based loading: now you’re guaranteeing that I’ll see the unpleasant version briefly (especially in Australia, I imagine, where there’s generally higher latency on such requests than in the USA—but I haven’t tried Medium or similar sites from the USA, so I’m not sure if it’s as unpleasant there).
It’s worst of all when combined with lazy scroll-based loading and an unreliable internet connection: I load pages in places where I know I have an internet connection, and then read them in places where an internet connection is unavailable. With lazy loading of these things, I can no longer be confident that it’s actually loaded everything I need. Same deal with Medium’s blocked iframes for things like CodePen—that just means that the iframe is not loaded when I need it to be.
I want less magic, not more, because we’ve proven as an industry that we’re not responsible with magic, and always manage to make a mess with it.
I've had this happen with blurred images on some websites.
Not saying it isn't worth it, but AFAIK no-one has profiled this.
Although that was just for fun, not to create image previews or anything of the sort.
I prefer something very subtle to show that there is an image that is supposed to be there. If it is going to show a single color rectangle, I prefer it be translucent so as to attract even less attention.
Aside from my anecdote, there are many blog posts and experiment results out there suggesting that this works, and works well.
Also, I have suspicions about whether what you are measuring is 100% correlated with "better user experience." Lack of user drop-off CAN indicate a better user experience, but the concept of "clickbait" illustrates that you can gain short-term increases in attention in ways that both degrade the user experience and drive away users in the long term. I'm not saying this sort of thing is the same as clickbait, but still. I'm sure that, in theory, you could put lots of things in those placeholder spots that would increase the number of people who wait for the page to load, but that drive away users in the long term (e.g. blurry nudes).
When you have image-heavy content, please load an image when I'm one or two page heights above it. That way, when I get there, it will already be loaded. You could then just use plain single-color placeholders, because I (and, I'd guess, you too) never intend for them to be seen in the first place.
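Loading a couple of viewport heights ahead is exactly what IntersectionObserver's `rootMargin` is for; a browser sketch (the `data-src` attribute scheme is made up), with the margin computation pulled out as a testable helper:

```javascript
// Expand the observer's viewport by `pages` extra viewport heights below the fold.
function rootMarginFor(pages) {
  return `0px 0px ${pages * 100}% 0px`;
}

// Hypothetical browser usage: swap in the real image ~2 screens early.
// const io = new IntersectionObserver((entries) => {
//   for (const e of entries) {
//     if (e.isIntersecting) {
//       e.target.src = e.target.dataset.src; // real URL stored in data-src
//       io.unobserve(e.target);
//     }
//   }
// }, { rootMargin: rootMarginFor(2) });
// document.querySelectorAll("img[data-src]").forEach((img) => io.observe(img));
console.log(rootMarginFor(2)); // 0px 0px 200% 0px
```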
Here’s a demo I made using Primitive + Vivus: http://minimaxir.com/2016/12/primitive/
Effectively, you have to rewrite a big part of ActionScript's functionality in your browser to do really simple things.
It is greatly regrettable that browser makers have thrown out all declarative animation features and never thought of improving on them.
It is like saying that you don't need native video support because you can use JS to interpolate the initial image.
The JS graphics+SVG APIs, however, do have all the right primitives exposed to let you do flash-level animation. It's not a matter of incapability; it's just a matter of nobody having coded the right framework, or the right framework (i.e. the one Animate uses) being proprietary and without an open-source attempt to clone it.
That doesn't suggest browser vendors should step in and put the capability into the browser, any more than the inability to do realtime 3D without a framework like three.js or a game engine like Unity's HTML5+WebAssembly engine, suggests that browsers should create a common, native game-engine-like API.
There are probably more efficient ways of storing the vectors than SVG too, which would help the compression - it would be interesting to see how small these could get.
So what on earth are you talking about?
Why? To use progressive JPEG, you have to pre-recode the images (unless you have the money to dish out for an FPGA that recodes on the fly) and store the recoded copies. With this, you don't have to alter the original image.
My favorite method was to use an img tag with blur: you programmatically add the image, wait until, say, 10% of it has loaded, and abort the request. It will stay that way. When you need to load it fully, you restart the load, listen for progress events, and set the blur accordingly.
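A rough modern take on that trick uses fetch with an AbortController (the 10% threshold and the blur mapping are arbitrary choices); only the blur mapping is pure enough to check offline, so the browser parts are sketched in comments:

```javascript
// Map load progress (0..1) to a CSS blur: fully blurred at 0, sharp at 1.
function blurFor(progress) {
  const maxBlur = 20; // px, arbitrary
  return `blur(${Math.round(maxBlur * (1 - Math.min(1, progress)))}px)`;
}

// Hypothetical browser usage: abort after ~10% of the bytes, keep what decoded.
// const ctrl = new AbortController();
// const res = await fetch(url, { signal: ctrl.signal });
// const total = +res.headers.get("Content-Length");
// const reader = res.body.getReader();
// let got = 0;
// for (;;) {
//   const { done, value } = await reader.read();
//   if (done) break;
//   got += value.length;
//   img.style.filter = blurFor(got / total);
//   if (got > total * 0.1) { ctrl.abort(); break; } // stop at ~10%
// }
console.log(blurFor(0.1)); // blur(18px)
```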
Not sure why progressive JPG isn't the default. It's not any larger.
ImageTracer is a simple raster image tracer and vectorizer that outputs SVG, 100% free, Public Domain.
It's super easy to integrate this into your site w/ our Image component & GraphQL fragment.
See the source code for the page: https://github.com/gatsbyjs/gatsby/blob/master/examples/usin...
And component documentation https://www.gatsbyjs.org/packages/gatsby-image/
For many of these I like the SVG as much as the full image, and start to wonder if the preview/placeholder makes having the actual image irrelevant.
I myself fiddled with this idea in one of my side projects and ended up using a technique where I save the dominant colors from an image, which I use to create a CSS gradient as the placeholder.
The effect can be seen on https://epicpxls.com
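A minimal version of that idea, assuming you already have decoded RGB pixels (the quantization here is a crude 32-level bucketing, not proper clustering, and the two-stop gradient is an arbitrary choice):

```javascript
// Find the two most common (coarsely quantized) colors, emit a CSS gradient.
function dominantGradient(pixels) { // pixels: array of [r, g, b]
  const counts = new Map();
  for (const [r, g, b] of pixels) {
    // Quantize each channel to 32-value steps so near-identical colors merge.
    const key = [r, g, b]
      .map((c) => Math.min(255, Math.round(c / 32) * 32))
      .join(",");
    counts.set(key, (counts.get(key) || 0) + 1);
  }
  const top = [...counts.entries()]
    .sort((a, b) => b[1] - a[1])
    .slice(0, 2)
    .map(([key]) => `rgb(${key})`);
  return `linear-gradient(${top.join(", ")})`;
}

// Three near-identical blues plus one light pixel → blue-to-white gradient.
const demo = [[10, 10, 200], [12, 9, 198], [240, 240, 240], [8, 11, 201]];
console.log(dominantGradient(demo));
```

The resulting string drops straight into a `background-image` style on the placeholder element.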
Here's one someone else did: https://vimeo.com/210854333
> The images generated with 100 shapes are larger, as expected, weighting ~5kB after SVGO (8kB before).
Looking at images that randomly appear on my hard drive:
- A 1KB PNG is a tiny image, 45x30 pixels. The Base64 part of its data URI alone is 1476 characters long.
- A two-color 152x30px PNG containing simple text and a logo is already 3KB. Its Base64 is 3344 characters long.
I don't have any small JPEGs though. The smallest I have is a selfie, 960x960 pixels. It's 79KB in size.
My guess is that you probably can produce a tiny JPEG/PNG that would beat SVG, but you'd have to play around with a lot of settings for it: reduce quality etc.
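Those character counts line up with Base64's fixed overhead — every 3 input bytes become 4 output characters (with padding) — so a quick sanity check:

```javascript
// Base64 length (with padding) for n input bytes.
function base64Length(n) {
  return 4 * Math.ceil(n / 3);
}

// 1476 characters therefore encode 1476 / 4 * 3 = 1107 bytes,
// i.e. the "1KB" PNG (plus the data URI's MIME prefix on top).
console.log(base64Length(1107)); // 1476
console.log(base64Length(1024)); // 1368 — a true 1024-byte file
```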