Perform, perform, perform

In my last talk on Lighthouse integration, we grilled Australian websites under a slow 3G network and kicked off a performance improvement journey by integrating Lighthouse into code review. Here I am going to share a few tips on performant images.

Before the main topic:

Performance is not just a developer's fuss.

If you are interested, there is an in-depth article on why performance matters. Web developers should care about images: if you have a performance budget in your project, or in your mind, the No. 1 priority should be given to images. As the HTTP Archive reports:

According to the HTTP Archive, 60% of the data transferred to fetch a web page is images composed of JPEGs, PNGs and GIFs. As of July 2017, images accounted for 1.7MB of the content loaded for the 3.0MB average site.

Yes, images are over half of the web page payload. And compared with the JavaScript bundle, which we spend days and nights trying to split into meaningful chunks and lazy load, image processing is transparent and static. Today I am sharing a few pieces of low-hanging fruit with respect to images:

Fruit 1 — Progressive JPEGS

When a baseline JPEG loads, as we see most of the time, the image appears in a single top-to-bottom scan. A progressive JPEG, by comparison, renders as a series of scans of increasing quality. Check baseline JPEG vs progressive JPEG below:

The top-to-bottom scan reveals the image gradually; the user needs to wait for the full image to download.

A progressive JPEG unveils itself in a series of scans; the user doesn't need to wait if they are not interested in image details.

As you can see, progressive images give the user the perception that image loading is "visually complete", which makes the page feel fast. This conclusion is not just subjective; a number of big names have put progressive JPEGs under their belts:
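The two variants are also distinguishable in the file bytes themselves: a baseline JPEG declares its frame with an SOF0 marker (0xFF 0xC0), while a progressive JPEG uses SOF2 (0xFF 0xC2). A minimal sketch, assuming plain Node.js with no extra packages, to check which kind of JPEG you are actually serving:

```javascript
// Sketch: tell whether a JPEG buffer is baseline or progressive by
// looking at its Start-of-Frame marker. Baseline JPEGs use SOF0
// (0xFF 0xC0), progressive JPEGs use SOF2 (0xFF 0xC2).
function isProgressiveJpeg(buf) {
  for (let i = 0; i < buf.length - 1; i++) {
    if (buf[i] !== 0xff) continue;
    const marker = buf[i + 1];
    if (marker === 0xc2) return true;                      // SOF2: progressive DCT
    if (marker === 0xc0 || marker === 0xc1) return false;  // SOF0/SOF1: baseline
  }
  return false; // no frame marker found
}

// Usage (hypothetical file name):
// const fs = require('fs');
// isProgressiveJpeg(fs.readFileSync('carousel/bg6.jpg'));
```

Handy as a quick sanity check that your build pipeline really did re-encode the images.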

Twitter.com ships progressive JPEGs with a baseline quality of 85%. They measured user-perceived latency (time to first scan and overall load time) and found that, overall, PJPEGs were competitive at addressing their requirements for low file sizes and acceptable transcode and decode times.

Facebook ships progressive JPEGs in their iOS app. They found it reduced data usage by 10% and enabled them to show a good-quality image 15% faster.

Yelp switched to progressive JPEGs and found it was in part responsible for ~4.5% of their image size reduction savings. They also saved an extra 13.8% using MozJPEG.

From the numbers you can see it is faster, but not dramatically so. So I built a demo project to experiment:

I built a demo site from the create-react-app template and added a Material-UI carousel. I used the imagemin webpack plugin to make the images progressive. Turning on slow 3G network throttling in Chrome, I get the following scans:

the image (bg6) shows the initial coarse scan with 36 KB loaded

with the secondary scan (68 KB) loaded, the image looks almost the same as the full image on a 15-inch screen

the image keeps replacing itself with better scans, until the final scan arrives at the full size of 239 KB

The pictures above are self-explanatory. With the second scan of 68 KB, you get almost the full detail of the 239 KB image; that is a 71% reduction in the initial image payload. Proactive users who don't care about image details can treat the initial 36 KB scan as the full image and react to it. That is 84% less.
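The percentage savings follow directly from the scan sizes measured above:

```javascript
// Savings computed from the scan sizes above (in KB).
const fullScan = 239;
const secondScan = 68;
const firstScan = 36;

const savingAtSecondScan = Math.floor((1 - secondScan / fullScan) * 100); // 71
const savingAtFirstScan = Math.floor((1 - firstScan / fullScan) * 100);   // 84

console.log(`${savingAtSecondScan}% / ${savingAtFirstScan}%`); // "71% / 84%"
```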

So why not go progressive? Progressive JPEGs are useful on a slow network, harmless on a normal network and stupidly simple to make. Only a few lines of code make the difference. Check my commit applying this change for all the details; you only need to install a plugin and add it to your build process.
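For reference, the change amounts to roughly this. This is a sketch assuming the imagemin-webpack-plugin package; option names may differ between plugin versions, so check the plugin's docs against your setup:

```javascript
// webpack.config.js (sketch, assuming imagemin-webpack-plugin)
const ImageminPlugin = require('imagemin-webpack-plugin').default;

module.exports = {
  // ...the rest of your webpack configuration...
  plugins: [
    new ImageminPlugin({
      test: /\.jpe?g$/i,
      // jpegtran with `progressive: true` losslessly re-encodes
      // baseline JPEGs as progressive ones
      jpegtran: { progressive: true },
    }),
  ],
};
```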

Fruit 2 — Guetzli Compression

OK, let's see if we can go further with JPEG. When it comes to image formats, developers are picky, and JPEG remains the first choice for large images.

JPEG is the format of choice when neither animation nor fine detail is needed.

However, is a JPEG image really optimal down to every pixel? I came across a "magic" tool called Guetzli. It amazes me in two respects:

1. I cannot pronounce it, and I could not find the word "Guetzli" used anywhere else.

2. It can halve the size of your JPEG images with no visible quality loss.
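If you want to try it yourself, Guetzli ships as a command-line tool, and there is also an imagemin plugin for it. Here is a sketch of a Node script, assuming the imagemin and imagemin-guetzli packages and hypothetical source/destination paths; note that Guetzli refuses quality settings below 84:

```javascript
// compress.js (sketch, assuming the imagemin and imagemin-guetzli packages)
const imagemin = require('imagemin');
const imageminGuetzli = require('imagemin-guetzli');

(async () => {
  const files = await imagemin(['src/images/*.jpg'], {
    destination: 'build/images',
    plugins: [
      // Guetzli is slow (it can take minutes per megapixel) but trades
      // that CPU time for noticeably smaller files at the same
      // perceived quality.
      imageminGuetzli({ quality: 84 }),
    ],
  });
  console.log(`${files.length} images compressed`);
})();
```

Because of the heavy compression time, Guetzli is better suited to a one-off or build-time step than to anything on a request path.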