A little bit of context

The basic infrastructure at heycar was built in 6 weeks. Of course, at that time it was a very raw platform, but even so it was a huge achievement.

Our core differentiation is the quality of what we offer. Therefore, we all have to keep an eye on the User Experience. With that in mind, one of the first things I noticed when I joined the company was the huge payload of the search results page, caused mostly by our images.

The market we aim at is quite a peculiar one, so we had to adapt to it. The images come from integrations with car dealerships, funnelled into a unified API, and we have to deal with whatever they send us.

Most of the time they have a reasonable size for input, but they are neither normalised nor optimised. Sometimes you get a raw 2 MB image straight from the camera.

[Image: example partial screenshot of the before state, with ~7 MB of images per page]

We invested a bit of time in gathering data about image size, format, and aspect ratio.
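As an illustration (a minimal sketch, not the actual tooling we used), a quick Python script along these lines, using Pillow and requests with a list of image URLs, is enough to gather that kind of data:

    import io
    import requests
    from PIL import Image  # pip install pillow requests

    def image_stats(urls):
        """Collect size, format, and aspect ratio for a list of image URLs."""
        stats = []
        for url in urls:
            resp = requests.get(url, timeout=10)
            resp.raise_for_status()
            img = Image.open(io.BytesIO(resp.content))
            width, height = img.size
            stats.append({
                "url": url,
                "bytes": len(resp.content),
                "format": img.format,  # e.g. "JPEG", "PNG"
                "aspect_ratio": round(width / height, 3),
            })
        return stats

    # Hypothetical URL, just to show the shape of the output.
    for s in image_stats(["https://example.com/car.jpg"]):
        print(s)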

Later we measured the actual impact of the images on the page, and it was surprising:

The average size of a single image was 380 KB.

Given that we display 18 vehicles per page, that's 18 × 380 KB ≈ 6.7 MB, almost 7 MB of image data.

That is obviously a big deal, given the share of mobile devices we serve content to.

One of my first assignments was to figure out a way to optimise that.

Alternatives

The first option was to take advantage of our fan-out architecture and resize images to a reasonable resolution and quality at ingestion time, so that we could serve that version instead.

[Diagram: ingestion-time resizing]

The good part is that it doesn't impact load time, and it makes the result easy to cache. However, it is very limited when you consider what might come next, e.g. better mobile support or different pages requiring different image sizes…
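To make the idea concrete (a minimal sketch, not our production code), an ingestion-time resizing step could look like the following, assuming Pillow and a hypothetical target width and JPEG quality:

    import io
    from PIL import Image  # pip install pillow

    TARGET_WIDTH = 640  # assumed target resolution
    JPEG_QUALITY = 80   # assumed quality setting

    def resize_at_ingestion(raw_bytes: bytes) -> bytes:
        """Normalise a dealer image once, at ingestion time."""
        img = Image.open(io.BytesIO(raw_bytes))
        img = img.convert("RGB")  # drop alpha, normalise mode for JPEG
        # Keep the aspect ratio while capping the width.
        ratio = TARGET_WIDTH / img.width
        if ratio < 1:
            img = img.resize((TARGET_WIDTH, round(img.height * ratio)))
        out = io.BytesIO()
        img.save(out, format="JPEG", quality=JPEG_QUALITY, optimize=True)
        return out.getvalue()

The resized version would be stored and served instead of the original, so the cost is paid once per image at ingestion rather than on every page view, which is also what makes it so easy to cache.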

All of that would require us to implement something flexible and fast, in order not to hurt the time-to-market of our listings. We want them to be published as quickly as possible from the moment they are sent to us.