Hot on the heels of Off-Main-Thread Painting, our next big Firefox graphics performance project is Retained Display Lists!

If you haven’t already read it, I highly recommend David’s post about Off-Main-Thread Painting, as it provides a lot of background information on how our painting pipeline works.

Display list building is the process in which we collect the set of high-level items that we want to display on screen (borders, backgrounds, text and many, many more) and then sort it according to the CSS painting rules into the correct back-to-front order. It’s at this point that we figure out which parts of the page are currently visible on-screen.
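To make the idea concrete, here is a toy sketch of that process. The names (`DisplayItem`, `z_order`, `build_display_list`) are illustrative stand-ins, not Gecko’s actual types, and `z_order` is a drastic simplification of the real CSS painting-order rules:

```python
from dataclasses import dataclass

@dataclass
class DisplayItem:
    kind: str          # e.g. "background", "border", "text"
    z_order: int       # simplified stand-in for CSS painting order
    bounds: tuple      # (x, y, width, height)

def build_display_list(items, viewport):
    """Collect items, cull those not visible in the viewport, and sort
    them into back-to-front painting order."""
    vx, vy, vw, vh = viewport

    def visible(item):
        # Simple rectangle-overlap test against the viewport.
        x, y, w, h = item.bounds
        return x < vx + vw and x + w > vx and y < vy + vh and y + h > vy

    return sorted((i for i in items if visible(i)),
                  key=lambda i: i.z_order)
```

The real implementation walks the frame tree and applies the full stacking-context rules, but the shape of the work is the same: gather, cull, sort.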

Currently, whenever we want to update what’s on the screen, we build a full new display list from scratch and then use it to paint everything on the screen. This is great for simplicity: we don’t have to worry about figuring out which bits changed or went away. Unfortunately, it can take a really long time. This has always been a problem, but as websites get more complex and users get higher resolution monitors, the problem has magnified.

The solution is to retain the display list between paints, only build a new display list for the parts of the page that changed since we last painted and then merge the new list into the old to get an updated list. This adds a lot more complexity, since we need to figure out which items to remove from the old list, and where to insert new items. The upside is that in a lot of cases the new list can be significantly smaller than a full list, and we have the opportunity to save a lot of time.
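A heavily simplified sketch of that merge is below. In reality Gecko keys display items by their originating frame and a per-frame key, and computes proper insertion points for brand-new content; here a plain `"frame"` id stands in, and the merge keeps old items for untouched frames while splicing in the rebuilt items for modified frames:

```python
def merge_display_lists(old_list, new_partial, modified_frames):
    """Merge a partial display list (items rebuilt for modified frames)
    into the retained list from the previous paint."""
    # Group the freshly built items by the frame they came from.
    new_by_frame = {}
    for item in new_partial:
        new_by_frame.setdefault(item["frame"], []).append(item)

    merged = []
    emitted = set()
    for item in old_list:
        frame = item["frame"]
        if frame in modified_frames:
            # Drop the stale items; emit the rebuilt ones once, at the
            # position of the first old item for this frame.
            if frame not in emitted:
                merged.extend(new_by_frame.pop(frame, []))
                emitted.add(frame)
        else:
            merged.append(item)

    # Frames that are entirely new have no old position; append them
    # (the real merge computes a correct insertion point instead).
    for items in new_by_frame.values():
        merged.extend(items)
    return merged
```

Note that a modified frame with no items in the partial list simply has its old items removed, which is how deletions fall out of the same code path.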

If you’re interested in the lower level details on how the partial updates and merging works, take a look at the project planning document.

Motivation:

As part of the lead up to Firefox Quantum, we added new telemetry to Firefox to help us measure painting performance, and to let us make more informed decisions as to where to direct our efforts. One of these measurements defined a minimum threshold for a ‘slow’ paint (16ms), and recorded percentages of time spent in various paint stages when it occurred. We expected display list building to be significant, but were still surprised with the results: On average, display list building was consuming more than 40% of the total paint time, for work that was largely identical to the previous frame. We’d long been planning on an overhaul of how we built and managed display lists, but with this new data we decided that it needed to be a top priority for our Painting team.

Results:

Once we had everything working, the next step was to see how much of an effect it had on performance! We ran an A/B test on the Beta 58 population so that we could collect telemetry for the two groups, and compare the results.

The first and most significant change is that the frequency of slow paints dropped by almost 30%!

The horizontal axis shows the duration of the paints, and the vertical axis shows how frequently (as a percent) this duration happened. As you can see, paints in the 2-7ms range became significantly more frequent, and paints that took 8ms or longer became significantly less frequent.

We also see movement in the breakdown percentages for slow paints. As this only includes data for slow paints, it doesn’t include data for all the slow paints that stopped happening as a result of retaining the display list, and instead shows how we performed when retaining the display list wasn’t enough to make us fast, or when we were unable to retain the display list at all.



The horizontal axis is the percentage of time spent display list building, and the vertical axis shows how frequently that occurred (during a slow paint). You can see the 38-50% range dropped significantly, with a corresponding rise in all the buckets below that. The 51%+ buckets actually got a bit worse, but that’s expected, since the truly slow cases are the ones where we either fixed the problem (and got excluded from this data) or were unable to help. More on that later.

We also developed a stress test for display list building, known as “displaylist_mutate”, as part of our ‘Talos’ automated testing infrastructure. This creates a display list with over 10,000 items, and repeatedly modifies it one item at a time. As expected, we’re seeing more than a 30% drop in time taken to run this test, with very little time spent in display list building.
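The shape of that workload is easy to picture with a toy model (the helper names and counts here are illustrative, not the actual Talos test): build a list of 10,000 items once, then mutate a single item per “frame”. A full rebuild touches every item each time, while a partial update only rebuilds the one that changed:

```python
def full_rebuild(page):
    """Rebuild the entire display list from scratch (the old behaviour)."""
    return [f"item-{i}-v{v}" for i, v in enumerate(page)]

def partial_update(display_list, page, changed_index):
    """Rebuild only the item whose source changed (the retained behaviour)."""
    updated = list(display_list)
    updated[changed_index] = f"item-{changed_index}-v{page[changed_index]}"
    return updated

page = [0] * 10_000                  # 10,000 elements, each with a version
display_list = full_rebuild(page)    # first paint: always a full build
page[1234] += 1                      # mutate a single item
display_list = partial_update(display_list, page, 1234)
```

In this model each frame after the first does O(1) rebuild work instead of O(n), which is why a mutation-heavy stress test benefits so much from retaining the list.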

Future Work:

As mentioned above, we aren’t always able to retain the display list. We spend time working out which parts of the page changed, and if that ends up being everything (or close to it), then we still have to rebuild the full display list, and the time spent on the analysis was wasted. Work is ongoing to try to detect this as early as possible, but it’s unlikely that we’ll be able to entirely prevent it. We’re also actively working to minimize how long the preparation work takes, so that we can make the most of opportunities for a partial update.

Retaining the display list also doesn’t help for the first time we paint a webpage when it loads. The first paint always has to build the full list from scratch, so in the future we’re going to be looking at ways to make that faster across the board.

Thanks to everyone who has helped work on this, including: Miko Mynttinen, Timothy Nikkel, Markus Stange, David Anderson, Ethan Lin and Jonathan Watt.