We recently spent time optimizing our font loading strategy on the Frontend Infrastructure team at NerdWallet. This is what we did and how we did it.

Goal

Our goal was to have our users spend less time in this phase:

CSSOM/DOM constructed and first paint has happened but something is missing 🤔

And more time in this phase:

That’s more like it

In terms of metrics, we relied on SpeedCurve’s time to H1 render to measure how our changes impacted font loading performance.

Some background on web fonts:

Web fonts require downloading

Modern browsers render text invisibly until the font has been downloaded (known as the Flash of Invisible Text, or FOIT)

Browsers won’t begin downloading a particular font file until after the render tree has been constructed and there is at least one node using a font variant that maps to that font resource

NerdWallet uses a web font (Gotham)

Choosing a font loading strategy

We found Zach Leatherman’s guide to font loading strategies extremely insightful when figuring out our options. For starters, it trivially answered the question of whether our existing font loading implementation (the unceremonious @font-face) was optimal:

😅

After analyzing the different font loading strategies, we found they boil down to three categories of optimization:

Reduce size of font files

Eagerly load fonts

Multiple-stage font rendering, with faux text in the earlier stages

The last item is unfortunately prohibitively difficult to achieve in our current CSS environment. In particular, we have font-families declared in many places, so we can’t trivially toggle a CSS class on the body to denote the font rendering stage (however we are in the process of rolling out a shiny new design system that should allow for this).

So none of the faux text strategies worked for us; however, we still had plenty of room for improvement with just the first two optimizations.

Reduce size of font files

The biggest way to reduce font file size is to cut out unused characters/glyphs. For example, it’s great that our web font has support for Greek characters (e.g. ‘β’), but those glyphs are unused bytes on most pages.

Thankfully, the CSS @font-face rule supports unicode-range subsetting. This lets you define the specific set of characters to which a given @font-face declaration applies; characters outside that range will not map to that font resource. Here’s an example for basic Latin characters:

```css
@font-face {
  font-family: 'Gotham';
  /* a src is required for the rule to load anything; the path here is illustrative */
  src: url('gotham-critical.woff2') format('woff2');
  unicode-range: U+0020-007F;
}
```

Using this, we can define a critical character subset of our font files that will cover all characters needed for most of our pages, and the inverse of that — the full character subset — all the remaining characters that will only be downloaded on a small percentage of pages.
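Putting that together, the two subsets can be declared as a pair of @font-face rules for the same family. A sketch of what ours looked like (file names and the exact ranges here are illustrative):

```css
/* Critical subset: basic Latin, needed on nearly every page */
@font-face {
  font-family: 'Gotham';
  src: url('gotham-critical.woff2') format('woff2');
  unicode-range: U+0020-007F;
}

/* Full subset: everything else; the browser only fetches this file
   if a character in this range actually appears on the page */
@font-face {
  font-family: 'Gotham';
  src: url('gotham-full.woff2') format('woff2');
  unicode-range: U+00A0-FFFF;
}
```

Because unicode-range is a hint the browser evaluates lazily, pages containing only critical characters never pay for the full file.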

Determine unicode ranges

We wrote a script ourselves, but afterwards discovered (and now recommend) glyphhanger, which will scrape/crawl your pages and output the unicode characters used on those pages.
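A sketch of that step with glyphhanger (the flags reflect our reading of its README; check glyphhanger --help for your installed version, and the URL is just a stand-in):

```
# Crawl a page, follow same-origin links, and print the
# unicode-range of every character found
npx glyphhanger https://example.com --spider --spider-limit=10
```

The output is a unicode-range string you can paste straight into a @font-face declaration or feed to a subsetting tool.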

After finding your critical characters, you can use the characterset package to generate the inverse set.
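If you’d rather avoid another dependency, the inverse is also easy to compute yourself. Here is a small self-contained sketch (not our original script) that subtracts a list of critical codepoint ranges from the rest of the Basic Multilingual Plane and formats the result in unicode-range syntax:

```python
def invert_ranges(critical, lo=0x20, hi=0xFFFF):
    """Given [(start, end)] critical codepoint ranges (inclusive),
    return the complementary ranges within [lo, hi]."""
    inverse = []
    cursor = lo
    for start, end in sorted(critical):
        if cursor < start:
            inverse.append((cursor, start - 1))
        cursor = max(cursor, end + 1)
    if cursor <= hi:
        inverse.append((cursor, hi))
    return inverse

def to_unicode_range(ranges):
    """Format codepoint ranges as a CSS unicode-range value."""
    return ", ".join(
        f"U+{s:04X}" if s == e else f"U+{s:04X}-{e:04X}" for s, e in ranges
    )

# Basic Latin is critical; everything else becomes the "full" subset
critical = [(0x20, 0x7F)]
print(to_unicode_range(invert_ranges(critical)))  # U+0080-FFFF
```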

Subset the fonts

The goal of subsetting is to create font files containing only the glyphs/characters needed. There is an open source Python tool, fonttools, that can help in this regard. Given a list of unicode ranges and an existing font file, it will create a new font file with all unnecessary glyphs pruned.

Glyphhanger provides a nice sugary wrapper around pyftsubset (the specific command line tool for subsetting fonts within fonttools), or you can use pyftsubset directly yourself.

For simplicity, we’d recommend using glyphhanger, as the pyftsubset tool is not super user-friendly. The advantage of doing it yourself is access to performance optimizations that glyphhanger is otherwise opinionated about.

We saw ~10kb critical subset font files with glyphhanger versus ~6kb with the most aggressive pruning options via pyftsubset. You should fully understand the tradeoff of each option, though, for which you’ll need to dive into the comments in the pyftsubset code.
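For reference, a direct pyftsubset invocation looks roughly like this. The last three flags are examples of the aggressive options mentioned above (dropping OpenType layout features, hinting, and CFF subroutines); treat the exact set as illustrative and research each one before shipping it:

```
pip install fonttools brotli   # brotli is needed for woff2 output

pyftsubset Gotham.ttf \
  --unicodes="U+0020-007F" \
  --flavor=woff2 \
  --output-file=gotham-critical.woff2 \
  --layout-features="" \
  --no-hinting \
  --desubroutinize
```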

(You’ll also need to confirm with your font provider that subsetting falls within your font license.)

Eagerly load fonts

As mentioned, browsers will not begin downloading a font resource until they know it will be used, which won’t happen until the CSSOM, DOM, and render tree have been constructed.

Given that we know which fonts are critical and can be loaded across the site, waiting for a browser to figure this out can delay the text loading by hundreds of milliseconds or even seconds on a slow connection. We can do better.

Inline vs Preload

We can force the browser to download our fonts much earlier, either by inlining the critical fonts directly into the CSS as a base64-encoded data URI, or by preloading them.
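The preload variant is a one-line resource hint in the document <head>. Note that the crossorigin attribute is required for font preloads even on same-origin requests, or the browser will fetch the file twice (file name illustrative):

```html
<link rel="preload" href="/fonts/gotham-critical.woff2"
      as="font" type="font/woff2" crossorigin>
```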

There are pros and cons to both, but ultimately the time to H1 render was lowest when we preloaded. Both strategies can impact start render, but we didn’t see much movement in Speed Index when preloading, which is the more important metric for analyzing page load impact.

Preloading has limited browser support, but by the end of the month (September 2017) it will be supported in the current version of all major browsers.

Results

Ultimately the results varied by page — we saw as much as a 30% improvement in time to H1 render (Chrome, 3G). Some samples:

Hey that’s not bad!

An example SpeedCurve chart from one page showing the impact pre/post:

Preloaded critical subset on the left, legacy font strategy on the right

We still have room for improvement, but we’ll take these results for our first pass at optimizing our fonts.

Resources

Thanks to @parshap / @zaneriley for their help!