Just a few weeks ago I was lucky enough to attend the PerfMatters conference, up in San Francisco, CA. It was a site speed enthusiast’s dream.

Not only were we around a bunch of other web performance nerds for two straight days, but we were also graced with talks by speed experts from Google, Netflix, Etsy, Akamai, and more. Plus I got to meet my two idols, Steve Souders and Paul Irish 🙂

Here’s a quick summary of what I got out of it, which is by no means exhaustive.

Enlightening Talks

Not all presenters had links to their slides available, but here were some of my favorites:

Another attendee, Cristina Shaver, took some really good notes as well, covering the talks that didn’t make their slides available.

Major Takeaways

A few interesting points that I learned:

All performance teams have their own metric that they focus on optimizing. For Pinterest they call it PWT, pinner wait time – and it’s different for each page. Etsy chose DomContentLoaded. Paul Irish suggested three to focus on: Time to Interactive, Speed Index, and First Contentful Paint.

The mathematical summary of each metric we look at is important. A mean average is going to include outliers. Many teams choose the median (aka percentile 50, or P50). Other teams look at P90 or P95 – the value that at least 90% (or 95%) of your users experienced.
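
To make the percentile idea concrete, here is a minimal sketch in Python. The load times are made-up illustration data, not real measurements from any of the talks:

```python
# Sketch: summarizing page-load samples by percentile instead of the mean.
# The timings below are made-up illustration data, not real measurements.

def percentile(samples, p):
    """Nearest-rank percentile: the value that at least p% of the
    samples are at or below."""
    ordered = sorted(samples)
    rank = -(-len(ordered) * p // 100)  # ceil(n * p / 100); rank is 1-based
    return ordered[rank - 1]

load_times_ms = [820, 950, 1010, 1100, 1240, 1300, 1450, 2100, 3900, 9800]

mean = sum(load_times_ms) / len(load_times_ms)  # 2367.0 – dragged up by the outlier
p50 = percentile(load_times_ms, 50)             # 1240 – the median
p90 = percentile(load_times_ms, 90)             # 3900
p95 = percentile(load_times_ms, 95)             # 9800
```

Note how the single 9.8-second outlier drags the mean well above what the typical (median) user actually experienced – exactly why teams report P50/P90/P95 instead.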

It’s crucial to segment your perf results by type of user. For example: show the effect of a speed improvement by user connection speed, browser, and user activity type (new user vs. power user). Each of those slices is going to tell you a different story, or reveal something previously unknown.

Unshard your domains – domain sharding was an old HTTP/1.1 best practice. Now, with better protocols like HTTP/2, it’s more efficient to have as much as you can coming from one domain/connection.
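
For instance, a hypothetical nginx server block with HTTP/2 enabled (the hostname is a made-up example) lets one connection multiplex all your assets, which is what makes sharding counterproductive:

```
# Hypothetical sketch: with HTTP/2, one connection multiplexes everything,
# so serving all assets from a single host is the efficient choice.
server {
    listen 443 ssl http2;
    server_name www.example.com;  # serve pages and assets from this one host
}
```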

Make use of the CPU and Bandwidth graphs below your WebPageTest waterfalls. You want high bandwidth usage (holes indicate wasted connection capacity), but low CPU usage (high CPU usage, usually caused by parsing and running CSS and JavaScript, bogs down the browser).

Compression: Brotli offers slightly better compression than gzip, but it takes about 3x as long on the server – so make sure your server can handle it, or use static (build-time) compression, which is becoming more widely supported.
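
Static compression just means precompressing assets once at build time instead of on every request. A minimal sketch using gzip (the same idea applies to Brotli via its CLI, if installed; the filename is a made-up example):

```shell
# Precompress an asset at build time; -k keeps the original, -9 is max compression.
# A server like nginx (with gzip_static on) can then serve app.css.gz directly,
# with zero per-request compression cost.
printf 'body { margin: 0; color: #333; }\n' > app.css
gzip -k -9 app.css   # produces app.css.gz alongside app.css
```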

To make users happy, shoot for 30% speed improvements – anything under 20% isn’t perceptible.

We all know third-party tracking scripts are annoying, but how can we tell exactly how much they slow down our site when they’re usually loaded asynchronously? Here’s a tip: use the Block feature of WebPageTest to block them from loading. Then you can give your marketing team evidence of the true effect of tracking pixels on your page speed.
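
In the WebPageTest UI this lives under the advanced settings’ Block tab; in a WebPageTest script it’s the block command. A sketch, where the blocked domains and the page URL are hypothetical examples:

```
block google-analytics.com doubleclick.net hotjar.com
navigate https://www.example.com/
```

Run the page once with and once without the block list, then compare the two waterfalls and metrics to quantify what the trackers cost you.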

Don’t argue synthetic vs RUM speed tracking – in reality we need both. Synthetic gives you a consistent environment to isolate performance changes in your code base. RUM is effective in validating your improvements across your users, and is the only way to measure the effect on your bottom-line revenue/sales/conversions.

Thank you

This was the first PerfMatters conference, and I’m really hoping it’s not the last. It was well organized, fun, and extremely insightful. Thanks so much to all the speakers, organizers, and attendees.

If you get the chance to go next year, do it!