Every year, fast and smooth experiences become more prevalent. Devices, apps, games, even movies keep getting smoother. This raises expectations, to the point where even my parents complain about the slowness of their car’s satnav compared to what they have on their phones. That makes it very important to focus on the performance of whatever you’re building, because the lack of it can seriously damage the joy of using your interface.

The current standard is the same for apps and video games: 60 fps feels buttery smooth, while anything below 30 fps feels choppy. Most importantly, any fluctuation will be clearly visible as stutter. You can mask this by avoiding animations, but it will still show in scrolling and general responsiveness.

In this article, I’ll focus on a new way of measuring the performance of web applications, but the same technique can be applied to any other software.

What can you do about it?

There are many great tools you can use to tackle the performance issues of HTML applications: the Swiss-army knife of Chrome Developer Tools, the micro-optimisation heaven of jsPerf and multiple page speed analysers, just to name a few. But as some wise people say, “You can’t improve what you can’t measure.” So, how do you measure front-end performance?

The most common approach is to measure page load time, from the click until the page is fully loaded. This tells you how quickly the user sees the content, or how quickly the app responds to clicks. It captures slow backend response times and unoptimised JavaScript, which is great, but does “300ms” tell the whole performance story?

I believe not. I believe the missing ingredient is: “was it buttery smooth?” And the simplest answer is…

Detect slow frames on client devices

Theoretically you could profile your app with dev tools and make sure it doesn’t lag, but that way you can only check a finite (and probably small) set of machines, browsers and datasets. It’s very hard to discover that one of your interactions started to lag for 10% of your users simply because their data became more complex than you envisioned. Nor will synthetic benchmarks assure you that your old templating engine is still performing well enough for your users and that you don’t have to switch to React right now…

What I want to tell you is that you should do your profiling directly on user devices, because that is where it really matters.

So, how could you do it?

First, you’ll need to detect slow frames in a loop. The best candidate is requestAnimationFrame, as it is guaranteed not to be called sooner than the next frame. A slow frame is one that took longer than, say, 1/30th of a second to complete. It’s a good idea to measure a few different buckets (like >100ms, >1s, etc.), as this will let you track your progress and add more depth to slowTime, the total time spent on slow operations. Remember that slowTime alone won’t tell you whether it was one long UI lock or a series of stutters. A quick and dirty mockup would look like this:

var frames = { count: 0, slow: 0, slowTime: 0, timestamp: performance.now() }

function measureFrame() {
  var now = performance.now()
  var time = now - frames.timestamp
  frames.timestamp = now
  if (time >= 1000 / 30) {
    frames.slow++
    frames.slowTime += time
  }
  frames.count++
  requestAnimationFrame(measureFrame)
}

requestAnimationFrame(measureFrame)

Secondly, you’ll want to measure a defined operation, like a page load, an animation, or the whole time the user spent on a specific page. You can store the state before running the action and log the difference of states when it finishes.

function startMeasure() {
  return _.clone(frames)
}

function stopMeasure(measure) {
  return {
    time: frames.timestamp - measure.timestamp,
    count: frames.count - measure.count,
    slow: frames.slow - measure.slow,
    slowTime: frames.slowTime - measure.slowTime
  }
}
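To see how the snapshot-and-subtract idea plays out end to end, here is a minimal, self-contained sketch. To keep it runnable outside a browser, the frame loop is simulated rather than driven by requestAnimationFrame, and Object.assign stands in for _.clone:

```javascript
// Minimal sketch: the counters plus start/stop measuring.
// The frame loop is simulated here so the example is self-contained.
var frames = { count: 0, slow: 0, slowTime: 0, timestamp: 0 }

function startMeasure() {
  // Snapshot the counters at the start of the operation
  return Object.assign({}, frames)
}

function stopMeasure(measure) {
  // Report only the delta accumulated during the operation
  return {
    time: frames.timestamp - measure.timestamp,
    count: frames.count - measure.count,
    slow: frames.slow - measure.slow,
    slowTime: frames.slowTime - measure.slowTime
  }
}

// Simulate 60 frames: every 20th takes 50ms (slow), the rest 16ms
var m = startMeasure()
for (var i = 0; i < 60; i++) {
  var frameTime = (i % 20 === 0) ? 50 : 16
  frames.timestamp += frameTime
  frames.count++
  if (frameTime >= 1000 / 30) {
    frames.slow++
    frames.slowTime += frameTime
  }
}
var result = stopMeasure(m)
// result.count === 60, result.slow === 3, result.slowTime === 150
```

In a real page you would call startMeasure() right before kicking off the operation (say, a route change) and stopMeasure() in its completion callback, then ship the resulting object to your logging endpoint.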

Great, you now have very basic performance monitoring! You can tell what percentage of frames were slow and how painful it was by looking at slowTime. Now, how could you make it better?

Measure only when the tab is visible and in focus

The goal of this metric is to tell whether the user interaction was smooth and fluent. So you’re only interested in the performance of a web page that is being interacted with: visible and in focus. Plus, browsers tend to slow down inactive tabs, which would render your measurements untrustworthy.

For this you can use the Page Visibility API, which is available in evergreen browsers and IE10+. It will tell you whenever the window becomes visible or hidden. Unfortunately, a visible window is not necessarily the window the user is interacting with, and its performance may be affected by the user’s actions in other windows.

The other option is the window’s focus and blur events. These tell you when the user actually focused or left your window. The caveat, though, is that you cannot tell at load time whether the window is focused; you only find out when that state changes.

The best solution depends mostly on your case, but I’ve found it best to use both simultaneously: enable the loop when the window is visible or focused, and disable it as soon as it’s hidden or loses focus.

Of course, as this is not an environment you control, there will always be a lot of false positives, so keep calm and use medians (not averages)!
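To illustrate why, here is a simple median helper; a single pathological sample (a background extension, a GC pause, a throttled tab that slipped through) barely moves the median, while it can wreck an average:

```javascript
// Simple median: sort a copy, take the middle value
// (or the mean of the two middle values for even-length input).
function median(values) {
  var sorted = values.slice().sort(function (a, b) { return a - b })
  var mid = Math.floor(sorted.length / 2)
  return sorted.length % 2 !== 0
    ? sorted[mid]
    : (sorted[mid - 1] + sorted[mid]) / 2
}

// One outlier among otherwise healthy frame times:
median([12, 15, 14, 13, 5000])  // → 14, while the average is over 1000
```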

Add more context

To better understand the issues, it’s best to log some accompanying information: something that gives you a hint of what could have been happening at the time. Since the frames object stores counters that are simply subtracted when reported, you can add the number of handled requests, rendered templates and other potentially heavy operations.
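A sketch of such context counters might look like this. The counter names requestsHandled and templatesRendered are hypothetical; increment whatever is actually heavy in your app:

```javascript
// Sketch: extend the frames object with app-specific counters.
var frames = {
  count: 0, slow: 0, slowTime: 0, timestamp: 0,
  requestsHandled: 0,    // hypothetical: bump from your networking layer
  templatesRendered: 0   // hypothetical: bump from your rendering layer
}

function trackRequest() { frames.requestsHandled++ }
function trackTemplate() { frames.templatesRendered++ }

// Reporting subtracts them like any other counter, so a measurement
// can say e.g. "10% slow frames while 12 templates were rendered".
function contextOf(measure) {
  return {
    requestsHandled: frames.requestsHandled - measure.requestsHandled,
    templatesRendered: frames.templatesRendered - measure.templatesRendered
  }
}
```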

That’s all folks!

Here at Base we’re experimenting with this approach to pinpoint the parts of our app in dire need of optimisation, and we’ll use this data afterwards to measure our progress.

Let me know in the comments what you think about it.