Update 2016-12-12: Some folks aren’t happy with the SSR’d code in the test, because it wasn’t generated the idiomatic way. Firstly, I’m more than happy to take PRs on the code and update numbers accordingly; secondly, I went through guides and tutorials to try and get it right (lest anyone think I was lazy); thirdly, if getting it right involves becoming an expert I think learning the web platform is probably a better investment; and finally, the actual values of SSR vs CSR are, in the scheme of things, relatively minor: the issue at stake here is that libraries and frameworks do not yield the main thread, and whether that’s for 300ms, 3,000ms, or 30,000ms is simply degrees of bad.

With that out of the way, let’s crack on!

I dunno if you ever watched The Incredibles, but if you did you may recall the bad guy, Syndrome, declaring to Mr. Incredible that “when everyone’s super… no-one will be.” That sentiment came back to me recently when I was working with Preact.

Firstly, let me say that I really like Preact. As with React (or any other VDOM-based library / framework), I totally get the functional approach, and I think the ergonomics are really nice! (JSX weirds me out a little, but nbd.) I wanted to be very clear on that fact because the last time I posted about frameworks some folks felt I was unduly critical, and I also want to say that I know that the people who work on frameworks are trying to make the world better. Many dedicate chunks of their spare time so we don’t have to write as much code. Okay, not hating on anyone here. Okay? Cool.

Anyway, what I wondered, as I looked at a DevTools Timeline recording, was “what does Preact think is special?”

Browsers are super good at prioritizing, largely because they’ve had many years of doing it.

Let me put this another way: browsers are super good at prioritizing, largely because they’ve had many years of doing it. And we, in turn, have learned to play the game: put async or defer on your scripts, put scripts at the end of your <body>, inline your critical CSS, lazy-load the rest, and on and on. We give the browsers a heap of hints (or work with the prioritization systems they have) to get the desired outcome.

If you’re interested in how Chrome sees priority you can right click in the Network panel’s table of requests and add the Priority column. Behold! Priorities!

Chrome DevTools showing request priorities.

Right, where was I? Yes, Preact… Actually this is not specific to Preact. Frameworks. Libraries. Performance.

The question is this: do modern libraries and frameworks prioritize components during boot? Is everything “super”, with everything normalized to the same highest priority, or do we see nuance in the booting process? To answer that question I feel we should, by way of background, define a couple of pertinent metrics: First Meaningful Paint (‘FMP’) and Time to Interactive (‘TTI’).

First Meaningful Paint

FMP is when we get something useful on screen (which is the bit that people actually wanted to use, not just your app shell). Most people suggest using Server-Side Rendering (‘SSR’) to get that time down. After all, if you send down HTML in the initial response you’re going to get something rendered sooner than if you don’t. Jake has done a bunch of research in this area, and that’s the TL;DR.

Time to Interactive

TTI generally translates to “if someone tries to interact with this thing, will it be able to respond?” If you look over a DevTools Timeline trace you can eyeball it yourself.

First Meaningful Paint and Time to Interactive on a DevTools Timeline

I’ve marked where I think the FMP and TTI are in the timeline above. You can find your FMP if you have Screenshots enabled and you look for when the main content gets shown. TTI is typically a case of looking, after FMP, for that moment when everything on the main thread calms down post-JavaScript.

Lighthouse is looking at ways to capture both measurements automatically, so you should definitely check that out if you’re interested.

Moving on…

Let’s imagine I’m building a web app which allows someone to comment on a story. Like Hacker News, or maybe Reddit, and I’m going to use a framework. A quick survey of those types of sites shows that you can see posts with thousands of (sometimes nested) comments.

What if we made a page like that with React, Preact, Vue, Custom Elements, and Vanilla? What… if…

(In fact, Reddit mobile uses React, so I’m not far off the mark.)

Sample setup

My demo setup: a page which loads a lot of the same Comment component

Here’s my setup:

A story synopsis, title and link.

500 comments. (I think this is a relatively modest count, some posts easily get into the thousands.)

Each comment has up- and down-vote buttons, which need to at least adjust the score value. (No need to reorder comments based on score.)
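The vote behaviour each Comment component needs is simple enough to express as a pure function. Here’s a minimal sketch of the score adjustment the up- and down-vote buttons perform (`applyVote` is a hypothetical name of mine, not something from the test repo):

```javascript
// Minimal sketch of the score adjustment behind the up- and down-vote
// buttons. `applyVote` is a hypothetical helper, not from the test repo.
function applyVote(score, direction) {
  if (direction !== 'up' && direction !== 'down') {
    throw new Error('direction must be "up" or "down"');
  }
  return direction === 'up' ? score + 1 : score - 1;
}
```

Each framework variant wires this same trivial logic up to its own event handling; the work per click is tiny, which is why the boot cost is the interesting number here.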

My primary hypothesis is that frameworks typically have no concept of priority that can be surfaced to developers. If that’s true then we will see a single, solid task when the JavaScript payload is delivered, one that locks up the main thread. For funsies I tried giving the various frameworks both Server-Side Rendered (pre-baked, like Flipkart does) and Client-Side Rendered versions of the page, to see if SSR changed the numbers at all.

I also added Custom Elements 1.0 and Vanilla variants to the mix to see how they came out.

Let’s see how the numbers came out on a Moto G running Chrome with a 3G connection.

Server-Side rendered (‘SSR’) - FMP & TTI

Tech                     FMP time    TTI time
Preact (7.1.0)           2,343ms     4,000ms
React (15.4.1)           2,746ms     4,850ms
Vue (2.1.4)              2,578ms     4,600ms
Ember (2.10.0)           N/A         N/A
Custom Elements (1.0)    2,456ms     2,700ms
Vanilla                  2,610ms     2,610ms

Client-Side rendered (‘CSR’) - FMP & TTI

Tech                     FMP time    TTI time
Preact (7.1.0)           3,042ms     3,200ms
React (15.4.1)           4,701ms     4,701ms
Vue (2.1.4)              3,948ms     3,950ms
Ember (2.10.0)           10,180ms    10,180ms
Custom Elements (1.0)    1,897ms     2,900ms
Vanilla                  1,824ms     2,700ms

Server-Side rendered - Script Duration & Component Mount Time

Let’s look at the time spent in scripting as well.

Script duration: the overall time taken by script in the boot, including the framework boot time, parse, evaluation, compilation and GC.

Mount time: the time taken to bootstrap the 500 comment components along with their up- and down-vote buttons.

Tech                     Script dur.    Mount time
Preact (7.1.0)           1,028ms        986.92ms
React (15.4.1)           1,272ms        1,219.10ms
Vue (2.1.4)              1,436ms        1,285.10ms
Ember (2.10.0)           N/A            N/A
Custom Elements (1.0)    204ms          197.99ms
Vanilla                  168ms          156.83ms

Client-Side rendered - Script Duration & Component Mount Time

Tech                     Script dur.    Mount time
Preact (7.1.0)           509ms          504.55ms
React (15.4.1)           1,183ms        1,139.70ms
Vue (2.1.4)              1,295ms        1,145.10ms
Ember (2.10.0)           5,565ms        3,986.70ms
Custom Elements (1.0)    564ms          516.78ms
Vanilla                  395ms          362.10ms

Notes, caveats, disclaimers, not-so-small print, provisos

And here are the things you need to bear in mind with these results.

These results are interim results. I may have made errors (despite attempting to avoid that!). You can check my code over on the GitHub repo if you like. Let’s chat on there if there are issues that need resolving.

All tests were done on a Moto G on a regular 3G connection.

TTI is eyeballed from the trace in WebPagetest, based on the main thread settling and First Meaningful Paint having happened.

Ember says fast-boot hydration is not ready for production, so I thought it fairer not to include it for the SSR results.

Polymer 2.0 is a fairly thin wrapper around Custom Elements 1.0 so I figured it wasn’t worth adding it in for this test. If there’s a strong desire to see numbers I will consider reconsidering.

Conclusions

What are we to make of the results? Here are my high-level conclusions.

SSR typically gets you a faster First Meaningful Paint. That’s great for perceived performance, but for libraries / frameworks that recreate the DOM virtually, TTI seems to be pushed back, sometimes a long way. I guess the diffing of real DOM to make VDOM is more expensive than starting fresh? Kind of like inheriting someone’s legacy code!

Hydration is unusually slow in Preact (~2x slower) when doing SSR. I’m not totally sure why that is, but I’ve filed an issue so hopefully Jason Miller and the other Preact folks will be able to find out why!

Chrome seems to be faster when there are multiple innerHTML calls. Breaking the work into multiple appendChild / innerHTML calls seems to be faster than one big dump of HTML from SSR. I found this surprising! I thought that the browser would flush more often, but it seems not to. So SSR, for the Custom Elements and Vanilla variants, seems to perform worse than the CSR equivalent when there’s a decent amount to boot.

Even if CSR beats SSR, we should be cautious. Thinking more about the above point, if JavaScript fails for some reason, SSR will still give us content that someone can read; CSR will not. This is very much a per-case consideration, since some would say comments are non-essential content, which is probably true for a news article and not at all true for Hacker News or Reddit, where the comments are community-generated.

Main thread locking

Whether you agree or disagree with the high level conclusions, there’s one bit that’s really important and deserves more scrutiny: the component boot time.

In all cases the JavaScript that mounts components runs synchronously and blocks the main thread with one big task. This is even true if you use Custom Elements, though as I’ll try to show in a little bit, there is an escape hatch or two.

A DevTools timeline showing a single long task

Is locking the main thread like this bad? I can’t emphasize this enough: YES!

Locking the main thread is unthinkable to native developers, because it negatively impacts the user experience so much. When the main thread is pegged you typically see:

Less CPU time for scrolling and tasks on other threads. Because the CPUs on mobiles are so heavily controlled, utilizing them for long periods means there’s simply no time left for anything else. Alex Russell has a fantastic talk about why your mobile hates you, which you should watch. But this is A Big Deal for the User Experience because we’re locking up the phone.

The CPU being pegged. When you peg the CPU you run down the user’s battery more quickly.
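You don’t have to find these pegged-main-thread moments only in a local trace; the Long Tasks API can report main-thread tasks over 50ms in the field via PerformanceObserver. A sketch, with feature detection since browser support varies (`watchLongTasks` is my hypothetical helper name):

```javascript
// Sketch: report long main-thread tasks using the Long Tasks API.
// Returns the observer, or null where the API isn't supported.
// `watchLongTasks` is a hypothetical helper name.
function watchLongTasks(onLongTask) {
  if (typeof PerformanceObserver !== 'function') return null;
  const observer = new PerformanceObserver((list) => {
    for (const entry of list.getEntries()) {
      onLongTask(entry); // entry.duration is the task length in ms
    }
  });
  try {
    observer.observe({ entryTypes: ['longtask'] });
  } catch (e) {
    return null; // 'longtask' entries aren't supported here
  }
  return observer;
}
```

A single boot task of several seconds would show up here as one enormous entry, which is exactly the shape we’d like frameworks to avoid.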

Possible Objections

I expect at this point there may be some who feel that this isn’t representative of their situation.

“I’m not making Reddit / Hacker News.” Sure. You probably aren’t, but you may be booting a lot of components at once, which may well mean you’re locking up the main thread.

“My users are all on [insert high-end phone here].” Cool! That’s not necessarily the case for everyone, and it may also be that folks with less powerful devices are excluded from the experience if it’s too heavy to load well.

“You can work around this with [insert tactic here].” Quite possibly the case, but are they the defaults for the framework or library you’re using? If you have to be an expert (or tending in that direction) to not break the user experience then surely you’re a) better off using the underlying platform instead, and b) saying that the framework isn’t living up to its promise of making life easier?

Three booting models

Last year I created some graphics explaining what I consider the three major booting patterns in the wild. People asked if there was a blog post to back it up. This would be it… a touch later than I would have liked.

Client-Side Rendering aka CSR

Rendering your app client-side.

In a Client-Side, JavaScript-based render you are reliant on the script to be downloaded, parsed, and evaluated before you are able to render the page. This can end up with a lot of wasted time from when the HTML arrives to when you give the user something meaningful.

If the JavaScript fails you can end up giving somebody nothing at all.

Server-Side Rendering aka SSR

Rendering your app server-side.

With Server-Side Rendering you send a view to the user, but you’re typically reliant on the JavaScript to boot entirely before the functionality is available. This can result in an “uncanny valley” where the app looks interactive, but isn’t.

SSR is great for getting pixels on screen, but as the data implies, for some cases it makes it more computationally expensive to get booted, meaning that it takes longer to be interactive.

I still prefer this to CSR because you’re showing the user something, but if the boot process blocks the main thread then it’s a pretty awful experience.

Progressive Booting

Rendering your app progressively.

Progressive Booting sits somewhere between CSR and SSR. You SSR a functionally viable (though minimal) view in the HTML, including minimal JavaScript and CSS. As more resources arrive the app progressively “unlocks” features.

This requires knowing what the person visiting your app is there to do, and an adequate strategy to determine boot order. You may also need a re-prioritization strategy if the user interacts with something you expected to be low priority.
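One possible shape for such a strategy, as a sketch (entirely hypothetical; this is not an API from any of the frameworks tested): a boot queue that mounts components in priority order, where an interaction can promote a component that hasn’t booted yet.

```javascript
// Hypothetical sketch of a prioritized boot queue. Components boot in
// priority order; promote() lets a user interaction jump the queue.
class BootQueue {
  constructor() {
    this.pending = [];
  }
  add(name, priority) {
    this.pending.push({ name, priority });
  }
  promote(name) {
    const item = this.pending.find((c) => c.name === name);
    if (item) item.priority = Infinity; // the user touched it: boot it next
  }
  next() {
    if (!this.pending.length) return null;
    // Highest priority first; a real implementation would use a heap.
    this.pending.sort((a, b) => b.priority - a.priority);
    return this.pending.shift().name;
  }
}
```

The interesting part isn’t the data structure, it’s that nothing like `promote` exists in most frameworks’ mounting paths today.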

Which to use? Progressive Booting.

Looking at the data above, I’d say a Progressive Booting model is the best approach: it uses SSR to get a better FMP, but there’s minimal JavaScript included so we don’t peg the main thread, keeping TTI nearer FMP. We can then, either on demand, or as time allows, boot non-essential parts of the app.

Today, however, Progressive Booting is behavior we can’t easily access in most libraries and frameworks. There’s no place to hook into component booting.

In short: everything is special, so nothing is.

Mitigation, platform-style

We have a platform-level primitive that can help here: requestIdleCallback.

We can use requestIdleCallback to spread the load of booting out over several tasks, and the browser will prioritize handling user interactions above other main thread code, which is exactly what we want in a Progressive Booting world. If a browser doesn’t support requestIdleCallback we can still invoke component booting immediately, if that’s what we need. If we don’t need to, why not boot components later on?
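As a sketch of what that could look like (the helper names here are mine; a real integration would live inside the framework’s mounting code), with a setTimeout fallback that boots immediately where requestIdleCallback is unavailable:

```javascript
// Sketch: boot components in chunks during idle time, yielding the
// main thread between chunks. Falls back to setTimeout where
// requestIdleCallback is unavailable.
const idle = typeof requestIdleCallback === 'function'
  ? requestIdleCallback
  : (cb) => setTimeout(() => cb({
      didTimeout: false,
      timeRemaining: () => 16, // pretend there's a frame's worth of time
    }), 0);

function bootProgressively(components, mount, onDone) {
  const queue = components.slice();
  function processChunk(deadline) {
    // Mount components only while the browser says we have idle time.
    while (queue.length && deadline.timeRemaining() > 0) {
      mount(queue.shift());
    }
    if (queue.length) {
      idle(processChunk); // yield, then carry on with the rest
    } else if (onDone) {
      onDone();
    }
  }
  idle(processChunk);
}
```

With 500 comments this turns one monolithic task into many short ones, and the browser can slot input handling in between.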

When it comes to Custom Elements or Vanilla, we already have the control needed to wrap things in requestIdleCallback ourselves, or even to wait for user interaction before attempting to boot a given component (or set of components). But my goal isn’t actually to “convert” anyone to using Custom Elements or Vanilla (though I’ve found it’s been a better long-term investment for me to learn the Web Platform than any particular library, framework or tool). What I’m more bothered about is that the people at the end of the chain, the people we build for, get the best possible experience.

What I’d love to see is libraries and frameworks adopting a Progressive Booting model, especially as we ship more code to our users. Not every component is special, and the developer should be given controls to decide how booting happens!

To show the impact of taking this approach, I made a straw man PR for Preact that uses requestIdleCallback. It cut the TTI down by 6x. I know Jason Miller, who made Preact, is very interested in this area, and I’d love others to join in.

Using requestIdleCallback in Preact to boot progressively.

Finishing up

Prioritization is a big deal. As our apps get bigger and more complex we need mechanisms to handle that.

I suppose what I’m really saying is that we need to move on from an off-on component world to one with nuance and, in particular, priorities. The web can already support it, the browser already does it itself for many things, we need to enable it in our libraries and frameworks.

The test results:

WebPagetest

Preact: SSR, CSR

React: SSR, CSR

Vue: SSR, CSR

Ember: CSR

Custom Elements: SSR, CSR

Vanilla: SSR, CSR

Source code

GitHub Repo