The Results

HyperScript (inferno, ivi, solid-h)

HyperScript is a way of representing views as a composition of functions (often an h or perhaps React.createElement). For example:

h('div', {id: 'my-element'}, [
  h('span', 'Hello'),
  h('span', 'John')
])

This is the category owned by Virtual DOM libraries. Even when they use JSX or other templating DSLs, the templates are ultimately compiled down to per-element render calls. This suits constructing a Virtual DOM tree in JavaScript every render cycle, but, as shown here, the same calls can also construct a Reactive dependency graph in Solid’s case.
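To make the Virtual DOM side of this concrete, here is a hypothetical minimal h function — a sketch of the general idea, not any of these libraries' actual implementations — that builds a plain-object tree the way a Virtual DOM library would on every render:

```javascript
// Hypothetical minimal `h` — illustrates the idea, not inferno/ivi's real API.
// A Virtual DOM library rebuilds a tree of plain objects like this each render.
function h(tag, props, children) {
  // Support the common shorthand where children are passed in props' place.
  if (Array.isArray(props) || typeof props === 'string') {
    children = props;
    props = {};
  }
  return { tag, props: props || {}, children: children || [] };
}

const tree = h('div', { id: 'my-element' }, [
  h('span', 'Hello'),
  h('span', 'John')
]);
```

Each render produces a fresh tree like this, which the library then diffs against the previous one to patch the real DOM.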

As you can see, the Virtual DOM libraries are much faster here. The overhead of creating the reactive graph hurts Solid; notice the difference in benchmarks #1, #2, #7, #8, and #9. Conversely, Solid is slightly quicker on the remaining benchmarks, which measure partial updates.

Memory is less conclusive. Inferno and this version of Solid are mostly neck and neck, whereas the more performant ivi uses more memory. This is the most memory-intensive version of Solid, but it is worth noting how close memory usage is here.

This is the classic VDOM vs. Fine-Grained comparison: Fine-Grained takes the hit upfront to perform better on updates. If this were the end of the story, it would be easy to explain the VDOM’s dominance in past years. Suffice it to say, if you are using HyperScript with a Fine-Grained library, you are probably better off using a Virtual DOM.

String Templates (domc, lit-html, solid-lit)

Each library here has a few things in common: they render by cloning template elements, they execute at runtime, and they make no use of a Virtual DOM. However, each does so differently. DomC and lit-html do top-down diffing similar to a Virtual DOM, whereas Solid uses a fine-grained reactive graph. lit-html splits templates into parts; DomC and Solid just-in-time compile the template into separate code paths for creation and updates.
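The part-splitting idea can be sketched with a tagged template literal. This is a hypothetical illustration, not lit-html's actual internals: the engine passes the same frozen strings array for every call at a given call site, so a library can cache per-template work against it and only re-process the dynamic values on update.

```javascript
// Hypothetical sketch of template "parts" — not lit-html's real implementation.
const cache = new WeakMap();
let compiles = 0; // counts how many templates we have "compiled"

function html(strings, ...values) {
  // The engine hands us the SAME frozen `strings` array on every call at a
  // given call site, so expensive per-template setup can run exactly once.
  if (!cache.has(strings)) {
    compiles++;
    cache.set(strings, true);
  }
  return { strings, values }; // statics and dynamic parts kept separate
}

function renderToString({ strings, values }) {
  // Updates only revisit the dynamic values slotted between the statics.
  return strings.reduce(
    (out, s, i) => out + s + (i < values.length ? values[i] : ''), '');
}

const greet = (name) => html`<span>Hello ${name}</span>`;
```

Calling greet repeatedly with different names reuses the same cached template; only the values change, which is what makes updates cheap in this style of library.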

This category has the widest range of performance: DomC is one of the fastest libraries, lit-html is the slowest, and Solid Lit sits right in the middle of the pack. DomC is a testament to how keeping code simple can lead to incredible performance. Its only weakness is #4, since it works by diffing at the leaf nodes, which gets more expensive the deeper the structure. It is plenty fast, but we will need to validate how it scales. Solid Lit is much more performant than Solid HyperScript: just-in-time compilation at runtime negates most of the cost of creating the reactive graph, letting it sneak just in front of ivi, the fastest Virtual DOM library (see the full Performance Results table at the end of the article).

Memory is much better with this bunch. DomC has the smallest memory footprint out of all the competitors. A decent amount of savings comes from rendering by cloning Template elements.

The most interesting takeaway from this group is that runtime code generation can have minimal performance cost compared to pre-compilation at the build step. It is perhaps an unfair comparison for lit-html in that sense, since it doesn’t leverage this technique. But it is fair to say that lit-html, and similar libraries like hyperHTML or lighterHTML, are not currently the most performant way to use Tagged Template Literals, and that it is possible to get very good performance at runtime without a Virtual DOM.

Precompiled JSX (solid, solid-signals, surplus)

Now on to the heavyweight class. These libraries use JSX compiled at build time down to DOM and Reactive-graph instructions. Unlike the last two categories, the overhead of initial construction is almost completely removed for Fine-Grained libraries, making this the ideal approach for libraries of this type. The templates really could be anything, but JSX provides a clear syntax tree that lends itself to better tooling and developer experience.

This group has the closest performance results, but the differences are still important. All three of these libraries use the same change-management library, S.js. Using Solid Signals as a baseline shows that observing functions combined with template-element cloning offers the best performance. Solid’s standard implementation layers ES2015 Proxies on top, which adds overhead across all benchmarks. Surplus, on the other hand, uses document.createElement, which costs more on the benchmarks that create many rows (#1, #2, #7, #8).
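The "observing functions" approach can be illustrated with a tiny signal sketch. This is a minimal hypothetical illustration — S.js's real API and semantics differ — but it shows why fine-grained libraries need no diffing: each computation subscribes to exactly the signals it reads, and a write reruns only those computations.

```javascript
// Minimal signal/observer sketch; hypothetical, not S.js's actual API.
let currentObserver = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    // Reading inside an observed computation registers a dependency.
    if (currentObserver) subscribers.add(currentObserver);
    return value;
  };
  const write = (next) => {
    value = next;
    subscribers.forEach((fn) => fn()); // rerun only dependent computations
  };
  return [read, write];
}

function observe(fn) {
  currentObserver = fn;
  fn(); // initial run registers the dependencies
  currentObserver = null;
}

// Only the computation bound to this signal reruns on update — no tree diff.
const [name, setName] = createSignal('John');
let rendered;
observe(() => { rendered = `Hello ${name()}`; });
setName('Jane');
```

In a real library the observed computation would write to a specific DOM node cloned from a template, which is what makes updates so targeted.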

Memory scales up similarly. However, in this case, it’s the proxies that have more overhead than the template element cloning.

The takeaway here is that Proxies have a real performance cost, and more libraries should be cloning template elements. On the other hand, you could treat the Proxy hit as a small investment: Solid’s official implementation has by far the smallest amount of implementation code of all the libraries, weighing in at 66 lines, and even uses 13% fewer non-whitespace characters than Svelte, a library that prides itself on how little code you write.
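The trade-off is easy to see in miniature: a Proxy lets state look like a plain object, at the cost of a trap firing on every property read and write. A hypothetical minimal wrapper — not Solid's actual implementation — might look like this:

```javascript
// Hypothetical Proxy-based reactive wrapper; illustrates the ergonomics
// Proxies buy (and where the trap overhead comes from), not Solid's internals.
function reactive(obj, onChange) {
  return new Proxy(obj, {
    get(target, key) {
      return target[key]; // every property read passes through this trap
    },
    set(target, key, value) {
      target[key] = value;
      onChange(key, value); // fine-grained notification per property write
      return true;
    }
  });
}

const changes = [];
const state = reactive({ first: 'John', last: 'Smith' },
  (key, value) => changes.push(`${key}=${value}`));

state.first = 'Jane';                    // plain assignment, no setter call
const greeting = `Hello ${state.first}`; // plain read, trap fires invisibly
```

The invisible traps are exactly what costs across all benchmarks — every read and write pays for the nicer syntax.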

Best in Class (domc, ivi, solid-signals, vanillajs)

Next, we take the winners of each category and compare them against a brutally efficient hand-crafted version written in plain vanilla JavaScript. What is nice here is that the best in each category represents one of the popular change-management approaches. You can even draw similarities between these libraries and the big 3: Solid → Vue, DomC → Angular, ivi → React. That is, if you strip them right down to their renderers and shed 60–200kb of code.

So how did we fare?

DomC and Solid are close here, and ivi is no slouch either, but DomC is generally faster. Its overhead over the vanilla version is remarkably small, though it is less efficient for nested partial updates. This benchmark alone is not going to conclusively separate these approaches. Anyone claiming the Virtual DOM is slow or has unnecessary overhead should check themselves: most libraries will never reach this level of performance.

With memory, DomC again shows how small of a footprint it has. Fine-Grained Solid leads Virtual DOM ivi on memory usage.

The most interesting takeaway from these results might simply be how little overhead these libraries have over the vanilla JavaScript version irrespective of method. These libraries are all very fast.

Bundle Size

Lastly, I wanted to call out bundle size for a moment, because I feel this area gets far too much attention. Recent “real world” benchmarks put almost all of their attention on these metrics. Yes, bundle size is important, and there is a direct correlation with performance, but how much of a difference does it really make? I suspect variation in code-loading overhead has a larger impact than size alone.