I’ve recently read a few articles making some pretty outrageous claims, based on little more than anecdotal references to a few conference talks. Part of it is the time of year. As we move from one year to the next, it is natural to look toward the future. And I don’t even think all the sentiment is misplaced, just that the evidence and justifications are lacking. We might get there one day, but that day is not today.

So let’s dig into a few that have been floating around this past year.

Myth #1: Web Components replace Frameworks/Libraries

Web Components are a series of technologies that enable HTML, CSS, and JavaScript to be modularized in a reusable way using just HTML Elements. These technologies add features to the DOM that were previously missing to aid in this goal, including templating, CSS encapsulation, element lifecycle hooks (including attribute watching), and child element slotting. On the surface, these are many of the same features implemented by UI Frameworks, and ultimately you end up with Components. So they must be equivalent, right?
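Before answering, here is a minimal sketch with those pieces in one place. The element name and markup are purely illustrative, not part of any spec or library:

```js
// templating: a <template> holds inert, cloneable DOM
const template = document.createElement('template');
template.innerHTML = `
  <style>p { color: rebeccapurple; }</style>           <!-- CSS encapsulation: scoped to the shadow root -->
  <p><slot>Hello</slot>, <span id="name"></span></p>   <!-- child element slotting -->
`;

class HelloCard extends HTMLElement {
  static get observedAttributes() { return ['name']; }  // attribute watching
  constructor() {
    super();
    this.attachShadow({ mode: 'open' }).appendChild(template.content.cloneNode(true));
  }
  connectedCallback() { /* lifecycle hook: runs when the element is inserted into the document */ }
  attributeChangedCallback(attr, _old, value) {
    if (attr === 'name') this.shadowRoot.querySelector('#name').textContent = value;
  }
}
customElements.define('hello-card', HelloCard);
// Usage: <hello-card name="Ada">Welcome back</hello-card>
```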

No, not at all. One is a set of native tools to solve general problems, and the other is an opinionated set of features to make it more productive to produce applications. Now the confusion is understandable. At times I’m not sure even those involved in writing the proposals are clear where the line is. Some of the proposals read like specs for the next framework. But in the 6 years, I’ve been following this and the rate of consensus here amongst vendors it is clear the more basic the feature set here the more likely it is to move forward. There is still a lot of contention on what role they should play. Like supporting Native Built-Ins (the ability to extend existing Elements) doesn’t have full support. That alone suggests that some parties don’t see these suitable for design systems.
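For reference, here is the contested form next to the autonomous form that everyone agrees on. The class and tag names are just examples:

```js
// Autonomous custom element: the broadly supported form.
class FancyButton extends HTMLElement { /* ... */ }
customElements.define('fancy-button', FancyButton);

// Customized built-in: extends an existing element via `extends` and the `is` attribute.
// This is the part that lacks full vendor support.
class SaveButton extends HTMLButtonElement { /* ... */ }
customElements.define('save-button', SaveButton, { extends: 'button' });
// Used in markup as: <button is="save-button">Save</button>
```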

So what are they good for? Micro-Frontends or packaging widgets seem viable. You could also invent your own accessibility, localization, and form handling capabilities and design your own elements. One thing that is clear is they are not the same as your React Components. They modularize but have no tie to change propagation or efficient rendering. Their boundaries are heavier, representing encapsulated isolation through their own life cycle, heavy enough that each could house its own UI library right next to another. And some do. In fact, nearly all libraries that produce Web Components are libraries of that nature. You don’t get to avoid learning a Framework/Library. I mean, you could use vanilla DOM APIs, but you could also do that right now without Web Components. Don’t be duped. You are either using them with a library you are already familiar with, through something like Angular Elements or the Custom Element output of Svelte or Vue, or you are learning a new library like Polymer, Stencil, Heresy, or LitElement.
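A sketch of what that heavier boundary usually looks like in practice: the Custom Element is just a mount point, and whatever library you already prefer does the actual work inside it. The `render` and `unmount` calls below are stand-ins for your library of choice, not a real API:

```js
// Hypothetical: `render`/`unmount` stand in for whichever library you already use
// (React, Vue, Svelte, ...). The Custom Element only marks the boundary.
import { render, unmount } from './my-preferred-library.js';
import { Dashboard } from './dashboard.js';

class DashboardWidget extends HTMLElement {
  connectedCallback() {
    // the framework takes over everything inside this element
    this.instance = render(Dashboard, { target: this, props: { user: this.getAttribute('user') } });
  }
  disconnectedCallback() {
    unmount(this.instance); // each widget manages its own lifecycle, right next to any other framework on the page
  }
}
customElements.define('dashboard-widget', DashboardWidget);
```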

Nothing has changed. Using them is not suddenly supporting the open web. In fact, the most freeing approach might be something like SkateJS, which really doesn’t try to ship a Framework and lets you use any existing one. The library only exists to homogenize the API surface and allow you to work the way you want to. But guess what? You are still using a Framework/Library.

Myth #2: Disappearing Frameworks

I love this one. This has to be the best marketing phrase in web frontend since the Virtual DOM, and it is helping Svelte come in like a storm. And it is complete hyperbole. There is always a Runtime. It can be small, but something has to be there to trigger change. And all any library ships is plain JS, HTML, and CSS. The real hero here is Tree Shaking, the process of statically analyzing import statements to do dead code elimination. Basically, code that is never imported doesn’t need to be included in the final bundle. With compilation, all one has to do is look for identifiers and then add the import statements, and Tree Shaking takes care of the rest. It isn’t hard to picture the process. The compiler comes across a certain helper like #each and decides to include the list mapping code. Don’t believe me? Look for import statements in your compiled Svelte code. They are there.
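A rough sketch of what that output looks like. The module and helper names below are made up rather than Svelte's actual internals, but the shape is the same: the compiler emits imports for exactly the helpers the template used, and the bundler's Tree Shaking drops everything else:

```js
// what the author writes (conceptually): a template with an #each block over `items`
// what the compiler emits: plain JS plus imports for exactly the helpers it needed
import { insert, setText } from 'some-ui-runtime';   // hypothetical runtime module
import { mapList } from 'some-ui-runtime/each';      // pulled in only because #each appeared

export function create(target, items) {
  const ul = document.createElement('ul');
  mapList(items, (item) => {
    const li = document.createElement('li');
    setText(li, item.label);
    insert(ul, li);
  });
  insert(target, ul);
}
// Helpers that were never imported (stores, transitions, ...) never reach the bundle.
```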

Now, through compilation, we can reduce the amount of bundled runtime code that is required, since you can hardcode templates, essentially unwinding loops. But as soon as you hit common patterns it is actually more size efficient to abstract, and before you know it, you have a small runtime. Any sufficiently simple library in combination with Tree Shaking can produce the same results, albeit maybe less slickly. Reactive libraries tend to be smaller, but there are Virtual DOM libraries that can produce equivalent or smaller bundles (see HyperApp). Maybe we should consider focusing on making worthwhile things disappear.
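To make the tradeoff concrete, here is hardcoded output next to an abstracted version. Once the same pattern repeats across components, shipping one shared helper is smaller than unrolling it every time. The helper is again purely illustrative:

```js
const title = 'Hello';

// Unrolled: every compiled component repeats this DOM construction inline.
const card = document.createElement('div');
const heading = document.createElement('h1');
heading.textContent = title;
card.appendChild(heading);
document.body.appendChild(card);

// Abstracted: once the pattern repeats, one shared helper plus calls to it is smaller.
function el(tag, text, parent) {
  const node = document.createElement(tag);
  if (text != null) node.textContent = text;
  parent.appendChild(node);
  return node;
}
const card2 = el('div', null, document.body);
el('h1', title, card2);
```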

Myth #3: Virtual DOM is Pure Overhead

Ok, I can’t deny the truth in this. However, the statement is misleading. Absolutely everything that isn’t vanilla JS is pure overhead, and I don’t just mean compile-to-optimized-code approaches either. As mentioned in Myth #2, there is still a runtime. So regardless of how you attack it, something needs to manage updates and changes to the DOM. In general, all modern data-driven UI libraries work in one of 3 ways*, each with its own tradeoffs. You can make efficient versions of each approach. In fact, libraries already exist that exemplify the size and performance characteristics of each approach way beyond the capabilities of popular libraries (see https://github.com/krausest/js-framework-benchmark).

Virtual DOM is a completely viable approach like any other and is still the most popular approach used in libraries, even counting libraries that produce Web Components. Most benchmarks are set up so that libraries do all the work in a single Component. In real projects, you modularize into multiple Components. That has an overhead as well and isn’t something talked about nearly as much. Virtual DOM generally scales better with more Components than the other approaches. So our perspective on real performance might be misaligned.

In my opinion, it isn’t performance or size that makes you choose not to use the Virtual DOM. It is the developer experience you prefer: compositional patterns, mutability vs immutability, code structure, etc. But just because something isn’t as easy with a given technological approach doesn’t make it impossible. React Fiber and Hooks show that a Virtual DOM library can behave almost like a reactive library. Sure, KnockoutJS had these capabilities in 2010, but that doesn’t lessen its potential.

*The three approaches I am referring to are Virtual DOM, DOM Reconciliation, and Reactive.

Virtual DOM libraries generate a virtual tree, diff it against the previous iteration, and patch the updates onto the DOM. They use immutability and referential equality to shortcut the diff. However, immutability leads to significant cloning and memory allocation. Examples: React, Vue, Inferno.

DOM Reconciliation libraries stash binding values while creating DOM nodes. On each update they diff against the previous values and update the DOM. They are similar to Virtual DOM libraries except they work in a single pass and only diff at the leaves. However, because they rely on mutability they always need to diff at the leaves, so there are fewer ways to shortcut diffing in deeply nested structures. Examples: Angular, Polymer, lit-html.

Reactive libraries construct a reactive graph while creating the DOM nodes. In doing so, each binding context can be associated with an event subscription, so as the data updates only the related event handlers run. This approach is optimized in that it requires minimal diffing, but it has the greatest initial creation cost. Examples: Svelte, Knockout, Solid.
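To make the distinction concrete, here is a toy sketch contrasting a diff-at-the-leaves update with a reactive subscription. None of this is any library's real API:

```js
const node = document.createElement('span');

// Diff-style update: stash the previous value, compare on every run, patch only when it changed.
let prevLabel;
function update(label) {
  if (label !== prevLabel) {   // diff at the leaf
    node.textContent = label;
    prevLabel = label;
  }
}
update('Hello');
update('World');

// Reactive update: the binding subscribes once while the DOM is created;
// later, only the subscribed handler runs, with no diffing at all.
function createSignal(value) {
  const subs = new Set();
  return {
    set(next) { value = next; subs.forEach((fn) => fn(value)); },
    subscribe(fn) { subs.add(fn); fn(value); },
  };
}
const label = createSignal('Hello');
label.subscribe((value) => { node.textContent = value; }); // graph built at creation time
label.set('World');                                        // just this handler runs
```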

Myth #4: Web Assembly is Faster for Web UI

The one constant that never fails is: Never underestimate JavaScript. But even more so, in my opinion: Never underestimate the cost of the DOM. The DOM is absurdly expensive. By now most people know that manipulating the DOM is costly because it causes reflows and repaints, and that even reading properties that affect layout can cause premature reflows. However, even peeking into other properties can have immense costs. Any sort of tree walking can be almost as expensive as creating DOM nodes. Almost anything you do with the DOM has an extra cost.
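The classic illustration of that cost is layout thrashing: interleaving style writes with layout reads forces the browser to recompute layout on every pass, while batching the reads lets it compute layout once. The selectors here are just an example:

```js
const boxes = Array.from(document.querySelectorAll('.box'));

// Layout thrashing: each offsetWidth read follows a style write, forcing a fresh reflow every iteration.
boxes.forEach((box) => {
  box.style.width = box.parentElement.offsetWidth / 2 + 'px';
});

// Batched: read everything first, write everything after, so layout recomputes only once.
const widths = boxes.map((box) => box.parentElement.offsetWidth / 2);
boxes.forEach((box, i) => { box.style.width = widths[i] + 'px'; });
```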

Unfortunately, for other technologies that cost is even harder to deal with. Web Workers were going to save us, but they can’t access the DOM, so while they have many performance benefits they haven’t had a meaningful impact on Web UI. WASM has similar limitations. WASM is much faster than JavaScript when you stay within WASM, but the more you have to cross over to JavaScript APIs the slower it is. Currently, that is the only way for it to access the DOM. This will change eventually with the introduction of Web Bindings. That work has been in progress for the last couple of years, and when the spec eventually lands we could see some big gains. As of today, not only is plain vanilla JS faster at DOM rendering, but some high-level data-driven libraries are more performant than the fastest low-level WASM implementation (see https://github.com/krausest/js-framework-benchmark). Higher-level WASM implementations have potentially even more overhead. So no, the latest breed of WASM UI libraries is not the most performant by a long shot.
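For context, this is roughly how a WASM module reaches the DOM today: every DOM operation is an imported JavaScript function, and every call to it crosses the WASM/JS boundary. The module name and exports below are hypothetical, and this would run inside a JS module:

```js
// Hypothetical "counter.wasm": it cannot touch the DOM directly,
// so we hand it JS functions and pay the boundary cost on every call.
let memory;
const imports = {
  env: {
    setText(ptr, len) {
      const bytes = new Uint8Array(memory.buffer, ptr, len);
      document.getElementById('out').textContent = new TextDecoder().decode(bytes);
    },
  },
};

const { instance } = await WebAssembly.instantiateStreaming(fetch('counter.wasm'), imports);
memory = instance.exports.memory;
instance.exports.run(); // every DOM update inside run() round-trips through setText above
```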

Myth #5: Scheduling means Better Performance

This isn’t a new one, but it keeps coming back. React Concurrent Mode is only the latest in a trend of competing over mostly meaningless animation demos. Why do I say mostly meaningless? There is a ceiling on meaningful performance imposed by the frame rate. Almost all demos use requestAnimationFrame, since not doing so would be doing unnecessary work. So the only way to get past this, generally, is to come up with absurd scenarios. What this means, though, is we are only testing libraries at the very limit, when they are being constrained in the worst possible ways. There is some merit to that. Graceful degradation under limited resources can matter on lower-powered devices. But what does a library do to get out of that bind? Find ways to do less work. This is no longer an exercise in improving performance, but in how to scale back work to provide a better user experience.
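Mechanically, "doing less work" usually looks something like the sketch below: split the remaining work into chunks and yield back to the browser each frame, so rendering stays responsive even though the total work takes longer. The queue is a made-up example, not React's scheduler:

```js
const workQueue = [];  // hypothetical units of pending rendering work (plain functions)

function scheduleWork(frameBudgetMs = 8) {
  requestAnimationFrame(() => {
    const start = performance.now();
    // do as much as fits in this frame's budget, then hand control back to the browser
    while (workQueue.length && performance.now() - start < frameBudgetMs) {
      workQueue.shift()();
    }
    if (workQueue.length) scheduleWork(frameBudgetMs);  // resume on the next frame
  });
}
```

The work still happens; it is just spread across frames so the user keeps seeing updates.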

This is a very noble goal, but it has some interesting side effects. Detection and scheduling have overhead. Less than the DOM, obviously. But the heavier the rendering in the library, the sooner it will need this intervention. It becomes a sort of self-fulfilling prophecy. Moreover, scheduling techniques like requestAnimationFrame are available to all libraries. So while we can agree that blocking the main thread is not good for interactivity, it’s hard to pinpoint a general sweet spot between performance and the amount of blocking. This is only something we notice when we go beyond the limits the hardware can support, so what’s the most graceful way to degrade? I ran a test around my office of visual reaction to different scheduling algorithms with the Sierpinski Triangle Demo and it was pretty split. There was no clear better option. Whereas the visual benefit of delaying loading state on switching tabs with Suspense was unanimous.