EDIT: 11/2017 — This post was originally published 04/2016. Yehuda Katz now has a WHATWG proposal for “DOMChangeLists”, which has a very similar spirit to the behavior of Node#mergeWith() described later in this post, though it wasn’t inspired by this post.

LazyDOM — an experiment to bring virtual DOM to the browser natively

tl;dr: React-style virtual elements are fast but cannot be consumed like real Elements. LazyDOM elements are fast like React elements, yet can be consumed like real DOM Elements because they lazily proxy to one. To be clear, though: this is only an experiment!

Let’s start with a somewhat deceptively simple example to give you an idea of what LazyDOM gives you. It may look semi-familiar, but it’s not React; it’s LazyDOM (using optional JSX).

Note: even though I use jQuery here, this isn’t really about jQuery.

React arguably popularized the virtual DOM and unidirectional data flow concepts that revolutionized UI development. It really is great. The problem is, its approach isn’t quite a natural fit for native adoption in the W3C specs that browsers implement. For starters, developer ergonomics around virtual DOM are awkward in certain places. “Virtual” elements are actually just Plain Old JavaScript Objects (POJOs), so integrating with things like jQuery plugins requires escape hatches, and touching the real DOM this way also leads to inconsistent DOM state. All of this makes it harder to learn and debug, especially for newcomers who just want the benefits without needing to learn the technical caveats.

LazyDOM avoids these problems by making interactions with your “lazy” nodes no different from interactions with real DOM nodes, while still letting you keep the “re-render everything” model.

A simple introduction

Without further ado, here’s what it looks like:
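The embedded demo can be sketched roughly like this. Everything here is hypothetical API: createLazyElement is the proposed entry point, and a Proxy-backed stand-in replaces the browser DOM so the sketch runs anywhere. The point is simply that you use a lazy element exactly as if it were real.

```javascript
// Hypothetical sketch: createLazyElement is the proposed API; this stand-in
// queues writes in a Map instead of touching a real DOM.
function createLazyElement(tagName) {
  const props = new Map([['tagName', tagName.toUpperCase()]]);
  return new Proxy({}, {
    set(_, key, value) { props.set(key, value); return true; }, // queue the write
    get(_, key) { return props.get(key); },                     // serve reads back
  });
}

// Use it exactly like a real element:
const input = createLazyElement('input');
input.value = 'Hello, LazyDOM';
const shouty = input.value.toUpperCase(); // reads work like on a real node
```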

Boring huh? Keep reading for the juicy bits!

Creating lazy elements and text nodes is done almost exactly like their real counterparts:
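A sketch of that creation API. The assumption here is that the proposal would hang createLazyElement/createLazyTextNode off document, mirroring createElement/createTextNode; a stand-in document lets the sketch run outside a browser.

```javascript
// Stand-in `document` so this runs anywhere; the shapes intentionally mirror
// the real createElement/createTextNode signatures.
const document = {
  createLazyElement: tagName => ({ tagName: tagName.toUpperCase(), lazy: true }),
  createLazyTextNode: data => ({ data, lazy: true }),
};

const div = document.createLazyElement('div');  // cf. document.createElement('div')
const text = document.createLazyTextNode('hi'); // cf. document.createTextNode('hi')
```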

For the sake of simplicity, I will often use the term LazyNode to mean either a LazyElement or LazyText node.

Once you have a LazyNode, you can access exactly the same properties and methods you’d expect to find if they weren’t Lazy. So for example, a real <input> (aka HTMLInputElement) has a value property and you can listen for changes with oninput.
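For example (a sketch: a minimal Proxy stands in for the LazyElement, and the input event is dispatched by hand since there’s no browser here):

```javascript
// Stand-in LazyElement exposing the same surface as a real <input>.
function createLazyElement(tagName) {
  const props = new Map([['tagName', tagName.toUpperCase()]]);
  return new Proxy({}, {
    set(_, key, value) { props.set(key, value); return true; },
    get(_, key) { return props.get(key); },
  });
}

const input = createLazyElement('input');
let latest = null;
input.oninput = () => { latest = input.value; }; // read it back like a real node

input.value = 'hello';
input.oninput(); // in a browser this would fire as the user types
```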

If you’ve written raw JavaScript DOM, all of this should be very familiar to you, so what’s the deal with all this Lazy stuff and why is it any better?

The Problem

The magic of LazyDOM reveals itself when you want to re-render. If you wanted to use the “re-render everything” paradigm React popularized, how would you do that with pure, real DOM nodes (i.e. the normal DOM API, without a framework)? Here’s a naive example:

Try this code: http://jsbin.com/fadico/edit?js,output
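The example behind that link boils down to something like this. A stand-in DOM is used here so the sketch runs anywhere; what it demonstrates is that every render manufactures brand-new nodes.

```javascript
// Stand-in DOM, just enough to observe node identity across renders.
const document = {
  createElement: tagName => ({ tagName: tagName.toUpperCase(), value: '' }),
  body: {
    firstChild: null,
    replaceChildren(node) { this.firstChild = node; },
  },
};

// "Re-render everything": build a whole new tree from state on every change.
function render(state) {
  const input = document.createElement('input');
  input.value = state.query;
  return input;
}

const first = render({ query: 'a' });
document.body.replaceChildren(first);

// The user types a character; we naively re-render from scratch:
const second = render({ query: 'ab' });
document.body.replaceChildren(second);
// `second` is a brand-new element, so in a real browser focus is lost.
```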

If you run this code, you’ll notice that as soon as you type a single character into the <input> box, you lose focus. That’s because we literally recreated all of the nodes, so of course the input box is replaced with a totally new element. This example is intentionally naive because the problem might not have been obvious to everyone. Besides focus and selection issues, creating real DOM nodes like this can be expensive, both in creation time and in forcing the entire page to repaint.

So how can we get the desired developer ergonomics but avoid these issues? You guessed it, LazyDOM:

The Solution

Try this code: http://jsbin.com/qefehoz/edit?js,output
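A sketch of the semantics on a single node. Two assumptions here: the real proposal diffs whole trees (one node suffices to show that identity is preserved), and the call direction newTree.mergeWith(oldTree) is my reading of it; a plain object stands in for the real Element.

```javascript
function createLazyElement(tagName) {
  const state = { tagName, node: null, queued: new Map() };

  function reify() {
    if (!state.node) {
      state.node = { tagName: state.tagName.toUpperCase() }; // stand-in for createElement
      for (const [k, v] of state.queued) state.node[k] = v;  // flush queued writes
      state.queued.clear();
    }
    return state.node;
  }

  // Point this lazy node at the old tree's real node, then apply our
  // queued changes to it in place.
  function mergeWith(oldLazyNode) {
    state.node = oldLazyNode.node;
    for (const [k, v] of state.queued) state.node[k] = v;
    state.queued.clear();
  }

  return new Proxy(state, {
    set(t, k, v) { t.node ? (t.node[k] = v) : t.queued.set(k, v); return true; },
    get(t, k) {
      if (k === 'node') return reify();
      if (k === 'mergeWith') return mergeWith;
      if (t.queued.has(k)) return t.queued.get(k);
      return reify()[k];
    },
  });
}

const first = createLazyElement('input');
first.value = 'a';
const realNode = first.node; // reify, as appending to document.body would

const next = createLazyElement('input');
next.value = 'ab';
next.mergeWith(first); // apply the new tree onto the old node, in place
```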

The difference is so subtle it’s easy to miss; give it a try. We switch to createLazyElement, and on subsequent re-renders we use the new Node#mergeWith() method to apply the new lazy tree’s differences to the first one. This method only exists on LazyNodes (in-depth technical details might come in a separate post). This example is also very contrived for simplicity’s sake; see below for more complex examples, or feel free to play in JSBin.

The first LazyNode tree we create will be appended into document.body, and at that moment every LazyNode will create real Nodes from its queued-up property changes and children.

On subsequent re-renders that use mergeWith, instead of creating new real DOM nodes for each LazyNode, the queued-up changes are applied in place to the existing nodes already in the DOM. Importantly, the new tree’s references now point to those existing nodes, so the input.value lookup inside input.oninput works as expected.

By “queued up”, I mean you can think of LazyNodes as having a setter trap that runs for every property you assign on your LazyNode:
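A runnable sketch of that trap (my approximation, not the actual polyfill source): writes are queued until a real node exists, then written straight through.

```javascript
function createLazyNode() {
  const state = { realNode: null, queuedProperties: new Map() };
  return new Proxy(state, {
    set(target, key, value) {
      if (target.realNode) {
        target.realNode[key] = value;            // real node exists: apply now
      } else {
        target.queuedProperties.set(key, value); // otherwise remember it for later
      }
      return true; // tell the runtime the assignment succeeded
    },
  });
}
```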

Pseudo code for what a LazyNode’s catch-all setter trap looks like

Since our InputBox example doesn’t read any properties on the LazyElement, here’s an example that does:

Above, we read input.value immediately and then again when the oninput handler is triggered. Under the hood, LazyNodes have a catch-all getter that looks a bit like this:
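A runnable sketch of that getter (again my approximation, with a plain object standing in for the node document.createElement would produce), covering the three situations described below:

```javascript
function createLazyNode(tagName) {
  const state = { tagName, realNode: null, queuedProperties: new Map() };

  function reify() {
    if (!state.realNode) {
      state.realNode = { tagName: state.tagName.toUpperCase() }; // stand-in node
      for (const [k, v] of state.queuedProperties) state.realNode[k] = v;
      state.queuedProperties.clear();
    }
    return state.realNode;
  }

  return new Proxy(state, {
    set(target, key, value) {
      if (target.realNode) target.realNode[key] = value;
      else target.queuedProperties.set(key, value);
      return true;
    },
    get(target, key) {
      // 1. A property you set yourself, still sitting in the queue:
      if (target.queuedProperties.has(key)) return target.queuedProperties.get(key);
      // 2. The real node already exists, so just read from it:
      if (target.realNode) return target.realNode[key];
      // 3. Worst case: the answer needs a real node, so reify just-in-time:
      return reify()[key];
    },
  });
}
```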

Pseudo code for what a LazyNode’s catch-all getter trap looks like

There are basically three situations when reading a property on a LazyNode:

1. You’re looking up a property that you had set yourself previously (stored in queuedProperties).

2. The real node exists, and we should just look the property up on it (most common after you’ve inserted the LazyNode into the document or merged it into the old tree).

3. (worst case) You look up a property on the LazyNode which cannot be determined without a real node existing, so one is lazily created just-in-time. Reading properties that deopt like this (creating a node that might end up being thrown away later) is really rare, so we can provide that developer convenience at little real-world cost.

A similar queuing mechanism is used when you call lazyNode.appendChild(node) and the like.

Using JSX

In real-world apps, you’d probably use JSX to make things more declarative, just as you do in React. Since JSX is even less likely to ever be natively supported in JavaScript, a fairly trivial Babel plugin could output the appropriate imperative code:
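A sketch of what such a plugin could emit, assuming the React.createElement-style signature floated below (the createLazyElement function here is a stand-in, not the real proposal):

```javascript
// Stand-in following the React.createElement-style signature:
//   createLazyElement(tagName, props, ...children)
function createLazyElement(tagName, props = {}, ...children) {
  return { tagName, props, children }; // stand-in for a real LazyElement
}

// JSX like:
//   <div id="app"><input value="hi" /></div>
// could compile to:
const tree = createLazyElement('div', { id: 'app' },
  createLazyElement('input', { value: 'hi' }));
```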

For performance reasons, it would be quite handy if document.createLazyElement accepted a signature similar to React.createElement(tagName, props, …children), but that’s not a requirement and is outside the scope of this post.

Comparing to existing React

Existing React uses virtual elements which are just POJOs:
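Roughly like this (simplified; real React elements carry a few more internal fields):

```javascript
// Approximately what React.createElement('input', { className: 'query' })
// returns: a plain object, not a DOM node.
const virtualElement = {
  type: 'input',
  props: { className: 'query' },
  key: null,
  ref: null,
};
// No DOM surface at all: nothing that expects a real Element can use it.
const looksLikeANode = typeof virtualElement.appendChild === 'function';
```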

Existing React’s “virtual elements” are just POJOs

That’s very fast, but obviously these cannot be consumed by anything that expects a real element node. Instead, you use refs and various escape hatches to get a reference to the real node after reconciliation. Here’s a contrived example, so it’s easy to grok:
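A sketch of the contrast, with stand-ins throughout: the ref object below plays the part of React’s post-reconciliation escape hatch, and a bare Proxy plays the LazyElement.

```javascript
// React-style: the virtual element is inert; a ref is filled in later.
const virtualInput = { type: 'input', props: {}, ref: null };
// ...after reconciliation, the framework assigns the real node:
virtualInput.ref = { value: 'hi' };    // stand-in for the real <input>
const viaRef = virtualInput.ref.value; // must go through the escape hatch

// LazyDOM-style: the element you built answers directly.
function createLazyElement(tagName) {
  const props = new Map([['tagName', tagName.toUpperCase()]]);
  return new Proxy({}, {
    set(_, key, value) { props.set(key, value); return true; },
    get(_, key) { return props.get(key); },
  });
}
const lazyInput = createLazyElement('input');
lazyInput.value = 'hi';
const direct = lazyInput.value;        // read it like a real node
```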

Comparison if React used LazyDOM semantics

Some may suggest these React caveats enforce purity in the separation of concerns, which is somewhat true, but as a browser primitive that would be a rather opinionated enforcement. In practice, it’s helpful to be able to treat things as if “it’s just DOM”, so people can follow their own preferences (or break them) to just get things working and ship it™. Purely speculative, but a primitive like LazyDOM might enable new and useful patterns that would otherwise not emerge. After all, React was created by rethinking established best practices.

Lazy tree merging

When you merge a lazy tree into another, the incoming tree is used as a diff/patch just as you might expect, but each corresponding node in the new tree now internally references the original, real DOM nodes. This might be a bit hard to wrap your head around, but the effect is what’s most important: immediately after you merge, the new LazyNodes actually reference the original DOM nodes you rendered the first time. LazyNodes are effectively lazy proxies: when they can, they queue up the changes you make, like setting properties and appending children, and apply them to a real node only when it needs to exist.

It might help to know that the reference implementation (polyfill) of LazyDOM does in fact use an ES2015 (ES6) Proxy, which is not a well-known feature. Many people assume Proxies are slow, but they’re actually pretty darn fast!

Although not usually necessary, you can manually reify the real DOM node (make it actually exist), access it, and even reassign it using the node property. This is what mergeWith does under the hood after it’s applied the diff, so that the new lazy tree’s nodes reference the original real nodes.
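A sketch of that node accessor, under my assumptions about its semantics: reading node reifies the real node, and assigning node retargets the proxy and flushes queued changes onto the new target.

```javascript
function createLazyElement(tagName) {
  const state = { tagName, node: null, queued: new Map() };
  return new Proxy(state, {
    get(t, key) {
      if (key === 'node') {
        if (!t.node) {                                   // reify on first read
          t.node = { tagName: t.tagName.toUpperCase() }; // stand-in for createElement
          for (const [k, v] of t.queued) t.node[k] = v;
          t.queued.clear();
        }
        return t.node;
      }
      return t.queued.has(key) ? t.queued.get(key) : (t.node && t.node[key]);
    },
    set(t, key, value) {
      if (key === 'node') {                              // retarget: mergeWith does this
        t.node = value;
        for (const [k, v] of t.queued) t.node[k] = v;    // flush queued changes onto it
        t.queued.clear();
      } else if (t.node) {
        t.node[key] = value;
      } else {
        t.queued.set(key, value);
      }
      return true;
    },
  });
}

const oldTree = createLazyElement('input');
const realNode = oldTree.node;  // manually reify

const newTree = createLazyElement('input');
newTree.value = 'next';
newTree.node = oldTree.node;    // reassign: point at the existing real node
```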

Here are some of the cases where a real node is reified:

You place a LazyNode into some other real Node, like when you document.body.appendChild(lazyNode)

You read a property that requires a real node to exist but it doesn’t yet, like clientWidth and probably many others, though most could be optimized to return expected values (e.g. clientWidth is always zero when the element isn’t in the DOM yet).

What about real-world applications?

Good question. While this is just a fun experiment at the moment, I have played around with various degrees of application complexity. Here are a couple, though admittedly they still don’t completely test the viability in the real world.

LazyDOM isn’t meant to be a full-fledged UI framework or library like React is, so there’s no baked-in concept of setState, or lifecycle hooks; these are things you could build on top of the LazyDOM primitives.

So keep in mind that this reference implementation of LazyDOM is cobbled together and not really optimized or battle-tested. Because of this, I’m not sharing the unminified source code at this point; you can play in the JSBin at your own risk. There are certainly tons of bugs too: some I’m aware of, and others of the kind that only get fixed over years of real-world discovery. It would be pretty distasteful to draw any concrete performance comparisons at the moment, beyond the general statement that it is indeed fast. React contains hundreds of fixes for browser bugs and inconsistencies, so I want to make it clear that I’m not suggesting “death to React” or anything similar.

Alternatives

The most obvious alternative that some people arrive at when reviewing this is, “why can’t real Element/Text nodes just implement laziness as an implementation detail, unrelated to the spec?” Indeed they could, to an extent. Whatever makes node creation expensive could be done lazily, or browsers could even wrap their internal logic behind nearly the same Proxy logic I use in the reference implementation. Even then, it wouldn’t satisfy this proposal, since I’m advocating for a way to change the node which the proxy targets, so that new lazy trees can be merged into existing trees in place. That ability would either have to be supported by the existing Element/Text nodes, or the new LazyElement/LazyText variants could provide it.

Another thing that could be punted on is the mergeWith method, which does the actual diffing and performs the newLazyNode.node = oldLazyNode.node assignment so that new tree nodes point to the existing nodes. While this would certainly benefit from standardization, the details are probably more controversial and (relatively) easy for libraries to handle. Specs should focus on primitive building blocks, and I can see mergeWith being argued as too opinionated.

EDIT: 11/2017 — Yehuda Katz now has a WHATWG proposal for “DOMChangeLists”, which is very similar to Node#mergeWith() in spirit. Specs move slowly, but the progress made so far is reassuring.

This is just an experiment!

Finally, I want to clarify that this is meant purely as a technical experiment. I’m not directly involved with the W3C or the React team, and there are no plans to maintain the reference implementation as a real-world library. I just thought it was an interesting idea and something worth experimenting with. Some may argue that pitching this as a browser API is the wrong approach, and I don’t disagree; it could live as a separate framework like React does. I’m just not excited at the prospect of yet-another-framework.

The chances of this ever becoming a real browser standard are quite slim, even if it turned out to be a useful approach. Please don’t take the code, use it in production, and then complain about bugs :)

“lazy”, not “virtual” DOM

Now might be a good time to clarify that I’m not sure this is “virtual DOM”. As far as I’m aware, this is a novel approach, so I’m currently calling the technique “lazy DOM”, hence the name. I merely kept the “virtual” buzzword for the obvious similarities and people’s familiarity with it.