Exploring Dualities

If there is one common thread I’ve seen when reflecting on topics around design, it is that things are never just black or white. Before talking about any specific technical choices, I think it’s important to appreciate not only the scale from one extreme to the other, but to understand that by slightly reframing the question, things that seem to be opposites can end up sitting right next to each other. Stepping out far enough in one direction can wrap around to the other: Communism becomes Fascism, infinity becomes zero, everything becomes nothing at all.

In programming and API design we have dualities like this too, and I’ve found they serve as a great source of inspiration when you embrace their boundaries on all sides. For example, React is built upon exploring one of the biggest dualities: Declarative vs Imperative. Almost nothing about React’s underpinnings is Declarative. It runs through a series of instructions, completely reconstructing a Virtual DOM every cycle. Yet its authors managed to offer a Declarative API through JSX sitting on a completely Imperative engine. They didn’t do it by changing their fundamental truth, but by going even deeper into their imperative nature. Hooks and the Rules of Hooks are further examples of this.

Reactive vs Differential

The Virtual DOM and its diffing rendering engine have been the go-to solution for UIs over the last several years. Even newer non-Virtual DOM solutions like lit-html still work off a diffing engine of sorts, using Tagged Template Literals and HTML Template elements. Being able to represent the whole view as simply view = fn(state) has big benefits in conceptual simplicity.
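To make the model concrete, here is a minimal sketch (not React’s actual implementation) of view = fn(state): the whole view is rebuilt as plain data every cycle, and a diff step turns only the real differences into patches.

```javascript
// The entire view as a pure function of state, rebuilt every cycle.
const view = (state) => ({ tag: "span", text: `Count: ${state.count}` });

// Compare the previous tree to the next one; emit patches only for changes.
function diff(prev, next) {
  const patches = [];
  if (!prev || prev.text !== next.text) patches.push({ setText: next.text });
  return patches;
}

const first = view({ count: 1 });
const second = view({ count: 2 });
diff(first, second);              // one patch: the text changed
diff(second, view({ count: 2 })); // no patches: identical output
```

The conceptual win is that the developer only ever writes the `view` function; the engine pays for the rebuild and the diff on every update.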

A Reactive rendering engine, by contrast, is one built on events. The view is not a pure transformation but a side effect of a change in data. Instead of building a composition of functions that runs every cycle, you can avoid the diff/patch step entirely by constructing a dependency graph upfront and, upon update, only applying changes where applicable. How is this achieved? Through much finer-grained wrapping of expressions, often right down to the individual binding to the DOM on a per-attribute basis.
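The dependency graph idea can be sketched in a few lines. The names `createSignal` and `createEffect` here are illustrative, not any specific library’s API: each effect subscribes to exactly the signals it reads, so an update re-runs only its dependents rather than diffing a tree.

```javascript
// Minimal fine-grained reactive core (illustrative sketch).
let currentObserver = null;

function createSignal(value) {
  const subscribers = new Set();
  const read = () => {
    if (currentObserver) subscribers.add(currentObserver); // build the graph on read
    return value;
  };
  const write = (next) => {
    value = next;
    [...subscribers].forEach((fn) => fn()); // notify only dependents
  };
  return [read, write];
}

function createEffect(fn) {
  const run = () => {
    currentObserver = run;
    fn();
    currentObserver = null;
  };
  run();
}

const [count, setCount] = createSignal(0);
let binding = "";
createEffect(() => { binding = `Count: ${count()}`; }); // dependency recorded here
setCount(5); // only this one binding re-runs; no tree-wide diff
```

Note that the graph is built as a side effect of simply reading the signal inside an effect; nothing else in the system needs to be re-examined on update.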

As you can imagine, while there are immense benefits in performance after the fact, initial rendering in Reactive systems has classically carried considerable overhead, both in ensuring the data can be listened to and in constructing the dependency graph. These factors weigh not only on performance but on developer experience. The constant need to map data to getters/setters (to track dependencies) and proprietary templating DSLs can be a cognitive overhead. Finally, there are just some cases, like data snapshots, where the finer granularity doesn’t help. When React burst onto the scene, benchmarks started appearing to push the idea that the Virtual DOM was the more performant technology. So what is a Reactive developer to do? Let’s see if we can learn something by looking at the patterns differential rendering engines use.

The first thing to help with the classic shortcoming of initialization is to look at pre-compilation. React already showed us how to use JSX to make an Imperative API look Declarative. Its components provide a generalized abstraction, and it works right next to natural JavaScript without a specialized DSL. What would happen if you used the same pre-compilation technique on an already Declarative API? Could you take a generalizable JavaScript/Component API but optimally construct slim code to create the Reactive dependency graph? Sure you can. Constructing the graph nodes as you create the DOM nodes once upon initialization, and then just updating what changes, turns out to be incredibly fast across the board.
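Roughly, hand-written code in the shape a compiler might emit could look like the sketch below (names assumed, and the DOM stubbed with a plain object so it runs anywhere): the node is created once at initialization, and only the dynamic expression is wrapped in a reactive effect.

```javascript
// Tiny inline reactive core so the sketch is self-contained.
let currentObserver = null;
function createEffect(fn) {
  const run = () => { currentObserver = run; fn(); currentObserver = null; };
  run();
}
function createSignal(value) {
  const subs = new Set();
  return [
    () => { if (currentObserver) subs.add(currentObserver); return value; },
    (next) => { value = next; [...subs].forEach((fn) => fn()); },
  ];
}

// What JSX like <span>Count: {count()}</span> might compile down to:
function Counter(count) {
  const span = { data: "" };                                // created once, never diffed
  createEffect(() => { span.data = `Count: ${count()}`; }); // graph node built at init
  return span;
}

const [count, setCount] = createSignal(0);
const el = Counter(count);
setCount(3); // writes through to span.data directly; Counter never re-runs
```

The key point is that the expensive parts, node creation and graph construction, happen exactly once, while updates are direct writes.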

The next thing to consider is the awkwardness of dealing with Reactive data. While a diff engine doesn’t need specialized data types, it still needs to know when to update. And in the case that it supports hierarchical trees (Components), it needs to bind that update function to the current Component. So if you are already dealing with explicit setters, maybe all we need to do is make the getters cleaner. ES2015 Proxies seem perfectly up to the task. They can intercept property access, not only allowing dependency tracking but also wrapping children in their own Proxies. Serializing enumerable properties for data submission and constructing the structure from server snapshots is trivial. As long as the “state” object is constructed, there is no API syntax cost over the plain objects used in diff engines (semantics is a different consideration).
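A sketch of that Proxy approach follows (the names `reactive` and `autorun` are mine, not any library’s actual API): the `get` trap records dependencies and lazily wraps nested objects, the `set` trap notifies only readers of that property, and because enumerable properties pass straight through, serialization just works.

```javascript
let currentObserver = null;
const cache = new WeakMap(); // reuse wrappers so subscriptions are shared

function reactive(obj) {
  if (cache.has(obj)) return cache.get(obj);
  const subscribers = new Map(); // property key -> Set of observers
  const proxy = new Proxy(obj, {
    get(target, key) {
      if (currentObserver) {
        if (!subscribers.has(key)) subscribers.set(key, new Set());
        subscribers.get(key).add(currentObserver); // track this read
      }
      const value = target[key];
      return typeof value === "object" && value !== null
        ? reactive(value) // wrap children in their own Proxies
        : value;
    },
    set(target, key, value) {
      target[key] = value;
      (subscribers.get(key) || []).forEach((fn) => fn());
      return true;
    },
  });
  cache.set(obj, proxy);
  return proxy;
}

function autorun(fn) {
  const run = () => { currentObserver = run; fn(); currentObserver = null; };
  run();
}

const state = reactive({ user: { name: "Ada" } });
let greeting = "";
autorun(() => { greeting = `Hello, ${state.user.name}`; });
state.user.name = "Grace";   // only readers of `name` re-run
JSON.stringify(state);       // plain enumerable access: serializes like a POJO
```

No getter/setter boilerplate appears at the call site; the state reads and writes like a plain object.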

Finally, there is the consideration of places where a diff engine naturally does better than a Reactive one. Things like continuous ingestion of large data snapshots can be cumbersome to set up in a Reactive rendering system. Whole trees of information may or may not be there.

A diff engine like the one used in React’s Virtual DOM is still Reactive in a basic sense. Updates are queued and executed. You can view the actions that trigger updates as events, just with a single event governing the whole system. You could make the most inefficient React clone ever by simply wrapping each Component in a Reactive system with a Computation re-triggering the whole thing to re-render every time state updated. Whatever the depth of the update, all children of that Component would be reconstructed. What makes React performant is that it recreates a Virtual DOM each time and then diffs, ensuring the real DOM is not recreated each time.
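That single-event view can be sketched directly (a deliberately naive model, with the diff step elided): one “state changed” notification re-runs the entire view function, and it is only the diff against the previous output that would keep the real DOM cheap.

```javascript
// One coarse reactive atom governing the whole system.
let state = { count: 0 };
const subscribers = [];
let renders = 0;

function setState(partial) {
  state = { ...state, ...partial };
  subscribers.forEach((fn) => fn()); // the single event for everything
}

function render() {
  renders += 1;
  return `Count: ${state.count}`; // the whole output is rebuilt every update
}

subscribers.push(render);
render();               // initial render
setState({ count: 1 }); // triggers a full re-render, whatever the depth of change
```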

So in a sense, you can view the lifecycle of React as a single Reactive atom. And to our benefit, a Reactive system can always become less fine-grained if needed. So perhaps diffing can just be a tool in the belt. But it needs to happen without re-rendering everything. Proxies can come to the rescue here again to provide a generic answer. If we can diff deep state, we can then notify only the places where properties are added, removed, or basic values have changed. In this case, the Proxy’s awareness of the data’s shape makes the diff easier to traverse compared to typical Reactive atoms (Observables, Signals, etc.), which work as completely isolated cells.
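A deep diff of that kind might look like the following sketch: walk two state snapshots in parallel and report the path of each property that actually changed, so a fine-grained system can notify only the affected bindings.

```javascript
// Compare two plain-state snapshots and collect the paths that changed.
function diffState(prev, next, path = "", changes = []) {
  const keys = new Set([...Object.keys(prev), ...Object.keys(next)]);
  for (const key of keys) {
    const p = prev[key];
    const n = next[key];
    const at = path ? `${path}.${key}` : key;
    if (p !== null && n !== null && typeof p === "object" && typeof n === "object") {
      diffState(p, n, at, changes); // recurse into shared subtrees
    } else if (p !== n) {
      changes.push(at); // added, removed, or changed leaf value
    }
  }
  return changes;
}

const changed = diffState(
  { user: { name: "Ada", age: 36 } },
  { user: { name: "Ada", age: 37 } }
); // only "user.age" differs
```

Each reported path maps naturally onto the property-level subscriptions a Proxy-based store already keeps, which is exactly the shape awareness isolated atoms lack.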

So through this exploration of how the other side lives, the takeaways are:

JSX and ES2015 Proxies have the potential to address some classic weaknesses of the fine-grained Reactive approach.

If the need arises, for performance or otherwise, the benefits of diffing can be opted into to shore up future weaknesses. Advancements in Virtual DOM techniques are still applicable here.

Mutable vs Immutable

This duality is as fiercely debated as any (OOP vs Functional might still be the winner). This is definitely one where perspective changes with scale. Zoom in or zoom out and you might be seeing very different things. If you took an immutable system and zoomed all the way out, somewhere you’d find an assignment. At the very top somewhere you would find state = newState. After all, what is change but mutation? Yes, you never changed the previous state, but somewhere you will have mutated that reference. state is no longer what it used to be. On the other hand, picture a system where everything is mutable. Zoom in far enough and eventually you will hit a constant, some value that cannot be changed. So what if, instead of worrying about which approach is better for managing our state, waxing poetic over the value of the contract, we instead looked at the significance of the boundary between mutable and immutable data?

The significance is that it is the single point where change can be detected, and libraries are built on optimizing around knowing where that point is. All UI libraries need to know about change. A fine-grained system is often viewed as a mutable one by default, since you apply many changes throughout the graph, but each atom (Observable/Signal) expects its contents to be immutable; otherwise it will not know to update. A Component in React that manages its own state is adding another mutable mounting point within the seemingly immutable tree.
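A small sketch makes the boundary visible (illustrative code, not any library’s API): a signal detects change only at its own edge, so mutating the object it holds crosses no boundary and goes unnoticed, while replacing the reference does.

```javascript
// A signal whose change detection lives entirely at its boundary.
function createSignal(value) {
  const subs = new Set();
  return {
    get() { return value; },
    set(next) {
      if (next === value) return; // same reference: no detectable change
      value = next;
      subs.forEach((fn) => fn());
    },
    subscribe(fn) { subs.add(fn); },
  };
}

const user = createSignal({ name: "Ada" });
let updates = 0;
user.subscribe(() => { updates += 1; });

user.get().name = "Grace";   // mutation inside the atom: invisible to it
user.set(user.get());        // same reference: still no update
user.set({ name: "Grace" }); // new reference crosses the boundary: update fires
```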

More often than not you can mess with this point and the library will still work, just less efficiently. In a fine-grained system, you can always just replace the parent near the root and re-render everything. In a Virtual DOM library, you can mutate state directly and trigger a forceUpdate, which will figure things out as it must. So maybe there is more room to play around in this range than originally thought?

We like mutable systems because they are optimal for performance and simpler in a “Diablo-esque” point and kill sort of way. We like immutable systems because of the strong contract they present which lends to better traceability. Common knowledge tells us “the right tool for the job.” That’s a lazy cop-out that people use to sound smart when they have no idea what they are talking about. The truth is things are never that cut and dry, and knowing that, why settle for either’s classic strength or weakness? Why not pick and choose your tradeoffs?

We’ve seen people make mutable APIs over immutable systems. They do so to make things easier to use while keeping the traceability. They do it by loosening the contract, but without gaining the performance benefit. Why aren’t people making immutable APIs over mutable systems? That way we could add a strict contract without losing the performance, at the cost of ease. It depends on what you care about, and where you are starting from. A fine-grained Reactive system is mutable by nature, and it already has a bulky API around getters/setters. Classically it also suffers performance hits from too many distinct (non-batched) changes, and from the ease with which passed-around data can be mutated, leading to cascading effects. From that point, we have the option to take something like ES2015 Proxies and make things easier. Or conversely, we accept the setter API we already have and enforce the immutable contract on top of our system. It’s not necessarily the most straightforward choice, and it’s one we will explore in the next section, but only through exploring the full range are we even presented with the option.
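One possible shape of an immutable API over a mutable store, sketched here purely for illustration (an assumed design, not the one Solid actually takes): reads hand out frozen snapshots, and all writes funnel through a single setter that applies the change internally.

```javascript
// Mutable storage inside; an immutable-looking contract outside.
function createStore(initial) {
  let state = initial; // the mutable core, hidden from consumers
  return {
    // Reads return a shallow-frozen snapshot, so callers cannot mutate state.
    get() { return Object.freeze({ ...state }); },
    // Writes go through one explicit setter, keeping change at a known point.
    set(updater) { state = { ...state, ...updater(state) }; },
  };
}

const store = createStore({ count: 0 });
store.set((s) => ({ count: s.count + 1 }));
store.get().count;            // 1
// In strict mode, `store.get().count = 5` would throw: the contract holds.
```

The internal representation stays mutable and cheap; the frozen surface is what buys back traceability.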

The key takeaways are:

Immutability to Mutability is not a single decision but a scale that appears in all libraries and applications.

Benefits (and weaknesses) of the other approach can often be injected by changing the API surface.

For a Reactive system, we can use the immutable contract to great effect to shore up the classic weakness that any piece of data can cause widespread change propagation.

Simple vs Easy

At this point, through exploring these opposites, we are starting to have a range of possible options that can be put together to build our solution. So in the case of designing Solid, this was a critical check. Simple vs Easy doesn’t seem like a duality; the two read as more or less the same thing. They are not. Something that is simple is not complicated. It acts in a transparent and consistent manner. Something that is easy, by contrast, is something that can be done without much effort. But that could also be a one-click button that generates a bunch of configuration in the background to handle the 80% case.

Something simple can also be easy. But it is almost always possible to make something easier at the cost of simplicity. Abstractions exist to make things easier, not to keep things simple. When things are simple, it takes less to know everything you have at your disposal. When things are easy, you only need to know a smaller subset of what there is to know.

Should we prefer simple things over easy things? Easy things may take less effort to get started with, but they can obscure your understanding of what you are dealing with and require ongoing retention of their constructed abstractions. Less complexity also lends itself to stronger construction. If something simple can be used in multiple situations, it can be said to be adaptable. On the other hand, flexibility (the ability to bend something to your will) often comes with greater complexity. Often when things are adaptable we don’t need them to be flexible.

Key takeaways:

Being easy is not without tradeoffs.

It is usually possible to make something easier but a lot harder to make something simpler.

Strive for adaptability over flexibility.

Other Dualities

This list is far from exhaustive, and I feel like weekly I’m faced with these sliding axes, like Performance vs Size, Explicit vs Implicit, or Library vs Framework. I just try to remember that sometimes by changing the question we can arrive at different answers. In designing different parts of Solid, from the Component system and Context API to Suspense, I needed to use this sort of thinking to challenge the expected norms and explore the result of turning their respective axes on their heads.

The dualities I covered were key to framing the initial solution for the system that would become Solid. Understanding the depth of metaprogramming available in JavaScript allowed me to design the API surface I desired, grabbing the best parts from the approaches I admired. Ultimately, not everything could be as simple as I wanted. The trade-offs I made allowed for a minimal API surface built on simple reusable primitives. I could use a familiar explicit API to maintain simplicity when interacting with these primitives. But some details, like updating nested fine-grained state, are in no way simple, and all I could strive for was to make them easy.

So the answer isn’t always going to be straightforward. But whenever I get stuck, this can be a useful exercise to re-imagine the problem space. Push two seemingly opposite things together, or force two similar things apart. You never know what you might come up with.