Mechanisms derived from CSP provide abstractions for pieces of work to control other pieces of work. With primitives like this, it's easier to build cooperative mechanisms for concurrency, though CSP gives you a relatively low-level set of constructs. In Functional Reactive Programming, signals represent values that change over time. Signals have a push-based interface, which is what makes FRP reactive. FRP provides a purely functional interface: you don't emit events directly, but you do get control-flow structures that let you define transformations over base signals. Signals in FRP can be implemented using CSP channels.

James Long and Tom Ashworth have some rock-solid posts on CSP and transducers (composable algorithmic transformations) that are worth looking at if you find yourself wanting more than a global event system.

Also, do check out the js-csp project, which offers a close JavaScript port of ClojureScript's core.async that implements Inversion-of-Control (IoC) encapsulation using generators (yield statements) rather than macros.
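To make the channel idea concrete, here's a toy, synchronous sketch of generator-driven "go blocks" communicating over a channel. All names here are made up for illustration; js-csp's real API is asynchronous and far richer:

```javascript
// Toy CSP: a channel is just a buffer plus a queue of parked takers.
function chan() {
  return { buffer: [], takers: [] };
}

// "Instructions" yielded by go blocks.
function put(ch, value) { return { op: 'put', ch: ch, value: value }; }
function take(ch) { return { op: 'take', ch: ch }; }

// A minimal scheduler: steps a generator, parking a take on an
// empty channel until a put arrives (real CSP also parks puts).
function go(gen, resumeValue) {
  var step = gen.next(resumeValue);
  if (step.done) return;
  var instr = step.value;
  if (instr.op === 'put') {
    instr.ch.buffer.push(instr.value);
    var taker = instr.ch.takers.shift();
    if (taker) go(taker, instr.ch.buffer.shift());
    go(gen);
  } else { // 'take'
    if (instr.ch.buffer.length > 0) {
      go(gen, instr.ch.buffer.shift());
    } else {
      instr.ch.takers.push(gen);
    }
  }
}

// Two "processes" communicating over a channel.
var ch = chan();
var results = [];
go((function* () {
  var v = yield take(ch);
  results.push(v);
})());
go((function* () {
  yield put(ch, 'hello');
})());
console.log(results); // ['hello']
```

The point of the exercise: neither generator knows about the other; the channel decouples them, which is exactly the property that makes CSP a nice substrate for signals and other higher-level abstractions.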

ES6 & Browserify

I've written in the past about the net gains of large-scale systems that take advantage of decoupling and a sane approach to JavaScript 'modules'. I consider us to be in a far, far better position now than we were a few years ago, no longer just making do with AMD, RequireJS and the Module pattern.

We can thank the abundance of increasingly reliable tooling around Browserify (there's an entire set of npm modules that are browserifiable) and a plethora of transpilation-friendly ES6 features; all super-useful while we wait for ES6 Modules to eventually ship in browsers.

es6ify by @thlorenz and @domenic results in a relatively nice pipeline for working with ES5 & ES6, with full support for source maps.

In many cases we've moved beyond this, and it's almost passé to frown at someone using a build step in their authoring workflow. Plenty of JS library authors are similarly happy to use ES6 for their pre-built source.

ES6 Modules solve a plethora of issues we've faced in dependency management and deployment, allowing us to create modules with explicit exports, import named exports from those modules, and keep those names out of the global scope. They're something of a combination of the asynchronous nature of AMD (needed in the browser) and the clarity of code you find in CommonJS, and they also handle circular dependencies in a better fashion.
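A minimal sketch of the syntax (file names are illustrative, and this spans two files, so it's not a single runnable snippet):

```js
// math.js: a module with explicit named exports
export function add(a, b) {
  return a + b;
}

// app.js: import only what you need; these imports are statically
// analyzable, and 'add' is scoped to this module rather than a global
import { add } from './math.js';
console.log(add(2, 3));
```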

With dependencies in ES6 modules being static, we get a statically analyzable dependency graph, which is hugely useful. They're also significantly cleaner than CommonJS (even though Browserify workflows make CommonJS pretty pleasant to use). To support CommonJS in the browser, you either wrap each module at build time, or XHR the source in, wrap it yourself and eval it. Whilst this allows it to work in browsers, ES6 modules support both web and server use-cases more cleanly, as they were designed from the ground up to support both.

On the native front, I've also been excited to see support for ES6 primitives continue to be explored natively in V8, Chakra, SpiderMonkey, and JSC. Perhaps the biggest surprise for me was the IE Tech Preview shooting up to 32/44 on the ES6 compat table, ahead of everyone else:

IE Tech Preview already has support for ES6 Classes, for…of, Maps, Sets, typed arrays, Array.prototype methods and many other features.

In case you’ve missed the V8 progress updates I’ve been posting, here are a few reminders:

Template Strings/Literals with embedded expressions enabling easier string interpolation:
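A brief sketch (the values are made up):

```javascript
// Template literals use backticks and allow embedded ${...}
// expressions, avoiding manual string concatenation.
var user = 'Alice';
var unread = 3;
var message = `Hi ${user}, you have ${unread} new messages.`;
console.log(message); // "Hi Alice, you have 3 new messages."
```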

Object Literal Extensions. Shorthand properties and methods will help us save keystrokes:
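For example (names are illustrative):

```javascript
var x = 10;
var y = 20;

// Shorthand properties: { x, y } is equivalent to { x: x, y: y }.
// Shorthand methods drop the 'function' keyword entirely.
var point = {
  x,
  y,
  toString() {
    return '(' + this.x + ', ' + this.y + ')';
  }
};

console.log(point.toString()); // "(10, 20)"
```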

ES6 Classes bringing syntactical sugar over today’s objects and prototypes:
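A quick sketch of the sugar (class names here are made up):

```javascript
// An ES6 class is sugar over today's constructor functions and
// prototype chains; no behaviour is added that prototypes lack.
class View {
  constructor(name) {
    this.name = name;
  }
  render() {
    return '<div>' + this.name + '</div>';
  }
}

// 'extends' replaces manual prototype wiring.
class ButtonView extends View {
  render() {
    return '<button>' + this.name + '</button>';
  }
}

var button = new ButtonView('Save');
console.log(button.render()); // "<button>Save</button>"
```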

Whether it's ES6 features or CommonJS modules, we have sufficient tooling to give our projects strong composition, whether they're client- or server-side, isomorphic or otherwise. That's kind of amazing. Don't get me wrong, we have a long road ahead towards maturing the quality of our ecosystems, but our composition story for the front-end is strong today.

Side: As we’ve already talked about Web Components, HTML Imports are worth a mention here too. JavaScript modules may not always be the best container format for components & their corresponding templates — a lot of people still use additional tooling to load and parse them.

There's an argument to be made that, as JS developers, we have an existing and somewhat mature ecosystem of tooling around scripts, so moving back to HTML and having to rewrite tools to support it as a dependency mechanism can feel backwards. I see this problem solved in part by tools like Vulcanize (for flattening imports) and hopefully going away with HTTP/2.

I'm personally conflicted about how much machinery should go in pure script vs. an import, and where the lines are drawn once we start to look at ES6 module interop. That said, HTML Imports are a nice way to package component resources, and they load scripts without blocking the parser (though they do still block the load event). I remain hopeful that we'll see usage of imports, modules and interop between both systems evolve in the future.

The Offline Problem

We don’t really have a true mobile web experience if our applications don’t work offline.

There have been fundamental challenges in achieving this in the past, but things are getting better. This year, APIs in the web platform have continued to evolve in a direction that gives us better primitives, most interestingly of late, Service Workers. Service Workers are an API that allows us to make sites work offline by intercepting network requests and programmatically telling the browser what to do with them.

Service Worker enabling an offline experience for the Chrome Dev Summit site. This feature is available in Chrome 40 beta and above.

They’re what AppCache should have been, except with the right level of control. We can cache content, modify what is served and treat the network like an enhancement. You can learn more about Service Worker through Matt Gaunt’s excellent Service Worker primer or Jake Archibald’s masterful offline patterns article.
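As a rough sketch of the kind of control this gives you (the cache name and file list are illustrative, and this only runs in a browser's Service Worker context), a cache-first handler might look like:

```js
// sw.js: pre-cache a few assets at install time, then serve from
// the cache first and fall back to the network.
self.addEventListener('install', function (event) {
  event.waitUntil(
    caches.open('v1').then(function (cache) {
      return cache.addAll(['/', '/styles.css', '/app.js']);
    })
  );
});

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.match(event.request).then(function (cached) {
      return cached || fetch(event.request);
    })
  );
});
```

Unlike AppCache's declarative manifest, every decision above is ordinary imperative code you can change per-request.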

In 2015, I would like to see us evolve more routing and state-management libraries built on top of Service Workers. First-class offline and synchronization support for any routes you've navigated to would be huge, especially if developers can get it for next to free.

Sneak peek of the Push API working in Chrome for Android.

This would help us offer significant performance improvements for repeat visits through cached views and assets. Service Workers are also somewhat of a bedrock API: request control is only the first of a plethora of new functionality we may get on top of them, including Push Notifications and Background Synchronization.

To learn more about how to use Push Notifications in Chrome today, read Push Notifications & Service Worker by Matt Gaunt.

Component APIs and Facades

One could argue that the "facade pattern" I've touched on in previous literature is still viable today, especially if you don't allow the implementation details of your component to leak into its public API. If you are able to define a clean, robust interface to your component, its consumers can continue to use it without worrying about the implementation details. Those can change at any time with minimal breakage.

An addendum to this could be that this is a good model for framework and library authors to follow for the public components they ship. While this is absolutely not tied to Web Components, I've enjoyed seeing the Polymer paper-* elements evolve over time with the machinery behind the scenes having minimal impact on public component APIs. This is inherently good for users. Try not to violate the principle of least astonishment, i.e. the users of your component API shouldn't be surprised by its behaviour. Hold this true and you'll have happier users and a happier team.
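As a tiny sketch of the pattern (all names here are made up), the facade below exposes two stable methods while keeping its storage strategy private:

```javascript
// Illustrative facade: consumers rely on a small, stable API while
// the implementation behind it is free to change.
function createSettings() {
  // Internal detail: a plain object today. This could later be
  // backed by localStorage or IndexedDB without breaking callers.
  var data = {};
  return {
    get: function (key) { return data[key]; },
    set: function (key, value) { data[key] = value; }
  };
}

var settings = createSettings();
settings.set('theme', 'dark');
console.log(settings.get('theme')); // "dark"
```

Because `data` is closed over rather than exposed, nothing a consumer writes can depend on the storage format, which is what keeps the implementation swappable.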

Immutable & persistent data structures

In previous write-ups on large-scale JS, I haven’t really touched on immutability or persistent data structures. If you’ve crossed paths with libraries like immutable-js or Mori and been unclear on where their value lies, a quick primer may be useful.

An immutable data structure is one that can't be modified after it has been created; "modifying" one means efficiently producing a new copy with the change applied. A persistent data structure is one that preserves the previous versions of itself when changed. Data structures like this are effectively immutable: mutations don't update the structure in place but instead generate a new, updated structure, so anything pointing to the original structure has a guarantee it won't ever change.

The real benefit behind persistent data structures is referential equality so it’s clear by comparing the address in memory, you have not only the same object but also the same data ~ Pascal Hartig, Twitter.

Let’s try to rationalize immutable data structures in the form of a Todo app. Imagine in our app we have a normal JS array for our Todo items. There’s a reference to this array in memory and it has a specific value. A user then adds a new Todo item, changing the array. The array has now been mutated. In JavaScript, the in-memory reference to this array doesn’t change, but the value of what it is pointing to has.

For us to know if the value of our array has changed, we need to perform a comparison on each element in the array — an expensive operation. Let’s imagine that instead of a vanilla array, we have an immutable one. This could be created with immutable-js from Facebook or Mori. Modifying an item in the array, we get back a new array and a new array reference in memory.

If we go back and check whether the reference to our array in memory is the same, we're guaranteed the value hasn't changed. This enables all kinds of things, like fast and efficient equality checking: as you're only comparing the reference rather than every value in the Todos array, the operation is far cheaper.
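You can see the shape of the idea even with plain arrays, as long as updates are non-destructive (this is only a sketch; immutable-js and Mori make such updates cheap via structural sharing rather than full copies):

```javascript
// concat returns a brand new array rather than mutating in place,
// so an update always produces a new reference.
var todos = ['Item 1', 'Item 2'];
var updated = todos.concat('Item 3');

console.log(todos === updated); // false: the cheap reference check says something changed
console.log(todos === todos);   // true: identical reference, identical data
console.log(todos.length);      // 2: the original is untouched
```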

As mentioned, immutability should allow us to guarantee a data structure (e.g. Todos) hasn't been tampered with. For example (rough code):

var todos = ['Item 1', 'Item 2', 'Item 3'];
updateTodos(todos, newItem);
destructiveOpOnTodos(todos);
console.assert(todos.length === 3);

At the point we hit the assertion, it’s guaranteed that none of the ops since array creation have mutated it. This probably isn’t a huge deal if you’re strict about changing data structures, but this updates the guarantee from a “maybe” to a “definitely”.

I've previously walked through implementing an Undo stack using existing platform tech like Mutation Observers. In a system built that way, memory usage grows linearly with the size of the stack. With persistent data structures, that memory usage can potentially be much smaller if your undo stack takes advantage of structural sharing.

Immutability comes with a number of benefits, including:

Typically destructive updates like adding, appending or removing can be performed on objects belonging to others without unwanted side-effects.

You can treat updates like expressions as each change generates a value.

You get the ability to pass objects as arguments to functions and not worry about those functions mutating the object.

These benefits can be helpful for writing web apps, but it's also possible to live without them, and many do.

How does immutability relate to things like React? Well, let's talk about application state. If state is represented by an immutable data structure, you can check for reference equality when deciding whether to re-render the app (or individual components). If the in-memory reference is equal, you're pretty much guaranteed the data behind the app or component hasn't changed, which lets you bail out and tell React that it doesn't need to re-render.
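A plain-JS sketch of that bail-out check (names are illustrative; in React this logic would live in a shouldComponentUpdate hook):

```javascript
// With immutable state, identical references imply identical data,
// so a one-line comparison replaces a deep walk of the structure.
function shouldRerender(prevState, nextState) {
  return prevState !== nextState;
}

var state = { todos: ['Item 1'] };
var unchanged = state;                         // same reference
var changed = { todos: ['Item 1', 'Item 2'] }; // new structure

console.log(shouldRerender(state, unchanged)); // false: skip the render
console.log(shouldRerender(state, changed));   // true: re-render
```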

What about Object.freeze? Were you to read through the MDN description of Object.freeze(), you might be curious as to why additional libraries are still required to solve the issue of immutability. Object.freeze() freezes an object, preventing new properties from being added to it, existing properties from being removed, and existing properties, or their enumerability, configurability or writability, from being changed. In essence, the object is made effectively immutable. Great, so... why isn't this enough?

Well, you technically could use Object.freeze() to achieve immutability, however, the moment you need to modify those immutable objects you will need to perform a deep copy of the entire object, mutate the copy and then freeze it. This is often too slow to be of practical use in most use-cases. This is really where solutions like immutable-js and mori shine. They also don’t just assist with immutability — they make it more pleasant to work with persistent data structures if you care about avoiding destructive updates.
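A small sketch of the copy-then-freeze dance (property names are made up); for a three-key object this is trivial, but for a deep structure the full copy on every update is the cost the persistent-data-structure libraries avoid:

```javascript
// Object.freeze gives (shallow) immutability, but "updating" a
// frozen object means copying it, mutating the copy and freezing
// again, which gets expensive for large structures.
var todo = Object.freeze({ text: 'Write post', done: false });

// todo.done = true; // silently ignored (throws in strict mode)

var updated = Object.freeze(Object.assign({}, todo, { done: true }));

console.log(todo.done);    // false: the original is untouched
console.log(updated.done); // true
```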

Are they worth the effort?

Immutable data structures (for some use-cases) make it easier to avoid thinking about the side-effects of your code. If you're working on a component or app where the underlying data may be changed by another entity, you don't really have to worry if your data structures are immutable. Perhaps the main downside to immutability is the memory overhead, but again, this really depends on whether the objects you're working with contain lots of data or not.

We have a long way to go yet

Beyond an agreement that composition is fundamentally good, we still disagree on a lot. Separation of concerns. Data-flow. Structure. The necessity for immutable data structures. Data-binding (two-way isn’t always better than one-way binding and depends on how you wish to model mutable state for your components). The correct level of magic for our abstractions. The right place to solve issues with our rendering pipelines (native vs. virtual diffing). How to hit 60fps in our user-interfaces. Templating (yes, we’re still not all using the template tag yet).

Onward and upward

Ultimately how you ‘solve’ these problems today comes down to asking yourself three questions:

1. Are you happy delegating such decisions and choices to an opinionated framework?

2. Are you happy ‘composing’ solutions to these problems using existing modules?

3. Are you happy crafting (from scratch) the architecture and pieces that solve these problems on your own?

I’m an idiot, still haven’t ‘figured’ this all out and am still learning as I go along. With that, please feel free to share your thoughts on the future of application architecture, whether there is more needed on the platform side, in patterns or in frameworks. If my take on any of the problems in this space is flawed (it may well be), please feel free to correct me or suggest more articles for my holiday reading list below:

Note: You'll notice a lack of content here around Web Components. As my work with Chrome covers articles written on Web Components primitives and Polymer, I feel sufficiently covered (maybe not!), but am always open to new explorations around these topics if folks have links to share.

I’m particularly interested in how we bridge the gaps between Web Components and Virtual-DOM diffing approaches, where we see component messaging patterns evolving here and what you feel the web is still missing for application authors.