We like React and Flux at NoRedInk, and a while back we decided to add immutability to the mix.

Others have written about the general benefits of immutability, but the primary motivating factor for us was its debugging benefit: knowing for certain that a value could not have been accidentally changed sometime after it was instantiated. This saves us a ton of time, because we can instantly rule out many potential culprits that otherwise we would have to spend time investigating.

This is especially beneficial when using Flux, because it lets us implement our stores in a way where accidental updates of the store’s data (outside using its normal API) become nearly impossible.

Questing for a Guarantee

A quick-and-dirty way to guarantee full immutability (as opposed to shallow immutability like Object.freeze, which does not recurse) is to serialize a given data structure as a JSON string. However, this forces you to deserialize again every time you need to read values from your data structure, which is both clunky and costly.
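To make the clunkiness concrete, here is a minimal sketch of the serialize-to-JSON hack. The variable names are illustrative, not from any library:

```javascript
// Quick-and-dirty immutability via JSON serialization (illustration only).
// While the data lives as a string it cannot be mutated, but every read
// pays for a full parse, and every "update" for a full stringify.
const store = JSON.stringify({ user: { name: "Frodo" }, scores: [1, 2, 3] });

// Reading requires deserializing the whole structure again...
const snapshot = JSON.parse(store);

// ...and mutating the copy never touches the stored string.
snapshot.user.name = "Sauron";
const reread = JSON.parse(store).user.name; // still "Frodo"
```

Every consumer has to know the value is a string and remember to parse it, which is exactly the friction we wanted to avoid.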

What we wanted were data structures that were guaranteed to be fully immutable, but which took no more effort to use than their mutable counterparts.

Since we’d been so impressed with React and Flux, our first instinct was to reach for Facebook’s own immutable.js. We tried it out, but soon hit three problems.

Problem 1: Like Object.freeze, the collections are only shallowly immutable. In other words, any mutable objects we place in immutable.js collections remain mutable.

```javascript
var obj = {foo: "original"};
var notFullyImmutable = Immutable.List.of(obj);

notFullyImmutable.get(0) // { foo: 'original' }

obj.foo = "mutated!";

notFullyImmutable.get(0) // { foo: 'mutated!' }
```

Partially mutable collections miss out on immutability’s biggest debugging benefit: knowing for certain that the collection could not have been accidentally changed after instantiation.
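Getting that guarantee requires freezing recursively. A minimal deep freeze looks roughly like this; it is a sketch of the idea, not seamless-immutable's actual implementation, which also wraps collections with extra methods:

```javascript
// A minimal recursive freeze: freeze the value itself, then every
// object or array reachable from it. (Sketch only; seamless-immutable's
// real implementation does more than this.)
function deepFreeze(value) {
  if (value !== null && typeof value === "object" && !Object.isFrozen(value)) {
    Object.freeze(value);
    Object.keys(value).forEach(function (key) {
      deepFreeze(value[key]);
    });
  }
  return value;
}

const hobbit = deepFreeze({ name: "Frodo", gear: { weapon: "Sting" } });

// Assignment to a frozen property is silently ignored
// (or throws in strict mode), so we guard it here.
try { hobbit.gear.weapon = "Anduril"; } catch (e) { /* frozen */ }
```

With this in place, the nested `gear` object is just as protected as the top level.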

Problem 2: We had to do a lot of converting back and forth to go from immutable.js collections to vanilla JS collections. It surely wasn’t as bad as the quick-and-dirty “serialize to JSON” hack would have been, but it was a recurring pain nevertheless.

Since immutable.js collections have a very different internal representation than vanilla JS collections, using them with functions expecting vanilla JS data structures requires an explicit conversion step (using immutable.js helper methods) even if the function would not attempt to mutate the data in the collection.

```javascript
var hobbits = Immutable.fromJS([
  {name: "Frodo"},
  {name: "Samwise"},
  {name: "Meriadoc"}
]);

_.pluck(hobbits, "name")        // Runtime exception
_.pluck(hobbits.toJS(), "name") // ["Frodo", "Samwise", "Meriadoc"]
```

It wasn’t that this took a lot of effort to code, but rather that it was a nuisance to remember. We’d see a runtime exception crop up, remember that we’d decided to use an immutable.js data structure there, and double back to add the conversion step.

This came up not only for third-party libraries, but also for our internal code. It meant more friction when invoking our own preexisting helper functions, and encouraged writing new helper functions in terms of immutable.js - making them more trouble to use in other parts of the code base.

Problem 3: The API had unorthodox, changing opinions on functional programming fundamentals

Discovering we could not call map on an immutable.js collection and then call map again on the result was a real shock - like discovering that evaluating "foo".toString() would for some reason return {stringRepresentation: "foo"}. We assumed this was a bug, because in the course of normal programming you expect toString to return a string, 5 to be an integer, and map to be chainable. Anything that doesn’t follow these well-established semantics deserves a different name.

When we discovered that this was a design decision and not a bug, it was time to part ways. We had already spent enough time hunting down the consequences of that design decision the first time it bit us, and the fact that the design was eventually reversed was not enough to restore confidence in a library that would necessarily pervade our code base. As the saying goes: “Fool me twice, shame on me.”

Fortunately, trying out immutable.js did help us enumerate what we wanted in an immutables library:

1. Fully immutable collections: once instantiated, it’s guaranteed that nothing about them can change.
2. As little effort as possible needed to use them with JS libraries.
3. No surprises; APIs follow established conventions.

We searched for something that met all these needs and came up empty; thus, seamless-immutable was born.

Integrating with Third-Party Libraries

Since our calls to third-party libraries tend to be non-mutating, we haven’t spent any noticeable amount of time converting data structures when dealing with them.

An unanticipated benefit of this was realizing we could use much of Underscore’s library of functions as normal. When we wanted to use _.max or _.find, we passed them a seamless-immutable array and everything just worked.
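This works because seamless-immutable arrays are genuine (frozen) JavaScript arrays rather than a separate collection type, so functions that merely read array data accept them unchanged. A stdlib-only illustration, with Object.freeze standing in for the library:

```javascript
// seamless-immutable arrays are real, frozen Array instances, so read-only
// consumers need no conversion step. Object.freeze stands in for the
// library here so the sketch has no dependencies.
const scores = Object.freeze([3, 1, 4, 1, 5]);

const isRealArray = Array.isArray(scores);          // true
const top = Math.max.apply(null, scores);           // 5, like _.max(scores)
const found = scores.find(function (n) {            // 4, like _.find(scores, ...)
  return n > 3;
});
```

Contrast this with the immutable.js example above, where the same calls require an explicit toJS() conversion first.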

An anticipated—and enjoyable!—benefit is that existing debugging tools work swimmingly with seamless-immutable collections. If you run console.log(someComplicatedImmutableObject), the output is straightforward and readable, and includes all the interactive folding arrows we’ve come to expect for objects in the console.

Backing React Components

As long as you’re using React 0.12 or later, these collections also work just fine as a replacement for React components’ props and state values. (Prior versions of React used to mutate the props and state objects they were passed.)

Naturally, using an immutable object for props or state means that setState and setProps no longer work, as they attempt to mutate those values, but you can use merge with replaceState to achieve the same effect.
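The replace-instead-of-mutate pattern can be simulated with plain frozen objects. In a real component this would be this.replaceState(this.state.merge({...})); here both merge and replaceState are local stand-ins so the sketch runs on its own:

```javascript
// Stand-in for a component's immutable state.
let state = Object.freeze({ count: 0, user: "Frodo" });

// Stand-in for React's replaceState: swap in a whole new state object.
function replaceState(next) { state = next; }

// Stand-in for seamless-immutable's merge: return a NEW frozen object
// with the changes applied, leaving the original untouched.
function merge(obj, changes) {
  return Object.freeze(Object.assign({}, obj, changes));
}

// Equivalent in spirit to: this.replaceState(this.state.merge({count: 1}))
replaceState(merge(state, { count: 1 }));
```

Because merge returns a fresh object, nothing ever mutates the old state, yet the component's state still advances.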

(We actually found ourselves doing this so often that we wrote a convenience function that builds components with setState and setProps overwritten to just use merge and replaceState behind the scenes.)

Unfortunately, React components themselves must be mutable objects. As such, it’s not possible to call map on a seamless-immutable array to generate an array of React components; seamless-immutable will call Object.freeze on everything it returns, and React components are not designed to work when frozen.

There are a few ways to resolve this. One way is to use asMutable, which returns a vanilla JS array representation of a seamless-immutable array. Another is to use another library’s map function, such as _.map from Underscore. In CoffeeScript, you also can use a list comprehension (for … in) instead of map to similar effect.
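For arrays, the asMutable route amounts to making a vanilla copy before mapping. In this sketch Object.freeze stands in for a seamless-immutable array; note that it's seamless-immutable's own map (not native Array map) that freezes its results:

```javascript
// Stand-in for a seamless-immutable array of data backing a list view.
const names = Object.freeze(["Frodo", "Samwise"]);

// Roughly what asMutable does for an array: produce a vanilla copy.
const mutable = names.slice();

// Mapping the vanilla copy yields ordinary, mutable objects -
// mutable stand-ins here for React elements, which must not be frozen.
const components = mutable.map(function (name) {
  return { type: "li", name: name };
});

components[0].name = "Bilbo"; // fine - nothing in `components` is frozen
```

_.map from Underscore achieves the same end without the explicit copy, since it always returns a fresh vanilla array.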

Performance

Three performance-related features that we lost when switching from immutable.js to seamless-immutable are lazy sequences, batching mutations, and persistent data structures under the hood.

We haven’t missed them. Lazy sequences are a reasonable tool to have for performance optimization, but even for our largest immutable instances, we have yet to encounter a performance problem in practice that they would solve. The same is true of batching mutations.

Persistent data structures are different, as their performance improvements are passive. Although seamless-immutable does not (and cannot, while maintaining its backwards compatibility with vanilla JS collections) use things like VLists under the hood, its cloning functions—such as merge—only bother to make shallow copies, as shallow and deep copies of immutable values are equivalent.
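The reason shallow copies suffice: since every nested value is itself immutable, a merged object can safely share its untouched subtrees by reference. A sketch, using a hypothetical merge in place of seamless-immutable's:

```javascript
// When all values are guaranteed immutable, merge only needs a shallow
// copy: untouched subtrees can be shared by reference with no risk of
// aliasing bugs, because nobody can ever mutate them.
function merge(immutableObj, changes) {
  return Object.freeze(Object.assign({}, immutableObj, changes));
}

const before = Object.freeze({
  profile: Object.freeze({ name: "Frodo" }),
  score: 10,
});

const after = merge(before, { score: 11 });

const shared = after.profile === before.profile; // true - shared, not copied
```

This sharing is the passive optimization the post describes: no tries or VLists, just cheap shallow copies made safe by immutability.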

In practice, this simple passive optimization has been sufficient; we have yet to encounter a performance problem that Bagwell-style persistent data structures would have solved.

Takeaways

We wanted data structures that were guaranteed to be fully immutable, but which took no more effort to use than mutable equivalents. seamless-immutable provided that where the alternatives we investigated did not.

Not only did they work with Flux data stores, they also provided a fine replacement for React components’ state and props objects. Integrating them with existing third-party libraries, even Swiss Army knives like Underscore, was a breeze.

Performance has been great, and we’ve been using it in production for months without issue.

In short: Worked as expected; installation was hassle-free; would buy again!




Richard Feldman

Engineer at NoRedInk

github.com/rtfeldman