How do tools that grew up on the desktop, like Ember, Angular and React, make the jump to the mobile future?

This post is adapted from my JSConf EU talk, given May 6, 2017 in Berlin.

When people think about adapting apps to phones, they often focus on obvious differences, like screen size, CPU performance and input devices.

But the context in which a device is used is important, too. Because phones can be used anywhere, there are very few assumptions we can make about how attentive the user is, or about whether they have an internet connection at all.

For me, working on a site like LinkedIn is a constant reminder of how truly global the web can be. In many cases, adapting to smartphones really means adapting to entirely new users. For many people, their first computer is a smartphone. That means millions, maybe billions, of people participating online without ever owning a desktop computer.

The more global your app is, the more combinations of devices and networks you’ll have to deal with. CPU power can range from a feature phone, to a low-end smartphone, to the latest iPhone, to a powerful desktop workstation. Network connectivity can range from GPRS to gigabit fiber to not being there at all. (Just try riding the subway in New York.) And then there’s the fact that each device may have a different browser, with widely varying capabilities.

Without careful design, it’s easy to optimize for one particular combination of capabilities at the expense of another. Let’s look at an example.

Imagine these two scenarios. User A has a very low-end smartphone, with a CPU that easily overheats and flash storage so slow it’s borderline useless. In fact, the flash drive is completely full, so it really is useless.

This person’s data connection is quite slow, so they use Opera Mini, which uses a proxy to heavily compress data before it gets to the phone.

User B has a high-end phone with a CPU that rivals laptop computers and plenty of fast storage. The only problem is that this person is traveling without any cellular data, so while they sometimes have access to broadband internet, it’s only when they’re in range of a Wi-Fi network.

For User A, anything that requires a lot of JavaScript is probably not going to work at all. Even if they stopped using Opera’s proxy, the slow CPU and storage mean that downloading and evaluating a bunch of JavaScript is going to take a long time. Getting reasonable load times probably means rendering on the server and keeping the file size of everything as small as possible.

For User B, we want to work more like a native application. We’d be willing to spend more time upfront to load the entire app and as much data as possible onto the phone, if it meant that we could still use it when the phone was away from Wi-Fi.
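On the web, one way to sketch that tradeoff is a service worker that precaches the app shell at install time and answers requests from the cache when the network is gone. (This is a minimal sketch; the cache name and asset list are hypothetical and would come from your build.)

// sw.js: a minimal offline-first sketch using the Cache Storage API.
const CACHE = 'app-shell-v1';
const ASSETS = ['/', '/app.js', '/app.css'];

self.addEventListener('install', (event) => {
  // Pay the download cost upfront, while we still have Wi-Fi.
  event.waitUntil(caches.open(CACHE).then((cache) => cache.addAll(ASSETS)));
});

self.addEventListener('fetch', (event) => {
  // Serve from the cache first, so the app keeps working offline.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});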

Historically, the more we’ve tried to optimize for User B and take advantage of high-end phones and fast networks, the worse we’ve made the experience for the majority of the world, represented by User A.

So what’s the solution to this problem? Can you guess?

I’ll give you a hint. It starts with the letters “P” and “E”.

Yes! Well done!

No, just kidding. The real answer to this problem is supposed to be progressive enhancement. But one thing implicit in all of the advice I’ve ever received about progressive enhancement is that you’re supposed to just do it yourself. It almost always means rendering on the server and denying yourself the temptation of using too much JavaScript.

Browsers have advanced at a remarkable rate over the last 10 years. To me, it feels like the web has more momentum than ever before. Every new browser release brings so many new features.

Despite all of this incredible innovation, from IndexedDB to Web Workers, it doesn’t feel to me like the day-to-day experience of using web apps has improved much in the past 3 or 4 years.

So why don’t radical improvements to the browser seem to be translating into radically improved web applications?

I’d like to propose that it’s because the cost of code is too damn high.

Taking advantage of all of those new features in the browser requires a lot of code! Native apps that work offline with beautiful user interfaces are hundreds of megabytes, and that’s not including the SDK that ships with the operating system.

Just downloading and parsing JavaScript can make some phones janky. When you bundle all of your JavaScript into a single file, every byte starts to count.

In turn, this sets up misaligned incentives, where libraries have to compete on file size rather than robustness. How do they achieve these improbably small file sizes? Often, it’s by persuading you that the old thing was unnecessarily complex — the cardinal sin in JavaScript — and that they have seen through the BS and built something simple.

It is this emphasis on file size that leads the JavaScript community to its simplicity fetish. When small file sizes depend on simplicity, and speed depends on small file sizes, and speed is paramount on the web, then we have to pretend that simple tools are the best tools — what other choice do we have?

The only way to write an app that runs well on slower phones and networks is to ship less JavaScript. Too often, that comes at the expense of handling edge cases or building higher-level abstractions.
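One sketch of what “ship less JavaScript” can look like in practice: split the bundle and load features on demand with dynamic import(). (The module path and Editor class here are hypothetical, and this assumes a bundler or browser that supports dynamic import.)

// Load the editor’s code only when someone actually asks for it.
async function openEditor() {
  // Until this runs, none of the editor’s JavaScript is downloaded or parsed.
  const { Editor } = await import('./editor.js');
  new Editor(document.querySelector('#editor')).render();
}

document.querySelector('#edit-button').addEventListener('click', openEditor);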

As a community, we often feel like we can’t build sophisticated solutions because eventually we start to collapse under our own page weight. We’ve seen this play out several times now. More sophistication = more code = slower load times.

The time period from 2011 to 2017 can roughly be broken up into the Backbone Era, the Angular Era and the React Era. (This same time period is also sometimes known as the Ember Era.)

It’s easy to become enamored with the simplicity of a tool, and that can lead us to underestimate the complexity of building modern web apps. Let’s hop in the time machine and see how the simplicity fetish has played out.

In 2011, Backbone was the cutting edge of web app technology. I remember people would always say, “I love Backbone’s simplicity. I can read and understand all of its source code in an hour.”

But after they built a big enough app, it would get slower and slower, and they would discover that a single model change caused the entire app to re-render. No one on the team understood how the app worked.
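The pattern usually looked something like this. (A sketch, assuming Backbone’s usual Underscore and jQuery dependencies; the view and template are made up.)

// Every model change re-renders the whole view, and since views nest,
// one change can cascade into re-rendering the entire hierarchy.
const TodoView = Backbone.View.extend({
  template: _.template('<label><%= title %></label>'),

  initialize() {
    this.listenTo(this.model, 'change', this.render);
  },

  render() {
    this.$el.html(this.template(this.model.toJSON()));
    return this;
  },
});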

But good news! Unlike the complexity of Backbone, where you have to listen for change events and re-render entire view hierarchies manually, Angular’s super simple: you just set a property on your scope and it updates the DOM for you, automatically.
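Something like this. (A hypothetical Angular 1.x controller, sketched from memory of the era; the template would contain bindings like {{name}}.)

angular.module('app', []).controller('GreetingCtrl', function ($scope) {
  $scope.name = 'Berlin';

  $scope.update = function (newName) {
    // Just assign; Angular’s dirty checking notices the change on the
    // next digest cycle and updates every binding in the template.
    $scope.name = newName;
  };
});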

But after building a big enough app, you discover that the entire thing is a single controller with 3 million lines of code. Each dirty check takes 5 minutes. No one understands how the app works.

But good news! React solves this problem by being so much simpler than all of that Angular spaghetti. It can be simpler because it’s just the V in MVC.
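In code, the pitch looked roughly like this. (A sketch using the circa-2017 ReactDOM.render API; it assumes a JSX build step, and the component is made up.)

// The UI is just a function of props and state; React diffs the output
// and patches only the parts of the DOM that actually changed.
function Greeting(props) {
  return <h1>Hello, {props.name}!</h1>;
}

ReactDOM.render(<Greeting name="Berlin" />, document.querySelector('#root'));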

But after building a big enough app, you discover that you actually need more than just the V in MVC and your React-Redux-Relay-Router-Reflux-MobX app weighs in at 7 megabytes, becoming just the F in WTF.

And no one understands how the webpack config works.

Don Norman, whom you may know as the author of “The Design of Everyday Things,” wrote this in an essay about security: “When security gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the security.”

For example, when password rules are too annoying, people just write their password down on a piece of paper on their desk.

I’d like to offer the Tom Dale Simplicity Corollary: the simpler you make something, the less simple it becomes.

Because when simplicity gets in the way, sensible, well-meaning, dedicated people develop hacks and workarounds that defeat the simplicity.

So how do we break out of this local maximum? How do we write one app that can scale up and down across different devices and performance characteristics?

I think we can learn from native developers, because they had to tackle a similar problem. Different CPU architectures have different instruction sets. If you write some assembly code for x86 and then want to run it on ARM, you have to start over from scratch.

Learning assembly for all of these architectures is a big task. If this were how system software was written, there wouldn’t be much cross-platform code, and introducing new CPU architectures would be borderline impossible.

We figured out a long time ago that a compiler can take a higher-level program and get it to run across all of these architectures. If a new architecture comes along, you just have to update the compiler, not rewrite every app in existence.

For example, you can use Clang and LLVM to compile C code to WebAssembly, an architecture that definitely didn’t exist in the 1970s:

# Compile the C source to LLVM IR, targeting WebAssembly (emits sample.ll)
clang -emit-llvm --target=wasm32 -S sample.c

# Lower the LLVM IR to WebAssembly
llc sample.ll -march=wasm32

Best of all, not only can a compiler get our code running on different architectures, it can also optimize it for those architectures. By encoding CPU-specific optimizations in the compiler, everyone’s code gets faster for free.
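For instance, with the same hypothetical sample.c, you can ask Clang to tune its output for whichever CPU happens to be doing the build:

# -O2 turns on the optimizer; -march=native tunes for the host CPU.
clang -O2 -march=native -S sample.c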