We can’t just assume that everyone has access to the standard of devices and connections that we’ve grown accustomed to as developers. Especially not us: we’re building a new web game platform for a global audience of millions, that will be all about frictionless access to the best and latest games.

Not just for us, but for everyone: our users come from all over the world, which means that for our new product we’re optimizing for a wide range of devices and connection speeds. In this post we’ll walk through the considerations that led to our current choice of front-end stack.

Going for an SPA

We knew we wanted to build a single-page application (SPA) in order to have more control over the user experience of our website, making it as smooth as possible. It also helps speed the site up, since there’s no longer a need for full page reloads: we only load the data we don’t have yet and then re-render the page.

One big downside of SPAs, however, is that they’re generally dependent on downloading a large chunk of JavaScript, parsing it and making some API calls before anything can be rendered to the page.

Needless to say, this causes issues for our users, as well as for search engine crawlers. Google has solved the crawler problem to some extent, but it doesn’t beat a document that is already rendered and ready to go.

Isomorphic JavaScript

As much as we want to build an SPA, we also care about speed and search engine performance. Because of this we decided to opt for something referred to as “isomorphic” or “universal” JavaScript. What it comes down to is that we write our code in such a way that it can run both on the server and in the browser: we render once on the server, and subsequent pages are fetched without full page reloads.

To facilitate this we use Node.js to serve our front-end application. This is a small layer that sits between our back-end and the client. It makes API calls to fetch all the data it needs to render the current view and then returns the full markup to the browser.

One thing to note is that these API calls go from server to server over a high-speed internet connection, meaning we don’t have to burden our users with making these calls over (much) slower connections.

Isomorphic JavaScript, via Matt Hinchcliffe

Efficiently bundled code with ES2015 modules

Once the server has done its job, the client takes over. To do so, it has to download some (CSS and JavaScript) assets and parse them. These assets should be optimized so the user has the smallest possible bundle to download and the browser can parse the code efficiently.

To do this, we do the usual minification, mangling and compression you’d expect, but we also took some time to look at various tools that can actually bundle the code.

The tools we evaluated were, in order: Browserify, Rollup and Webpack. We started with Webpack since it was a tool we were familiar and comfortable with, and we could quickly get a first version working. We hit a few snags early on, however, that spurred us to give the other tools a chance:

It’s only capable of understanding CommonJS syntax.

It wraps each module in its own function closure which adds overhead both in terms of download size as well as parsing efficiency.

CommonJS modules are not statically analysable which means that even if you only use a fraction of the functionality in a module, the entire module is included in the final bundle.

ES2015 modules, by contrast, were designed from the ground up to be statically analysed — this means that tools which understand this syntax can figure out which parts of the module are actually used and only include those parts. This process is referred to as tree-shaking.

Rollup was the first tool to support this. On top of tree-shaking, it can also output the code in various formats, of which a self-executing function is suitable for inclusion via a script tag.

It actually puts all the code inside a single function closure, keeping bundle size down, as well as increasing parsing efficiency. This equates to happier browsers and thus happier users.
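As an illustration (a simplified, assumed example, not taken from our actual build output), Rollup-style bundling hoists all modules into one closure and drops exports that were never imported:

```javascript
// Sketch of Rollup-style output: imagine math.js exported both
// `add` and `multiply`, but only `add` was imported anywhere.
var app = (function () {
  'use strict';

  // from math.js — `multiply` was never imported, so tree-shaking
  // leaves it out of the bundle entirely
  function add(a, b) {
    return a + b;
  }

  // from main.js — module code runs in the same closure,
  // with no per-module function wrappers
  var result = add(2, 3);

  return { result: result };
})();

console.log(app.result); // 5
```

Contrast this with one function wrapper per module plus a small runtime to wire the modules together, which is what CommonJS-style bundlers produce.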

The problem with Rollup, however, is that its community is fairly small and it lacks support for features like code splitting and sharing memory when creating multiple bundles.

This is why we finally settled on Webpack 2, which makes our development experience more pleasant and lets us share memory when creating our mobile and desktop bundles.

Additionally, Webpack 3 has since been released with support for scope hoisting, the same mechanism Rollup uses to put all code in a single function closure, which goes a long way toward keeping bundle size and parsing times down.
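For reference, scope hoisting in Webpack 3 is opt-in via the ModuleConcatenationPlugin; a minimal config fragment might look like this:

```javascript
// webpack.config.js (fragment) — enables scope hoisting in Webpack 3
const webpack = require('webpack');

module.exports = {
  plugins: [
    new webpack.optimize.ModuleConcatenationPlugin(),
  ],
};
```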

ES2015 with Babel

ECMAScript has evolved rapidly over the past few years and a lot of the new features are making their way into browsers at unmatched speeds.

We author all our code in ES2015 syntax because we want to use a syntax that will eventually be understood natively by browsers. This means we can simply look at our site’s browser usage and adjust the configuration of the babel-preset-env preset to decide which features should be transpiled.
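A hypothetical .babelrc for this setup could look like the following; the browser list is an example, not our real support matrix:

```json
{
  "presets": [
    ["env", {
      "targets": {
        "browsers": ["last 2 versions", "safari >= 9"]
      },
      "modules": false
    }]
  ]
}
```

Setting `"modules": false` tells Babel to leave ES2015 module syntax intact, so the bundler can still statically analyse it and tree-shake.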

We did take a quick look at TypeScript as well; while it’s certainly interesting, we’re not familiar enough with it to use it for production purposes just yet.

We could also have gone with good ol’ ES5 but ES2015 provides a lot of nice features which allow you to more concisely express your logic, reducing cognitive load.

On top of ES2015 we use a few features from upcoming ECMAScript proposals, such as the object spread operator, since they make our code that much more concise.
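For example, spread syntax lets us merge arrays and objects without mutating either source (the object form was still a proposal at the time and is handled by the relevant Babel plugin):

```javascript
// Array spread is ES2015; object spread was still a proposal
// at the time of writing and gets transpiled by Babel.
const defaults = { theme: 'light', locale: 'en' };
const userPrefs = { locale: 'nl' };

// Merge without mutating either source object
const settings = { ...defaults, ...userPrefs };

const baseTags = ['games', 'web'];
const allTags = [...baseTags, 'mobile'];

console.log(settings); // { theme: 'light', locale: 'nl' }
console.log(allTags);  // [ 'games', 'web', 'mobile' ]
```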

Fetch

Since we are building an SPA, we also needed a way to get data into our application. Ideally we’d like to be as “close to the metal” as possible: just a thin wrapper around Node libraries and browser technologies that abstracts the differences away for us.

We landed on Fetch because it’s a browser technology with great support (Chrome 42+, Firefox 39+, Edge 14+), and there is also an npm package that implements the same API on top of Node’s native http module.

For browsers which don’t have access to the Fetch API there is an excellent polyfill available.
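A hedged sketch of what such a thin wrapper could look like; `getJSON`, `resolveFetch` and the node-fetch fallback are assumptions about wiring, not our production code:

```javascript
// Prefer the global fetch (browser, or polyfilled); otherwise fall
// back to the node-fetch npm package (assumed installed server-side).
function resolveFetch() {
  if (typeof fetch === 'function') return fetch;
  return require('node-fetch');
}

// Small helper: fetch a URL and resolve with parsed JSON.
// `fetchFn` is injectable so the helper is easy to test.
function getJSON(url, fetchFn = resolveFetch()) {
  return fetchFn(url).then((res) => {
    if (!res.ok) {
      throw new Error(`HTTP ${res.status} for ${url}`);
    }
    return res.json();
  });
}
```

The same helper then works unchanged on both sides of the isomorphic setup, with only the underlying implementation differing.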

React

When it comes to building a SPA, you can either do things from scratch or use a library to help you out. Building from scratch is a lot of work and adds a lot of new decisions to be made, so we decided to go with a framework.

There are many libraries to choose from these days: React, Angular, VueJS… The list goes on. Endlessly. Forever. New frameworks are being created every day and it never seems to stop, so when choosing one it was important to be pragmatic.

We were familiar with the two libraries making the most headway in the JavaScript community, so the main contenders for us were React and Angular 2. We’ve had positive experiences with both and were comfortable that we could start building stuff fast with either one.

Angular 2 vs. React, via 500Tech

Both of these libraries have their own pros and cons so we evaluated both. In the end we went with React for a couple of reasons:

Angular 2 had only recently come out of beta, so not much of production quality had been built with it.

Google doesn’t seem to dogfood Angular much with their own projects, even though there are plenty of projects which seem ideal for it.

Facebook built React because it solved a problem for them; they built parts of Facebook with it, as well as the entire Instagram website. They actually use it.

A lot of other companies are using React. This doesn’t mean we want to hop on any kind of bandwagon, but it does mean that if something were to happen to Facebook, there are other companies to fill the void.

React is just a view layer: you can decide what other parts you actually need and you’re free to use whatever library makes sense.

Angular provides everything out of the box. This is a double-edged sword: it gets you up and running quickly, but there is potentially a lot in it that you don’t use, which adds bloat.

React has a tiny API surface area, making it easy for newcomers to get into since you’re writing regular JavaScript most of the time, not a library-specific DSL.

That said, React has some downsides as well:

The license React comes with has caused some issues, but Facebook has since set up a FAQ to help people understand it. As a result, several companies which previously disallowed the use of React now embrace it.

The React ecosystem can be overwhelming. There are so many packages to choose from and a lot of them seem to solve the same problem. It’s important to find something you’re comfortable with and remain pragmatic before jumping ship to a new package every week.

That said, these weren’t dealbreakers, so we decided to go with React.

Redux

Since React is just a view layer, we needed to find another library to help us with the “model” part of the application. When React was released it popularised (at least within the JavaScript community) the concept of a unidirectional data flow.

This means that data only ever flows through your application in a single direction: any changes you want to make are applied at the top of the tree and then propagate down through the entire application tree. Facebook referred to this pattern as the Flux architecture.

There are quite a few variations of the Flux architecture, but none of them really hit the mark until Redux came along. It upped the ante by not actually doing Flux at all, instead taking cues from the Elm language and improving upon the Flux pattern laid out by Facebook.

Redux helps us maintain state for our entire application, storing everything from API data to information about which site the user is viewing.

With the entire application state in one place, your interface simply becomes a visual representation of said state. This is extremely powerful since it becomes predictable and easily testable. Any time the state updates, the interface updates.
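A minimal sketch of the pattern: the action type and state shape below are illustrative, and the tiny `createStore` only mimics Redux’s API rather than using the real library.

```javascript
// A pure reducer: given the current state and an action,
// return the next state without mutating anything.
function gamesReducer(state = { games: [] }, action) {
  switch (action.type) {
    case 'ADD_GAME':
      return { ...state, games: [...state.games, action.game] };
    default:
      return state;
  }
}

// Minimal stand-in for Redux's createStore, for illustration only
function createStore(reducer) {
  let state = reducer(undefined, { type: '@@INIT' });
  return {
    getState: () => state,
    dispatch(action) {
      state = reducer(state, action);
    },
  };
}

const store = createStore(gamesReducer);
store.dispatch({ type: 'ADD_GAME', game: 'subway-surfers' });
console.log(store.getState().games); // [ 'subway-surfers' ]
```

Because the reducer is a pure function, any given sequence of actions always produces the same state, which is what makes the interface predictable and easy to test.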

Additional libraries

There are other libraries we use but we won’t dig too deep into them since they don’t significantly affect how we structure our code.

However we do observe one trend when it comes to picking libraries: we tend to choose libraries that do one thing well or at least split their code up in modules so we can include only the parts that we need.

Package management & code quality

Besides the code that actually ships to our users we also need to make sure that our development and deployment processes are efficient and enjoyable.

To check the syntax of our everyday coding we use ESLint, ensuring we all write code in the same way and reducing the number of common errors.

For unit testing we use AVA; it’s great because it runs tests in parallel, which really shaves off some time as we aim for >80% unit test coverage.

We use Yarn to manage our dependencies. It gives us faster, reproducible builds which helps especially in our CI process — builds now take seconds, rather than the minutes they would take using npm 3.

Living styleguide

Because we have some components that are shared between projects we decided to bundle these together as a separate package. This allows us to use these shared components and things like brand colours in any product and continuously improve upon them, benefiting all projects.

Whenever we push to this project we also generate a living styleguide containing all the components, so we can always look up how they are used. Most importantly, most of the styleguide is generated directly from the code itself, which reduces our maintenance overhead.