Browser JavaScript-land was not a place I thought I’d be fighting a battle of speed. Years ago, computers and internet speeds were getting faster and faster. Then mobile phones came along, blasting us into the past. Screens got smaller, connections got slower, and CPU/graphics power decreased.

Our diner web apps, Grubhub and Seamless, had run AngularJS from 2014 and Angular since 2017 after a lengthy upgrade/conversion process in between. Angular carried us through a period of amazing growth and productivity. But our mobile performance still dragged — average mobile page load time was 9s to 11s, with full interactivity taking up to 17s after the diner opened the site.

While this load time was bad for diners and could potentially affect our SEO rankings, that wasn’t the only motivation. Like many software development undertakings, it started with a personal inconvenience. The application had felt slow to start up, taking over a minute for development/unit-test mode compilation, and around three minutes for Angular’s ahead-of-time (AOT) compilation.

I pitched (badgered incessantly, more like) to my team lead the idea that the site could be made faster if we could undertake a special project. And, oh yes, those “mobile device users” would benefit, too, if you need a selling point.

I’m not sure what kind of conversations happened above my level after that. I wasn’t even expecting to actually be taken up on the offer. But, as I eventually learned, apparently a four-month deadline had been proposed and a 30% page speed improvement was promised.

Soon thereafter, you could say I was placed on a performance improvement plan. For the site, that is.

Drowning in extra modules

For JavaScript applications on the web, unlike server-side software or downloaded applications, the cost of additional code impacts the user twice: once in download time, and again in run time. At the same time, open-source offerings — external modules that add abstraction layers — have exploded in popularity and number, and building a modern web application involves selecting dozens of these external modules, including the big decision on a rendering system — a “framework.”

Our application was a good candidate for a culling of these modules:

Relative to, for example, an enterprise subscription SaaS, initial impressions (one of which is site speed) have a larger effect on e-commerce conversion rates.

The document API that comes with all browsers is already an amazing abstraction layer. No need to add so many more onto it. This sentiment seems to be shared by Preact.

If a certain external library provided a truly indispensable API, we could implement the API ourselves if the external implementation was too large.

Our application size was about 200k LOC, 400–450 template components, and perhaps a hundred other logical classes.

To begin we had:

6 MB of JavaScript (measured before gzip)

a TypeScript Angular application

average mobile page load time of 9–11s, with full interactivity as late as 17s

I got my cheese grater out and looked for modules to whittle down.

moment, the chrono library

Removing moment was easy. The team had already written an alternate DateTime implementation by the time I joined. Nothing to be done here.

AB-testing software

We had identified a memory leak in a blocking script that loaded before everything else and degraded page performance, a problem compounded by the large number of experiments being run. As a result, the product team and developers jointly concluded that it should be replaced. We are currently evaluating other solutions.

core-js, the polyfill

This library added hundreds of kilobytes of potential payload that only served to patch old browsers up to modern standards. This wasn’t removed entirely, but we made it load conditionally based on feature detection.

The feature detection shibboleth:

eval('for(const v of new Set([Map,Symbol,MutationObserver,IntersectionObserver,Intl,Promise,CustomEvent])){};[].includes();[].fill();[].find(()=>{});');
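As a minimal sketch of how such conditional loading might be wired up: run the detection snippet, and only request the polyfill bundle when it throws on an older browser. The `detect`, `loadPolyfills`, and `start` callbacks are illustrative stand-ins, not our production bootstrap code.

```typescript
// Hypothetical sketch of conditional polyfill loading. `detect` would run the
// feature-detection "shibboleth" above; `loadPolyfills` stands in for injecting
// the core-js script; `start` boots the application.
function needsPolyfills(detect: () => void): boolean {
  try {
    detect(); // throws on browsers missing any probed feature
    return false;
  } catch {
    return true;
  }
}

function bootstrap(
  detect: () => void,
  loadPolyfills: () => void,
  start: () => void,
): void {
  if (needsPolyfills(detect)) {
    loadPolyfills(); // legacy browsers pay the core-js cost; modern ones skip it
  }
  start();
}
```

Modern browsers never download the polyfill payload at all, which is the whole point of the conditional load.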

lodash, the collections/algorithms library

Removing lodash was the next easiest thing. We had already converted to lodash-es for tree-shaking, so the payload wasn’t as severe as it previously might’ve been. Still, removal was straightforward. I audited all the usages, converted them to native methods where applicable, and wrote alternate implementations of functions that had no straightforward native equivalent, such as uniqBy. Size of the final replacement implementation: under 1kb.
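As an illustration of the kind of replacement involved, a hand-rolled uniqBy can be just a few lines (this is a sketch, not our actual implementation):

```typescript
// Minimal stand-in for lodash's uniqBy: keep the first item seen for each key.
function uniqBy<T, K>(items: T[], key: (item: T) => K): T[] {
  const seen = new Set<K>();
  const result: T[] = [];
  for (const item of items) {
    const k = key(item);
    if (!seen.has(k)) {
      seen.add(k);
      result.push(item);
    }
  }
  return result;
}
```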

Angular, the framework

This is the big one. Angular wants to be a one-stop-shop and was in every corner of our application, from rendering and templates to dependency injection and testing suites. We bought into this particular product hard, and you can see our enthusiasm for it in previous blog posts.

So, why would anyone want to remove a framework that advertises on its homepage:

“Achieve the maximum speed possible on the Web Platform today…”

Let’s get a little technical. For all its benefits, Angular has problems:

The runtime itself is large.

It requires two associates, zone.js and rxjs, bringing the total up to about 150–160kb (gzip).

The code it generates from its special HTML-like language is ~2x larger than the equivalent JSX.

It has a host of tools and infrastructure problems, too:

Its AOT compiler is slow and imposes a compilation step that is incompatible with other tooling.

Its AOT compiler considers only a subset of JavaScript to be valid in comparison to its JIT compiler.

It’s difficult to use existing static analysis tools or write new ones for Angular templates because they use their own language.

It uses a custom module system that is at odds with the existing language standard (ESM), making analysis of unused code difficult.

Its dependency publication system was nearly non-existent at the time.

We could’ve waited to be saved by Bazel and/or the Ivy renderer, but given this history we weren’t confident that they would arrive on the scene quickly enough, nor that they would conclusively solve these problems. We opted to remove the framework instead.

Converting to Preact

I initially said I could convert the entire application to use the native document.createElement API because TypeScript allows you to choose anything as the JSX factory function. I’m glad my team lead stopped me from trying. We decided to use Preact, and to convert our components one at a time.

Due to its impressively small size of under 4kb, adding the Preact library alongside Angular during the migration did not make things worse while we were making them better.

Utilizing an “angular-preact-bridge” component, we were able to start from the leaf nodes of our application component tree and move upward, culminating in the site-container component and the removal of Angular, without requiring a general halt of other development.

Abbreviated bridge component:
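The bridge itself was an Angular component, and its source isn’t reproduced here; the following framework-agnostic sketch only illustrates the idea. `render` stands in for Preact’s render(), and all names are hypothetical:

```typescript
// Sketch of the bridge idea: the adapter owns a host node, re-renders the
// wrapped Preact component whenever its Angular inputs change, and unmounts
// when the Angular host is destroyed.
type Render = (vnode: unknown, host: unknown) => void;

class PreactBridge<P> {
  constructor(
    private host: unknown,
    private render: Render,
    private component: (props: P) => unknown,
  ) {}

  // An Angular wrapper would call this from ngOnChanges with @Input() values.
  update(props: P): void {
    this.render(this.component(props), this.host);
  }

  // ...and this from ngOnDestroy. Rendering null unmounts in Preact.
  destroy(): void {
    this.render(null, this.host);
  }
}
```

With an adapter like this at each boundary, Angular stays unaware that a converted subtree is rendered by Preact, which is what allowed the bottom-up migration.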

Our approach to converting Angular-specific concepts to Preact:

To be clear on what conversion means: we rewrote every one of the four-hundred-plus Angular components. With each pull request containing only a handful of converted components, we were able to check for regressions.

The checklist for component conversion looked something like this:

Run the Angular template code through a regex converter, producing annotated TSX code that could be finalized by hand. (I know, I know, parsing HTML with regex).

@Input()s and @Output()s were converted to props. EventEmitters became callback props.

Template variables were converted to state variables.

Lifecycle methods could mostly be mapped one-to-one.

Remaining methods and anything containing asynchronous updates, like network requests or subscribers, were modified to call setState at the end, instead of relying on zone.js’s hidden change detection.
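The regex converter in the first step of the checklist might look roughly like this. The patterns below are illustrative, handle only a few constructs, and leave annotations for hand-finishing; this is not our actual converter:

```typescript
// Hypothetical sketch of regex-based Angular-template-to-TSX rewriting.
// Structural directives like *ngIf can't be handled by regex alone, so they
// are flagged for manual conversion.
function convertTemplate(html: string): string {
  return html
    // [prop]="expr"  ->  prop={expr}
    .replace(/\[(\w+)\]="([^"]*)"/g, '$1={$2}')
    // (event)="handler()"  ->  onEvent={() => handler()}
    .replace(/\((\w+)\)="([^"]*)"/g, (_m, ev: string, expr: string) =>
      `on${ev[0].toUpperCase()}${ev.slice(1)}={() => ${expr}}`)
    // *ngIf is structural; annotate it for a human to turn into a ternary.
    .replace(/\*ngIf="([^"]*)"/g, 'data-todo-ngif="$1"');
}
```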

For simple templates and OnPush components, any component without lifecycle methods and without state was a simple flip over to a functional component. For more complex templates and those containing lifecycle methods, we had to do a little more work:

Components with lifecycle hooks, asynchronous data requests, or state variables were converted to class components.

Angular Directives are a unique Angular concept and required some creativity to convert. Some directives were converted to wrapper components; this pattern fits the compositional style that we were now adopting. Other directives that didn’t make sense as components were either converted to native HTMLElement interactions, or built into our “Preact middleware.” Preact documentation encourages this sort of middleware adapter to its core functionality. So we wrote a function that served as our primary jsxFactory, delegating to Preact’s createElement after converting attributes like ngClass, ngStyle, i18n, href, etc. This enabled us to reduce changes to template source code during conversion while maintaining the same functionality that these Angular directives provided.
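A sketch of the middleware idea, handling only ngClass for brevity (the real factory also covered ngStyle, i18n, href, etc.; `makeJsxFactory` and its internals are illustrative names, not our actual code):

```typescript
// A jsxFactory wrapper that translates Angular-style attributes before
// delegating to the underlying createElement (Preact's, in practice).
type CreateElement = (
  type: unknown,
  props: Record<string, unknown> | null,
  ...children: unknown[]
) => unknown;

function makeJsxFactory(createElement: CreateElement): CreateElement {
  return (type, props, ...children) => {
    if (props && props.ngClass) {
      // Convert an Angular { className: boolean } map into a class string.
      const map = props.ngClass as Record<string, boolean>;
      const classes = Object.keys(map).filter((name) => map[name]);
      props = { ...props, class: classes.join(' ') };
      delete (props as Record<string, unknown>).ngClass;
    }
    return createElement(type, props, ...children);
  };
}
```

TypeScript’s jsxFactory compiler option is what makes it possible to point TSX at a function like this instead of Preact’s createElement directly.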

Forms are complex and critical in e-commerce, but it’s not the end of the world to use native forms with a little event-listener decoration. We evidently didn’t need whatever complexities were being shipped in the massive forms module.
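What “a little event-listener decoration” could look like, as a hedged sketch: a map of validators wired to native input events instead of a framework forms module. The `Listenable` interface stands in for a DOM form element, and all names are illustrative:

```typescript
// One delegated listener on the form replaces per-control framework wiring.
interface Listenable {
  addEventListener(
    type: string,
    handler: (event: { target: { name: string; value: string } }) => void,
  ): void;
}

function decorateForm(
  form: Listenable,
  validators: Record<string, (value: string) => string | null>,
  onError: (field: string, message: string | null) => void,
): void {
  form.addEventListener('input', (event) => {
    const { name, value } = event.target;
    const validate = validators[name];
    if (validate) onError(name, validate(value)); // null clears the error
  });
}
```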

We took the simple Preact router and slightly modified it to suit our needs. Because Angular routing has concepts of observable RouterEvents and Resolvers, we implemented equivalent constructs in order to reduce the amount of code we had to change. In the middle of the conversion, we had both routers active and handing off pages between each other.
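To illustrate how little code the core of a small router needs, a path matcher along these lines is the heart of it (a hypothetical sketch, not the actual modified preact-router source):

```typescript
// Match a pattern like '/restaurant/:id/menu' against a concrete path,
// returning extracted params on success or null on mismatch.
function matchPath(pattern: string, path: string): Record<string, string> | null {
  const patternParts = pattern.split('/').filter(Boolean);
  const pathParts = path.split('/').filter(Boolean);
  if (patternParts.length !== pathParts.length) return null;
  const params: Record<string, string> = {};
  for (let i = 0; i < patternParts.length; i++) {
    if (patternParts[i].startsWith(':')) {
      params[patternParts[i].slice(1)] = pathParts[i]; // named segment
    } else if (patternParts[i] !== pathParts[i]) {
      return null; // literal segment mismatch
    }
  }
  return params;
}
```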

As for Angular Modules, I’m reasonably certain this system exists only because component dependencies cannot otherwise be determined: an Angular template selector is not traceable to its component class via ESM. Yet, it’s spoken about as if it’s a feature. With TypeScript + Webpack’s support for lazy loading and bundle splitting via the dynamic import() syntax, we no longer needed to set up Angular modules for lazy loading. Dependency injection and dependency injection mocking were rewritten in plain JavaScript. Though we had ~100 classes under Angular’s DI pattern, converting them via regex took only one pull request.
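A framework-free DI container can be very small. The sketch below uses hypothetical names and shows the shape of the idea, including how a test can override a token with a mock by registering a different factory:

```typescript
// Minimal singleton-per-injector DI container in plain TypeScript.
type Factory<T> = () => T;

class Injector {
  private factories = new Map<string, Factory<unknown>>();
  private instances = new Map<string, unknown>();

  register<T>(token: string, factory: Factory<T>): void {
    this.factories.set(token, factory); // tests re-register to mock
  }

  get<T>(token: string): T {
    if (!this.instances.has(token)) {
      const factory = this.factories.get(token);
      if (!factory) throw new Error(`No provider for ${token}`);
      this.instances.set(token, factory()); // lazily built, then cached
    }
    return this.instances.get(token) as T;
  }
}
```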

We ended up with:

3.3 MB of JavaScript before gzip, down from 6.0 MB

a TypeScript application that uses Preact for rendering

average mobile page load time of 3–4s

TypeScript carried us through it all, and remains an integral part of our application. I’m continually impressed by how much it aids in understanding and refactoring code. Furthermore, because TSX (JSX), TypeScript’s HTML-like template syntax, is such a short compilation step away from plain JavaScript, keeping a mental model of source-to-runtime is much easier. Not only is this easier for humans to understand, but static analysis tools, such as the TS type checker, linters, and IDEs, have better insights to offer on our codebase.

With the breathing room that this brought us, I plan to keep playing the role of killjoy. When a developer asks if we can adopt a new module, I ask “how many kilobytes is it?” and “is there a smaller implementation of the API?”

The last observed drop in page load time as the Angular runtime was removed.

tl;dr

From October 2018 to April 2019, a series of JavaScript performance improvements were released to Grubhub and Seamless, our diner web applications.

In one important measure, average page load time for mobile devices, we saw improvement from the 9–11s range to the 3–4s range.

Two major open source libraries were of particular help in this project.

TypeScript (devtool), which we continue to use throughout the application, allowed greater confidence in refactoring.

Preact (runtime), a deliberately tiny rendering library to replace our previous one, allowing our application to shrink so that users download and run it more quickly.