Tomorrow's Ember is not what you think

Prelude

This is part two of a multi-part series responding to the call for #EmberJS2018 blog posts. In part one I discussed the current state of Ember and what's coming soon (this year). If you haven't read that yet, you should read that first.

I ended that last post with a question that I will begin to answer here:

What can we do as an ecosystem, as a community, as a team to build a technology stack that is happy, productive, easy, and performant?

#EmberJS2018

The call for blog posts was meant to explore the unknowns.

There is a vast universe of problems and questions folks are facing in their Javascript applications today that are answered poorly, partially, or not at all by any framework.

I would like to make the case for why Ember got the last four years right: why two huge technology bets (Glimmer and Broccoli) have already prepared us better for the next decade on the web than any other Javascript framework. I truly believe that upon these foundations Ember is uniquely positioned right now: a sweet convergence of productivity, happiness, and performance.

Do we have work to do to deliver first-class ergonomics and learnability? Of course. But today, let's talk about what comes next in tooling and why Broccoli is a great bet for ease, productivity and performance.

The Web is hard

The Web today is stuck in the Cycle of Uncertainty

The web is hard. I usually describe it as the most hostile programming environment possible. We expect instant-load of on-demand applications and content regardless of device quality, operating system, execution environment, latency, bandwidth, and CDN location. We expect complete, seamless experiences despite the demand for instant gratification. We are several layers of abstraction removed from the device APIs we often need to create these experiences. We expect applications to be secure. And in the middle of all this, we are tasked with supporting advertising-based revenue models that introduce scripts hostile to performance, user experience, and security.

It is no surprise given the difficulty of the problem space that so many different solutions have evolved. Neither is it a surprise given this complexity that some folks argue that the web is too complex.

We have found ourselves in a situation where we are always compromising, always building with a degree of uncertainty in our solutions, and always sacrificing architecture, features, or craftsmanship for performance.

Is the problem the language? Is it file-size? Is it the type of bytes? Is it the number or type of network requests? Is it how we split our bundles? How many bundles we have? The bytes per bundle? Is the metric that matters TTFI? TTFP? Session length?

While the best answers to these questions vary based on the type of application or content being delivered, the sacrifices made are similar. Because our users expect instant-load of content, we commonly sacrifice giving them complete experiences in favor of faster partial experiences. We need look no further than the much-lauded Twitter-lite PWA mobile experience to see this in effect.

Twitter-lite optimized performance over experience; the end result is an unusable app. When using it I almost immediately begin looking for a way to re-open the same tweet in the native app. Why? The experience is incomparable: the lite experience has limited support for threads, no animation, no app flow, and poor navigation. In comparison to the native app it is a jarring, click-to-nowhere, never-find-what-you-want experience that is built with one purpose in mind: show you the tweet you came for and assume you are going to leave afterwards.

I understand why Twitter made the decisions and sacrifices that they did: the web is hard and there are few answers available. Given the choice between a complete experience that users abandon before loading finishes, and an incomplete experience that shows users the content they want, they chose the latter. This is a common sacrifice for sites to make: it is the driving force behind the market for AMP despite how wrong it is as a technology and a platform.

For several years I deleted native apps and used only mobile-web versions of LinkedIn, Facebook, Twitter and a number of smaller services. Of these, only LinkedIn's experience came more than halfway to feeling complete, and even it was frustratingly limited. It's been a rough few years of AMP sites that don't load or are broken, attempts at static content delivery that result in fast content but no interactivity, and ill-conceived attempts at lite experiences to "solve mobile".

Even the embedded web is hard

This all is of course from the perspective of the web. The problem space increases if you examine the attempts to use web technology to build cross-platform, whether via Xamarin, React Native, Cordova, Electron, or Service Worker/App Manifest install.

Prior to working for LinkedIn, I spent several years focused mostly on hybrid applications using Cordova and Electron with Ember. What I found was that for application experiences to be seamless six things had to be true.

1. Routing scenarios needed to be simple.
2. UX animation needs needed to be simple.
3. Gesture use needed to be limited or unneeded.
4. Integration with any native device API needed to be very simple or unneeded.
5. Several device-level browser features had to be turned off (edge swipes for history and scroll bounce, to name two of the most common).
6. Platform-specific layouts and features had to be well encapsulated and kept to a minimum if they could not be avoided.

Performance and features could be brought to par with Native so long as these six very-limiting things were true. To get that performance though, you had to really learn how to diagnose performance issues and solve them.

It wasn't as simple as app-size or virtual-dom. It helped to start with good foundations, but it took a dedication to consistently doing tedious performance work and building optimized application infra to deliver first class app experiences. I got it frustratingly wrong innumerable times before I started getting it right.

In the years since, these six things seem to have remained true, even in the face of the availability of frameworks such as ReactNative.

Sidenote: I include number five as this is one area where I am unsure that we will ever be given the necessary APIs to build app experiences via a ServiceWorker approach.

Churn and JS Fatigue are symptoms of our uncertainty.

This Cycle of Uncertainty has led to churn in our choices of frameworks and tooling, our virtual-dom implementations, and our delivery mechanisms.

Ractive, no React, no virtual-dom, no Inferno, no Elm, no Preact, no Vue, no Glimmer. Grunt, no Gulp, no RequireJS, no Brunch, no SystemJS, no Browserify, no Rollup, no Webpack, no Rollup, no Webpack, no Parcel, no HTTP2. Closure compiler, WebAssembly, WebP and Brotli, GraphQL to save us? Things have improved, but we've largely been chasing magic bullets with little to show for it.

A few years ago I watched a talk in which Ryan Florence described having chosen React because he wanted to "say yes" instead of "no" to more features and more abilities for clients. I loved the enthusiasm of that sentiment, even if both then and now I believe I can say yes more often with Ember than React. Yet even in the React ecosystem, the idea of building true app experiences and shipping features is being slowly strangled to death by performance and file-size-above-all-else arguments.

Bundling and code-splitting are likely still many years away from being a solved problem

One of the limitations of Webpack, Parcel and Rollup today is that they have no hooks for being given additional graph edges, such as those that might exist via dependency injection. Because of this limitation, many frameworks and applications have entirely discarded an important tool for ergonomics, instance encapsulation, and application state management.

This limitation is one of the reasons why we didn't rush to use Webpack instead of concatting scripts as the default final step of the build in EmberCLI. Two years ago I discussed this limitation with Sean Larkin and Stefan Penner. While Sean was interested he also made it clear it would be very low priority, if it ever happened at all.

Another limitation is that once the graph is assembled, these libraries don't provide APIs for determining how many bundles to generate and where to prioritize splitting them. The current state of code-splitting is, at best, a manual process. While manual is better than nothing, it isn't suited to the complexity we face. Some devices and network conditions will prefer more bundles to fewer. Split points are likely best prioritized by file size and real application usage patterns, automatically tuned in conjunction with the device and network conditions: not manually.
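
To make that manual process concrete, here is a sketch of the kind of hand-tuned configuration code-splitting requires today. The `splitChunks` options are Webpack's real knobs, but every value below is an illustrative human guess rather than something derived from measurement:

```javascript
// webpack.config.js -- split points are chosen by a person, up front.
module.exports = {
  optimization: {
    splitChunks: {
      chunks: 'all',
      // Hand-picked numbers: nothing here adapts to the requesting
      // device, its network conditions, or observed usage patterns.
      maxInitialRequests: 5,
      minSize: 30000,
      cacheGroups: {
        // One static rule for all vendor code, for all users.
        vendor: {
          test: /[\\/]node_modules[\\/]/,
          name: 'vendors',
        },
      },
    },
  },
};
```

Every one of these values would ideally be an output of the build pipeline, not an input to it.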

When it comes to delivering application assets, roughly what we want is to quickly deliver the optimal number of bundles containing only the optimized assets necessary for initial interaction. By optimized assets we mean assets that have had dead code eliminated, have been optimized wherever their patterns allow, evaluate in the most efficient way possible, are compressed as well as possible, and were built to target the feature set of the requesting browser as closely as possible. Likely we would pre-build the most commonly seen configurations, with fallbacks for less common scenarios.

Achieving this will require extensive effort and coordination across multiple tool chains and technologies. But we're still stuck in a world where our bundlers are trying to control and manage our build processes too, instead of doubling down on being the best bundlers they could be.

I agree with @landongn that we need more collaboration between framework authors, but it's less about coalescing on the right framework and more about working together to solve tooling.

Enter Broccoli

We looked at these problems and realized that building a very efficient build pipeline that would integrate with all of the other technologies and tools necessary to create these bundles would position us to both solve these problems incrementally and integrate with Javascript community solutions as they were developed.

Broccoli is the technology we developed to do that, and I discussed how it does so in my last post.
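
As a quick illustration of that model, a Broccoli build file composes plugins into a graph of input and output trees, and Broccoli rebuilds only the nodes whose inputs changed. This is a minimal sketch assuming the broccoli-funnel and broccoli-merge-trees plugins are installed; the directory names are illustrative:

```javascript
// Brocfile.js -- each plugin is a node in the build graph: it takes
// input trees and produces a new output tree.
const Funnel = require('broccoli-funnel');
const mergeTrees = require('broccoli-merge-trees');

// Select the JavaScript from src/ and the CSS from styles/,
// each landing under assets/ in the output tree.
const scripts = new Funnel('src', {
  include: ['**/*.js'],
  destDir: 'assets',
});

const styles = new Funnel('styles', {
  include: ['**/*.css'],
  destDir: 'assets',
});

// Transpilers, minifiers, and bundlers slot in as additional nodes
// anywhere in this graph without the rest of the pipeline caring.
module.exports = mergeTrees([scripts, styles]);
```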

In EmberCLI, in addition to pre-built addons and multiple targets (Targets RFC, Standardized Targets RFC), we're exposing a packager hook (Packager RFC). These are some of our first steps towards solving the tooling problem in a broader way, and I'd recommend reading up on them.
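
Targets, for instance, let an application declare its supported browsers with a browserslist-style query that the whole pipeline (transpilation, polyfills, addons) builds against. A sketch of what that looks like, with an illustrative browser list:

```javascript
// config/targets.js -- consumed by EmberCLI and exposed to addons so
// every build step compiles for the same set of browsers.
module.exports = {
  browsers: [
    'last 2 Chrome versions',
    'last 2 Firefox versions',
    'last 2 Safari versions',
  ],
};
```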

What Broccoli, and by extension ember-addons, have given us is a way by which to ensure that your assets are built and bundled as efficiently as possible for the intended language target using the best tools currently available.

For the give-it-to-me-now folks, it is possible to go all the way with this today with a bit more legwork :).

It means that even if a library is written in Typescript (or Elm, or some other language), we can work to ensure that it is delivered in its most optimal form for the selected target, instead of relying solely on the generated output the library's authors have provided.

It means that if uglify, or closure compiler, or babel-minify, or something unknown becomes best, it gets swapped into the default toolchain as the sensible default. It means that if Parcel, or Webpack, or Rollup, or something new produces the best bundles, it gets swapped in too.

It will naturally take much more than great build tools alone to produce the ability to build great applications. Library authors will have to standardize on how they publish their libraries' build requirements (whether as we've done it via the broccoli-addon / ember-addon keyword, or otherwise). Code will have to be written in a way that tree-shaking, code-splitting, and dead-code-elimination techniques can be utilized, instead of exporting globals or entry modules packed with library features, as is still too often the case.

We have begun this within the core Ember eco-system by working to split Ember and EmberData into more individually digestible packages and by trimming away legacy parts of the framework that are no longer necessary.

Simplicity Across Complexity

We can iterate to a future in which the applications we are building are not limited by the environment we are building them for.

In which our applications, whether they be tiny static pages or media heavy applications with advanced routing, animation, and interactivity are built, optimized, and delivered by tools that handle the hard problems of ensuring ideal first-load performance for us.

In which instead of endlessly debating features vs. performance, we focus on pushing the boundaries of application expectations.

Broccoli is the foundation that will help us do just that.

I refer to this ability to deliver optimized experiences regardless of the level of complexity of the application at hand as simplicity across complexity.

Instead of different ad-hoc tooling and solutions for each kind of content and application, we want a single CLI and pipeline with strong conventions and easy configuration, so that you can use what you already know every time you begin to build, no matter what you want to build.

That's what EmberCLI and Broccoli were born to be. Let's look at some more of what such a toolchain brings:

ServiceWorker

ServiceWorker is hard.

From the need for careful dedication to proper kill switches and error handling, to proper re-install and uninstall, cache busting, request handling, and testing: ServiceWorker is fraught with the types of pitfalls that could leave your users in an incredibly frustrating broken state with little to no recourse on how to fix it. And unlike your website, such mistakes often cannot be fixed by a redeploy or rollback.

But ServiceWorker is also a key ingredient for "installable" app experiences and faster site load times. If we want the optimal asset delivery described above, it must include a strong default story for ServiceWorker.

Good news!

A toolchain built on solid conventions provides the ability to have a default ServiceWorker that has knowledge of the assets you have available!

In Ember, while we already have addons that make ServiceWorker a bit easier, we are also beginning to explore shipping one as a default.
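
To make one of those pitfalls concrete, here is a hypothetical distillation of the cache-busting step any default ServiceWorker needs: deciding which stale caches to drop when a new version activates. The helper is pure so it can be tested outside the worker; the names and wiring are illustrative, not any particular addon's API:

```javascript
// Given the cache keys present in the browser and the current build's
// cache name, pick the stale caches to delete on activation.
function staleCaches(existingKeys, currentKey) {
  return existingKeys.filter((key) => key !== currentKey);
}

// Inside a real worker this is wired up roughly like:
// self.addEventListener('activate', (event) => {
//   event.waitUntil(caches.keys().then((keys) =>
//     Promise.all(staleCaches(keys, 'app-v2').map((k) => caches.delete(k)))
//   ));
// });

console.log(staleCaches(['app-v1', 'app-v2', 'images-v1'], 'app-v2'));
// -> [ 'app-v1', 'images-v1' ]
```

A toolchain with knowledge of your built assets can generate the cache name and pre-cache manifest for you, which is exactly the kind of mistake-prone detail a convention-driven default should own.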

Server Side Rendering

Server side rendering is another important aspect of delivering simplicity across complexity.

The conventions that Broccoli and EmberCLI bring allowed us to provide SSR as a quick install of a best-in-class solution complete with incremental rehydration.

Now before I go too much further on this, I think it's important to understand that SSR is not a one-size-fits-all solution. It may even hurt your site's performance at times. We ran an experiment at LinkedIn in which highly optimized versions of our feed built with both Glimmer and Preact were used to measure various lower bounds of performance. While SSR universally helped time-to-first-paint (TTFP), it regressed time-to-first-interactivity (TTFI) for the Glimmer variant.

Our test was hitting traffic across all percentiles of devices and network conditions. All else equal, for a highly optimized application, using SSR will require the client to do more work to get to an interactive state. More work equals more time.

If you are only using SSR for performance reasons, there isn't a simple "yes or no" answer to whether or not it is worth it. The answer will be driven by the value of TTFP vs TTFI to your users, and the device and network conditions available.

However, this is something that a smart delivery pipeline and build tools with strong conventions can answer! As with which version of assets to deliver and default ServiceWorker, an integrated process gives us the ability to deliver the ideal experience tailored for the end user regardless of the type of content or application being developed.

SSR for Static Content and SEO

There are of course other reasons than TTFP to utilize SSR. Even though several major search indexers claim that they index sites with Javascript, it has still been shown that SEO suffers unless crawlable static content optimized for SEO is available.

Equally important, there are deployment environments that either don't support Javascript or don't offer access or configuration of a server.

An example of this is Github Pages. Fastboot is not the only SSR solution for Ember applications using EmberCLI. There is also Prerender. And there is Prember, which builds upon Fastboot and stands out for the capabilities it unlocks for publishing.

Prember allows us to build our application's assets with a set of pages already prepared by Fastboot for static serving: perfect for Github Pages where we have only static assets, not a server. And yes, we can ship a ServiceWorker with our Github Pages site as well.
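
A sketch of what enabling that looks like, assuming Prember's `urls` option (the route list here is illustrative):

```javascript
// ember-cli-build.js -- Prember renders the listed URLs through
// Fastboot at build time and emits them as static HTML files,
// ready for a static host like Github Pages.
const EmberApp = require('ember-cli/lib/broccoli/ember-app');

module.exports = function (defaults) {
  const app = new EmberApp(defaults, {
    prember: {
      urls: ['/', '/about', '/docs'],
    },
  });
  return app.toTree();
};
```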

If that level of integration wasn't cool enough, members of the Ember community built ember-cli-addon-docs to make building documentation for addons ridiculously easy. They used Prember to ensure those docs were shipped to your addon's Github Pages site optimized for speed!

This level of tooling integration across the ecosystem is the power of EmberCLI, and it wouldn't work so flawlessly without Broccoli, the key primitive it is built upon.

Integration is about more than your production build

EmberCLI offers three environments: production, development, and testing. Why these three, and why does this matter?

sidenote, if you've ever accidentally gotten hung up on dev vs prod builds, read this

Over the years, you may have heard members of the community discussing svelte builds. Svelte meant several things. It meant removing cruft. It meant shaking-off parts of the framework applications didn't use. It also meant stripping debug logic from production builds.

In development and testing, Ember (and innumerable addons) provide extensive assertions, warnings, more verbose stack traces, and guards against bad patterns (such as things which trigger extra renders). In production, these debug helpers are stripped away, reducing the size of your shipped code and removing the performance penalty that these checks impose.
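
The pattern behind that stripping can be sketched in a few lines: guards live behind a flag that the production build defines to a literal `false`, so the minifier removes the whole branch as dead code. The flag and function below are illustrative, not Ember's actual internals:

```javascript
// In a real build the bundler replaces DEBUG with `false` for
// production, letting dead-code elimination delete the guard entirely.
const DEBUG = process.env.NODE_ENV !== 'production';

function formatPrice(cents) {
  if (DEBUG) {
    // Present in development and testing; stripped from production,
    // so the check costs neither bytes nor runtime there.
    if (!Number.isInteger(cents)) {
      throw new Error(`formatPrice expects integer cents, got ${cents}`);
    }
  }
  return `$${(cents / 100).toFixed(2)}`;
}

console.log(formatPrice(1999)); // -> $19.99
```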

In testing, we take this to the next level by injecting waiters and counters for async behaviors. Initially this was used to make the wait() helper more robust, but now we also detect async test leakage. And it's not just Ember (which, by the way, is itself an addon): lots of addons create test waiters and add assertions for async leakage now. Craftsmanship is a community value.
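
The waiter idea itself is simple enough to sketch: async work registers with a counter, test helpers refuse to proceed until the counter settles, and a teardown that sees a non-zero count can flag a leak. This is a hypothetical distillation, not the actual Ember test-waiter implementation:

```javascript
// Track in-flight async work. A wait()-style helper polls
// pendingCount() and only resolves once it reaches zero; a teardown
// that sees a non-zero count reports async leakage.
let pending = 0;

function trackAsync(promise) {
  pending += 1;
  return promise.finally(() => {
    pending -= 1;
  });
}

function pendingCount() {
  return pending;
}

// Usage: wrap the async work your code kicks off.
trackAsync(new Promise((resolve) => setTimeout(resolve, 10)));
console.log(pendingCount()); // -> 1, settling back to 0 once resolved
```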

With ember-exam we run our tests with parallelization and randomization. Pair that with reporting of async leakage, and you've got a great start towards a more trustworthy test suite. This is just a small part of why so many folks say that the testing story in Ember is one of their favorite things, and it's enabled by EmberCLI.

CSS Optimization

If you haven't looked at CSS Blocks yet, you should. CSS Blocks is just the beginning of what is likely to be a long road of building optimizing compilers for CSS, in the same manner as has been done for JS.

With so many 3rd party components and component ui frameworks available, how do you ensure consistent performance?

Via Broccoli, we allow each addon to self-report its needs and prepare its own assets. But we do so in a coordinated way in which each addon knows the environment and the build target. The same applies to CSS and CSS optimizers. Library integration will allow us to easily enable 3rd party components and UI frameworks that coordinate their CSS optimization.

Much like how we strip out debug helpers and run uglify in production but focus on developer ergonomics in development, projects such as CSS Blocks can produce user-friendly class names in development and aggressively optimized class names in production builds.

Integration Across Teams

The integrated workflows I'm describing here aren't just for ensuring the best build output for the given environment or target. They are a convention, and being a convention they become shareable.

A major joy of working in the Ember ecosystem is the ability to drop into any Application and immediately be productive. Project layouts are similar, naming is similar, and most importantly the tooling is the same.

With Ember Engines, the Ember eco-system has pushed this concept as far as ensuring that Ember works just as well for applications that span multiple teams and products. Engines are an isolation mechanism for sharing code and interoperating with other parts of an application that a team does not own.

They are also a natural code-splitting point (but not the only one).

When considering tooling that unleashes productivity for applications or content of any kind, we cannot forget that such products are also worked on by team(s) of every size. EmberCLI, thanks to Broccoli, ensures we deliver on that.

This doesn't need to be specific to Ember, much of our ecosystem including most of what this post describes is more ready to be shared with others than folks realize.

Conclusion

I hope that by this point I have conveyed the power of the primitive we've built upon. Were the Ember community to suddenly fold today, Broccoli is the primary piece I would bring with me wherever I went.

Recently, I used it to rapidly build an API framework (with framework addons!) for node using the latest Javascript features the entire way through the stack.

If there's one piece of the Ember ecosystem we should be spreading the good news about at every dev-tools and Javascript conference there is, it is Broccoli.

And honestly, EmberCLI too, there's no reason other frameworks can't make use of it :)

Eat your greens.