Minimal Changes to Tests

We’ve put significant effort into ensuring that our tests follow best practices and a fairly strict coding style. This has helped us mitigate flakiness and enabled cross-team collaboration, because tests look the same across different sections of our codebase. It was important to us that our test files look more or less the same before and after the migration.

Most teams will not have to change the contents of their test files at all. In fact, the snippet below was enough at Airbnb to paper over the differences between our usage of Mocha and Jest.

This is possible because the Jest and Mocha APIs are similar, with only slightly different function names. Which functions you need to account for (or change in your tests) depends greatly on which functions you’re using. We also had a few test calls, which were 1:1 replaceable with it, the form we’ve standardized on. Below is a quick example of what our tests look like before and after our migration to Jest.
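A shim along these lines is enough, because Mocha’s aliases map 1:1 onto Jest’s globals. This is an illustrative sketch, not our exact file: the function name and the wiring are assumptions, and in practice it would be loaded through Jest’s setupFiles config option so it runs before each test file.

```javascript
// Illustrative compatibility shim: adds Mocha's alias names to a
// Jest-like global object so existing Mocha tests run unchanged.
function installMochaAliases(g) {
  g.context = g.describe; // Mocha's `context` is Jest's `describe`
  g.specify = g.it;       // Mocha's `specify` is Jest's `it`
  g.before = g.beforeAll; // Mocha's `before` hook -> Jest's `beforeAll`
  g.after = g.afterAll;   // Mocha's `after` hook -> Jest's `afterAll`
}

// Before (Mocha):                   After (Jest, no shim needed):
//   context('sum', () => {            describe('sum', () => {
//     specify('adds', () => {});        it('adds', () => {});
//   });                               });
```

With the aliases installed, the before and after files are character-for-character identical except for the matcher library.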

Simplified Testing Architecture

We initially rolled out code coverage using Istanbul and Mocha in January 2016, but we discovered that instrumenting our source files was expensive and added an unreasonable amount of time to our test runs.

To solve this problem we wrote some custom logic to batch our tests into chunks, run them in separate processes, collect coverage on each, and then merge the coverage into a single report at the end.
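The merge step can be sketched as follows. This is a simplified, assumed model of the pipeline, not our actual tooling: each chunk’s process writes an Istanbul-style coverage map (file path to statement hit counts), and the final step sums the hit counts across all the per-process reports.

```javascript
// Merge several Istanbul-style coverage reports into one by summing
// statement hit counts per file. Report shape (simplified):
//   { 'path/to/file.js': { s: { '0': hits, '1': hits, ... } } }
function mergeCoverage(reports) {
  const merged = {};
  for (const report of reports) {
    for (const [file, cov] of Object.entries(report)) {
      if (!merged[file]) {
        // First time we see this file: deep-copy its coverage data.
        merged[file] = JSON.parse(JSON.stringify(cov));
        continue;
      }
      // Seen before: add this process's hit counts to the running totals.
      for (const [id, hits] of Object.entries(cov.s)) {
        merged[file].s[id] = (merged[file].s[id] || 0) + hits;
      }
    }
  }
  return merged;
}
```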

Coverage was collected (and enforced) in a separate CI job from our test suite. The test suite itself was later parallelized by dispatching chunks of the test files to different worker machines and aggregating the results at the end.

Jest automatically handles splitting tests across processes and collecting and aggregating coverage, which was a major perk for us. It let us delete our custom logic in both jobs and rely solely on Jest to handle this intelligently for us.

Improved Performance

For projects with many test files, Jest will get you improved performance out of the box. It’s able to do this through a couple of mechanisms.

Parallelization. This is probably what Jest gets the most attention for, and for good reason. If you aren’t already parallelizing your CPU-bound work, you can expect a large performance gain by doing so. As stated above, we were already parallelizing our tests at Airbnb, but we did it by getting a list of all our test files and dividing them equally among our workers. This left the opportunity for a worker to get either an abnormally fast or abnormally slow queue of tests, which wasted CPU cycles. Jest instead takes a round-robin approach and runs your slowest tests first, helping you squeeze the most out of your processing power.

A shared transform cache. Applying transforms to code is very CPU-intensive. Jest has a built-in Babel transform cache that is shared across processes, so your CPU can be dedicated to running your code, cutting a lot of time off your runs.
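The difference between the two scheduling strategies can be sketched as follows. This is illustrative logic, not Jest’s actual implementation: it compares naively splitting the file list into equal chunks against sorting slowest-first and always feeding the least-loaded worker.

```javascript
// Naive split: divide the file list into equal-sized contiguous chunks.
// The makespan is how long the busiest worker runs.
function naiveSplitMakespan(durations, workers) {
  const chunk = Math.ceil(durations.length / workers);
  let worst = 0;
  for (let i = 0; i < durations.length; i += chunk) {
    const total = durations.slice(i, i + chunk).reduce((a, b) => a + b, 0);
    worst = Math.max(worst, total);
  }
  return worst;
}

// Slowest-first: sort files by duration descending, then hand each one
// to whichever worker currently has the least work queued.
function slowestFirstMakespan(durations, workers) {
  const load = new Array(workers).fill(0);
  for (const d of [...durations].sort((a, b) => b - a)) {
    load[load.indexOf(Math.min(...load))] += d;
  }
  return Math.max(...load);
}
```

With file durations like [10, 9, 2, 1, 1, 1] seconds on two workers, the naive split finishes only when its unlucky worker does (21s), while the slowest-first schedule balances the load (12s).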

With Mocha our suite took ~45 minutes to run locally, and sometimes it wouldn’t complete at all due to the memory pressure of running our full suite in a single thread. With Jest it’s down to 14.5 minutes locally. We saw a similar improvement on our build server, with Mocha clocking in at 12+ minutes (after our work to parallelize across machines) and Jest finishing in 4.5 minutes.

Reducing Flakiness

When you’ve got a test suite as large as ours (several thousand test files), running your tests in a single thread will inevitably lead to flakiness. When we started working on the migration, roughly 12% of our builds would need to be rerun due to flake, and we had tests in our suite that required other tests to run first or they wouldn’t pass.

Running each test file in isolation means it is impossible for a test to fail due to side effects from other test files in your suite. This is especially helpful for errors thrown in setTimeout calls that fire after a test has completed. Now it’s much easier for us to investigate a flaky test: we only need to check that test and its source file for asynchronous code.

After migrating to Jest and fixing the tests that failed in isolation, we were able to reduce our flake rate to ~1%. This saves our developers hours per work day, as they no longer have to wait for a build to fail and repeatedly rerun the test suite until it passes. Additionally, on the rare occasions that flake does happen, we’re able to identify where it comes from more accurately: because each file runs in its own process, any flakiness is guaranteed to come from within that file. With Mocha, a bad timer in file x could cause a test to fail in file y.

If you’re dedicated to reducing flakiness, you can take this a step further by killing timers that are set in your tests:
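A hypothetical sketch of the idea, with assumed names throughout: wrap the global setTimeout so every pending handle is recorded, then clear them all after each test so a stray callback can never fire during a later test. In a real Jest setup file, clearPendingTimers would be called from a global afterEach hook.

```javascript
// Track every timer handle created during a test.
const pendingTimers = new Set();
const realSetTimeout = global.setTimeout;

global.setTimeout = (fn, ms, ...args) => {
  const handle = realSetTimeout(fn, ms, ...args);
  pendingTimers.add(handle);
  return handle;
};

// Kill anything still pending; in Jest this would run in afterEach.
function clearPendingTimers() {
  for (const handle of pendingTimers) clearTimeout(handle);
  pendingTimers.clear();
}
```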

Taking Performance a Step Further

Jest was much faster for us out of the box, but initially we weren’t seeing the sort of improvement that we expected. After profiling a few runs, we found that our global spec_helper.js file was the culprit. This is a file that we set up with Mocha to configure some global helpers that made our tests more convenient to write. For example, we use Enzyme for testing our React code, and to make writing tests easier, we include chai-enzyme. Rather than having all of our developers hook up this library manually in every test file, we hook it up in spec_helper.js which is run before all of our tests.

This turns out to be really problematic in Jest. Because each test file is run in a clean virtual machine, Jest reruns the spec_helper.js file once for each test file. In the case of the above example, importing chai-enzyme starts a chain that imports all of enzyme, which then imports all of React and ReactDOM. This takes 480ms even for tests that do not include React. In our case 480ms * several thousand files meant that we were spending over a minute just setting up this library. With Mocha, we didn’t feel the pain from this because it isn’t parallelized, and only runs the spec_helper.js file one time.

To get around this, we got a little bit creative with Jest’s mocking capabilities. By utilizing the callback on jest.mock() we were able to intercept enzyme imports, and load chai-enzyme only for tests that need it.
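The shape of the trick can be sketched as a memoized factory; every name here is hypothetical, and the real version would be passed to jest.mock('enzyme', ...), load the real module via jest.requireActual('enzyme'), and hook up chai-enzyme on first use. Because the factory only runs when a test file actually imports enzyme, React-free tests never pay the cost.

```javascript
// Build a mock factory that loads enzyme (and wires up chai-enzyme)
// at most once, and only when something actually requires enzyme.
// `loadEnzyme` and `setupChaiEnzyme` stand in for the real requires.
function makeLazyEnzymeFactory(loadEnzyme, setupChaiEnzyme) {
  let enzyme = null;
  return () => {
    if (!enzyme) {
      enzyme = loadEnzyme();   // jest.requireActual('enzyme') in practice
      setupChaiEnzyme(enzyme); // chai.use(require('chai-enzyme')()) in practice
    }
    return enzyme;
  };
}
```

Usage would look like jest.mock('enzyme', makeLazyEnzymeFactory(...)) in a setup file, so the expensive import chain runs once per file that needs it instead of once per file, period.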

Developer Sentiment

Our ultimate goal with this migration was to improve our developer experience, both when developers write their tests and when they run them. We’ve only been on Jest a few weeks, but so far we’ve seen nothing but positive feedback: