At Wealthfront, we believe that all code should look like it was written by the same person. In practice, that means that across our frontend stack we enforce consistent patterns, follow code style guidelines, and use the same technologies. This enables engineers to easily contribute to projects they have never worked on before by reducing the cognitive overhead associated with onboarding.

We work on and maintain many JavaScript projects and internal NPM packages. Because we use the same technologies across all of them, keeping these projects in sync as we adopt new libraries has proven complicated. We solve the same problems with the same libraries, such as chai for assertions and sinon for mocks, so we found ourselves duplicating our test suite setup file in every repo that needed it. The setup file looks something like this:

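A minimal sketch of such a setup file, using the libraries named in this post (chai, sinon, sinon-as-promised); the specific configuration lines are illustrative:

```javascript
// test-setup.js: duplicated in every repo that needed it.
var chai = require('chai');
var sinon = require('sinon');
require('sinon-as-promised');

// Show full stack traces when an assertion fails.
chai.config.includeStack = true;

// Expose common helpers as globals so individual spec files
// don't have to require them.
global.assert = chai.assert;
global.expect = chai.expect;
global.sinon = sinon;
```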

This is a lot of configuration code duplicated in each repo, but at least our test suites all behaved the same. However, when we decided we wanted to stop using chai.assert.equal, we realized we would have to copy and paste the following snippet into every project we maintained!

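One plausible version of that snippet, assuming the goal was to make chai.assert.equal fail loudly in favor of the strict alternatives:

```javascript
// chai.assert.equal uses loose (==) equality; replace it with
// something that fails loudly and points at the strict helpers.
var chai = require('chai');

chai.assert.equal = function () {
  throw new Error(
    'assert.equal is disallowed; use assert.strictEqual or assert.deepEqual'
  );
};
```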

There has to be a better way!

A test-setup package

We wanted to be able to abstract out our test-setup into its own package so that we could replace all of our test setup files with the following:

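That is, the entire setup file in each consuming project shrinks to a single require:

```javascript
// test/setup.js in every project: all configuration lives in the package.
require('test-setup');
```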

If we could do this, we’d be able to consistently and reliably set up our test suite in each package. This would then enable us to make the chai.assert.equal change in one place and get the benefits everywhere.

One of the challenges of pulling our test setup into its own package is that our projects all have slight variations in dependencies. Some projects use React, others use a library to write DOM fixtures to the document, and not all need sinon or sinon-as-promised. Each of those modules requires slightly different things in the test setup. What all of our test suites had in common was that if a project used a certain package, we wanted that library configured consistently.

For our test-setup package to work across these variations in dependencies, we need some way to detect what is being used. If we could wrap our require statements in a try/catch, we'd be able to do that. Something like this:

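A sketch of what test-setup's entry point could look like with this approach; the module names come from this post, but the configuration inside each try is illustrative:

```javascript
// Configure each library only if the host project actually has it.
try {
  var chai = require('chai');
  global.assert = chai.assert;
  global.expect = chai.expect;
} catch (err) {
  // This project doesn't use chai; skip its configuration.
}

try {
  var sinon = require('sinon');
  require('sinon-as-promised');
  global.sinon = sinon;
} catch (err) {
  // No sinon here either.
}
```

Each optional dependency gets its own try/catch, so a project missing one library still gets the rest of its setup.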

This won’t work!

This works exactly how we want in Node, but it won't work with a module bundler like browserify or webpack. Browserify throws an error at build time when it processes code that requires a nonexistent module. We want to be able to catch nonexistent requires and continue, so we need to convert those compile-time errors into runtime ones.

browserify-optional

A handy package, browserify-optional, does just that.

We can use browserify-optional by putting the following in our test-setup's package.json. browserify-optional is the only dependency we need.

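A sketch of that package.json, declaring browserify-optional both as a dependency and as a browserify transform; the version numbers are illustrative:

```json
{
  "name": "test-setup",
  "version": "1.0.0",
  "dependencies": {
    "browserify-optional": "^1.0.0"
  },
  "browserify": {
    "transform": ["browserify-optional"]
  }
}
```

The "browserify" field is how a package declares transforms that browserify should apply to its source when bundling.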

When a project that depends on test-setup is bundled with browserify (which happens whenever we run Karma), browserify-optional rewrites the try/catch above. If chai doesn't exist, the result looks like the following snippet.

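An approximation of the transformed output (the exact code browserify-optional emits may differ): the missing require becomes an expression that throws at runtime, and the surrounding catch swallows the error.

```javascript
try {
  // The build-time "module not found" error is deferred to runtime.
  var chai = (function () {
    throw new Error("Cannot find module 'chai'");
  })();
  global.assert = chai.assert;
  global.expect = chai.expect;
} catch (err) {
  // The runtime error lands here, and setup continues without chai.
}
```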

Using this structure, we can now check for the existence of modules and configure them, giving ourselves a consistent test runner across all of our packages.

Testing our test-setup

In order to feel confident that browserify-optional works for us and that test-setup properly handles different combinations of dependencies, we need to be able to test this setup package.

To do this, we created a number of fixture packages, each with its own test suite. The test runner for test-setup can then iterate through each fixture, running npm install and npm test. Below is the file structure of our test-setup repo and the fixture projects in its test suite.

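An illustrative layout; chai-and-sinon is the fixture discussed in this post, while the other fixture names are made-up examples of dependency combinations worth covering:

```
test-setup/
├── index.js
├── package.json
└── test/
    └── fixtures/
        ├── chai-and-sinon/
        │   ├── package.json
        │   └── test/
        ├── react/
        │   ├── package.json
        │   └── test/
        └── sinon-only/
            ├── package.json
            └── test/
```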

The package.json for chai-and-sinon is exactly what you’d expect:

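A sketch of that file; the version ranges, the file: dependency pointing back at the repo root, and the Karma test script are assumptions:

```json
{
  "name": "chai-and-sinon",
  "private": true,
  "scripts": {
    "test": "karma start"
  },
  "devDependencies": {
    "chai": "^3.5.0",
    "sinon": "^1.17.0",
    "test-setup": "file:../../.."
  }
}
```

Because the fixture depends on chai and sinon, its suite verifies that test-setup detects and configures both; a fixture that omits them verifies the opposite path.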

Conclusion

We previously had problems keeping our test suite configuration up to date since we maintain many internal NPM packages. By creating this test-setup package, we have enabled our engineers to have consistent expectations about how the test suites behave. When we want to change what we can use in our tests or the behavior of a package, we can make that change in one place and have it propagated everywhere. Having this package has drastically reduced the overhead of managing multiple NPM packages on our frontend. It might do the same for you.