At ACL we have a library of shared React components, called acl-ui, that helps us to create a more unified user experience across our different cloud modules. The project consists primarily of presentational components that each include their own CSS file containing all the component-specific styles. Most of the components are also fully controlled, leaving all state management up to the consuming applications. We use React Storybook to render the components during development as well as to showcase the library on an internal website.

Since these components are used by multiple applications, it’s very important that they behave as expected. When it comes to testing them, we use a somewhat unconventional approach.

Our Approach

The general approach to testing any software component is to:

1. Place the component in a test harness. This is an isolated environment that simulates, as accurately as possible, the real environment in which the component will run.

2. Interact with the component via its public-facing interfaces. The idea is to test the behavior of the component, not its implementation.

3. Verify that the actual behavior matches the expected behavior.

The environment in which a React component will run is a web browser, and its public-facing interfaces look something like this:
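Roughly, sketched as a plain-JavaScript model (the component and prop names are illustrative, not actual acl-ui code):

```javascript
// A minimal model of a component's two interfaces (illustrative only).
// App-facing: input props in, callback props out.
// User-facing: what the user "sees" and the events the user triggers.
function sidePanelModel({ isOpen, onClose }) {
  return {
    // what the user "sees"
    isVisible: isOpen,
    // what the user "does": e.g. pressing Escape should fire the app's callback
    pressEscape() {
      if (isOpen) onClose();
    },
  };
}
```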

Our test harness will therefore need to simulate a web browser and allow interactions with both the App-facing and User-facing interfaces.

The most accurate way to simulate a web browser is to use an actual web browser. And the easiest way to automate user interactions in a browser is to use one of the Selenium WebDriver-based tools. WebdriverIO (wdio) runs on NodeJS and has an easy-to-use synchronous API, making it the perfect tool to simulate the user in our tests. It makes it easy to select elements and call functions like isVisible() and hasFocus() to verify what the user "sees". It also provides functions like click() and setValue() to simulate what the user "does".
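For example, a wdio (v4-era) test can query and drive the page like this (the selector is an assumption, not an actual acl-ui class name):

```js
// wdio's synchronous API; '.side-panel' is an illustrative selector
const panel = browser.element('.side-panel');
panel.isVisible();                              // what the user "sees"
panel.click();                                  // what the user "does"
browser.setValue('.side-panel input', 'hello'); // type into a field
```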

Since our project is just a library of components, there is no App. Instead, we create a Storybook story for each component which simulates an App in that it renders the component with a given set of props. This already provides a lot of what our test harness needs to do. Without any further enhancements we could create stories with different prop values, and then use wdio to verify what is displayed to the user. We could even just create one story and use the Storybook Knobs plugin to allow us to change the props on-the-fly.

What remains is to verify that the correct callbacks are called in response to user inputs from wdio. The problem is that these callbacks are not exposed to wdio. Luckily, aside from all the user interaction functions, wdio provides one very powerful function, browser.execute(), which will execute any given script in the browser. A simple way to expose the callbacks would be as follows.
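A sketch of such a story (SidePanel and the onCloseCalled flag are illustrative names; storiesOf is Storybook’s v3-era API):

```js
import React from 'react';
import { storiesOf } from '@storybook/react';
import SidePanel from '../src/SidePanel'; // illustrative path

storiesOf('SidePanel', module).add('default', () => (
  <SidePanel
    isOpen
    onClose={() => {
      // stub callback: record the call on the story iframe's window
      window.onCloseCalled = true;
    }}
  />
));
```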

We’ve passed a stub onClose() callback which simply sets a property on the global window object when it's called. (Note: Storybook renders each story in its own iframe, and each iframe has its own private window object.) We can then verify that this value was set in our test script using browser.execute() as shown below.
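A sketch of that check (the selector and flag name are assumptions, and expect comes from an assertion library such as Chai):

```js
// click the close button, then read the flag the stub callback
// set on the story iframe's window object
browser.element('.side-panel .close-button').click();
const onCloseCalled = browser.execute(() => window.onCloseCalled).value;
expect(onCloseCalled).to.equal(true);
```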

Our test harness is now capable of exposing both the app-facing and user-facing interfaces of a component, which is a great start, but there’s still a lot of room for improvement. Instead of our simple callback stub, a purpose-built mocking library could be used to pass an actual function spy. And since we’re using the window object to expose callbacks, could we also use it to expose input props which could be changed on-the-fly? This would prevent us from having to create separate stories with different prop values or use Storybook Knobs which comes with a lot of overhead. Also, could we avoid having to write out all of this functionality manually for every story?

Storybook provides a useful method, addDecorator, which can be used to wrap the primary story in a parent component. We decided to write a simple decorator that can be used to expose any input and callback props to the window object. The decorator is called windowHandles and is used as follows.
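A sketch of the usage; the option names and wiring here are assumptions about the API, not the actual source:

```js
import React from 'react';
import { storiesOf } from '@storybook/react';
import SidePanel from '../src/SidePanel';      // illustrative path
import windowHandles from './windowHandles';   // illustrative path

storiesOf('SidePanel', module)
  .addDecorator(windowHandles('SidePanel', {
    inputs: { isOpen: false },  // input props with default values
    callbacks: ['onClose'],     // callback props, mocked with Sinon.js spies
  }))
  .add('default', () => <SidePanel />);

// the decorator re-renders the story with the current values of the exposed
// props whenever window.SidePanel.<propName> is assigned
```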

Each prop is declared as either an input prop or a callback prop, and a default value can optionally be passed for input props. The decorator then exposes each prop at window.<componentName>.<propName>. Input props are exposed as standard properties which can simply be assigned to in order to update the component. Callback props are mocked using Sinon.js, and the function spy is what is exposed on the window object. One could now view this story in Chrome and open the side panel by entering window.SidePanel.isOpen = true in the DevTools console.
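The core mechanism can be sketched in plain JavaScript. This is a simplified stand-in for the real decorator: a hand-rolled spy instead of Sinon.js, and a plain callback instead of a React re-render:

```javascript
// Simplified sketch of the windowHandles mechanism (not the real implementation).
// Input props become assignable window properties; callback props become spies.
function exposeHandles(win, componentName, { inputs = {}, callbacks = [] }, rerender) {
  const handle = {};
  const defaults = { ...inputs };
  const state = { ...inputs };

  // input props: assigning to them updates state and triggers a re-render
  Object.keys(inputs).forEach(name => {
    Object.defineProperty(handle, name, {
      get: () => state[name],
      set: value => {
        state[name] = value;
        rerender({ ...state }); // the real decorator re-renders the component
      },
    });
  });

  // callback props: a tiny spy (the real decorator uses Sinon.js)
  const makeSpy = () => {
    const spy = (...args) => {
      spy.called = true;
      spy.calls.push(args);
    };
    spy.called = false;
    spy.calls = [];
    return spy;
  };
  callbacks.forEach(name => { handle[name] = makeSpy(); });

  // reset(): restore inputs, re-create spies, re-render
  handle.reset = () => {
    Object.assign(state, defaults);
    callbacks.forEach(name => { handle[name] = makeSpy(); });
    rerender({ ...state });
  };

  win[componentName] = handle;
  return handle;
}
```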

Using the above story, tests can be written as follows:
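A sketch of what these tests look like; the page object, its method names, and the matchers are assumptions:

```js
describe('SidePanel', () => {
  beforeEach(() => {
    // reload the story so tests don't affect each other
    browser.refresh();
  });

  it('is hidden by default', () => {
    expect(sidePanel.isVisible()).to.equal(false);
  });

  it('calls onClose when the close button is clicked', () => {
    sidePanel.open();
    sidePanel.clickCloseButton();
    expect(sidePanel.onCloseCalled()).to.equal(true);
  });
});
```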

Where the page object is defined as:
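A sketch of such a page object (the selectors and handle names are assumptions):

```js
const sidePanel = {
  isVisible() {
    return browser.element('.side-panel').isVisible();
  },
  open() {
    // flip the exposed input prop on the story iframe's window
    browser.execute(() => { window.SidePanel.isOpen = true; });
  },
  clickCloseButton() {
    browser.element('.side-panel .close-button').click();
  },
  onCloseCalled() {
    // read the Sinon spy's "called" flag
    return browser.execute(() => window.SidePanel.onClose.called).value;
  },
};

module.exports = sidePanel;
```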

The result is beautifully readable test scripts that clearly document and verify the expected behavior of each component. Once the Storybook story and the page object have been created, writing tests actually requires very little effort and complete feature coverage is easily attainable.

Where this solution suffers is execution time. A big time waster is having to refresh the page between tests to prevent them from affecting each other. To solve this, we added a reset() function to windowHandles which, when called, re-creates all callback spies, restores all input props to their original values, and re-renders the component. After replacing browser.refresh() with sidePanel.reset(), the tests still don't run as fast as standard unit tests, but they are acceptably fast for our needs.

The source code for our windowHandles decorator is available here.

Why not just use Jest Snapshots?

One of the main benefits of writing tests is defining and verifying behavior. This helps developers to think through the different behaviors of the component as well as providing a form of documentation for future developers. Tests can even be written before the behavior is implemented resulting in a TDD workflow.

None of the above applies to Jest Snapshots. This is because Jest Snapshots are purely for regression testing and don’t involve defining any behavior. They therefore don’t provide any verification that a component actually works. Even if a regression testing tool is what you’re looking for, Jest Snapshots are far from the best option. They are fast and painless to use, which has made them quite popular, but they’re extremely brittle because they compare the raw HTML, which is tightly coupled to the implementation details of the component (tag names, element hierarchies, etc.). This results in failed tests even when the component is in fact still working perfectly. A tool like Screener has a far more intelligent comparison strategy. Screener is a browser-based visual regression testing tool that is far less brittle because its comparisons are based on the rendered output of the component.
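For reference, a typical snapshot test looks like this (using react-test-renderer; the component name is illustrative):

```js
import React from 'react';
import renderer from 'react-test-renderer';
import SidePanel from '../src/SidePanel'; // illustrative path

it('renders correctly', () => {
  const tree = renderer.create(<SidePanel isOpen />).toJSON();
  // compares the full rendered markup against the stored snapshot;
  // any markup refactor fails this test even if behavior is unchanged
  expect(tree).toMatchSnapshot();
});
```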

In summary, Jest Snapshots might have its place, but definitely not as a primary component testing tool.

Ok, what about Enzyme?

Enzyme is a great tool and was actually our first choice (we still use it for some of our component tests) because it works really well for a lot of testing scenarios. Enzyme provides the ability to render React components and then easily extract information about the rendered output such as text content and class names. It also allows simulating events like ‘click’, ‘change’, etc. This sounds similar to the wdio API, because it is, but there are some key differences.
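A typical Enzyme test, for comparison (shallow rendering; the component, selector, and callback wiring are illustrative):

```js
import React from 'react';
import { shallow } from 'enzyme';
import SidePanel from '../src/SidePanel'; // illustrative path

const onClose = jest.fn();
const wrapper = shallow(<SidePanel isOpen onClose={onClose} />);

// extract information about the rendered output
expect(wrapper.find('.close-button').exists()).toBe(true);

// simulate a user event and check the callback
wrapper.find('.close-button').simulate('click');
expect(onClose).toHaveBeenCalled();
```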

Enzyme was not designed to be used in a real browser environment but rather in a mock browser, like JSDOM. Its API is therefore more limited and doesn’t include functions like isVisibleWithinViewport, which we used above. This is because there is no layout engine, and therefore no way to determine things like whether an element would be visible and positioned within the viewport. For a lot of testing scenarios this is perfectly acceptable, but for a library of presentational components, what is rendered by the browser is critically important.

There are other issues with Enzyme as well. We have several components that make use of so-called “portal” elements. These are elements that are attached to the DOM outside the hierarchy of the parent component. This is addressed under common issues in the Enzyme documentation: “things like ‘portals’ are not currently testable with enzyme directly”. There is a workaround, but it involves manually exposing the portal element from the component, which breaks the principle of encapsulation and is only possible if you have control over the code that generates the element; it won’t work if a third-party library generates the portal element.

Another thing we sometimes do is add event listeners on document or window. An example is listening for the escape key in order to close a modal or side panel. We need to listen to this event globally, and we need to be able to test it. This is not possible with Enzyme. Querying :focus and :hover states is also not possible. In fact, anything that happens outside the scope of React is awkward or impossible to test with Enzyme.

Conclusion

The ideal test suite would:

1. guarantee both behavioral and visual correctness,

2. take zero seconds to execute, and

3. require no effort to develop.

These are lofty goals, and there tend to be trade-offs between them, so for our component library we chose to prioritize the first. We also wanted a single testing framework that would cover all components and work in all edge cases, so the described solution was the best fit for our needs. And while it took some time to complete the initial development, writing tests does not actually require more effort than other approaches, so we didn’t sacrifice the third goal either.