Chris Zetter, a Developer at FutureLearn, talks about our automated test suite for the Ruby on Rails application, which powers the FutureLearn website.

Testing a Rails application

Having code that’s well tested with fast and understandable tests gives us the confidence and agility to continuously deliver updates to our learners and partners. It’s important to us that all the developers in the team understand how the tests are organised and what value they give so they can add to and modify them.

Of all the Ruby on Rails projects I have worked on, I have found that the thing that has differed the most between the codebases is how they are tested. There are many possible ways to structure tests in a Rails project, and a lot of additional testing tools and frameworks you can use, which leads to this variety.

Our testing pyramid

A testing pyramid is a way of thinking about how your tests are organised. At the top of the pyramid are integration tests that verify your whole application stack. They are great at providing coverage for user-facing features, but are usually slow to run, harder to orchestrate and harder to debug when they fail.

At the base of the pyramid are unit tests. Unit tests are usually faster to run, but on their own don’t give you the confidence that the components of your application are put together correctly. A testing pyramid visualises the ideal relative quantities of different types of tests, reminding us that we should have only a few slow-running feature tests sitting on top of many fast-running unit tests.

This is our testing pyramid, and I will talk through the different parts of it, starting from the base, since those tests run first in our test suite.

Linters and other safety checks

The first check of our build is to run different linters. Linters can find problems with code without running it by looking at the source files. We find them useful because they:

catch any syntax errors;

help our code follow consistent style and conventions;

and prevent us from using any problematic patterns that could cause unexpected issues.

We use RuboCop to lint our Ruby files, SCSS-Lint to check our stylesheets and JSHint with JSCS to check our JavaScript. All of these linters can be configured to choose which rules they alert on, so we have set each one to alert only on the rules that are important to us, taking into account our existing conventions and the problems they can help prevent.

We recently had an issue where we were using an incorrect time because we were using Time.now rather than Time.zone.now. As well as fixing the issue, we turned on a RuboCop check that warns us if we use Time.now again.
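As a sketch, enabling that check takes only a couple of lines of RuboCop configuration (the cop is called Rails/TimeZone; in newer RuboCop versions it lives in the rubocop-rails gem, while older versions bundled it in core):

```yaml
# .rubocop.yml — flag uses of Time.now in favour of Time.zone.now
Rails/TimeZone:
  Enabled: true
```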

The other step is to compile our assets. Assets in Rails behave differently in production than in our development and test environments (for example, they are only combined and minified in production), so by exercising asset compilation as part of the build we can catch problems that we wouldn’t otherwise see until we deploy.
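As a sketch, this build step boils down to running the standard Rails task with production settings (the exact invocation depends on your Rails version and deployment setup):

```shell
# Compile assets as production would; a compilation error
# fails the build here rather than at deploy time.
RAILS_ENV=production bundle exec rake assets:precompile
```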

Units

For our Ruby code we use RSpec to test the behaviour of every public method in a class that we have written (or action within a Rails controller). These unit tests mean that we can modify any piece of code and be told if the existing behaviour, which may be relied upon by other code, has been changed. We don’t always test methods that have been added by other code (such as attr_reader in Ruby, or a has_many statement in Rails) since we can trust that these behave as documented.
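To illustrate the kind of public behaviour these specs cover, here is a hypothetical class (the names are invented for this post, not from our codebase), shown as plain Ruby so the snippet stands alone:

```ruby
# Hypothetical example: a small class with one public method,
# which is the unit under test.
class ProgressCalculator
  def initialize(completed_steps, total_steps)
    @completed_steps = completed_steps
    @total_steps = total_steps
  end

  # Percentage of course steps completed, rounded to the nearest whole number.
  def percent_complete
    return 0 if @total_steps.zero?
    (@completed_steps * 100.0 / @total_steps).round
  end
end

ProgressCalculator.new(3, 12).percent_complete # => 25
```

In RSpec this becomes a one-line expectation such as expect(ProgressCalculator.new(3, 12).percent_complete).to eq(25), and changing the rounding or the zero-steps behaviour would fail the spec.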

We also test our Rails views. We write tests to verify that any logic behaves correctly and to check other important behaviour (such as the inclusion of a class required by JavaScript). I also find view specs useful for documenting the view’s API: what objects and methods it expects to be available.

Independent unit tests that set up and test one object at a time are great because they are easier to follow and quicker to run. We often use dependency injection and mock objects to help us achieve this.
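A minimal sketch of what that looks like in plain Ruby (all names here are hypothetical; in an RSpec suite the hand-rolled fake below would usually be a double or instance_double):

```ruby
# Dependency injection: the notifier receives its mailer rather than
# constructing one, so a test can pass in a lightweight fake.
# (Class and method names are illustrative, not from the real codebase.)
class EnrolmentNotifier
  def initialize(mailer:)
    @mailer = mailer
  end

  def notify(learner_email)
    @mailer.deliver(to: learner_email, subject: "Welcome to your course")
  end
end

# A hand-rolled mock object that records the messages it was asked to send.
class FakeMailer
  attr_reader :deliveries

  def initialize
    @deliveries = []
  end

  def deliver(**message)
    @deliveries << message
  end
end

mailer = FakeMailer.new
EnrolmentNotifier.new(mailer: mailer).notify("learner@example.com")
mailer.deliveries.first[:to] # => "learner@example.com"
```

Because the only collaborator is injected, the test exercises EnrolmentNotifier alone: no real mail delivery, no database, and it runs in microseconds.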

Sometimes it’s difficult to create truly independent unit tests. In particular, we have some presenter and service objects that wrap the functionality of Rails’ ActiveRecord models. We have found that because ActiveRecord has a large API surface area and tight coupling to the database, it can be hard to separate the objects under test. In these cases we may create and test multiple objects at once.

Outside of our Rails stack, we test our JavaScript modules using Jasmine and HTML fixtures.

Features

Our feature tests follow user journeys, such as “enrolling in a course.” They tell us if all the component parts of the system have been put together (or integrated) correctly so that our users can actually do the things they need to.

I’ve previously written about how we write feature tests using RSpec. We use Capybara to simulate navigating around our site, allowing us to test most of our application stack: running a controller action, talking to the database, rendering a view. Capybara allows you to use different ‘drivers’ to navigate your site. Our tests use either the default Rack Test driver or the Poltergeist driver, which is backed by the PhantomJS headless browser and can run JavaScript. Since the Poltergeist driver is slower to run, we only use it for features that need to execute our JavaScript.

Too much or too little?

We often talk in the team about tests, discussing how we’d write a test for a certain piece of code or if a given test is providing value.

One downside to having many tests is that they can take a while to run. Our full test suite currently takes around 12 minutes, which is a few minutes longer than we would like, so we’re currently looking at ways to make some of our tests run faster.

We also know that there are some parts of the application we don’t have automated tests for. For example, we don’t yet automate testing the visual appearance of the site; instead we spend time checking it manually in a variety of browsers, even though there are tools that might help us automate this.

We’d love to hear from others about their test suite for web applications. What do you do differently? Tell us in the comments below. Or for more about how we work, check out other “Making FutureLearn” posts.