I think you know the feeling. Everything is working perfectly in your project: the code looks fine, tests are green, and stakeholders were impressed by the last demo. Then comes a week when your code repository (say, Bitbucket) changes its IP address and your DevOps team makes some improvements to your CI. You’re not surprised that, after all this, some builds turn red. But in a few days everything should be back to normal, right? The problem starts when it isn’t.

Almost exactly the same situation occurred in our latest Angular project. Tests started to fail randomly every third or fourth run. While investigating, we were so lost that we started blaming our testing library (i.e. ng-test-runner) for introducing non-determinism into our environment. We were, of course, simply looking for a scapegoat; the real reason was completely different. Keep reading, and you’ll discover how we tracked down the problematic parts of our code. Perhaps the same applies to your project?

FIRST principle

Before we start the story, let’s recall the fundamental FIRST principles of tests from Robert C. Martin’s “Clean Code”. Tests should be:

Fast — because they’re run again and again. You don’t want to waste time waiting for tests to finish.

Independent — they should not depend on each other.

Repeatable — results should not vary between different executions.

Self-validating — each test should have a single boolean output: it passes or it fails.

Timely — they should be written at the right time; in this case, just before the implementation.

As you can see, randomly failing tests violate the “Repeatable” principle. Breaking even one of these rules makes developers lose trust in their tests, which is the worst-case scenario.

Angular CLI setup

As mentioned before, our project was Angular-based (specifically Angular 6) and was generated by the Angular CLI. The CLI configures Karma as the test runner and Jasmine as the test framework. By default, the generated karma.conf.js looks like this:
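The generated file is not reproduced in this post, so here is a sketch of what the CLI produced for an Angular 6 project; the exact contents vary between CLI versions:

```typescript
// karma.conf.js — sketch of the Angular CLI default (contents vary by CLI version)
module.exports = function (config) {
  config.set({
    basePath: '',
    frameworks: ['jasmine', '@angular-devkit/build-angular'],
    plugins: [
      require('karma-jasmine'),
      require('karma-chrome-launcher'),
      require('karma-jasmine-html-reporter'),
      require('karma-coverage-istanbul-reporter'),
      require('@angular-devkit/build-angular/plugins/karma'),
    ],
    reporters: ['progress', 'kjhtml'],
    port: 9876,
    colors: true,
    logLevel: config.LOG_INFO,
    autoWatch: true,
    browsers: ['Chrome'],
    singleRun: false,
  });
};
```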

We added a small amendment to run the tests in a random order:
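The amendment is a single option passed through to Jasmine via karma-jasmine’s client configuration (excerpt; the rest of the file stays unchanged):

```typescript
// karma.conf.js — excerpt; everything else in config.set({...}) stays as generated
config.set({
  client: {
    jasmine: {
      random: true, // run the specs in a random order on every execution
    },
  },
});
```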

One may ask: why did we do it? The answer is as follows: we needed to find these little bastards, and a random order makes false positives and false negatives appear much more often.

Why is a false positive or a false negative so dangerous? A false positive means you have a green test, but the code is actually broken. A false negative is the other way around: the code is correct, but the test still indicates a problem. If either of these occurs, you’ll soon lose confidence in your tests!

Let’s look at some examples below:
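The original specs are not reproduced in this post, so here is a minimal, self-contained sketch of the problem. All names (`MemoryStorage`, `UserService`, the spec functions) are illustrative; `MemoryStorage` stands in for the browser’s SessionStorage so the snippet runs anywhere, and the plain functions stand in for the Jasmine specs named after them.

```typescript
// `MemoryStorage` is a stand-in for the browser's SessionStorage.
class MemoryStorage {
  private data = new Map<string, string>();
  getItem(key: string): string | null {
    const value = this.data.get(key);
    return value !== undefined ? value : null;
  }
  setItem(key: string, value: string): void {
    this.data.set(key, value);
  }
}

// Shared across specs, exactly like SessionStorage is shared within a browser tab.
const storage = new MemoryStorage();

class UserService {
  // Pretend this value came back from an HTTP call.
  fetchUser(name: string): void {
    storage.setItem('currentUser', name); // state leaks outside the spec!
  }
  get currentUser(): string | null {
    return storage.getItem('currentUser');
  }
}

// Stand-in for the spec "currentUser should be empty when no data is fetched"
function specCurrentUserEmpty(): boolean {
  return new UserService().currentUser === null;
}

// Stand-in for the spec "currentUser should return data that is fetched from http"
function specCurrentUserFetched(): boolean {
  const service = new UserService();
  service.fetchUser('john');
  return service.currentUser === 'john';
}
```

Run `specCurrentUserEmpty` first and both checks pass; run `specCurrentUserFetched` first and the leaked `'john'` entry makes `specCurrentUserEmpty` fail.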

Can you spot where the problem lies? UserService persists its state in SessionStorage. So if the “currentUser should be empty when no data is fetched” spec runs before “currentUser should return data that is fetched from http”, everything works fine. But in the reverse order, “currentUser should be empty when no data is fetched” fails.

We encountered an almost identical situation in our project. With a fixed order of tests everything worked fine, but when we added random-order execution, several tests began failing intermittently.

Custom Jasmine reporter

In the described scenario, we wanted a repeatable failure for two reasons. Firstly, we wanted to find the bug. Secondly, after fixing it, we wanted to be sure we had a repeatable green. The karma-jasmine documentation explains how to set a specific seed and thus get a specific order. But for us the question was: for which seed do our tests fail? This seed wasn’t printed to the console, nor to any log.

After some googling, we found Oliver Peate’s gist, which I paste here with some comments. It’s a simple Jasmine reporter that prints the seed of the current test run.
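A condensed sketch of such a reporter (Jasmine hands run details, including the randomization seed, to a reporter’s `jasmineDone` callback; see the gist itself for the exact original):

```typescript
// jasmine-seed-reporter.js — a custom Jasmine reporter that logs the seed
class JasmineSeedReporter {
  // Jasmine calls jasmineDone once, after all specs have finished,
  // passing details of the whole run.
  jasmineDone(runDetails: any): void {
    const order = runDetails && runDetails.order;
    if (order && order.random) {
      // This is the seed you can later pin in karma.conf.js.
      console.log(`Jasmine seed: ${order.seed}`);
    }
  }
}
```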

Once we have a custom reporter, we only need to configure Karma to use it. This is illustrated below:
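One way to wire it in (a sketch of our setup, not the only option): serve the reporter file to the browser through Karma’s files list, and have the file register itself with Jasmine by ending with `jasmine.getEnv().addReporter(new JasmineSeedReporter());`.

```typescript
// karma.conf.js — excerpt; everything else in config.set({...}) stays unchanged
config.set({
  files: [
    // Loaded in the browser after the Jasmine framework itself, so the
    // reporter can call jasmine.getEnv().addReporter(...) at load time.
    'jasmine-seed-reporter.js',
  ],
});
```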

With the above configuration, we can run our tests and check problematic seeds. As we see on the console, our test fails for seed 00516.

Hence, we can set 00516 as the seed in karma.conf.js, ensuring we have a repeatable failure for our tests. Thanks to this, we can now fix our tests with the following code:
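A sketch of both steps (the spec file name is hypothetical; the seed value is the one printed by the reporter):

```typescript
// karma.conf.js — excerpt: pin the seed of the failing run
config.set({
  client: {
    jasmine: {
      random: true,
      seed: '00516', // value taken from the jasmine-seed reporter output
    },
  },
});

// user.service.spec.ts — the actual fix: make the specs order-independent
beforeEach(() => {
  sessionStorage.clear(); // drop any state leaked by previously run specs
});
```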

Conclusion

After the described issue, we left the jasmine-seed reporter configured in our karma.conf.js. I must admit that on two or three occasions it was very helpful, so I definitely recommend this solution for any of your projects.

You can find the repository illustrating this example at the following link: https://github.com/marmatys/angular-random-failing-tests

I’ve even asked Oliver whether he plans to publish the reporter on npmjs. He prefers to keep his build scripts simple, but he gives us a free hand should we wish to do it ourselves. So if you think this small piece of code is worth publishing on npmjs, please leave us a comment.