I know a certain developer whose anxiety rises to disproportionate levels whenever his test coverage report comes back at less than 100%. When this happens, he fiddles with his tests until he achieves the glorious 100% statistic, after which he has earned not only bragging rights (shudder!) but also the approval of upper management. But let’s take a step back and examine this situation. Does 100% test coverage mean that we have achieved testing perfection? The answer may surprise you.

Test coverage (also referred to by some as code coverage) is one of many metrics commonly used to give a statistical picture of the state of the code written for a certain piece of software. Other typical metrics include[1]: cyclomatic complexity, lines of code, maintainability index, and depth of inheritance. Each of these, I would argue, deserves a book of its own.

Test coverage, in particular, is a measure of the extent to which the code in question is exercised by a particular test suite[2].
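Concretely, most tools report line coverage as a simple ratio: the number of executable lines hit during the test run, divided by the total number of executable lines. A 200-line module whose tests execute 150 of those lines scores 75%.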

The higher the test coverage, the greater the extent to which the code has been tested. This leads to the natural conclusion that higher is better. But how high is high enough?

To answer this question, let us examine the way tools that measure test coverage work:

Normally, the coverage tool monitors the code while the test suite runs, checking whether each written line of code is executed at least once. And this is logical: a line that is never executed during the run is effectively untested.
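To make this concrete, here is a minimal sketch of the idea in Python, using the interpreter's `sys.settrace` hook to record which lines of a hypothetical `absolute` function actually run. Real tools such as coverage.py are far more sophisticated, but the principle is the same:

```python
import sys

executed_lines = set()

def tracer(frame, event, arg):
    # Record the line number of every executed line in the function
    # under "test" - this bookkeeping is the essence of line coverage.
    if event == "line" and frame.f_code.co_name == "absolute":
        executed_lines.add(frame.f_lineno)
    return tracer

def absolute(n):
    if n < 0:
        return -n
    return n

sys.settrace(tracer)   # start monitoring, as a coverage tool would
absolute(5)            # our entire "test suite": one happy-path call
sys.settrace(None)     # stop monitoring

print(sorted(executed_lines))  # the 'return -n' line never appears
```

Running this prints the line numbers that were executed; the line containing `return -n` is missing, because our single "test" never took the negative branch. That missing line is exactly what a coverage report would flag as untested.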