This article is part of the Continuous Integration, Delivery and Deployment series.

In the previous article we explored unit tests as the first and fastest set of tests we should run. Now it's time to see whether our unit tests provide enough code coverage.

Code Coverage

Unit tests by themselves do not provide enough confidence unless we know how much of the code they actually cover. Having all tests pass while, for example, covering only 15% of the code cannot provide enough trust.

Mature teams might not need to measure code coverage. They might know from experience that their unit tests cover as much code as the project they're working on needs. Teams like that tend to have years of practice with Test-driven development (TDD). For the majority of us, however, tools that measure coverage are indeed a very useful addition to our tool-belt.



What is code coverage?

Code coverage is a measure we use to check how much of our source code was exercised by tests. The higher the code coverage, the greater the percentage of our code that has been tested.

An important thing to understand is that code coverage is not directly related to code quality. High code coverage does not guarantee high quality of the tests themselves. For example, even with 100% code coverage there is no guarantee that certain functionality was developed correctly, or that it was developed at all.

While code coverage can be used with any type of testing, it is most common and useful to tie it to unit tests. Integration and functional tests can be measured as well but the expectations from results must be different.

For more information please consult Code and Test Coverage.

Let's see how we can implement code coverage in our CI/CD tools.

For those of you who did not follow the previous articles in detail or those who failed to reproduce all exercises, please install Vagrant and clone the code from the repository TechnologyConversationsCD. To create a virtual machine with Jenkins and myFirstJob job that performs static analysis, please go to the directory servers/jenkinsWithTests inside the cloned repository and execute the following:

vagrant up

If this is not the first time you're starting this Vagrant VM ("vagrant up" is configured to run the Docker container only the first time), or if for some reason the Docker container is not running, it can be started with the following shortcut script.

/opt/start_docker_with_jenkins.sh

We can check which containers are running by executing:

docker ps

When finished, the virtual machine with Jenkins and the myFirstJob job will be up and running. Results can be seen by opening http://localhost:8080 in a browser.

We'll continue using Gradle as our build tool. In the last article we were executing the Gradle task check to run static analysis (CheckStyle, FindBugs and PMD) and tests. Now we'll add Code Coverage with JaCoCo.

To add JaCoCo code coverage, all we need to do is add the plugin to the build.gradle file.

apply plugin: 'jacoco'

The good news is that, once applied, the JaCoCo plugin hooks itself into the Gradle check and test tasks. Since we already have a Jenkins job that runs the Gradle check task, nothing more needs to be done to tell JaCoCo to collect statistics from our tests.
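As a sketch, a minimal build.gradle with the plugin applied might look like the following. The toolVersion block is optional and the version shown is only an example; by default the plugin picks a version on its own.

```groovy
// build.gradle (sketch) -- 'java' and 'jacoco' are standard Gradle plugins
apply plugin: 'java'
apply plugin: 'jacoco'

// Optional: pin the JaCoCo tool version instead of relying on the plugin default.
// The version below is only an example.
jacoco {
    toolVersion = '0.7.1.201405082137'
}
```

With this in place, running the Gradle check task produces the raw coverage data (an .exec file) under the build directory.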

However, JaCoCo needs two executions. The first collects data from our tests (already covered by the Gradle check task). The second generates an HTML report from the collected data. To do that, we should change the Gradle invocation in the Jenkins job myFirstJob by adding jacocoTestReport to the list of Gradle tasks. Now it should look like:

clean check jacocoTestReport
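If needed, the report formats can be tuned in build.gradle. The following is a sketch based on the standard options of the Gradle JaCoCo plugin; the defaults already produce an HTML report, so this block is optional:

```groovy
// build.gradle (sketch) -- optional tuning of the jacocoTestReport task
jacocoTestReport {
    reports {
        html.enabled = true  // human-readable report under build/reports/jacoco/test/html
        xml.enabled = false  // XML output, useful as input to other tools
        csv.enabled = false
    }
}
```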

To publish JaCoCo coverage reports, we'll need to install the JaCoCo plugin (Jenkins > Manage Jenkins > Manage Plugins > Available). Once the plugin is installed, we can add the post-build action called "Record JaCoCo coverage report". As "Path to exec files" put "**/**.exec" (JaCoCo saves coverage data to an .exec file). "Path to class directories" should be set to "**/classes" so that all classes previously compiled with Gradle are included. Finally, we should set "Path to source directories" to "**/src/main/java". The rest of the parameters are related to coverage health and depend on team preferences. I tend to consider code coverage of 90% acceptable. More than 95% is often counter-productive.

That's it. We're ready to build myFirstJob with JaCoCo code coverage included. Press the "Build Now" button in the left-hand menu. Once the job execution is finished, enter the build and click "Coverage Report". It will show the report with coverage of instructions, branches, complexity, lines, methods and classes.

Integrating JaCoCo into Jenkins is fairly easy. The hard part is getting to the point where code coverage through tests is reasonably high. Having a firm grasp of TDD and of functional and integration testing (my preference is BDD) is the key.

Travis does not play well with JaCoCo. Sure, it's easy to run it through Gradle in the same way as we did with Jenkins. However, there is no easy way to obtain and publish the results. In the case of test results, the inability to publish is not a problem since we are only interested in whether some test failed, and we can easily see from the logs which one it was. Since failed tests should be fixed as soon as possible, there is no real need for reports. Code coverage is different. Since in most cases the coverage obtained through tests is not 100%, the ability to see and review the results is important.

As with Travis, CircleCI does not require anything special to be done to run code coverage since it is part of the Gradle task check (assuming that we added JaCoCo Gradle plugin).

We already saw in the previous article that Circle is very similar to Travis in its approach to CI. The major difference was in stability and speed (it is several times faster than Travis). With code coverage we can explore another difference. Circle has the option to store build artifacts. With them we can accomplish a similar effect to what we did with Jenkins. We'll allow users to see the Gradle code coverage report together with other types of reports. All we need to do is modify our circle.yml file by adding jacocoTestReport and making sure that all reports are copied to the CircleCI artifacts.

[circle.yml]

test:
  override:
    - gradle check jacocoTestReport
  post:
    - cp -R build/reports/* $CIRCLE_ARTIFACTS

Full source code can be found in the circle.yml.

Once this change is pushed to the repository, CircleCI will store all our reports and make them available through the "Artifacts" option available in each build. By opening jacoco/test/html/index.html we can see the nice report JaCoCo generated for us.

Summary

It was fairly easy to set up code coverage and publish the results in both Jenkins and CircleCI. The only thing we had to do was tell them to run the Gradle task jacocoTestReport and where to find the results. Travis, on the other hand, does not have an option to publish reports. Without them, code coverage is next to useless.

Jenkins continues to shine with its plugin system. Circle's "Build Artifacts" was a pleasant surprise and another handy addition to its speed. The more I use it, the more advantages I see compared to Travis. Travis' price was unbeatable (public repositories are free). However, CircleCI has recently become free as well, while providing faster builds and several nice additions (like storing artifacts).

The next article in the series is an Introduction to Continuous Deployment. In it we explore the final stage of continuous delivery and deployment. The goal is to continuously deploy (or at least deliver) software. Continuous in this case means often, fast and with zero downtime.

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.

This book is about different techniques that help us architect software in a better and more efficient way with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero-downtime and ability to roll-back. It's about scaling to any number of servers, design of self-healing systems capable of recuperation from both hardware and software failures and about centralized logging and monitoring of the cluster.

In other words, this book envelops the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We'll go through many practices and, even more, tools.