For a long time, we used TravisCI and Coveralls for executing lint checkers and tests and tracking our code coverage. These are fine tools but we've recently switched to CircleCI and CodeCov. This is our default setup for projects.

Let's start with a breakdown of our tools. We believe developers should run lints and tests locally during development. Our Continuous Integration (CI) process runs these checks automatically in a clean environment after commits are pushed or a pull request is created. The team-wide check provides an extra level of validation that we are on the right track.

Linting

Our code linting tools ensure code stays clean and consistent. Clean code is crucial for long term productivity.

As the mess builds, the productivity of the team continues to decrease, asymptotically approaching zero. –Robert Martin in What is Clean Code

We start with flake8 using the flake8-quotes plugin.

We prefer the Pinax style of using double quotes over single quotes. There is plenty of debate over this style preference, and I'll leave the quote holy war out of this post. I'll just say flake8-quotes keeps our quote style consistent, which is incredibly awesome.

Next we apply isort.

Just like quotation style, everyone has their own preference when it comes to sorting imports. We use Timothy Crosley's isort to help maintain consistency. This automation is so nice we revised our import sorting preferences to fit what isort can do for us.

I usually can very quickly give up on personal taste if the computer can just do it for me. – Brian Rosner

We maintain tool configurations at the top of each project's tox.ini file:

```ini
[flake8]
ignore = E265,E501
max-line-length = 100
max-complexity = 10
exclude = */migrations/*
inline-quotes = double

[isort]
multi_line_output = 3
known_django = django
sections = FUTURE,STDLIB,DJANGO,THIRDPARTY,FIRSTPARTY,LOCALFOLDER
include_trailing_comma = True
line_length = 60
```

Then we check our code before committing:

```shell
flake8 <package>
isort --recursive --check-only --diff <package> -sp tox.ini
```

One great feature of isort is when you omit the --check-only flag, isort fixes up your imports and saves the changes automatically. Robots FTW!
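To make the section ordering concrete, here is a stdlib-only toy sketch of the classification isort performs. The module names and the abbreviated stdlib set are illustrative only; the real isort knows the full standard library and reads its configuration (including `known_django`) from tox.ini.

```python
# Toy sketch of isort-style section classification; not isort itself.
STDLIB = {"os", "sys", "json", "datetime", "re"}  # abbreviated for illustration

def classify(module):
    """Return a section rank: FUTURE < STDLIB < DJANGO < THIRDPARTY."""
    top = module.split(".")[0]
    if top == "__future__":
        return 0
    if top in STDLIB:
        return 1
    if top == "django":  # mirrors our known_django setting
        return 2
    return 3  # everything else treated as third-party in this sketch

def sort_imports(modules):
    # Sort by section first, then alphabetically within each section.
    return sorted(modules, key=lambda m: (classify(m), m))

print(sort_imports(["requests", "django.db", "os", "__future__"]))
# → ['__future__', 'os', 'django.db', 'requests']
```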

Testing

Making tests run fast is important. When a test suite takes too long to complete, developers avoid running tests locally. Writing tests efficiently is equally important. If tests are difficult to construct, developers shy away from writing them.

Our main mission is delivering a quality product. Well-tested code is both more robust and simpler to refactor, so we place a high value on the ease of writing comprehensive test suites. The good news is that a solid testing regime actually speeds up development over the life of a product, as summarized by our lead quality engineer:

Festina lente, my friend. Take a bit more time, be thorough, plan well, write tests, etc. The end result is faster development. – Graham Ullrich

This Latin motto, festina lente, is more of a philosophy. Translated, it means "make haste, slowly." This is how we approach crafting software solutions for ourselves and our clients. It absolutely works.

We leverage django-test-plus by Frank Wiles and Revolution Systems to make our test writing faster and more efficient. We use factory_boy to solve a huge headache when building and maintaining test suites in complex projects: the creation of fixture data to drive test scenarios. The combination of django-test-plus and factory_boy substantially reduces test authoring overhead.
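To show the kind of headache factory_boy removes, here is a stdlib-only, hand-rolled sketch of the fixture pattern it automates. The `UserFactory` class and its fields are hypothetical; factory_boy gives you this declaratively, plus lazy attributes, sub-factories, and ORM integration.

```python
import itertools

class UserFactory:
    """Hand-rolled stand-in for what a factory_boy factory provides:
    unique, override-friendly fixture data on demand."""
    _counter = itertools.count(1)

    @classmethod
    def create(cls, **overrides):
        n = next(cls._counter)
        attrs = {"username": f"user{n}", "email": f"user{n}@example.com"}
        attrs.update(overrides)  # any field can be pinned per-test
        return attrs

first = UserFactory.create()                     # unique defaults: user1
second = UserFactory.create(email="qa@test.io")  # user2, custom email
```

Each test gets distinct, valid data without maintaining a pile of static fixture files, which is exactly the maintenance burden that grows painful in complex projects.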

Coverage

To make sure we are testing the things that need to be tested we employ Ned Batchelder's popular coverage.py tool. We believe coverage is a valuable tool, but its utility is sometimes misunderstood, so we wrote a blog post: 5 Reasons You Should Care About Coverage. Read that post if you're on the fence about using coverage for your projects.

We have config sections in tox.ini for coverage as well:

```ini
[coverage:run]
source = <package>
omit =
    <package>/tests/*
    <package>/migrations/*
branch = true
data_file = .coverage

[coverage:report]
omit =
    <package>/tests/*
    <package>/migrations/*
exclude_lines =
    coverage: omit
    pragma: no-cover
show_missing = True
```
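The exclude_lines entries are regular expressions matched against each source line; lines that match are dropped from the report. A quick stdlib illustration of that matching (a sketch of the idea, not coverage.py's actual code):

```python
import re

# Mirrors the exclude_lines patterns from our tox.ini.
EXCLUDE_PATTERNS = [r"coverage: omit", r"pragma: no-cover"]

def is_excluded(line):
    """True if a line matches one of our exclusion regexes."""
    return any(re.search(pattern, line) for pattern in EXCLUDE_PATTERNS)

is_excluded("raise NotImplementedError  # pragma: no-cover")  # True
is_excluded("return total / count")                           # False
```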

Services

Having all these tools running locally is great. But even better is to have robots in the cloud executing lints, tests, and coverage tracing whenever commits get pushed to your repository. This can help catch the human error of forgetting to run some of these tools. It can also catch issues where a local dev's environment has drifted from what's in source control (e.g., pip installed a package the code depends on but never added it to requirements.txt, or added a migration but forgot the git add).

Continuous Integration

We use CircleCI to run our flake8 and isort checks and, if those pass, proceed to running our tests with coverage. Finally, if on master, we go ahead and deploy to Eldarion Cloud on successful lints and tests.

This all typically happens within seconds to just a few minutes depending on how complex the test suite is.

We have this process set up to integrate with GitHub and Slack so that everyone is always up to date. We use GitHub Flow in our development process, and our pull requests get annotated with the results from CircleCI (and CodeCov, which we will discuss in the next section). This aids in the peer review process, letting our engineers focus on what the robots can't.

Our configuration for CircleCI follows this general pattern:

```yaml
# .circleci/config.yml
version: 2
jobs:
  test:
    docker:
      - image: circleci/python:3.6.2
        environment:
          PGUSER: root
          PGHOST: 127.0.0.1
      - image: circleci/postgres:9.6.2
        environment:
          POSTGRES_USER: root
          POSTGRES_DB: <same name as in your settings.py default>
          POSTGRES_PASSWORD: ""
    working_directory: ~/repo
    steps:
      - checkout
      - restore_cache:
          keys:
            - v1-dependencies-{{ checksum "requirements.txt" }}
            - v1-dependencies-
      - run:
          name: install dependencies
          command: |
            python3 -m venv venv
            . venv/bin/activate
            pip install -r requirements-dev.txt
      - save_cache:
          paths:
            - ./venv
          key: v1-dependencies-{{ checksum "requirements.txt" }}
      - run:
          name: lint code
          command: |
            . venv/bin/activate
            flake8 <project package>
            isort --recursive --check-only --diff <project package> -sp tox.ini
      - run:
          name: run tests
          command: |
            . venv/bin/activate
            python manage.py makemigrations --check --dry-run
            coverage run manage.py test
            codecov --token=<your private project token>
      - store_artifacts:
          path: test-reports
          destination: test-reports
  deploy:
    docker:
      - image: buildpack-deps:trusty-scm
    working_directory: ~/repo
    steps:
      - checkout
      - deploy:
          name: Eldarion Cloud
          command: |
            bin/ec/auth.sh
            bin/ec/deploy.sh staging
            bin/ec/slack-notify.sh staging
workflows:
  version: 2
  test-and-deploy:
    jobs:
      - test
      - deploy:
          requires:
            - test
          filters:
            branches:
              only: master
```

The workflows option in CircleCI adds a lot of power. Basically what's happening here is:

1. We run lint checkers against the code.
2. Then we execute tests.
3. If those two items pass, the deploy step will execute, but only if on the master branch.

The deploy step executes some shell scripts in our repo that deploy the code to an instance we have set up for getting feedback from the product owner as the last step in our development process. What this means in practice is that as soon as a peer review is complete and the pull request is merged, the lints and tests run and the code is automatically deployed.

Code Coverage Tracking

We use CodeCov to take the coverage reports uploaded from CircleCI and produce reports that help us keep a pulse on how we are doing coverage-wise. But more important than that, and what I think is CodeCov's killer feature, is the diff reporting.

Each Pull Request is updated with a report on how much the diff is covered in tests. That is, out of the changes submitted by the developer, how much of those changes have been at least executed by a test suite.
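The metric itself is simple. As a sketch of the idea (not CodeCov's implementation), diff coverage is just the fraction of a change's lines that some test executed:

```python
def diff_coverage(changed_lines, executed_lines):
    """Percent of a diff's changed lines executed by the test suite."""
    changed = set(changed_lines)
    if not changed:
        return 100.0  # an empty diff is vacuously covered
    hit = changed & set(executed_lines)
    return 100.0 * len(hit) / len(changed)

diff_coverage([10, 11, 12, 13], [10, 11])  # 50.0
```

A low diff-coverage number on a pull request is an immediate, reviewable signal that new code landed without tests, even if overall project coverage barely moved.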

Conclusion

This stack of tools and services enables us to remain vigilant about code quality.

By encapsulating a lot of quality metrics into machine-readable rules, we can let the computer validate the code for us. With tests that exercise the code in a repeatable fashion, we can move more aggressively to implement new features or refactor existing code for performance or maintainability.

Finally, this stack helps us take ideas to launch faster, because we can stay focused on the value proposition of the software.