An integration test suite is supposed to exercise the user stories of your service as a whole. If your service is composed of several micro-services, there is no need to tie the test suite to any single one of them.

Still, we chose to keep each test suite close to the people using it: the CSS regression suite is written in JavaScript and lives in the frontend repository, while the integration suite is written in Python and lives in the backend repository.

I like the Behave Python framework, because it allows us to write features and scenarios using regular English sentences (or any language, really). It can be used before the development of a feature to agree with non-technical people on expected behavior, and re-used afterwards to validate the implementation.

Behave-django empowers us by letting us create a test database and custom fixtures with Django’s manage.py tool, and by running the integration test suite in the same fashion as our unit test suite.
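Concretely, once behave-django is installed and added to INSTALLED_APPS, the integration suite runs through the same management-command entry point as the unit tests (a sketch of the commands, assuming a standard Django project layout):

```shell
# Integration suite, run by behave-django through Django's manage.py:
python manage.py behave

# Unit test suite, run the usual way:
python manage.py test
```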

Splinter is like browser testing on steroids: a nice wrapper around Selenium that lets us write succinct, easy-to-read tests for browser operations.

Let’s have another look at the Contact scenario from above, matched with its step implementations:
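The original step implementations are not reproduced here, but a hypothetical sketch (step wording, selectors, URLs and field names are illustrative, not the project’s actual code) looks roughly like this:

```python
# features/steps/contact.py -- hypothetical step implementations for a
# Contact scenario; selectors, URLs and field names are illustrative only.
from behave import given, when, then

# The matching feature file would contain something like:
#   Scenario: Visitor sends a message through the contact form
#     Given I am on the contact page
#     When I fill in and submit the contact form
#     Then I am redirected to the thank-you page
#     And a notification email is sent

@given('I am on the contact page')
def visit_contact_page(context):
    context.browser.visit('https://xpc.gounite.test/contact')

@when('I fill in and submit the contact form')
def fill_and_submit_form(context):
    context.browser.fill('name', 'Jon')
    context.browser.fill('email', 'jon@example.test')
    context.browser.fill('message', 'Hello there!')
    context.browser.find_by_css('form [type=submit]').click()

@then('I am redirected to the thank-you page')
def check_redirection(context):
    assert context.browser.url.endswith('/thank-you')

@then('a notification email is sent')
def check_email_sent(context):
    # behave-django wraps each scenario in Django's test machinery,
    # so the in-memory email outbox is available:
    from django.core import mail
    assert len(mail.outbox) == 1
```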

Jon the reader: “That’s it? 25 lines of code to visit a page, fill a form, submit it, verify the redirection page and even check for emails sent?”

Hey Jon, nice to meet you! Yes, it’s worth installing a couple of extra libraries such as behave-django and Splinter, right?

Jon the reader: “Why did you add the @fixture.browser.chrome decorator to the feature?”

Splinter and Selenium offer several browser integrations, so you could decide to run several browsers if you’d like. Since we already have a separate CSS regression suite and do not care about design in our integration tests, a single browser is enough for us.

Here’s how we defined our decorator:
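Our exact definition isn’t shown here, but behave’s fixture mechanism suggests a shape like the following sketch (the hub URL and hostnames are assumptions based on the selenium-hub setup described below):

```python
# features/environment.py -- a sketch of a browser fixture wired to a
# tag; the command_executor URL and hostname are assumptions.
from behave import fixture, use_fixture
from splinter import Browser

@fixture
def browser_chrome(context):
    # Remote webdriver: the browser runs in the selenium-hub/chrome
    # containers, not in the python container executing the tests.
    context.browser = Browser(
        'remote',
        command_executor='http://selenium-hub:4444/wd/hub',
        browser='chrome',
    )
    yield context.browser
    context.browser.quit()

def before_tag(context, tag):
    # Features tagged @fixture.browser.chrome get a browser for free.
    if tag == 'fixture.browser.chrome':
        use_fixture(browser_chrome, context)
```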

Jon the reader: “I just noticed that Splinter has a Chrome webdriver. Why are you using the remote one? And what is this selenium-hub?”

Here’s the thing: from our tests, we’re connecting to the domain name https://xpc.gounite.test/, which doesn’t exist!

A Django test runner would normally serve requests at http://localhost:43093 or some other random port. But we have several micro-services: our React project’s JS/CSS files must be downloaded first, and once the SPA has started, queries are sent to the API server.

While we use Kubernetes in production, we simplified the developers’ local infrastructure with docker-compose, and we use an nginx reverse-proxy docker container to map and rewrite requests to the appropriate docker container.

Let’s look at a couple of interesting configuration points.

    reverse-proxy:
      networks:
        default:
          aliases:
            - api.gounite.test
            - xpc.gounite.test

Aliases are a docker-compose feature that maps the aliased domain names to the container. Any container on the docker network will be able to reach the reverse-proxy container through api.gounite.test.

In addition, our requests are not sent from the python container that runs our integration tests, but from a chrome container holding the browser we use to exercise the user stories.



    server {
        listen 8080;
        resolver 127.0.0.11;
        set $proxy_pass http://backend:8001;
    }

The resolver directive tells nginx where to send DNS queries to resolve the domain name api.gounite.test. Docker’s embedded DNS resolver always uses the same IP (127.0.0.11) in any docker network, so it looks like we can safely hardcode it in the configuration. By querying the Docker DNS resolver, nginx learns that backend corresponds to the IP of the backend container within the same network, and it can forward requests from the chrome container to the Django test runner.
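Setting the upstream in a variable matters here: when proxy_pass is given a variable, nginx resolves the hostname at request time through the configured resolver, instead of once at startup (when the backend container may not be up yet). The variable would then be consumed in a location block along these lines (a sketch, not our exact configuration):

```nginx
location / {
    # A variable forces nginx to resolve "backend" at request time
    # via the resolver above, rather than at startup.
    proxy_pass $proxy_pass;
}
```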

Here’s how it looks in the end

One final tip regarding debugging within such environment:

Python developers will be familiar with ipdb for setting breakpoints in their code and easily investigating any frame. For our docker container to stop execution on a breakpoint and give us access to a Python shell, we need to restart the container with the --service-ports flag. Since this flag can only be used with docker-compose run (not exec), and we absolutely want to keep the backend container name (versus backend_run_1 or similar), we must first remove the currently running container to avoid a naming conflict.

We also need the scenario we’re debugging to be tagged with @wip, so that Behave does not capture stdout or logging output and stops at the first failure.
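In the feature file, that tag sits right above the scenario (a hypothetical example, reusing the Contact scenario):

```gherkin
@wip
Scenario: Visitor sends a message through the contact form
  Given I am on the contact page
  ...
```

Behave’s --wip option then runs only the scenarios tagged @wip, with output capture disabled and stop-on-first-failure enabled, which is exactly what a breakpoint session needs.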

$ docker-compose up -d # Regular dev env

$ docker-compose rm -sf backend && docker-compose run --service-ports --name backend backend

We can also debug the frontend, since we have a chrome browser! From your Python shell, send browser actions such as context.browser.visit(), and watch what happens inside your chrome instance with a VNC viewer (note: you need image: selenium/node-chrome-debug instead of image: selenium/node-chrome in your docker-compose config).

$ vncviewer 127.0.0.1:5900 # password 'secret'

Here’s what it looks like: