Background

When you first start using Ansible, you go from writing bash scripts that you upload and run on machines to running desired-end-state playbooks. You go from a write-once, read-never set of scripts to easily readable and updatable YAML. Life is good.

Fast forward to when you become an Ansible power user. You’re now:

- Writing playbooks that run on multiple distros

- Breaking down your complex Ansible project into multiple bite-sized roles

- Using variables like a boss: host vars, group vars, included variable files

- Tagging every possible task and role so you can jump to any execution point and control the execution flow

- Sharing your playbooks with colleagues, who have started contributing back

As you gain familiarity with Ansible, you inevitably end up doing more and more stuff-- which in turn makes the playbooks and roles that you’re creating and maintaining longer and a bit more complex. The side effect is that you may feel that development begins to move a bit slower as you manually take the time to verify variable permutations. When you find yourself in this situation, it’s time to start testing. Here’s how to get started by using Docker and Ansible to automatically test your Ansible roles.

Even if you haven’t reached Ansible guru status, or your playbooks are just starting out, testing can provide value because playbook development strongly parallels software development. Early testing can help shape your code: hard-coded values become variables and large monoliths become modules.

Tooling

The testing examples in this blog post are driven by Docker and TravisCI. Docker is a good fit for testing Ansible because:

- it quickly spawns a new instance with a known base state,

- that base state is described in a Dockerfile that lives near and with your project, and

- it is lightweight enough to allow running complex infrastructures on your laptop.

TravisCI is a free CI service for open source projects; it works with Docker and delivers results that are easily accessible to users. Users trust the TravisCI “build passing” badge, which increases the chances of the community adopting your well-tested role.

Docker and TravisCI are just two of several technologies that can meet this need.

I will not be covering existing common testing methodologies like syntax checking (see the Related Work section at the end); instead, I will cover how to construct role tests and continuously run them. A few publicly accessible roles already use the test methodologies detailed in this post (role-iptables and role-install_mongod), and I’ll be leaning on them heavily to show that this is a good way to test.

Spinning up Docker Images

We use Docker images to ensure a known base state against which to run our roles. For our testing needs, a Docker image can be thought of as just another Ansible host. We’ll use the Ansible role provision_docker to create the hosts; its interface is much like that of an Ansible cloud module. A dynamic inventory group is created for the newly minted hosts so that you can interact with them in your playbook.

Now’s a good time for an example:

```yaml
---
- name: Bring up docker containers
  hosts: localhost
  gather_facts: false
  vars:
    inventory:
      - name: iptables_host_1
        image: "chrismeyers/centos6"
  roles:
    - { role: provision_docker, provision_docker_inventory: "{{ inventory }}" }

- name: Hello world
  hosts: docker_containers
  tasks:
    - debug: msg="Hello World"
```

This playbook brings up one centos6 host, connects to the created host(s), and simply prints “Hello World”. Notice that the second play runs against the dynamic inventory group docker_containers; the group was created by the call to the role provision_docker. Also notice that provision_docker requires the parameter provision_docker_inventory: an array of dictionaries describing the Docker hosts. The image key is optional and defaults to centos6 when unspecified.
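For example, a mixed inventory might look like this (the host names and the centos7 image choice are illustrative):

```yaml
inventory:
  - name: default_host              # no image key: defaults to centos6
  - name: el7_host
    image: "chrismeyers/centos7"    # explicitly pick a different image
```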

Deep Dive Note: Base distro Docker images don’t normally come with ssh running, nor do they have their init systems enabled. I’ve hacked together Dockerfiles for centos6/7 and Ubuntu 12.04 (14.04 just works :) that run ssh and the init system, enough to ensure the Ansible service module works against them and that they can be sshed into:

- chrismeyers/centos6

- chrismeyers/centos7

- chrismeyers/ubuntu12.04

- ubuntu-upstart:14.04

The default login is root / docker.io.
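To sanity-check an image by hand, a minimal play along these lines (a sketch, assuming a container built from one of the images above is already up and in the docker_containers group) exercises both ssh connectivity and the init system:

```yaml
- name: Verify ssh and init work inside the container
  hosts: docker_containers
  tasks:
    - name: Run a command over ssh
      command: uptime
    - name: Exercise the Ansible service module against the init system
      service:
        name: sshd    # service name assumes an EL image; on Ubuntu it is "ssh"
        state: started
```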

A Role Under Test

Now that we know how the provision_docker interface works, let’s do something useful by putting a role under test. We’ll look to the role-iptables/test/main.yml example for inspiration.

```yaml
---
- name: Bring up docker containers
  hosts: localhost
  gather_facts: false
  vars:
    inventory:
      - name: iptables_host_1
        image: "chrismeyers/centos6"
  roles:
    - { role: provision_docker, provision_docker_company: 'ansible', provision_docker_inventory: "{{ inventory }}" }

- name: Run iptables Tests
  hosts: docker_containers
  vars:
    ports: [22, 1025, 1026]
  roles:
    - { role: iptables, iptables_allowed_tcp_ports: "{{ ports }}" }
  tasks:
    - name: Test ports
      command: 'echo "hello world"'
```

The first play creates a new host; the second invokes the role under test, iptables, and runs it on our newly created Docker container via the dynamic inventory group docker_containers.

With this structure, we can test role parameter permutations by invoking iptables multiple times with different values for the iptables_allowed_tcp_ports variable.
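For instance, a test playbook might invoke the role twice with different port lists (the port values here are illustrative):

```yaml
- name: Allow only ssh
  hosts: docker_containers
  roles:
    - { role: iptables, iptables_allowed_tcp_ports: [22] }

- name: Allow ssh plus an application port
  hosts: docker_containers
  roles:
    - { role: iptables, iptables_allowed_tcp_ports: [22, 8080] }
```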

There is some magic in how the role under test, iptables, gets invoked. To figure this out, we need to look at the directory structure of role-iptables.

```
role-iptables
├── README.md
├── .travis.yml
├── defaults
│   └── main.yml
├── handlers
│   └── main.yml
├── meta
│   └── main.yml
├── tasks
│   └── main.yml
├── templates
│   └── iptables.j2
├── test
│   ├── inventory
│   ├── main.yml
│   ├── requirements.yml
│   └── roles
│       └── iptables -> ../../../role-iptables
└── vars
    ├── RedHat-6.yml
    └── RedHat-7.yml
```

Note the symbolic link from role-iptables/test/roles/iptables to role-iptables/. This allows us to invoke iptables from role-iptables/test/main.yml, always against the newest, most up-to-date version of the role. Also note the requirements.yml file in the test directory: it lists the Galaxy role requirements, like provision_docker, that are needed to run our tests. We’ll get to the .travis.yml file a bit later in the post.
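A test/requirements.yml along these lines would do the job (the src value is an assumption; use whatever name the role is published under on Galaxy):

```yaml
# test/requirements.yml -- Galaxy roles the tests depend on
- src: chrismeyers.provision_docker
  name: provision_docker
```

Before a test run, the requirements can be installed next to the symlinked role with something like `ansible-galaxy install -r test/requirements.yml -p test/roles`.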

Spinning up Multiple Docker Containers

The usage of Docker really shines when you require multiple machines to test your roles. To showcase this, we’ll look at role-install_mongod, which requires at least 3 instances to test the MongoDB High Availability configuration.

```yaml
- name: Bring up docker containers
  hosts: all
  vars:
    inventory:
      - name: install_mongod_one
      - name: install_mongod_two
      - name: install_mongod_three
  roles:
    - role: provision_docker
      provision_docker_inventory: "{{ inventory }}"
  tasks:
    - name: Group primary
      add_host:
        hostname: "{{ item }}"
        groups: primary
      with_items:
        - install_mongod_one
      tags: provision_docker
    - name: Group secondaries
      add_host:
        hostname: "{{ item }}"
        groups: secondary
      with_items:
        - install_mongod_two
        - install_mongod_three
      tags: provision_docker

- name: Test Install Mongo
  hosts: primary:secondary
  vars:
    admin_user: "admin"
    admin_pass: "secret_squirrel"
    normal_user: "chris"
    normal_pass: "morocco_mole"
    db: "qq"
  pre_tasks:
    - name: "Build hosts file"
      #lineinfile: dest=/etc/hosts regexp='.*{{ inventory_hostname }}$' line="{{ hostvars[inventory_hostname].ansible_default_ipv4.address }} {{ inventory_hostname }}" state=present
      shell: 'echo "{{ hostvars[inventory_hostname].ansible_default_ipv4.address }} {{ inventory_hostname }}" >> /etc/hosts'
      when: hostvars[inventory_hostname].ansible_default_ipv4.address is defined
    - debug: msg="Running on host {{ inventory_hostname }}, {{ hostvars[inventory_hostname]['ansible_ssh_host'] }}"
  roles:
    - { role: install_mongod,
        install_mongod_admin_username: "{{ admin_user }}",
        install_mongod_admin_password: "{{ admin_pass }}",
        install_mongod_user_username: "{{ normal_user }}",
        install_mongod_user_password: "{{ normal_pass }}",
        install_mongod_user_database: "{{ db }}",
        install_mongod_bind_ip: '0.0.0.0',
        install_mongod_replset: tower,
        install_mongod_keyfile: '/etc/pki/mongo/keyfile',
        tags: mongo }
  tasks:
    - name: Test connection to mongo instances.
      command: "echo hi"
    - name: Test connection as admin
      command: "echo hi"
    - name: Test connection as user
      command: "echo hi"
    - name: Test user access to db
      command: "echo hi"
```

Again, we create a list variable, inventory, that expresses the set of hosts we want to bring up. In the first play we call provision_docker, which creates 3 hosts and adds them to the dynamic inventory group docker_containers. The tasks immediately after the role invocation re-organize the created hosts into the groups expected by the role under test, install_mongod.

The second play invokes the role under test, install_mongod, using the newly created Docker containers as hosts. The tasks block that follows shows how you can then run acceptance tests against the hosts.
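The echo placeholders in that tasks block could later be fleshed out into real checks; a sketch of what the first two might become, assuming the mongo shell client is installed on the containers:

```yaml
tasks:
  - name: Test connection to mongo instance
    command: mongo --host {{ inventory_hostname }} --eval "db.runCommand({ ping: 1 })"
  - name: Test connection as admin
    command: mongo admin --host {{ inventory_hostname }} -u {{ admin_user }} -p {{ admin_pass }} --eval "db.runCommand({ ping: 1 })"
```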

Note: The second play’s pre_task block adds the Docker container hostname and ips to the /etc/hosts file. This is required by Mongo HA mode as it expects to be able to reach the nodes by host name.

Continuous Integration with TravisCI + Docker

I’ll assume a familiarity with TravisCI: specifically, that you know how to link TravisCI to your GitHub repository, trigger builds based on git pushes, and that a .travis.yml file is needed. A .travis.yml file for testing a role has two main components:

- Install: docker-machine, ansible, ansible deps (docker-py), role requirements, test requirements

- Invoke the role test(s)!

Let’s use the provision_docker/.travis.yml file as an example.

```yaml
sudo: required
dist: trusty
language: python
python:
  - "2.7"
services:
  - docker
env:
  global:
    - PATH="/usr/bin:$PATH"

before_install:
  # Ansible doesn't play well with virtualenv
  - deactivate
  - sudo apt-get update -qq
  - sudo apt-get install -y -o Dpkg::Options::="--force-confnew" docker-engine

install:
  - sudo pip install docker-py
  # software-properties-common for ubuntu 14.04
  # python-software-properties for ubuntu 12.04
  - sudo apt-get install -y sshpass software-properties-common python-software-properties
  - sudo apt-add-repository -y ppa:ansible/ansible
  - sudo apt-get update -qq
  - sudo apt-get install -y ansible
  - sudo rm /usr/bin/python && sudo ln -s /usr/bin/python2.7 /usr/bin/python
  - ansible --version

script:
  - ansible-playbook -i test/inventory test/main.yml --syntax-check
  - cd test && make

notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/
```

The sudo: required line gives us a TravisCI VM instead of a Docker container. We disable the virtualenv because it causes problems for Ansible, and we lay down the newest docker-engine, because why not? In the install: block we install our dependencies: docker-py for the Ansible docker module, some Ansible dependencies, and Ansible itself. The python that comes with TravisCI is non-standard, so next we remove the existing python link and re-link to a more standard version. Finally, the script block invokes our test playbooks, main.yml and groups.yml, via Make. Voilà, continuous integration achieved!

Continuous Delivery with TravisCI + Galaxy

Shout out to Chris Houseknecht for integrating Galaxy and TravisCI! Successful runs in TravisCI will result in Galaxy publishing your newly tested role code. Let’s see how it’s done!

```yaml
notifications:
  webhooks: https://galaxy.ansible.com/api/v1/notifications/
```

First, navigate over to https://galaxy.ansible.com/intro#travis for instructions on how to setup the integration.

The webhooks line in the notifications block instructs TravisCI to call Galaxy after a run with the results of the test. Galaxy has enough context from the data in the POST request to make all the connections. Flim flam kazam, continuous delivery complete!

Related Work

This isn’t the first time Ansible testing has been proposed. Jeff Geerling has been showing users how to use TravisCI to test roles and syntax since 2014. Molecule is a testing framework for Ansible that leverages Vagrant images.

Future Work

I’ve personally been applying the role testing patterns described here to test my own Ansible playbook projects, one of which is a rather important Tower installer playbook. It’s been extremely helpful for ensuring that all of the different configuration matrices we support are properly tested. Testing our Tower installer Ansible code has also positively influenced the interfaces of the code itself.