This article describes one possible way to set up a Continuous Integration, Delivery or Deployment pipeline. We'll use Jenkins, Docker, Ansible and Vagrant to set up two servers. One will be used as a Jenkins server and the other as an imitation of production servers. The first will check out, test and build applications, while the second will be used for deployment and post-deployment tests.



You'll need Vagrant and Git installed. The rest of the tools will be set up as part of the exercises in this article.

CI/CD Environment

We'll set up Jenkins environment using Vagrant and Ansible. Vagrant will create a VM with Ubuntu and run the bootstrap.sh script. The only purpose of that script is to install Ansible. Once that is done, Ansible will make sure that Docker is installed and Jenkins process is running.

As everything else in this article, Jenkins itself is packed as a Docker container and deployed with Ansible. Please consult the Continuous Deployment: Implementation with Ansible and Docker article for more info.

If you haven't already, please clone the GitHub repo jenkins-docker-ansible. Once the repo is cloned, we can fire up Vagrant for the cd machine.

If you run into issues with Ansible complaining about executable permissions, try modifying the Vagrantfile's synced_folder entry from config.vm.synced_folder ".", "/vagrant" to config.vm.synced_folder ".", "/vagrant", mount_options: ["dmode=700,fmode=600"]. You'll find an example in the Vagrantfile.

```bash
git clone https://github.com/vfarcic/jenkins-docker-ansible.git
cd jenkins-docker-ansible
vagrant up cd
```

This might take a while when run for the first time (each subsequent run will be much faster), so let's use this opportunity to go through the setup while waiting for the VM creation and configuration to finish.

Two key lines in the Vagrantfile are:

```ruby
...
cd.vm.provision "shell", path: "bootstrap.sh"
cd.vm.provision :shell, inline: 'ansible-playbook /vagrant/ansible/cd.yml -c local'
...
```

The first one runs the bootstrap.sh script that installs Ansible. We could use the Vagrant Ansible Provisioner, but that would require Ansible to be installed on the host machine. That is an unnecessary dependency, especially for Windows users who would have a hard time setting up Ansible. Moreover, we'll need Ansible inside the VM to perform the deployment from the cd to the prod VM.
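The bootstrap.sh script itself isn't reproduced in this article. Given its single stated purpose, a minimal sketch might look like the following; the package names and the PPA are assumptions, so check the repository for the real script:

```bash
#!/usr/bin/env bash
# Hypothetical sketch of bootstrap.sh: its only job is to install
# Ansible on the Ubuntu VM. Check the repo for the actual script.
set -e
if ! command -v ansible-playbook >/dev/null 2>&1; then
    apt-get update
    apt-get install -y software-properties-common
    apt-add-repository -y ppa:ansible/ansible
    apt-get update
    apt-get install -y ansible
fi
```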

Once bootstrap.sh is executed, the cd.yml Ansible playbook is run.

```yaml
- hosts: localhost
  remote_user: vagrant
  sudo: yes
  roles:
    - java
    - docker
    - registry
    - jenkins
```

It will run the java, docker, registry and jenkins roles. Java is a Jenkins dependency required for running slaves. Docker is needed for building and running containers. Everything else will run as Docker processes; no other dependency, package or application will be installed directly. The registry role runs a Docker Registry. Instead of using the public one on hub.docker.com, we'll push all our containers to the private registry running on port 5000. Finally, the jenkins role is run. This one might require a bit more explanation.
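In practice, using a private registry boils down to addressing images by the registry's host and port. A minimal sketch, assuming Docker is installed and the registry container is listening on 192.168.50.91:5000:

```bash
# Tag an image with the private registry's address and push it there.
docker tag books-service 192.168.50.91:5000/books-service
docker push 192.168.50.91:5000/books-service
# Any host that can reach the registry can then pull the same image.
docker pull 192.168.50.91:5000/books-service
```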

Here's the list of tasks in the jenkins role.

```yaml
- name: Directories are present
  file: path="{{ item }}" state=directory
  with_items: directories

- name: Config files are present
  copy: src='{{ item }}' dest='{{ jenkins_directory }}/{{ item }}'
  with_items: configs

- name: Plugins are present
  get_url: url='https://updates.jenkins-ci.org/{{ item }}' dest='{{ jenkins_directory }}/plugins'
  with_items: plugins

- name: Build job directories are present
  file: path='{{ jenkins_directory }}/jobs/{{ item }}' state=directory
  with_items: jobs

- name: Build jobs are present
  template: src=build.xml.j2 dest='{{ jenkins_directory }}/jobs/{{ item }}/config.xml' backup=yes
  with_items: jobs

- name: Deployment job directories are present
  file: path='{{ jenkins_directory }}/jobs/{{ item }}-deployment' state=directory
  with_items: jobs

- name: Deployment jobs are present
  template: src=deployment.xml.j2 dest='{{ jenkins_directory }}/jobs/{{ item }}-deployment/config.xml' backup=yes
  with_items: jobs

- name: Container is running
  docker: name=jenkins image=jenkins ports=8080:8080 volumes=/data/jenkins:/var/jenkins_home

- name: Reload
  uri: url=http://localhost:8080/reload method=POST status_code=302
  ignore_errors: yes
```

First we create the directories where Jenkins plugins and slaves will reside. In order to speed up building containers, we're also creating the directory where Ivy files (used by SBT) will be stored on the host. That way containers will not need to download all dependencies every time we build Docker containers.

Once the directories are created, we copy the Jenkins configuration files and a few plugins.

Next are Jenkins jobs. Since all jobs are going to do the same thing, we have two templates (build.xml.j2 and deployment.xml.j2) that will be used to create as many jobs as we need.

Finally, once the Jenkins job files are on the server, we make sure that the Jenkins container is up and running.

Full source code with Ansible Jenkins role can be found in the jenkins-docker-ansible repository.

Let's go back to the Jenkins job templates. One template is for building and the other for deployment. Build jobs will clone the code repository from GitHub and run a few shell commands.

Following is the key part of the build.xml.j2 template:

```bash
sudo docker build -t 192.168.50.91:5000/{{ item }}-tests docker/tests/
sudo docker push 192.168.50.91:5000/{{ item }}-tests
sudo docker run -t --rm -v $PWD:/source -v /data/.ivy2:/root/.ivy2/cache 192.168.50.91:5000/{{ item }}-tests
sudo docker build -t 192.168.50.91:5000/{{ item }} .
sudo docker push 192.168.50.91:5000/{{ item }}
```

Each {{ item }} from above will be replaced with values from Ansible variables. Since all build jobs follow the same procedure, we can use the same template for all of them and simply provide a list of values. In this article, the variables in main.yml are as follows:

```yaml
jobs:
  - books-service
```

When Ansible is run, each {{ item }} will be replaced with books-service. The jobs variable can have as many items as we need. They don't need to be added all at once but gradually, according to our needs.

Later on it could look something like:

```yaml
jobs:
  - books-service
  - authentication-service
  - shopping-cart-service
  - books-ui
```

The commands from the template, once rendered by Ansible, are as follows.

```bash
sudo docker build -t 192.168.50.91:5000/books-service-tests docker/tests/
sudo docker push 192.168.50.91:5000/books-service-tests
sudo docker run -t --rm -v $PWD:/source -v /data/.ivy2:/root/.ivy2/cache 192.168.50.91:5000/books-service-tests
sudo docker build -t 192.168.50.91:5000/books-service .
sudo docker push 192.168.50.91:5000/books-service
```

First we build the test container and push it to the private registry. Then we run the tests. If there are no failures, we build the books-service container and push it to the private registry as well. From here on, books-service is tested, built and ready to be deployed.
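The substitution step itself can be illustrated with a small stand-in: here sed plays the part of Ansible's Jinja2 templating, applied to one line from build.xml.j2 above (illustration only, not how Ansible actually renders templates):

```bash
# Illustration only: mimic, with sed, the substitution that Ansible's
# template module performs with Jinja2 when rendering build.xml.j2.
TEMPLATE='sudo docker build -t 192.168.50.91:5000/{{ item }} .'
ITEM='books-service'
echo "$TEMPLATE" | sed "s/{{ item }}/$ITEM/"
# Prints: sudo docker build -t 192.168.50.91:5000/books-service .
```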

Before Docker, all my Jenkins servers ended up with a huge number of jobs. Many of them were different due to the variety of frameworks, languages and libraries required to build all the applications. Managing a lot of different jobs easily becomes tiring and error-prone. And it's not only the jobs that become complicated very fast; managing slaves and the dependencies they need often requires a lot of time as well.

With Docker comes simplicity. If we can assume that each project has its own test and application containers, all jobs can do the same thing: build the test container and run it; if nothing fails, build the application container and push it to the registry; finally, deploy it. All projects can be handled in exactly the same way as long as each of them provides its own Docker files. Another advantage is that there's nothing to install on the servers besides Docker, which runs the containers.

Unlike build jobs, which are always the same (build with the specification from the Dockerfile), deployments tend to get a bit more complicated. Even though applications are immutable and packed in containers, there are still a few environment variables, links and/or volumes to be set. That's where Ansible comes in handy. We can keep every Jenkins deployment job the same, with only the name of the Ansible playbook differing. Deployment jobs simply run the Ansible role that corresponds to the application we're deploying. It's still fairly simple in most cases. The difference compared to deploying applications without Docker is huge. With Docker we need to think only about data (the application and all its dependencies are packed inside containers); without it we would need to think about what to install, what to update and how those changes might affect the rest of the applications running on the same server or VM. That's one of the reasons why companies tend not to change their technology stack and, for example, still stick with Java 5 (or worse).
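The deployment.xml.j2 template isn't shown in this article. Based on the description above, its essential command presumably reduces to something like the following sketch; the playbook path and inventory file are assumptions:

```bash
# Hypothetical core of a generated deployment job: run the Ansible playbook
# that corresponds to the application, targeting the prod server.
ansible-playbook /vagrant/ansible/{{ item }}.yml -i /vagrant/ansible/hosts/prod
```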

As an example, the books-service Ansible tasks are listed below.

```yaml
- name: Directory is present
  file: path=/data/books-service/db state=directory

- name: Latest container is pulled
  shell: sudo docker pull 192.168.50.91:5000/books-service

- name: Container is absent
  docker: image=192.168.50.91:5000/books-service name=books-service state=absent

- name: Container is running
  docker: name=books-service image=192.168.50.91:5000/books-service ports=9001:8080 volumes=/data/books-service/db:/data/db state=running
```

We're making sure that the directory where data will be stored is present, pulling the latest version of the container, removing the running container and starting the new one.

Let's get back to the cd VM we started creating at the beginning of this article! If the vagrant up cd command has finished executing, the whole VM with Jenkins, Docker and the Registry is up and running.

Now we can open http://localhost:8080 and (almost) use Jenkins. The Ansible tasks did not create credentials, so we'll have to do that manually.

- Click Manage Jenkins > Manage Nodes > CD > Configure.
- Click the Add button in the Credentials section.
- Type vagrant as both username and password and click the Add button.
- Select the newly created key in the Credentials section.
- Click Save and, finally, the Launch slave agent button.

This could probably be automated as well but, for security reasons, I prefer doing this step manually.

Now the CD slave is launched. It's pointing to the cd VM we created with Vagrant and will be used for all our jobs (even for deployments that will be done on a separate machine).

We are ready to run the books-service job that was explained earlier. From the Jenkins home page, click the books-service link. The first build has already started (it can also be started manually by pressing Build Now). Progress can be seen in the Build History section, and the Console Output inside the build (in this case #1) can be used to see the logs. Building Docker containers for the first time can take quite some time. Once this job is finished, it will trigger the books-service-deployment job. However, we still don't have the production environment VM, so the Ansible playbook run by the Jenkins job will fail to connect to it. We'll get back to this soon. At the moment we're able to check out the code, run tests, build Docker containers and push them to the private registry.

The major advantage of this kind of setup is that there is no need to install anything besides Docker on the cd server, since everything runs inside containers. There will be no headaches provoked by the installation of all kinds of libraries and frameworks required for the compilation and execution of tests. There will be no conflicts between different versions of the same dependency. Finally, the Jenkins jobs are very simple, since all the logic resides in the Docker files kept in the repositories of the applications that should be built, tested and deployed. In other words, it's a simple and painless setup that will be easy to maintain no matter how many projects/applications Jenkins will need to manage.

If naming conventions are used (as in this example), creating new jobs is very easy. All that needs to be done is to add new variables to the Ansible configuration file ansible/roles/jenkins/defaults/main.yml and run vagrant provision cd, or ansible-playbook /vagrant/ansible/cd.yml -c local directly from the cd VM.

Here's how to apply changes to the CD server (including adding new Jenkins jobs).

[from the host directory where this repo is cloned]

```bash
vagrant provision cd
```

or

```bash
vagrant ssh cd
ansible-playbook /vagrant/ansible/cd.yml -c local
exit
```

The books-service job is scheduled to pull the code from the repository every 5 minutes. This consumes resources and is slow. A better setup is to use a GitHub hook: with it, the build would be launched almost immediately after each push to the repository. More info can be found on the GitHub Plugin page. A similar setup can be done for almost any other type of code repository.
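The five-minute schedule lives in the generated job's config.xml as an SCM trigger; a hedged excerpt (a sketch, not the exact contents of build.xml.j2) could look like this:

```xml
<!-- Poll the repository every five minutes;
     a GitHub hook would make polling unnecessary. -->
<triggers>
  <hudson.triggers.SCMTrigger>
    <spec>H/5 * * * *</spec>
  </hudson.triggers.SCMTrigger>
</triggers>
```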

Production Environment

In order to simulate a situation closer to reality, the production environment will be a separate VM. At the moment we don't need anything installed on that VM. Later on, Jenkins will run Ansible, which will make sure that the server is set up correctly for each application we deploy. We'll create this environment in the same way as the previous one.

[from the host directory where this repo is cloned]

```bash
vagrant up prod
```

Unlike the cd VM that required setup, prod has only the Ubuntu OS. No packages or additional dependencies are required.

Now, with the prod environment up and running, all that's missing is to generate SSH keys and import them to the cd VM.

[from the host directory where this repo is cloned]

```bash
vagrant ssh prod
ssh-keygen # Simply press enter to all questions
exit
vagrant ssh cd
ssh-keygen # Simply press enter to all questions
ssh-copy-id 192.168.50.92 # Password is "vagrant"
exit
```

That's about it. Now we have a production VM where we can deploy applications. We can go back to Jenkins (http://localhost:8080) and run the books-service-deployment job. If the books-service job did not finish before you reached this part, please wait until it's over; books-service-deployment will start automatically. When finished, the service will be up and running on port 9001.

Let's put a few entries into our recently deployed books-service.

[from the host directory where this repo is cloned]

```bash
vagrant ssh prod
curl -H 'Content-Type: application/json' -X PUT -d '{"_id": 1, "title": "My First Book", "author": "John Doe", "description": "Not a very good book"}' http://localhost:9001/api/v1/books
curl -H 'Content-Type: application/json' -X PUT -d '{"_id": 2, "title": "My Second Book", "author": "John Doe", "description": "Not as bad as the first book"}' http://localhost:9001/api/v1/books
curl -H 'Content-Type: application/json' -X PUT -d '{"_id": 3, "title": "My Third Book", "author": "John Doe", "description": "Failed writers club"}' http://localhost:9001/api/v1/books
exit
```

Let's check whether the service returns the correct data. Open http://localhost:9001/api/v1/books in your browser. You should see the three books that were previously inserted with curl.

Our service has been deployed and is up and running. Every time there is a change in the code, the same process will be repeated. Jenkins will clone the code, run tests, build the container, push it to the registry and, finally, run that container in the destination server.

VM creation, provisioning, building and deployment took a lot of time. However, from now on most of the artifacts (Docker images, Ivy dependencies, etc.) are already downloaded, so each subsequent run will be much faster. Only new Docker images will be created and pushed to the registry. From this moment on, speed is what matters.

Summary

With Docker we can explore new ways to build, test and deploy applications. One of the many benefits of containers is the simplicity that comes from their immutability and self-sufficiency. There is no reason any more to have servers with a huge number of packages installed. No more going through the hell of maintaining different versions required by different applications, or spinning up a new VM for every single application that should be tested or deployed.

But it's not only server provisioning that got simplified with Docker. The ability to provide a Docker file for each application means that Jenkins jobs are much easier to maintain. Instead of having tens, hundreds or even thousands of jobs, each specific to the application it is building, testing or deploying, we can simply make all (or most) Jenkins jobs the same: build with the Dockerfile, test with the Dockerfile and, finally, deploy the Docker container(s) with Ansible (or some other tool like Fig).

We didn't touch the subject of post-deployment tests (functional, integration, stress, etc.) that are required for successful Continuous Delivery and/or Deployment. We're also missing a way to deploy the application with zero downtime. Both will be the subject of upcoming articles, where we'll continue where we left off and explore in more depth what should be done once the application is deployed.

Source code for this article can be found in jenkins-docker-ansible repository.

The DevOps 2.0 Toolkit

If you liked this article, you might be interested in The DevOps 2.0 Toolkit: Automating the Continuous Deployment Pipeline with Containerized Microservices book.

This book is about different techniques that help us architect software in a better and more efficient way, with microservices packed as immutable containers, tested and deployed continuously to servers that are automatically provisioned with configuration management tools. It's about fast, reliable and continuous deployments with zero downtime and the ability to roll back. It's about scaling to any number of servers, designing self-healing systems capable of recuperating from both hardware and software failures, and about centralized logging and monitoring of the cluster.

In other words, this book covers the whole microservices development and deployment lifecycle using some of the latest and greatest practices and tools. We'll use Docker, Kubernetes, Ansible, Ubuntu, Docker Swarm and Docker Compose, Consul, etcd, Registrator, confd, Jenkins, and so on. We'll go through many practices and even more tools.