In this tutorial, you’ll learn:

How to put NGINX in front of Jenkins

How to use Docker Compose to manage a set of containers as a single “Service” and simplify configuration

Docker Networking techniques to allow containers to talk to each other

PREVIOUSLY ON…

If you’re just joining us, check out the introductory post about our move to continuous delivery on League and how we found the tech stack to best solve our problems.

In our first tutorial, we began the process of putting Jenkins in a Docker container. In the follow-up, we learned how to use a Docker data volume to create a persistence layer. We created a couple of volumes that would preserve our Jenkins home directory so that plugins, jobs, and other Jenkins core data would persist between image rebuilds. We also made a volume to hold our logs. We discussed the differences between using a data volume versus just a volume mount from the host. Finally, we also learned how to move the Jenkins war file so it wasn’t in the Jenkins home directory and therefore wouldn’t persist.

At the end of the previous post, we had a perfectly functional Jenkins image and volume set that could save data. I finished with a few reasons why it wasn’t ideal, and this post will address one in particular: the lack of a handy web proxy in front of Jenkins. Once that proxy is in place, we’ll be running two containers to support our Jenkins environment. This blog will be a two-parter: it covers adding a web proxy container and learning how to use Compose, a handy Docker utility for managing multi-container applications.

At the end of this post you'll have a full-stack Jenkins master server. We're not quite at what I personally would consider production readiness, but the remaining bits (like creating your own Jenkins core image on your preferred OS) are more based on preference than technical requirement.

PROXY CONTAINERS AND YOU

We use NGINX as our proxy at Riot because it cleanly enforces things like redirects to HTTPS and masks Jenkins listening on port 8080 with a web server that listens on port 80. I won’t be covering setting up NGINX for use with SSL and HTTPS (good documentation can easily be found on the internet). Instead, I’ll go over how to get NGINX up and running in a simple proxy container, and how to set up proxying to a Jenkins server.

Here’s what we’ll cover in this section:

Creating a simple NGINX container.

Learning how to add files from our local folder into our image builds, like the NGINX configurations we want to use.

Using Docker networks to allow for easy networking between NGINX and Jenkins.

Configuring NGINX to proxy to Jenkins.

YOU GET AN OS AND YOU GET AN OS!

At Riot, we’re not regular Debian users; however, the CloudBees Jenkins image uses Debian as its default OS, inherited from its Java 8 image. But one of the powerful advantages of Docker is that the OS can be whatever you want - because the host doesn’t care! This is also a useful demonstration of “mixed mode” containers, which is the idea that if your application spans multiple containers, they don't all need to be the same OS. This has value if processes have better library or module support in specific Linux distributions. I’ll let you decide whether you think running an app that has a Debian/CentOS/Ubuntu spread is a good idea. This is just a demonstration of capability.

You’re free to modify this image into Ubuntu, Debian, or whatever flavor you want. I’m going to use CentOS 7, an OS that’s familiar to me. Keep in mind if you do change OS flavors, you will need to alter many commands/configurations to conform to how NGINX works in that OS environment.

SETTING UP A JENKINS MASTER FOLDER

Now that we’re going to have more than one Dockerfile, we should put each distinct image we want into its own subdirectory. This means you’ll need to move the Dockerfile we’ve been using so far into a new directory we’ll call jenkins-master. Run the following from the folder where the Dockerfile currently lives:

mkdir jenkins-master
mv Dockerfile jenkins-master

Now that our Dockerfile is in a subdirectory it won’t build with the docker build command I’ve had you use, because the context of the command is “.” (the current directory). You could, in theory, change directory into the subdirectory and run the build command - and it would work. However, I suggest getting used to targeting the directory where your Dockerfile lives for multi-Dockerfile applications. A simple twist of our build command does the trick:

docker build -t myjenkins jenkins-master/.

CREATING THE NGINX DOCKERFILE

We’re now ready to make the NGINX image. In your project root folder, make a new directory called jenkins-nginx to store the new Dockerfile. You should now have two directories:

1. jenkins-master

2. jenkins-nginx

Inside the jenkins-nginx directory, open a new file called Dockerfile in your preferred editor. Then, do the following:

1. Set the OS base image you want to use

2. Use Yum to install NGINX:

RUN yum -y update; yum clean all
RUN yum -y install http://nginx.org/packages/centos/7/noarch/RPMS/nginx-release-centos-7-0.el7.ngx.noarch.rpm; yum -y makecache
RUN yum -y install nginx-1.10.1

Note that we lock the NGINX version to 1.10.1. This is just a best practice: always pin your versions so a rebuild of your image can’t silently move you to an untested version.

3. Clean up the default NGINX configuration file, which we don’t need:

RUN rm /etc/nginx/conf.d/default.conf

4. Add our configuration files (we still need to make these):

COPY conf/jenkins.conf /etc/nginx/conf.d/jenkins.conf
COPY conf/nginx.conf /etc/nginx/nginx.conf

This is the first time we’ve used the COPY command. There’s also the related ADD command. For an exhaustive look at the difference between these commands, I recommend these two links: http://stackoverflow.com/questions/24958140/docker-copy-vs-add https://docs.docker.com/develop/develop-images/dockerfile_best-practices/

For our purpose, COPY is the best choice here. As the articles above suggest, we’re copying individual files and don’t need the features of ADD (tarball extraction, URL based retrieval, etc). As you might predict, we’re going to have some updates to the default nginx.conf file and a specific site configuration for Jenkins.
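To make the difference concrete, here’s a small illustrative Dockerfile fragment. The first line is the style we use in this tutorial; the ADD lines are hypothetical (site-content.tar.gz and the example.com URL don’t exist in this project) and are only there to show ADD’s extra behaviors:

```dockerfile
# COPY: a plain copy from the build context -- what we use in this tutorial.
COPY conf/nginx.conf /etc/nginx/nginx.conf

# ADD: same syntax, but with extra behaviors. A local tarball is
# auto-extracted at the destination, and a URL source is downloaded.
# (These sources are hypothetical, purely for illustration.)
ADD site-content.tar.gz /usr/share/nginx/html/
ADD https://example.com/robots.txt /usr/share/nginx/html/robots.txt
```

Because those extra behaviors can surprise you (a tarball you meant to copy verbatim gets unpacked instead), the common advice is to default to COPY unless you specifically want what ADD does.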

5. We want NGINX to listen on Port 80 so let’s make sure that port is exposed:

EXPOSE 80

6. Finish up by making sure NGINX is started:

CMD ["nginx"]

Save the file - but don’t build it yet! We have those two COPY commands in there, so we need to actually create the files we’re copying or the build will fail when it can’t find them.

CREATING THE NGINX CONFIGURATION

I’m going to provide the entire nginx.conf file here as an example, then go over the specific changes from the default nginx.conf file.

daemon off;
user nginx;
worker_processes 2;

error_log /var/log/nginx/error.log warn;
pid /var/run/nginx.pid;

events {
    worker_connections 1024;
    use epoll;
    accept_mutex off;
}

http {
    include /etc/nginx/mime.types;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log /var/log/nginx/access.log main;
    sendfile on;
    #tcp_nopush on;
    keepalive_timeout 65;
    client_max_body_size 300m;
    client_body_buffer_size 128k;

    gzip on;
    gzip_http_version 1.0;
    gzip_comp_level 6;
    gzip_min_length 0;
    gzip_buffers 16 8k;
    gzip_proxied any;
    gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;
    gzip_disable "MSIE [1-6]\.";
    gzip_vary on;

    include /etc/nginx/conf.d/*.conf;
}

Let’s go over the changes from the default:

1. To make NGINX not run as a daemon:

daemon off;

We do this because, by default, calling nginx at the command line forks NGINX into a background daemon. The foreground process then exits with code 0, which makes Docker think the main process has stopped, so it shuts down the container. You’ll find this happens a lot with applications not designed to run in containers. Thankfully for NGINX, this simple change solves the problem without a complex workaround.
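For what it’s worth, there’s a common alternative that accomplishes the same thing without touching nginx.conf: NGINX’s -g flag lets you pass a directive on the command line. A sketch of what that would look like in the Dockerfile (instead of, not in addition to, the daemon off; line in the config):

```dockerfile
# Equivalent alternative: keep the stock nginx.conf and pass the
# "daemon off;" directive via the command line instead.
CMD ["nginx", "-g", "daemon off;"]
```

Either approach works; baking it into nginx.conf (as we do here) keeps the CMD simple, while the -g form keeps the config file closer to stock.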

2. Upping the NGINX worker count to 2:

worker_processes 2;

This is something I do with every NGINX I set up. You can leave this at 1 if you want - it’s really a “tune as you see fit” option, and NGINX tuning is a topic for a post in its own right, so I can’t tell you what’s right for you. Very roughly speaking, this is how many individual NGINX worker processes you run, and the number of CPUs you’ll allocate is a good guide. Hordes of NGINX specialists will say it’s more complicated than that, and certainly inside a Docker container you could debate what to do here.
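If you’d rather not hand-pick a number, NGINX can also size the worker pool itself. A minimal config fragment showing the alternative (note that inside a container, “available CPUs” means whatever CPUs the container can see, which by default is all of the host’s):

```nginx
# Let NGINX spawn one worker per available CPU core instead of a fixed count.
worker_processes auto;
```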

3. Event tuning:

use epoll;

accept_mutex off;

Enabling epoll switches NGINX to a more efficient connection-processing method on Linux. We turn off accept_mutex for speed, because we don’t mind the wasted resources at low connection request counts.

4. Setting the proxy headers:

proxy_set_header X-Real-IP $remote_addr;

proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

This is the second setting (after turning daemon off) that’s a must-have for Jenkins proxying. This sets the headers so that Jenkins can interpret the requests properly, which helps eliminate some warnings about improperly set headers.

5. Client sizes:

client_max_body_size 300m;

client_body_buffer_size 128k;

You may or may not need this. Admittedly, 300 MB is a large maximum body size. However, we have users who upload files to our Jenkins server - some of which are just HPI plugins, while others are larger artifacts. We set this to help them out.

6. GZIP on:

gzip on;

gzip_http_version 1.0;

gzip_comp_level 6;

gzip_min_length 0;

gzip_buffers 16 8k;

gzip_proxied any;

gzip_types text/plain text/css text/xml text/javascript application/xml application/xml+rss application/javascript application/json;

gzip_disable "MSIE [1-6]\.";

gzip_vary on;

Here, we turn on gzip compression for speed.

And that’s it! Save this file, and make sure it’s in conf/nginx.conf where the Dockerfile expects it. The next step is to add the specific site configuration for Jenkins.

THE JENKINS CONFIGURATION FOR NGINX

Like the previous section, I’ll provide the entire conf file here and then walk through the settings that matter. You can find most of what you need in the Jenkins documentation. I tweaked the file because I found some parts unclear. You can see mine here:

server {
    listen 80;
    server_name "";

    access_log off;

    location / {
        proxy_pass http://jenkins-master:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto http;
        proxy_max_temp_file_size 0;

        proxy_connect_timeout 150;
        proxy_send_timeout 100;
        proxy_read_timeout 100;

        proxy_buffer_size 8k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
        proxy_temp_file_write_size 64k;
    }
}

There’s only one setting that really matters to what we’re doing here, and that’s the proxy_pass directive:

proxy_pass http://jenkins-master:8080;

This expects a host name of jenkins-master to exist, which will come from the magic of Docker networks (I’ll address this below). If you aren’t using Docker networks, this has to reference the IP/hostname of wherever your Jenkins container is running.

Interestingly enough, you can’t set this to localhost. That’s because each Docker container is its own localhost, and it’d think you’re referring to the host of the NGINX container, which isn’t running Jenkins on port 8080. To avoid using Docker networks, it’d have to point to the IP address of your Dockerhost (which should be your desktop/laptop where you’re working). While you know this information, try to imagine the challenge of figuring it out with a farm of Dockerhosts where your Jenkins container could get deployed to any of them. You’d have to write some automation to grab the IP, then edit the conf file. It can be done, but it's a hassle. Docker Networks make this much easier for us! To learn about the incredibly extensive options of Docker networks, see this starting article. Don’t worry though, I’ll cover the basics below.

Now that we have our configuration files made, let’s go ahead and build our NGINX image to make sure everything works:

docker build -t myjenkinsnginx jenkins-nginx/.

MAKE A DOCKER NETWORK SO NGINX CAN TALK TO JENKINS

We want to create a network between our two containers so that they can easily find each other. One reason they’ll be able to easily find each other is that Docker networks offer something called “automatic service discovery,” which is fancy speak for creating DNS names on the network that match the container names you create. This is why our NGINX config file references jenkins-master. Docker will handle making that DNS entry for us when our container attaches to the network. That’s pretty awesome - let’s begin.

Making a network is as easy as making a data volume:

docker network create --driver bridge jenkins-net

The network name is jenkins-net. Kinda cute. But why did we use a bridge driver? In layman’s terms we want to “bridge” these two containers together. If you want to know more about all the network drivers available, check out the Docker network driver documentation. The bridge network is suitable for our needs, so let’s move on.

Like data volumes, seeing the list of networks you have is also easy and intuitive:

docker network ls

And removing them if you need to is also a walk in the park (you don’t need to run this):

docker network rm jenkins-net

BUILD THE NGINX IMAGE AND LINK IT TO THE JENKINS IMAGE

We have all the pieces we need now. We built our NGINX image and we have a network we can attach to. We need to make sure our containers are attached to the network. Go ahead and stop your running jenkins-master container and remove it (don’t worry about your data - we persisted it, remember?):

1. docker stop jenkins-master

2. docker rm jenkins-master

We’re going to restart our Jenkins master container, but attach it to the network this time:

docker run -p 8080:8080 -p 50000:50000 --name=jenkins-master \
  --network jenkins-net \
  --mount source=jenkins-log,target=/var/log/jenkins \
  --mount source=jenkins-data,target=/var/jenkins_home \
  -d myjenkins

See that --network jenkins-net command? That’s all it takes! Our Jenkins master container is now on its very own private Docker network.
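If you want to see the service discovery at work before wiring up NGINX, here’s an optional spot-check. It assumes a running Docker daemon and the jenkins-master container started above; the centos:7 image is just a convenient throwaway container on the same network:

```shell
# Show the network's details, including which containers are attached.
docker network inspect jenkins-net

# Resolve the jenkins-master name from inside another container on the
# same network -- this is the DNS entry NGINX will rely on.
docker run --rm --network jenkins-net centos:7 getent hosts jenkins-master
```

The second command should print the container’s network-internal IP next to the jenkins-master name, which is exactly what our proxy_pass directive depends on.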

Now we can finally start the NGINX container and attach it to the jenkins-net network so it can reach jenkins-master. If you haven’t already built the NGINX image, do that first:

docker build -t myjenkinsnginx jenkins-nginx/.

docker run -p 80:80 --name=jenkins-nginx --network jenkins-net -d myjenkinsnginx

Note that startup order doesn’t matter here. The network exists even if the containers don’t, so either one can attach at any time. That can be very useful operationally.

Testing that everything works is simple. Just point your browser at your Docker host - http://localhost on Docker for Mac or Windows, or your docker-machine IP if you’re on Docker Toolbox - and everything should work as normal!

If it doesn’t work, something may be blocking port 80 on your machine. (This can happen, especially in OSX.) Make sure your firewalls are turned off, or at least accepting traffic on port 80. If for some reason you can’t clear port 80, stop and remove the jenkins-nginx container and re-run it, but use -p 8000:80 instead to map host port 8000 to the container’s internal port 80. Then go to http://localhost:8000 and see if that works.
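Spelled out, that fallback looks like this (assuming the container and image names used in this tutorial, and a running Docker daemon):

```shell
# Port 80 is blocked on the host? Re-map the proxy to host port 8000.
docker stop jenkins-nginx
docker rm jenkins-nginx
docker run -p 8000:80 --name=jenkins-nginx --network jenkins-net -d myjenkinsnginx
# Then browse to http://localhost:8000
```

Note that only the host side of the mapping changes - NGINX inside the container still listens on port 80, so no config edits are needed.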

JENKINS IMAGE CLEANUP

Now that we have NGINX listening on port 80, we don’t need the Jenkins image or container to expose port 8080. Let’s remove that exposure by removing the port option when we start the container. We’ll do one more shutdown and restart. We don’t need to restart NGINX however, because it’s using internal Docker DNS to find jenkins-master on the jenkins-net network we created.

docker stop jenkins-master
docker rm jenkins-master
docker run -p 50000:50000 --name=jenkins-master \
  --network jenkins-net \
  --mount source=jenkins-log,target=/var/log/jenkins \
  --mount source=jenkins-data,target=/var/jenkins_home \
  -d myjenkins

Refresh your browser on http://localhost.

Nice and clean, and now errant users can’t even reach Jenkins on port 8080. Instead, they must go through your NGINX proxy to reach it.

We’ve learned how to set up an NGINX proxy and how to use Docker networks to connect two containers - something that would otherwise be somewhat awkward to express in the NGINX configuration settings. We’ve also learned that using a different container base OS in one of our containers has no impact on our multi-container app.

This is a good breaking point. As always, things are updated online in the Github tutorial. This session can be found here: https://github.com/maxfields2000/dockerjenkins_tutorial/tree/master/tutorial_04. You’ll note the makefile has been updated again to account for the NGINX container and preserves proper start ordering.

DOCKER COMPOSE AND JENKINS

We're now running the ideal two-container setup: an NGINX proxy container, the Jenkins app container, our own container network, and data volumes to house all of our Jenkins data. We've discovered that managing two containers with fairly extensive network and volume options is becoming a bit of a chore. You could always use a makefile to manage all this stuff, but here’s a tip - Docker offers another handy tool called “Compose.” With this post, we’ll add it to the mix.

This section covers the following subject:

Using Compose to manage a multi-container application

WHAT IS COMPOSE

Compose started life as a tool called Fig. Docker defines it as “A tool designed for running complex applications with Docker.” You can find its full documentation here: https://docs.docker.com/compose/. Compose will handle building our images and maintaining responsibility around what to stop and start when the application is rerun. It can even help us make data volumes and networks if they don’t exist already.

Let’s say I want to take our two-container app, rebuild the Jenkins container, and rerun the app - perhaps to upgrade the Jenkins version. Here’s the list of commands I’d have to run:

docker stop jenkins-master
docker rm jenkins-master
docker build -t myjenkins jenkins-master/.
docker run -p 50000:50000 --name=jenkins-master \
  --network jenkins-net \
  --mount source=jenkins-log,target=/var/log/jenkins \
  --mount source=jenkins-data,target=/var/jenkins_home \
  -d myjenkins

With a properly configured Compose, that becomes:

docker-compose -p jenkins down
docker-compose build
docker-compose -p jenkins up -d

This is similar in behavior to the simple makefile I provide with most of these tutorials. The trade-off for using Compose is that you have to maintain yet another configuration file along with your Dockerfiles.

This section is provided on its own because setting up and using Docker Compose is a personal choice. I recommend it as a method of self documenting startup dependencies and relationships that fits into the overall Docker ecosystem.

PRE-REQUIREMENTS

If you’re using Docker for Mac or Docker for Windows, Compose is part of the default installation.

If you are running on Linux, install Compose by following the directions here: https://docs.docker.com/compose/install/

STEP 1: SETTING UP YOUR COMPOSE CONFIG FILE

Compose uses a YAML configuration file which makes it pretty straightforward to read and understand. We need to add an entry for every image we want Compose to manage and give it the specifics.

In your project root directory, create a new file called docker-compose.yml. You can use another file name, but by default Compose will look for this one.

STEP 2: VERSIONS, VOLUMES AND NETWORKS

Let’s edit the docker-compose.yml file and throw in our foundational basics. First add a version reference at the top:

version: '3'

Why do we need this? Well, Docker Compose has had a long, storied life through three generations so far. This lets Docker Compose know we’re using version 3’s options and APIs. We’re using this version because it’s the latest at the time of this writing.

Next let’s go ahead and add our volumes:

volumes:
  jenkins-data:
  jenkins-log:

Then we can add our network after that:

networks:
  jenkins-net:

These entries don’t say much because we’re using the default bridge networking and volume options - they just need to be declared. Our Compose file doesn’t do much yet; for that we need to add our images.

Fun fact! If you don’t define a network, Compose will automatically create a network for you. I prefer to explicitly name my networks so I make one anyway. In fact Docker Compose will create a default network for any container that doesn’t have one (all containers get attached to a network one way or another). You don’t have to worry about this, but you should be aware of the behavior.

STEP 3: JENKINS MASTER IMAGE

Continue editing the docker-compose.yml file and add the following:

services:
  master:
    build: ./jenkins-master
    ports:
      - "50000:50000"
    volumes:
      - jenkins-log:/var/log/jenkins
      - jenkins-data:/var/jenkins_home
    networks:
      - jenkins-net

First we added a services section where we define our running images and containers. We’ll add the NGINX image in this section once I’m done explaining the other bits.

We then created a master service for our jenkins-master image. We gave it a build directive that points to the path where the jenkins-master Dockerfile lives. We also specified what ports to listen on.

Then we added the volumes we need. This is the same as the --mount options we had in our docker run command. Lastly we specified the networks we wanted to be on. Remember - we created the volumes and the network in the first part of the docker-compose.yml file creation.

Compose automatically handles the dependencies for you. If your networks or volumes don’t exist it will create them. It’s also smart enough not to destroy them between stops and starts (or in Compose nomenclature, ups and downs).

STEP 4: NGINX IMAGE

Let’s get the final piece in place. Add the following entries into your services section after the master entry:

  nginx:
    build: ./jenkins-nginx
    ports:
      - "80:80"
    networks:
      - jenkins-net

Not much new here to talk about. In fact it’s a bit simpler because there’s no data volume for our NGINX service. Builds and ports work the same way. We do have a bit of a problem though. Because we’re using Compose it’s going to name our jenkins-master container according to its naming standards. If you remember when we set up the NGINX configuration files, we referenced jenkins-master by name in the jenkins.conf. With Compose, the new name will be jenkins_master_1. So we need to make a change.

1. Open up the jenkins.conf file at jenkins-nginx/conf/jenkins.conf.

2. Edit the line that says:

proxy_pass http://jenkins-master:8080;

To

proxy_pass http://jenkins_master_1:8080;

3. Save the file.

STEP 5: PUTTING IT ALL TOGETHER

The entire docker-compose.yml file should now look like this:

version: '3'
services:
  master:
    build: ./jenkins-master
    ports:
      - "50000:50000"
    volumes:
      - jenkins-log:/var/log/jenkins
      - jenkins-data:/var/jenkins_home
    networks:
      - jenkins-net
  nginx:
    build: ./jenkins-nginx
    ports:
      - "80:80"
    networks:
      - jenkins-net
volumes:
  jenkins-log:
  jenkins-data:
networks:
  jenkins-net:
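Before tearing anything down, it can be worth asking Compose to parse the file back at you. This assumes you have the docker-compose binary installed and are running from the project root where docker-compose.yml lives:

```shell
# Sanity check: parse the Compose file and print the resolved configuration.
# YAML mistakes (bad indentation, smart quotes, etc.) surface here, before
# you build or run anything.
docker-compose config
```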

Now we just need to build it. First, let’s make sure there are no traces of the former containers used in previous posts. If you’ve already cleaned up you can skip this set of steps.

At a command line:

docker stop jenkins-nginx
docker rm jenkins-nginx
docker stop jenkins-master
docker rm jenkins-master
docker volume rm jenkins-data
docker volume rm jenkins-log
docker network rm jenkins-net

Note: we have to lose our data and containers to move to the new model. That kind of sucks. In future posts I’ll talk about how to backup this data, but if you really need to keep this data you can use the docker cp command I talked about in post #3 to back it up.

Now let’s build and run things with Compose!

docker-compose build

docker-compose -p jenkins up -d

That’s it! You’ll see there’s a -p option, as this is where I give the Compose “project” a name, jenkins. This is why the services in the Compose file don’t mention jenkins. Docker Compose uses the project name as a prefix to all the containers it starts. If you don’t provide this, it derives it from the folder you’re in, which could be just about anything. Using a project parameter guarantees consistency and is a good best practice.

Also note the -d option, which has Docker Compose run the containers in the background (detached), just like the -d option on docker run. You’ll note the output indicates the start order and names. If you want to see what's running, Docker Compose has a handy feature for that too.

docker-compose -p jenkins ps

This gives a nicely formatted list of the application’s containers. Note that you still have to give it a project name so it knows what to look for! This helps it filter only the containers from your app, even if other things are running on the host. Very handy.

However, you’ve probably noticed the container names are not what you’re used to. You should see:

jenkins_master_1

jenkins_nginx_1

This is a Compose naming standard: [project]_[service]_[instance]. This is why I had you change the NGINX config DNS reference. I personally find it a bit less intuitive but with that comes power. Compose can create additional instances of containers if you need them, hence the instance naming scheme. Remember that these instance name changes mean any commands you’ve gotten used to so far have to change, like docker exec or docker cp to reference the specific instances.
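A trivial sketch of that naming scheme, with the project name we passed via -p, the service name from docker-compose.yml, and instance 1:

```shell
# Compose container names follow [project]_[service]_[instance].
project="jenkins"      # from: docker-compose -p jenkins ...
service="master"       # the service key in docker-compose.yml
instance="1"           # first instance of this service

container_name="${project}_${service}_${instance}"
echo "${container_name}"
```

So a command like docker exec that previously targeted jenkins-master would now target jenkins_master_1 instead.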

For your own edification you should also take a look at what Compose did for your volumes and networks. Feel free to run the following commands:

docker network ls

docker volume ls

STEP 6: MAINTENANCE USING COMPOSE

Compose is smart enough to know about your data volume and preserve it. Try this:

Create a test job in your Jenkins instance (http://localhost). Note that you may have to run through the admin setup page again with a password. Remember the instance names have now changed!

docker-compose -p jenkins down

Make a simple edit to your Jenkins master dockerfile, like changing your LABEL maintainer name and save it.

docker-compose build

docker-compose -p jenkins up -d

Go back to your Jenkins instance and note your test job is still there.

You’ll note that Docker Compose tears down your network (because it can be easily recreated) but doesn’t remove your volume. When it comes up again it recreates the network and doesn’t bother with the volume. Smart tools are awesome!

By default, docker-compose down will remove everything but the volumes. If you need it to also remove volumes try:

docker-compose -p jenkins down -v

CONCLUSION

As always, all updates discussed in this post can be found at my git repository here: https://github.com/maxfields2000/dockerjenkins_tutorial/tree/master/tutorial_05

We learned that Compose can simplify our command management for starting, stopping, and building a multi-image application - all for the low price of one more configuration file. This file comes with the benefit of being self documenting in defining the relationships between the containers we are running, their networks, and volumes.

Compose is a great tool that is clearly opinionated. I’d add to its potential list of drawbacks that it needs you to always specify a consistent project name. In return, Compose can manage most basic operations, including cleanup.

Whether or not you use Compose will depend on both how much you like its opinionated approaches and just how complex your Docker apps get. I personally like the self-documenting nature of the docker-compose.yml file and the simplification of the command structure. You’ll note I still provide a makefile with a simplified set of commands, as I also like a system that reduces my chances of forgetting a boiler-plate parameter like -p or -d. That is entirely a personal choice.

At this point my basic tutorials are done! You’ve now got your very own Jenkins in a box, fronted by NGINX, with persistence that’s portable onto any dockerhost. The next posts will be considerably more advanced.

WHAT’S NEXT

We still have three big areas to cover in more advanced topics: backups, build slaves, and total ownership of your Docker images. I’m going to explore totally owning your Jenkins images to remove dependencies on public repositories first. Mainly because this can be a big deal in dependency management - perhaps you’re not a fan of Debian-based containers and would rather use Ubuntu or CentOS. It’s also a good primer for making your own Dockerfiles from scratch, something we’ll be doing when we get to build slaves as containers. See you next time!