This post is based on material from Docker in Practice, available on Manning’s Early Access Program.

The Problem

At our company we had (another) problem. Our Jenkins server had apparently shrunk from what seemed like an over-spec’d monstrosity running a few jobs a day to a weedy-looking server that couldn’t cope with the hundreds of check-in-triggered jobs that were running 24/7.

This approach clearly wouldn’t scale. Eventually servers died under the load as more and more jobs ran in parallel. Naturally this happened exactly when lots of check-ins were coming in and the heat was on, so it was a high-visibility problem. We added more servers as a stop-gap, but that was simply putting more fingers in the dike.

The Solution

Fortunately there’s a neat way around this problem.

Developer laptops tend to be quite powerful, so it’s only natural to consider using them. Wouldn’t it be great if you could allocate jobs to those multi-core machines that mostly lie idle while developers read Hacker News, and say “awesome” a lot?

Traditionally, achieving this with VMs would be painful: the overhead of allocating resources to them and running them would be unworkable on most machines.

With Docker it becomes much easier: containers can be set up to function as dynamic Jenkins slaves that are relatively unobtrusive, and the Jenkins Swarm plugin allows Jenkins to provision jobs to those slaves dynamically.

Demo

Here’s a simple proof of concept to demonstrate the idea. You’ll need Docker installed, natch, but nothing else.

$ docker run -d \
    --name jenkins_server \
    -p 8080:8080 \
    -p 50000:50000 \
    dockerinpractice/jenkins
$ echo "Let's wait a couple of minutes for jenkins to start" && sleep 120
$ docker run -d \
    --hostname jenkins_swarm_slave_1 \
    --name jenkins_swarm_slave_1 \
    dockerinpractice/jenkins_swarm_slave

Navigate to http://localhost:8080 in your browser.

Check that the swarm client has registered OK on the Build Executor Status page.

Now set up a simple Jenkins job. I set one up to run “echo done” as a shell build step. Then tick “Restrict where this project can be run” and set the label expression to “swarm”.

Run the job, then check the console output: you should see that it ran the job on the swarm container.

There you have it! A dynamic Jenkins slave running “anywhere”, making your Jenkins jobs scalable across your development effort.

Under the Hood

The default startup script for the jenkins_swarm_slave image is here.

This sets up these environment variables through defaults and introspection:

HOST_IP=$(ip route | grep ^default | awk '{print $3}')
JENKINS_SERVER=${JENKINS_SERVER:-$HOST_IP}
JENKINS_PORT=${JENKINS_PORT:-8080}
JENKINS_LABELS=${JENKINS_LABELS:-swarm}
JENKINS_HOME=${JENKINS_HOME:-$HOME}
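To see how these defaults behave, here’s a small shell sketch, independent of Docker, of the ${VAR:-default} idiom and the awk extraction used for HOST_IP. The sample route line is made up for illustration; on a real machine it comes from ip route:

```shell
# ${VAR:-default}: use the variable's value if it is set, otherwise the default.
unset JENKINS_PORT
echo "${JENKINS_PORT:-8080}"     # default applies: prints 8080

JENKINS_PORT=12345
echo "${JENKINS_PORT:-8080}"     # override wins: prints 12345

# HOST_IP introspection: awk prints the gateway (third field) of the
# default route. The sample line below stands in for real `ip route` output.
echo "default via 172.17.42.1 dev eth0" | awk '{print $3}'   # prints 172.17.42.1
```

This is why a plain docker run with no -e flags “just works” on the same host, while any of the variables can still be overridden per environment.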

Overriding any of these with the docker command line for your environment is trivial. For example, if your Jenkins server is at http://jenkins.internal:12345 you could run:

$ docker run -d \
    -e JENKINS_SERVER=jenkins.internal \
    -e JENKINS_PORT=12345 \
    --hostname jenkins_swarm_slave_1 \
    --name jenkins_swarm_slave_1 \
    dockerinpractice/jenkins_swarm_slave

To adapt this to your own use case, you’ll need to modify the Dockerfile to install the software needed to run your jobs.
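For example, a minimal derived image might look like the following. This is a sketch only: it assumes the base image is Debian-based and that the build runs with sufficient privileges to install packages, and the Maven package is purely illustrative — substitute whatever your jobs actually need:

```
FROM dockerinpractice/jenkins_swarm_slave
# Illustrative only: install the toolchain your jobs require
RUN apt-get update && apt-get install -y maven
```

Build an image from this Dockerfile and run it in place of the stock slave image; any job labelled “swarm” can then use the installed tools.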

Final Thoughts

This may not be a solution fit for every organisation (security?), but the small-to-medium-sized places that can implement it are the ones most likely to be feeling the pinch.

Oh, and remember that more compute is not always needed! Just because you can kick off that memory-hungry Java cluster regression test every time someone updates the README doesn’t mean you should…

It strikes us that there’s an opportunity for further work here.

Why not gamify your compute, so that developers that contribute more get credit?

Allow engineers to shut down jobs, or make their machines more available at certain times of day?

Maybe telemetry in the client node could indicate how busy the machine it’s on is, and help it decide whether or not to accept a job?

Reporting on jobs that can’t complete, or find a home?

The possibilities are quite dizzying!