For several years we’ve automated the creation of Jenkins jobs with the Jenkins Job DSL Plugin. This excellent plugin lets you manage Jenkins job definitions in code and generate them from external configuration. The Jenkins system configuration, however, has always been a gap that required manual setup, which prevented us from fully automating the creation of new Jenkins instances. Recently, internal adoption of our Alpine platform has grown substantially, so to scale it better we decided to split the base infrastructure up by business unit. That meant creating and maintaining several Jenkins instances, so we finally had to close this gap — and so we wrote the Jenkins System Config DSL Plugin!

Introducing the Jenkins System Config Plugin

This plugin takes advantage of a Jenkins feature that runs any Groovy scripts located in $JENKINS_HOME/init.groovy.d on startup. These scripts have full access to the classes defined in any installed plugin, which is how the System Config DSL plugin hooks in. The plugin follows a pattern similar to the Job DSL Plugin, exposing a Groovy-based DSL for configuring the Jenkins system configuration and all of its plugins.

Here’s an example of the DSL:

com.rei.jenkins.systemdsl.JenkinsSystemConfigDsl.configure {
    global {
        url("https://jenkins.example.com/")
        environmentVariables([TZ: 'America/Los_Angeles'])
        quietPeriod(5)
    }
    masterNode {
        numExecutors(4)
        mode(Node.Mode.NORMAL)
    }
    git {
        author("Jenkins", "jenkins@example.com")
    }
    mailer {
        smtpServer('email-server.example.com')
        defaultSuffix('@example.com')
    }
    seedJobs {
        jobDsl(getJobDslScript())
    }
}

See the full method reference here.

Custom Jenkins Docker Image

Before any plugins can be configured, they need to be installed. The Jenkins Docker image provides two facilities to solve this problem. First, anything in the /usr/share/jenkins/ref/ directory is copied into the Jenkins home directory before startup. Second, the image includes an excellent script for downloading plugins into that directory, where they are then automatically installed before Jenkins starts. We also have a couple of custom plugins (including the System Config DSL plugin) that we need to install by cURLing the plugin from Nexus. We end up with a Dockerfile that looks something like this:

FROM jenkins/jenkins:2.107.2-alpine

COPY --chown=1000:1000 plugins.txt /usr/share/jenkins/ref/plugins.txt
COPY --chown=1000:1000 init.groovy /usr/share/jenkins/ref/init.groovy.d/init.groovy.override

# suppresses the new install wizard
RUN echo "2.107.2" > /usr/share/jenkins/ref/jenkins.install.UpgradeWizard.state && \
    echo "2.107.2" > /usr/share/jenkins/ref/jenkins.install.InstallUtil.lastExecVersion

# installs the plugins, the file contains <pluginid>:<version> one per line
RUN /usr/local/bin/install-plugins.sh < /usr/share/jenkins/ref/plugins.txt

...

ADD --chown=1000:1000 "http://central.maven.org/maven2/com/rei/jenkins/systemdsl/jenkins-system-config-dsl/1.0.0/jenkins-system-config-dsl-1.0.0.hpi" "$refplugins_dir/jenkins-system-config-dsl.hpi.override"

Automating the Whole Thing

With the Jenkins system configuration finally automated, we can now provision Jenkins instances completely from scratch. We have a Terraform module that stamps out a Jenkins instance complete with system configuration.

One of the challenges of provisioning Jenkins instances from scratch is what to do with the passwords and other secrets needed for things like connecting to source control or deploying artifacts. When configuring Jenkins manually through the UI, its built-in secrets storage provides a secure place for them once entered. To stand an instance up from scratch in an automated way, however, you need those secrets in plaintext at some point so they can be loaded into the Jenkins config (which then encrypts them), but you obviously don’t want to check plaintext secrets into source control. Our solution was the AWS KMS service. The secrets are encrypted with a KMS key that only trusted people and Terraform can access (using IAM roles). When Terraform provisions a new Jenkins instance, it decrypts the secrets and places them in an S3 bucket encrypted with a different KMS key that only Jenkins and its build nodes can read. Jenkins then reads these secrets at startup and inserts them into its configuration. We opted to put the secrets into S3 rather than decrypting them directly on the Jenkins master with KMS because the build nodes also need access to them when they are created (we’re using the Jenkins EC2 plugin).
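The secrets flow above can be sketched with the AWS CLI. This is a hedged illustration, not our actual setup: the key aliases, bucket name, and file names below are placeholders.

```shell
# 1. A trusted operator encrypts the secrets with the "trusted" KMS key
#    and checks the ciphertext (never the plaintext) into source control.
aws kms encrypt \
    --key-id alias/jenkins-secrets-trusted \
    --plaintext fileb://secrets.yaml \
    --output text --query CiphertextBlob > secrets.yaml.encrypted

# 2. At provision time, Terraform (whose IAM role can use that key)
#    decrypts the ciphertext back to plaintext...
base64 -d secrets.yaml.encrypted > secrets.yaml.bin
aws kms decrypt \
    --ciphertext-blob fileb://secrets.yaml.bin \
    --output text --query Plaintext | base64 -d > secrets.yaml

# 3. ...and places it in an S3 bucket encrypted with a second KMS key
#    that only Jenkins and its build nodes can read.
aws s3 cp secrets.yaml s3://jenkins-secrets-bucket/secrets.yaml \
    --sse aws:kms --sse-kms-key-id alias/jenkins-secrets-runtime
```

Jenkins and its build nodes then fetch the object from S3 at startup, with decryption handled transparently by S3's KMS integration.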

A key piece of the system configuration is the seedJobs {} block, which we use to create an initial “meta-generator” job that then creates additional jobs based on external configuration. We actually use three tiers of job generators: the seedJobs block itself; the “meta-generator” job, which creates a folder and a “pipeline-generator” job for each application; and the pipeline-generators, which create each application’s actual deployment pipeline. This lets us regenerate a single application’s pipeline on demand without regenerating all of the jobs; on our largest Jenkins instance that would be roughly 2,300 jobs in total.
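As a rough sketch of how the middle tier fits together, the “meta-generator” Job DSL script might look something like the following. The application list, job names, and script paths here are hypothetical, not our actual definitions:

```groovy
// Middle tier: for each application found in external configuration,
// create a folder and a "pipeline-generator" job inside it.
def apps = ['app-one', 'app-two'] // in practice, loaded from external configuration

apps.each { app ->
    folder(app)

    job("${app}/pipeline-generator") {
        steps {
            dsl {
                // Third tier: this script generates the application's
                // actual deployment pipeline jobs.
                external("pipelines/${app}.groovy")
            }
        }
    }
}
```

Running a single application’s pipeline-generator regenerates just that application’s jobs, which is what makes on-demand regeneration cheap.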

Conclusion

With Jenkins fully automated, the Jenkins configuration pages are considered read-only: all system configuration is done through code, so it gets source control and pull request reviews. Managing multiple Jenkins instances is now much simpler, since we can test a valid configuration once and roll it out to all instances. This plugin is definitely still a work in progress, and we intend to support configuring additional plugins with the DSL over time. Contributions are welcome if a plugin you need isn’t currently supported.