I was presented with a challenge a few months ago: “Create a container-based hybrid CI/CD pipeline that includes GKE that we can demo at Google Cloud Next ’19”. Specs are always vague in these requests, which suits me quite well, as you get a lot of creative freedom to solve the task at hand. After listening to a talk on YouTube by Vic Iglesias, I was intrigued by the idea of ephemeral workspaces that are dynamically created for each build job. As we all know, anything idling in the cloud costs money.

Hello World

The workspace is in fact a Kubernetes workload that the Jenkins master boots up for a particular build job. This was fairly easy to set up: I simply followed the setup procedures in the GKE docs and had my echo "Hello World" up in minutes. The challenge quickly ramped up when I realized I needed to run a Docker daemon that the Jenkins agent could issue docker build against. How would you go about doing this without statically defining a DOCKER_HOST somewhere in your cloud?

Is docker-in-docker a thing for Kubernetes?

I knew from past encounters that running docker-in-docker is possible, and it’s well documented. Cobbling this together with GKE and Jenkins seemed to be a less obvious topic while googling. I realized I was using the stock Kubernetes plugin for Jenkins to dynamically provision Jenkins agents. The plugin allows you to declare your own Pod specification, hence running a sidecar Docker daemon alongside the Jenkins agent is quite trivial.
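As a minimal sketch of the idea (container names, labels and volume names here are illustrative, not my exact template), a pod template for the Kubernetes plugin pairs the JNLP agent container with a privileged dind sidecar and a shared emptyDir volume:

```yaml
# Hypothetical minimal pod template for the Jenkins Kubernetes plugin.
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: slave
spec:
  containers:
  - name: jnlp
    image: jenkins/jnlp-slave:3.27-1   # or a custom agent image with the docker CLI baked in
    volumeMounts:
    - mountPath: /var/lib/docker
      name: docker-graph
  - name: dind
    image: docker:18.09.3-dind
    securityContext:
      privileged: true                 # required for docker-in-docker
    volumeMounts:
    - mountPath: /var/lib/docker
      name: docker-graph
  volumes:
  - emptyDir: {}                       # ephemeral, dies with the build pod
    name: docker-graph
```

Because the pod is created per build and torn down afterwards, nothing idles between jobs.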

For reference, here’s the full pod spec the Jenkins master eventually spawns:

---
apiVersion: v1
kind: Pod
metadata:
  labels:
    jenkins: slave
    jenkins/cd-jenkins-slave: "true"
  name: default-s13mc
  namespace: cicd
spec:
  containers:
  - env:
    - name: JENKINS_SECRET
      value: 898200a1131e649637edb5c5faa3778c541bba82b9855d139b15cca7bf3e4492
    - name: JENKINS_TUNNEL
      value: cd-jenkins-agent:50000
    - name: JENKINS_AGENT_NAME
      value: default-s13mc
    - name: JENKINS_NAME
      value: default-s13mc
    - name: JENKINS_URL
      value: http://cd-jenkins:8080/
    - name: HOME
      value: /home/jenkins
    image: docker:18.09.3-dind
    imagePullPolicy: IfNotPresent
    name: dind
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    tty: true
    volumeMounts:
    - mountPath: /var/lib/docker
      name: volume-0
    - mountPath: /home/jenkins
      name: workspace-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-w7k5z
      readOnly: true
    workingDir: /home/jenkins
  - args:
    - 898200a1131e649637edb5c5faa3778c541bba82b9855d139b15cca7bf3e4492
    - default-s13mc
    env:
    - name: JENKINS_SECRET
      value: 898200a1131e649637edb5c5faa3778c541bba82b9855d139b15cca7bf3e4492
    - name: JENKINS_TUNNEL
      value: cd-jenkins-agent:50000
    - name: JENKINS_AGENT_NAME
      value: default-s13mc
    - name: JENKINS_NAME
      value: default-s13mc
    - name: JENKINS_URL
      value: http://cd-jenkins:8080
    - name: HOME
      value: /home/jenkins
    image: drajen/jnlp-slave:3.27-5
    imagePullPolicy: IfNotPresent
    name: jnlp
    securityContext:
      privileged: true
    terminationMessagePath: /dev/termination-log
    terminationMessagePolicy: File
    volumeMounts:
    - mountPath: /var/lib/docker
      name: volume-0
    - mountPath: /home/jenkins
      name: workspace-volume
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-w7k5z
      readOnly: true
    workingDir: /home/jenkins
  dnsPolicy: ClusterFirst
  nodeName: gke-standard-cluster-1-default-pool-dcc3e8a4-8jvh
  priority: 0
  restartPolicy: Never
  schedulerName: default-scheduler
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  tolerations:
  - effect: NoExecute
    key: node.kubernetes.io/not-ready
    operator: Exists
    tolerationSeconds: 300
  - effect: NoExecute
    key: node.kubernetes.io/unreachable
    operator: Exists
    tolerationSeconds: 300
  volumes:
  - emptyDir: {}
    name: volume-0
  - emptyDir: {}
    name: workspace-volume
  - name: default-token-w7k5z
    secret:
      defaultMode: 420
      secretName: default-token-w7k5z

What we can see here is that I can use the stock docker image from Docker, Inc. with the -dind tag. In the volumeMounts section we can also see the shared mapping of /var/lib/docker, which in turn allows the Jenkins agent to run the docker command against the sidecar daemon without any further constraints or configuration. The Jenkins team makes it very easy to build your own custom agent, and throwing in the docker binary is no harder than this Dockerfile example (with some other extras sprinkled in):

FROM jenkins/jnlp-slave:3.27-1
USER root
RUN apt-get update && \
    apt-get install -y python-pip \
        apt-transport-https \
        ca-certificates \
        curl \
        gnupg2 \
        software-properties-common && \
    curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add - && \
    add-apt-repository \
        "deb [arch=amd64] https://download.docker.com/linux/debian \
        $(lsb_release -cs) \
        stable" && \
    curl -fsSL https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add - && \
    echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" | tee -a /etc/apt/sources.list.d/kubernetes.list && \
    apt-get update && \
    apt-get install -y docker-ce-cli kubectl && \
    pip install ansible && \
    apt-get clean && \
    mkdir -p /etc/ansible && \
    echo "localhost ansible_connection=local" | tee -a /etc/ansible/hosts
USER jenkins
ENV PATH /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:~/.local/bin

Further, I used the GKE plugin to apply the new manifest I generate with the Ansible template module (there’s some history behind this choice that I don’t recall at the moment, but I love Ansible, maybe that’s why).
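For illustration, a templating task of this kind could look like the sketch below. The file names and variables are hypothetical; the template module itself is stock Ansible, rendering a Jinja2 template into the manifest the GKE plugin then applies:

```yaml
# Hypothetical Ansible task: render a Kubernetes manifest from a template,
# stamping in the image tag produced by the current build.
- name: Generate deployment manifest
  template:
    src: deployment.yaml.j2
    dest: "{{ workspace }}/deployment.yaml"
  vars:
    image_tag: "{{ build_number }}"
```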

Output

It’s questionable how common my use case is. I would assume a more natural path for this type of pipeline would be a cloud-provided build system, like Google Cloud Build. At the time I was looking for a short path to victory for the demo asset, and Jenkins is a known variable in the equation that you can bend to your will for the most part.

The demo I put together for Google Cloud Next ’19 was published to YouTube shortly after, “Using HPE Cloud Volumes with Google Kubernetes Engine with Hybrid Cloud CI/CD pipelines on Jenkins”: