As some of my previous posts show, I am quite busy creating a complete Continuous Integration pipeline using Docker and Jenkins pipelines. Readers of my previous blog posts will know I am a big fan of Kubernetes for Docker container orchestration, and what I ideally want to achieve is a full CI pipeline in which even the Kubernetes cluster itself gets deployed.

In this blog post I will detail how you can set up Jenkins to deploy a Kubernetes cluster. I wanted to use CloudFormation scripts, as that makes the most sense for a repeatable deployment method. The easiest way to do this turns out to be the kube-aws tool from CoreOS, which can generate a CloudFormation script that we can use for deploying a CoreOS-based Kubernetes cluster on AWS.

Preparing Jenkins

I am still using a Docker container to host my Jenkins installation (https://renzedevries.wordpress.com/2016/06/30/building-containers-with-docker-in-docker-and-jenkins/). If you just want to know how to use the kube-aws tool from Jenkins, feel free to skip this section :).

In order to use the kube-aws tool I need to modify my Jenkins Docker container in three ways:

1. Install the aws command line client

Installing the aws-cli is relatively simple: I just need to make sure Python and python-pip are installed, after which I can install the awscli Python package. Both packages are available from the apt repositories, so regardless of whether you are running Jenkins in Docker or on a regular Ubuntu box, you can install the aws-cli as follows:

RUN apt-get install -qqy python
RUN apt-get install -qqy python-pip
RUN pip install awscli

2. Install the kube-aws tool

Next we need to install the kube-aws tool from CoreOS. This is a bit more tricky, as there is nothing available for it in the default package repositories. Instead I simply download a specific release from the CoreOS GitHub releases using wget, unpack it, and move the binary to /usr/local/bin:

RUN wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz
RUN tar zxvf kube-aws-linux-amd64.tar.gz
RUN mv linux-amd64/kube-aws /usr/local/bin

3. Provide AWS keys

Because I do not want to prebake the AWS identity keys into the Jenkins image, I instead inject them as environment variables during Docker container startup. Since I start Jenkins using a docker-compose start sequence, I can simply modify my compose file to inject the AWS identity keys, which looks as follows:

jenkins:
  container_name: jenkins
  image: myjenkins:latest
  ports:
    - "8080:8080"
  volumes:
    - /Users/renarj/dev/docker/volumes/jenkins:/var/jenkins_home
    - /var/run:/var/run:rw
  environment:
    - AWS_ACCESS_KEY_ID=MY_AWS_ACCESS_KEY
    - AWS_SECRET_ACCESS_KEY=MY_AWS_ACCESS_KEY_SECRET
    - AWS_DEFAULT_REGION=eu-west-1
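If you would rather keep the keys out of the compose file entirely, docker-compose also supports loading variables from a separate file via env_file. A minimal sketch (the filename ./aws.env is an assumption, and the file should be kept out of source control):

```yaml
jenkins:
  container_name: jenkins
  image: myjenkins:latest
  env_file:
    - ./aws.env   # contains the AWS_* entries, one KEY=value per line
```

This keeps the secrets in a single git-ignored file instead of inline in the compose configuration.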

Putting it all together

The full Jenkins Dockerfile used to install all the tooling looks as follows:

FROM jenkinsci/jenkins:latest
USER root
RUN apt-get update -qq
RUN apt-get install -qqy apt-transport-https ca-certificates
RUN apt-key adv --keyserver hkp://p80.pool.sks-keyservers.net:80 --recv-keys 58118E89F3A912897C070ADBF76221572C52609D
RUN echo deb https://apt.dockerproject.org/repo debian-jessie main > /etc/apt/sources.list.d/docker.list
RUN apt-get update -qq
RUN apt-get install -qqy docker-engine
RUN usermod -a -G staff jenkins
RUN apt-get install -qqy python
RUN apt-get install -qqy python-pip
RUN pip install awscli
RUN wget https://github.com/coreos/coreos-kubernetes/releases/download/v0.7.1/kube-aws-linux-amd64.tar.gz
RUN tar zxvf kube-aws-linux-amd64.tar.gz
RUN mv linux-amd64/kube-aws /usr/local/bin
USER jenkins

Generating the CloudFormation script

The kube-aws tool has relatively simple input parameters and is straightforward to use: it generates a CloudFormation script that you can use to deploy a Kubernetes stack on AWS.

In essence, the tool has three phases:

1. Initialise the cluster settings

2. Render the CloudFormation templates

3. Validate and Deploy the cluster

To start, we need to initialise the tool by specifying a number of things: the name of the cluster, the external DNS name that will be used (for the Kubernetes console, for example), and the name of the EC2 key pair to use for the nodes that will be created.

kube-aws init --cluster-name=robo-cluster \
  --external-dns-name=cluster.mydomain.com \
  --region=eu-west-1 \
  --availability-zone=eu-west-1c \
  --key-name=kubelet-key \
  --kms-key-arn='arn:aws:kms:eu-west-1:11111111:key/dfssdfsdfsfsdfsfsdfdsfsdfsdfs'

You will also need a KMS encryption key from AWS (see http://docs.aws.amazon.com/kms/latest/developerguide/create-keys.html for how to create one); the ARN of that key is what you pass in the --kms-key-arn parameter of the above call.
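The key can also be created from the aws-cli instead of the console. A sketch of how that could look: the --query expression extracts just the ARN field from the JSON response (the description text, account id, and key id below are made-up examples):

```shell
# Creating the key for real requires valid AWS credentials:
#   KMS_ARN=$(aws kms create-key --description "kube-aws assets" \
#               --query KeyMetadata.Arn --output text)
#
# The same ARN extraction, shown offline against a stub of the
# JSON response shape (fictional account id and key id):
response='{"KeyMetadata": {"Arn": "arn:aws:kms:eu-west-1:11111111:key/example-key-id"}}'
KMS_ARN=$(printf '%s' "$response" | python3 -c 'import json, sys; print(json.load(sys.stdin)["KeyMetadata"]["Arn"])')
echo "$KMS_ARN"
```

Capturing the ARN in a variable like this makes it easy to feed straight into the kube-aws init call in a scripted job.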

Rendering CloudFormation templates

The next step is very simple: we render the CloudFormation templates by executing the command below:

kube-aws render

This renders a number of files in the local directory. Most importantly there is cluster.yaml, which contains all the configuration settings used in the CloudFormation template. The CloudFormation template itself is available under the filename stack-template.json.

Setting nodeType and number of nodes

Before we deploy, we need to change a few settings in the generated cluster.yaml configuration file. The main settings I want to change are the AWS instance type and the number of Kubernetes worker nodes. Because I want to automate the deployment, I have chosen to do a simple in-line replacement of values using sed.

With the code below I set the workerInstanceType property to an environment variable called 'INSTANCE_TYPE' and the workerCount property (the number of Kubernetes worker nodes) to an environment variable called 'WORKER_COUNT'.

sed -i "s/#workerCount: 1/workerCount: $WORKER_COUNT/" cluster.yaml
sed -i "s/#workerInstanceType: m3.medium/workerInstanceType: $INSTANCE_TYPE/" cluster.yaml

The reason I use environment variables is that this allows me to supply the values as Jenkins job input parameters later on, so when deploying a cluster I can specify them at the moment the job is triggered.
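The substitutions can be sanity-checked locally without a real cluster, against a stub cluster.yaml. This is a minimal sketch, assuming the rendered file contains the commented defaults that the sed patterns target:

```shell
# Stub of the two commented defaults the sed patterns look for:
cat > cluster.yaml <<'EOF'
#workerCount: 1
#workerInstanceType: m3.medium
EOF

# Example values, as they would arrive from the Jenkins job parameters:
WORKER_COUNT=4
INSTANCE_TYPE=t2.micro

# Uncomment and replace the defaults in place:
sed -i "s/#workerCount: 1/workerCount: $WORKER_COUNT/" cluster.yaml
sed -i "s/#workerInstanceType: m3.medium/workerInstanceType: $INSTANCE_TYPE/" cluster.yaml

cat cluster.yaml
# -> workerCount: 4
# -> workerInstanceType: t2.micro
```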

The last step is to validate that the changes we have made are correct; we do this as follows:

kube-aws validate

Deploying the Kubernetes cluster to AWS

The next and final step is to actually run the CloudFormation scripts. The kube-aws tool has a handy command for this that executes the CloudFormation template; simply run:

kube-aws up

You are of course free to run the actual template yourself, manually via the aws-cli or using the web console, but I like this handy shortcut built into the kube-aws tool for automation purposes. All the files needed for running this manually are available after the render and validate steps described above; the output you need consists of cluster.yaml, stack-template.json, the userdata/* folder, and the credentials folder.

Kubeconfig

After the kube-aws command completes, it will have written a valid kubeconfig file to the local working directory. This kubeconfig can be used to access the Kubernetes cluster. The cluster will take roughly an additional 5-10 minutes to become available, but once it is up you can execute the regular kubectl commands to validate that your cluster is running:

kubectl --kubeconfig=kubeconfig get nodes
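If you want a script to wait for the cluster rather than eyeballing the output, you can count the nodes reporting Ready. A small sketch; the two-column layout in the sample is an assumption about the typical `kubectl get nodes` output, and the node names are made up:

```shell
# count_ready: counts nodes whose STATUS column reads "Ready",
# skipping the header line of `kubectl get nodes` output.
count_ready() {
  awk 'NR > 1 && $2 == "Ready" { n++ } END { print n + 0 }'
}

# Offline demonstration against a sample listing:
sample='NAME            STATUS     AGE
ip-10-0-0-50    Ready      5m
ip-10-0-0-51    Ready      5m
ip-10-0-0-52    NotReady   1m'

printf '%s\n' "$sample" | count_ready
# -> 2
```

Against a live cluster the same function could be fed directly, e.g. `kubectl --kubeconfig=kubeconfig get nodes | count_ready`, and polled in a loop until it matches the requested worker count.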

Creating a Jenkins Job

I want to put all of the above together in a single Jenkins job. For this I have created a Jenkins pipeline file that chains together the individual pieces shown above. The pipeline consists of multiple stages:

1. Initialise the tool

2. Change the cluster configuration based on input parameters

3. Archive the generated cloudformation scripts

4. Deploy the cluster

5. Destroy the cluster

Input parameters

In the Jenkins pipeline I have defined multiple input parameters. These allow me to customise the worker count and instance type as described before. There are two ways to use them: you can hard-code the values in a freestyle job, or, if you want to use the new-style build pipelines in Jenkins, you can use this:

WORKER_COUNT = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: '4', description: '', name: 'WORKER_COUNT']]
INSTANCE_TYPE = input message: 'Instance Type', parameters: [[$class: 'StringParameterDefinition', defaultValue: 't2.micro', description: '', name: 'INSTANCE_TYPE']]

For steps 4 and 5 I use an additional input step in Jenkins to check whether the cluster really needs to be deployed. The fifth stage is there to optionally allow destroying the cluster again.

The full pipeline looks like this:

node {
    stage 'Kube-aws init'
    deleteDir()
    sh "kube-aws init --cluster-name=robo-cluster \
        --external-dns-name=kube.robot.renarj.nl \
        --region=eu-west-1 \
        --availability-zone=eu-west-1c \
        --key-name=kube-key \
        --kms-key-arn='arn:aws:kms:eu-west-1:11111111:key/dfssdfsdfsfsdfsfsdfdsfsdfsdfs'"

    stage "Kube-aws render"
    WORKER_COUNT = input message: 'Number of Nodes', parameters: [[$class: 'StringParameterDefinition', defaultValue: '4', description: '', name: 'WORKER_COUNT']]
    INSTANCE_TYPE = input message: 'Instance Type', parameters: [[$class: 'StringParameterDefinition', defaultValue: 't2.micro', description: '', name: 'INSTANCE_TYPE']]
    sh "kube-aws render"
    sh "sed -i 's/#workerCount: 1/workerCount: ${WORKER_COUNT}/' cluster.yaml"
    sh "sed -i 's/#workerInstanceType: m3.medium/workerInstanceType: ${INSTANCE_TYPE}/' cluster.yaml"
    sh "kube-aws validate"

    stage "Archive CFN"
    step([$class: 'ArtifactArchiver', artifacts: 'cluster.yaml,stack-template.json,credentials/*,userdata/*', fingerprint: true])

    stage "Deploy Cluster"
    shouldDeploy = input message: 'Deploy Cluster?', parameters: [[$class: 'ChoiceParameterDefinition', choices: 'yes\nno', description: '', name: 'Deploy']]
    if(shouldDeploy == "yes") {
        echo "deploying Kubernetes cluster"
        sh "kube-aws up"
        step([$class: 'ArtifactArchiver', artifacts: 'kubeconfig', fingerprint: true])
    }

    stage "Destroy cluster"
    shouldDestroy = input message: 'Destroy the cluster?', parameters: [[$class: 'BooleanParameterDefinition', defaultValue: false, description: '', name: 'Destroy the cluster']]
    if(shouldDestroy) {
        sh "kube-aws destroy"
    }
}

In Jenkins you can create a new job of type 'Pipeline' and simply copy the above Jenkinsfile script into the job. Alternatively, you can check this file into source control and let Jenkins scan your repositories for the presence of these files using the Pipeline Multibranch capability.

After running the job, the pipeline stage view will look roughly as follows:



Conclusion

I hope the above shows one way of automating the deployment of a Kubernetes cluster. Please bear in mind that this is not ready for a production setup and will require more work; it is still lacking significant capabilities such as rolling upgrades of existing clusters, deploying parallel clusters, and cluster expansion. Still, it demonstrates that deploying a Kubernetes cluster can be automated in a simple way.

But for now, for my use case, it helps me quickly and easily deploy a Kubernetes cluster for my project experiments.