After all my recent posts about deploying a Kubernetes cluster to AWS, the one step I still wanted to cover is how to deploy Docker containers to a Kubernetes cluster with a bit of automation. In this post I will explain how you can do this relatively simply using Jenkins pipelines and some Groovy scripting 🙂

Prerequisites

* A working Kubernetes cluster (see here: https://renzedevries.wordpress.com/2016/07/18/deploying-kubernetes-to-aws-using-jenkins/)

* A Jenkins master/slave setup

* The kubectl tool installed and configured on the Jenkins master/slave and on your desktop

* Publicly accessible Docker images (for example in AWS ECR, see: https://renzedevries.wordpress.com/2016/07/20/publishing-a-docker-image-to-amazon-ecr-using-jenkins/)
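As a quick sanity check for the kubectl prerequisite, something like the commands below can help; this is a sketch that assumes, as in the pipeline snippets later in this post, that the kubeconfig file is called 'kubeconfig' in the working directory:

```shell
# Confirm kubectl is on the PATH and can reach the cluster
kubectl version --kubeconfig=kubeconfig

# Show which context/cluster this kubeconfig points at
kubectl config current-context --kubeconfig=kubeconfig
```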

What are we deploying

In order to deploy containers against Kubernetes there are two things needed. First, I need to deploy the services that ensure we have ingress traffic via AWS ELBs and that give us internal DNS lookup for service-to-service communication. Second, I need to deploy the actual containers using Kubernetes Deployments.

In this post I will focus mainly on one service, called ‘command-service’. If you want to read a bit more about the services that I deploy, you can find that here: https://renzedevries.wordpress.com/2016/06/10/robot-interaction-with-a-nao-robot-and-a-raspberry-pi-robot/

Creating the services

The first task is to create the actual Kubernetes service for the command-service. The service descriptors are relatively simple in my case; the command-service needs to be publicly load balanced, so I want Kubernetes to create an AWS ELB for me. I deploy this service by first checking out my Git repository, which contains the service descriptors as Kubernetes YAML files, and then using a Jenkins pipeline with some Groovy scripting to deploy it.

The service descriptor for the publicly load-balanced command-svc looks like this. It defines a load balancer that is backed by all pods that have the label ‘app’ with value ‘command-svc’; the AWS ELB backing this service is then attached to those pods.

apiVersion: v1
kind: Service
metadata:
  name: command-svc
  labels:
    app: command-svc
spec:
  type: LoadBalancer
  ports:
  - port: 8080
    targetPort: 8080
    protocol: TCP
    name: command-svc
  selector:
    app: command-svc

In order to actually create this service I use the Jenkins pipeline code below. I use the apply command because the services are not very likely to change, and this way the script works both in clean and in already existing environments. Because I constantly create new environments and sometimes update existing ones, I want all my scripts to be runnable multiple times regardless of the current cluster/deployment state.

import groovy.json.*

node {
    stage 'prepare'
    deleteDir()
    git credentialsId: 'bb420c66-8efb-43e5-b5f6-583b5448e984', url: 'git@bitbucket.org:oberasoftware/haas-build.git'
    sh "wget http://localhost:8080/job/kube-deploy/lastSuccessfulBuild/artifact/*zip*/archive.zip"
    sh "unzip archive.zip"
    sh "mv archive/* ."

    stage "deploy services"
    sh "kubectl apply -f command-svc.yml --kubeconfig=kubeconfig"
    waitForServices()
}

Waiting for creation

One of the challenges I faced, though, is that a number of the containers I want to deploy depend on these service definitions. However, it takes a bit of time for these services to deploy and for the ELBs to be fully created. So I have written a small piece of waiting code in Groovy that checks whether the services are up and running. It is called via the ‘waitForServices()’ method in the pipeline; you can see the code below:

def waitForServices() {
    sh "kubectl get svc -o json > services.json --kubeconfig=kubeconfig"

    while(!toServiceMap(readFile('services.json')).containsKey('command-svc')) {
        sleep(10)
        echo "Services are not yet ready, waiting 10 seconds"
        sh "kubectl get svc -o json > services.json --kubeconfig=kubeconfig"
    }
    echo "Services are ready, continuing"
}

@com.cloudbees.groovy.cps.NonCPS
Map toServiceMap(servicesJson) {
    def json = new JsonSlurper().parseText(servicesJson)

    def serviceMap = [:]
    json.items.each { i ->
        def serviceName = i.metadata.name
        def ingress = i.status.loadBalancer.ingress
        if(ingress != null) {
            def serviceUrl = ingress[0].hostname
            serviceMap.put(serviceName, serviceUrl)
        }
    }
    return serviceMap
}
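To illustrate what the Groovy code is looking for, here is a minimal shell sketch of the same readiness test, run against a hypothetical services.json. The field names follow the Kubernetes v1 Service API (items[].metadata.name, status.loadBalancer.ingress[].hostname); the hostname value itself is made up:

```shell
# Hypothetical sample of what `kubectl get svc -o json` returns once
# the ELB for command-svc has been provisioned
cat > services.json <<'EOF'
{
  "items": [
    {
      "metadata": { "name": "command-svc" },
      "status": {
        "loadBalancer": {
          "ingress": [ { "hostname": "a1b2c3-123456.eu-west-1.elb.amazonaws.com" } ]
        }
      }
    }
  ]
}
EOF

# Simplified version of the Groovy check: the service counts as ready
# once an ingress hostname is present in the output
if grep -q '"hostname"' services.json; then
  echo "command-svc ready"
fi
```

Before the ELB exists, the ingress list is simply absent from the status, which is why the Groovy code has the null check on it.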

This method does not return until all the services are ready for usage, in this case my command-svc with its backing ELB.

Creating the containers

The next step is actually the most important: deploying the actual container. In this example I use the Deployment objects that are available since Kubernetes 1.2.x.

Let’s take a look at the command-svc container that I want to deploy. I again use the YAML file syntax for describing the Deployment object:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: command-svc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: command-svc
    spec:
      containers:
      - name: command-svc
        image: account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:latest
        ports:
        - containerPort: 8080
        env:
        - name: amq_host
          value: amq
        - name: SPRING_PROFILES_ACTIVE
          value: production

Let’s put all that together for the rest of my deployments. In this case I have one additional container to deploy, the edge-service. Using Jenkins pipelines this looks relatively simple:

stage "deploy"
sh "kubectl apply -f kubernetes/command-deployment.yml --kubeconfig=kubeconfig"

I currently do not have any active health checking at the end of the deployment; I am still planning on adding it. For now I just check that the pods and deployments are properly deployed, which you can also do by simply running these commands:

kubectl get deployments

This will yield something like below:

NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
command-svc   1         1         1            1           1m

If you check the running pods with ‘kubectl get po’, you can see that the deployment has scheduled a single pod:

NAME                          READY     STATUS    RESTARTS   AGE
command-svc-533647621-e85yo   1/1       Running   0          2m
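As a simple stop-gap for the missing health check, kubectl also offers a rollout status command that blocks until a Deployment reports its pods as available. A sketch of how this could slot into the pipeline, assuming the same kubeconfig file as in the snippets above:

```shell
# Block until the command-svc Deployment has all replicas available,
# making the pipeline step fail-fast if the rollout does not converge
kubectl rollout status deployment/command-svc --kubeconfig=kubeconfig
```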

Conclusion

I hope this article has taken away a bit of the difficulty of deploying your containers against Kubernetes. It can be done relatively simply; of course it is not production grade, but it shows at a very basic level how, with some basic scripting (Groovy), you can accomplish this task using just Jenkins.

Upgrades

In this particular article I have not zoomed in on the act of upgrading a cluster or the containers running on it. I will discuss this in a future blog post, where I will zoom in on the particulars of doing rolling updates of your containers and eventually address the upgrade of the cluster itself on AWS.