In the last few posts, the comment that kept coming back was ‘you should not use latest’. I totally agree, and in this post I will finally do something about it 🙂 I mainly used the setup for development purposes; in production, however, I need something more reliable, meaning versioned deployments.

There are of course two parts to this: first I have to create a build job that can version the containers, and second I need a build job that rolls out the updated container.

Versioning and Releasing

Using the Jenkins pipeline below, I have a build job that uses a Jenkins input parameter to define the release version. To use it, create a Jenkins pipeline job with one input parameter called ‘RELEASE_VERSION’. I keep the default value at ‘dev-latest’ for development purposes, but when releasing we obviously need a sensible version number.

```groovy
node {
    stage 'build'
    build 'home projects/command-svc/master'

    stage 'test-deploy'
    sh "\$(aws ecr get-login)"
    sh "docker tag home-core/command-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:dev-latest"
    sh "docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:dev-latest"

    stage 'qa'
    build 'home projects/command-svc-tests/master'

    stage 'Publish containers'
    sh "docker tag home-core/command-svc:latest aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:${RELEASE_VERSION}"
    sh "docker push aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:${RELEASE_VERSION}"
}
```

Now when I trigger the build job, Jenkins asks me to input the version for releasing that container. For this article, let’s use version ‘0.0.1’. When running the build job, Jenkins goes through a few stages:

1. Building the container

2. Pushing a dev-latest version

3. Running the tests, which deploy a container against the test cluster

4. Releasing the container using a fixed version

In the last stage I release the container using the input parameter specified when triggering the job. The build job does not actually deploy to production; that is, for the time being, still a manual action.

Deploying to Kubernetes

For deploying to production I have done an initial deployment of the service using the deployment descriptor below, starting out with version ‘0.0.1’, which I built above.

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: command-svc
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: command-svc
    spec:
      containers:
      - name: command-svc
        image: aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:0.0.1
        ports:
        - containerPort: 8080
        env:
        - name: amq_host
          value: amq
        - name: SPRING_PROFILES_ACTIVE
          value: production
```
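Assuming the descriptor is saved to a file (the filename here is my own choice, not fixed), the initial deployment can be applied and checked with standard kubectl commands:

```shell
# Create the Deployment from the descriptor
# (the filename command-svc-deployment.yaml is an assumption)
kubectl apply -f command-svc-deployment.yaml

# Verify the Deployment exists and the pod comes up
kubectl get deployment command-svc

# Confirm the expected image version is running
kubectl describe deployment command-svc | grep Image
```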

Doing a rolling update

This works fine for an initial deployment, but if I want to upgrade my production containers I need something more. In production, in essence, I just want to upgrade the container image version, which is a relatively simple operation with Kubernetes. Let’s assume I have released a newer version of the container, ‘0.0.2’.

In order to update the container in Kubernetes, I can simply do a rolling update by changing the image of the Deployment object as follows:

```shell
kubectl set image deployment/command-svc command-svc=aws_account_id.dkr.ecr.eu-west-1.amazonaws.com/command-svc:0.0.2
```
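Kubernetes then replaces the pods one by one. A quick sketch of how the rollout can be watched, and reverted if the new version misbehaves, using standard kubectl subcommands (not yet part of my pipeline):

```shell
# Watch the rolling update until all replicas run the new image
kubectl rollout status deployment/command-svc

# Inspect the revision history of the Deployment
kubectl rollout history deployment/command-svc

# Roll back to the previous version if something is wrong
kubectl rollout undo deployment/command-svc
```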

I am currently not integrating this into my build pipeline, as I want a production upgrade to remain a conscious decision for now. But once all the quality gates are in place, there should be no reason not to automate this step as well. More on this in future blog posts.