With the rise of containerized deployments, both in the cloud and in on-premises installations, Kubernetes has become the de facto standard for creating and orchestrating container-based infrastructures. While this architectural style has many benefits, such as fault tolerance, easy horizontal scaling (in the cloud, even automatic scaling), and infrastructure as code, it also comes with a couple of challenges. It requires architects to think differently and pushes complexity into the infrastructure. This is especially true if you want to leverage the full potential of this infrastructure. One of these differences is how updates are handled for systems running in Kubernetes.

Application updates inside of Kubernetes

Kubernetes has the concept of deployments for non-stateful application components. In short, a deployment spins up one or more containers on any machine in the cluster and handles upgrades of images or configuration changes in a non-disruptive way, so the application keeps functioning even while an upgrade is happening (provided you have enough replicas).
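As a sketch of what such a component looks like, here is a minimal deployment manifest with a rolling-update strategy; the names and image version are placeholders, not taken from a real system:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: homepage                # placeholder name
spec:
  replicas: 3                   # enough replicas to stay available during a rollout
  selector:
    matchLabels:
      app: homepage
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1         # at most one pod is taken down at a time
      maxSurge: 1               # at most one extra pod is started during the update
  template:
    metadata:
      labels:
        app: homepage
    spec:
      containers:
        - name: nginx
          image: nginx:1.9.0    # placeholder image version
```

With this strategy, Kubernetes replaces pods one by one during an update, which is what keeps the application reachable throughout the rollout.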



Classically, to update an image on the cluster you can run the following with the Kubernetes command-line tool kubectl:

kubectl set image deployment.v1.apps/homepage nginx=nginx:1.91 --record=true

This updates the application to the given image version in the background and keeps the remaining configuration untouched. The update can be monitored by running:

kubectl describe deployment homepage

This returns a human-readable state of the container configuration. In case the upgrade fails, a manual revert to the previous revision can be done with:

kubectl rollout undo deployment.v1.apps/homepage

There is one more step to think of if the update relies on structural changes inside your database: you have to either inject into the container the logic to migrate the underlying application’s database schema, or create a dedicated job inside of Kubernetes which executes the necessary code to migrate the database to the new structure.
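The second option, a dedicated migration job, could be sketched roughly like this; the job name, image, and migration command are hypothetical and depend entirely on your application:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: homepage-migrate           # hypothetical name
spec:
  backoffLimit: 1                  # do not retry a failed migration endlessly
  template:
    spec:
      restartPolicy: Never         # a migration must not be restarted in place
      containers:
        - name: migrate
          # same image version the deployment is being updated to
          image: xcnt/example:1.0.0
          # hypothetical migration entry point of the application
          command: ["./manage.py", "migrate"]
```

The important detail is that the job runs the same image version the deployment is updated to, so schema and code stay in sync.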

Continuous Delivery

If you want to utilize Continuous Delivery for your solution, you have to give the CI/CD application (in XCNT’s case: Jenkins) access to the cluster’s API and provide the means to perform these upgrade steps. This requires at least edit access to the deployment resources. However, in most cases I’ve seen in the industry, CI/CD simply has admin access to the cluster so it can easily add and remove deployments without adjusting the cumbersome RBAC configuration in Kubernetes.
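For comparison, a least-privilege setup would look roughly like the following Role and RoleBinding, which grant edit access to deployments only; the namespace and account names are placeholders:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: production          # placeholder namespace
  name: ci-deployer
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "watch", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: production
  name: ci-deployer-binding
subjects:
  - kind: ServiceAccount
    name: jenkins                # placeholder CI/CD service account
    namespace: ci
roleRef:
  kind: Role
  name: ci-deployer
  apiGroup: rbac.authorization.k8s.io
```

Even this scoped setup still lets the CI/CD system change any field of any deployment in the namespace, which is part of the problem discussed below.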

Using kubectl with edit access to update the cluster resources

Giving Admin Access to your CI/CD System

In my opinion, this is a security issue: you leave an administration token in a system which, due to the capabilities of kubectl, allows the cluster to be completely destroyed.

The preferable way would be to specify inside the deployment configuration itself that updates to this specific deployment are allowed if the image matches.

Introducing the Kubernetes Update Manager

Updating a deployment on the CLI with the Kubernetes Update Manager

We at XCNT deeply rely on Open Source Software and also want to support the OSS community. That is why we are releasing our internal solution, the Kubernetes Update Manager, into the OSS space under the MIT license. It is a service which runs inside the Kubernetes cluster and provides an API-key-secured REST interface for updating deployments, as well as support for running dedicated migration scripts in a job whenever you update a Docker image.

Using the Kubernetes Update Manager as a proxy between the CI/CD system and the Kubernetes API to reduce the access scope of the CI/CD system in the Kubernetes cluster

For this we use the annotations available in Kubernetes to mark a deployment and a job as relevant for the update manager. Additionally, migration jobs can be configured inside of Kubernetes and annotated as well, marking them to be executed with the new image during an update process.
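As an illustration only, an annotated deployment could look like the following; the annotation keys shown here are made up for this sketch, and the actual keys are documented in the project’s GitHub repository:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: homepage
  annotations:
    # Hypothetical annotation keys for illustration; see the GitHub
    # repository for the actual names expected by the update manager.
    update-manager.example.org/enabled: "true"
    update-manager.example.org/update-classifier: "stable"
spec:
  replicas: 3
  selector:
    matchLabels:
      app: homepage
  template:
    metadata:
      labels:
        app: homepage
    spec:
      containers:
        - name: nginx
          image: nginx:1.9.0   # the image the update manager is allowed to update
```

Because the opt-in lives in the deployment manifest itself, the decision of what may be updated stays with the cluster configuration rather than with the CI/CD system.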

This greatly reduces the interaction surface between the CI/CD system and the Kubernetes cluster to just the deployment and job images which the CI/CD system should actually change, following the principle of least privilege and thus massively reducing the possible attack surface. Additionally, it provides an easy way to configure update access to the cluster without having to share cluster certificates or Google service accounts with the CI/CD system. In case the deployment fails (for example because the image doesn’t exist), it automatically rolls the deployment back to the previous revision and marks the update process as failed.

The update manager also supports “update classifiers”, which make it possible to update images for different stages within the same cluster. If you have a “stable” and a “test” system inside your cluster, you can distinguish them by specifying different update classifiers.

Updating cluster resources can be done by executing the CLI interface inside the Docker container. The configuration can be provided both via environment variables and via command-line flags. For example, the following command updates the resources with the update classifier stable:

docker run --rm -t xcnt/kubernetes-update-manager:stable update --url https://update-manager.example.org/updates --image xcnt/example:1.0.0 --update-classifier stable --api-key example-api-key

For more information on how to use the Update Manager, please see our GitHub repository.

Links: