Charts development workflow

For our applications, we use the branching workflow and we’ve decided to do the same with our charts development.

The dev branch is used to build charts that are meant to be tested on development clusters. Then, when a pull request is merged on master, the charts are validated in a staging environment. Finally, we create a pull request to merge the changes into the prod branch in order to apply them in production.

Each environment has its own private repository that stores our charts, and we use Chartmuseum, which exposes a really useful API. That way we enforce clear isolation between environments and ensure that the charts have been battle-tested before they are used in production.
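As a sketch of what that API gives us, here is how a packaged chart could be pushed to and queried from a per-environment Chartmuseum instance. The URL is a hypothetical placeholder; the endpoints themselves are Chartmuseum's standard chart-manipulation API.

```shell
# Hypothetical dev-environment Chartmuseum URL
CHARTMUSEUM_DEV="https://chartmuseum.dev.example.com"

# Upload a packaged chart to the dev repository
curl --data-binary "@myapp-1.2.3.tgz" "$CHARTMUSEUM_DEV/api/charts"

# List every stored version of a chart
curl "$CHARTMUSEUM_DEV/api/charts/myapp"

# Consume the repository from Helm
helm repo add dev "$CHARTMUSEUM_DEV"
helm repo update
```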

Chart repository per environment

It is worth noting that when developers push their dev branch, a version of their chart is automatically pushed to the dev Chartmuseum. Thus, all developers use the same dev repository and have to be careful to specify their own chart version in order to avoid picking up someone else's chart changes.
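One common way to keep versions from colliding in a shared repository is to suffix the chart version with a SemVer pre-release tag derived from the developer and commit. This is an illustrative convention, not necessarily the exact scheme used here:

```python
def dev_chart_version(base_version: str, user: str, commit_sha: str) -> str:
    """Build a unique SemVer pre-release version for the shared dev repository.

    Example: "1.2.3" -> "1.2.3-dev.alice.3f4a2b1"
    The pre-release tag keeps each developer's chart distinct in Chartmuseum.
    """
    return f"{base_version}-dev.{user}.{commit_sha[:7]}"

print(dev_chart_version("1.2.3", "alice", "3f4a2b1c9d"))  # 1.2.3-dev.alice.3f4a2b1
```

Since SemVer treats pre-release versions as lower than the corresponding release, these dev builds also never shadow a released chart.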

Furthermore, our small Python script validates the Kubernetes objects against the Kubernetes OpenAPI specs using Kubeval before pushing them to the Chartmuseum.
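In practice this kind of check can be run by rendering the chart to plain manifests and piping them into kubeval; the chart path and values file below are placeholders:

```shell
# Render the chart to plain Kubernetes manifests, then validate them
# against the Kubernetes OpenAPI schemas with kubeval.
helm template myapp ./charts/myapp --values values-dev.yaml \
  | kubeval --strict
```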

Summary of the chart development workflow

1. Set up our pipeline tasks according to the gazr.io specification for the quality tasks (lint, unit-test)

2. Push the Docker image that contains the Python tooling used to deploy our applications

3. Set the environment according to the branch name

4. Check the Kubernetes YAML files with Kubeval

5. Automatically increment the chart's version and the versions of its parents (charts that depend on the one being changed)

6. Push the chart to the Chartmuseum that corresponds to its environment

Managing clusters differences

Cluster federation

At some point, we were using Kubernetes cluster federation, which allowed us to declare Kubernetes objects from a single API endpoint. But we faced issues: namely, some Kubernetes objects could not be created through the federation endpoint, making it hard to maintain federated objects alongside other, per-cluster objects.

To alleviate this issue, we decided to manage our clusters independently, which ended up making the process much easier (we were using federation v1 though; things may have changed with v2).

Geo-distributed platform

Currently, our platform is spread across 6 regions, 3 on-premises and 3 in the cloud.

Distributed deployment

Helm global values

Four global Helm values allow us to define the differences between our clusters. These are the minimum default values for all of our charts.
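A values file along these lines could express such per-cluster differences; the key names below are hypothetical, since the article does not list the exact four values:

```yaml
# Illustrative example of four global values describing a cluster.
global:
  environment: dev          # dev, staging, or prod
  region: eu-west-1         # one of the 6 regions the platform spans
  datacenter: on-premises   # on-premises or cloud
  clusterName: dev-eu-west-1
```

Because they live under `global`, these values are visible to a chart and to all of its subcharts, so every template can branch on the same cluster description.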