Many companies want the same thing: the ability to deploy code to test a particular feature or bug fix without actually deploying it to development, staging, or production. Kubernetes makes this pretty easy with the use of namespaces. For more information on namespaces, check out: https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/

So the question is: how do we make it so that our project can be deployed to a namespace without having to teach all of our developers the ins and outs of Kubernetes? Well, there are a couple of things that need to happen to make this possible:

namespace

configmaps

secrets

manifests for the services which are being deployed

a script to tie everything together

I decided to use a combination of configuration and convention to accomplish the above tasks.

The Namespace

I wrote a simple namespace.json template whose placeholder gets substituted with the namespace name:

{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "${NAMESPACE}",
    "labels": {
      "name": "${NAMESPACE}"
    }
  }
}
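To make this concrete, here's a self-contained sketch of rendering the template with envsubst (the same tool the deploy script uses later). The template is recreated inline so the snippet stands alone; the rendered file name and the example namespace are my own choices.

```shell
# Write the template (same content as above), then render it with envsubst,
# which replaces ${NAMESPACE} with the value from the environment.
cat > namespace.json <<'EOF'
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "${NAMESPACE}",
    "labels": { "name": "${NAMESPACE}" }
  }
}
EOF

NAMESPACE="feature-myfeature" envsubst < namespace.json > namespace.rendered.json

# The rendered manifest would then be applied with:
#   kubectl apply -f namespace.rendered.json
```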

The Configmaps

Each service has its own configuration variables that need to be available to it at runtime. I decided to go with a conventional approach modeled on how .env files work.

Key/value pairs that are written to files ending in `.env` will be combined together and added to a ConfigMap that will be used by a service for environment variables.

A ConfigMap will be created for each service in the following manner:

.env/staging.env will set variables for all services in the staging environment

.env/service-a.env will set variables for all service-a's regardless of the environment

.env/feature-myfeature.env will set variables for all services in the feature-myfeature namespace

.env/staging/service-a.env will set variables for all service-a's in the staging environment

.env/staging/feature-myfeature/service-a.env will set variables for only the service-a running in the feature-myfeature namespace on staging

I realize that allowing the granularity of setting configuration based on the feature, instead of enforcing configuration to be written to the environment name, could have been overkill… but I felt it was necessary to provide a way to test out a feature without accidentally overwriting the upstream config when the branch was merged.

Here’s the python script to combine each file into a final env file:
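The original script isn't reproduced here, so below is a minimal sketch of the merge logic as described: files are read from least specific to most specific, and later files override earlier ones. The function name, the output file name, and the exact ordering list are my own assumptions.

```python
import os

def merge_env_files(paths, out_path):
    """Combine .env files into one; later files override earlier ones."""
    merged = {}
    for path in paths:
        if not os.path.exists(path):
            continue  # not every service/environment defines every file
        with open(path) as f:
            for line in f:
                line = line.strip()
                if not line or line.startswith("#"):
                    continue  # skip blanks and comments
                key, _, value = line.partition("=")
                merged[key.strip()] = value.strip()
    with open(out_path, "w") as out:
        for key, value in merged.items():
            out.write(f"{key}={value}\n")
    return merged

# Least-specific to most-specific, matching the convention above.
paths = [
    ".env/staging.env",
    ".env/service-a.env",
    ".env/feature-myfeature.env",
    ".env/staging/service-a.env",
    ".env/staging/feature-myfeature/service-a.env",
]
merge_env_files(paths, "combined.env")
```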

I’ll get into how to create the actual configmap in the script section below.

The Secrets

Every non-trivial service has some number of secrets, such as passwords, API tokens, and nonces, that are necessary to connect to various other services. Because of the sensitivity of this information, you don't want to check it in as part of your project. So instead, we create a Kubernetes secret based on env variables defined at runtime. Since the projects I've been working on use CircleCI, I'll discuss how this was accomplished in that context; however, it should be easy to apply to another CI tool like Jenkins. Here's the script:

It iterates over the environment variables and stores them as secret key/values based on the environment that we’re deploying to. So, any key defined with the prefix STAGING_ will be written to the secrets file for the staging environment. Since feature deployments are branches off of the staging environment, secrets defined for the staging environment apply to a feature deployment.
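The script itself isn't embedded here, so here is a minimal sketch of that idea: collect every environment variable carrying the target environment's prefix (e.g. STAGING_) and emit a Kubernetes Secret manifest, base64-encoding the values as the Secret `data` field requires. The function name and the Secret's metadata naming are my own assumptions.

```python
import base64
import json
import os

def build_secret_manifest(environment, namespace, env=os.environ):
    """Collect env vars prefixed with e.g. STAGING_ into a Secret manifest."""
    prefix = environment.upper() + "_"
    data = {}
    for key, value in env.items():
        if key.startswith(prefix):
            # Strip the prefix; base64-encode as the `data` field requires.
            data[key[len(prefix):]] = base64.b64encode(value.encode()).decode()
    return {
        "kind": "Secret",
        "apiVersion": "v1",
        "metadata": {"name": f"{environment}-secrets", "namespace": namespace},
        "type": "Opaque",
        "data": data,
    }

manifest = build_secret_manifest("staging", "feature-myfeature")
print(json.dumps(manifest, indent=2))
```

The resulting JSON can be written to a file and applied with kubectl like any other manifest.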

Manifests For The Services

I took a conventional approach here. Any Kubernetes manifest defined beneath a directory that matches the name of our current environment will be deployed. As mentioned above, feature deployments are branches off of the staging environment, so the manifests from staging are applied to the feature deployment.

Script To Tie It All Together

Essentially what is happening here is this:

If NAMESPACE isn't default, create a new namespace

Create a Kubernetes secret based on the set environment variables

Combine all of the .env files into one file and create a Kubernetes configmap based on that file

Apply all of the manifests defined in the MANIFEST_IN_DIR into the NAMESPACE

Wait for the rollout to finish for each of the IMAGES
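The script itself isn't embedded here, so the five steps above can be sketched in bash as a single function. The variable names NAMESPACE, ENVIRONMENT, MANIFEST_IN_DIR, and IMAGES come from the post; the file names (namespace.json, secrets.json, combined.env) and the ConfigMap/deployment naming are my own assumptions.

```shell
#!/bin/bash
set -eu

deploy() {
  # 1. If NAMESPACE isn't "default", create it from the template.
  if [ "${NAMESPACE}" != "default" ]; then
    envsubst < namespace.json | kubectl apply -f -
  fi

  # 2. Apply the secrets generated from the prefixed environment variables.
  kubectl apply -n "${NAMESPACE}" -f secrets.json

  # 3. Combine the .env files, then build a ConfigMap from the result.
  python combine_env.py
  kubectl create configmap "${ENVIRONMENT}-config" \
    --from-env-file=combined.env -n "${NAMESPACE}"

  # 4. Render and apply every manifest for the current environment.
  for manifest in "${MANIFEST_IN_DIR}"/*; do
    envsubst < "${manifest}" | kubectl apply -n "${NAMESPACE}" -f -
  done

  # 5. Wait for each image's rollout to finish.
  for image in ${IMAGES}; do
    kubectl rollout status -n "${NAMESPACE}" "deployment/${image}"
  done
}
```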

I used envsubst to inject environment variables into the templates that I created for the namespace and manifests.

Conclusion

It's all very much a work in progress still. I hope to get most of this cleaned up and put onto my GitHub so it can be easily applied to other projects. I've left out a few of the steps of the deployment process, such as building/pushing the images, cleaning up the namespaces after a development branch has been merged, and deploying to production vs. deploying to staging. I plan to write about how that was accomplished in future posts.