This article is part of a series on pragmatic, repeatable ways of managing your configuration data across environments and clouds.

All this started as a by-product of a recent meeting with a customer and a conversation with a partner.

And now to give you a bit of context…

Context

In my previous article, I described an approach to configuration management for production-grade systems and applications using Git as the source of truth. If you're not planning on reading that article, I'll sum it up in a few bullets.

Configuration data is managed as JSON files

JSON config files are committed to GitHub

Each application/system environment (dev, staging, prod, etc.) is represented by a Git branch. For instance, the Git branch dev hosts all config files for the DEV environment. Needless to say, these branches are never merged.

Configurations are retrieved through a configuration service, in the form of a REST endpoint and JSON-path keys

A quick example to help understand the concept:

Consider the following JSON config file, my-service.json, in the branch prod (which serves the prod environment):

{
  "app": "services",
  "port": "8000",
  "app-type": "backend",
  "env": "sandbox",
  "log-level": "debug"
}

Hence, when I deploy the my-service app, it looks up its parameters (port number, log level, etc.) with simple REST GET calls:



$ curl http://config-service.mydomain/api/v2/my-service/prod/port
8000

$ curl http://config-service.mydomain/api/v2/my-service/prod/log-level
debug
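For illustration, here is a minimal Python sketch of a client helper for such a service. The base URL, path layout, and plain-text responses are assumptions based on the curl example above, not a published API:

```python
import urllib.request

# Hypothetical base URL; adjust for your own deployment.
CONFIG_BASE = "http://config-service.mydomain/api/v2"

def config_url(service: str, env: str, key: str) -> str:
    """Build the REST endpoint for a single config key."""
    return f"{CONFIG_BASE}/{service}/{env}/{key}"

def get_config(service: str, env: str, key: str) -> str:
    """Fetch one configuration value (returned as plain text) from the config service."""
    with urllib.request.urlopen(config_url(service, env, key)) as resp:
        return resp.read().decode().strip()
```

With this helper, the two curl calls above become `get_config("my-service", "prod", "port")` and `get_config("my-service", "prod", "log-level")`.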

Every time you commit a change to one of the branches, it is automatically propagated to the service's cache via Git hooks, so your application never uses outdated config data.
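As a sketch of that cache-refresh step: assuming the hook delivers a GitHub-style push payload, the service can map the pushed branch to an environment and invalidate only the affected service/environment cache entries. The field names follow GitHub's push-event payload; the cache-key format is hypothetical.

```python
def keys_to_invalidate(payload: dict) -> set:
    """Given a GitHub-style push webhook payload, return the cache keys
    (service/environment pairs) whose config files were touched."""
    # "refs/heads/prod" -> environment "prod" (branch name == environment name)
    env = payload["ref"].rsplit("/", 1)[-1]
    keys = set()
    for commit in payload.get("commits", []):
        touched = (commit.get("added", [])
                   + commit.get("modified", [])
                   + commit.get("removed", []))
        for path in touched:
            if path.endswith(".json"):
                # "my-service.json" -> service "my-service"
                service = path.rsplit("/", 1)[-1][: -len(".json")]
                keys.add(f"{service}/{env}")
    return keys
```

The service would then re-fetch those files from the branch and refresh its cache.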

This approach yields a number of high-value benefits, such as:

Boiling configuration management down to a simple REST call

Maintaining a single version of the truth: all your configuration data is stored and managed in one place, a Git branch

Full traceability and auditing of every change

Stronger security guarantees, integration with authoritative backends

Completely cloud-agnostic (perfect for multi-cloud deployments)

Okay, all this is good. Now what?

Fetching configurations one by one with REST calls is fine for most use cases. However, for apps deployed on Kubernetes, it makes sense to use the configuration mechanism Kubernetes itself provides: ConfigMaps, one of the most powerful ways to manage configs. So, how do we take configuration provided by the Config Data Service (and backed by Git), "translate" it into ConfigMap(s) in real time, and ensure it never gets out of sync?

Well, the solution, which at first seemed far-fetched, turned out to be quite simple: use a Kubernetes Operator. What is a Kubernetes Operator anyway? Operators are controllers that work with custom resources to perform tasks that, well, "human operators" would otherwise have to take care of. Think of deploying a database cluster with a configurable number of nodes, managing upgrades, and even performing backups. A custom resource would specify the database version, the number of nodes to deploy, the backup frequency, and the target storage, and the controller would implement all the business logic needed to perform these operations. This is what etcd-operator does, for example.
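To make this concrete, such an operator could watch a custom resource that names the service and environment to mirror. Everything below (the API group, kind, and field names) is a hypothetical sketch of what that resource might look like, not an existing CRD:

```yaml
# Hypothetical custom resource: "mirror the prod config of my-service
# from the config service into a ConfigMap".
apiVersion: configsync.example.com/v1alpha1
kind: ConfigSync
metadata:
  name: my-service-config
spec:
  service: my-service           # app name in the config service
  environment: prod             # Git branch / environment to track
  targetConfigMap: my-service-config
  refreshIntervalSeconds: 30    # re-poll in case a webhook is missed
```

The controller's reconcile loop would fetch the JSON from the config service and create or update the target ConfigMap accordingly, keeping it in sync with the Git branch.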