Cattle not Pets!

All of the above points are great in theory, but how do we go from where we are to where we need to be, especially if our current pipeline does not match these ideals? This was the case with Oddcheckers' original pipeline, and getting there required a change of mindset.

Typically, the phrase “Cattle not pets” has been used to describe server infrastructure, where each server can be categorised as either disposable or unique.

An example of a unique “pet” might be a manually installed and configured application server that can never be switched off or automatically upgraded, for fear of all the surrounding dependent systems and services collapsing when it becomes unavailable. Contrast this with, say, a fully automated web server that is one of many identical “cattle”: it can simply be destroyed, with a new one taking its place without any loss of service.

This idea of “cattle” vs “pets” is a powerful concept and can be applied to more than just infrastructure. In our case, we had many different applications and products with various “unique pet” release and deployment methods, which ultimately meant a large proportion of our time was wasted on maintenance and support. In order to resolve these issues, we focused on the commonality of these processes and came away with the following:

· Our current build and release processes took place over 2 different CI systems: Jenkins and Gitlab.

· 95% of our Java applications were similar, and the process to build, package and release them could be templated and reused.

· We had over 150 projects with differing release and build scripts that needed to be updated each time our deployment environments changed.

· Some releases were entirely manual and required a member of the infrastructure team to complete them.

· Build job configurations and locations were inconsistent across applications.

We ultimately did the following by leveraging the power of the Kubernetes API and empowering each development team to manage and deploy their own applications:

· We unified our CI and CD platforms, choosing Gitlab as it was already the primary location for our code base. It had an easy-to-use, configurable build/deployment pipeline that could be defined as a single YAML file residing within each repository.

· Created Helm templates that could be reused across all projects (a sketch of what such a shared chart might look like follows this list). This enabled us to easily update and manage deployments independently of their code base. It also meant all releases were now consistent across environments and could easily be rolled back.

· Created a standard CI build file that could be utilised across projects. We now had a consistent build pipeline for every project, making support and maintenance a breeze.

· Defined and documented a set of common global and shared CI variables that could be used across build and deployment jobs.

· All Helm deployments were linted and tested before being rolled out, ensuring every release was verified before it ran.
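As a rough illustration of the shared-chart approach, here is a minimal sketch of a reusable Helm deployment template and a per-project values file. The chart name, labels and values shown are illustrative assumptions rather than our actual chart:

```yaml
# charts/java-service/templates/deployment.yaml (illustrative shared template)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Release.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
```

Each project then only needs to supply its own small values file:

```yaml
# values.yaml kept in each project repository (example values only)
replicaCount: 2
image:
  repository: registry.example.com/orders-service   # hypothetical image path
  tag: "1.4.2"
service:
  port: 8080
```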

Below is an example of what one of our build and deployment jobs looked like after unifying them and removing duplicate scripts.
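The following is a minimal sketch of such a .gitlab-ci.yml, assuming a Docker image build followed by a Helm deployment. The stage layout, variable names, container images and chart path are illustrative rather than our exact configuration:

```yaml
# .gitlab-ci.yml (illustrative sketch; job names, variables and chart path are examples)
stages:
  - build
  - deploy

variables:
  # Shared/global variables such as registry credentials would normally be
  # defined once at the Gitlab group level rather than per project
  HELM_CHART: charts/java-service
  KUBE_NAMESPACE: $CI_PROJECT_NAME

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: alpine/helm:3.12.0
  script:
    # Lint the chart first so a broken template never reaches the cluster
    - helm lint "$HELM_CHART"
    - >
      helm upgrade --install "$CI_PROJECT_NAME" "$HELM_CHART"
      --namespace "$KUBE_NAMESPACE"
      --set image.repository="$CI_REGISTRY_IMAGE"
      --set image.tag="$CI_COMMIT_SHORT_SHA"
      --wait
  environment:
    name: production
```

Because the Helm chart and the shared variables live outside the individual projects, the only per-project pieces left are this small pipeline file and a values file.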

By removing a lot of the uniqueness in our pipelines, we were able to spend more time working on the important challenges within the migration.