As Kubernetes turns five, we explore the changing face of DevOps in the K8s world.

DevOps Inception and Shortcomings

Anoop Balakuntalam is a curious technologist based out of Hyderabad, India. In the last 18 years, he has founded ventures in the areas of web development & server management, built a cloud practice serving Fortune 500 clients and has been instrumental in developing various cloud-based applications and SaaS products. Combining his deep interest in DevOps & software architecture with his passion for building inspired teams, he is currently busy building a frictionless app delivery platform for Kubernetes at HyScale.

In the pre-Kubernetes era, infrastructure and app development were inescapably intertwined. As complexities grew and teams evolved, we saw “DevOps” emerge as a bridge between development and operations in an attempt to resolve the age-old delivery trouble arising from developers throwing things over the wall to ops and then ops having to deal with production issues on the other side. DevOps rose to be a new subculture within existing teams, sometimes yielding new “DevOps teams” and even leading to a new class of tools and methodologies.

In reality, though, bridging skills and evolving an operational culture was not enough. Without proper configuration management in place, applications would misbehave in production. Application configuration would often conflict with IT's own configuration goals around security, scalability and availability. Inevitably, every time the pager went off for anything beyond a trivial issue, both roles would get pulled in to uncover the mysteries of running software in production.

The challenge of agile delivery was that application configuration and infrastructure configuration had to be achieved collaboratively yet have well-defined ownership and clear role separation. In the absence of such a model, there were too many heads still involved in delivery, resulting in a lot of friction and wasted energy. On top of that was the classic case of 'it-works-for-me' syndrome, wherein developers would complain that their software worked fine in their own configured development environments but behaved differently once pushed to an IT-configured environment. Configuration hell reigned, even as the culture of DevOps seemed to be finding its way.

Configuration-Driven DevOps: From Idempotency to Immutability

To address these challenges, the industry turned to the principle of idempotency. Most of us understand idempotency simply as “an operation that produces the same end-result no matter how many times it is performed.” In the case of configuration management, this end-result would be the desired configuration of an app’s environment. When an environment deviates from its desired configuration we could use this principle to ensure that the drift is corrected and the environment is brought back to the desired state. Tools such as Puppet and Chef were born out of this idea and seemed to be the most suitable solution for a while.
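The idempotency idea can be sketched in a few lines of Python. The `ensure_line` helper below is purely illustrative (not taken from Puppet, Chef or any real tool): it converges a config file toward a desired state, so the first run applies the change and every later run finds no drift and does nothing.

```python
import tempfile
from pathlib import Path

def ensure_line(path: Path, line: str) -> bool:
    """Converge the file toward containing `line`.
    Returns True if a change was applied, False if already in the desired state."""
    existing = path.read_text().splitlines() if path.exists() else []
    if line in existing:
        return False          # no drift detected: nothing to do
    existing.append(line)
    path.write_text("\n".join(existing) + "\n")
    return True               # drift corrected

cfg = Path(tempfile.mkdtemp()) / "app.conf"
first = ensure_line(cfg, "max_connections=100")   # applies the change
second = ensure_line(cfg, "max_connections=100")  # same end state, no change
```

Run it once or a hundred times: the file ends up in the same state, which is exactly the guarantee idempotent configuration management offers.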

While idempotency solved some issues, it came with challenges of its own.

We had a way of detecting when things changed and updating only what needed to be updated. However, it did not solve all the problems: configuration scripts still had to anticipate every state an environment might drift into, leaving teams catering to a seemingly endless number of cases.

This complexity became avoidable with the advent of containers. Instead of changing things in place, we could now deploy immutable fully-configured container images and simply replace older containers with new updated ones. Thus the focus shifted from idempotency to another important principle: immutability. As Wikipedia says, “an immutable object is an object whose state cannot be modified after it is created.” So once an application has been packaged into a container image along with its dependencies and configurations, any number of identical containers can be spawned from it.
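As a loose analogy in Python (illustrative only, not how container runtimes work internally), a frozen dataclass captures the principle: once an "image" is built, its state cannot be modified, every "container" spawned from it is identical, and an update means building a new image rather than mutating the old one in place.

```python
from dataclasses import dataclass, FrozenInstanceError

@dataclass(frozen=True)
class Image:
    """An immutable 'image': the app plus configuration baked in at build time."""
    name: str
    tag: str
    env: tuple  # frozen configuration, e.g. (("DB_HOST", "db"),)

def spawn(image: Image) -> dict:
    """Every 'container' spawned from the same image is identical."""
    return {"ref": f"{image.name}:{image.tag}", "env": dict(image.env)}

img = Image("shop-api", "1.4.2", (("DB_HOST", "db"),))
c1, c2 = spawn(img), spawn(img)   # identical containers from one image

try:
    img.tag = "1.4.3"             # in-place mutation is forbidden...
except FrozenInstanceError:
    new_img = Image(img.name, "1.4.3", img.env)  # ...an update builds a new image
```

The names and fields here are hypothetical; the point is the shift from "fix the running thing" to "replace it with a freshly built one."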

Enter Kubernetes: Immutability + Infra Abstraction

With the popularity of containers becoming a DevOps game-changer, Kubernetes came to be the most sought-after container orchestration platform. Application teams could now be sure that their applications, packaged as containers, could be deployed onto any K8s environment running anywhere, and that the application would behave the same, thanks to immutability. In addition to that, Kubernetes' excellent abstraction over the infrastructure meant that infrastructure and development teams could cleanly and separately focus on their own areas of expertise, taking away the massive configuration and collaboration challenges.

This seemed to put an end to the configuration hell mentioned earlier, as we witnessed a fundamental shift emerge with the clean separation of concerns between runtime infrastructure ops and application deployment. IT can focus on things like cluster infrastructure, capacity management, infrastructure monitoring, cluster-level disaster recovery, networking and network security, storage redundancy, and so on. Application teams, on the other hand, can focus their energies on building container images, writing Kubernetes manifest YAML for deployment and configuration, externalizing configuration and secrets, and so on.
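For instance, a typical deliverable from an app team might look like the following Kubernetes Deployment manifest (names and values are purely illustrative): it pins an immutable image, declares the desired replica count, and externalizes configuration through a ConfigMap.

```yaml
# Hypothetical manifest for an app team's service; names are illustrative.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: shop-api
spec:
  replicas: 3                      # app team declares desired scale
  selector:
    matchLabels:
      app: shop-api
  template:
    metadata:
      labels:
        app: shop-api
    spec:
      containers:
        - name: shop-api
          image: registry.example.com/shop-api:1.4.2   # immutable, versioned image
          envFrom:
            - configMapRef:
                name: shop-api-config                  # configuration externalized
```

Nothing in this file mentions VMs, subnets or disks; those concerns stay with the IT team running the cluster.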

Application teams no longer need to go back and forth between different skill sets, seeking information and coordinating with different teams to get the job done, or wait hours or days for tickets to be addressed. Suddenly, there are ways to eliminate friction and lighten the weight of collaboration. The actual infrastructure no longer matters much for delivery, since K8s abstracts it away neatly.

The Road Ahead for DevOps

For all the abstraction and clean separation K8s brings, it also adds another layer on top of the VMs and machines on which it runs. This means additional overhead for IT with regard to cluster management, networking, storage, etc. In recent times, there has been a lot of industry focus on simplifying K8s setup and management so that enterprise teams can access all the intended benefits.

For an app team, containerizing a typical medium-sized, microservices-based app would require several thousand lines of K8s manifest files to be written and managed. Each new deployment would need a rebuild of container images and potentially modifications to several manifest files. Clearly, DevOps in today's world will be different from DevOps in the pre-Kubernetes era.

These new-world DevOps teams may do well with an automation process for delivery to Kubernetes so that efficiency gains and economic benefits can be realized sooner while also maintaining reliability and speed. Such automation along with a standardized process will further enable a clean hand-off interface between the IT teams managing the infrastructure and the app teams delivering apps to K8s. For enterprises pursuing agility and frictionless delivery at scale, finding the shortest path to Kubernetes will be at the heart of DevOps in times to come.