An Introduction to DevOps

DevOps (a clipped compound of DEVelopment and OPerationS) is one of the fastest-growing areas in the software development world. This section will explore what DevOps is, the various types of tools involved in DevOps, and how DevOps might be evolving.

The Origins of DevOps

In 2007, Patrick Debois accepted a position with a Belgian government ministry, where his task involved migrating a data center. Debois was determined to understand every part of the IT infrastructure, and his role in QA (quality assurance) forced him to move between the development and operations worlds with regularity. Some days, Debois would work with the dev team: planning, participating in agile development, working with developer tools, and so on. On other days, he found himself embedded within the operations group: fighting fires, keeping production running, and ensuring that code was deployed effectively. Switching back and forth illustrated the stark contrast between the development and operations cultures, and Debois came to the realization that there must be a better way for the two groups to work together.

Development and operations were generally separate “siloed” functions within an organization before the advent of DevOps.

Debois encountered the like-minded Andrew Clay Shafer at an Agile conference in 2008, and their spirited discussions laid the foundations for what would later become DevOps.

Now that we’ve briefly explored the history, we can focus on what DevOps is, as well as what it isn’t. First and foremost, DevOps addresses a human problem: the historical lack of communication and collaboration between developers, IT professionals, and QA engineers (and, more recently, information security professionals). Embracing DevOps therefore translates into a profound culture shift, wherein developers, IT, and QA communicate and collaborate on a daily basis, breaking down the silos that formerly existed between these groups. Without this culture shift, DevOps cannot succeed.

Let us be clear–a culture shift of this magnitude is difficult, and it will not happen overnight. Understanding the steps that are required is easy, whereas implementing them is quite another story. Furthermore, for successful adoption of DevOps, 100% management buy-in is required. Should management continue to expect its employees to remain entrenched in the old ways (where “old” means circa 2008!), an attempted DevOps adoption will fail spectacularly.

Given that we haven’t yet truly defined DevOps, let us begin with the notion that DevOps embodies a set of principles that espouse increased communication and collaboration (among other cultural changes). These principles are often described via the CALMS model–Culture, Automation, Lean, Measurement, and Sharing. As before, understanding these principles is easy, whereas changing behavior to embrace them is anything but.

At this point, a practical, technical definition of DevOps will certainly be helpful. This one is taken from the Agile Admin blog:

DevOps is the practice of operations and development engineers participating together in the entire service lifecycle, from design through the development process to production support.

Here is another definition, this time from Wikipedia:

…a culture, movement, or practice emphasizing collaboration and communication of software developers, QA, and other IT (operations) professionals, while automating the process of software delivery and infrastructure changes.

It should be clear that there is no single definition, and indeed, DevOps can mean different things to different people. But at its core, DevOps is certainly about increased collaboration and communication, culminating in the breaking down of the “silos” that formerly existed around Dev, Ops, and QA.

Before we examine some of the various workflows and technologies of DevOps, we want to make clear what DevOps isn’t. DevOps isn’t simply a mixture of Dev and Ops, nor is it a department in your organization. It isn’t a certification to earn or a compliance standard to meet. It’s not a product: you can’t buy or download it. Nor is it a tool, or even a collection of tools, although as we’ll see shortly, there are plenty of tools that we can leverage to enhance our DevOps journey.

Striving for Continuous Integration and Continuous Deployment

A company that runs a DevOps environment effectively is rewarded with continuous integration and continuous deployment (CI/CD): a development lifecycle in which code changes are merged and tested continuously (continuous integration) and flow in a continuous stream of deployments to production (continuous deployment). The metric most commonly used to gauge the relative success of CI/CD is “deployments per day,” stemming from the seminal 2009 presentation “10+ Deploys per Day: Dev & Ops Cooperation at Flickr” by John Allspaw and Paul Hammond.

Instead of the slow and cumbersome deployments of yesteryear, which could take 48 hours or more, CI/CD allows developers to pivot quickly to address issues, make changes, and experiment constantly.

CI/CD is the beating heart of agile, lean, and many other management philosophies. It makes for better software, happier users, and healthier companies; but for effective CI/CD to occur, we need to decentralize much of the traditional “dev” and “ops” activity so that every member of the team works together. This is the management challenge of DevOps.

The Tension Between Dev and Ops

Operations teams have historically concerned themselves with such things as user environments, server states, load balancing, and memory management. They need to keep things running in a fixed state within a constantly changing environment. Developers, on the other hand, are all about constant deployment and constant change. Getting these two teams to work together can be a gargantuan effort. As we will see, many new technologies have been developed that can help us overcome this challenge.

Version Control Technologies

The early days of DevOps saw the reinvention of version control: systems such as Git and SVN, and hosting platforms such as GitHub and Bitbucket. These tools existed in the pre-DevOps world, but they have taken on new importance under DevOps. Instead of devs simply being concerned with getting the correct version of the code, the ops team now deploys code that is checked in and built daily. Everything that is deployed must pass rigorous integration testing before it is allowed onto a production machine. Version control also provides a virtual connection between the developers and the operations team: “rolling back” undesirable code to return the production machines to their previous state becomes a trivial process, as sketched below.
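For example, here is a minimal sketch of such a rollback using Git; the commit hash and branch name below are illustrative placeholders, not taken from any real project:

    # Identify the bad commit in the history (the hash is a placeholder).
    git log --oneline
    # Create a new commit that reverses the bad change without rewriting history.
    git revert --no-edit 1a2b3c4
    # Push the revert so the pipeline redeploys the previous behavior.
    git push origin main

Because the revert is itself a new, auditable commit, the rollback flows through the same tested deployment path as any other change.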

Automating Deployments and Continuous Integration

Continuous integration tools such as Jenkins, TeamCity, and Travis enable code to be built and tested as soon as it is checked in, effectively automating the deployment and QA processes. Given that automation is a key principle of DevOps, these tools allow for faster integrations that no longer rely on human intervention, and in concert with version control systems, they allow for easy rollback in the event that any errors are detected.
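To make this concrete, below is a hedged sketch of the kind of shell script a CI server might run on every check-in; the make targets and the COMMIT_SHA variable are assumptions for illustration, not part of any particular tool:

    #!/usr/bin/env bash
    # Hypothetical CI job: build and test every check-in, failing fast on errors.
    set -euo pipefail
    git checkout "$COMMIT_SHA"   # the CI server supplies the commit to test (assumption)
    make build                   # compile the project (hypothetical target)
    make test                    # run the test suite; a non-zero exit fails the build

A failed run blocks the change from reaching production, which is exactly the human-free gatekeeping these tools automate.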

Cloud Services and Configuration Management

Another obvious issue arises when one considers the prevalence of cloud technologies in modern development lifecycles. For the first time in history, production servers may be created or destroyed at will using platforms such as AWS, Azure, and Google Cloud Platform, enabling elasticity in our load balancing. Before cloud servers were the norm, companies purchased physical servers to handle the maximum computing loads they anticipated. This is the metaphorical equivalent of owning enough warehouse space for “Christmas level” throughput while needing most of that space only a few times a year.
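As a small illustration of that elasticity, the AWS CLI can create and destroy servers on demand; the AMI and instance IDs below are placeholders:

    # Launch two small servers to absorb a traffic spike (IDs are placeholders).
    aws ec2 run-instances --image-id ami-0abcdef1234567890 --instance-type t3.micro --count 2
    # Destroy them once the spike has passed, so we stop paying for them.
    aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
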

As these cloud-based servers are brought online, we must ensure that they share a common configuration with the current production servers. The technologies used to manage these servers are configuration management tools such as Puppet, Chef, and Ansible. These tools were created to manage the configurations of large numbers of servers through an easy-to-use, scripting-like language. They work by creating machine descriptions (“infrastructure as code”) that can be stored in and retrieved from version control and swiftly applied to tens, hundreds, or even thousands of machines. Should the desired configuration change, pushing a new configuration out to every machine in our infrastructure is typically a trivial process.
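To see what these tools automate, consider this deliberately naive sketch that pushes one configuration file to many hosts over plain ssh; the file names and the admin user are assumptions:

    # Naive configuration push: copy a config to every host and reload the service.
    # hosts.txt (one hostname per line) and nginx.conf are hypothetical.
    while read -r host; do
        scp nginx.conf "admin@${host}:/etc/nginx/nginx.conf"
        ssh "admin@${host}" 'sudo systemctl reload nginx'
    done < hosts.txt

Tools like Puppet, Chef, and Ansible replace loops like this with idempotent, declarative descriptions of the desired state, so a machine that is already correct is left untouched.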

Microservices and Containers

Continuous integration and deployment are built on the philosophical concept of modularization: the idea that a thousand small changes are better than one large one, and that developers should seek to isolate their code to enable these small changes. The days of re-deploying an entire code base over several days are long past.

On a practical level, this means that instead of building the monolithic applications of yore, developers are currently building applications based on microservices–small, independent, easy-to-replace applications. Architectures based on microservices allow for easier continuous deployment.

Container technologies such as Docker and LXC are based on a simple idea: they enable developers to encapsulate (or “containerize”) an application (or a microservice that is part of an application), along with any dependencies required to run it.

Instead of depending on “golden image” virtual machines, developers can now simply encapsulate their work in containers which can then be deployed into the production environment as completely independent microservice applications.

Containers isolate the code they contain from the underlying host machines. As a result, the dependencies of the code live inside the container and therefore cannot conflict with versions of those dependencies which may be installed on the host machine. Concerns regarding the state of the production server at any given time become irrelevant–the containers will run regardless of where they are deployed. Another advantage is that the containers themselves are disposable. They are environments that launch, run an application, and then disappear. Instead of building an application in an environment, developers are able to build an environment around an application.
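As a brief, hedged illustration of this workflow, here is how it might look with the Docker CLI; the image name, container name, and port are hypothetical:

    # Build an image bundling the app and its dependencies from a Dockerfile.
    docker build -t payments-service:1.0 .
    # Run it anywhere Docker is installed; the host's library versions are irrelevant.
    docker run -d --name payments -p 8080:8080 payments-service:1.0
    # Dispose of the environment when it is no longer needed.
    docker rm -f payments
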

Managing Microservices and Containers

Once we are deploying large numbers of containerized microservices rapidly, we need some way of managing them. An abstraction layer is necessary to allow an operations team to effectively deploy and manage microservices that are all part of a larger production ecosystem. Cluster manager tools (also called orchestration tools) such as Docker Swarm, Kubernetes, and Mesos are designed to help with this. These tools allow the scheduling and rapid deployment of microservices to multiple nodes in a cluster, enabling operators to manage the rapid integration and deployment of large numbers of containers in a multi-node environment.
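For instance, here is a minimal sketch using Kubernetes via kubectl; the deployment name, image, replica count, and port are illustrative assumptions:

    # Hand the containerized service to the orchestrator.
    kubectl create deployment payments --image=payments-service:1.0
    # Ask for five copies; the scheduler places them across the cluster's nodes.
    kubectl scale deployment payments --replicas=5
    # Expose the copies behind a single load-balanced address inside the cluster.
    kubectl expose deployment payments --port=8080

The operator declares how many copies should exist and the orchestrator decides where they run, restarting or rescheduling them as nodes come and go.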

Where is DevOps going?

As you can see, our current DevOps ecosystem is not really a single item; it is an environment made up of several technologies combined to make for a smooth code transition. This is partially by design. A large part of any decent engineering framework is the ability to unplug a widget from the machine and plug in a new one at will, for example, replacing Puppet with Ansible or replacing Bitbucket with GitLab at a moment’s notice. The ability to make these rapid changes affords engineers the flexibility to adopt constantly evolving technologies and plug them in with as little disruption as possible to the overall flow of the process. But there is a cost to doing this.

First, there’s the human element. At this point, to be an effective DevOps engineer, one needs to have competency in many different technologies. A DevOps engineer could be spinning up AWS machines today, writing bash scripts tomorrow, and rolling back changes in version control the day after (or all three of these tasks in the same day). Being able to do all this, and do it well, is a daunting task for any engineer. We find ourselves in a situation where developers and operations folks need to know how to code and have a deep understanding of all of the different aspects of a deployment. How do we solve this conundrum?

Management technologies have recently been created to help merge these disparate tools into a single, seamless system that can be controlled from one place. This is the concept of “abstraction.”

Abstraction hides these smaller, complex processes behind a unified interface, so we’ll need fewer people to run them effectively.

Developers might still occasionally have to go onto production machines to write bash code or tweak a Docker container that isn’t working properly, but generally speaking, abstraction gives us a single tool that does the majority of the work around scheduling, prioritizing, and ensuring the smooth flow of most of the tasks in the system. It’s much easier to understand and operate the tool than it is to understand the unique aspects of every process the tool is controlling.

We are starting to see the rise of integrated environments such as Cloud Foundry, Spring, and Sonatype. These environments enable DevOps teams to manage a smooth, integrated DevOps chain while ensuring the same “plug and play” flexibility that comes with the current setup. More companies will adopt integrated environments to smooth out their DevOps workflows.