Containers were big news in 2017, on Opensource.com and throughout the IT infrastructure community. Three primary storylines dominated the container conversation over the past year:

The first is Kubernetes. Kubernetes gained huge momentum as the primary means to combine Open Container Initiative (OCI)-format containers into managed clusters that compose an application. Much of Kubernetes' increasingly broad acceptance is due to its large and active community.

The second is standardization and decoupling of components. The OCI published the 1.0 versions of its container image and container runtime format specs. CRI-O now provides a lightweight alternative to using Docker as the runtime for Kubernetes orchestration.

The third storyline is security, with widespread recognition of the need to secure containers on multiple levels: against unpatched upstream code, against attacks on the underlying platform, and against production software that isn't quickly updated.

Let's take a look at how these storylines are playing out in the open source world.

Kubernetes and orchestration

Containers on their own are fine for individual developers working on their laptops. However, as Dan Walsh notes in How Linux containers have evolved:

The real power of containers comes about when you start to run many containers simultaneously and hook them together into a more powerful application. The problem with setting up a multi-container application is the complexity quickly grows and wiring it up using simple Docker commands falls apart. How do you manage the placement or orchestration of container applications across a cluster of nodes with limited resources? How does one manage their lifecycle, and so on?

A variety of orchestration and container scheduling projects have tried to tackle this basic problem; one is Kubernetes, which came out of Google's internal container work (known as Borg). Kubernetes has continued to rapidly add technical capabilities.
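To make the orchestration idea concrete, here is a minimal, hypothetical Kubernetes Deployment manifest (the name `web` and the image tag are illustrative); it declares the desired state "keep three copies of this container running," and the cluster does the placement and lifecycle management that Dan describes:

```yaml
# Hypothetical example: ask Kubernetes to keep three replicas of a
# web container running, restarting or rescheduling them on failure.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3            # desired number of running copies
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.13
        ports:
        - containerPort: 80
```

The point is the declarative contract: rather than wiring containers together with imperative Docker commands, you describe the application's desired shape and let the scheduler reconcile it across the cluster's nodes.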

However, as Anurag Gupta writes in Why is Kubernetes so popular?, it's not just about the tech. He says:

One of the reasons Kubernetes surged past these other systems in recent months is the community and support behind the system: It's one of the largest open source communities (more than 27,000+ stars on GitHub); has contributions from thousands of organizations (1,409 contributors); and is housed within a large, neutral open source foundation, the Cloud Native Computing Foundation (CNCF).

Google's Sarah Novotny offers further insights into what it's taken to make Kubernetes into a vibrant open source community; her remarks in an April podcast are summarized in How Kubernetes is making contributing easy. She says it starts "with a goal of being a successful project, so finding adoption, growing adoption, finding contributors, growing the best toolset that they need or a platform that they need and their end users need. That is fundamental."

Standardization and decoupling

The OCI, part of the Linux Foundation, launched in 2015 "for the express purpose of creating open industry standards around container formats and runtime." Currently there are two specs, Runtime and Image, both of which released their 1.0 versions in 2017.

The basic idea here is pretty simple. By standardizing at this level, you provide a sort of contract that allows for innovation in other areas.

Chris Aniszczyk, executive director of the OCI, put it this way in our conversation at the Open Source Leadership Summit in February:

People have learned their lessons, and I think they want to standardize on the thing that will allow the market to grow. Everyone wants containers to be super‑successful, run everywhere, build out the business, and then compete on the actual higher levels, sell services and products around that. And not try to fragment the market in a way where people won't adopt containers, because they're scared that it's not ready.

Here are a couple of specific examples of what this approach makes possible.

The CRI-O project started as a way to create a minimal maintainable runtime dedicated to Kubernetes. As Mrunal Patel describes in CRI-O: All the runtime Kubernetes needs:

CRI-O is an implementation of the Kubernetes CRI [Container Runtime Interface] that allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods... It is a lightweight alternative to using Docker as the runtime for Kubernetes.

In this way, CRI-O allows for mixing and matching different layers of the container software stack.
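In practical terms, swapping Docker for CRI-O underneath Kubernetes comes down to pointing the kubelet at a different CRI endpoint. The sketch below reflects a typical 2017-era CRI-O setup; exact flag names and the socket path can vary by version and distribution:

```
# Point the kubelet at CRI-O's CRI socket instead of the Docker shim.
# (Socket path and flags reflect a typical CRI-O install; adjust per distro.)
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```

That one seam, defined by the CRI and the OCI runtime spec, is what lets the orchestrator and the runtime evolve independently.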

A more recent community project is Buildah. It uses the underlying container storage to build images and does not require a runtime. Because no runtime is involved, it can use the host's package manager(s) to install software into the image, so the resulting images can be much smaller while still meeting the OCI spec. William Henry's Getting started with Buildah (published on Project Atomic) provides additional detail.
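A hedged sketch of that workflow, loosely following the pattern in William's article (the image name and package choice are illustrative): the host's dnf installs packages directly into the mounted container filesystem, so no package manager ever lands inside the image itself.

```
# Hypothetical Buildah sketch: build a small OCI image with the host's
# package manager, no container runtime or daemon involved.
ctr=$(buildah from scratch)        # start from an empty image
mnt=$(buildah mount "$ctr")        # mount the container's root filesystem
dnf install -y --installroot "$mnt" \
    --setopt install_weak_deps=false coreutils   # install using the HOST's dnf
buildah unmount "$ctr"
buildah config --cmd /bin/sh "$ctr"
buildah commit "$ctr" my-minimal-image           # emit an OCI-format image
```

Because the build tool, not a daemon, owns the process, each step is an ordinary command you can script, inspect, and run without root-owned build infrastructure.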

As William and I discuss in our free e-book From Pots and Vats to Programs and Apps: How software learned to package itself (PDF), the larger point here is that OCI standardization has freed up a lot of innovation at higher levels of the software stack. Much of the image building, registry push and pull, and container runtime work is now automated by higher-level tools like OpenShift.

Container security at many levels

Container security happens at many levels; Daniel Oh counts 10 layers of Linux container security. It starts at the familiar infrastructure level, where technical features like SELinux, cgroups, and seccomp come in. Platform security is just one reason I say the operating system matters even more in 2017 across many aspects of containers.
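Many of those infrastructure-level knobs surface directly in a pod spec. The fragment below is a hypothetical hardening sketch (pod name and image are illustrative; the seccomp annotation shown is the alpha form used in 2017-era Kubernetes):

```yaml
# Hypothetical pod fragment: infrastructure-level hardening settings.
apiVersion: v1
kind: Pod
metadata:
  name: hardened-example
  annotations:
    # Apply the runtime's default seccomp syscall filter (2017-era alpha annotation).
    seccomp.security.alpha.kubernetes.io/pod: runtime/default
spec:
  containers:
  - name: app
    image: nginx:1.13
    securityContext:
      runAsNonRoot: true               # refuse to start the process as root
      readOnlyRootFilesystem: true     # make the root filesystem immutable
      allowPrivilegeEscalation: false
      capabilities:
        drop: ["ALL"]                  # drop all Linux capabilities
```

SELinux labeling and cgroup limits are applied by the platform underneath these settings, which is why the host operating system remains part of the security story.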

However, Daniel also observes that there are many other container layers you need to consider. "What's inside your container matters," he writes, adding that "as with any code you download from an external source, you need to know where the packages originated, who built them, and whether there's any malicious code inside them."

Perhaps less familiar from traditional software development processes is securing the build environment: the software deployment pipeline itself. Daniel notes,

managing this build process is key to securing the software stack. Adhering to a 'build once, deploy everywhere' philosophy ensures that the product of the build process is exactly what is deployed in production. It's also important to maintain the immutability of your containers—in other words, do not patch running containers; rebuild and redeploy them instead.
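In Kubernetes terms, "rebuild and redeploy" usually means rolling out a freshly built image tag rather than patching in place. A minimal sketch, assuming a hypothetical deployment named `web` and registry path:

```
# Roll out a rebuilt image instead of patching running containers.
kubectl set image deployment/web web=registry.example.com/web:v2
kubectl rollout status deployment/web   # wait for the rolling update to finish
```

Because the running containers are replaced wholesale, what runs in production is always exactly what the build pipeline produced.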

Still other areas of concern include securing the Kubernetes cluster, isolating networks, securing both persistent and ephemeral storage, and managing APIs.

Onward to 2018

I expect all three of these areas to remain important topics in 2018. However, I think one of the biggest stories will be the continued expansion of the open source ecosystem around containers. The landscape document from the Cloud Native Computing Foundation gives some sense of the overall scope; it includes everything from container runtimes to orchestration to monitoring to provisioning to logging to analysis.

It's as good an illustration as any of the level of activity taking place in the open source communities around containers and the power of the open source development model to create virtuous cycles of activity.