Kubernetes 1.9 and a look inside the Kubernetes project

By Eric Chiang

Kubernetes is the highest-velocity cloud-related open source project, and its pace of development isn't slowing down. This week the project will ship Kubernetes 1.9, its latest release, coming three months after Kubernetes 1.8. The new version includes a number of updates, fixes, and new features, as you can see in the release notes. Many of these changes are "under the hood," however, so rather than diving into a feature checklist here, it's worth looking at the overarching goals driving the next phase of Kubernetes development.

As adoption increases, the Kubernetes project faces new challenges to scale and serve its growing contributor base. Rather than adding new features to "core," much of the current work involves refactoring the Kubernetes code base so that it is less monolithic. This effort has the dual purposes of making sub-projects easier to consume and maintain, and of improving the extensibility of Kubernetes itself.

Project health and splitting the monolith

Over the 1.9 release cycle, the Cloud Native Computing Foundation (CNCF) funded devstats.k8s.io, a project to collect metrics for GitHub repos under the Kubernetes organization. While the dashboards can be used to generate summaries of contributions by individuals or companies, this effort also provides insight into the development velocity of the project as a whole.

Notably, this data validates the ongoing goal to split the Kubernetes monolithic repository into smaller, more consumable projects. As Brian Grant (@bgrant0607), Google's lead Kubernetes architect, has observed, the merge rate of pull requests for the kubernetes/kubernetes project has not increased since version 1.0 in 2015. Even as the project has grown from 50 monthly contributors to 250, all growth in merge rate has occurred in other repositories.

The Kubernetes Go client was an early experiment in splitting out code from core in this way. Since then, new repos have been created for client generation, custom API servers, and API definitions, and full projects have been moved into their own workspaces. Some of this code isn't yet developed externally (rather, it's still synced from the monorepo), but identifying and formalizing dependencies has been a critical effort in paying down the technical debt of the Kubernetes codebase.

Improving extensibility

Besides moving code out of kubernetes/kubernetes into new workspaces, the Kubernetes project is committed to enabling feature development outside of core as well.

As of 1.9, many features are designed for this purpose. Kubernetes’s use of the Container Network Interface (CNI) has enabled a rich ecosystem of networking options. Custom Resource Definitions (CRDs) and aggregated API servers allow users to extend the API while preserving a familiar workflow for custom controllers and operators. Webhook plugins for authentication and authorization have allowed integrations with a variety of identity providers and policy engines. The list goes on and on.
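Custom Resource Definitions are a good illustration of this extensibility in practice. The sketch below shows a minimal CRD manifest in the `apiextensions.k8s.io/v1beta1` form available in this release; the `example.com` group and `Backup` kind are illustrative names, not a real project's API:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  # The name must match the spec below: <plural>.<group>
  name: backups.example.com
spec:
  group: example.com
  version: v1alpha1
  scope: Namespaced
  names:
    plural: backups
    singular: backup
    kind: Backup
```

Once a manifest like this is applied, the API server begins serving the new resource, and users can create, list, and watch `Backup` objects with the same kubectl workflows and client libraries they use for built-in resources, which is exactly the "familiar workflow" point above.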

Certain kinds of new features, particularly those that integrate directly with cloud providers, have started to require an extensibility point, instead of being implemented in core. This lifts the burden off of an already huge codebase, while also offering the benefit of not favoring one distribution or cloud provider over another.

Conformance

Not all of the changes to enable greater extensibility have been implemented in code. Extensibility is problematic if users can’t depend on a stable underlying platform for extensions and applications to interact with. Users should be able to rely on the same basic workflows and interact with the same core set of resources that they come to expect, regardless of the plugins they’re using.

To address this need, over the 1.9 release cycle the CNCF announced the Certified Kubernetes Conformance Program. Certification verifies that a Kubernetes installation tool or distribution provides the expected functionality agreed upon by the community. "Kubernetes" should offer users and enterprises basic guarantees of flexibility, portability, and confidence, and defining what counts as Kubernetes is critical to that effort. The initial announcement included 32 projects, all certified by the CNCF, including Tectonic, CoreOS's own Kubernetes offering.

The fact that 32 unique projects were certified speaks to the efforts across the Kubernetes community to maintain project health and extensibility, and helps ensure that the Kubernetes ecosystem will scale with Kubernetes adoption.

Next steps for future Kubernetes releases

There are still many components that need to be moved to external projects so that Kubernetes as a whole can continue to thrive. Examples include cloud-controller-manager, CSI for external storage, and the longstanding goal of moving kubeadm to its own repo. This work will take time.

Hopefully, however, Kubernetes releases will become more and more "boring" over time, as things are in the world of Linux. This will in no way reflect a community that is slowing down, but one that is accelerating and empowering the hundreds of developers who already work on Kubernetes. This process of cutting up repos and designing extensibility points is the next frontier for Kubernetes as a project, because it's essential for users to have the flexibility to build on and extend Kubernetes up the stack.

Ready to get started?

Tectonic, the enterprise Kubernetes solution delivered by CoreOS, will include Kubernetes 1.9 in a future release. In the meantime, if you are new to Tectonic or Kubernetes and are interested in trying it out, we suggest you download Tectonic Sandbox, a unique test and experimentation environment that you can install and run on your local macOS, Windows, or Linux machine. We think you'll agree it's the fastest way to get up and running with a complete Kubernetes demo environment that's suitable for non-production workloads.