Why CoreOS Builds with Open Source

• By Brandon Philips

CoreOS builds open source software. Why build with open source? Because the problem to be solved is massive, and innovation is needed at the macro level. It is estimated:

• 3,646,000,000 internet users exist today,

• alongside 29,000,000 software developers and IT practitioners,

• 238,975,082 internet users were added last year alone,

• and there are ~100,000,000 servers worldwide.

Clearly we, as software engineers and administrators, are outnumbered. Open source software is the key to making this proliferation an asset, by collaborating across a diverse environment on the hardest operational problems, such as maintaining security, reliability, and portability.

About three and a half years ago, we started working together to unlock the compute portability layer with a toolkit for distributed systems. With the introduction of Container Linux, Docker, Kubernetes, and more, an ecosystem based on software containers began.

Building Container Linux

CoreOS Container Linux, the open source operating system created for the world of containers, was started almost four years ago. It was designed with the point of view that containers push Linux in new ways, requiring the latest releases for security, performance, and wide platform support. So we built an OS capable of rapidly delivering the latest Linux kernel features, optimized especially for containerization.

Container Linux is widely adopted today and runs on any cloud or bare metal platform.
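
In practice, Container Linux machines are provisioned declaratively at first boot with Ignition. The following is a minimal sketch of an Ignition config that enables a systemd unit running a container; the unit name and image are placeholders, not taken from this post:

```json
{
  "ignition": { "version": "2.0.0" },
  "systemd": {
    "units": [
      {
        "name": "example-app.service",
        "enable": true,
        "contents": "[Unit]\nDescription=Run an example container\n\n[Service]\nExecStart=/usr/bin/rkt run docker://nginx\n\n[Install]\nWantedBy=multi-user.target"
      }
    ]
  }
}
```

Because the config is applied once on first boot, the machine's state is reproducible: the same config yields the same machine on any cloud or bare metal platform.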

CoreOS-created open source projects in the CNCF: rkt and CNI

Since then, we helped establish the Cloud Native Computing Foundation (CNCF) and donated key container technologies including rkt and CNI. These open source projects were started by CoreOS engineers and have helped ensure success of the container ecosystem.

The latest release of rkt is the first since the project was added to the CNCF, with big thanks to a community member who helped lead the release. The project added many new contributors in this release, and a new Xen-based stage1 was proposed.

CNI is a pluggable networking system for containers that makes it much easier for orchestrators like Kubernetes to support an array of network IPAM, SDN, and other schemes. CoreOS created CNI years ago to enable simple container networking interoperability across container solutions and compute environments. CNI has a thriving community of third-party networking solutions that users can choose from and plug into their Kubernetes container infrastructure. We are glad to see this widely adopted system accepted into the CNCF.
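
As a sketch of what this looks like in practice, a CNI network configuration file simply tells the container runtime which plugin to invoke and how to allocate addresses. The network name, bridge name, and subnet below are illustrative:

```json
{
  "cniVersion": "0.3.1",
  "name": "examplenet",
  "type": "bridge",
  "bridge": "cni0",
  "isGateway": true,
  "ipam": {
    "type": "host-local",
    "subnet": "10.22.0.0/16"
  }
}
```

Swapping in a different SDN is a matter of changing the `type` field to another plugin binary; the runtime and orchestrator are unaffected.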

Dedicated to the Prometheus monitoring engine

Monitoring infrastructure software and systems is key to creating a reliable production system and Prometheus is the de facto monitoring system for the Kubernetes ecosystem. Given its importance, CoreOS dedicates engineers to the project and has been pushing the scalability of the project forward in a meaningful way.

The community's focus on scalability is front and center in the upcoming Prometheus 2.0 release, which delivers a vastly improved time-series storage layer tailored to the transient nature of today's workloads.
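
For illustration, Prometheus pairs naturally with Kubernetes through built-in service discovery. A minimal configuration that discovers and scrapes every pod in a cluster might look like the following (the job name is illustrative):

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
```

As pods come and go, the discovered scrape targets update automatically, which is exactly the transient-workload pattern the new storage layer is designed for.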

Celebrating Clair 2.0

Clair is a security project created by CoreOS that performs static analysis of container images and correlates their contents with public vulnerability databases. Clair is at the heart of Quay Security Scanning, and it also powers many other projects that need container image scanning.

Today we celebrate the release of Clair 2.0, with the addition of Alpine Linux scanning and much more. This is a major milestone release thanks to the 43 contributors involved!
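
As a sketch, Clair's v1 API accepts container image layers for analysis; a layer-submission request body might look like the following. The digest and registry URL are placeholders, and the exact schema should be confirmed against the Clair API documentation:

```json
{
  "Layer": {
    "Name": "sha256:<layer-digest>",
    "Path": "https://registry.example.com/v2/myimage/blobs/sha256:<layer-digest>",
    "Format": "Docker"
  }
}
```

Once a layer is indexed, its features can be queried back along with any known vulnerabilities, which is how integrating projects surface scan results.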

Kubernetes State of the Union

We are dedicated to building a powerful Kubernetes developer community alongside the wider community of customers and partners. This last year the Kubernetes community has hit several important milestones, including several solid releases, a growing set of contributors, and the start of a Kubernetes Steering Committee to help organize the project's growth over the coming years.

Project Governance and Releases

The problem space Kubernetes is tackling is simply huge! And the pace of Kubernetes community growth is largely due to the community's ability to organize into small teams called Special Interest Groups (SIGs). These groups range from horizontal, cross-project concerns like cluster installation or authentication/authorization to more vertical concerns like Azure integration. However, when cross-SIG decisions need to be made, such as creating a new repo, cutting a new release, introducing a new API, or removing code that doesn't pass tests, it can be unclear whom to turn to for a final decision.

In the last couple of months the Kubernetes community created a Governance Bootstrap Committee to make recommendations on how to organize the project to tackle these sorts of decisions. Recently the Bootstrap Committee produced a number of recommendations and documents, most importantly the creation of a Kubernetes Steering Committee to tackle project organization full time. The plan is for this Kubernetes Steering Committee to be formed in early August.

Predictions for the Future

Finally, let's look forward to some features and capabilities that will be coming into Kubernetes in the next few releases. I am taking inspiration from the regular Linux Kernel predictions that Jon Corbet of LWN makes. However, my job is much easier than Jon's as the Kubernetes community has developed a comparatively more centralized system of feature and design planning. So, I have high confidence all of these predictions will come true.

Core deployment APIs are moving to stable: This is the easiest prediction, as many folks in the Kubernetes community are working hard to make it happen. The Kubernetes community has rapidly added critical APIs to the project to support workloads like long-running node services via DaemonSets, static stateful apps via StatefulSets, and rolling updates via Deployments. These APIs have been proven by users over many releases and are ready to move to stable.
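
As a sketch, a Deployment that performs rolling updates looks like the following; the name and image are illustrative, and the API group/version shown is the beta one current around this time, expected to graduate to stable:

```yaml
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx:1.13
          ports:
            - containerPort: 80
```

Updating the `image` field triggers a rolling update: new pods are brought up and old ones retired gradually, with no manual orchestration.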

Consistent and automated cluster configuration: Kubernetes has a number of moving parts, from the API server & scheduler to the kubelet & kube-proxy. Each of these components needs configuration to get required secrets and tunables. Today, this is done via command line flags; however, these are difficult to manage across components, require restarts to update, and are nearly impossible to version. Work is underway to make this configuration more consistent and easier to automate via configuration files that map to API objects, called component configs.
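
As a sketch of the component config idea, the kubelet's flags become fields of a versioned API object in a file; the values below are illustrative, and the group/version has changed across releases while this work is in flight:

```yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 0.0.0.0
port: 10250
clusterDomain: cluster.local
clusterDNS:
  - 10.96.0.10
```

Because this is an API object rather than a flag string, it can be stored in version control, validated server-side, and rolled out like any other Kubernetes resource.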

Extensions via TPRs and API Aggregation: The Kubernetes API has become a convenient system for people to build clustered applications. This is because the API has friendly command line tools, role-based access control, and can plug into many user identity systems. Naturally, people are building a number of applications that store their own resources in Kubernetes, called third-party resources (TPRs). A new API aggregation subsystem goes further, enabling extension APIs that can also do server-side validation, integrate with Kubernetes API versioning, and be backed by different datastores.
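
As a sketch, registering a third-party resource takes only a small manifest; the resource name and description here are illustrative:

```yaml
apiVersion: extensions/v1beta1
kind: ThirdPartyResource
metadata:
  name: cron-tab.stable.example.com
description: "A job that runs on a cron schedule"
versions:
  - name: v1
```

Once registered, objects of kind `CronTab` can be created, listed, and watched through the standard Kubernetes API machinery, with kubectl and RBAC working out of the box.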

Apps are being built on top of Kubernetes APIs: With the ability to extend the Kubernetes API, a number of applications have emerged that build on top of it. These include dozens of applications, from databases and certificate authorities to custom application-specific workflows. Administrators of these apps will be able to leverage all Kubernetes has to offer, including logging, debugging, and monitoring tools.

More monitoring-driven APIs with the Metrics API: It isn't just external applications that are enabled by the new API aggregation in Kubernetes. SIG Instrumentation is now creating a metrics API server that can scale with the larger and larger clusters users are building today. This will enable more monitoring-driven decision-making systems inside Kubernetes: from auto-scaling based on custom metrics to better "first-pass" administration tools and easier integration with time-series databases like Prometheus.
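
As a sketch of what custom-metric auto-scaling looks like, a HorizontalPodAutoscaler can target a per-pod metric served through the metrics API; the target name and metric name below are hypothetical, and the API group/version is the beta one from this era:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: hello-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: hello
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metricName: requests_per_second
        targetAverageValue: "100"
```

The controller queries the aggregated metrics API for `requests_per_second` across the target's pods and adjusts the replica count to hold the average near the target value.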

Working toward a bright future

Thank you to everyone working alongside us toward the success of these many key projects in the open source container ecosystem. We encourage you to join us on this journey of open source and distributed systems. Or, work with us! CoreOS is hiring.