Kubernetes continues its evolution with version 1.8, announced today on the Kubernetes blog. This release promotes Role-Based Access Control (RBAC) to general availability and includes a stable release of CRI-O, the lightweight container runtime for Kubernetes. You'll also find improvements to the Kubernetes CLI, cluster stability, service automation, and more.

If you are not familiar with Kubernetes, it is a system for automating the deployment, scaling, and management of containerized applications. Red Hat builds on Kubernetes to produce its enterprise-grade container platform, Red Hat OpenShift Container Platform. Kubernetes 1.8 delivers additional stability, security improvements, and simplicity to the upstream project.

Red Hat is proud to have worked on a number of features in this milestone alongside other community members. Red Hat co-leads API Machinery, Auth, Autoscaling, Big Data, Command Line Interface (CLI), Network, Node, OpenStack, Service Catalog, Storage, Container Identity Working Group, and the Resource Management Working Group...so it’s pretty hard to miss us! Specifically for Kubernetes 1.8, Red Hat engineering focused on the following areas:

Workload Diversity

Extensibility

Security Improvements

Cluster Stability

Service Automation

Workload diversity

Red Hat looked into two aspects of workload diversity in this release. The first relates to batch, or task-based, computing. Many of our customers are interested in moving batch workloads to their OpenShift clusters. We were able to land some alpha features around batch retries, back-off between failures, and other essential capabilities required for controlling large parallel or serial deployments. End users will also be glad to see that ScheduledJob has become CronJob and has moved to beta.
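As a quick illustration, a CronJob under the new beta API looks roughly like the manifest below; the schedule, image, and `backoffLimit` value are illustrative, not prescriptive:

```yaml
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: nightly-report
spec:
  schedule: "0 2 * * *"          # every night at 02:00
  jobTemplate:
    spec:
      backoffLimit: 4            # retry a failed Job up to 4 times
      template:
        spec:
          containers:
          - name: report
            image: busybox
            command: ["sh", "-c", "echo generating report"]
          restartPolicy: OnFailure
```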

As exciting as batch jobs are, we believe that this next topic is going to enable the next wave in cloud computing. The Resource Management Working Group recently took on a massive charter and was able to merge some alpha code in Kubernetes 1.8. You will want to keep an eye out for these features as they graduate from alpha over the coming releases.

Device Manager: provides access to hardware devices such as NICs, GPUs, FPGAs, InfiniBand, and so on.

CPU Manager: provides a way for users to request static CPU assignment via the Guaranteed QoS tier.

HugePages: provides a way for users to consume huge pages of any size.
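A pod consuming these resources might look like the sketch below. Note that the extended-resource and hugepages resource names shown follow the conventions these features settled on, and the exact names in the 1.8 alpha may differ:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: accelerated-workload
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    resources:
      limits:
        nvidia.com/gpu: 1        # device exposed via a device plugin
        hugepages-2Mi: 256Mi     # pre-allocated 2Mi huge pages
        memory: 256Mi
        cpu: "2"                 # integer CPU request in the Guaranteed
      requests:                  # QoS tier enables static CPU pinning
        nvidia.com/gpu: 1
        hugepages-2Mi: 256Mi
        memory: 256Mi
        cpu: "2"
```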

Extensibility

Thanks to work from the CLI special interest group, kubectl can now have plugins. This feature lets people extend kubectl without forking or recompiling it: a plugin can be written in any language, and new subcommands are added simply by placing an executable in a specific location on disk. All in all, a fantastic idea that has reached alpha status.
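Under the alpha design, a plugin is a small descriptor placed alongside its executable; a minimal sketch (the plugin name and script are illustrative, and the alpha layout may change):

```yaml
# ~/.kube/plugins/hello/plugin.yaml
name: "hello"
shortDesc: "Greets the caller"
command: "./hello.sh"    # any executable, written in any language
```

With `hello.sh` sitting next to the descriptor, the new subcommand is invoked as `kubectl plugin hello`.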

The Custom Resource Definitions we spoke about in Kubernetes 1.7 have moved to beta2 in this release. They allow the Kubernetes API to be extended with features not found in core Kubernetes while making those features look like first-class APIs to users.
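For example, registering a hypothetical CronTab resource takes only a short manifest; once applied, the new endpoint behaves like any other Kubernetes API:

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: crontabs.stable.example.com   # must be <plural>.<group>
spec:
  group: stable.example.com
  version: v1
  scope: Namespaced
  names:
    plural: crontabs
    singular: crontab
    kind: CronTab
```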

Security improvements: Role-based access control

OpenShift was one of the first Kubernetes solutions to offer multi-tenancy. With multi-tenancy comes a need for role-based access control (RBAC) over the cluster, a need that Red Hat has been focused on for some time. We are pleased to announce that RBAC v1 is now stable in Kubernetes 1.8. RBAC authorization is a direct port of the authorization system OpenShift has had since version 3.0, and it enables fine-grained control over access to the Kubernetes API.

Like anything in open source, the more people who work on a problem from different perspectives, the better the result. We are happy with the variety of out-of-the-box RoleBindings, which range from discovery roles and user-facing roles to framework component roles and controller roles. The integration with escalation prevention and node bootstrapping is excellent, and the ability to customize and expand RoleBindings and ClusterRoleBindings is first class.
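As a small example of the stable API, the pair of objects below grants a hypothetical user `jane` read-only access to pods in a single namespace:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: project-a
  name: pod-reader
rules:
- apiGroups: [""]                       # "" means the core API group
  resources: ["pods"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  namespace: project-a
  name: read-pods
subjects:
- kind: User
  name: jane
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io
```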

Cluster stability

Red Hat has a few offerings that result in us running many large Kubernetes clusters. We have OpenShift Online -- if you haven't stopped by, go launch a container for free at https://manage.openshift.com. We also have OpenShift Dedicated, where you can own your own personal cluster. Then there is our next-generation, collaborative code development platform at OpenShift.io. If Spring is your fancy, hit us up at Launch.OpenShift.io.

All of these, of course, are running on Kubernetes. What can I say? We love Kubernetes!

With so many hosted services running Kubernetes clusters, we have been able to observe Kubernetes at scale and invest in cluster stability. In the Kubernetes 1.8 development cycle, we worked on adding a client-side event spam filter to stop excessive traffic to the API server from internal cluster components. We also added the ability to limit events processed by the API server. Limits can be set globally on a server, per-namespace, per-user, and per-source+object. This is needed to prevent badly configured or misbehaving components from making a cluster unstable.
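The event limits are expressed as token buckets at each of those scopes. The sketch below follows the alpha EventRateLimit admission configuration design; field names and exact availability may vary by release:

```yaml
apiVersion: eventratelimit.admission.k8s.io/v1alpha1
kind: Configuration
limits:
- type: Server       # one global bucket for the whole API server
  qps: 50
  burst: 100
- type: Namespace    # an independent bucket per namespace
  qps: 10
  burst: 50
```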

We made monitoring improvements to the Kubernetes master that help platform operators better observe failures, see when the system is shedding load, and report accurate metrics about how Kubernetes is achieving its service-level objectives.

We also worked on allowing API consumers, especially those that must retrieve large sets of data, to fetch results in pages. This reduces the memory-allocation impact of very large LIST operations against the Kubernetes apiserver.
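The paging protocol itself is simple: the client passes a limit, and the server hands back a continue token until the list is exhausted. Here is a language-agnostic sketch of the client loop, using a stubbed list function rather than a real Kubernetes client:

```python
def list_chunked(list_fn, limit=500):
    """Collect all items by paging through a chunked LIST API.

    list_fn(limit, continue_token) must return (items, next_token),
    where next_token is None once the list is exhausted -- mirroring
    the limit/continue parameters on the Kubernetes apiserver.
    """
    items, token = [], None
    while True:
        page, token = list_fn(limit, token)
        items.extend(page)
        if token is None:
            return items

# A stub standing in for the apiserver: 1,050 "pods" served 500 at a time.
def fake_list(limit, token):
    start = int(token or 0)
    page = [f"pod-{i}" for i in range(start, min(start + limit, 1050))]
    next_token = str(start + limit) if start + limit < 1050 else None
    return page, next_token

all_pods = list_chunked(fake_list)
print(len(all_pods))  # 1050
```

Each request now allocates at most one page's worth of objects instead of the whole collection.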

CRI-O - Lightweight container runtime for Kubernetes

Kubernetes 1.8 expands options for choosing container runtimes via CRI-O, which is stable and passes all node and cluster end-to-end (e2e) tests in 1.8.

CRI-O is an implementation of the Kubernetes CRI (Container Runtime Interface) that enables the use of OCI (Open Container Initiative) compatible runtimes, and it moves in lock-step with the Kubernetes project. It allows Kubernetes to use any OCI-compliant runtime as the container runtime for running pods. Today CRI-O works with runc and Clear Containers, but any OCI-conformant runtime can, in principle, be plugged in.
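Pointing a kubelet at CRI-O only requires the generic remote-runtime flags; the socket path shown below is a common CRI-O default and varies by installation:

```
kubelet --container-runtime=remote \
        --container-runtime-endpoint=unix:///var/run/crio/crio.sock
```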

The aim is for CRI-O to be a lean and stable container runtime that includes improved security features and supports Kubernetes as its primary goal. We're happy to have worked with the wider community on developing CRI-O and look forward to continuing work on it as a "boring" piece of the container platform infrastructure.

Service automation

There is no use running a distributed cluster if you aren't going to run something on it! There are a few features in Kubernetes 1.8 that will help with services. One area of innovation has been Horizontal Pod Autoscaling (HPA), which enables Kubernetes to automatically scale the number of pods based on utilization. Initially, Kubernetes was limited to scaling on CPU usage, but work in the three areas below enables the feature to work with custom metrics, giving users much more flexibility in scaling workloads.

Custom metrics for HPA - enables scaling based on arbitrary metrics (rather than only CPU usage) and supports scaling by request percentages.

HPA Status Conditions - indicates the current status of, or blocking issues for, an HPA.

Monitoring Pipeline Metrics HPA API - provides an API for scaling based on built-in metrics beyond pod-based metrics, as well as on arbitrary metrics and request percentages.
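Concretely, the new autoscaling/v2beta1 API lets an HPA target a per-pod custom metric. In the sketch below, the metric name is hypothetical and would need to be served by a custom-metrics adapter in the monitoring pipeline:

```yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1
    kind: Deployment
    name: frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metricName: requests_per_second   # hypothetical custom metric
      targetAverageValue: "100"         # scale to keep ~100 req/s per pod
```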

Many production services require storage or data persistence. With alpha storage snapshotting, the community has figured out how to leverage underlying storage APIs to enable some clever application-level features. This includes exposing the ability in the Kubernetes API to create, list, delete, and restore snapshots on any underlying storage system that supports them. Developers can ultimately speed up testing and recovery through snapshotting.
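A snapshot request is expressed as an object referencing the claim to capture. The sketch below is illustrative only: the alpha snapshot CRDs come from the external-storage incubator, and the group/version and field names may differ from what is shown:

```yaml
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: db-snapshot
spec:
  persistentVolumeClaimName: db-data   # the PVC to snapshot
```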

Finally, the Kubernetes incubator service catalog project has reached agreement on the scope of its initial beta release. The service catalog cleans up the service discovery and selection user experience and adds automation around binding services together, while facilitating a programmatic way to consume services from outside and inside the cluster.
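In the proposed beta API, consuming a brokered service is a two-step flow: provision an instance, then bind to it to receive credentials in a Secret. A sketch of that flow, with a hypothetical service class and plan:

```yaml
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceInstance
metadata:
  name: my-db
  namespace: app
spec:
  clusterServiceClassExternalName: example-db   # offered by a broker
  clusterServicePlanExternalName: small
---
apiVersion: servicecatalog.k8s.io/v1beta1
kind: ServiceBinding
metadata:
  name: my-db-binding
  namespace: app
spec:
  instanceRef:
    name: my-db
  secretName: my-db-credentials   # credentials are delivered here
```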

What else have Red Hat contributors been working on with upstream Kubernetes lately? Swing by one of the special interest groups (SIGs) anytime to find out. As you can see, Red Hat is involved in many enterprise and open hybrid cloud projects across Kubernetes. We are part of a fantastic open source team that spans many companies, and none of the features above would have been possible without the constant support all members offer one another. We look forward to seeing you in three months for Kubernetes 1.9. Hold on tight!

Additional resources