In this article, we will focus on one of the most demanding topics in OpenStack deployment – its control plane. This refers to a set of OpenStack projects working together to provide cloud services to users. We will identify the most common technical issues of an OpenStack control plane, and present a solution based on containers and container orchestration.

OpenStack Control Plane Challenges

The very first challenge is how to deploy OpenStack. Several deployment tools are available, each with its own deployment model and architecture, but there is no standard, community-recommended tool or approach. These tools are also very prescriptive about how OpenStack services are installed on cluster nodes and how high availability (HA) for OpenStack services is implemented.

Managing and Maintaining OpenStack

Once OpenStack is deployed, the second major challenge is how to manage and maintain it. An OpenStack control plane consists of many moving parts, known as OpenStack projects, which are installed on different physical servers to achieve high availability. Configuring and managing them is difficult: you need to know where each OpenStack service is installed and how many instances of each service are currently running.

Because these services communicate with one another, you need to know the right order in which to start or stop them. This is especially important when performing planned maintenance or an upgrade. There is no standard way to apply OpenStack updates in production. Upgrading your cloud to a newer release can be so complex that it is often simpler and safer to create a new deployment and migrate workloads from the existing cloud to the new one.

Security Challenges

Another considerable challenge is security. With so many moving parts, both OpenStack services and standard GNU/Linux programs, securing the numerous integration points is an almost insurmountable goal. A better approach is to isolate each moving part, such as a program or service, in its own sandbox, and only then expose the endpoints required to integrate components. In this case, even if one service is compromised by an attacker, the impact on the other services will be minimal.

Monitoring the Services

Finally, OpenStack services have to be properly monitored in terms of liveness and resource consumption (such as CPU and RAM). If a service in the OpenStack control plane fails, it should be automatically restarted. If an entire physical server fails, all of its OpenStack services should be automatically restarted on other servers. Ideally, the OpenStack control plane should be able to automatically scale up or down, depending on actual or planned cloud load. If the CPU or RAM consumption of an individual service is too high, a new instance of that service should be automatically started in the OpenStack control plane.

Containerized OpenStack Control Plane

What is an OpenStack Release?

Generally speaking, an OpenStack release is just source code frozen at a given point in time. The first step in taking this source code to a production-grade OpenStack cluster is packaging it (for example, by using a pre-built package from a Linux distribution). To deploy OpenStack, configuration management projects such as OpenStack Puppet, Chef, Ansible, and Salt are available, in addition to OpenStack vendor tools. To address packaging and deployment, as well as the other challenges described above, the containerized OpenStack control plane offers an exciting new approach.

What is a containerized OpenStack control plane, and what are the benefits of such an approach?

In a containerized OpenStack control plane, instances of OpenStack services run in containers, managed, for example, by Docker. Since containers are lightweight, they can be started and terminated quickly, and a container adds almost no overhead compared to running a bare process; the same image can be used to run several containers. Containers isolate running processes, and only the required ports are exposed to the external world and to other containers. It is also possible to set CPU and RAM limits for a container and to monitor a container's liveness and its CPU and RAM consumption.

According to established best practices, a container should represent a single service (for example, the Nova API), and modifiable files and directories (for example, logs or persistent storage) should live outside the container, mounted into it as data volumes.
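To make this concrete, here is a minimal sketch of a single OpenStack service run as a container along these lines, expressed in Docker Compose format. The image name, port, paths, and limits are illustrative assumptions (Kolla publishes images with names along the lines of kolla/centos-binary-nova-api), not a tested production configuration:

```yaml
# docker-compose.yml -- illustrative sketch; the image name, paths, and
# resource limits are assumptions, not a tested production configuration.
version: "2"
services:
  nova-api:
    image: kolla/centos-binary-nova-api:2.0.0  # one service per container
    ports:
      - "8774:8774"                        # expose only the Nova API port
    volumes:
      - /etc/kolla/nova-api:/etc/nova:ro   # config lives outside the container
      - /var/log/kolla/nova:/var/log/nova  # logs on a host data volume
    mem_limit: 2g                          # RAM limit for the container
    cpu_shares: 512                        # relative CPU weight
    restart: unless-stopped                # restart the process if it fails
```

Because everything the service modifies, its configuration and logs, lives on the host, the container itself stays disposable: it can be stopped, replaced, or upgraded without losing state.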

Control Plane Orchestration

OpenStack Kolla is a relatively new project in the OpenStack Big Tent that provides production-ready Docker containers for an OpenStack control plane. Kolla provides tools to build the images locally, and prebuilt Kolla images are available in the Docker Hub image registry. The latest release tag is 2.0.0, corresponding to the Mitaka release.
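For illustration, here is a minimal sketch of the single configuration file that drives a Kolla-based deployment. It assumes the Mitaka-era globals.yml layout; the key names follow that era's defaults, and the values are placeholders for your environment:

```yaml
# /etc/kolla/globals.yml -- illustrative sketch; key names assume the
# Mitaka-era Kolla defaults, and the values are placeholders.
kolla_base_distro: "centos"        # base OS of the container images
kolla_install_type: "binary"       # images built from distribution packages
openstack_release: "2.0.0"         # image tag; 2.0.0 corresponds to Mitaka
kolla_internal_vip_address: "10.10.10.254"  # VIP for internal API endpoints
network_interface: "eth0"          # interface carrying control plane traffic
```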

OpenStack Kolla also provides a tool for upgrading an existing deployment, consisting of a set of containers and configuration data, to a new one. Its Ansible playbooks can deploy single-node and multi-node OpenStack clouds with a containerized control plane. However, the Ansible playbooks do not address all of the challenges identified at the beginning of this article, such as:

More flexible placement of containers on the available nodes

Autoscaling (scaling the number of containers up or down depending on cluster load)

Auto-healing (restarting failed containers)

Upgrades with zero downtime

A production-grade OpenStack control plane requires container orchestration to coordinate multiple containers in a cluster, where these containers work together as parts of a complex containerized application. The Kolla mission does not include container orchestration, so we need an additional tool.

Open Source Container Orchestration Tools

There are several open source container orchestration tools available for this task, such as Docker Swarm, Mesosphere Marathon, and the most popular of the bunch, Kubernetes. You can deploy a Kubernetes cluster on top of OpenStack using Heat or Kargo, so it is a natural fit for coordinating a containerized OpenStack control plane.

Kubernetes is an open source platform for automatically deploying, scaling, and operating complex containerized applications. Applied to an OpenStack control plane, Kubernetes:

Automates the initial placement of the OpenStack services across available nodes

Runs the required number of containers on different nodes

Automatically scales the number of containers for a specific service, based on the observed CPU and RAM consumption of the container or another provided metric

Automatically restarts a container if the process inside it fails

If an entire node fails, Kubernetes can start containers on the remaining nodes, making sure that the specified number of containers (replicas) exists in the cluster. Kubernetes supports rolling updates, updating containers one by one at a specified rate instead of taking down the entire service all at once. Kubernetes also provides other useful building blocks that can be used efficiently in a containerized OpenStack control plane, such as container networking (including load balancing for services) and network data volumes, as well as the orchestration needed for a highly available containerized control plane.
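As a sketch of how these pieces fit together for a single control plane service, here is what a Kubernetes Deployment for it might look like. It uses current Kubernetes API conventions; the image name, port, probe, resource figures, and host paths are illustrative assumptions rather than a recommended configuration:

```yaml
# nova-api-deployment.yml -- illustrative sketch; the image, port, probe,
# resource figures, and paths are assumptions, not a recommended setup.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nova-api
spec:
  replicas: 3                      # desired number of nova-api containers
  selector:
    matchLabels:
      app: nova-api
  strategy:
    type: RollingUpdate            # upgrade containers one by one
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: nova-api
    spec:
      containers:
      - name: nova-api
        image: kolla/centos-binary-nova-api:2.0.0  # illustrative Kolla image
        ports:
        - containerPort: 8774
        resources:
          requests:
            cpu: "500m"            # informs the scheduler's placement decision
            memory: 1Gi
          limits:
            cpu: "1"               # hard ceiling on CPU and RAM consumption
            memory: 2Gi
        livenessProbe:             # restart the container if the API stops answering
          tcpSocket:
            port: 8774
          initialDelaySeconds: 30
          periodSeconds: 10
        volumeMounts:
        - name: nova-config
          mountPath: /etc/nova
          readOnly: true
      volumes:
      - name: nova-config
        hostPath:
          path: /etc/kolla/nova-api   # config kept outside the image
```

The resource requests inform initial placement, the replica count and liveness probe give auto-healing, and the rolling update strategy replaces containers one by one during an upgrade. Autoscaling on CPU consumption can then be layered on with a HorizontalPodAutoscaler targeting this Deployment.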

What’s Next

The containerized control plane is a viable approach to simplifying OpenStack deployment and maintenance. The presence of the Kolla project in the OpenStack Big Tent shows that containers have found their niche among OpenStack operators, adding an operational layer on top of existing delivery mechanisms, such as OpenStack rpm and deb packages.