Deploying an OpenStack cloud on Kubernetes

Posted on July 10, 2018

I have always wanted a full-blown OpenStack environment at home but have never had the resources for it, nor the space for a full stack of blades.

It's handy to have a cloud at home for trying out different things. DevStack on a single machine is an option I could have gone with, but I have tried that before and it didn't really tickle my fancy. DevStack is fun to play with and I would recommend it to anyone looking to try out OpenStack over the short term. The idea of OpenStack on Kubernetes sounds a lot better to me though: Kubernetes looks after the OpenStack services and makes it very easy to scale up your OpenStack cluster.

There are so many different ways to deploy your own cloud these days. SUSE go with Chef cookbooks for their deployment solution, while Red Hat have backed Ansible to get the job done. The folks working on the OpenStack-Helm project have made it incredibly easy to get an OpenStack cloud up and running with Kubernetes and Helm. For those of you who haven't heard of Helm, it is used to streamline the management of applications on Kubernetes; Helm charts are packages of pre-configured Kubernetes resources. The OpenStack-Helm project provides Helm charts for a one-node solution and a highly available multi-node solution. The one-node solution satisfied my needs for my home cloud environment, so that is the one I am going to run through here. Hopefully I will get to try out the multi-node solution when I can get a couple more machines for it.

Unfortunately you'll need more than your standard run-of-the-mill desktop: 4 cores and 8 GB won't cut it here. For this deployment, I used an 8-core machine with 16 GB of RAM running Ubuntu 16.04. The procedure is pretty straightforward thanks to the folks working on the OpenStack-Helm project, but it does take a bit of time...
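If you want to check whether a machine is up to the job before kicking anything off, a quick script does it. This is only a sketch: the 8-core / 16 GB threshold is my own baseline from this deployment, not an official requirement.

```shell
# Pre-flight resource check. The 8-core / 16 GB minimum is an assumption
# based on the machine used in this post, not an official requirement.
check_resources() {
  local cores=$1 mem_gb=$2
  if [ "$cores" -ge 8 ] && [ "$mem_gb" -ge 16 ]; then
    echo "ok"
  else
    echo "insufficient"
  fi
}

# Report on the current host (Linux only).
if [ -r /proc/meminfo ]; then
  cores=$(nproc)
  mem_gb=$(( $(awk '/^MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
  echo "This machine: ${cores} cores, ${mem_gb} GB -> $(check_resources "$cores" "$mem_gb")"
fi
```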

Before we begin, we have to make sure the machine has been updated to the latest software and install a couple of dependencies.

sudo apt-get update
sudo apt-get upgrade
sudo apt-get install ca-certificates git make jq nmap curl uuid-runtime

Clone the openstack-helm-infra and openstack-helm repos from GitHub.

git clone https://github.com/openstack/openstack-helm-infra.git
git clone https://github.com/openstack/openstack-helm.git

All of the following scripts will be run from the openstack-helm directory on your machine. The content of each of these scripts is outlined in the documentation but they mainly just install the required services using Helm charts.

First up is a script that gets a one-node Kubernetes cluster up and running and deploys Helm, so that Helm charts can be used to deploy the OpenStack services in the later steps. This can take some time, so I would recommend making yourself a cup of tea.

./tools/deployment/developer/common/010-deploy-k8s.sh

Once Kubernetes and Helm are deployed successfully, the next script installs both the OpenStack and Heat clients using python-pip. These clients provide you with a command-line interface to your OpenStack cloud. This script also builds the Helm charts for all of the required OpenStack services.

./tools/deployment/developer/common/020-setup-client.sh

Deploy the ingress controller. This is the Kubernetes component that manages external access to the different services in the cluster.

./tools/deployment/developer/common/030-ingress.sh

For shared storage in your OpenStack cloud, you can go with either Ceph or the NFS provisioner. I went with the NFS provisioner, as I believed it was better suited to my needs.

./tools/deployment/developer/nfs/040-nfs-provisioner.sh

A database is required by each of the OpenStack services. MariaDB is the database deployed in this solution.

./tools/deployment/developer/nfs/050-mariadb.sh

Next up is a message bus. RabbitMQ is commonly used in OpenStack deployments.

./tools/deployment/developer/nfs/060-rabbitmq.sh

Deploy Memcached, a distributed memory object caching system used to reduce the load on the database services.

./tools/deployment/developer/nfs/070-memcached.sh

Run the following script to deploy Keystone, the well-known identity service that comes with OpenStack. It is used to authenticate users as well as the other services in the cloud.

./tools/deployment/developer/nfs/080-keystone.sh

Heat is the orchestration service that helps you automate application deployments to your cloud. Heat uses YAML templates that describe the required cloud resources.
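To give a flavour of what Heat consumes, here is a minimal template written out with a heredoc. It's a sketch only: the cirros image, m1.tiny flavor and public network are assumptions, and each must already exist in your cloud before the stack will launch.

```shell
# Write a minimal Heat template. The image, flavor and network names are
# assumptions -- substitute whatever actually exists in your cloud.
cat > cirros-stack.yaml <<'EOF'
heat_template_version: 2016-10-14
description: Minimal single-instance stack
resources:
  server:
    type: OS::Nova::Server
    properties:
      image: cirros
      flavor: m1.tiny
      networks:
        - network: public
EOF
```

Once the cloud is up, it would be launched with something like `openstack stack create -t cirros-stack.yaml demo-stack`.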

./tools/deployment/developer/nfs/090-heat.sh

The OpenStack dashboard, also known as Horizon, is deployed using the following script.

./tools/deployment/developer/nfs/100-horizon.sh

Deploy Glance, the image service. Images used to spin up instances are stored in Glance.

./tools/deployment/developer/nfs/120-glance.sh

Open vSwitch is a virtual switch that has become near-standard in OpenStack deployments. It allows users to create a large number of virtual networks within a single cloud.

./tools/deployment/developer/nfs/140-openvswitch.sh

After the Glance service is up and running, it is time to deploy Libvirt, which is used to interface with QEMU.

./tools/deployment/developer/nfs/150-libvirt.sh

This compute-kit script covers both the Nova service and the Neutron service. Nova is the compute service in OpenStack, while Neutron is the networking service, which looks after the creation of network objects such as routers.

./tools/deployment/developer/nfs/160-compute-kit.sh

Set up the gateway to the public network.

./tools/deployment/developer/nfs/170-setup-gateway.sh

After all of this, the OpenStack dashboard should be available at: https://horizon.openstack.svc.cluster.local

I managed to get a CirrOS instance up and running, which is always a good sanity check for a cloud deployment.
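For reference, that sanity check can be scripted roughly as below. This is a sketch rather than the exact commands I ran: the image URL is the standard upstream CirrOS one, and the image, flavor and server names are placeholders.

```shell
# Write the sanity-check steps to a script for later review. These are
# illustrative commands, not the exact ones used in this deployment.
cat > cirros-sanity.sh <<'EOF'
#!/usr/bin/env bash
set -euo pipefail
# Download the CirrOS image and register it with Glance.
curl -LO http://download.cirros-cloud.net/0.4.0/cirros-0.4.0-x86_64-disk.img
openstack image create cirros --file cirros-0.4.0-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --public
# Create a tiny flavor if one doesn't already exist, then boot an instance.
openstack flavor show m1.tiny >/dev/null 2>&1 || \
  openstack flavor create --ram 512 --vcpus 1 --disk 1 m1.tiny
openstack server create --image cirros --flavor m1.tiny test-instance
openstack server list
EOF
chmod +x cirros-sanity.sh
```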





With one small instance running and the OpenStack services deployed, the machine is already using nearly 8 GB of RAM, so if you can get a bigger machine, go for it!



You can easily see all the different services that are deployed by running:

kubectl get pods -n openstack

Or by checking the Kubernetes deployment resources:

kubectl get deployments -n openstack
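With this many pods, eyeballing the list for stragglers gets tedious. Here is a small sketch of filtering for pods that aren't Running yet; it is demonstrated on sample output, since the real command needs the live cluster.

```shell
# Sample output shaped like `kubectl get pods -n openstack`. Against a real
# cluster you would pipe the kubectl command itself into the awk filter.
sample='NAME            READY   STATUS    RESTARTS
keystone-api-0  1/1     Running   0
glance-api-0    0/1     Pending   0'

# Print the name of every pod whose STATUS column is not "Running",
# skipping the header row.
echo "$sample" | awk 'NR > 1 && $3 != "Running" {print $1}'
```

On the sample data this prints glance-api-0, the one pod still pending.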

I haven't hit any issues with the cloud since I got it up and running, so I'm pretty happy with it overall. If you have any questions on any of this, you can give me a shout on Twitter @br1ancarey.