Playing With VMs and Kubernetes

Some people play with vintage cars or collect rarities — I play with computers and software systems.


I recently purchased a refurbished rack server I want to try out as a Kubernetes cluster. My plan is to create a host that has several VMs plus some fundamental server elements.

This article assumes a basic knowledge of Linux and command lines and some networking experience as well. A lot of the steps are documented with gists from my account and have my username and other local information. Hopefully, you’ll be able to figure out what needs to be replaced for your installation.

In my plan, Kubernetes will exist in virtual machines (VMs). So the first thing I need to do is install a host with KVM. I’ll be using the daily build of Ubuntu 20.04 (until April, when it’s released).

While it is still in prerelease, you can download the install image here. Once it’s released, you should be able to get the install image from the normal Ubuntu download page. You should be able to run the installation with mostly default options. I had to manually configure the network as the DHCP server wasn’t working, but once the network adapter was configured, it was able to connect to the network and the internet.

When it asks for additional software to install, only choose the OpenSSH server. We want to keep the main host as clean as possible, so we’ll keep the software installed to a minimum. One nice thing about the new installation from Ubuntu is it’ll pull your public keys directly from GitHub, so any computers you have that are trusted by GitHub will be trusted by your new server.

Next, we can install KVM to allow VMs. Here are the steps I used to install KVM:
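In outline, the installation looks something like this on Ubuntu 20.04 (the package list is my reconstruction; adjust for your setup):

```shell
# Check that the CPU supports hardware virtualization (a non-zero count means yes).
egrep -c '(vmx|svm)' /proc/cpuinfo

# Install KVM, libvirt, and the VM-creation tooling.
sudo apt update
sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients \
  bridge-utils virtinst

# Let your user manage VMs without sudo (takes effect on next login).
sudo usermod -aG libvirt $USER
sudo usermod -aG kvm $USER
```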

Once we’ve installed all the components, we can test it out with the command virsh list --all to check that the basic KVM system is working. Then, we can create our first VM. Here are the steps I used for a basic VM:
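As a sketch, a basic VM can be created with virt-install (the VM name, sizes, and ISO path are my choices; adjust to taste):

```shell
# Create a VM named "k8s-vm" from the Ubuntu 20.04 live-server ISO.
sudo virt-install \
  --name k8s-vm \
  --memory 4096 \
  --vcpus 2 \
  --disk path=/var/lib/libvirt/images/k8s-vm.qcow2,size=20 \
  --cdrom ~/ubuntu-20.04-live-server-amd64.iso \
  --os-variant ubuntu20.04 \
  --network network=default \
  --graphics vnc

# After the install, confirm the VM is running and find its IP for SSH.
virsh list --all
virsh domifaddr k8s-vm
```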

You can check it out by SSHing to the new VM. You’ll notice the VM is on a different subnet (in my case, it’s on 192.168.122.0/24 versus my main LAN, 192.168.0.0/24). It’ll only be directly accessible from the main host, which can be used as a jump box.

A couple of other pieces of software I’ll want on the main host are nginx, to use as a reverse proxy for the services running on VMs, and Postfix, so services that require an SMTP server can send emails. For the nginx server, sudo apt install nginx will do the trick. To test that it’s installed, browse to http://<mainhostIP>/.

The standard Ubuntu configuration creates /etc/nginx/sites-available and /etc/nginx/sites-enabled, a convention I’ve also used in the past with the Apache HTTP server.

When first installed, there’s a default site that’s just a welcome screen. It resides in the sites-available directory and is linked from the sites-enabled directory. You should delete or unlink it from the sites-enabled directory.

The idea behind this setup is that you can keep dark (inactive) sites in sites-available and link them into sites-enabled when you want to expose them. You can use this convention, or you can just edit the /etc/nginx/nginx.conf file directly. I’ll set up a reverse proxy later, once I have a service to put behind it.

The next thing I want to set up is an SMTP server so services can send emails. This is a little more complicated, and if you can’t get it set up, it’s not the end of the world. These are the steps I followed:
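In broad strokes, relaying through Gmail looks like this (the Gmail address, password, and test recipient are placeholders; substitute your own):

```shell
# Install Postfix, SASL support, and a mail client for testing.
sudo apt install -y postfix libsasl2-modules mailutils

# Store the Gmail credentials (address and password are placeholders).
echo "[smtp.gmail.com]:587 you@gmail.com:your-password" | \
  sudo tee /etc/postfix/sasl_passwd
sudo postmap /etc/postfix/sasl_passwd
sudo chmod 600 /etc/postfix/sasl_passwd*

# Point Postfix at Gmail as a relay host with TLS and SASL auth.
sudo postconf -e "relayhost = [smtp.gmail.com]:587" \
  "smtp_sasl_auth_enable = yes" \
  "smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd" \
  "smtp_sasl_security_options = noanonymous" \
  "smtp_tls_security_level = encrypt"
sudo systemctl restart postfix

# Send a test message.
echo "test body" | mail -s "test subject" you@example.com
```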

You’ll have to set Gmail to allow less-secure access. Again, if you don’t care to do this, being able to send emails is a nice-to-have, not a necessity.

Now we have a VM, nginx, and Postfix installed. Let’s put Kubernetes on the VM. I’ll be using the microk8s distribution of Kubernetes, and the installation is pretty simple. Log onto your new VM, and run these steps.
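The steps come down to a snap install plus enabling a few add-ons (the add-on list reflects what this setup uses; there are others):

```shell
# Install microk8s on the VM.
sudo snap install microk8s --classic

# Let your user run microk8s without sudo (log out and back in afterwards).
sudo usermod -aG microk8s $USER

# Wait for the cluster to come up, then enable the add-ons we'll need.
microk8s.status --wait-ready
microk8s.enable dns storage metallb   # metallb prompts for an IP range
```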

While enabling the plugins, the metallb load balancer will ask for a range of IP addresses it can use. I used the subnet of the VM and gave it 15 IPs, so in my case that was 192.168.122.240–192.168.122.254.

To see all the processes running under your new Kubernetes system, you can use the command microk8s.kubectl get all --all-namespaces to get a listing. This is the basic system of components needed for what we’ll be doing.

One final thing is to get the configuration so we can communicate with our Kubernetes system from the main host. Use the command sudo microk8s.config , and copy the output from that command. Be aware that the output contains sensitive information on how to communicate with the microk8s cluster. Then, log out of the VM, and go back to the main host.

Kubernetes is a server system that’s controlled via an API. The standard way of controlling the API is the kubectl command-line tool.

In order for kubectl to talk to the correct Kubernetes server or cluster, you need to copy the cluster configuration to the ~/.kube/config file. This is why we copied the output of the microk8s.config command in the previous step. The kubectl command can be installed with sudo snap install kubectl --classic.

Next, you can create the ~/.kube/config file and copy the output directly into the file. That’ll set up the kubectl command to talk to your new cluster. You can test it out with kubectl get all --all-namespaces . It should give the same listing as you got on the VM.
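As a sketch, the hand-off can also be done over SSH in one go (<user> and <vmIP> are placeholders for your VM’s login and address):

```shell
# Pull the cluster config straight from the VM into ~/.kube/config.
mkdir -p ~/.kube
ssh <user>@<vmIP> sudo microk8s.config > ~/.kube/config
chmod 600 ~/.kube/config   # the file contains cluster credentials

# Verify: this should produce the same listing you saw on the VM.
kubectl get all --all-namespaces
```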

Let’s install something fun on our new system. Fun for me is Jenkins so I can run builds locally and deploy them to our new Kubernetes cluster. To install preconfigured systems, I’ll be using Helm, which must be installed first. A simple sudo snap install helm --classic command will do that. Then, we can install Jenkins.

Here are steps that can be used to install Jenkins from the Google Helm repo or the Bitnami Helm repo. I’ll be using the Google Helm repo.
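In outline, the installation looks like this (the stable repo was originally hosted by Google and now lives at charts.helm.sh; the release name jenkins is what the later kubectl commands assume):

```shell
# Add the stable chart repo and refresh the local index.
helm repo add stable https://charts.helm.sh/stable
helm repo update

# Install Jenkins with the release name "jenkins".
helm install jenkins stable/jenkins

# Wait for the Jenkins pod to come up.
kubectl get pods --watch
```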

But before we can access the new Jenkins server, we must give it a load balancer (even though it only has one pod by default), and we must add it to the nginx reverse proxy. Running kubectl edit service/jenkins will bring up the configuration of the Jenkins service (that is, its network access); change the spec.type to LoadBalancer, then save and exit.
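If you’d rather not edit interactively, the same change can be applied with kubectl patch (a sketch of the equivalent one-liner):

```shell
# Equivalent to changing spec.type by hand in "kubectl edit service/jenkins".
kubectl patch service jenkins -p '{"spec": {"type": "LoadBalancer"}}'

# metallb should now assign the service an external IP from its pool.
kubectl get service jenkins
```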

Now do a kubectl get services, and it should list the external IP address that Jenkins is running on. Now, you can configure the nginx reverse proxy. I’ll create a file called /etc/nginx/sites-enabled/jenkins (assuming you’ve deleted the default site as directed in a previous step). Here are the contents:
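A minimal configuration along those lines looks like this (the upstream IP is an example from my metallb pool, and port 8080 assumes the chart’s default service port; substitute your own values):

```nginx
upstream jenkins {
    # Replace with the external IP reported by "kubectl get services".
    server 192.168.122.240:8080;
}

server {
    listen 80;
    server_name jenkins;

    location / {
        proxy_pass http://jenkins;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```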

You should change the IP address in the upstream jenkins block to the one that was listed by the kubectl get services command. Then run sudo nginx -s reload to reload the nginx configuration.

Since we’re using virtual hosts, you’ll have to edit /etc/hosts on the machine you’ll be browsing from, adding an entry that binds the name jenkins to the IP of the main host. Then, you can browse to http://jenkins/login, and voilà!

You can log in with the username admin and the password retrieved with the kubectl get secret jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode command. As part of the installation, the Jenkins chart created a secret and stored it in Kubernetes for future access. Pretty handy!

To recap, we installed a virtual machine in an Ubuntu host, installed Kubernetes in the VM, installed Jenkins into Kubernetes, and exposed it to the LAN via nginx. In my next article, I’ll get some jobs running in our new Jenkins instance using Kubernetes pods as worker nodes.