I’ve been experimenting with running a Kubernetes node set up by kubeadm in a Vagrant machine on my Mac. It’s set up so I can access the cluster from my Mac using kubectl, just as if it were in the cloud. It seems to be working great, so here’s how I did it, but first let’s discuss why. (Or you can just grab the Vagrantfile if you prefer.)

Why not Minikube or Docker for Mac?

I want to be able to run tests and demos on Kubernetes running locally on my Mac — ideally with no need for network connectivity so that it’s safe to use in conference talks.

I’ve been happily using Minikube for many months, and in many respects it’s great. You can also run Kubernetes under Docker for Mac, and if you’re running application code within Kubernetes, both these solutions are easy to use.

However, a lot of what I’m doing at the moment relates to security settings that you might configure on your Kubernetes cluster. For example, when I’m working on kube-bench, a lot of the tests look at the parameters passed to the API Server executable. Neither Minikube nor Docker for Mac uses standard installation tools like kubeadm or kops that you might use for a production cluster, and they tweak parameters in ways that aren’t quite the same as on a regular production server, so they weren’t a good fit for my work.

In addition, Docker for Mac runs Kubernetes within a host machine based on LinuxKit. This has advantages in terms of size / efficiency, but it made my life more difficult if I wanted to examine things on that host machine. The final straw came when I had localkube (the executable in Minikube) crash on me during a meetup talk, rendering my demo unusable.

I decided that I’d be better off running exactly the same code that a Kubernetes user might run on a production cluster. And I couldn’t see any reason not to try running that in a regular Linux VM on my local machine.

The guest VM

At heart it’s really very simple:

A Vagrant machine with Ubuntu + Docker installed

Install and run kubeadm as per the official docs to set up a Kubernetes master node (after a couple of prerequisite steps)

Run Kubernetes as a single-node cluster (allowing application code to run on the master)

There was a little additional network setup required so that you can connect to this node from outside the VM (i.e. from a terminal on the Mac).

The basic box

Creating an Ubuntu box with Docker is very easy with just a couple of lines in the Vagrantfile:

config.vm.box = "bento/ubuntu-16.04"

config.vm.provision "docker"

Edit: I originally used “geerlingguy/ubuntu1604” as the box and it worked fine, but I changed when I learned that HashiCorp recommends Chef’s Bento boxes.

Network setup

As soon as a vagrant VM is set up it’s easy to SSH into it with the aptly-named vagrant ssh command. But I wanted to be able to run kubectl from my (Mac) desktop and have it act on this node, just as it would if it were a remote node in the cloud somewhere.

First I configured my Vagrantfile to use a private network:

config.vm.network "private_network", type: "dhcp"

This results in the VM having an extra network interface — it’s referred to as enp0s8 in this output from ifconfig (on this guest VM):

vagrant@vagrant:~$ ifconfig
docker0   Link encap:Ethernet  HWaddr 02:42:16:be:e2:b7
          inet addr:172.17.0.1  Bcast:172.17.255.255  Mask:255.255.0.0
          UP BROADCAST MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 B)  TX bytes:0 (0.0 B)

enp0s3    Link encap:Ethernet  HWaddr 08:00:27:4b:24:0e
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe4b:240e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:205667 errors:0 dropped:0 overruns:0 frame:0
          TX packets:97618 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:269969165 (269.9 MB)  TX bytes:6031791 (6.0 MB)

enp0s8    Link encap:Ethernet  HWaddr 08:00:27:57:8c:c2
          inet addr:172.28.128.4  Bcast:172.28.128.255  Mask:255.255.255.0
          inet6 addr: fe80::a00:27ff:fe57:8cc2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:87 errors:0 dropped:0 overruns:0 frame:0
          TX packets:16 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:16366 (16.3 KB)  TX bytes:2484 (2.4 KB)
          Interrupt:16 Base address:0xd240

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:183686 errors:0 dropped:0 overruns:0 frame:0
          TX packets:183686 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1
          RX bytes:39482388 (39.4 MB)  TX bytes:39482388 (39.4 MB)

The IP address 172.28.128.4 has been assigned by DHCP, and the assigned address could vary each time a VM is set up from this Vagrantfile. So I grab that IP address as part of my installation script, as we’ll need it when we install Kubernetes.

IPADDR=$(ifconfig enp0s8 | grep Mask | awk '{print $2}' | cut -f2 -d:)

echo This VM has IP address $IPADDR

(I suspect the interface name could also vary, but I haven’t looked into that yet.)
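You can sanity-check that grep/awk/cut pipeline without a VM by running it against a captured line of ifconfig output (the line below is taken from the listing above):

```shell
# A captured "inet addr" line from the ifconfig output shown earlier
line='          inet addr:172.28.128.4  Bcast:172.28.128.255  Mask:255.255.255.0'

# Same pipeline as the installation script, minus the ifconfig call:
# awk picks the second field ("addr:172.28.128.4"), cut strips the "addr:" prefix
IPADDR=$(echo "$line" | grep Mask | awk '{print $2}' | cut -f2 -d:)
echo "$IPADDR"   # → 172.28.128.4
```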

Kubeadm pre-requisites

I’ve added a couple of lines to the Vagrantfile to fulfil the prerequisite steps of turning off swap…

swapoff -a

…and configuring Kubernetes to use the same CGroup driver as Docker:

sed -i '0,/ExecStart=/s//Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=cgroupfs"\n&/' /etc/systemd/system/kubelet.service.d/10-kubeadm.conf

This is a bit of a hack because (a) it assumes the name of the driver is cgroupfs without checking, and (b) it doesn’t check whether anything is already defined in KUBELET_EXTRA_ARGS, but it will do for now.
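A slightly more defensive version would read the driver name from docker info instead of assuming cgroupfs. Here’s a sketch, demonstrated against a throwaway copy of the drop-in file; on the real VM, CONF would be /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:

```shell
# Throwaway stand-in for the kubelet systemd drop-in (illustration only)
CONF=$(mktemp)
printf '[Service]\nExecStart=/usr/bin/kubelet\n' > "$CONF"

# Ask Docker which cgroup driver it uses; fall back to cgroupfs if docker
# isn't available (e.g. when trying this snippet outside the VM)
DRIVER=$(docker info 2>/dev/null | awk -F': ' '/Cgroup Driver/ {print $2}')
DRIVER=${DRIVER:-cgroupfs}

# Same sed as above, but injecting the detected driver name
sed -i "0,/ExecStart=/s//Environment=\"KUBELET_EXTRA_ARGS=--cgroup-driver=${DRIVER}\"\n&/" "$CONF"
cat "$CONF"
```

This still doesn’t merge with a pre-existing KUBELET_EXTRA_ARGS line, but it removes the hard-coded driver name.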

Edit: One of the prerequisite steps is to apt-get install the Kubernetes components. If you want to specify a particular version, that’s easy — just change this line in the Vagrantfile, for example:

apt-get install -y kubelet=1.9.0-00 kubeadm=1.9.0-00 kubectl=1.9.0-00

Install Kubernetes with kubeadm

Installing Kubernetes is made very simple with kubeadm init. The only difference here is that we want to declare that this box has an additional IP address — the one that it is known by externally (i.e. from the Mac). We pass that in so that the API server certificates recognise that IP address as being valid for this node. If we don’t do this, when we connect from the Mac we’ll see error messages about the X.509 certificates not matching the host IP address.

NODENAME=$(hostname -s)

kubeadm init --apiserver-cert-extra-sans=$IPADDR --node-name $NODENAME
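On the finished cluster you can confirm the extra SAN landed in the certificate with openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text. The check itself can be demonstrated on any certificate; this sketch mints a throwaway self-signed cert with an extra IP SAN (the names and address are illustrative, and -addext needs OpenSSL 1.1.1 or later) and inspects it the same way:

```shell
# Mint a throwaway cert carrying an IP SAN, roughly what
# --apiserver-cert-extra-sans adds for the API server certificate
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo.key -out /tmp/demo.crt \
  -subj "/CN=kube-apiserver" \
  -addext "subjectAltName=IP:172.28.128.4,DNS:kubernetes"

# List the SANs; TLS verification fails unless the address
# you dial from kubectl appears in this list
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```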

Credentials

When the kubeadm installation is complete, there will be a set of admin credentials ready for the root user. I am copying these to the vagrant user so that if required, you can vagrant ssh into the box and start running kubectl commands without sudo.

sudo --user=vagrant mkdir -p /home/vagrant/.kube

cp -i /etc/kubernetes/admin.conf /home/vagrant/.kube/config

chown $(id -u vagrant):$(id -g vagrant) /home/vagrant/.kube/config

Vagrantfile

Putting it all together, here’s the final Vagrantfile:

Vagrantfile for a single-node Kubernetes installation

Running Kubernetes applications

If you bring up a box based on this Vagrantfile, you should have a node with Kubernetes up and running! But there are a couple of final steps required before your Kubernetes node will be ready to run your workloads and accept commands from your Mac’s terminal.

Install a pod network

As per the official docs you need to install a CNI-based pod network add-on. I went for Weave Net, and so far, so good. Use vagrant ssh to get onto the VM, where you can run kubectl commands.

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Allow pods to run on the master node

By default, master nodes get a taint that prevents regular workloads from being scheduled on them. Since we only have one node in this cluster, we want to remove that taint.

$ kubectl taint nodes --all node-role.kubernetes.io/master-

Copy credentials to your host machine and update IP address

If you want to run kubectl from outside the VM you need some Kubernetes credentials. One simple way to get them is to copy out the same admin credentials (e.g. cat the file on the VM, then copy-paste the output into a file called admin.conf on the Mac).

Before you can use these credentials you need to edit them to update the IP address so that it uses the address by which the host knows the VM — this is the address we got earlier. The installation script in the Vagrantfile will output it for you with a line:

This VM has IP address <address>

Edit the cluster.server address in admin.conf and replace the IP address with this one (leaving the port number in place).
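If you’d rather not edit by hand, a sed one-liner can patch the server line. This sketch simulates the relevant fragment of admin.conf so it can run anywhere; on the Mac you’d run just the sed line against the real file, and the addresses here are illustrative:

```shell
# Simulate the relevant fragment of admin.conf (illustration only)
printf 'clusters:\n- cluster:\n    server: https://10.0.2.15:6443\n' > admin.conf

# Swap the internal address for the VM's private-network address,
# leaving the port number in place (-i.bak keeps a backup copy)
IPADDR=172.28.128.4
sed -i.bak "s|server: https://[^:]*:|server: https://${IPADDR}:|" admin.conf
grep 'server:' admin.conf   # → server: https://172.28.128.4:6443
```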

Then you can use these credentials to access Kubernetes running inside the VM.

$ kubectl --kubeconfig ./admin.conf get nodes

NAME      STATUS    ROLES     AGE       VERSION
vagrant   Ready     master    11m       v1.10.0

Edit: improvements based on Mark Kahn’s very helpful response to this article — fixing the sed command, permanently turning swap off, and allowing for a different host name. Thanks!