You may find yourself sitting at home, thinking about software and what to do with that spare PC of yours. I’ve got an idea: set up a single-node, run-at-home Kubernetes cluster!

We need a ship (or a whale)

A little disclaimer to start with: the configuration here provides no recovery mechanisms since it is based on a single master node. If you want to set up a high availability cluster, I recommend you consult the excellent Kubernetes documentation.

But before we can install kubeadm itself, we first need a container runtime. Luckily, most recent Linux distros come with Docker available through the package manager. So depending on which distribution you run, consult your package manager and ask it for Docker.

sudo apt-get update
sudo apt-get install -qy docker.io

Even if your package manager ships an older version of Docker, don’t worry: Kubernetes works rather well with older versions.

We got a ship, we need a captain!

Installing Kubernetes is rather straightforward, but we first need to add the official Google package sources to our local listings.

curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg \
  | sudo apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
  | sudo tee /etc/apt/sources.list.d/kubernetes.list

If adding the key does not work, you may still need to install the apt-transport-https package. Now we can update our package manager and install the kubeadm, kubelet, and kubernetes-cni tools.

sudo apt-get update
sudo apt-get install -qy \
  kubelet kubeadm kubernetes-cni

Careful here, stormy waters ahead

But before you can actually use these tools, you need to further prepare the system for its new captain. Since Kubernetes does not work well with Linux swap files/partitions, it is strongly recommended to disable swap space.

The easiest way to do this is to edit your /etc/fstab, comment out the swap line(s), and reboot the system. After rebooting, we need the IP we want to advertise our cluster on. When you run this in a datacenter, you may pick the private networking IP provided by your hosting provider. When running this at home, you may have only one IP assigned by your router or hypervisor, so we will pick that one.
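If you prefer the shell over an editor, here is a small sketch of the same steps, assuming a Linux system with GNU sed; the sed pattern is an assumption about a whitespace-separated /etc/fstab, so double-check the file before rebooting:

```shell
# Turn off swap for the running system right away
sudo swapoff -a

# Comment out every swap entry in /etc/fstab so the change survives a reboot
# (keeps a backup at /etc/fstab.bak; assumes whitespace-separated fields)
sudo sed -i.bak '/\sswap\s/ s/^#*/#/' /etc/fstab
```

The swapoff call takes effect immediately, while the fstab edit makes it permanent.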

ip -c a

...
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> ...
    link/ether 52:54:00:7e:76:cb brd ff:ff:ff:ff:ff:ff
    inet 192.168.10.234/24 brd 192.168.10.255 scope global ens3
       valid_lft forever preferred_lft forever
    inet6 ...

When I run the command on my local machine, I see that I may use 192.168.10.234 in this case. One command is actually enough to tell Kubernetes to set up our cluster using bootstrapping!
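If you want to grab that address in a script instead of reading it off the screen, a small sketch (the interface name ens3 is an assumption; substitute whatever ip -c a shows on your machine):

```shell
# Print the IPv4 address of a given interface; ens3 is just an example name
IFACE=ens3
# -o prints one line per address; the fourth field is addr/prefix
ip -4 -o addr show "$IFACE" | awk '{print $4}' | cut -d/ -f1
```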

sudo kubeadm init \
  --pod-network-cidr=10.244.0.0/16 \
  --apiserver-advertise-address=192.168.10.234 \
  --kubernetes-version stable-1.10

If you want to add more nodes later on, I recommend saving the kubeadm join command printed on your screen. It allows an easy extension of your lonely one-node cluster.

kubeadm stores the configuration required to access the cluster via kubectl in a file under /etc/kubernetes/admin.conf. By copying it to our home directory, we can access the cluster without using sudo all the time.

mkdir -p ~/.kube/
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

If you want to access the cluster from another machine, you need to copy the configuration file to it.

scp <username>@<master-ip>:~/.kube/config kubeconfig
export KUBECONFIG=./kubeconfig

You may want to copy the kubeconfig file into $HOME/.kube/config to use it as your default kubectl configuration. Be careful: copying will overwrite any existing configuration there. As an alternative, you can use multiple files as your kubeconfig by separating the file paths with a : (e.g. KUBECONFIG=~/kubeconf1:~/kubeconf2 kubectl get nodes).
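To be on the safe side, here is a small sketch that backs up any existing configuration before copying the new one over (the kubeconfig file name matches the scp example above; the .bak suffix is my own choice):

```shell
# Keep a backup of any existing kubectl configuration before overwriting it
if [ -f "$HOME/.kube/config" ]; then
  cp "$HOME/.kube/config" "$HOME/.kube/config.bak"
fi
mkdir -p "$HOME/.kube"
cp kubeconfig "$HOME/.kube/config"
```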

We got a ship and a captain but we still need a sail

In our case the sail will be flannel, a popular layer 3 network fabric by CoreOS. Even though Kubernetes allocates an IP for each pod, it does not take responsibility for routing the traffic between pods. There are two approaches to solving this problem: either you use a software-defined network that encapsulates the traffic, or you set up physical infrastructure with real switches and routers, which is more performant but a little bit overkill for a home cluster. If you want to learn more about the Kubernetes network model, you can check out the documentation.

Setting up virtual networking used to be really complicated, but since we are living in the days of modern DevOps, we can set up virtual infrastructure by running a few commands (actually, just one).

kubectl apply -f \
  https://raw.githubusercontent.com/coreos/flannel/v0.10.0/Documentation/kube-flannel.yml

The supplied file does a lot of things I will not go deeper into: it sets up roles for RBAC (a security model for the Kubernetes API), and creates service accounts, configurations, and a DaemonSet. This is interesting because the container networking infrastructure then actually runs on top of Kubernetes itself.

The lonely master

Since we only have one node in our cluster, the master has to do all the work. To allow it to do the heavy lifting, we must remove the default master taint so that regular pods may be scheduled on it.

kubectl taint nodes --all node-role.kubernetes.io/master-

Now our master node is captain and cabin boy at the same time, how wonderful!

Running the dashboard

While kubectl is a nice way to interact with your cluster, you still may want to see those nice, green graphs. To get this experience, we can install the Kubernetes dashboard.

kubectl apply -f \
  https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

Since we use the secure RBAC role model, we need some more configuration files. First of all, we need a ServiceAccount and a ClusterRoleBinding.

cat > account.yaml << EOF
apiVersion: v1
kind: ServiceAccount
metadata:
  name: admin-user
  namespace: kube-system
EOF

cat > rolebinding.yaml << EOF
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: admin-user
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: admin-user
  namespace: kube-system
EOF

kubectl apply -f account.yaml
kubectl apply -f rolebinding.yaml

We can now fetch the secret token from kube-system.

kubectl -n kube-system describe secret \
  $(kubectl -n kube-system get secret | grep admin-user | awk '{print $1}') \
  | grep "token:"

token: eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.ey...

Copy the token to your clipboard; we can now forward the Kubernetes API to our development machine (I assume you have copied the kubeconfig to your local machine; otherwise you may need SSH port forwarding).

kubectl proxy

You can reach the dashboard via a very long, pretty URL (typically http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/). Use the token retrieved above to sign in.

Congratulations, you actually set up your own local Kubernetes cluster! Have fun experimenting and always remember to sail safe.