This guide uses a Hypriot blog entry as its starting point but goes further, showing the steps I took to get my Kubernetes cluster of three Raspberry Pi computers running. It also covers much of the troubleshooting I did to fill gaps and work around "bugs" in that blog entry and to apply it to my own setup.

Credit also goes to Alex Ellis's guide for some of the Kubernetes troubleshooting.

Assumptions

You have more than one Raspberry Pi, each with at least an 8GB SD card

You want all of your Raspberry Pis to have a static IP

You want all of your networking and orchestration over Wi-Fi rather than Ethernet

For troubleshooting, you have access to your router's administrative panel to see attached devices and their IP addresses

For further troubleshooting you may need Ethernet access for diagnosis and additional setup

Kubernetes Setup

Preparation

Clone my Raspberry Pi Kubernetes Cluster GitHub repository:

```bash
git clone https://github.com/johnwyles/raspberry-pi-kubernetes-cluster.git
cd raspberry-pi-kubernetes-cluster/hypriot/
```

Download the latest Hypriot disk image and note the version number (as of writing: 1.10.0):

```bash
curl -L -O "https://github.com/hypriot/image-builder-rpi/releases/download/v1.10.0/hypriotos-rpi-v1.10.0.img.zip"
unzip hypriotos-rpi-v1.10.0.img.zip
rm -f hypriotos-rpi-v1.10.0.img.zip
```

Create a file in this directory named files/wifi-device-init.yml with the contents below, editing for your specific network:

```yaml
hostname: pi1
wifi:
  interfaces:
    wlan0:
      ssid: "WIFI_SSID"
      password: "WIFI_PASSWORD"
```

Create a file in this directory named files/user-data.yml with the contents below, editing for your specific network; also make sure you update the ssh-authorized-keys section with your SSH public key:

```yaml
#cloud-config
hostname: pi1

apt_preserve_sources_list: true
manage_etc_hosts: true

resize_rootfs: true
growpart:
  mode: auto
  devices: ["/"]
  ignore_growroot_disabled: false

locale: "en_US.UTF-8"
timezone: "America/Los_Angeles"

package_update: true
package_upgrade: true
package_reboot_if_required: true
packages:
  - ntp

users:
  - name: pi
    primary-group: users
    shell: /bin/bash
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users,docker,adm,dialout,audio,plugdev,netdev,video
    ssh-import-id: None
    lock_passwd: true
    ssh-authorized-keys:
      - ssh-rsa AAAA_SSH_PUBLIC_KEY_NNNN

write_files:
  - content: |
      allow-hotplug wlan0
      iface wlan0 inet static
        address 192.168.1.101
        netmask 255.255.255.0
        gateway 192.168.1.1
        dns-nameservers 8.8.8.8 8.8.4.4
        wpa-conf /etc/wpa_supplicant/wpa_supplicant.conf
      iface default inet static
    path: /etc/network/interfaces.d/wlan0
  - content: |
      country=de
      ctrl_interface=DIR=/var/run/wpa_supplicant GROUP=netdev
      update_config=1
      network={
        ssid="WIFI_SSID"
        psk="WIFI_PASSWORD"
        proto=RSN
        key_mgmt=WPA-PSK
        pairwise=CCMP
        auth_alg=OPEN
      }
    path: /etc/wpa_supplicant/wpa_supplicant.conf

runcmd:
  - [ systemctl, restart, avahi-daemon ]
```

Open the file files/config.txt if you have any boot options you would like to specify:

```
disable_camera_led=1
dtparam=audio=on
enable_uart=0
gpu_mem=128
hdmi_force_hotplug=1
start_x=1
```

Install the Hypriot flash utility, changing the version to the latest (as of writing: 2.3.0):

```bash
brew install pv
brew install awscli
curl -LO https://github.com/hypriot/flash/releases/download/2.3.0/flash
chmod +x flash
sudo mv flash /usr/local/bin/flash
```
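Since each node needs its own hostname and static IP in its cloud-init user data, you can stamp the per-node files out of a template instead of editing by hand. This is a minimal sketch of that idea: the `HOSTNAME` and `NODE_IP` placeholder tokens, the `.tmpl` filename, and the tiny stand-in template are all inventions of this example, not part of the repository.

```shell
#!/usr/bin/env sh
# Sketch: generate one cloud-init user-data file per node from a template.
# HOSTNAME and NODE_IP are placeholder tokens invented for this example;
# in practice you would put them into a copy of your real user-data file.
set -eu
mkdir -p files

# A tiny stand-in template (your real one is the full cloud-config above).
cat > files/user-data.yml.tmpl <<'EOF'
#cloud-config
hostname: HOSTNAME
# static address used in /etc/network/interfaces.d/wlan0:
#   address NODE_IP
EOF

i=1
for host in pi1 pi2 pi3; do
  # Stamp the placeholders, writing e.g. files/user-data-pi1.yml
  sed -e "s/HOSTNAME/${host}/g" -e "s/NODE_IP/192.168.1.10${i}/g" \
    files/user-data.yml.tmpl > "files/user-data-${host}.yml"
  i=$((i + 1))
done
```

You would then point the flash utility's `--userdata` flag at the generated file for the card you are flashing.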

Flash the SD cards for each Raspberry Pi

Insert an SD card ready for formatting. On Mac, run diskutil list to find the SD card as a disk. Update the files/user-data.yml file created earlier with the information for each machine (e.g. IP address, hostname, username, etc.). Then run the following, changing the hostname each time; when prompted, make sure the disk the utility finds is the SD card you saw in diskutil list :

```bash
flash \
  --bootconf files/config.txt \
  --config files/wifi-device-init.yml \
  --userdata files/user-data.yml \
  --ssid "WIFI_SSID" \
  --password "WIFI_PASSWORD" \
  --hostname pi1 \
  hypriotos-rpi-v1.10.0.img
```

Insert the flashed Hypriot SD card into the Raspberry Pi and power it on. Leave the machine powered on for about 2 full minutes, power it off, and then power it back on. This is necessary because the first boot stages the initial cloud-init configuration; on the second boot that configuration is applied to the machine. You should be able to watch the machine show up in your router's attached devices. Ensure that it has the static IP address you set in files/user-data.yml . If the machine comes online with a dynamic IP address you did not expect, you will need to attach the Raspberry Pi to Ethernet and modify these files (beyond the scope of this writing - please use the comments below to discuss): /etc/hostname

/etc/hosts

/etc/network/interfaces.d/wlan0

/etc/ssh/sshd_config

/etc/wpa_supplicant/wpa_supplicant.conf

/home/pi/.ssh/authorized_keys
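Before moving on, it is worth confirming that each node's name actually resolves from your workstation (via avahi/mDNS or your router's DNS). This is a small sketch I use for that; `check_host` is a helper written for this guide, not a standard tool, and the hostnames match the ones used above.

```shell
#!/usr/bin/env sh
# Sketch: confirm each node's name resolves before attempting SSH.
check_host() {
  if addr=$(getent hosts "$1"); then
    echo "$1 resolves to: ${addr%% *}"
  else
    echo "$1 does not resolve yet - check your router's attached-devices list"
  fi
}

for host in pi1.local pi2.local pi3.local; do
  check_host "$host"
done
```

If a node does not resolve, that usually means cloud-init has not finished its second boot yet, or the static Wi-Fi configuration did not take.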

Setting up the cluster

ON ALL NODES SSH to the machine you have just set up in files/user-data.yml :

```bash
ssh -i ~/.ssh/YOUR_PRIVATE_KEY_FILENAME_HERE pi@pi1.local
```

ON ALL NODES Become root:

```bash
sudo su -
```

ON ALL NODES Install the Kubernetes repository and Kubernetes itself:

```bash
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list
apt-get update && apt-get install -y kubeadm
```

ONLY ON THE MASTER NODE Run the following commands:

```bash
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=cgroupfs"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF
mkdir -p /etc/systemd/system/docker.service.d
systemctl daemon-reload
systemctl restart docker
```

ONLY ON THE MASTER NODE Pre-pull the Kubernetes images:

```bash
sudo kubeadm config images pull -v3
```

Now initialize the Kubernetes cluster:

```bash
sudo kubeadm init --token-ttl=0
```

If you chose to use Flannel you will run this instead (substituting the IP address of your master node):

```bash
sudo kubeadm init --pod-network-cidr 10.244.0.0/16 \
  --apiserver-advertise-address=192.168.1.101 \
  --token-ttl=0
```

NOTE DOWN THE LAST LINE OF OUTPUT FROM ABOVE It should look like the following:

```bash
sudo kubeadm join 192.168.1.101:6443 --token 88duzufy.iic6qylefezykvmuwu \
  --discovery-token-ca-cert-hash sha256:dbcc7dfe885966a10483acde9dc536cb1215acd16a9b12fe04847efcaaba448
```

If this fails, try a sudo kubeadm reset and then run the init again. If the init is timing out waiting for the API server (common on a Raspberry Pi), relax the kube-apiserver liveness probe from a second shell while the init runs:

```bash
sudo sed -i 's/failureThreshold: 8/failureThreshold: 20/g' /etc/kubernetes/manifests/kube-apiserver.yaml && \
sudo sed -i 's/initialDelaySeconds: [0-9]\+/initialDelaySeconds: 360/' /etc/kubernetes/manifests/kube-apiserver.yaml
```

ONLY ON THE MASTER NODE You will want to run this to capture the Kubernetes configuration:

```bash
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
```

ONLY ON THE MASTER NODE Install a pod network. To use Weave run:

```bash
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```

Alternatively you can try Flannel:

```bash
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
```

ON ALL NODES BUT THE MASTER NODE You will want to run this (substituting the join line output from the kubeadm init above):

```bash
sudo kubeadm join 192.168.1.101:6443 --token 88duzufy.iic6qylefezykvmuwu \
  --discovery-token-ca-cert-hash sha256:dbcc7dfe885966a10483acde9dc536cb1215acd16a9b12fe04847efcaaba448
```

ONLY ON THE MASTER NODE Wait for 2 minutes and then run:

```bash
kubectl get nodes
```

You should see output like the following:

```
NAME   STATUS     ROLES    AGE   VERSION
pi1    NotReady   master   20h   v1.13.4
pi2    NotReady   <none>   19h   v1.13.4
pi3    NotReady   <none>   19h   v1.13.4
```

ONLY ON THE MASTER NODE Run the following command:

```bash
kubectl get pods --namespace=kube-system
```

To see if you get the expected output:

```
NAME                          READY   STATUS    RESTARTS   AGE
coredns-fb8b8dccf-hsbs2       1/1     Running   0          22m
coredns-fb8b8dccf-j8r5z       1/1     Running   0          22m
etcd-pi1                      1/1     Running   0          21m
kube-apiserver-pi1            1/1     Running   0          21m
kube-controller-manager-pi1   1/1     Running   0          21m
kube-proxy-5x27c              1/1     Running   0          19m
kube-proxy-lrbd9              1/1     Running   0          22m
kube-proxy-nx2gm              1/1     Running   0          19m
kube-scheduler-pi1            1/1     Running   0          21m
weave-net-5bwvt               2/2     Running   0          18m
weave-net-dp6z8               2/2     Running   0          18m
weave-net-nkrkq               2/2     Running   0          18m
```
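Rather than waiting a fixed two minutes and hoping, you can poll until the cluster responds. `retry_until` below is a small helper written for this guide (not part of kubectl or kubeadm); the kubectl usage at the end is only a comment because it needs a live cluster.

```shell
#!/usr/bin/env sh
# Sketch: retry a command until it succeeds or attempts run out.
retry_until() {
  attempts=$1; delay=$2; shift 2
  n=0
  until "$@"; do
    n=$((n + 1))
    if [ "$n" -ge "$attempts" ]; then
      return 1
    fi
    sleep "$delay"
  done
}

# Intended use on the master node once the cluster is initialized:
#   retry_until 60 10 kubectl get nodes
```

On slow SD cards the control plane can take well over two minutes to settle, so a patient poll like this is less frustrating than re-running commands by hand.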

Testing the cluster works

To verify that all the services and setup are running as we expect, we run a few more commands for validation.

ONLY ON THE MASTER NODE Run the following as the user pi :

```bash
kubectl get nodes
```

Given enough time you should see output like the following:

```
NAME   STATUS   ROLES    AGE   VERSION
pi1    Ready    master   10h   v1.14.0
pi2    Ready    <none>   22m   v1.14.0
pi3    Ready    <none>   22m   v1.14.0
```

SCP the file files/function.yml to the master node:

```bash
scp files/function.yml pi@pi1.local:~/
```

ONLY ON THE MASTER NODE Run the following command as the user pi :

```bash
kubectl create -f function.yml
```

And then run the following test:

```bash
curl -4 http://127.0.0.1:31118 -d "# test"
```

You should see the following output:

```
<h1>test</h1>
```

Finally, clean up the test:

```bash
kubectl delete -f function.yml
```
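On a Raspberry Pi the function's image can take a while to pull and start, so a bare curl right after kubectl create may fail. Here is a small sketch that polls the endpoint instead; `probe` is a helper written for this guide, and the URL/port are the ones assumed above from function.yml.

```shell
#!/usr/bin/env sh
# Sketch: poll the function endpoint until it responds.
probe() {
  url=$1; attempts=${2:-5}; delay=${3:-5}
  n=0
  while [ "$n" -lt "$attempts" ]; do
    # --fail makes curl exit non-zero on HTTP errors too
    if out=$(curl -4 -s --fail -d "# test" "$url"); then
      echo "$out"
      return 0
    fi
    n=$((n + 1))
    sleep "$delay"
  done
  return 1
}

# Example (on the master node): probe http://127.0.0.1:31118
```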

Reset back to initial status

ON ALL NODES Run the following suite of commands as root :

```bash
sudo kubeadm reset
systemctl stop kubelet && systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down && ip link delete cni0
ifconfig docker0 down
```

If you installed Flannel:

```bash
ifconfig flannel.1 down && ip link delete flannel.1
```
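If you tear the cluster down often, it is convenient to drive the reset from your workstation instead of logging in to each node. A minimal sketch, where `NODES`, `DRY_RUN`, and `reset_node` are conventions of this example only (and it only runs the kubeadm reset step, not the full cleanup above):

```shell
#!/usr/bin/env sh
# Sketch: run kubeadm reset on every node over SSH, with a dry-run preview.
NODES="pi1.local pi2.local pi3.local"
DRY_RUN=${DRY_RUN:-1}   # leave at 1 to preview; set DRY_RUN=0 to execute

reset_node() {
  cmd="ssh pi@$1 sudo kubeadm reset --force"
  if [ "$DRY_RUN" -eq 1 ]; then
    echo "would run: $cmd"
  else
    $cmd
  fi
}

for node in $NODES; do
  reset_node "$node"
done
```

The dry-run default is deliberate: a reset loop over all nodes is destructive, so previewing the commands first is cheap insurance.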

At this point you are done setting up Kubernetes. However, if you want to go further and set up OpenFaaS, I have outlined those steps below.

Setup OpenFaaS

Now we can set up our OpenFaaS service: