This is how I built a 4-node Kubernetes cluster on Ubuntu without using MAAS. Software used includes Kubernetes (1.16) and Ubuntu 18.04 LTS. The recommended minimum number of machines is 3, but they can be all virtual or a mix of VMs and bare metal. You'll get a cluster with one master and three nodes acting as workers. The cluster will support running most containers and features, including load balancing and persistent storage.

We will start with 4 machines. The master node needs a minimum of 4 GB of RAM and one OS disk. The three worker nodes will need at least 8 GB of RAM, an OS disk, and optionally one disk for storage. We'll start with the following minimally installed systems. They will need to have static IPs, with DNS entries being optional. The network subnet is 192.168.1.0/24 with a DHCP range of 192.168.1.50-192.168.1.150.

Kubernetes Master Node – (Hostname: k8s-master, IP: 192.168.1.40, OS: Ubuntu 18.04 LTS)

Kubernetes Worker Node 1 – (Hostname: k8s-worker-node1, IP: 192.168.1.41, OS: Ubuntu 18.04 LTS)

Kubernetes Worker Node 2 – (Hostname: k8s-worker-node2, IP: 192.168.1.42, OS: Ubuntu 18.04 LTS)

Kubernetes Worker Node 3 – (Hostname: k8s-worker-node3, IP: 192.168.1.43, OS: Ubuntu 18.04 LTS)

Step:1) Set Hostname and update hosts file

You would usually set the hostname and IP addresses during install, but if not: login to the master node and configure its hostname using the hostnamectl command,

draconpern@localhost:~$ sudo hostnamectl set-hostname "k8s-master"
draconpern@localhost:~$ exec bash
draconpern@k8s-master:~$

Login to the worker nodes and configure their hostnames respectively using the hostnamectl command,

draconpern@localhost:~$ sudo hostnamectl set-hostname k8s-worker-node1
draconpern@localhost:~$ exec bash
draconpern@k8s-worker-node1:~$
draconpern@localhost:~$ sudo hostnamectl set-hostname k8s-worker-node2
draconpern@localhost:~$ exec bash
draconpern@k8s-worker-node2:~$
draconpern@localhost:~$ sudo hostnamectl set-hostname k8s-worker-node3
draconpern@localhost:~$ exec bash
draconpern@k8s-worker-node3:~$

Add the following lines to the /etc/hosts file on all four systems,

draconpern@k8s-master:~$ sudo nano /etc/hosts

192.168.1.40    k8s-master
192.168.1.41    k8s-worker-node1
192.168.1.42    k8s-worker-node2
192.168.1.43    k8s-worker-node3

Edit the netplan file and change to a static IP, for example on the master. (Make sure you use spaces; all YAML requires spaces for indentation.)

draconpern@k8s-master:~$ sudo nano /etc/netplan/50-cloud-init.yaml

network:
  ethernets:
    eth0:
      addresses:
        - 192.168.1.40/24
      gateway4: 192.168.1.1
      nameservers:
        addresses:
          - 192.168.1.1
        search:
          - draconpern.local
  version: 2
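
After saving the file, apply the new network configuration (netplan apply is standard on Ubuntu 18.04; if you are connected over SSH on the address being changed, expect the session to drop), and optionally confirm the /etc/hosts entries resolve:

draconpern@k8s-master:~$ sudo netplan apply
draconpern@k8s-master:~$ ping -c 1 k8s-worker-node1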

Step:2) Install and Start Docker Service on Master and Worker Nodes

Run the below apt-get command to install Docker on the master node,

draconpern@k8s-master:~$ sudo apt-get install docker.io -y

Run the same apt-get command to install Docker on the worker nodes,

draconpern@k8s-worker-node1:~$ sudo apt-get install docker.io -y
draconpern@k8s-worker-node2:~$ sudo apt-get install docker.io -y
draconpern@k8s-worker-node3:~$ sudo apt-get install docker.io -y

Override the default docker unit file on every node. (For the reason why this is required, see https://kubernetes.io/docs/setup/production-environment/container-runtimes/)

draconpern@k8s-master:~$ sudo systemctl edit docker
draconpern@k8s-worker-node1:~$ sudo systemctl edit docker
draconpern@k8s-worker-node2:~$ sudo systemctl edit docker
draconpern@k8s-worker-node3:~$ sudo systemctl edit docker

You'll go into an editor with no content. Enter the following into it,

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --exec-opt native.cgroupdriver=systemd

Once the Docker packages are installed on all four systems, restart and enable the docker service using the below systemctl commands; these need to be executed on every node. (The restart is for changing the cgroup driver.)

:~$ sudo systemctl restart docker
:~$ sudo systemctl enable docker
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
:~$
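
To confirm the override took effect, docker info reports the active cgroup driver; after the restart it should read systemd instead of cgroupfs:

:~$ docker info | grep -i "cgroup driver"
Cgroup Driver: systemd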

The docker command should verify which Docker version has been installed,

:~$ docker --version
Docker version 18.09.7, build 2d0083d
:~$

Step:3) Configure Kubernetes Package Repository on Master & Worker Nodes

All the commands in this step need to be run on the master and worker nodes. Add the Kubernetes package repository key using the following command,

draconpern@k8s-master:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
draconpern@k8s-worker-node1:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
draconpern@k8s-worker-node2:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
draconpern@k8s-worker-node3:~$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -

Now configure the Kubernetes repository. At this point in time an Ubuntu 18.04 (Bionic Beaver) Kubernetes package repository is not yet available, so we will be using the Xenial Kubernetes repository,

:~$ sudo apt-add-repository "deb http://apt.kubernetes.io/ kubernetes-xenial main"
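
If apt-add-repository did not refresh the package index automatically on your system, update it manually before installing anything from the new repository:

:~$ sudo apt-get update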

Step:4) Disable Swap and Install Kubeadm on all the Nodes

All the commands in this step must be run on the master and worker nodes. You must disable swap on all nodes for k8s to install. Run the following command to disable swap temporarily,

:~$ sudo swapoff -a

You also need to disable swap permanently by commenting out the swapfile or swap partition entry in the /etc/fstab file. Use nano to edit the file and put a '#' at the beginning of the swap.img line,

:~$ sudo nano /etc/fstab

#/swap.img none swap
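
As an alternative to editing by hand, a one-liner can comment out every fstab line mentioning swap. This is a blunt sed invocation (it assumes no other entries contain the word "swap" and keeps a .bak backup); afterwards, free -h should show 0B of swap:

:~$ sudo sed -i.bak '/swap/ s/^#*/#/' /etc/fstab
:~$ free -h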

Now install the kubeadm package on all the nodes, including the master,

:~$ sudo apt-get install kubeadm -y

Once the kubeadm packages are installed successfully, verify the kubeadm version,

:~$ kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:34:01Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
:~$

Step:5) Initialize and Start Kubernetes Cluster on Master Node using Kubeadm

Use the below kubeadm command on the master node to initialize Kubernetes,

draconpern@k8s-master:~$ sudo kubeadm init --pod-network-cidr=10.244.0.0/16

In the above command you can use the same pod network, or choose your own if it overlaps with your physical network; keep the /16 subnet size. If the command is successful, you'll get instructions on copying the configuration and also a command line for joining computers to the cluster. Copy the join command into a text file for later use. Copy the configuration to your profile by running,

draconpern@k8s-master:~$ mkdir -p $HOME/.kube
draconpern@k8s-master:~$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
draconpern@k8s-master:~$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
draconpern@k8s-master:~$

Verify the status of the master node using the following command,

draconpern@k8s-master:~$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
k8s-master   NotReady   master   2m    v1.16.0

As we can see in the above output, our master node is not ready because we haven't installed a pod network yet.

Step:6) Deploy Calico as Pod Network from Master node and verify Pod Namespaces

We will deploy Calico as our pod network; it will provide the routing between cluster nodes and pod-to-pod communication. Download the calico.yaml manifest to the master node, then edit it.
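
A minimal way to fetch the manifest, assuming the Calico v3.9 manifest URL that was current around Kubernetes 1.16 (check the Calico docs for the release matching your cluster):

draconpern@k8s-master:~$ curl -O https://docs.projectcalico.org/v3.9/manifests/calico.yaml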

draconpern@k8s-master:~$ nano calico.yaml

Change the entry for CALICO_IPV4POOL_CIDR from 192.168.0.0/16 to 10.244.0.0/16. This should be the same CIDR as the one used in step 5. If you don't do this, the cluster will look like it works, but communication between pods on different hosts will fail.

- name: CALICO_IPV4POOL_CIDR
  value: "10.244.0.0/16"

Execute the following kubectl command to deploy the pod network from the master node,

draconpern@k8s-master:~$ kubectl apply -f calico.yaml

Output of the above command should be something like below,

draconpern@k8s-master:~$ kubectl apply -f calico.yaml
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.apps/calico-node created
serviceaccount/calico-node created
deployment.apps/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
draconpern@k8s-master:~$

Now verify the master node status and pod namespaces using the kubectl command,

draconpern@k8s-master:~$ sudo kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
k8s-master   Ready    master   11m   v1.16.0
draconpern@k8s-master:~$
draconpern@k8s-master:~$ kubectl get pods -A
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE
kube-system   calico-kube-controllers-6895d4984b-xs45k   1/1     Running   0          15m
kube-system   calico-node-756lr                          1/1     Running   0          15m
kube-system   coredns-5644d7b6d9-6hgww                   1/1     Running   0          15m
kube-system   coredns-5644d7b6d9-7l8vc                   1/1     Running   0          15m
kube-system   etcd-k8s-master                            1/1     Running   0          15m
kube-system   kube-apiserver-k8s-master                  1/1     Running   0          15m
kube-system   kube-controller-manager-k8s-master         1/1     Running   0          15m
kube-system   kube-proxy-2hbgd                           1/1     Running   0          15m
kube-system   kube-scheduler-k8s-master                  1/1     Running   0          15m
draconpern@k8s-master:~$

As we can see in the above output, the master node status has changed to "Ready" and all the pods in all the namespaces are in the running state, so this confirms that the master node is healthy and ready to form a cluster.

Step:7) Add Worker Nodes to the Cluster

In step 5, kubeadm printed a join command which we now need to run on the worker nodes. (Your token and hash will be different, and a token can always be regenerated.) Login to the first worker node (k8s-worker-node1) and run the following command to join the cluster,

draconpern@k8s-worker-node1:~$ sudo kubeadm join 192.168.1.40:6443 --token 1wx3sk.hjkd54juaxlov7d2 --discovery-token-ca-cert-hash sha256:5bc67b66720b048dea438578c9591cc5095f572c5dbf240aca0c3e0620a917f3

Similarly run the same kubeadm join command on the rest of the worker nodes,

draconpern@k8s-worker-node2:~$ sudo kubeadm join 192.168.1.40:6443 --token 1wx3sk.hjkd54juaxlov7d2 --discovery-token-ca-cert-hash sha256:5bc67b66720b048dea438578c9591cc5095f572c5dbf240aca0c3e0620a917f3
draconpern@k8s-worker-node3:~$ sudo kubeadm join 192.168.1.40:6443 --token 1wx3sk.hjkd54juaxlov7d2 --discovery-token-ca-cert-hash sha256:5bc67b66720b048dea438578c9591cc5095f572c5dbf240aca0c3e0620a917f3
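
If you've lost the join command, or the token has expired (kubeadm tokens are valid for 24 hours by default), you can print a fresh one from the master node:

draconpern@k8s-master:~$ sudo kubeadm token create --print-join-command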

Now go to the master node to check master and worker node status,

draconpern@k8s-master:~$ kubectl get nodes
NAME               STATUS   ROLES    AGE    VERSION
k8s-master         Ready    master   100m   v1.16.0
k8s-worker-node1   Ready    <none>   10m    v1.16.0
k8s-worker-node2   Ready    <none>   12m    v1.16.0
k8s-worker-node3   Ready    <none>   13m    v1.16.0
draconpern@k8s-master:~$

Step: 8) Install a Baremetal Load Balancer, MetalLB

We can install MetalLB directly from the yaml file,

draconpern@k8s-master:~$ kubectl apply -f https://raw.githubusercontent.com/google/metallb/v0.8.1/manifests/metallb.yaml

Create a configuration file, e.g. metallb-config.yaml, on k8s-master. Here I use a range of free IPs that is not used by any machine or by DHCP on the same network as the nodes: for this network we pick 192.168.1.200-192.168.1.250 to avoid the DHCP range of 192.168.1.50-192.168.1.150.

draconpern@k8s-master:~$ nano metallb-config.yaml

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 192.168.1.200-192.168.1.250

Apply the file,

draconpern@k8s-master:~$ kubectl apply -f metallb-config.yaml
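
Before testing, it's worth checking that the MetalLB controller and speaker pods came up (the metallb-system namespace comes from the manifest applied above):

draconpern@k8s-master:~$ kubectl get pods -n metallb-system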

Verify the load balancer works. First create a test deployment of nginx,

draconpern@k8s-master:~$ kubectl create deployment nginx --image=nginx

Create a file, e.g. test.yaml, with the following,

apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: nginx
  type: LoadBalancer

Then apply it with,

draconpern@k8s-master:~$ kubectl apply -f test.yaml

Get a list of services,

draconpern@k8s-master:~$ kubectl get svc
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.96.180.255   192.168.1.240   80:30752/TCP   21s

You can browse to 192.168.1.240 and you'll get an nginx page. Remove the test service,

draconpern@k8s-master:~$ kubectl delete -f test.yaml
draconpern@k8s-master:~$ kubectl delete deployment nginx

Step: 9) Setup Rook Ceph for Storage

Note: if you have a Synology, you should use NFS instead. At this point the cluster works but can only run stateless pods; the moment a pod is terminated, all the information in it is gone. To run stateful pods, for example StatefulSets, you need persistent storage. We'll use Rook Ceph to provide that. Install rook-ceph,

draconpern@k8s-master:~$ kubectl apply -f https://github.com/rook/rook/raw/release-1.1/cluster/examples/kubernetes/ceph/common.yaml
draconpern@k8s-master:~$ kubectl apply -f https://github.com/rook/rook/raw/release-1.1/cluster/examples/kubernetes/ceph/operator.yaml

Install the rbd command-line utility, needed to mount rbd volumes, on each worker node,

draconpern@k8s-worker-node1:~$ sudo apt-get install ceph-common
draconpern@k8s-worker-node2:~$ sudo apt-get install ceph-common
draconpern@k8s-worker-node3:~$ sudo apt-get install ceph-common

Next, download cluster.yaml on the master node.
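
The cluster.yaml example lives alongside common.yaml and operator.yaml in the Rook repository, so the following URL is a reasonable assumption for release-1.1 (verify the path against the Rook release you are using):

draconpern@k8s-master:~$ wget https://github.com/rook/rook/raw/release-1.1/cluster/examples/kubernetes/ceph/cluster.yaml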

Step: 10) Use Data Drive for Storage

If you have the optional data drive on the worker nodes, use this step; otherwise, go to step 11 to set up a directory for data storage. Run the following command on the nodes to wipe the partition table on the data drive (see the lsblk check below first). Warning: be sure to use the correct drive!! Note the block size of 1024, which removes a previous installation of ceph data. If you don't want to format the drive, use step 11 instead.
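
Before running the dd command below, double-check which device is the data drive. lsblk lists every block device with its size and mount points; the data drive should be the one with no partitions or mount points:

draconpern@k8s-worker-node1:~$ lsblk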

draconpern@~$ dd if=/dev/zero of=/dev/sdb bs=1024 count=1

Apply the default rook-ceph configuration,

draconpern@k8s-master:~$ kubectl apply -f cluster.yaml

You are done! Skip the next step and go to step 12 to create the storage class.

Step: 11) Use Directory for Storage

Edit cluster.yaml and find the directories lines. Uncomment the two lines by removing the # from the beginning. This will use the /var/lib/rook directory on each worker node for storage; you can change the path to whatever you want, but the directory should be created by root. Note: rook-ceph only supports ext4 and xfs. It will not work if the directory is on a btrfs volume.

draconpern@k8s-master:~$ nano cluster.yaml

directories:
- path: /var/lib/rook

Apply the cluster file,

draconpern@k8s-master:~$ kubectl apply -f cluster.yaml

Step: 12) Create the Storage Class and make it the Default for the Cluster.

draconpern@k8s-master:~$ kubectl apply -f https://github.com/rook/rook/raw/release-1.1/cluster/examples/kubernetes/ceph/csi/rbd/storageclass.yaml
draconpern@k8s-master:~$ kubectl patch storageclass rook-ceph-block -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
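
To confirm the patch worked, list the storage classes; rook-ceph-block should be flagged as the default:

draconpern@k8s-master:~$ kubectl get storageclass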

Step: 13) Verify and try it out

draconpern@k8s-master:~$ kubectl get nodes
NAME               STATUS   ROLES    AGE   VERSION
k8s-master         Ready    master   9d    v1.16.0
k8s-worker-node1   Ready    <none>   9d    v1.16.0
k8s-worker-node2   Ready    <none>   9d    v1.16.0
k8s-worker-node3   Ready    <none>   9d    v1.16.0

Follow this example to get a fully working application: https://kubernetes.io/docs/tutorials/stateful-application/mysql-wordpress-persistent-volume/
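
As a quicker smoke test than the full WordPress tutorial, a small PersistentVolumeClaim exercises the default storage class end to end. This is a minimal sketch with hypothetical names (test-pvc.yaml, test-pvc); kubectl get pvc should report it Bound within a minute or so:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

draconpern@k8s-master:~$ kubectl apply -f test-pvc.yaml
draconpern@k8s-master:~$ kubectl get pvc
draconpern@k8s-master:~$ kubectl delete -f test-pvc.yaml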

Step: 14) Prevent accidental upgrades

kubelet shouldn't be upgraded automatically. To make sure that doesn't happen, run the following on each node,

draconpern@~$ sudo apt-mark hold kubelet
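
The same reasoning arguably applies to kubeadm and kubectl, which should be upgraded in lockstep with the cluster rather than by unattended upgrades; holding all three is a common variation:

:~$ sudo apt-mark hold kubelet kubeadm kubectl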

Extra Bonus

Here are some common commands and tweaks you might want to try out.

Shell completion on master (writing to /etc/bash_completion.d requires root, hence the sudo tee):

:~$ echo "source <(kubectl completion bash)" | sudo tee -a /etc/bash_completion.d/kubectl

Allow running pods on the master:

:~$ kubectl taint nodes --all node-role.kubernetes.io/master-

Stop running pods on the master:

:~$ kubectl taint nodes k8s-master node-role.kubernetes.io/master=:NoSchedule

Need to pull images from a private registry? On the master and every node, edit /etc/docker/daemon.json and add insecure-registries,

:~$ sudo nano /etc/docker/daemon.json

{ "insecure-registries":[ "jenkins:5000" ] }

If k8s is having trouble terminating some pods, disable AppArmor so that Kubernetes can delete pods from Docker faster. This has to be done on the master and worker nodes,

:~$ sudo systemctl disable apparmor.service --now
