How to deploy Kubernetes HA cluster

Until very recently, deploying a Kubernetes HA cluster for production was a complicated job that often required additional tools. Now it is very easy with the updated version of kubeadm, which allows you to quickly deploy one without the need for tools like kops.

In this article I will show exactly how to deploy a Kubernetes HA cluster. With the new version it is just a few simple steps. In my case I used AWS EC2, as I only needed it to test this guide. However, the instructions below can be easily adjusted to any environment, from bare metal to any other cloud provider or your own private cloud.

If you experience any issues with these instructions, please let me know in the comments so we can try to solve them together. Or reach out to me on Upwork if you need assistance setting up or fixing a Kubernetes cluster.

Requirements

Minimum of 3 hosts for master servers, with root access. In this example I will be using the latest CentOS 7 image available from the AWS Marketplace. I will launch 5 EC2 instances: 3 for masters and 2 for workers. The requirement of minimum 3 hosts comes from the fact that a Kubernetes HA cluster uses etcd for storing and syncing configuration, and etcd requires a minimum of 3 nodes to ensure HA. In the general case, to tolerate n member failures etcd needs 2n+1 members, so odd member counts are what you want when you deploy a Kubernetes HA cluster.

Each master must have a minimum of 2 vCPU and 4 GB of memory, which translates to a t2.medium EC2 instance size on AWS. It is possible to run small clusters with smaller specifications; however, deploying a Kubernetes HA cluster for small systems makes little sense, and the bigger your cluster is, the more horsepower your masters will need.

An L4 Load Balancer for access to the Kubernetes API on port 6443. It must be L4 in order for us to be able to use the certificates automatically generated by kubeadm. It will be accessed by all nodes of your cluster. In the case of AWS, for the sake of security, I will be using not 1 but 2 ELBs: one internal ELB for nodes to access the API, and one publicly accessible ELB so that I can use kubectl from my localhost. The internal ELB has the DNS hostname internal-k8s-163415868.us-west-1.elb.amazonaws.com and the public ELB DNS hostname is public-k8s-934192612.us-west-1.elb.amazonaws.com. These hostnames go into the config file that we will use to deploy the Kubernetes HA cluster.

Internet access for package installations. I think this one should be rather obvious, so no additional explanations here 😉
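The 3-master minimum above follows directly from etcd's quorum rule: a cluster of N members stays available only while floor(N/2)+1 of them are up. A quick shell sketch of the arithmetic shows why even member counts buy you nothing extra:

```shell
#!/bin/sh
# etcd availability math: quorum = floor(N/2) + 1, so an N-member cluster
# tolerates N - quorum member failures before losing availability.
for n in 1 2 3 4 5; do
  quorum=$(( n / 2 + 1 ))
  tolerated=$(( n - quorum ))
  echo "members=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

With 2 members you tolerate zero failures, same as with 1, which is why 3 is the practical minimum for HA.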

Prepare masters and workers for installation

NOTE: The preparations below should be done on all hosts, both masters and workers.

Set hostnames and DNS resolution

NOTE: Technically, all hosts on which you deploy a Kubernetes HA cluster should be resolvable by a proper DNS server. However, this is out of scope for this tutorial, so we will instead do the old trick of editing /etc/hosts on each server.

First we configure hostnames for all members of the cluster. Those will be master1, master2, and master3 for our masters and worker1 through workerN for workers. On the first host, run:

$ sudo hostnamectl set-hostname master1

Repeat this on all other hosts with their respective hostnames.

Next we need to make sure that those hostnames are in /etc/hosts on all members. After editing it should look like below:

127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6

# k8s masters
10.0.1.97   master1
10.0.1.137  master2
10.0.1.30   master3

# k8s workers
10.0.2.248  worker1
10.0.2.99   worker2
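To confirm the edits took effect, a small loop like the following (just a convenience sketch; the hostnames are the example ones used in this guide) will flag any member that still does not resolve:

```shell
#!/bin/sh
# Check that every cluster member resolves on this node; getent consults
# /etc/hosts as well as DNS, so it verifies exactly what Kubernetes will see.
for h in master1 master2 master3 worker1 worker2; do
  if getent hosts "$h" > /dev/null; then
    echo "$h: resolves"
  else
    echo "$h: NOT resolvable - check /etc/hosts"
  fi
done
```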

Enable the needed kernel module and set kernel parameters:

NOTE: Depending on which Linux distribution you use, module and parameter names may be different. The names and parameters in this example are for CentOS 7.

Enable kernel module and make sure it is loaded on boot:

$ sudo modprobe br_netfilter
$ cat <<EOF | sudo tee /etc/modules-load.d/br_netfilter.conf
br_netfilter
EOF

Configure the parameters provided by the newly loaded module:

$ cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF

Load new parameters:

$ sudo sysctl --system

Verify the parameters are set correctly:

$ sudo sysctl net.bridge.bridge-nf-call-iptables

The output of above should be:

net.bridge.bridge-nf-call-iptables = 1
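If you want to verify everything on a node in one pass, a small sketch like this checks the module and both parameters together:

```shell
#!/bin/sh
# Convenience check: confirm br_netfilter is loaded and both bridge
# parameters are set to 1 on this node.
lsmod | grep -q '^br_netfilter' \
  && echo "br_netfilter: loaded" \
  || echo "br_netfilter: NOT loaded"
for p in net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables; do
  if [ "$(sysctl -n "$p" 2>/dev/null)" = "1" ]; then
    echo "$p = 1 (OK)"
  else
    echo "$p is not set to 1"
  fi
done
```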

Install Docker on all hosts

NOTE: The below commands are for CentOS. You can find commands specific to your distribution in the Docker documentation.

Install and enable Docker as per official documentation:

$ sudo yum install -y yum-utils
$ sudo yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
$ sudo yum install -y docker-ce
$ sudo systemctl enable docker
$ sudo systemctl start docker

Install and deploy Kubernetes HA cluster

NOTE: The below instructions are for CentOS 7. You can find commands specific to your distribution in the Kubernetes documentation.

Install Kubernetes tools

NOTE: This step should be done on all hosts, both masters and workers.

$ cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kube*
EOF
$ sudo yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
$ sudo systemctl enable kubelet

Initialize cluster on first master

NOTE: This step should be done ONLY on one master server

Create a kubeadm YAML configuration file. Let's call it kubeadm.yml:

apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
kubernetesVersion: stable
apiServer:
  certSANs:
  - "internal-k8s-163415868.us-west-1.elb.amazonaws.com"
  - "public-k8s-934192612.us-west-1.elb.amazonaws.com"
controlPlaneEndpoint: "internal-k8s-163415868.us-west-1.elb.amazonaws.com:6443"

Replace the hostnames above with the ones you have for your Load Balancer. If you use a single Load Balancer, you do not need 2 SANs. Since I am using 2 ELBs, I put both of their hostnames as SANs for the certificates, and the internal ELB, through which the hosts will talk to each other, goes into controlPlaneEndpoint with the API port 6443.
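Once the cluster is initialized, you can double-check that the generated API server certificate really carries both load balancer hostnames as SANs. The certificate path below is kubeadm's default location on a master:

```shell
#!/bin/sh
# Print the SAN list of the API server certificate (run on a master after
# 'kubeadm init'); both ELB hostnames from kubeadm.yml should appear here.
CRT=/etc/kubernetes/pki/apiserver.crt
if [ -f "$CRT" ]; then
  openssl x509 -in "$CRT" -noout -text | grep -A1 'Subject Alternative Name'
else
  echo "no certificate at $CRT yet - run this after kubeadm init"
fi
```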

Now we are ready to deploy the Kubernetes HA cluster configuration on the first master. This step may take some time depending on how fast your servers and network are: it will pull images from external repositories and generate all the certificates for the different parts of the cluster. At the end it will start containers with the relevant services by calling the API, which goes through the Load Balancer, so make sure that your Load Balancer is ready and reachable from the masters.

$ sudo kubeadm init --config=kubeadm.yml

The above command will produce a long wall of text. If successful, at the end you will see something similar to this:

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join internal-k8s-163415868.us-west-1.elb.amazonaws.com:6443 --token 91tbqa.px2vlt2hjglgk4jm --discovery-token-ca-cert-hash sha256:74b51fd2b99b8ff68426935733f4a17370648c512561e620c4376e73d8d4b892

IMPORTANT: Save the above output from your setup process to a file, as you will need it later.

Deploy kubectl configuration

I like to be able to control a Kubernetes cluster from my laptop; it provides a great degree of flexibility wherever I go. To do that, make sure you have kubectl installed. Then find the /etc/kubernetes/admin.conf file on the newly deployed master and copy it to your computer as ~/.kube/config. If you already have this file, you can either back it up or merge the two files (see the Kubernetes documentation for the syntax of multiple cluster definitions in one config).
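For the merge case, one possible sketch uses kubectl's built-in flattening; it assumes you copied admin.conf to your home directory, and it overwrites your config, so a backup comes first:

```shell
#!/bin/sh
# Merge an existing ~/.kube/config with the new cluster's admin.conf.
# KUBECONFIG may list several files; 'config view --flatten' emits them
# as a single self-contained file.
cp "$HOME/.kube/config" "$HOME/.kube/config.backup"
KUBECONFIG="$HOME/.kube/config:$HOME/admin.conf" \
  kubectl config view --flatten > /tmp/merged-kubeconfig
mv /tmp/merged-kubeconfig "$HOME/.kube/config"
```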

If you do not want to, or cannot, deploy it on your computer, you can run the commands from the kubeadm output on the already provisioned master to set up kubectl access there:

$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config

Deploy overlay network on first master

This step is done from the host where you configured kubectl:

$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Verify first server setup is ready

From the host where you deployed the kubectl configuration, check whether the first master has finished setting up and whether there are any errors:

$ kubectl get pod -n kube-system -w

You will see output similar to this:

NAME                              READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-bptgt          1/1     Running   0          24m
coredns-86c58d9df4-dfpmj          1/1     Running   0          24m
etcd-master1                      1/1     Running   0          23m
kube-apiserver-master1            1/1     Running   0          23m
kube-controller-manager-master1   1/1     Running   0          23m
kube-proxy-lspf2                  1/1     Running   0          24m
kube-scheduler-master1            1/1     Running   0          23m
weave-net-h75dt                   2/2     Running   0          28s

NOTE: All pods should be Running with no errors. If you executed this command too quickly after deploying the overlay network, some pods may still be getting created; just wait for them to finish, as this command shows live progress.

Congratulations! You have successfully deployed a Kubernetes cluster! Well, it is not HA yet. Let's make it so.

Copy certificates and configuration that are shared across all masters in cluster

NOTE: It is probably a good idea to keep a backup of the archive we create in this step, so you can always recover these certificates. They are a critical piece of your cluster and should be both kept secret and stored safely.

Create an archive of the shared files, certs.tar.gz:

$ sudo tar zcvf certs.tar.gz \
    /etc/kubernetes/admin.conf \
    /etc/kubernetes/pki/ca.crt \
    /etc/kubernetes/pki/ca.key \
    /etc/kubernetes/pki/sa.key \
    /etc/kubernetes/pki/sa.pub \
    /etc/kubernetes/pki/front-proxy-ca.crt \
    /etc/kubernetes/pki/front-proxy-ca.key \
    /etc/kubernetes/pki/etcd/ca.crt \
    /etc/kubernetes/pki/etcd/ca.key

Copy this archive to the remaining masters, and on each of them extract the certificates to where they belong:

$ sudo tar xvf certs.tar.gz -C /
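The copy itself can be done however you like; one possible sketch combining the copy and the extraction above, assuming SSH access as the centos user (the default on the AWS Marketplace CentOS image) and the hostnames we put into /etc/hosts earlier:

```shell
#!/bin/sh
# Distribute the shared certificates to the other masters and unpack them
# in place. Run from the first master, where certs.tar.gz was created.
for host in master2 master3; do
  scp certs.tar.gz centos@"$host":~/
  ssh centos@"$host" 'sudo tar xvf certs.tar.gz -C /'
done
```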

Join additional masters into cluster using kubeadm

The below command is the same as the join command from the long output of the first master setup, but it uses an additional argument, --experimental-control-plane, which is new to kubeadm and tells it to join this node as a master:

$ sudo kubeadm join internal-k8s-163415868.us-west-1.elb.amazonaws.com:6443 --token 91tbqa.px2vlt2hjglgk4jm --discovery-token-ca-cert-hash sha256:74b51fd2b99b8ff68426935733f4a17370648c512561e620c4376e73d8d4b892 --experimental-control-plane

As with the first master setup, the above command will produce long output. On success it will end with something like this:

This node has joined the cluster and a new control plane instance was created:

* Certificate signing request was sent to apiserver and approval was received.
* The Kubelet was informed of the new secure connection details.
* Master label and taint were applied to the new node.
* The Kubernetes control plane instances scaled up.
* A new etcd member was added to the local/stacked etcd cluster.

To start administering your cluster from this node, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

Run 'kubectl get nodes' to see this node join the cluster.

Verify your entire Kubernetes HA cluster is up

Same as before, from the machine where you configured kubectl, check pod status:

$ kubectl get pod -n kube-system -w
NAME                              READY   STATUS    RESTARTS   AGE
coredns-86c58d9df4-bptgt          1/1     Running   0          54m
coredns-86c58d9df4-dfpmj          1/1     Running   0          54m
etcd-master1                      1/1     Running   0          53m
etcd-master2                      1/1     Running   0          3m
etcd-master3                      1/1     Running   0          54s
kube-apiserver-master1            1/1     Running   0          53m
kube-apiserver-master2            1/1     Running   0          3m
kube-apiserver-master3            1/1     Running   0          55s
kube-controller-manager-master1   1/1     Running   1          53m
kube-controller-manager-master2   1/1     Running   0          3m
kube-controller-manager-master3   1/1     Running   0          55s
kube-proxy-8xttw                  1/1     Running   0          55s
kube-proxy-g6p5v                  1/1     Running   0          3m
kube-proxy-lspf2                  1/1     Running   0          54m
kube-scheduler-master1            1/1     Running   1          53m
kube-scheduler-master2            1/1     Running   0          3m
kube-scheduler-master3            1/1     Running   0          55s
weave-net-8xjgk                   2/2     Running   1          3m
weave-net-h75dt                   2/2     Running   0          30m
weave-net-zbqnt                   2/2     Running   0          55s

NOTE: All pods should be Running with no errors. If you executed this command too quickly after adding the masters, some pods may still be getting created; just wait for them to finish, as this command shows live progress.

Congratulations, you just successfully deployed a Kubernetes HA cluster. Despite the long description, it was rather simple and very portable across environments! Now we only have a few final steps to make this a complete guide.

Join workers to your cluster

Use the join command we got from setting up the first master to join any number of workers. Note that bootstrap tokens expire after 24 hours by default; if yours has expired, you can generate a fresh join command on any master with kubeadm token create --print-join-command.

$ sudo kubeadm join internal-k8s-163415868.us-west-1.elb.amazonaws.com:6443 --token 91tbqa.px2vlt2hjglgk4jm --discovery-token-ca-cert-hash sha256:74b51fd2b99b8ff68426935733f4a17370648c512561e620c4376e73d8d4b892

Check all nodes status

Use kubectl to check nodes status:

$ kubectl get nodes
NAME      STATUS   ROLES    AGE   VERSION
master1   Ready    master   1h    v1.13.1
master2   Ready    master   17m   v1.13.1
master3   Ready    master   14m   v1.13.1
worker1   Ready    <none>   1m    v1.13.1
worker2   Ready    <none>   1m    v1.13.1

You will see all your masters and joined workers in this list, along with their status, roles, age, and Kubernetes version.

Conclusion

With the newer version of kubeadm it has become much easier to deploy a Kubernetes HA cluster, which basically obsoletes the third-party tools that sprang up in the early days of overly complicated Kubernetes cluster deployment.
