A Kubernetes cluster set up

Kubernetes installation

Perform the following steps on both EC2 instances.

Update the packages list and upgrade the installed packages:

root@ip-10–0–0–112:~# apt update && apt -y upgrade

Add Docker and Kubernetes repositories:



root@ip-10-0-0-112:~# curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
OK
root@ip-10-0-0-112:~# add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
root@ip-10-0-0-112:~# curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
OK
root@ip-10-0-0-112:~# echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list
root@ip-10-0-0-112:~# apt update
root@ip-10-0-0-112:~# apt install -y docker-ce kubelet kubeadm kubectl

Or do everything with just one command:

root@ip-10-0-0-112:~# apt update && apt -y upgrade && curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add - && add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" && curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add - && echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list && apt update && apt install -y docker-ce kubelet kubeadm kubectl
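The official kubeadm installation guide also recommends pinning these packages so an unattended upgrade doesn't move the cluster to a new version; an optional step, not part of the original session:

root@ip-10-0-0-112:~# apt-mark hold kubelet kubeadm kubectl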

Hostname

Perform the following steps on both EC2 instances.

As far as I know, the following changes need to be done on Ubuntu only, and you can't use a hostname other than the one set by AWS (ip-10-0-0-102 in this example).

Check the current hostname:

root@ip-10–0–0–102:~# hostname

ip-10-0-0-102

Get it as a fully qualified domain name (FQDN):
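For example, it can be taken from the EC2 instance metadata service (one possible way; the local-hostname field returns the instance's private DNS name):

root@ip-10-0-0-102:~# curl -s http://169.254.169.254/latest/meta-data/local-hostname
ip-10-0-0-102.eu-west-3.compute.internal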

Set the hostname as FQDN:

root@ip-10-0-0-102:~# hostnamectl set-hostname ip-10-0-0-102.eu-west-3.compute.internal

Check now:

root@ip-10–0–0–102:~# hostname

ip-10-0-0-102.eu-west-3.compute.internal

Repeat on the worker node.

Cluster setup

Create a /etc/kubernetes/aws.yml file:

---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
networking:
  serviceSubnet: "10.100.0.0/16"
  podSubnet: "10.244.0.0/16"
apiServer:
  extraArgs:
    cloud-provider: "aws"
controllerManager:
  extraArgs:
    cloud-provider: "aws"

Initialize the cluster using this config:



root@ip-10-0-0-102:~# kubeadm init --config /etc/kubernetes/aws.yml
[init] Using Kubernetes version: v1.15.2
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
...
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [ip-10-0-0-102.eu-west-3.compute.internal kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.100.0.1 10.0.0.102]
...
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.502303 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.15" in namespace kube-system with the configuration for the kubelets in the cluster
...
[mark-control-plane] Marking the node ip-10-0-0-102.eu-west-3.compute.internal as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node ip-10-0-0-102.eu-west-3.compute.internal as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
...
Your Kubernetes control-plane has initialized successfully!
...
Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.0.0.102:6443 --token rat2th.qzmvv988e3pz9ywa \
    --discovery-token-ca-cert-hash sha256:ce983b5fbf4f067176c4641a48dc6f7203d8bef972cb9d2d9bd34831a864d744

Create a kubectl config file:

root@ip-10–0–0–102:~# mkdir -p $HOME/.kube

root@ip-10–0–0–102:~# cp -i /etc/kubernetes/admin.conf $HOME/.kube/config

root@ip-10–0–0–102:~# chown ubuntu:ubuntu $HOME/.kube/config
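Here the config lives in root's home but is chown'ed to ubuntu; if you also want to run kubectl as the ubuntu user, copy the same admin.conf into that user's home (a small sketch, assuming the default ubuntu user):

root@ip-10-0-0-102:~# mkdir -p /home/ubuntu/.kube
root@ip-10-0-0-102:~# cp /etc/kubernetes/admin.conf /home/ubuntu/.kube/config
root@ip-10-0-0-102:~# chown -R ubuntu:ubuntu /home/ubuntu/.kube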

Check nodes:

root@ip-10–0–0–102:~# kubectl get nodes -o wide

NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME

ip-10–0–0–102.eu-west-3.compute.internal NotReady master 55s v1.15.2 10.0.0.102 <none> Ubuntu 18.04.3 LTS 4.15.0–1044-aws docker://19.3.1

You can view your cluster's configuration using kubeadm config view:

root@ip-10-0-0-102:~# kubeadm config view
apiServer:
  extraArgs:
    authorization-mode: Node,RBAC
    cloud-provider: aws
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager:
  extraArgs:
    cloud-provider: aws
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io
kind: ClusterConfiguration
kubernetesVersion: v1.15.2
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.100.0.0/16
scheduler: {}

kubeadm reset

In case you want to fully destroy your cluster and set it up again from scratch, use kubeadm reset:

root@ip-10–0–0–102:~# kubeadm reset

And flush the iptables rules:

root@ip-10–0–0–102:~# iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
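Depending on the CNI plugin, there may also be leftover network interfaces to delete before re-initializing; a hedged example for Flannel (interface names can differ on your nodes):

root@ip-10-0-0-102:~# ip link delete cni0
root@ip-10-0-0-102:~# ip link delete flannel.1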

Flannel CNI installation

From the Master node execute:



root@ip-10-0-0-102:~# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created

Wait a minute and check nodes again:

root@ip-10–0–0–102:~# kubectl get nodes

NAME STATUS ROLES AGE VERSION

ip-10–0–0–102.eu-west-3.compute.internal Ready master 3m26s v1.15.2

STATUS == Ready, okay.
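You can also check that the Flannel DaemonSet pods started (assuming the app=flannel label from the manifest applied above):

root@ip-10-0-0-102:~# kubectl -n kube-system get pods -l app=flannel -o wide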

Attaching the Worker Node

On the Worker node, create a /etc/kubernetes/node.yml file with the JoinConfiguration:

---
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
  bootstrapToken:
    token: "rat2th.qzmvv988e3pz9ywa"
    apiServerEndpoint: "10.0.0.102:6443"
    caCertHashes:
      - "sha256:ce983b5fbf4f067176c4641a48dc6f7203d8bef972cb9d2d9bd34831a864d744"
nodeRegistration:
  name: ip-10-0-0-186.eu-west-3.compute.internal
  kubeletExtraArgs:
    cloud-provider: aws
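The token and caCertHashes here are the values printed by kubeadm init above. If the token has expired (by default it is valid for 24 hours), a new join command can be generated on the Master; a standard kubeadm command, not part of the original session:

root@ip-10-0-0-102:~# kubeadm token create --print-join-command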

Join this node to the cluster:



root@ip-10-0-0-186:~# kubeadm join --config /etc/kubernetes/node.yml
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING SystemVerification]: this Docker version is not on the list of validated versions: 19.03.1. Latest validated version: 18.09
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
...

Go back to the Master, check nodes one more time:

root@ip-10–0–0–102:~# kubectl get nodes

NAME STATUS ROLES AGE VERSION

ip-10–0–0–102.eu-west-3.compute.internal Ready master 7m37s v1.15.2

ip-10–0–0–186.eu-west-3.compute.internal Ready <none> 27s v1.15.2
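The worker's ROLES column shows <none>. That is purely cosmetic, but if you want a role displayed there you can set the corresponding node-role label yourself (optional; the label is just a convention):

root@ip-10-0-0-102:~# kubectl label node ip-10-0-0-186.eu-west-3.compute.internal node-role.kubernetes.io/worker=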

Load Balancer creation

And the last thing is to run a web service. Let's use a simple NGINX container and put a LoadBalancer Service in front of it; create an elb-example.yml file:

kind: Service
apiVersion: v1
metadata:
  name: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - name: http
      protocol: TCP
      # ELB's port
      port: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello
  template:
    metadata:
      labels:
        app: hello
    spec:
      containers:
        - name: hello
          image: nginx

Apply it:

root@ip-10-0-0-102:~# kubectl apply -f elb-example.yml
service/hello created
deployment.apps/hello created

Check Deployment :

root@ip-10–0–0–102:~# kubectl get deploy -o wide

NAME READY UP-TO-DATE AVAILABLE AGE CONTAINERS IMAGES SELECTOR

hello 1/1 1 1 22s hello nginx app=hello

ReplicaSet :

root@ip-10–0–0–102:~# kubectl get rs -o wide

NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR

hello-5bfb6b69f 1 1 1 39s hello nginx app=hello,pod-template-hash=5bfb6b69f

Pod:

root@ip-10–0–0–102:~# kubectl get pod -o wide

NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES

hello-5bfb6b69f-4pklx 1/1 Running 0 62s 10.244.1.2 ip-10–0–0–186.eu-west-3.compute.internal <none> <none>

And Services:

root@ip-10–0–0–102:~# kubectl get svc -o wide

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR

hello LoadBalancer 10.100.102.37 aa5***295.eu-west-3.elb.amazonaws.com 80:30381/TCP 83s app=hello

kubernetes ClusterIP 10.100.0.1 <none> 443/TCP 17m <none>

Check the ELB in the AWS Console:

Under Instances, here is our Worker node:

Let's recall how it works:

The AWS ELB routes traffic to the Worker Node, to the NodePort opened by the Service. On the Worker node, the NodePort Service routes it to the Pod's port (targetPort). And from the Pod's targetPort, traffic is routed to the container's port (containerPort). See the sketch below.
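To make that chain explicit, the same hello Service could spell out all three ports; a hypothetical variant (in the manifest above targetPort defaults to port, and Kubernetes picked the nodePort 30381 automatically in this run):

kind: Service
apiVersion: v1
metadata:
  name: hello
spec:
  type: LoadBalancer
  selector:
    app: hello
  ports:
    - name: http
      protocol: TCP
      # ELB's port
      port: 80
      # the Pod's port, must match the container's containerPort
      targetPort: 80
      # the port opened on every Worker Node, normally auto-assigned from the 30000-32767 range
      nodePort: 30381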

In the LoadBalancer description above we can see the following setting:

Port Configuration

80 (TCP) forwarding to 30381 (TCP)

Check the Kubernetes cluster services:

root@ip-10-0-0-102:~# kubectl describe svc hello

Name: hello

Namespace: default

Labels: <none>

Annotations: kubectl.kubernetes.io/last-applied-configuration:

{“apiVersion”:”v1",”kind”:”Service”,”metadata”:{“annotations”:{},”name”:”hello”,”namespace”:”default”},”spec”:{“ports”:[{“name”:”http”,”po…

Selector: app=hello

Type: LoadBalancer

IP: 10.100.102.37

LoadBalancer Ingress: aa5***295.eu-west-3.elb.amazonaws.com

Port: http 80/TCP

TargetPort: 80/TCP

NodePort: http 30381/TCP

Endpoints: 10.244.1.2:80

…

Here is our NodePort : http 30381/TCP

You can send a request directly to the Node.

Find a Worker node’s address:

root@ip-10-0-0-102:~# kubectl get node | grep -v master

NAME STATUS ROLES AGE VERSION

ip-10–0–0–186.eu-west-3.compute.internal Ready <none> 51m v1.15.2

And connect to the 30381 port:

root@ip-10-0-0-102:~# curl ip-10-0-0-186.eu-west-3.compute.internal:30381

<!DOCTYPE html>
<html>

<head>

<title>Welcome to nginx!</title>

…

Check if ELB is working:

root@ip-10–0–0–102:~# curl aa5***295.eu-west-3.elb.amazonaws.com

<!DOCTYPE html>

<html>

<head>

<title>Welcome to nginx!</title>

…

And a pod’s logs:

root@ip-10–0–0–102:~# kubectl logs hello-5bfb6b69f-4pklx

10.244.1.1 - - [09/Aug/2019:13:57:10 +0000] "GET / HTTP/1.1" 200 612 "-" "curl/7.58.0" "-"

AWS Load Balancer — no Worker Node added

While setting up this cluster and ELB I faced an issue where the Worker Node was not added to the AWS LoadBalancer when creating a LoadBalancer Kubernetes Service.

In such a case, check whether the ProviderID (--provider-id) is present in the node's settings:

root@ip-10-0-0-102:~# kubectl describe node ip-10-0-0-186.eu-west-3.compute.internal | grep ProviderID

ProviderID: aws:///eu-west-3a/i-03b04118a32bd8788

If there is no ProviderID, add it using kubectl edit node <NODE_NAME> by setting spec.providerID to aws:///eu-west-3a/<EC2_INSTANCE_ID>:
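For reference, this field lives under the Node's spec; a minimal sketch of what the edited object should contain, using the instance ID and availability zone from this example:

apiVersion: v1
kind: Node
metadata:
  name: ip-10-0-0-186.eu-west-3.compute.internal
spec:
  providerID: aws:///eu-west-3a/i-03b04118a32bd8788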

But it should be set automatically when a node is joined using the /etc/kubernetes/node.yml JoinConfiguration file with cloud-provider: aws set.

Done.