Kubernetes is an open-source system for automating deployment, scaling, and management of containerized applications. It groups the containers that make up an application into logical units for easy management and discovery. Kubernetes has many moving parts and countless ways to configure them — from the various system components, network transport drivers, and CLI utilities, not to mention applications and workloads.

In this post we’ll install Kubernetes 1.8 on a bare-metal machine running Ubuntu 16.04 in about 10 minutes. At the end you’ll be able to start learning how to interact with Kubernetes via its CLI, kubectl.

Prerequisites

At least 2 Ubuntu machines: one for the master and one for a worker

The API server and etcd together are fine on a machine with 1 core and 1GB RAM for clusters with tens of nodes. Larger or more active clusters may benefit from more cores. The other nodes can have any reasonable amount of memory and any number of cores, and they need not have identical configurations.

Install Docker

Installation

Install Kubernetes apt repo

$ apt-get update && apt-get install -y apt-transport-https
$ curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
$ echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" > /etc/apt/sources.list.d/kubernetes.list

Now update your package list with apt-get update.

Install kubelet, kubeadm and kubernetes-cni

The kubelet is responsible for running containers on your hosts. kubeadm is a convenience utility to configure the various components that make up a working cluster, and kubernetes-cni provides the networking components.

CNI stands for Container Network Interface, a spec that defines how network drivers should interact with Kubernetes.
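To make this concrete, here is the sort of minimal CNI network configuration a driver drops into /etc/cni/net.d/ on each host — this example mirrors what flannel installs, but the file name and exact values are illustrative and vary by driver and version:

```json
{
  "name": "cbr0",
  "type": "flannel",
  "delegate": {
    "isDefaultGateway": true
  }
}
```

The kubelet reads this directory to discover how to wire up pod network interfaces.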

$ apt-get update && apt-get install -y kubelet kubeadm kubernetes-cni

Initialize your cluster with kubeadm

From the docs:

kubeadm aims to create a secure cluster out of the box via mechanisms such as RBAC.

Docker Swarm provides an overlay networking driver by default — but with kubeadm this decision is left to us. The team is still working on updating its instructions, so I'll show you how to use the driver most similar to Docker's overlay driver: flannel, by CoreOS.

Prepare the host — notes for Kubernetes 1.8

If you are using Kubernetes 1.8+ then the following applies:

Swap must be disabled

You can check whether swap is enabled by running cat /proc/swaps. If a swap file or partition is enabled, turn it off with swapoff. You can make this permanent by commenting out the swap entry in /etc/fstab.
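Here is a sketch of the permanent fix, demonstrated against a sample fstab in /tmp so it is safe to try anywhere (on a real host you would run swapoff -a as root and edit /etc/fstab itself):

```shell
# Write a sample fstab containing a swap entry (contents are illustrative)
cat > /tmp/fstab.sample <<'EOF'
UUID=0a34-0149 / ext4 defaults 0 1
/swapfile none swap sw 0 0
EOF

# Comment out any uncommented line whose filesystem type is swap
sed -i 's/^\([^#].*[[:space:]]swap[[:space:]].*\)$/# \1/' /tmp/fstab.sample

# The swap entry is now commented out; the root filesystem line is untouched
cat /tmp/fstab.sample
```

On a real host, follow this with swapoff -a so the change takes effect immediately as well as after the next reboot.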

Flannel

Flannel provides a software-defined network (SDN), by default using the Linux kernel’s VXLAN support to tunnel packets between hosts.

Another popular SDN offering is Weave Net by Weaveworks.
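Should you prefer Weave Net over flannel, it too is installed with a single kubectl apply against a running cluster — this is the one-liner from the Weaveworks docs at the time of writing (check their site for the current URL before relying on it):

```
$ kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```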

You can find your private/public/datacenter IP address through ifconfig :

root@master:~# ifconfig eth0

eth0 Link encap:Ethernet HWaddr 66:9b:c7:29:a8:be

inet addr:10.133.15.28 Bcast:10.133.255.255 Mask:255.255.0.0

inet6 addr: fe80::649b:c7ff:fe29:a8be/64 Scope:Link

UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1

We’ll now advertise the Kubernetes API on the internal IP address — rather than the Internet-facing address.

Replace the value of --apiserver-advertise-address with the IP of your own host.

$ kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.133.15.28 --kubernetes-version stable-1.8

--apiserver-advertise-address determines which IP address Kubernetes should advertise its API server on.

--pod-network-cidr is needed for the flannel driver and specifies an address space for containers.

--skip-preflight-checks tells kubeadm to skip checking the host kernel for required features. If you run into issues on a host that has had kernel metadata removed, you may need to run with this flag.

--kubernetes-version stable-1.8 pins the version of the cluster to 1.8. If you want to use Kubernetes 1.7, for example, just alter the version. Removing this flag will use whatever counts as "latest".

Here’s the output we got:

[kubeadm] WARNING: kubeadm is in beta, please do not use it for production clusters.

[init] Using Kubernetes version: v1.8.1

[init] Using Authorization modes: [Node RBAC]

[preflight] Running pre-flight checks

[kubeadm] WARNING: starting in 1.8, tokens expire after 24 hours by default (if you require a non-expiring token use --token-ttl 0)

[certificates] Generated ca certificate and key.

[certificates] Generated apiserver certificate and key.

[certificates] apiserver serving cert is signed for DNS names [kubehost1 kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.100.195.129]

[certificates] Generated apiserver-kubelet-client certificate and key.

[certificates] Generated sa key and public key.

[certificates] Generated front-proxy-ca certificate and key.

[certificates] Generated front-proxy-client certificate and key.

[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"

[kubeconfig] Wrote KubeConfig file to disk: "admin.conf"

[kubeconfig] Wrote KubeConfig file to disk: "kubelet.conf"

[kubeconfig] Wrote KubeConfig file to disk: "controller-manager.conf"

[kubeconfig] Wrote KubeConfig file to disk: "scheduler.conf"

[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"

[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"

[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"

[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"

[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests"

[init] This often takes around a minute; or longer if the control plane images have to be pulled.

[apiclient] All control plane components are healthy after 55.504048 seconds

[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace

[markmaster] Will mark node kubehost1 as master by adding a label and a taint

[markmaster] Master kubehost1 tainted and labelled with key/value: node-role.kubernetes.io/master=""

[bootstraptoken] Using token: f2292a.77a85956eb6acbd6

[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials

[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token

[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster

[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace

[addons] Applied essential addon: kube-dns

[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run (as a regular user):

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:

http://kubernetes.io/docs/admin/addons/

You can now join any number of machines by running the following on each node as root:

kubeadm join --token f2292a.77a85956eb6acbd6 10.133.15.28:6443 --discovery-token-ca-cert-hash sha256:0c4890b8d174078072545ef17f295a9badc5e2041dc68c419880cca93d084098

Configure an unprivileged user-account

Some Ubuntu installations ship without an unprivileged user account, so let’s add one.

# useradd username -G sudo -m -s /bin/bash

# passwd username

Configure environmental variables as the new user

You can now configure your environment with the instructions at the end of the init message above.

Switch into the new user account with: sudo su username .

$ cd $HOME
$ sudo whoami
$ sudo cp /etc/kubernetes/admin.conf $HOME/
$ sudo chown $(id -u):$(id -g) $HOME/admin.conf
$ export KUBECONFIG=$HOME/admin.conf
$ echo "export KUBECONFIG=$HOME/admin.conf" | tee -a ~/.bashrc
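To confirm kubectl is picking up the new config, list the cluster’s nodes (a quick sanity check; the master will show NotReady until we apply a pod network in the next step):

```
$ kubectl get nodes
```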

Apply your pod network (flannel)

We will now apply configuration to the cluster using kubectl and two manifests from the flannel docs:

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/kube-flannel.yml
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset "kube-flannel-ds" created

$ kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/v0.9.1/Documentation/k8s-manifests/kube-flannel-rbac.yml
clusterrole "flannel" created
clusterrolebinding "flannel" created

Update: the flannel manifests on the master branch currently have a problem with newer Docker versions, so we pin to the v0.9.1 manifests above.

We’ve now configured networking for pods.

Allow a single-host cluster

Kubernetes is designed for multi-host clustering — so by default containers cannot run on master nodes. Since we only have one node, we’ll remove the master taint so that it can run containers for us.

$ kubectl taint nodes --all node-role.kubernetes.io/master-

An alternative at this point would be to provision a second machine and use the join token from the output of kubeadm init.

Check it’s working

Many of the Kubernetes components run as containers on your cluster in a dedicated namespace called kube-system. You can check whether they are working like this:

$ kubectl get all --namespace=kube-system

NAME                                    READY     STATUS    RESTARTS   AGE
po/etcd-k8s-master                      1/1       Running   0          2m
po/kube-apiserver-k8s-master            1/1       Running   0          2m
po/kube-controller-manager-k8s-master   1/1       Running   0          2m
po/kube-dns-545bc4bfd4-vwtxh            3/3       Running   0          2m
po/kube-flannel-ds-4vknf                1/1       Running   0          2m
po/kube-flannel-ds-sbtpl                1/1       Running   0          2m
po/kube-proxy-hv4tn                     1/1       Running   0          2m
po/kube-proxy-kmqlv                     1/1       Running   0          2m
po/kube-scheduler-k8s-master            1/1       Running   0          2m

NAME           CLUSTER-IP   EXTERNAL-IP   PORT(S)         AGE
svc/kube-dns   10.96.0.10   <none>        53/UDP,53/TCP   3m

NAME              DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
deploy/kube-dns   1         1         1            1           3m

NAME                    DESIRED   CURRENT   READY   AGE
rs/kube-dns-692378583   1         1         1       2m

As you can see, all of the pods are in the Running state, which indicates a healthy cluster. If these components are still being downloaded from the Internet, they may appear as not yet started.


Run a container (coming soon)

Like to learn?

Follow me on Twitter, where I post about the latest in AI, DevOps, VR/AR, technology, and science! Connect with me on LinkedIn too!