In this tutorial, we'll spin up a three-node Kubernetes cluster using Ubuntu 20.04 DigitalOcean droplets, automating the full setup with Python and Fabric.

Feel free to swap out DigitalOcean for a different cloud hosting provider or your own on-premise environment.

Dependencies:

Docker v19.03.8
Kubernetes v1.18.3

What is Fabric?

Fabric is a Python library for automating routine shell commands over SSH; we'll use it to automate the setup of the Kubernetes cluster.

Install:

$ pip install fabric==2.5.0

Verify the version:

$ fab --version

Fabric 2.5.0
Paramiko 2.7.1
Invoke 1.4.1

Test it out by adding the following code to a new file called fabfile.py:

from fabric import task


@task
def ping(ctx, output):
    """Sanity check"""
    print('pong!')
    print(f'hello {output}!')

Try it out:

$ fab ping --output="world"

pong!
hello world!

For more, review the official Fabric docs.

Droplets Setup

First, sign up for an account on DigitalOcean (if you don’t already have one), add a public SSH key to your account, and then generate an access token so you can access the DigitalOcean API.

Add the token to your environment:

$ export DIGITAL_OCEAN_ACCESS_TOKEN=<YOUR_DIGITAL_OCEAN_ACCESS_TOKEN>
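If you'd like to sanity-check the token before moving on, you can call the DigitalOcean API directly; the /v2/account endpoint simply returns your account details:

$ curl -s -X GET \
    -H "Authorization: Bearer $DIGITAL_OCEAN_ACCESS_TOKEN" \
    "https://api.digitalocean.com/v2/account"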

Next, to interact with the API programmatically, install the python-digitalocean module:

$ pip install python-digitalocean==1.15.0

Now, let's create another task to spin up three droplets: one for the Kubernetes master and two for the workers. Update fabfile.py like so:

import os

from fabric import task
from digitalocean import Droplet, Manager


DIGITAL_OCEAN_ACCESS_TOKEN = os.getenv('DIGITAL_OCEAN_ACCESS_TOKEN')


# tasks

@task
def ping(ctx, output):
    """Sanity check"""
    print('pong!')
    print(f'hello {output}!')


@task
def create_droplets(ctx):
    """
    Create three new DigitalOcean droplets -
    node-1, node-2, node-3
    """
    manager = Manager(token=DIGITAL_OCEAN_ACCESS_TOKEN)
    keys = manager.get_all_sshkeys()
    for num in range(3):
        node = f'node-{num + 1}'
        droplet = Droplet(
            token=DIGITAL_OCEAN_ACCESS_TOKEN,
            name=node,
            region='nyc3',
            image='ubuntu-20-04-x64',
            size_slug='4gb',
            tags=[node],
            ssh_keys=keys,
        )
        droplet.create()
        print(f'{node} has been created.')

Take note of the arguments passed to the Droplet class. Essentially, we're creating three Ubuntu 20.04 droplets in the NYC3 region with 4 GB of memory each. We're also adding all SSH keys to each droplet. You may want to update this to only include the SSH key that you created specifically for this project:

@task
def create_droplets(ctx):
    """
    Create three new DigitalOcean droplets -
    node-1, node-2, node-3
    """
    manager = Manager(token=DIGITAL_OCEAN_ACCESS_TOKEN)
    # Get ALL SSH keys
    all_keys = manager.get_all_sshkeys()
    keys = []
    for key in all_keys:
        if key.name == '<ADD_YOUR_KEY_NAME_HERE>':
            keys.append(key)
    for num in range(3):
        node = f'node-{num + 1}'
        droplet = Droplet(
            token=DIGITAL_OCEAN_ACCESS_TOKEN,
            name=node,
            region='nyc3',
            image='ubuntu-20-04-x64',
            size_slug='4gb',
            tags=[node],
            ssh_keys=keys,
        )
        droplet.create()
        print(f'{node} has been created.')

Create the droplets:

$ fab create-droplets

node-1 has been created.
node-2 has been created.
node-3 has been created.

Moving along, let's add a task that checks the status of each droplet to ensure that each one is up and ready to go before we start installing Docker and Kubernetes:

@task
def wait_for_droplets(ctx):
    """Wait for each droplet to be ready and active"""
    for num in range(3):
        node = f'node-{num + 1}'
        while True:
            status = get_droplet_status(node)
            if status == 'active':
                print(f'{node} is ready.')
                break
            else:
                print(f'{node} is not ready.')
                time.sleep(1)

Add the get_droplet_status helper function:

def get_droplet_status(node):
    """Given a droplet's tag name, return the status of the droplet"""
    manager = Manager(token=DIGITAL_OCEAN_ACCESS_TOKEN)
    droplet = manager.get_all_droplets(tag_name=node)
    return droplet[0].status

Don't forget the import:

import time

Before we test, add another task to destroy the droplets:

@task
def destroy_droplets(ctx):
    """Destroy the droplets - node-1, node-2, node-3"""
    manager = Manager(token=DIGITAL_OCEAN_ACCESS_TOKEN)
    for num in range(3):
        node = f'node-{num + 1}'
        droplets = manager.get_all_droplets(tag_name=node)
        for droplet in droplets:
            droplet.destroy()
        print(f'{node} has been destroyed.')

Destroy the three droplets we just created:

$ fab destroy-droplets

node-1 has been destroyed.
node-2 has been destroyed.
node-3 has been destroyed.

Then, bring up three new droplets and verify that they are good to go:

$ fab create-droplets

node-1 has been created.
node-2 has been created.
node-3 has been created.

$ fab wait-for-droplets

node-1 is not ready.
node-1 is not ready.
node-1 is not ready.
node-1 is not ready.
node-1 is not ready.
node-1 is not ready.
node-1 is ready.
node-2 is not ready.
node-2 is not ready.
node-2 is ready.
node-3 is ready.
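As written, wait_for_droplets polls once a second forever if a droplet never becomes active. If you'd rather fail fast, here's a minimal sketch of a bounded wait (the wait_for helper and its timeout values are hypothetical additions, not part of the original fabfile):

def wait_for(node, timeout=300, interval=1):
    """Poll until the droplet is active; give up after `timeout` seconds (hypothetical helper)"""
    deadline = time.time() + timeout
    while time.time() < deadline:
        if get_droplet_status(node) == 'active':
            print(f'{node} is ready.')
            return
        print(f'{node} is not ready.')
        time.sleep(interval)
    raise TimeoutError(f'{node} did not become active within {timeout} seconds')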

Provision the Machines

The following tasks need to be run on each droplet...

Set Addresses

Start by adding a task that looks up the relevant droplet IP addresses and stores them in a module-level hosts list:

@task
def get_addresses(ctx, type):
    """Get IP address"""
    manager = Manager(token=DIGITAL_OCEAN_ACCESS_TOKEN)
    if type == 'master':
        droplet = manager.get_all_droplets(tag_name='node-1')
        print(droplet[0].ip_address)
        hosts.append(droplet[0].ip_address)
    elif type == 'workers':
        for num in range(2, 4):
            node = f'node-{num}'
            droplet = manager.get_all_droplets(tag_name=node)
            print(droplet[0].ip_address)
            hosts.append(droplet[0].ip_address)
    elif type == 'all':
        for num in range(3):
            node = f'node-{num + 1}'
            droplet = manager.get_all_droplets(tag_name=node)
            print(droplet[0].ip_address)
            hosts.append(droplet[0].ip_address)
    else:
        print('The "type" should be either "master", "workers", or "all".')
    print(f'Host addresses - {hosts}')

Define the following variables at the top, just below DIGITAL_OCEAN_ACCESS_TOKEN = os.getenv('DIGITAL_OCEAN_ACCESS_TOKEN'):

user = 'root'
hosts = []

Run:

$ fab get-addresses --type=all

165.227.103.30
159.65.182.113
165.227.222.71
Host addresses - ['165.227.103.30', '159.65.182.113', '165.227.222.71']

Chaining tasks like this works because fab runs them in a single Python process, so the hosts list populated by get-addresses is available to whatever task runs next on the command line. With that, we can start installing the Docker and Kubernetes dependencies.

Install Dependencies

Install Docker along with:

kubeadm - bootstraps a Kubernetes cluster
kubelet - configures containers to run on a host
kubectl - a command-line tool for managing a cluster

Add a task to install Docker to the fabfile:

@task
def install_docker(ctx):
    """Install Docker"""
    print(f'Installing Docker on {ctx.host}')
    ctx.sudo('apt-get update && apt-get install -qy docker.io')
    ctx.run('docker --version')
    ctx.sudo('systemctl enable docker.service')

Next, let's disable swap, since the kubelet won't run with swap enabled by default:

@task
def disable_selinux_swap(ctx):
    """
    Disable SELinux so kubernetes can communicate with other hosts
    Disable Swap https://github.com/kubernetes/kubernetes/issues/53533
    """
    ctx.sudo('sed -i "/ swap / s/^/#/" /etc/fstab')
    ctx.sudo('swapoff -a')
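If you want to confirm that swap is actually off on a node, here's a minimal sketch (the verify_swap helper is hypothetical, not part of the original fabfile; swapon --show prints nothing when no swap device is active):

def verify_swap(ctx):
    """Print active swap devices; empty output means swap is off (hypothetical helper)"""
    result = ctx.run('swapon --show', hide=True, warn=True)
    print(result.stdout.strip() or 'swap is disabled')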

Install Kubernetes:

@task
def install_kubernetes(ctx):
    """Install Kubernetes"""
    print(f'Installing Kubernetes on {ctx.host}')
    ctx.sudo('apt-get update && apt-get install -y apt-transport-https')
    ctx.sudo('curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -')
    ctx.sudo('echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" | \
        tee -a /etc/apt/sources.list.d/kubernetes.list && apt-get update')
    ctx.sudo('apt-get update && apt-get install -y kubelet kubeadm kubectl')
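Optionally (this isn't part of the original script), you can pin the packages so an unattended upgrade doesn't move them to a version the cluster isn't ready for, by adding one more line to the task:

ctx.sudo('apt-mark hold kubelet kubeadm kubectl')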

Instead of running each of these separately, create a main provision_machines task:

@task
def provision_machines(ctx):
    for conn in get_connections(hosts):
        install_docker(conn)
        disable_selinux_swap(conn)
        install_kubernetes(conn)

Add the get_connections helper function:

def get_connections(hosts):
    for host in hosts:
        yield Connection(f'{user}@{host}')
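If the SSH key you registered with DigitalOcean isn't in a default location (or loaded into an agent), Connection accepts Paramiko options via connect_kwargs. A sketch, assuming a key at ~/.ssh/id_rsa_do (the path is an assumption, so adjust it for your setup):

def get_connections(hosts):
    for host in hosts:
        # key_filename is passed through to Paramiko's SSHClient.connect
        yield Connection(
            f'{user}@{host}',
            connect_kwargs={'key_filename': os.path.expanduser('~/.ssh/id_rsa_do')},
        )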

Update the import:

from fabric import task, Connection

Run:

$ fab get-addresses --type=all provision-machines

This will take a few minutes to install the required packages.

Configure the Master Node

Init the Kubernetes cluster and deploy the flannel network:

@task
def configure_master(ctx):
    """
    Init Kubernetes
    Set up the Kubernetes Config
    Deploy flannel network to the cluster
    """
    ctx.sudo('kubeadm init')
    ctx.sudo('mkdir -p $HOME/.kube')
    ctx.sudo('cp -i /etc/kubernetes/admin.conf $HOME/.kube/config')
    ctx.sudo('chown $(id -u):$(id -g) $HOME/.kube/config')
    ctx.sudo('kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml')
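After flannel is applied, you can verify that the control-plane and flannel pods come up by running kubectl directly on the master (standard kubectl usage, not part of the fabfile):

$ kubectl get pods -n kube-system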

Save the join token:

@task
def get_join_key(ctx):
    sudo_command_res = ctx.sudo('kubeadm token create --print-join-command')
    token = re.findall('^kubeadm.*$', str(sudo_command_res), re.MULTILINE)[0]
    with open('join.txt', 'w') as f:
        with stdout_redirected(f):
            print(token)

Add the following imports:

import re
import sys
from contextlib import contextmanager

Create the stdout_redirected context manager:

@contextmanager
def stdout_redirected(new_stdout):
    save_stdout = sys.stdout
    sys.stdout = new_stdout
    try:
        yield None
    finally:
        sys.stdout = save_stdout
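As an aside, the standard library has shipped an equivalent context manager since Python 3.4, contextlib.redirect_stdout, so get_join_key could use it instead (a sketch of that swap, not what the original code does):

from contextlib import redirect_stdout

with open('join.txt', 'w') as f:
    with redirect_stdout(f):
        print(token)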

Again, add a parent task to run these:

@task
def create_cluster(ctx):
    for conn in get_connections(hosts):
        configure_master(conn)
        get_join_key(conn)

Run it:

$ fab get-addresses --type=master create-cluster

This will take a minute or two to run. Once done, the join command should be printed to the screen and saved to a join.txt file (note that kubeadm tokens expire after 24 hours by default):

kubeadm join 165.227.103.30:6443 --token dabsh3.itdhdo45fxj65lrb --discovery-token-ca-cert-hash sha256:5af14ed1388b240e25fe2b3bbaa38752c6a23328516e47aedef501d4db4057af

Configure the Worker Nodes

Using the saved join command from above, add a task to "join" the workers to the master:

@task
def configure_worker_node(ctx):
    """Join a worker to the cluster"""
    with open('join.txt') as f:
        join_command = f.readline()
    for conn in get_connections(hosts):
        conn.sudo(f'{join_command}')

Run this on the two worker nodes:

$ fab get-addresses --type=workers configure-worker-node

Sanity Check

Finally, to ensure the cluster is up and running, add a task to view the nodes:

@task
def get_nodes(ctx):
    for conn in get_connections(hosts):
        conn.sudo('kubectl get nodes')

Run:

$ fab get-addresses --type=master get-nodes

You should see something similar to:

NAME     STATUS   ROLES    AGE   VERSION
node-1   Ready    master   44s   v1.18.3
node-2   Ready    <none>   30s   v1.18.3
node-3   Ready    <none>   26s   v1.18.3

Remove the droplets once done:

$ fab destroy-droplets

node-1 has been destroyed.
node-2 has been destroyed.
node-3 has been destroyed.

Automation Script

One last thing: Add a create.sh script to automate this full process:

#!/bin/bash

echo "Creating droplets..."
fab create-droplets
fab wait-for-droplets
sleep 20

echo "Provision the droplets..."
fab get-addresses --type=all provision-machines

echo "Configure the master..."
fab get-addresses --type=master create-cluster

echo "Configure the workers..."
fab get-addresses --type=workers configure-worker-node
sleep 20

echo "Running a sanity check..."
fab get-addresses --type=master get-nodes

Try it out:

$ sh create.sh
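For a symmetric teardown, a matching script can wrap the cleanup task (a hypothetical destroy.sh, mirroring create.sh):

#!/bin/bash

echo "Destroying the droplets..."
fab destroy-droplets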

That's it!

You can find the scripts in the kubernetes-fabric repo on GitHub.