Implementation

Let’s get our hands dirty. Bring your Linux machine and let’s hack it out! If you don’t have one, you can also try this out in a nested KVM machine from DigitalOcean, GCP, or your own lab.

By the end of this post, we will have created a Kubernetes control plane using a Cluster API libvirt provider that we write from scratch.

System Requirements

A virtualization-enabled Ubuntu 18.04 machine with the following dependencies installed.

Libvirt Installation & Configuration

Run the following command to install libvirt along with a few other relevant packages. (On Ubuntu 18.04, the old libvirt-bin package has been split into libvirt-daemon-system and libvirt-clients.)

$ sudo apt-get install -y \
    qemu-kvm \
    libvirt-daemon-system \
    libvirt-clients \
    bridge-utils

Add the current user to the libvirt group so that libvirt can be used as a non-root user.

$ sudo usermod -a -G libvirt $(whoami)

Log out and log back in for the group change to take effect.

Next, we will configure libvirt so that it accepts TCP connections. Ensure that the /etc/default/libvirtd file contains the following.

start_libvirtd="yes"
libvirtd_opts="--listen"

Change the /etc/libvirt/libvirtd.conf file so that it has the following configuration.

# Disable TLS
listen_tls = 0

# Enable TCP port
listen_tcp = 1

# Add TCP port
tcp_port = "16509"

# Setup libvirt socket group
unix_sock_group = "libvirt"

# Setup libvirt socket permissions
unix_sock_ro_perms = "0777"
unix_sock_rw_perms = "0770"

# Setup libvirt auth
auth_unix_ro = "none"
auth_unix_rw = "none"

# Disable TCP auth
auth_tcp = "none"

# Enable audit log
audit_logging = 1

Restart the libvirtd service by running the following command.

$ sudo systemctl restart libvirtd
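To verify that the daemon now accepts TCP connections, you can point virsh at the TCP endpoint (assuming you are testing from the same host):

$ virsh -c qemu+tcp://127.0.0.1/system list --all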

Create new repository

Now that the system is configured, let’s get down to coding. We will first create the scaffolding and boilerplate code, and then move on to the business logic.

$ mkdir -p ${GOPATH}/src/sigs.k8s.io/cluster-api-provider-libvirt

$ cd ${GOPATH}/src/sigs.k8s.io/cluster-api-provider-libvirt

Next, we will use kubebuilder to generate scaffolding.

What is kubebuilder?

Kubebuilder is an SDK for rapidly building and publishing Kubernetes APIs in Go. It builds on top of the canonical techniques used to build the core Kubernetes APIs to provide simple abstractions that reduce boilerplate and toil.

Generate Scaffolding

Run the following command in the newly created repo.

$ kubebuilder init --domain cluster.k8s.io --license apache2 --owner "GOJEK Tech"

Kubebuilder will ask for permission to run dep ensure. When prompted, enter y to grant it.

Run `dep ensure` to fetch dependencies (Recommended) [y/n]?
y

This command generates a bunch of files. Let’s quickly take a look at the main folders it generated and what they are meant for.
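In a kubebuilder 1.x project, the layout looks roughly like this: cmd/manager contains the entrypoint of the controller manager binary; pkg/apis holds the API type definitions; pkg/controller holds the controllers and the code that registers them with the manager; and config holds the manifests for deploying the CRDs and the manager itself, alongside a generated Makefile and Dockerfile.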

Generate Provider resources for Machine

Add cluster-api as a dependency to our project.

$ dep ensure -add sigs.k8s.io/cluster-api@0.1.0

Now let’s define the machine resource for our libvirt provider.

$ kubebuilder create api --group libvirt --version v1alpha1 --kind LibvirtMachineProviderSpec

Kubebuilder prompts us to create resources as well as controllers. We will only create the resource; we do not need a new controller, because we will reuse the one present in the Cluster API codebase.

Create Resource under pkg/apis [y/n]?
y
Create Controller under pkg/controller [y/n]?
n

Register Schemes

The manager that kubebuilder generates only knows about the resources we defined; it is unaware of the resources defined in the common Cluster API code. Hence, replace the contents of cmd/manager/main.go with the following code.
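A minimal sketch of that file, assuming the kubebuilder 1.x / controller-runtime v0.1 scaffolding; the only additions to the generated file are the clusterapis import and its AddToScheme call:

package main

import (
	"sigs.k8s.io/cluster-api-provider-libvirt/pkg/apis"
	"sigs.k8s.io/cluster-api-provider-libvirt/pkg/controller"
	clusterapis "sigs.k8s.io/cluster-api/pkg/apis"
	"sigs.k8s.io/controller-runtime/pkg/client/config"
	"sigs.k8s.io/controller-runtime/pkg/manager"
	logf "sigs.k8s.io/controller-runtime/pkg/runtime/log"
	"sigs.k8s.io/controller-runtime/pkg/runtime/signals"
)

func main() {
	logf.SetLogger(logf.ZapLogger(false))

	// Get a config to talk to the apiserver.
	cfg, err := config.GetConfig()
	if err != nil {
		panic(err)
	}

	// Create a new manager to provide shared dependencies and start components.
	mgr, err := manager.New(cfg, manager.Options{})
	if err != nil {
		panic(err)
	}

	// Register our provider-specific types (LibvirtMachineProviderSpec).
	if err := apis.AddToScheme(mgr.GetScheme()); err != nil {
		panic(err)
	}

	// Also register the common Cluster API types (Cluster, Machine, MachineSet, ...)
	// so the manager can watch and reconcile them.
	if err := clusterapis.AddToScheme(mgr.GetScheme()); err != nil {
		panic(err)
	}

	// Register all controllers wired up under pkg/controller.
	if err := controller.AddToManager(mgr); err != nil {
		panic(err)
	}

	// Block until a shutdown signal is received.
	if err := mgr.Start(signals.SetupSignalHandler()); err != nil {
		panic(err)
	}
}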

Register Controllers

To register controllers, change the code in pkg/controller/add_machine_controller.go to the following.
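A sketch, assuming the generic machine controller shipped in cluster-api v0.1.0 (machine.AddWithActuator) and a NewActuator constructor that we will write in the Actuator section below:

package controller

import (
	"sigs.k8s.io/controller-runtime/pkg/manager"

	capimachine "sigs.k8s.io/cluster-api/pkg/controller/machine"

	machineactuator "sigs.k8s.io/cluster-api-provider-libvirt/pkg/cloud/libvirt/actuators/machine"
)

func init() {
	// Instead of writing a controller of our own, reuse the generic Machine
	// controller from Cluster API and back it with our libvirt actuator.
	// AddToManagerFuncs is declared in the generated pkg/controller/controller.go.
	AddToManagerFuncs = append(AddToManagerFuncs, func(m manager.Manager) error {
		return capimachine.AddWithActuator(m, machineactuator.NewActuator())
	})
}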

With this, we have the boilerplate for a new Cluster API provider.

Define Machine Spec

We will define the custom fields required for machine creation in the struct LibvirtMachineProviderSpecSpec. The user will have to provide these values in the custom resource when they use this provider.

This struct is defined in pkg/apis/libvirt/v1alpha1/libvirtmachineproviderspec_types.go.
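The exact field set is up to the provider; a sketch carrying the fields referenced later in this post (imageURI, userDataURI) plus assumed connection and sizing fields might look like:

// LibvirtMachineProviderSpecSpec defines the desired state of a libvirt-backed machine.
type LibvirtMachineProviderSpecSpec struct {
	// URI of the libvirt daemon that will host the domain,
	// e.g. qemu+tcp://192.168.122.1/system.
	URI string `json:"uri"`

	// ImageURI points at the boot disk image built in the
	// Control Plane Images section below.
	ImageURI string `json:"imageURI"`

	// UserDataURI points at the cloud-init ISO used for first-boot configuration.
	UserDataURI string `json:"userDataURI"`

	// MemoryMB and VCPUs size the domain.
	MemoryMB int `json:"memoryMB"`
	VCPUs    int `json:"vcpus"`
}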

Run make to regenerate relevant code once this struct is modified.

Libvirt Specific Code

We will now add the libvirt-specific code in pkg/cloud/libvirt/domain.go.
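A condensed sketch of that file using the libvirt-go bindings. For brevity it inlines a minimal domain XML template, whereas the real code would typically build the document with the libvirt-go-xml bindings; the helper names CreateDomain, DeleteDomain, and DomainExists are this sketch’s, not necessarily the original’s.

package libvirt

import (
	"fmt"

	libvirt "github.com/libvirt/libvirt-go"
)

// domainXML is a minimal KVM domain definition: a qcow2 boot disk, the
// cloud-init ISO attached as a CDROM, and a NIC on the default network.
const domainXML = `
<domain type='kvm'>
  <name>%s</name>
  <memory unit='MiB'>%d</memory>
  <vcpu>%d</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
  <devices>
    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2'/>
      <source file='%s'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <disk type='file' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <source file='%s'/>
      <target dev='hda' bus='ide'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
    </interface>
    <console type='pty'/>
  </devices>
</domain>`

// CreateDomain connects to the libvirt daemon at uri and boots a new
// transient domain from the given boot disk and cloud-init ISO.
func CreateDomain(uri, name, imagePath, userDataPath string, memoryMB, vcpus int) error {
	conn, err := libvirt.NewConnect(uri)
	if err != nil {
		return fmt.Errorf("connecting to libvirt at %s: %v", uri, err)
	}
	defer conn.Close()

	xml := fmt.Sprintf(domainXML, name, memoryMB, vcpus, imagePath, userDataPath)
	dom, err := conn.DomainCreateXML(xml, 0)
	if err != nil {
		return fmt.Errorf("creating domain %s: %v", name, err)
	}
	dom.Free()
	return nil
}

// DeleteDomain destroys the named domain.
func DeleteDomain(uri, name string) error {
	conn, err := libvirt.NewConnect(uri)
	if err != nil {
		return err
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName(name)
	if err != nil {
		return err
	}
	defer dom.Free()
	return dom.Destroy()
}

// DomainExists reports whether libvirt already knows a domain with this name.
func DomainExists(uri, name string) (bool, error) {
	conn, err := libvirt.NewConnect(uri)
	if err != nil {
		return false, err
	}
	defer conn.Close()

	dom, err := conn.LookupDomainByName(name)
	if err != nil {
		if lverr, ok := err.(libvirt.Error); ok && lverr.Code == libvirt.ERR_NO_DOMAIN {
			return false, nil
		}
		return false, err
	}
	dom.Free()
	return true, nil
}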

Actuator

The actuator is one of the main components of a Cluster API provider: the generic machine controller delegates all provider-specific work to it. Create a new actuator at pkg/cloud/libvirt/actuators/machine/actuator.go.
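A minimal sketch against the actuator interface in cluster-api v0.1.0 (Create, Delete, Update, and Exists, each taking a context.Context), wired to the domain helpers above:

package machine

import (
	"context"
	"encoding/json"
	"fmt"

	clusterv1 "sigs.k8s.io/cluster-api/pkg/apis/cluster/v1alpha1"

	providerv1 "sigs.k8s.io/cluster-api-provider-libvirt/pkg/apis/libvirt/v1alpha1"
	libvirtutil "sigs.k8s.io/cluster-api-provider-libvirt/pkg/cloud/libvirt"
)

// Actuator translates Machine reconciliation calls into libvirt operations.
type Actuator struct{}

// NewActuator is the constructor referenced in add_machine_controller.go.
func NewActuator() *Actuator { return &Actuator{} }

// providerSpec decodes our provider-specific spec from the opaque
// providerSpec blob embedded in the Machine object.
func providerSpec(machine *clusterv1.Machine) (*providerv1.LibvirtMachineProviderSpec, error) {
	spec := &providerv1.LibvirtMachineProviderSpec{}
	if err := json.Unmarshal(machine.Spec.ProviderSpec.Value.Raw, spec); err != nil {
		return nil, fmt.Errorf("decoding provider spec for %s: %v", machine.Name, err)
	}
	return spec, nil
}

// Create boots a libvirt domain for the Machine.
func (a *Actuator) Create(ctx context.Context, cluster *clusterv1.Cluster, machine *clusterv1.Machine) error {
	spec, err := providerSpec(machine)
	if err != nil {
		return err
	}
	return libvirtutil.CreateDomain(spec.Spec.URI, machine.Name,
		spec.Spec.ImageURI, spec.Spec.UserDataURI, spec.Spec.MemoryMB, spec.Spec.VCPUs)
}

// Delete tears the domain down.
func (a *Actuator) Delete(ctx context.Context, cluster *clusterv1.Cluster, machine *clusterv1.Machine) error {
	spec, err := providerSpec(machine)
	if err != nil {
		return err
	}
	return libvirtutil.DeleteDomain(spec.Spec.URI, machine.Name)
}

// Update is a no-op in this sketch; a real provider would reconcile
// mutable fields here.
func (a *Actuator) Update(ctx context.Context, cluster *clusterv1.Cluster, machine *clusterv1.Machine) error {
	return nil
}

// Exists tells the generic machine controller whether Create needs to run.
func (a *Actuator) Exists(ctx context.Context, cluster *clusterv1.Cluster, machine *clusterv1.Machine) (bool, error) {
	spec, err := providerSpec(machine)
	if err != nil {
		return false, err
	}
	return libvirtutil.DomainExists(spec.Spec.URI, machine.Name)
}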

Dockerfile

We will deploy our provider in a Docker container. The image should have libvirt-dev installed, as it is a dependency of the libvirt-xml bindings.
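A sketch of such a Dockerfile, assuming a two-stage build (the Go version and runtime image are illustrative):

# Build the manager binary. cgo is required by the libvirt Go bindings,
# so the builder image needs libvirt-dev and a C toolchain.
FROM golang:1.12 as builder
RUN apt-get update && apt-get install -y libvirt-dev
WORKDIR /go/src/sigs.k8s.io/cluster-api-provider-libvirt
COPY . .
RUN CGO_ENABLED=1 GOOS=linux go build -o manager sigs.k8s.io/cluster-api-provider-libvirt/cmd/manager

# Runtime image: only the shared libvirt client library is needed.
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y libvirt0 && rm -rf /var/lib/apt/lists/*
COPY --from=builder /go/src/sigs.k8s.io/cluster-api-provider-libvirt/manager /manager
ENTRYPOINT ["/manager"]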

Makefile

We also change a few tasks in the Makefile so that the provider image can be built and pushed with a configurable IMG, as shown below.
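For reference, the image-related targets in a kubebuilder 1.x Makefile look roughly like this:

# Build the docker image; IMG is supplied on the command line,
# e.g. make docker-build IMG=himani93/cluster-api-provider-libvirt
docker-build:
	docker build . -t ${IMG}
	@echo "updating kustomize image patch file for manager resource"
	sed -i'' -e 's@image: .*@image: '"${IMG}"'@' ./config/default/manager_image_patch.yaml

# Push the docker image
docker-push:
	docker push ${IMG}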

Deploy the provider

We will deploy our provider on minikube, running on the KVM hypervisor.

$ minikube start --vm-driver kvm2

$ export IMG=himani93/cluster-api-provider-libvirt

$ dep ensure -v

$ make docker-build IMG=${IMG}

$ make docker-push IMG=${IMG}

$ make deploy

Control Plane Images

We will now create the base image for the machine on which our Kubernetes control plane will run.

Create boot disk image

We need a base Ubuntu image with Docker and the Kubernetes components installed. This image will act as the control plane’s boot disk.

Please follow the instructions in the repository https://github.com/himani93/vm-builder to create this image or use your own image.

Create user-data image

Now, we create a cloud-init image for node initialization. Create a file named user-data with the following content.

#cloud-config
password: passw0rd
chpasswd: {expire: False}
ssh_pwauth: True
runcmd:
  - echo "127.0.0.1 kube-cp" >> /etc/hosts
  - kubeadm init --pod-network-cidr 10.40.0.0/16

and a file named meta-data with the following content.

instance-id: kube-cp
local-hostname: kube-cp

Generate an ISO image of the above cloud-init files.

$ genisoimage -output user-data.img -volid cidata -joliet -rock user-data meta-data

Create Kubernetes Control Plane

Finally, it’s time to reap the fruits of our efforts. We will now create a new Kubernetes Control Plane using the custom provider that we just wrote.

Define machine CRD

Create create_machine.yaml. In it, we specify the provider to be used and the other machine-specific information required for its creation, such as imageURI and userDataURI.
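A sketch of such a manifest; the connection URI, paths, and sizing values below are placeholders to adapt to your setup:

apiVersion: cluster.k8s.io/v1alpha1
kind: Machine
metadata:
  name: kube-cp
spec:
  providerSpec:
    value:
      apiVersion: libvirt.cluster.k8s.io/v1alpha1
      kind: LibvirtMachineProviderSpec
      spec:
        uri: qemu+tcp://192.168.122.1/system
        imageURI: /var/lib/libvirt/images/kube-base.qcow2
        userDataURI: /var/lib/libvirt/images/user-data.img
        memoryMB: 2048
        vcpus: 2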

Finally, run

$ kubectl apply -f create_machine.yaml

Now, the machine controller will pick up the Machine object and create a libvirt domain named kube-cp.

The created machine can be accessed using

$ virsh console kube-cp

The Kubernetes pods running on kube-cp can be inspected after SSHing into the machine.

kube-cp$ kubectl --kubeconfig /etc/kubernetes/admin.conf get pods -n kube-system

The Cluster API libvirt provider’s controller manager logs can be tailed using

$ kubectl logs -f cluster-api-provider-libvirt-controller-manager-0 -n cluster-api-provider-libvirt-system -c manager

Conclusion:

Cluster API provisions infrastructure and Kubernetes clusters using declarative APIs. It works across different cloud providers and offers the flexibility to define how infrastructure and clusters are provisioned.

The project is still in its alpha stage and is not yet widely adopted.


Want to learn more about Cluster API:

Cluster API Slack Channel: #cluster-api

Cluster API Github Repo: https://github.com/kubernetes-sigs/cluster-api

Cluster API GitBook: https://cluster-api.sigs.k8s.io/