8. Provisioning and Orchestration


This chapter covers different aspects of cloud technology: the automatic creation of servers, also called provisioning, and the combination of different virtual machines into a single service, a form of orchestration. This will be done using an example in which a total of three virtual machines are created. Two machines will provide a web service while one will load balance requests for this service. This means that all requests will go through the machine with the load balancer, which will then decide, in round robin style, which machine processes each request.

Figure 4 — Architecture for provisioning multiple VMs (made with Archi)

See the architecture above. This is a variant on the architecture used in chapter 2 but displays more about our current situation. As can be seen, the goal is that the user (student) or anyone else can send HTTP requests to the guest machine. Such a request first arrives at the network interface card (NIC) of your machine. Based on the destination of the request, the NIC then decides where the packet must be sent. The destination network is already known with the help of the hypervisor, as it creates a so-called 'bridge' between the NIC of the host machine and the virtual network interface of the hypervisor. On this bridge, Network Address Translation (NAT) will be enabled so that the virtual machines can be accessed from the host network. Furthermore, the bridge will function as an intermediary for domain name resolution (DNS) and will provide IP addresses for the virtual machines using the Dynamic Host Configuration Protocol (DHCP). In this chapter we will provision all of this from Terraform code. This gives a good example of how Terraform can be used to create such cloud infrastructure, illustrated with a simple load balancing example.

The first section of this chapter covers how to provision a virtual machine that runs a server on boot. The same PiCalc program will be used to test the virtual machine. After that, in the second section, the virtual network that libvirt provides will be discussed and a custom bridged network will be created. The third, fourth and fifth sections explain how to use modules, which can be reused from a centralized place, with the goal of making the configuration more manageable. Finally, the sixth section combines all of this information to create multiple virtual machines that load balance the PiCalc service.

To follow the instructions in this chapter it is a prerequisite that chapter 7 was completed successfully, because this chapter builds further on top of the code created in that chapter. We will be adding code to those files or moving them.

Provisioning a server with a virtual machine

To reach the eventual goal of this chapter, a load balanced PiCalc service, it would be useful if the PiCalc server in the machine is automatically installed and run. This way we won't have to log into the virtual machine to start the server. To achieve this, some edits have to be made in the cloud_init.cfg file. We will add the packages that the PiCalc server requires to install and run, and we will also add configuration to run the server after installation.

1. Edit cloud_init.cfg and add the following yaml configuration. This will attempt to install packages after the machine is created. This functionality was shown before, in chapter 7.8, where git was installed on the virtual machine. Note that we added python and wget, two programs used to install the PiCalc service.

# cloud-init configuration
# install packages
packages:
  - qemu-guest-agent
  - git
  - python
  - wget

2. To start the server after installing, we will have to run some bash commands. Fortunately, cloud-init can do this as well. Add the configuration below too. As you can see, this runs the same commands we used in chapter 7.8 to create the PiCalc server.

# cloud-init configuration
# run commands after boot
runcmd:
  - ["cd", "/home/terraform_guest"]
  - ["git", "clone", "https://github.com/krebsalad/PiCalcPy.git"]
  - ["cd", "PiCalcPy"]
  - ["python", "install_picalc.py"]
  # plain string form so the trailing '&' backgrounds the server;
  # in list form cloud-init would pass '&' to python as a literal argument
  - python run.py mode=server &

These additions should be enough to run the server automatically when the machine is started. Test this by initializing Terraform in the project and applying the configuration. When the created machine is booting, the packages from step 1 will be installed. When this is done, the installation of the server should automatically start, after which it will run. If you want to see the output of the installation up to the moment the server starts, you can go into the virtual machine's console using the commands below (it is recommended to start a new terminal for this). As shown before, the first command lists the machines on your system. Note down the id of the machine you want to log into and replace the number after 'virsh console' with that id. Now you will be able to see the output of the installation.

$ virsh list --all
$ virsh console 2

When the installation is done, test the server by sending an HTTP request to the IP address from the CLI or from your host's browser. You can find the IP address with the 'virsh net-dhcp-leases default' command.

$ curl VM_Guest_ip:8080/PiCalc/100

When done testing the server, make sure to destroy the created infrastructure before continuing with the next instructions, as we will be changing more of the configuration.
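As a reminder, destroying is done from the project directory:

$ cd ~/kvm_project
$ terraform destroy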

Creating a libvirt bridged network

In this section, a virtual bridged network will be created using libvirt with Terraform. A bridge is a networking device that provides the ability to combine several communication networks into a single network. The network device will be used to provide IP addresses to the virtual machines we are going to create. This will be done using DHCP. Heed the sketch below, which shows the scenario that is going to be created in this section. It shows a virtual network and a guest machine that is connected to the network. The bridge will provide an IP address within a given range to the machine. The bridge can communicate with the outside network via the host. This is possible using NAT. Everything on the outside will see the virtual network as the host machine.

Figure 5 — Bridged network (made with draw.io)

Until now we have been using the 'default' network that libvirt provides. This network is also a NAT based bridged network with DHCP. We will base our configuration on the 'default' network, with the goal of understanding more about it before we go more in depth in the next sections.

The libvirt network is defined using XML files. XML defines sections with tags: an opening tag such as <section> and a matching closing tag </section>, where the / marks the end of the section. If the closing tag is not present, all the sections below it become members of that section. For example:

<section> arguments </section>

Fortunately, libvirt will look for errors in our syntax, which will help in debugging faults in our configuration. Look into the configuration of the default network by running the command below. This will print the configuration of the network.

$ virsh net-dumpxml default

This configuration is in XML format. Copy the output, as we will be using this configuration to create our own network. Create a new file in your project directory called ‘libvirt_network_config.xml’ and paste the configuration of the ‘default’ network copied from the last step. This file will be used to define a new network. We will have to edit some values so that it works for our new network.

$ nano ~/kvm_project/libvirt_network_config.xml

The first XML tag we have to change is the name of the network. This is between the tags <name> and </name>. Change the name 'default' to 'picalc_net'. The line should look as follows:

<name>picalc_net</name>

Now remove the <uuid> tag entirely. This tag is generated by libvirt when the network is created.

The <forward mode='nat'/> tag does not need to be changed. This provides Network Address Translation (NAT) for our network, which is required to communicate with networks outside the virtual network and is therefore also needed to be able to communicate with the internet.

The <bridge name…> tag must be changed. The bridge is the network interface that all incoming and outgoing packets of the network go through. The option that needs to be changed is the 'name=' option, as it is used to identify bridges and must be unique on your machine. Change it from 'virbr0' to, for example, 'virbr1'. Besides this option, there is the stp='on' attribute. Make sure this is on, as the Spanning Tree Protocol prevents loops in the network and lets packets take a reliable path to their destination. We don't need to change this option.

<bridge name='virbr1' stp='on' delay='0'/>

The <mac address…> tag is used to identify the virtual network interface at machine level. The address itself must be unique, which means that the MAC address we copied from the default network has to change as well. Change it to another valid MAC address. Heed the example below.

<mac address='52:54:00:6c:3c:01'/>

The <ip address…> tag is used to set an IP address for the network interface. This IP will be the gateway address for the machines in your network. Furthermore, the netmask must be a valid mask. In our case we will be using the same netmask as the default network, which is /24, but we will change the subnet of the network to 192.168.180.0. Change the tag as follows.

<ip address='192.168.180.1' netmask='255.255.255.0'>

As you can see, there is also the child tag <dhcp> within the IP address tag. This tag makes the network provide an IP address to all clients in the network. It is important that the range in this tag consists of valid addresses within the subnet we defined. In our case this will look as follows.

<dhcp>
  <range start='192.168.180.2' end='192.168.180.254'/>
</dhcp>

Lastly, we will configure a DNS server. This is required because the virtual network within our host cannot resolve domain names by itself, since it is hidden behind a bridge interface. Fortunately, the interface itself is within our host machine and can reach a DNS server. This means the clients in our network will be able to resolve domain names by simply using the network interface. The only thing we need to configure is that the resolution requests from the clients in the virtual network get passed on by the interface. This is done by enabling DNS on the network, which makes our network interface IP act as a DNS server.

<dns enable='yes'/>

Now that the configuration is done, save the file, then define and start the network using the following commands.

$ virsh net-define ~/kvm_project/libvirt_network_config.xml
$ virsh net-start picalc_net
$ virsh net-list --all

To test the network, you can create a virtual machine using the project '~/kvm_project'. But before initializing, some changes have to be made in the Terraform configuration so that the guest machine's network interface is connected to the virtual network we called 'picalc_net'. This was discussed briefly in chapter 7, step 7. To change the network in the configuration, edit the 'libvirt.tf' file and change 'network_name = "default"' under 'network_interface' in the 'libvirt_domain' resource to 'network_name = "picalc_net"'.

Finally, initialize and apply the Terraform configuration. When the guest machine is created, run the command below to get the DHCP leased IPs. The leased IP should be within the range that was defined earlier, which is 192.168.180.2 to 192.168.180.254.

$ virsh net-dhcp-leases picalc_net

Make sure to destroy the machine with Terraform when done testing. Do not delete the network configuration file as this will be used in the next section and further.

To stop a running libvirt network run the following command.

$ virsh net-destroy picalc_net

To remove an existing network which is inactive, run the following command.

$ virsh net-undefine picalc_net

To search for existing networks, active or inactive, run the following command.

$ virsh net-list --all

Creating a custom Terraform module for a libvirt network

In this chapter, Terraform will be used to create the same network as in the last chapter. This is so that the network's life cycle can be managed using Terraform. The terraform-provider-libvirt does provide a libvirt network resource, but it does not support the use of static hosts for DHCP as of October 2019, which we will need to provision the load balanced PiCalc service.

To be able to create a network from a file and manage its life cycle, we have to create a module. A module is a directory in which Terraform configuration files are placed. A module requires at least one file with the '.tf' extension. The module can then be called from other Terraform configuration files by referencing its directory.
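To illustrate, calling a module from another configuration file only requires a module block that points at its directory. The names below are hypothetical; we will write a real call like this later in this chapter.

module "example" {
  # load the module from its directory
  source = "./modules/example-module/"
}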

Create a directory ‘/modules/’ in your project and then create a new module directory within the modules directory called ‘/libvirt_picalc_network/’.

$ mkdir -p ~/kvm_project/modules/libvirt_picalc_network/

Move the network configuration XML file we created in the last chapter (8.2) to this new directory. We will use this network configuration file to create the same network from Terraform code.

$ mv ~/kvm_project/libvirt_network_config.xml ~/kvm_project/modules/libvirt_picalc_network/libvirt_network_config.xml

Create and edit a new file called 'picalc_net.tf' in the '/libvirt_picalc_network/' directory. This file will contain the configuration needed to define and run the network, but also to destroy and undefine the network when instructed.

$ nano ~/kvm_project/modules/libvirt_picalc_network/picalc_net.tf

Add the code below. This is to make sure libvirt is running before the network can be created.

# make sure qemu is running
provider "libvirt" {
  uri = "qemu:///system"
}

To have Terraform make the required calls when 'apply' or 'destroy' is run, we have to create a resource. To this end Terraform provides a resource called 'null_resource'. This resource has a unique ID which Terraform will manage. Within the resource, a 'provisioner' of type 'local-exec' is added. This is a useful function that can be used to execute commands locally. There is also the option to choose which interpreter is used; we will use the standard '/bin/bash' to call some virsh commands. Note that there are two provisioners: one runs when 'terraform apply' is called and the second when 'terraform destroy' is called.

# let terraform manage the lifecycle of the network
resource "null_resource" "picalc_network" {

  # when terraform apply
  provisioner "local-exec" {
    command     = ""
    interpreter = ["/bin/bash", "-c"]
  }

  # when terraform destroy
  provisioner "local-exec" {
    when    = "destroy"
    command = ""
  }
}

Add the following command to the provisioner that runs when applying the Terraform configuration. This is a one-line command that combines multiple commands for defining and starting a libvirt network. The '&&' means that the following command will only execute when the command before it was successful. The commands used were discussed in chapter 8.2.

command = "virsh net-define ${path.module}/libvirt_network_config.xml && virsh net-autostart picalc_net && virsh net-start picalc_net"

Now add the commands to destroy the network. Again, these are the commands used earlier in chapter 8.2 to undefine an existing network, but now on one single line. Use the following line to do just that.

command = "virsh net-undefine picalc_net && virsh net-destroy picalc_net"

Save the configuration file. The final structure of the project should look as follows.
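A sketch of the expected layout (contents inferred from this guide; the downloads directory comes from chapter 7):

~/kvm_project/
├── cloud_init.cfg
├── libvirt.tf
├── downloads/
│   └── bionic-server-cloudimg-amd64.img
└── modules/
    └── libvirt_picalc_network/
        ├── libvirt_network_config.xml
        └── picalc_net.tf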

To test the module, we can simply initialize and apply Terraform in the directory of the module. This will not do anything to the project's configuration.

$ cd ~/kvm_project/modules/libvirt_picalc_network
$ terraform init
$ terraform apply

Verify that the network is created with the following command.

$ virsh net-list --all

If you want to test a virtual machine on this network, go into the project directory, then init and apply the changes. Note that the network name of the virtual machine's network interface should be picalc_net. This was discussed in the last chapter as well and should already be configured.

To destroy the network, call 'terraform destroy' in the directory 'libvirt_picalc_network'. In this chapter we only used the module from its own directory, but it is also possible to call it from other files. This method will be discussed later.

Configuring guest machines with static IPs

The eventual goal is to create a service of multiple virtual machines, where one of the virtual machines load balances requests sent to the other virtual machines. To achieve this, the load balancing machine needs to know the IP addresses of those machines. As you may have noticed, the IP addresses are given out randomly within a given range by the DHCP server. This means that even if we provision a machine with load balancing services, it won't immediately know which IP addresses it should load balance for, so we would have to manually look up the IPs and connect them to the load balancer.

Fortunately, DHCP has the option to assign fixed IP addresses to hosts based on MAC addresses. In this chapter we will use this behaviour to statically set the IP addresses of the virtual machines we are going to provision by assigning MAC addresses before creating the machines. The sketch below displays the situation we want to configure.

Figure 6 — DHCP static hosts based on MAC addresses (made with draw.io)

Using the hosts functionality of DHCP, we can have a client with a fixed MAC address get a predefined IP. For provisioning, this means that the MAC address should be set statically when the virtual machine is created if we always want the virtual machine to get the same IP address. This chapter covers this by configuring the MAC address in Terraform code.

To do this section correctly, knowledge of how to create a libvirt network is required. Heed chapters 8.2 and 8.3 for these topics. It is assumed that the same configuration created in chapter 8.3 is used.

1. Edit the ‘libvirt_network_config.xml’ file.

2. Add the bold line below within the DHCP tags. This ensures that the IP address 192.168.180.102 is leased only to the machine that matches the given MAC address, which in this case is 52:54:00:6c:3c:02.

<dhcp>
  <range start='192.168.180.2' end='192.168.180.254'/>
  <host mac='52:54:00:6c:3c:02' ip='192.168.180.102'/>
</dhcp>

3. The virtual machine also needs to be set to the MAC address above. This can be done within the Terraform configuration file used to create the virtual machine; in this document it has so far been called libvirt.tf. Edit this file and add the bold line below under 'network_interface' in the 'libvirt_domain' resource. The MAC address added here should match the host MAC address added in the last step.

network_interface {
  network_name = "picalc_net"
  mac          = "52:54:00:6c:3c:02"
}

4. Now test the project. Make sure the picalc_net network we created in the last chapter is running. After that, test this configuration with a virtual machine by initializing and applying in the main project workspace.

5. Verify that the IP is leased as expected with the following command.

$ virsh net-dhcp-leases picalc_net

Using modules and variables

This chapter describes how to create a Terraform module for provisioning a virtual machine. The goal is to create a template with which different Ubuntu machines can be created by simply changing values. With the use of variables it will be possible to reuse the module and set the machines up from a centralized place.

In this chapter we will be using the 'libvirt.tf' file used in earlier chapters to create a virtual machine. We will turn this file into a module that is reusable. It is therefore a prerequisite to have done all the instructions in the earlier sections of this chapter before doing this one, so that the guide can be followed accurately.

Create a new module directory called ‘ubuntu-module’. This directory will have the required files to create a virtual machine.

$ mkdir -p ~/kvm_project/modules/ubuntu-module/

Move the file 'libvirt.tf' to the module directory. We will be changing the 'libvirt.tf' file so that it's reusable.

$ mv ~/kvm_project/libvirt.tf ~/kvm_project/modules/ubuntu-module

The tree structure of the kvm_project should look as follows:
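A sketch of the expected layout at this point (same assumptions as before for the downloads directory):

~/kvm_project/
├── cloud_init.cfg
├── downloads/
│   └── bionic-server-cloudimg-amd64.img
└── modules/
    ├── libvirt_picalc_network/
    │   ├── libvirt_network_config.xml
    │   └── picalc_net.tf
    └── ubuntu-module/
        └── libvirt.tf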

Edit the 'libvirt.tf' file with your favourite editor. We will start by adding variables. Variables are used to set values, and they can be set from other Terraform modules and other '.tf' files besides the file itself. Heed the following code, which defines a variable called 'machine_name' of type string. The type can be set using the 'type' argument. The 'description' argument is useful when many variables are required; it should describe what the variable will be used for. Add the variable definition below to the top of 'libvirt.tf'. This variable will be used to set various values in the file.

#libvirt.tf
variable "machine_name" {
  type        = string
  description = "unique name used for the machine's pool, volume and domain"
}

When reusing a module, variables are useful because some values need to be unique for the module to be reusable. In the 'libvirt.tf' file a number of values have to be changed to use variables. We will cover all these values, which have to be unique, below.

First, start with the 'libvirt_pool'. The name of the pool must be unique when creating multiple machines. Furthermore, the directory in which the pool saves its images must be unique. To use a variable, the 'var' object is used. This provides access to all variables in our current module. Using this we can set the values from a different file. Take the example below: if the variable 'machine_name' was set on initialization to 'ubuntu_1', then the name of the pool would be 'ubuntu_1_pool' and the path would be '/libvirt_images/ubuntu_1_pool'.

resource "libvirt_pool" "ubuntu" {
  name = "${var.machine_name}_pool"
  type = "dir"
  path = "/libvirt_images/${var.machine_name}_pool/"
}

The second resource that has to be changed in the 'libvirt.tf' file is the 'libvirt_volume'. Heed the example below. The only value that needs to be dynamic is the name value, since it's used to manage the life cycle of the machine and is the name of the image file. Deriving it from a variable helps prevent name collisions. We will change the source path as well, because we moved the file into the module directory: the path.module function now returns the module's directory, so we have to go two levels up to reach our download. Change the values as shown below.

resource "libvirt_volume" "image-qcow2" {
  name   = "${var.machine_name}_image.qcow2"
  pool   = libvirt_pool.ubuntu.name
  source = "${path.module}/../../downloads/bionic-server-cloudimg-amd64.img"
  format = "qcow2"
}

The third resource that must be changed is the 'libvirt_domain'. Heed the example below. The name should be changed again as it must be unique. It is also possible to create new variables for this resource, for example a variable for setting the memory, as sketched after the code below.

resource "libvirt_domain" "test-domain" {
  name   = "${var.machine_name}_domain"
  memory = "1024"
  vcpu   = 1
  …
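Such a memory variable could look like the sketch below. The variable name and its default are illustrative; they are not part of the configuration built in this chapter.

# hypothetical extra variable with a default, so setting it is optional
variable "memory" {
  default = "1024"
}

# inside the libvirt_domain resource it would then be used as:
#   memory = "${var.memory}"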

Before editing more components, we will create three new variables which will be used to set the network interface of the 'libvirt_domain' and the user configuration of the machine. As described in the last chapter, we can set the MAC address of the network interface so that our machine gets a fixed IP. To make this possible for multiple machines, it is useful to have variables with which we can set the MAC address and the network. Besides the variables for the network interface, it is also important to be able to set different user configurations for the virtual machines. To this end we will use the variable 'user_data_path'. Add the following variables.

variable "network_name" {
}

variable "mac_address" {
}

variable "user_data_path" {
}

Now use two of the newly added variables to set the 'network_name' and 'mac' of the 'network_interface', which can be found under the 'libvirt_domain'. Heed the example below.

network_interface {
  network_name = "${var.network_name}"
  mac          = "${var.mac_address}"
}

The 'user_data_path' variable has to point to the 'cloud_init.cfg' file. To this end, we change the place where the user data file is loaded from, so that it can be loaded from a variable path. Heed the example below.

data "template_file" "user_data" {
  template = file("${var.user_data_path}")
}

Finally, save the 'libvirt.tf' file and close it. The module can now be reused without name conflicts. To make use of the module, a new file will be created in the main directory of the project. This file will be used to call multiple modules; therefore it will be called main.tf. Go into the directory and create the file.

$ cd ~/kvm_project/
$ nano main.tf

In this file we will call the module and set the variables we added earlier. Heed the example below. The same values we used in earlier chapters to set the MAC address and network are reused here. We also set the machine name to "ubuntu_1". The cloud init file is in the directory of this file, which means we can use the 'path.module' function to get our current directory.

module "ubuntu-module-1" {
  # load the module
  source = "./modules/ubuntu-module/"

  # set the variables
  machine_name   = "ubuntu_1"
  network_name   = "picalc_net"
  mac_address    = "52:54:00:6c:3c:02"
  user_data_path = "${path.module}/cloud_init.cfg"
}

Make sure to save the file after adding this. The final structure of the project should look as follows.
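A sketch of the expected layout (same assumptions as before):

~/kvm_project/
├── cloud_init.cfg
├── main.tf
├── downloads/
│   └── bionic-server-cloudimg-amd64.img
└── modules/
    ├── libvirt_picalc_network/
    │   ├── libvirt_network_config.xml
    │   └── picalc_net.tf
    └── ubuntu-module/
        └── libvirt.tf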

Test the code by running 'terraform init' and 'terraform apply' in the project workspace where the 'main.tf' file is. This should create a new virtual machine. The result should be similar to the last chapter, where the created machine gets an IP leased based on the given MAC address. This will of course only work if the network is already running. When done testing, make sure to destroy the infrastructure. Furthermore, destroy and undefine the libvirt network, as from now on we will be able to create the network from the 'main.tf' file as well.

Orchestration example: Load balancing

In this chapter, the module that was created in chapter 8.3, called 'libvirt_picalc_network', and the module created in chapter 8.5, called 'ubuntu-module', will be combined to create a single PiCalc service. The ubuntu module will be used to create three individual virtual machines which will be placed in the same network. One of the virtual machines will be the load balancer for the other two machines: it will receive the HTTP requests for the PiCalc servers and then decide, in round robin style, which server processes each request. Heed the sketch below and the explanation following it.

Figure 7 — Load Balancer network interface configuration (made with draw.io)

To be able to provision a load balancing service for the virtual machines running the PiCalc server, static IP addresses are useful. The way to do this using the DHCP protocol was discussed in chapter 8.4: there, a host entry was bound to a MAC address before creation so that the machine gets a fixed IP address when it appears on the network. This can also be seen in the sketch above and will be done for all machines.

The load balancing service will be run from a single Terraform configuration file called 'main.tf'. This file was created in the last chapter. Have its contents cleared before starting the instructions. The new additions will be one call to the 'libvirt_picalc_network' module to create the network and three calls to the 'ubuntu-module' to create three virtual machines. Besides these additions in the main file, we will create a new user configuration file specific to the load balancer, so that its service is launched on creation. This will be done in a new file called 'cloud_init_lb.cfg'.

1. Edit ‘main.tf’ and clear its contents.

2. Add the network module definition so that the network is created and managed in the same ‘main.tf’ file.

# run the network
module "picalc-network-module" {
  source = "./modules/libvirt_picalc_network/"
}

3. Add the three modules with the example code below. They will be used to create the virtual machines. The first module creates the virtual machine on which we run a script for load balancing; this will be discussed in the following steps. The second and third modules each create a PiCalc server. This is possible because our current user data configuration, cloud_init.cfg, is configured to set up a PiCalc server on startup, as created in chapter 8.1. This is also why the config file for the load balancer is different, since it won't need the PiCalc server to run on start. Besides the user configuration files, note the MAC addresses added to each machine. These can of course be any other valid MAC addresses.

# load balancer
module "picalc-lb" {
  # load the module
  source = "./modules/ubuntu-module/"

  # set the variables
  machine_name   = "loadbalancer"
  network_name   = "picalc_net"
  mac_address    = "52:54:00:6c:3c:02"
  user_data_path = "${path.module}/cloud_init_lb.cfg"
}

# picalc server 1
module "picalc-server-1" {
  # load the module
  source = "./modules/ubuntu-module/"

  # set the variables
  machine_name   = "server_1"
  network_name   = "picalc_net"
  mac_address    = "52:54:00:6c:3c:03"
  user_data_path = "${path.module}/cloud_init.cfg"
}

# picalc server 2
module "picalc-server-2" {
  # load the module
  source = "./modules/ubuntu-module/"

  # set the variables
  machine_name   = "server_2"
  network_name   = "picalc_net"
  mac_address    = "52:54:00:6c:3c:04"
  user_data_path = "${path.module}/cloud_init.cfg"
}

4. The virtual machine for the load balancer can already be created with the above example. We only need to create a new cloud-init configuration specific to the load balancer, as we don't want the PiCalc server to run on that machine. Do this by copying the 'cloud_init.cfg' file to 'cloud_init_lb.cfg' in the same directory.

$ cp ~/kvm_project/cloud_init.cfg ~/kvm_project/cloud_init_lb.cfg

5. Open and edit the ‘cloud_init_lb.cfg’ file.

6. Change the section under 'runcmd:' so that a load balancer gets created on startup. Use the code example below to achieve that. As can be seen, the same PiCalc git repository is used as in earlier instructions, but the way the program is invoked is slightly different. To install the load balancer an extra 'lb' option is needed. Furthermore, to run the server we have to set the mode to lb and add some configuration using the lb_config option. More about the lb_config option in the next step.

# run commands at boot
runcmd:
  - ["cd", "/home/terraform_guest"]
  - ["git", "clone", "https://github.com/krebsalad/PiCalcPy.git"]
  - ["cd", "PiCalcPy"]
  - ["python", "install_picalc.py", "lb"]
  # plain string form so the multi-line lb_config value stays one quoted
  # argument and the trailing '&' backgrounds the load balancer
  - |
    python run.py mode=lb "lb_config= [options]
    buffer_size=4096

    [mappings]
    80=192.168.180.103:8080,192.168.180.104:8080" &

7. The load balancer is configured very easily. The load balancer used is called PumpkinLB (Savannah T., "PumpkinLB: A simple, fast, pure-python load balancer", 2019a) and can be installed using the PiCalc example cloned from github. It works as follows: first, some options can be given under the [options] tag; we set the buffer size to 4096 bytes. After the options follows the [mappings] tag. This expects a port to expose for incoming messages and one or more IP addresses, including ports, to forward to. In our case these need to match the IP addresses statically configured in the network definition. Do this with the following step.
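Written out as a plain configuration file, the settings embedded in the runcmd above look like this (the same values, just without the YAML quoting):

[options]
buffer_size=4096

[mappings]
80=192.168.180.103:8080,192.168.180.104:8080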

8. Open the network configuration to add two new static hosts based on the MAC addresses defined in step 3 and the load balancer configuration in the last step.

$ nano ~/kvm_project/modules/libvirt_picalc_network/libvirt_network_config.xml

9. Add the bold marked lines below to your configuration so that the newly added hosts also get a fixed IP address leased by DHCP. Note that the IP addresses added here are those of the servers; we already added the address for the load balancer virtual machine before. In any case, make sure the hosts' MAC addresses match the corresponding addresses given in step 3 and that the hosts' IP addresses match the ones used in the load balancer mappings of steps 6 and 7. Heed the sketch in figure 7 for the full host table.

<dhcp>
  <range start='192.168.180.2' end='192.168.180.254'/>
  <host mac='52:54:00:6c:3c:02' ip='192.168.180.102'/>
  <host mac='52:54:00:6c:3c:03' ip='192.168.180.103'/>
  <host mac='52:54:00:6c:3c:04' ip='192.168.180.104'/>
</dhcp>

10. Now, save the file and exit.

11. The Terraform configuration should now be ready to run. But before running the 'main.tf' file, it is very useful to make sure no domains, networks or pools exist with the same names as the ones we are going to create. Do this by using the various debug commands learned throughout this document, for example the ones shown below. More about these commands can also be found under chapter 13, Troubleshooting. Furthermore, it helps if your host machine has all its resources available, as three guest machines will run at the same time. This should work even with fewer than 4 cores, but a lot of process swapping would occur and make it very slow. In case the host is itself a virtual machine, you could expect it to crash when there is too much process swapping.
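A few of the relevant checks, all standard virsh commands, for leftover domains, networks and pools:

$ virsh list --all
$ virsh net-list --all
$ virsh pool-list --all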

12. Plan the configuration by going into the main workspace '~/kvm_project' and running 'terraform init' and then 'terraform plan'. You will see that a total of 13 resources are going to be created, of which four are for the first PiCalc server, another four for the second PiCalc server, another four for the load balancer and finally one for the libvirt network. If you do not remember what these resources are, heed chapter 7.

13. Run the configuration by running ‘terraform apply’ in the main workspace. When you type ‘yes’ the infrastructure will be created.

14. Continue here after the machines are created and Terraform is done. Start three new terminals to monitor the boot of the virtual machines using 'virsh console <n>'. Replace <n> with the number of the machine, which can be found using the 'virsh list --all' command. Open a console for each of the virtual machines created. Here you will have to wait until the machines have started completely. Do not press enter when they are done; you can tell whether the server is running from the cloud-init installation output shown during boot.

15. Make sure all the machines got their IP leases using 'virsh net-dhcp-leases picalc_net'.

16. Finally, test the load balanced PiCalc service by sending an HTTP request to '192.168.180.102:80/PiCalc/100'. This is the IP address we set for the load balancer. Do this multiple times and you will notice that the returned IP address differs on consecutive requests.
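For example, from the host, using a simple shell loop (the request path is the same one used to test the server earlier):

$ for i in 1 2 3 4; do curl 192.168.180.102:80/PiCalc/100; done

With round robin working, consecutive responses should alternate between the two server addresses, 192.168.180.103 and 192.168.180.104.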