Terraform from HashiCorp has been a revelation for me since I started using it in anger last year to deploy VeeamPN into AWS. From there it has allowed me to automate lab Veeam deployments, configure networking for a VMware Cloud on AWS SDDC and configure NSX vCloud Director Edges. The time saved by utilising the power of Terraform for repeatable deployment of infrastructure is huge.

When it came time for me to play around with Kubernetes to get up to speed with what was happening under the covers, I found plenty of online resources on how to install and configure a Kubernetes cluster on vSphere with a Master/Node deployment. While tinkering, though, I kept breaking deployments, which meant starting from scratch and reinstalling. This is where Terraform came into play: I set about creating a repeatable Terraform plan to deploy the required infrastructure onto vSphere and then have Terraform remotely execute the installation of Kubernetes once the VMs had been deployed.

I’m not the first to do a Kubernetes deployment on vSphere with Terraform, but I wanted to have something that was simple and repeatable to allow quick initial deployment. The example linked above uses KubeSpray along with Ansible and other dependencies. What I have ended up with is a self-contained Terraform plan that can deploy a Kubernetes sandbox with a Master plus a dynamic number of Nodes onto vSphere using CentOS as the base OS.

The one step I haven’t automated is joining the nodes to the cluster. That step takes a couple of seconds once everything else is deployed. I also haven’t integrated this with VMware Cloud Volumes or prepped for persistent volumes. Again, the idea here is to have a sandbox deployed within minutes to start tinkering with. For those that are new to Kubernetes it will help you get to the meat and gravy a lot quicker.

The Plan:

The GitHub project is located here (the link is also in the references at the end of this post). Feel free to clone/fork it.

In a nutshell, I am utilising the Terraform vSphere Provider to deploy a VM from a preconfigured CentOS template which will end up being the Kubernetes Master. All the variables are defined in the terraform.tfvars file and no other configuration needs to happen outside of this file. Key variables are fed into the other tf declarations to deploy the Master and the Nodes as well as how to configure the Kubernetes cluster IP networking.

# vCenter connection
vsphere_vcenter = "vc03.aperaturelabs.biz"
vsphere_user = "administrator@vsphere.local"
vsphere_password = "PASSWORD"
vsphere_unverified_ssl = "true"

# VM specifications
vsphere_datacenter = "VC03"
vsphere_vm_folder = "TPM03-AS"
vsphere_vm_name = "TPM03-K8-MASTER-T"
vsphere_vm_resource_pool = "TPM03-AS"
vsphere_vm_template = "TPM03-AS/TPM03-CENTOS7-TEMPLATE"
vsphere_cluster = "MEGA-03"
vsphere_vcpu_number = "2"
vsphere_memory_size = "8192"
vsphere_datastore = "vsanDatastore"
vsphere_port_group = "TPM03-730"
vsphere_ipv4_address = "10.0.30.196"
vsphere_ipv4_netmask = "24"
vsphere_ipv4_gateway = "10.0.30.1"
vsphere_dns_servers = "10.0.0.2"
vsphere_domain = "aperaturelabs.biz"
vsphere_time_zone = "UTC"
vsphere_vm_password = "Veeam1!"
vsphere_tag_category = "TPM03"
vsphere_tag_name = "TPM03-NO-BACKUP"

# K8 Configuration
vsphere_k8_nodes = "3"
vsphere_k8_version = "1.15.3"
vsphere_k8pod_network = "10.0.30.0/24"
vsphere_vm_name_k8n1 = "TPM03-K8-NODE-T"
vsphere_ipv4_address_k8n1_network = "10.0.30."
vsphere_ipv4_address_k8n1_host = "197"

[Update] – It seems as though Kubernetes 1.16.0 was released over the past couple of days. This resulted in the scripts not installing the Master correctly due to an API issue when configuring the POD networking. Because of that I’ve updated the code to use a variable that specifies the Kubernetes version being installed: vsphere_k8_version in terraform.tfvars. The default is 1.15.3.

The main items to consider when entering your own variables for the vSphere environment are the Kubernetes POD network and the node settings. vsphere_k8pod_network defines the POD network used during cluster configuration, while vsphere_k8_nodes sets the number of nodes, vsphere_vm_name_k8n1 sets the base name for the node VMs, and two separate variables build out the IP addresses of the nodes. Pay attention to the format here: vsphere_ipv4_address_k8n1_network holds the first three octets of the network (including the trailing dot) and vsphere_ipv4_address_k8n1_host holds the starting host number for the Nodes. That starting IP is enumerated in the code using the Terraform count construct.

network_interface {
  ipv4_address = "${var.vsphere_ipv4_address_k8n1_network}${"${var.vsphere_ipv4_address_k8n1_host}" + count.index}"
  ipv4_netmask = "${var.vsphere_ipv4_netmask}"
}
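To put that fragment in context, here is a minimal sketch of how a count-driven node resource could look. This is an assumed shape for illustration, not a copy of the project's node declaration file — the real resource carries the same placement, storage and clone blocks as the Master. With vsphere_ipv4_address_k8n1_host = "197" and three nodes, the enumerated addresses become 10.0.30.197, 10.0.30.198 and 10.0.30.199.

```
resource "vsphere_virtual_machine" "TPM03-K8-NODE" {
  # One VM per node; the vsphere_k8_nodes variable drives the count.
  count = "${var.vsphere_k8_nodes}"

  # Node names are numbered from 1: TPM03-K8-NODE-T1, -T2, -T3 ...
  name = "${var.vsphere_vm_name_k8n1}${count.index + 1}"

  # (placement, resources, disk and clone/customize blocks omitted here;
  # they mirror the Master resource, with the network_interface shown
  # above supplying each node's enumerated IP address)
}
```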

By using Terraform's remote-exec provisioner, I am then using a combination of uploaded scripts and direct command-line executions to configure and prep the Guest OS for the installation of Docker and Kubernetes.

#===============================================================================
# vSphere Resources
#===============================================================================

# Create a vSphere VM in the folder #
resource "vsphere_virtual_machine" "TPM03-K8-MASTER" {

  # VM placement #
  name             = "${var.vsphere_vm_name}"
  resource_pool_id = "${data.vsphere_resource_pool.resource_pool.id}"
  datastore_id     = "${data.vsphere_datastore.datastore.id}"
  folder           = "${var.vsphere_vm_folder}"
  tags             = ["${data.vsphere_tag.tag.id}"]

  # VM resources #
  num_cpus = "${var.vsphere_vcpu_number}"
  memory   = "${var.vsphere_memory_size}"

  # Guest OS #
  guest_id = "${data.vsphere_virtual_machine.template.guest_id}"

  # VM storage #
  disk {
    label            = "${var.vsphere_vm_name}.vmdk"
    size             = "${data.vsphere_virtual_machine.template.disks.0.size}"
    thin_provisioned = "${data.vsphere_virtual_machine.template.disks.0.thin_provisioned}"
    eagerly_scrub    = "${data.vsphere_virtual_machine.template.disks.0.eagerly_scrub}"
  }

  # VM networking #
  network_interface {
    network_id   = "${data.vsphere_network.network.id}"
    adapter_type = "${data.vsphere_virtual_machine.template.network_interface_types[0]}"
  }

  # Customization of the VM #
  clone {
    template_uuid = "${data.vsphere_virtual_machine.template.id}"

    customize {
      linux_options {
        host_name = "${var.vsphere_vm_name}"
        domain    = "${var.vsphere_domain}"
        #time_zone = "${var.vsphere_time_zone}"
      }

      network_interface {
        ipv4_address = "${var.vsphere_ipv4_address}"
        ipv4_netmask = "${var.vsphere_ipv4_netmask}"
      }

      ipv4_gateway    = "${var.vsphere_ipv4_gateway}"
      dns_server_list = ["${var.vsphere_dns_servers}"]
      dns_suffix_list = ["${var.vsphere_domain}"]
    }
  }

  # Configure Kubernetes #
  provisioner "file" {
    source      = "configure_phase1.sh"
    destination = "/tmp/configure_phase1.sh"

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vsphere_vm_password}"
    }
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/configure_phase1.sh",
      "/tmp/configure_phase1.sh",
    ]

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vsphere_vm_password}"
    }
  }

  provisioner "remote-exec" {
    inline = [
      "yum install -y docker kubelet-${var.vsphere_k8_version} kubeadm-${var.vsphere_k8_version} kubectl-${var.vsphere_k8_version} --disableexcludes=kubernetes"
    ]

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vsphere_vm_password}"
    }
  }

  provisioner "file" {
    source      = "configure_phase2.sh"
    destination = "/tmp/configure_phase2.sh"

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vsphere_vm_password}"
    }
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/configure_phase2.sh",
      "/tmp/configure_phase2.sh",
    ]

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vsphere_vm_password}"
    }
  }

  provisioner "remote-exec" {
    inline = [
      "kubeadm init --pod-network-cidr=${var.vsphere_k8pod_network}"
    ]

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vsphere_vm_password}"
    }
  }

  provisioner "file" {
    source      = "configure_phase3.sh"
    destination = "/tmp/configure_phase3.sh"

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vsphere_vm_password}"
    }
  }

  provisioner "remote-exec" {
    inline = [
      "chmod +x /tmp/configure_phase3.sh",
      "/tmp/configure_phase3.sh",
    ]

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vsphere_vm_password}"
    }
  }

  provisioner "remote-exec" {
    inline = [
      "cat << EOF > /etc/hosts",
      "${var.vsphere_ipv4_address} ${var.vsphere_vm_name}",
      "${var.vsphere_ipv4_address_k8n1_network}${"${var.vsphere_ipv4_address_k8n1_host}" + 0} ${var.vsphere_vm_name_k8n1}${count.index + 1}",
      "${var.vsphere_ipv4_address_k8n1_network}${"${var.vsphere_ipv4_address_k8n1_host}" + 1} ${var.vsphere_vm_name_k8n1}${count.index + 2}",
      "${var.vsphere_ipv4_address_k8n1_network}${"${var.vsphere_ipv4_address_k8n1_host}" + 2} ${var.vsphere_vm_name_k8n1}${count.index + 3}",
      "EOF"
    ]

    connection {
      type     = "ssh"
      user     = "root"
      password = "${var.vsphere_vm_password}"
    }
  }
}

You can see towards the end that I have split up the command-line executions to keep the deployment dynamic. One remote-exec provisioner pulls in the POD network variable and runs kubeadm init inline, and another builds the Guest OS hosts file from the node IP variables to ensure name resolution. These are used together with two other scripts that are uploaded and executed.

The scripts have been built up from a number of online sources that go through how to install and configure Kubernetes manually. For the networking, I went with Weave Net after having a few issues with Flannel. There are lots of other networking options for Kubernetes… the topic is worth a read.
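For reference, the Weave Net install documented at the time was a single kubectl apply against the Weave Cloud URL, with the manifest matched to the running Kubernetes version. I'm assuming one of the phase scripts runs something along these lines — check the configure_phase scripts in the repo for the exact command used:

```
# Apply the Weave Net CNI manifest, selecting the build that matches
# the Kubernetes version reported by the cluster.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```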

For better name resolution on the Guest OS VMs, the hosts file entries are constructed from the IP address settings set in the terraform.tfvars file.

provisioner "remote-exec" {
  inline = [
    "cat << EOF > /etc/hosts",
    "${var.vsphere_ipv4_address} ${var.vsphere_vm_name}",
    "${var.vsphere_ipv4_address_k8n1_network}${"${var.vsphere_ipv4_address_k8n1_host}" + 0} ${var.vsphere_vm_name_k8n1}${count.index + 1}",
    "${var.vsphere_ipv4_address_k8n1_network}${"${var.vsphere_ipv4_address_k8n1_host}" + 1} ${var.vsphere_vm_name_k8n1}${count.index + 2}",
    "${var.vsphere_ipv4_address_k8n1_network}${"${var.vsphere_ipv4_address_k8n1_host}" + 2} ${var.vsphere_vm_name_k8n1}${count.index + 3}",
    "EOF"
  ]

Plan Execution:

The Nodes can be deployed dynamically using a Terraform var option when applying the plan. This allows for zero to as many nodes as you want for the sandbox… though three seems to be a nice round number.

.\terraform apply -var 'vsphere_k8_nodes=3' --auto-approve

The number of nodes can also be set via the vsphere_k8_nodes variable in the terraform.tfvars file. A variable set during the apply takes precedence over the one declared in the tfvars file. One of the great things about Terraform is that we can alter the variable either way, and nodes will be added or removed automatically to match.
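As a sketch of that workflow, re-applying with a different node count scales the sandbox in place, and destroy tears the whole thing down when you're done:

```
# Scale from three nodes to five; Terraform only creates the two new VMs.
terraform apply -var 'vsphere_k8_nodes=5' --auto-approve

# Scale back down; the extra node VMs are destroyed.
terraform apply -var 'vsphere_k8_nodes=3' --auto-approve

# Remove the entire sandbox.
terraform destroy --auto-approve
```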

Once applied, the plan will work through the declaration files and the output will be similar to what is shown below. You can see that in just over five minutes we have deployed one Master and three Nodes ready for further config.

The next step is to use the kubeadm join command on the nodes. For those paying attention, the complete join command was output at the end of the Terraform apply. Once applied on all nodes you should have a ready-to-go Kubernetes cluster running on CentOS on top of vSphere.
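The join command printed by kubeadm init has the general shape below. The token and hash here are placeholders — use the exact command from your own apply output. The IP assumes the Master address set in terraform.tfvars:

```
# Run on each node VM; <token> and <hash> come from the kubeadm init output.
kubeadm join 10.0.30.196:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>
```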

While I do believe that the future of Kubernetes is such that a lot of the initial installation and configuration will be taken out of our hands and delivered to us via services based in Public Clouds or through platforms such as VMware’s Project Pacific, having a way to deploy a Kubernetes cluster locally on vSphere is a great way to get to know what goes into making a containerisation platform tick.

Build it, break it, destroy it and then repeat… that is the beauty of Terraform!

References:

https://github.com/anthonyspiteri/terraform/tree/master/deploy_kubernetes_CentOS
