Last Updated on August 10, 2019


It’s common to use Terraform to provision Kubernetes clusters in the cloud, which means that many of the variables that are useful to pass into Helm charts are conveniently available in Terraform.

In many cases the control plane is a managed service so configuration must be run locally and pointed at a remote endpoint.

People gravitate towards using the Kubernetes and Helm Terraform providers and then face issues related to statefulness.

In this tutorial we’ll demonstrate how to use Terraform in conjunction with the docker_container resource to execute Helmfile against a remote Kubernetes cluster. Running all of the configuration in a Docker container keeps all of the dependency management nice and neat.

Setup K3s

First we need to clone the terraform-helmfile repo and then execute docker-compose up -d.

This starts K3s which is a lightweight Kubernetes cluster. Bridge networking is enabled so we can connect other Docker containers to it later.
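The exact Compose file lives in the repo; a minimal sketch of what docker-compose up -d starts might look like the following. The service name k3s-server matches the container link used later, but the image tag and environment settings here are illustrative assumptions, not copied from the repo:

```yaml
version: '3'
services:
  k3s-server:
    # Hypothetical tag -- check the repo's docker-compose.yml for the real one
    image: rancher/k3s:latest
    command: server
    privileged: true
    environment:
      # Ask K3s to write a kubeconfig we can copy out of the container
      - K3S_KUBECONFIG_OUTPUT=/output/kubeconfig.yaml
    ports:
      - "6443:6443"
```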

Run Terraform

Now we can run terraform init followed by terraform apply to install a sample Kubernetes dashboard chart using Helmfile.

When we execute terraform apply a Docker container is started by Terraform which contains the Helmfile binary. Terraform also renders some configuration files inside the container which specify how to connect to the K3s cluster and what charts to install.

Finally, a local-exec is used to poll the docker logs for the container and exit with a valid status code depending on success or failure.

Run it again

Helmfile is idempotent and will only make changes when the desired state differs from what is currently deployed. You can test this by executing terraform apply again.

The second time we run Terraform, the bottom of the output says “No affected releases” and the run exits successfully.

A look at the code

Let’s walk through the code in the Git repo and discuss how you can use this in your projects.

Terraform

We can start by looking at the Terraform code that starts the container in main.tf.

resource "docker_container" "helmfile" {
  name       = "terraform-helmfile"
  image      = "quay.io/roboll/helmfile:v0.80.2"
  links      = ["k3s-server"]
  entrypoint = ["/entrypoint.sh"]
  start      = true

  upload = {
    content    = "${data.template_file.entrypoint.rendered}"
    file       = "/entrypoint.sh"
    executable = true
  }

  upload = {
    content = "${data.template_file.kubeconfig.rendered}"
    file    = "/kubeconfig.yaml"
  }

  upload = {
    content = "${data.template_file.helmfile.rendered}"
    file    = "/helmfile.yaml"
  }

  depends_on = [
    "null_resource.dockerrm",
  ]
}

Here we’re creating a Docker container called terraform-helmfile that starts the quay.io/roboll/helmfile:v0.80.2 image.

For our demo I’ve added a link to the k3s-server container and rendered the kubeconfig.yaml into the container. This gives us the connection to the Kubernetes cluster. When using this for real we can remove the link and render a kubeconfig.yaml from variables pointing at a real cluster.
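For a real cluster, the kubeconfig.yaml template could be parameterised along these lines. The variable names (cluster_endpoint, cluster_ca_certificate, cluster_token) are illustrative, not from the repo — you'd wire in whatever your cluster module outputs:

```yaml
apiVersion: v1
kind: Config
clusters:
  - name: default
    cluster:
      # Rendered from Terraform variables via template_file
      server: https://${cluster_endpoint}:6443
      certificate-authority-data: ${cluster_ca_certificate}
contexts:
  - name: default
    context:
      cluster: default
      user: default
current-context: default
users:
  - name: default
    user:
      token: ${cluster_token}
```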

The upload blocks show we’re also copying an entrypoint.sh that executes helmfile and the helmfile.yaml which specifies what Helm Charts to install.

The depends_on ensures that our Docker container is always deleted before each new Terraform run.
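The dockerrm resource itself isn't shown in this post; a sketch of how it could be implemented is below. This is an assumption about the repo's approach — the || true keeps the very first run from failing when no container exists yet:

```hcl
resource "null_resource" "dockerrm" {
  provisioner "local-exec" {
    # Force-remove any leftover container from a previous run
    command = "docker rm -f terraform-helmfile || true"
  }

  triggers = {
    always_run = "${timestamp()}"
  }
}
```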

Now let’s look at the data_sources.tf.

data "template_file" "entrypoint" {
  template = "${file("${path.module}/entrypoint.sh")}"
}

data "template_file" "kubeconfig" {
  template = "${file("${path.module}/kubeconfig.yaml")}"
}

data "template_file" "helmfile" {
  template = "${file("${path.module}/helmfile.yaml")}"
}

It’s fairly basic in our example code. In a real-world example we’d pass Terraform variables into each template_file data source.

data "template_file" "helmfile" {
  template = "${file("${path.module}/helmfile.yaml")}"

  vars = {
    kubernetes_dashboard_enabled = "${var.kubernetes_dashboard_enabled}"
    kubernetes_dashboard_version = "${var.kubernetes_dashboard_version}"
  }
}
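Those variables would be declared in a variables.tf along these lines. The descriptions and defaults are illustrative assumptions, not taken from the repo:

```hcl
variable "kubernetes_dashboard_enabled" {
  description = "Whether to install the kubernetes-dashboard release"
  default     = "true"
}

variable "kubernetes_dashboard_version" {
  description = "Chart version to install"

  # Hypothetical default -- pin whatever chart version you actually want
  default     = "1.5.2"
}
```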

Then we can use these variables in our helmfile.yaml.

repositories:
  - name: stable
    url: https://kubernetes-charts.storage.googleapis.com

helmDefaults:
  tillerless: true
  atomic: true
  verify: false
  wait: true
  timeout: 600
  recreatePods: true
  force: true

releases:
  - name: kubernetes-dashboard
    namespace: dashboard
    chart: stable/kubernetes-dashboard
    version: ${kubernetes_dashboard_version}
    installed: ${kubernetes_dashboard_enabled}
    set:
      - name: rbac.clusterAdminRole
        value: true
      - name: enableInsecureLogin
        value: true
      - name: enableSkipLogin
        value: true

Here you can see I’ve set version and installed to match those variables from above.

Helmfile

A lot of the magic that happens with upgrades is defined by the options passed into helm upgrade. I’ve set these as defaults in our config.

helmDefaults:
  tillerless: true
  atomic: true
  verify: false
  wait: true
  timeout: 600
  recreatePods: true
  force: true

For our options we’ve chosen to go tillerless and set atomic, force and recreatePods so that our chart upgrades work nicely. I had failures downloading charts with verify enabled so that’s set to false.

Let’s also look at the entrypoint.sh that the container executes when it is run.

#!/bin/bash
sed -i 's/localhost:6443/k3s-server:6443/' /kubeconfig.yaml
export KUBECONFIG=/kubeconfig.yaml
helm init --client-only
helm repo update
helm plugin install https://github.com/rimusz/helm-tiller
helmfile apply 2>&1

The sed line is a bit of a hack to point our kubeconfig at the k3s-server linked container. This wouldn’t be required when connecting to a remote cluster with a properly rendered kubeconfig.

Most of the entrypoint is related to setting up Helm. We initialise the client, update the repos and then install the helm-tiller plugin. This is required because we set tillerless: true in our helmfile.yaml.

The actual execution of helmfile apply is fairly boring. We just execute it and redirect all output to stdout so everything is visible using docker logs.

Tailing the logs

We have a local-exec provisioner that starts a small Python script.

resource "null_resource" "dockerlogs" {
  provisioner "local-exec" {
    command = "./logtail.py $(docker inspect --format={{.Id}} terraform-helmfile)"
  }

  triggers = {
    always_run = "${timestamp()}"
  }

  depends_on = [
    "docker_container.helmfile",
  ]
}

This depends on the docker_container resource being created. We pass in the container id to logtail.py so the script knows what container to tail.

Then our logtail.py script runs docker logs and prints the output so that it appears in the Terraform output.

#!/usr/bin/env python
from __future__ import print_function

import re
import subprocess
import sys


def execute(cmd):
    popen = subprocess.Popen(cmd, stdout=subprocess.PIPE, universal_newlines=True)
    for stdout_line in iter(popen.stdout.readline, ""):
        yield stdout_line
    popen.stdout.close()
    return_code = popen.wait()
    if return_code:
        raise subprocess.CalledProcessError(return_code, cmd)


for logline in execute(['docker', 'logs', '-f', sys.argv[1]]):
    print(logline, end="")

inspect_command = ["docker", "inspect", sys.argv[1], "--format='{{.State.ExitCode}}'"]
output, error = subprocess.Popen(inspect_command, universal_newlines=True,
                                 stdout=subprocess.PIPE,
                                 stderr=subprocess.PIPE).communicate()
status = int(re.search(r'\d+', output).group())
sys.exit(status)

This initially runs docker logs -f <container id> and yields the log lines. It stops when the container exits.

At the end we inspect the container exit code using docker inspect. The exit code gets converted to an int and we exit with whatever the container reported so that Terraform apply will fail if Helmfile fails.
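The regex extraction is needed because the --format argument is passed to docker inspect without a shell, so the literal single quotes in --format='{{.State.ExitCode}}' end up wrapped around the number in the output. A quick illustration of that parsing step (the simulated output string is an assumption about what docker inspect prints in this situation):

```python
import re

# Simulated output of: docker inspect <id> --format='{{.State.ExitCode}}'
# when the quotes are passed through verbatim rather than stripped by a shell.
output = "'1'\n"

# int("'1'\n") would raise ValueError, so pull out just the digits first.
status = int(re.search(r'\d+', output).group())
print(status)  # -> 1
```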

Summary

We used Terraform to start a local Docker container running Helmfile, rendered some configuration into it, and executed Helmfile against a remote K3s cluster.

All log output is printed during the Terraform run, and Terraform will fail if Helmfile fails.

Finally, zero state is stored in the container or in Terraform state files. Helmfile computes what to change from its desired-state file and what’s currently running on the cluster. We specified several options to help reduce the chance that deployments get blocked by chart state issues.

Hopefully this has been useful. If anyone notices any bugs please feel free to submit a pull request on the terraform-helmfile GitHub repo.