Helm has become the de facto package management tool for Kubernetes resources. As an example, take a look at these installation instructions for Istio (a Kubernetes service mesh and observability tool).

While many common Helm chart installation instructions encourage you to run a very simple command (helm install <chart>), and, hey presto, some new software is running in your Kubernetes cluster, I think this workflow should generally (if not always) be avoided.

The big disadvantage of this workflow is that you sacrifice repeatability.

Repeatability is critical

Consider the scenario when you need to reinstall your Helm charts.

Say, for example, you need to migrate to a new Kubernetes cluster. You can run helm ls to list all currently installed charts and their versions, and then install each of those on the new cluster, but this is significant manual work, and it only works if you still have a functioning cluster from which to 'copy' your Helm charts.
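For illustration, the manual approach looks something like this (Helm 3 syntax; the snapshot file is a one-off record, not a substitute for source control):

```shell
# List every installed release, with its chart and version, across all namespaces.
helm ls --all-namespaces

# Capture a machine-readable snapshot of what's installed right now.
# This only helps while the cluster is still alive and healthy.
helm ls --all-namespaces -o yaml > installed-charts.yaml
```
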

If your cluster is sufficiently broken, or perhaps accidentally deleted, you've now lost the only accurate record of which Helm charts you had installed, and at what versions.

'helm install' is anti-GitOps

The GitOps model of managing Kubernetes resources, where a Git repo is treated as the source of truth for what should be running in the cluster, is precisely the solution to this repeatability problem. Somebody made a manual change to the cluster that broke something? No problem: we'll just roll back to what's in Git. (Indeed, preferably some automation will detect the change and revert it for you!)

On the other hand, manually running helm install commands breaks this model entirely, because your Git repo no longer fully describes what should be running in your Kubernetes cluster.

To be fair to Helm, it is possible to work around this problem. As long as you’re willing to package up all of your Kubernetes resources into Helm chart(s), with all dependencies listed (managed using semver), you can continue using the GitOps deployment model. But this forces you to use Helm for everything.
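To make the "all dependencies listed" point concrete, here is roughly what pinning chart dependencies looks like in an umbrella chart's metadata (Helm 3 Chart.yaml shown; the chart name and version here are hypothetical):

```yaml
# Chart.yaml for a hypothetical umbrella chart wrapping everything you deploy.
apiVersion: v2
name: my-platform
version: 0.1.0
dependencies:
  - name: istio
    version: "1.4.0"  # pinned exactly, rather than a loose semver range
    repository: "https://storage.googleapis.com/istio-release/releases/1.4.0/charts/"
```

Every workload then has to live inside this umbrella chart for the model to hold, which is exactly the constraint being described.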

What you should do instead

If you want to install Helm charts to your Kubernetes cluster, I strongly recommend:

- vendoring the chart into your Git repo (or otherwise fully specifying the precise version of the chart in source control)
- using helm template on the chart to render it as Kubernetes YAML
- running plain-old kubectl apply on the result
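Concretely, the three steps might look like this (Helm 3 commands; the paths and values file are illustrative):

```shell
# 1. Vendor the chart: download a pinned version into the repo and commit it.
helm pull istio \
  --repo https://storage.googleapis.com/istio-release/releases/1.4.0/charts/ \
  --version 1.4.0 \
  --destination third_party/charts/

# 2. Render the chart to plain Kubernetes YAML, checking the result into Git
#    (or generating it deterministically at build time).
helm template istio third_party/charts/istio-1.4.0.tgz \
  --namespace istio-system \
  -f istio-values.yaml > istio.yaml

# 3. Apply the rendered manifests like any other Kubernetes YAML.
kubectl apply -f istio.yaml
```
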

This makes your Kubernetes cluster’s configuration fully repeatable. It also means that all chart installations, and version upgrades, are fully auditable in source control.

In fact — if you take another look at the Istio installation instructions — you’ll see that this is exactly the recommended workflow for installing Istio using Helm!

We do all of this using our build system — Bazel. We’ve written some custom Bazel rules that help us achieve this workflow — feel free to use them, if you like.

Bringing a new chart into the build system is simple. In your Bazel WORKSPACE file, you'll need to include the following code:

# 'git_repository' must be loaded before it can be used.
load("@bazel_tools//tools/build_defs/repo:git.bzl", "git_repository")

# Add the 'dataform' repository as a dependency.
git_repository(
    name = "dataform",
    commit = "de1eb66e558fbd349092d9519a8d5a1edefba94f",
    remote = "https://github.com/dataform-co/dataform.git",
)



# Load the Helm repository rules.
load("@dataform//tools/helm:repository_rules.bzl", "helm_chart", "helm_tool")

# Download the 'helm' tool.
helm_tool(
    name = "helm_tool",
)

To add a single new chart:

# Download the 'istio' Helm chart.
helm_chart(
    name = "istio",
    chartname = "istio",
    repo_url = "https://storage.googleapis.com/istio-release/releases/1.4.0/charts/",
    version = "v1.4.0",
)

Then, when you want to template it, add a BUILD rule somewhere, looking something like:

helm_template(
    name = "istio",
    chart_tar = "@istio//:chart.tgz",
    namespace = "istio-system",
    values = { ... },
)

The output of this rule is plain Kubernetes YAML, ready for you to deploy to your cluster however you wish. (We use the standard Bazel Kubernetes rules.)
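As a rough sketch, consuming that output by hand might look like this (the target label and output path are hypothetical, and depend on where you define the BUILD rule):

```shell
# Render the chart to YAML via the helm_template rule...
bazel build //deploy:istio

# ...then apply the generated manifests to the cluster.
kubectl apply -f bazel-bin/deploy/istio.yaml
```
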