
Edit 14/02/2020: Added note regarding how the -n flag has changed. Also added note about namespace creation.

Edit 22/05/2020: Added some suggestions by Bharti and Gernot Höbenreich

I was lucky enough to test migrating from Helm 2 to 3 recently. Helm 3 is a major release that addresses many of the concerns with Helm 2, and for the most part it's a smooth transition. The helm-2to3 plugin makes the process simple and transparent, and Helm 3 is a breeze to use once you've moved all of your estate across. That being said, know that your CI/CD scripts will probably break. There are breaking changes to the arguments Helm accepts and to how Helm queries releases, plus a new OpenAPI validation step that charts must pass before they are sent to the Kubernetes API. I certainly wasn't aware of any of these changes until I ran into the problems myself, and I found little in the way of migration docs that called them out specifically. So I thought I'd document all of this here for anyone else who is scratching their head wondering why deployments are failing!

Changes to Flags (New Behaviors, Deprecations, etc.)

Most (if not all) of these breakages come down to changes in which flags are available in the new version of Helm. Here is a list of the changes to the flags you most likely used with Helm 2:

1. The -n flag no longer exists when using helm install. With Helm 2, you would use -n to specify the name of the release, instead of using one of the automatically generated names.

helm install . -n my-awesomechart --namespace dev

This was problematic, because when using kubectl to interact with your cluster, -n is used to refer to the namespace in which you run commands. With Helm 3, -n is gone altogether. Instead, the name of the release is a positional argument:

➜ helm3 install -h
This command installs a chart archive.

[...]

Usage:
  helm install [NAME] [CHART] [flags]

Additionally, if you want Helm to generate the release name for you, you must now supply the --generate-name parameter and omit the positional argument for the release name.
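If your CI/CD scripts still use the old -n form, a small sed rewrite can get you most of the way there. This is just a sketch under the assumption that your scripts use exactly the `helm install . -n NAME` pattern (anything fancier needs a manual review); the helper name is mine:

```shell
# Hypothetical helper for migrating CI scripts: rewrite the Helm 2 style
# "helm install . -n NAME" into the Helm 3 positional form. The regex only
# covers this exact pattern; review anything more complicated by hand.
helm2_to_helm3_install() {
  sed -E 's/helm install \. -n ([^ ]+)/helm install \1 ./'
}

echo 'helm install . -n my-awesomechart --namespace dev' | helm2_to_helm3_install
# prints: helm install my-awesomechart . --namespace dev
```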

2. Purging is now the default behavior with Helm 3. Therefore, any scripts where you were passing the --purge flag will now throw an error:

➜ helm delete --purge
Error: unknown flag: --purge

# If you want to keep the release history like Helm 2 did when you omitted --purge, do this:
➜ helm delete --keep-history
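For CI scripts that pass --purge, the fix is simply to drop the flag, since Helm 3 purges by default. A minimal sed sketch (the helper name is hypothetical; review the rewritten scripts before committing them):

```shell
# Strip the now-invalid --purge flag from Helm 2 delete invocations; a
# plain "helm delete" already purges under Helm 3.
drop_purge_flag() {
  sed -E 's/helm delete --purge /helm delete /'
}

echo 'helm delete --purge myapp' | drop_purge_flag
# prints: helm delete myapp
```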

3. Now that Tiller is gone, the --tls flag has gone with it. Using this flag will throw the same unknown flag error as seen above.

4. The --recreate-pods flag doesn’t exist anymore either. This was marked as deprecated in Helm 2 and the documentation has been updated to show you how to do this properly with Helm 3. You can see an example in the docs here.

5. The --timeout flag now requires a unit of duration. Previously, you could specify 300 and it would be interpreted as 300 seconds. Now you must include a unit: m for minutes or s for seconds, e.g. 300s, 5m, or 5m0s.
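If your pipelines feed Helm a bare number for --timeout, you can normalize it before the call. A minimal sketch, assuming bare numbers were always meant as seconds (as they were in Helm 2); the function name is mine:

```shell
# Sketch: normalize a Helm 2 era bare-number --timeout value into the
# unit-suffixed form Helm 3 expects; values that already carry a unit
# pass through untouched.
normalize_timeout() {
  case "$1" in
    *[!0-9]*) printf '%s\n' "$1" ;;   # already has a unit, e.g. 5m or 300s
    *)        printf '%ss\n' "$1" ;;  # bare number: append "s" for seconds
  esac
}

normalize_timeout 300   # prints: 300s
normalize_timeout 5m    # prints: 5m
```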

6. The --strict flag for helm repo update is gone. There doesn’t appear to be an option similar to this in Helm 3.

7. The --save flag for helm dependency update is gone. Again, there doesn’t appear to be an option similar to this in Helm 3.

8. helm status no longer shows the status of the resources Helm creates. There is an open GitHub issue to bring this functionality back in Helm 3. You can read about this here.

Thanks to Bharti for items 6, 7, and 8!

Helm 3 Lists Releases by Namespace

Helm 3 now behaves similarly to kubectl, which I think is a fantastic addition. With Helm 2, you could query all releases across all namespaces simply by typing helm ls. With Helm 3, running helm ls will only show releases in the default namespace, just like kubectl get pods would. To list releases in any other namespace, you must supply the -n argument and the namespace, just as you would with kubectl. Here's an example of what I mean:

# Helm 2: shows all releases across all namespaces
➜ helm ls
NAME        REVISION  UPDATED     ...  NAMESPACE
prometheus  1         Sat Jan 04  ...  default
devapp      2         Sat Jan 04  ...  dev
prodapp     2         Sat Jan 04  ...  production

# Helm 3: only shows releases in the default namespace
➜ helm ls
NAME        REVISION  UPDATED     ...  NAMESPACE
prometheus  1         Sat Jan 04  ...  default

# Helm 3: using -n shows releases in the specified namespace
➜ helm ls -n production
NAME        REVISION  UPDATED     ...  NAMESPACE
prodapp     2         Sat Jan 04  ...  production

# Helm 3: shows all releases across all namespaces
➜ helm ls --all-namespaces
NAME        REVISION  UPDATED     ...  NAMESPACE
prometheus  1         Sat Jan 04  ...  default
devapp      2         Sat Jan 04  ...  dev
prodapp     2         Sat Jan 04  ...  production

NOTE: Helm 3 has no visibility of releases managed by Helm 2 unless you migrate them. The sample above is just to illustrate how listing has changed; you will have to migrate your releases to Helm 3 using the helm-2to3 plugin mentioned above before Helm 3 will see them.

This is a small change but it's non-trivial if your Bash scripts were previously hammering out helm ls and then using grep on the output. Those commands will now silently miss releases because they only check the default namespace. If you want the same behavior you had with Helm 2, make sure you pass the --all-namespaces flag!
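Patching such a pipeline usually amounts to widening the listing and keeping the same field logic, since the release name is still the first column. A sketch, where release_exists is a hypothetical helper and the commented usage line assumes a live cluster:

```shell
# Sketch of patching up an old "helm ls | grep" pipeline: the filtering
# logic carries over unchanged; only the listing command gains a flag.
release_exists() {
  # $1 = release name; reads `helm ls --all-namespaces` output on stdin
  awk -v rel="$1" 'NR > 1 && $1 == rel { found = 1 } END { exit !found }'
}

# Usage (requires a cluster):
#   helm ls --all-namespaces | release_exists prodapp && echo "prodapp is deployed"
```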

Helm Doesn’t Create Namespaces for You — Unless You Tell It To

This isn’t really that big of a change, but you should be aware of it nonetheless. When installing a chart, Helm 2 would create the destination namespace for you if it didn’t already exist. Helm 3 only installs releases into preexisting namespaces and will otherwise throw an error stating that the destination namespace doesn’t exist. As of Helm v3.2.0, you can tell Helm explicitly to create the namespace by using the --create-namespace flag (thanks Gernot Höbenreich!).
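If your scripts must run against a mix of Helm 3 client versions, you can gate the flag on the client version. A sketch assuming the version string looks like `helm version --short` output (e.g. v3.2.4+g0ad800e) and that a version-aware `sort -V` is available; the function name is mine:

```shell
# Sketch: emit --create-namespace only when the client is v3.2.0 or newer,
# since older Helm 3 clients reject the flag.
create_ns_flag() {
  ver="${1#v}"        # drop the leading "v"
  ver="${ver%%+*}"    # drop build metadata after "+"
  min="3.2.0"
  # version sort: if the minimum sorts first (or equal), the client is new enough
  if [ "$(printf '%s\n%s\n' "$min" "$ver" | sort -V | head -n 1)" = "$min" ]; then
    echo "--create-namespace"
  fi
}

create_ns_flag "v3.2.4+g0ad800e"   # prints: --create-namespace
create_ns_flag "v3.1.2+gfa23efb"   # prints nothing
```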

Kubernetes OpenAPI Validation

Helm 3 validates all of your rendered templates against the Kubernetes OpenAPI schema. This was enabled in v3.0.0. Prior to this, Helm would render your templates and send them straight to the Kubernetes API, leaving it up to the API to determine what was valid. Improper indentation would throw an error, while fields that ended up in the wrong place were often silently discarded without anything being logged to stdout/stderr.

With Helm 3, your rendered templates are validated before they are sent to the Kubernetes API. That means any invalid fields will trigger a non-zero exit code and fail the release. This is nice if your YAML files are all perfectly aligned with the Kubernetes API docs; if not, you’ll have to remove or fix any invalid fields.

Here’s an example. Take the stable prometheus chart and insert an invalid parameter into the prometheus container’s volumeMounts section. In this example I’m going to add the defaultMode parameter, which isn’t valid according to the Kubernetes API spec:

[...]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
          defaultMode: 256
          readOnly: true
[...]

I’ll try to install this chart now with Helm 2:

➜ helm install . --name prometheus

➜ helm ls
NAME        REVISION  UPDATED     ...  NAMESPACE
prometheus  1         Sat Jan 04  ...  default

Well, that was easy, and Kubernetes didn’t seem to complain. Let’s try that with Helm 3:

➜ helm install prometheus .

Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[0]): unknown field "defaultMode" in io.k8s.api.core.v1.VolumeMount

If we remove the invalid parameter, we can see Helm 3 has no issues installing the chart now:

➜ helm install prometheus .

NAME: prometheus

LAST DEPLOYED: Mon Jan 6 15:11:14 2020

NAMESPACE: default

STATUS: deployed

REVISION: 1

TEST SUITE: None

NOTES:

The Prometheus server can be accessed via port 80 on the following DNS name from within your cluster:

prometheus-server.default.svc.cluster.local

...

You may not have any of these issues with your charts, but if they were written aeons ago and have been working happily with Helm 2, chances are you’ll see some validation failures like this. No need to worry; simply check the Kubernetes API docs and make sure each parameter is valid and the indentation is correct.
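When a release does fail validation, the error message names the offending field, so you can script the extraction for faster debugging. A hypothetical helper (the error text it parses is the same shape as the one in the example above):

```shell
# Hypothetical debugging helper: pull the offending field name out of a
# Helm 3 validation error so you know what to hunt for in your templates.
invalid_field() {
  grep -oE 'unknown field "[^"]+"' | sed -E 's/unknown field "([^"]+)"/\1/' | head -n 1
}

# Usage (requires a chart that fails validation):
#   helm install prometheus . 2>&1 | invalid_field
```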

Conclusion

These are just a few of the issues I found when moving to Helm 3. I should mention that this is not to knock Helm 3 in any way; it’s a vast improvement over Helm 2 and I highly recommend migrating to it. If there are any issues you’ve seen, feel free to drop me a line and I’ll update this post!