I just stumbled across this article questioning the role of Helm/Tiller, and I basically agree with the sentiment expressed, though for different reasons than those given in the post.

A truly automated environment will standardize many features and remove unnecessary freedom and means of control from developers. This leads to principles such as the following:

Microservices should be happy with one exposed port and a single API endpoint prefix. Think “Lambda” and you can cut this down even more restrictively.

Microservices have to be stateless, so all state is kept in back-end storage services such as databases, NoSQL services, file systems, or more specialized repositories.

Security will most likely not be handled by Kubernetes Ingress controllers, but rather by a decent API gateway that also deals with authentication (such as NGINX or your cloud provider’s gateway technology).

Deployments should happen in a fully automated fashion through a CI/CD process, whose pipelines have the task of generating not only artifacts but also any housekeeping information required for deployment.

Developers will not have direct access to Kubernetes anyway; they only deliver artifacts for deployment. Unnecessary configuration freedom can be taken away if there is a clear model of stateless containers providing microservices.

Debugging and testing facilities have to be built into the services themselves and made available via logs/tracing and dedicated test/health-check endpoints, which developers may access.

Microservice environments are already a zoo of components, so let’s try to keep complexity as low as possible.

In consequence, the configuration files passed between CI/CD and Kubernetes to actually deploy service containers will be generated from pre-defined patterns, possibly filling in parameters for the few degrees of freedom still granted to developers (e.g., the number of service replicas or resource allocations); a sketch of this idea follows below. Helm/Tiller just seems to be another level of syntax between CI/CD and Kubernetes (adding complexity to the process as such), something you don’t really need if you can generate Kubernetes deployments directly.
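To make this concrete, here is a minimal sketch (Python, standard library only) of such pattern-based generation. The service name, image, registry, and parameter defaults are hypothetical; the point is that everything except a few sanctioned parameters is fixed by the pattern itself:

```python
import json

# Hypothetical pre-defined pattern: everything except the few parameters
# developers may still choose is fixed by the platform side.
def render_deployment(service: str, image: str, replicas: int = 2,
                      cpu_limit: str = "500m", mem_limit: str = "256Mi") -> str:
    """Render a Kubernetes Deployment manifest (JSON is valid kubectl input)."""
    manifest = {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": service, "labels": {"app": service}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": service}},
            "template": {
                "metadata": {"labels": {"app": service}},
                "spec": {
                    "containers": [{
                        "name": service,
                        "image": image,
                        # one exposed port per service, as argued above
                        "ports": [{"containerPort": 8080}],
                        "resources": {"limits": {"cpu": cpu_limit,
                                                 "memory": mem_limit}},
                    }]
                },
            },
        },
    }
    return json.dumps(manifest, indent=2)

if __name__ == "__main__":
    # A CI/CD pipeline would pipe this straight into `kubectl apply -f -`.
    print(render_deployment("orders", "registry.example.com/orders:1.4.2",
                            replicas=3))
```

Since Kubernetes accepts JSON manifests as well as YAML, the pipeline can feed this output directly to kubectl; the generated manifest stays an ephemeral, reproducible artifact rather than a versioned chart.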

I also cannot accept the versioning argument, because proper cloud architectures and DevOps processes should inherently provide these four characteristics as essential preconditions for orderly operation:

Your repository knows which software revisions developers created.

Your CI/CD pipeline will know when you deployed an artifact and what it was.

The logging/monitoring services of your K8s cluster will know when services spin up or down, and how they behave in between.

You can ask each service for its health status and version information and, if done right, even for documentation of the API it supports (see the sketch below).
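As a minimal illustration of that last point, here is a sketch using only the Python standard library; the version and revision constants are hypothetical and would, in practice, be injected by the build pipeline:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical constants; a real pipeline would inject these at build time.
SERVICE_VERSION = "1.4.2"
GIT_REVISION = "f3a9c1d"

class HealthHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/healthz":
            body = json.dumps({"status": "ok",
                               "version": SERVICE_VERSION,
                               "revision": GIT_REVISION}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

if __name__ == "__main__":
    # Kubernetes liveness/readiness probes can point at /healthz as well,
    # so the cluster and humans get their answers from the same place.
    HTTPServer(("", 8080), HealthHandler).serve_forever()
```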

Helm/Tiller seems to be an attempt to make Kubernetes deployments more digestible and tangible by providing an extra layer that is more intelligible for humans. It also creates a lot of additional requirements in terms of security, extra storage of meta-information, and additional service components. In simplistic scenarios where the preconditions mentioned above are not met, it may really help fulfil needs that are not satisfied otherwise. However, if the entire lifecycle of software components is managed in line with existing best practices for cloud environments, you can already build on a seamless, automated process with its own means of documentation, and Helm charts are degraded to a mere intermediate data structure that is ephemeral and can be reproduced at any time.

With this perspective, do we really need Helm? Possibly not. What do you think?