Helm vs Kustomize: How to deploy your applications?

Alexander Hungenberg
6 min read · Mar 19, 2020


There was a time when Helm was one of the most hated tools in this great new world of Containerization and Kubernetization!

And that’s not without reason. But a few weeks ago my team started fresh and had to decide how we wanted to handle application deployment on our Kubernetes cluster.

So I went to Google to ask it what to do…

… and it left me hanging. Kustomize, Helm, … with lots of outdated comparisons. So hopefully the following chapters will help inform your decision!

TL;DR

Use Helm v3. You’ll love it!

It is built on well-known, easy-to-understand, and robust patterns. It has a super nice command-line interface. New and existing team members will instantly understand what’s going on with your deployments, and it helps you do easy bookkeeping of all the resources on your Kubernetes cluster!

Reason 1: Helm adds a layer of abstraction where you want one

When you deploy an application on K8s, you’ll usually deploy it more than once. For example, you might want to deploy your microservice in two environments (test and staging), or you need to deploy two load-balancers for two different backends.

For every deployment of the same application you have to create a large set of low-level K8s resources (Deployments, Services, ConfigMaps, …), which can quickly get cumbersome to handle using only kubectl. Moreover, these resource definitions are usually >90% identical between individual deployments of the same application (e.g. test and staging), but still need small tweaks (for example, the staging deployment needs to point to a staging database cluster).

As we have all learned that code duplication is bad, we want a system where the 90% of identical stuff only needs to be written once, while still allowing us to tweak the parts that matter.

  • Helm v3 solves this issue with a template approach. You write all your Kubernetes resource definitions as usual. As soon as you realize that some specific part of them needs to differ between deployments A and B, you add a placeholder and let Helm replace it with a user-provided value at installation time. That’s all!
  • Kustomize tries to achieve the same goal with a polymorphic inheritance approach, combined with a domain-specific language for post-processing (patches). This means that the file which specifies the staging deployment of your microservice will usually inherit from the ‘generic’ microservice description, and then add customizations and patches to it using Kustomize’s own configuration language.

If the latter sounded complex, that’s because it is! And it suffers from the same issues that inheritance has in object-oriented programming languages: it is often a leaky abstraction, or in other words, it introduces issues like the fragile base class problem.
To summarize the issue in two sentences: when you write your kustomization.yaml file to configure your application deployment, you need to know the implementation details of the generic application description you’re inheriting from. And it gets worse: from now on you also can’t change your generic application description without breaking your deployment, because the two are tightly coupled!
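To make that concrete, here is a rough sketch of what such an overlay could look like (the file names, resource names, and patch content are invented for illustration). Note how the patch has to repeat the Deployment and container names from the base:

# file: microserviceA-staging/kustomization.yaml (hypothetical)
bases:
- ../_bases/microserviceA/
patchesStrategicMerge:
- set-db-host.yaml

# file: microserviceA-staging/set-db-host.yaml (hypothetical)
# The names below have to match the base's Deployment exactly,
# otherwise the patch does not find its target.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microserviceA
spec:
  template:
    spec:
      containers:
      - name: microserviceA
        env:
        - name: DATABASE_HOST
          value: staging-db.internal

If someone later renames the Deployment or container in the base, this overlay breaks, which is exactly the coupling problem described above.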

Helm, on the other hand, puts a clean interface between these two elements (the application and its configuration) to decrease coupling and allow for easy long-term maintainability.

Your generic application description (the so-called ‘Helm chart’) only exposes a small set of variables to the outside, completely hiding the implementation details of the underlying Kubernetes resources. Now, when configuring your staging deployment, you only need to set and track these variables!
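As a minimal sketch of such an interface (the variable names are made up for illustration), the chart’s values.yaml is all a consumer has to know about:

# file: _charts/microserviceA/values.yaml (illustrative)
replicaCount: 2
image:
  tag: "1.0.0"
databaseHost: localhost
databasePassword: ""
logLevel: info

Everything else, resource names, labels, selectors, volume mounts, stays an internal detail of the chart.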

When deploying with Helm, you don’t need to care about the resource name of the ConfigMap that some logging sidecar container happens to need.

By the way: check out the appendix of this article to find a possible directory layout for your K8s configuration Git repository.

Reason 2: Explicit is better than implicit

Kustomize

Kustomize uses its own (YAML-based) configuration language, which has a few specific keywords that perform very helpful, but also partly complex and not easy-to-understand, operations. Let’s have a look at the following example:

# file: microserviceA-staging/kustomization.yaml
bases:
- ../_bases/microserviceA/
commonLabels:
  app: microserviceA-staging

The ‘bases’ keyword takes care of inheriting all K8s resources from the generic microserviceA description. On top of that, we add an additional label app: microserviceA-staging to all these resources.

This is certainly nice and easy. However, commonLabels actually does a lot more magic than just changing the metadata.labels field in your resource definitions:

  • It also adds the label to Pod templates
  • It also adds the label to Selectors in Deployments, Services or Ingress resources

Don’t get me wrong: it is _good_ that Kustomize does this, because it makes the whole approach work and keeps it fairly easy for standard use cases! However, it’s not documented, it’s probably unexpected for most new users, and it’s fairly opaque!
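To see what that means in practice, here is a rough sketch of what kustomize build could emit for a Deployment coming from the base (the resource and image names are invented):

# Sketch of a generated Deployment after `kustomize build microserviceA-staging/`
apiVersion: apps/v1
kind: Deployment
metadata:
  name: microserviceA
  labels:
    app: microserviceA-staging      # added by commonLabels
spec:
  selector:
    matchLabels:
      app: microserviceA-staging    # also injected into the selector
  template:
    metadata:
      labels:
        app: microserviceA-staging  # and into the Pod template
    spec:
      containers:
      - name: microserviceA
        image: example/microserviceA:1.0.0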

I could list tons of additional examples of behaviour which is kind of ‘weird’ and undocumented:

  • The namespace field is not inherited from the files listed in bases
  • When using configMapGenerator to add needed configuration files, these files are modified (newlines and whitespace are stripped). This can (and does!) break whitespace-sensitive config files or binary files
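For reference, a configMapGenerator entry typically looks something like this (the paths and names are made up); note that the generated ConfigMap also gets a content-hash suffix appended to its name by default, which is again useful but implicit behaviour:

# file: microserviceA-staging/kustomization.yaml (illustrative snippet)
configMapGenerator:
- name: microserviceA-config
  files:
  - config/app.properties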

Helm

Compare the previous section to the following Helm template and installation command:

# file: _charts/microserviceA/templates/secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  database_password: {{ .Values.databasePassword }}

And install the staging deployment using:

helm install \
  --set databasePassword=mypassword \
  microserviceA-staging ./_charts/microserviceA

I guess we all can understand what’s going on here in less than 10 seconds.
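For anything beyond one or two values, you would typically put the settings into a values file instead of using --set (the file name below is just an example, matching the layout in the appendix):

# file: staging/microserviceA_values.yaml
databasePassword: mypassword

And install with:

helm install \
  -f staging/microserviceA_values.yaml \
  microserviceA-staging ./_charts/microserviceA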

Reason 3: It tracks the created resources

Kustomize is basically a compiler, which creates ‘customized’ versions of the referenced generic Kubernetes resources. These can then be directly applied using a kubectl command like this:

kustomize build . | kubectl apply -f -

Although this is a very ‘unix-y’ approach, it has the disadvantage that it doesn’t track the performed changes in your Kubernetes cluster… And there is no easy way of rolling them back, or of applying a new and improved version of your deployment without risking leaving behind old, now unused resources.

So far we have only talked about Helm as a system that replaces placeholders in templates. This is the ‘packaging’ functionality of the Helm tool. But Helm also solves the second part, namely the ‘configuration management’ of your cluster. Or in other words:
Which packages are installed, which resources belong to them, and what is their individual configuration? In this sense it is the ‘apt-get’ or ‘yum’ of the Kubernetes world.

As a result, Helm doesn’t only have an install command:

helm list <...>        # show currently installed packages
helm install <...>     # install a new package
helm upgrade <...>     # upgrade an already installed package
helm get values <...>  # show the configuration of an installation
helm uninstall <...>   # remove an already installed package
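And because every install or upgrade is recorded as a numbered revision of the release, the rollback that is hard to do with plain kustomize build | kubectl apply becomes a single command (the release name is taken from the earlier example):

helm history microserviceA-staging     # list all revisions of the release
helm rollback microserviceA-staging 1  # roll back to revision 1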

The Bottom Line

This article is the result of me trying both Helm and Kustomize to help us maintain our K8s deployments.

In the interest of full disclosure: in this test I first tried Helm v3 and liked it. However, I wanted to do a deep dive into Kustomize to understand how it is being used… and I couldn’t find many comparisons to Helm v3.

While trying Kustomize, my feelings about it roughly changed in the following order:

  1. I hated the start, because I didn’t really understand how it was supposed to work (and that took a while)
  2. I got the gist of it, and thought ‘it’s not so bad’
  3. I stumbled on all the weird edge cases and lacking documentation

… which finally caused me to go back to the happy place, Helm :-).

Afterwards, I tried my best to ‘formalize’ my feelings, resulting in this article. I hope it will at least give you some interesting arguments when you’re trying to decide which direction to go! And take it with a grain of salt; maybe I didn’t understand Kustomize correctly after all…

Appendix: How to organize your cluster package/configuration repository

For the sake of example, let’s assume we want to deploy the following applications on our Kubernetes cluster: Two different kinds of backends, where each backend has its own API gateway deployed in front of it.

I suggest storing both of the following in a Git repository:

  • your generic application packages (Helm charts)
  • the configuration of your individual deployments (values files)
_charts/
  backendA/
    Chart.yaml
    values.yaml
    ...
  backendB/
    ...
  api-gateway/
    ...
prod/
  backendA_values.yaml
  backendB_values.yaml
  api-gateway_backendA_values.yaml
  api-gateway_backendB_values.yaml
staging/
  backendA_values.yaml
  backendB_values.yaml
Besides the _charts folder, we see one folder per Kubernetes namespace (representing our staging and production environments). In these folders we keep the actual configuration values for the individual deployments. So when you need to upgrade backendB in your prod namespace, you would run a command like the following:

helm -n prod upgrade \
  -f prod/backendB_values.yaml \
  backendB ./_charts/backendB

Of course this layout is just a possible example to get started. The most important thing is to not only keep the packages under source control, but also the configuration! This can be done as shown above, or even in a completely separate repository.

And of course you can start to add all the shenanigans you like, such as additional commonly used ‘values’ files for namespace- or cluster-wide configuration settings (like a central logging server).
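As a sketch of that idea (all file names invented), Helm accepts several values files on one command line, where later files override earlier ones:

helm -n prod upgrade \
  -f common_values.yaml \
  -f prod/namespace_values.yaml \
  -f prod/backendB_values.yaml \
  backendB ./_charts/backendB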
