Deploy a containerized HashiCorp Vault on k8s

In this part we’ll learn how to deploy a highly available HashiCorp Vault server on Kubernetes.

Choose a backend storage

You can find the documentation about storage backends here. This choice is important for several reasons:

Some backends cannot start in high availability (HA) mode.

Some backends are not production ready (testing purposes only): you will never use the in-memory storage in production!

Some backends are complicated to master in the context of a Kubernetes deployment.

We decided to store Vault’s data in Google Cloud Storage because we already use the platform on a daily basis and because it supports HA mode!

Note that this decouples storage from computation: we could take down the server and restart it, and the data would still exist. Just remember that Vault always restarts in a sealed state.

Safely create a bucket in Google Cloud

In the example, we create a bucket named kurandalabs-vault. Once configured, all of Vault’s data lives in this bucket. Do not worry: the data is encrypted by design.

As the storage is not coupled to the computation, the single point of failure is the storage. If you delete the bucket, you lose all your secrets. This is why you should be extra careful about how you set up the bucket.

Create a new service account to get access from the k8s cluster

Under the IAM and admin tab, create a new service account but do not assign any role to it! We named it kurandalabs-vault.

Issue the related service account key

Under the API & services tab, click on Create credentials and select the service account key option. Of course, you must create it for the service account kurandalabs-vault. Keep this JSON file safe on your machine.
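If you prefer the command line, a rough gcloud equivalent of these two console steps could look like this (replace <your-project-id> with your own project ID; the account and file names match the examples above):

    # Create the service account (no roles assigned)
    gcloud iam service-accounts create kurandalabs-vault

    # Issue a JSON key for it and save it locally
    gcloud iam service-accounts keys create sa-creds.json \
      --iam-account kurandalabs-vault@<your-project-id>.iam.gserviceaccount.com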

Create the bucket

Under the Storage tab, create a new bucket and enable the Bucket Policy Only option.

Grant access to your service account

So far, the service account key does not give you access to this bucket. Use the following command, which grants the storage.objectAdmin role:
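A sketch of that command, assuming the bucket and service account names from the examples above (replace <your-project-id> with your own project ID):

    gsutil iam ch \
      serviceAccount:kurandalabs-vault@<your-project-id>.iam.gserviceaccount.com:roles/storage.objectAdmin \
      gs://kurandalabs-vault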

Grant role to your service account.

You are now set up to access your bucket with the service account key. In the Permissions tab, verify that only two members have access to the bucket: the service account and your own account.

As a user, you might be the owner of the entire Google Cloud project. That also means you can delete the bucket, so be careful! This is why you see my personal email address.

Create the vault configuration file

Vault reads a configuration file when it starts. This file configures several options, including the backend storage, TLS, and so on.

Configure the storage
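For the gcs backend, a minimal stanza could look like the following; the bucket name matches the one we created above, and ha_enabled turns on HA coordination (credentials are picked up from the GOOGLE_APPLICATION_CREDENTIALS environment variable, which we set later in the deployment):

    storage "gcs" {
      bucket     = "kurandalabs-vault"
      ha_enabled = "true"
    }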

Configure the listener

Here we disable TLS because Kuranda Labs uses a service mesh (Istio) that already configures the TLS policy for us.
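A minimal listener stanza for this setup (8200 is Vault’s conventional API port):

    listener "tcp" {
      address     = "0.0.0.0:8200"
      tls_disable = 1
    }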

If you do not use a service mesh, you absolutely must configure TLS in the listener. Otherwise, traffic to and from the server will not be encrypted.

Final configuration

Your configuration should look like this. We enable the UI and we set api_addr to point to our load balancer URL (check here for the docs).
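Putting the pieces together, a sketch of the full file (the api_addr value is a placeholder; use your own load balancer URL):

    ui = true

    # Advertised address that clients should use to reach this server
    api_addr = "https://vault.example.com"

    storage "gcs" {
      bucket     = "kurandalabs-vault"
      ha_enabled = "true"
    }

    listener "tcp" {
      address     = "0.0.0.0:8200"
      tls_disable = 1
    }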

Create secrets from the service account key / vault configuration

Secrets are the Kubernetes mechanism for storing sensitive information within the cluster (note that by default they are only base64-encoded; enable encryption at rest if you need stronger guarantees). We use them to load the service account key and the Vault configuration. Run the following commands within the directory containing both files (here, named sa-creds.json and vault-config.hcl):
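The secret names below (vault-storage-creds and vault-config) are our own choice; the deployment references them by these names:

    kubectl create secret generic vault-storage-creds --from-file=sa-creds.json
    kubectl create secret generic vault-config --from-file=vault-config.hcl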

Create the k8s deployment

HashiCorp released an official Docker image that you can use for the deployment. As you can see below, we mount the two secrets as volumes. We also specify the path to the Google credentials using the GOOGLE_APPLICATION_CREDENTIALS environment variable.
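A sketch of such a deployment; the image tag, mount paths, and secret names are our assumptions, so adapt them to your setup. The official image loads every configuration file found under /vault/config when started with the server command:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: vault
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: vault
      template:
        metadata:
          labels:
            app: vault
        spec:
          containers:
            - name: vault
              image: vault:1.3.1        # official image; pin the version you tested
              args: ["server"]          # reads all config files under /vault/config
              env:
                - name: GOOGLE_APPLICATION_CREDENTIALS
                  value: /etc/gcp/sa-creds.json
              ports:
                - containerPort: 8200
                  name: api
              securityContext:
                capabilities:
                  add: ["IPC_LOCK"]     # lets Vault mlock memory so secrets never hit swap
              volumeMounts:
                - name: vault-config
                  mountPath: /vault/config
                  readOnly: true
                - name: vault-storage-creds
                  mountPath: /etc/gcp
                  readOnly: true
          volumes:
            - name: vault-config
              secret:
                secretName: vault-config
            - name: vault-storage-creds
              secret:
                secretName: vault-storage-creds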

On top of this deployment, we create a service to access the containers. Because we use Istio, we do not need to create a LoadBalancer service: we use Istio’s SNI routing and specify in the gateway that we accept HTTPS traffic for the Vault. You might want to use a service of type LoadBalancer instead, to expose your Vault server directly with an IP.
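A minimal service for the Istio case might look like this (switch on the commented type if you want a public IP instead):

    apiVersion: v1
    kind: Service
    metadata:
      name: vault
    spec:
      # type: LoadBalancer   # uncomment to expose Vault directly with an external IP
      selector:
        app: vault
      ports:
        - name: http
          port: 8200
          targetPort: 8200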

If everything went right, printing the container logs should display something like this in your shell:
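For instance with kubectl logs deployment/vault. The exact output varies with the Vault version, but a healthy start prints the server configuration banner and ends with a line like:

    ==> Vault server started! Log output will stream in below: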