Unlocking the secrets of the Vault Open Cloud Service on Kubernetes

By Rob Szumski

In December, CoreOS shipped Tectonic 1.8, the latest release of our enterprise Kubernetes platform, which includes a new catalog of Open Cloud Services. These industry-first services, designed to run on top of and fully integrate with Kubernetes, offer the same near-effortless operations customers have come to expect from cloud providers, while avoiding the pitfalls of cloud vendor lock-in.

The first three services offered in the catalog are etcd, Prometheus, and Vault. In this first post in a series examining these services, we'll take a deeper dive into the Vault secrets management solution and how the Vault Open Cloud Service can help solve some of the thornier challenges of developing and deploying distributed applications.

The cloud authentication conundrum

Authentication is the backbone of modern, distributed applications. We are long past the days of accessing an unauthenticated MySQL database over localhost. In a modern application, every user interaction, every API call to another service, and every database query requires a root of trust in order to operate securely. Supplying these secrets, and restricting each workload to only the secrets it absolutely needs, is the hardest part of application configuration.

Moving to the cloud has exacerbated these authentication challenges due to the rise in the amount and dynamism of infrastructure under management. The major cloud vendors do provide VM-based Identity and Access Management (IAM) offerings, but while these are convenient to use, they aren't container-native. Even worse, coupling your application to one of these proprietary APIs locks you to a single vendor's cloud, negating the portability benefit of containers.

On the surface, Kubernetes secrets might seem like a perfect solution to this problem. Unlike cloud IAM, Kubernetes secrets are designed to be granular enough to be scoped to individual containerized workloads. But realistically there are times when a workload or the cluster itself will need to use existing, external APIs, such as Amazon's Security Token Service (STS). That's where the Vault Open Cloud Service comes in. It offers a solution to our authentication problem that's convenient to use, works across any provider, and ties directly to a container's identity that is already established in Kubernetes.

Enter the Vault of secrets!

HashiCorp's Vault project is a category leader in secure secrets management, including rotation, leasing, and revocation of secrets. Vault has a robust open source community, which makes it a safe bet as an intermediation layer between cloud IAM and your applications. Or, you can skip cloud IAM altogether and rely on Vault and Kubernetes on their own.

Deploying Vault like this has a downside, however. The convenience of using cloud IAM comes from having someone else manage the software for you. When you deploy Vault on top of Kubernetes, it's up to you to handle maintenance tasks like installation, scaling, security updates, backup and restore, and so on. That additional operational overhead is often enough to make the cloud IAM option attractive, even if it means you'll have a hard time porting your application to a new cloud later.

Of course, the big cloud vendors don't assign individual human operators to manage your software for you. Rather, they do it through automation – and that's how you should think about Open Cloud Services. We designed Open Cloud Services to deliver a degree of automation similar to what the cloud vendors provide, without locking you in to proprietary software or vendor-owned data centers.

Open Cloud Services are both container-native and first-class citizens of Kubernetes. What's more, they package and act on operational knowledge of all the components necessary to successfully deploy a service in production. When you click “Create new Vault,” for example, an etcd cluster is created for storage, TLS certificates are generated and stored as Secrets, and Services are created that route traffic to the components.
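As a sketch of what that automation means in practice, the resource below shows roughly what a "Create new Vault" click corresponds to under the hood. The field names and API version are illustrative assumptions based on the open source vault-operator, not the exact Tectonic schema:

```yaml
# Hypothetical VaultService custom resource. Submitting a resource
# like this is what the console click amounts to; the operator then
# expands it into an etcd storage cluster, TLS Secrets, and Services.
apiVersion: vault.security.coreos.com/v1alpha1
kind: VaultService
metadata:
  name: example-vault
  namespace: tools
spec:
  nodes: 2            # active/standby Vault pods for high availability
  version: "0.9.1-0"  # Vault image version managed by the operator
```

Because the desired state lives in a declarative resource, scaling or upgrading Vault becomes an edit to this object rather than a manual runbook.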

Tying Vault to container identity

Setting up Vault and securely storing secrets within it is just one half of our authentication problem. The other half is restricting access to the subset of secrets required for our application. Fortunately, Kubernetes contains a feature called Service Accounts, which are software-centric credentials that can be attached to Roles and bound to your application Pods. Using Kubernetes Service Accounts as the source of identity is both container-native and provider-agnostic, so it suits our purpose well.
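As a concrete illustration of that identity model, here is what a dedicated identity for an application might look like. All names here are placeholders for this example:

```yaml
# A ServiceAccount attached to an app's Pods, plus a Role and
# RoleBinding that limit what the workload can do in its namespace.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: payments-app
  namespace: production
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: payments-app
  namespace: production
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-app
  namespace: production
subjects:
- kind: ServiceAccount
  name: payments-app
roleRef:
  kind: Role
  name: payments-app
  apiGroup: rbac.authorization.k8s.io
```

A Pod picks up this identity simply by setting `serviceAccountName: payments-app` in its spec, with no cloud-specific configuration involved.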

Better still, Vault has an authorization plugin that enables it to tie Service Accounts to Vault access policies, and this plugin can be used with the Vault Open Cloud Service. For more complex applications, multiple Roles and Service Accounts can be created for the desired level of granularity.
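Configuring that plugin is a one-time setup step performed against the Vault server. The following is a sketch using the standard Vault Kubernetes auth method and recent Vault CLI syntax; the role, namespace, and policy names are illustrative:

```shell
# Enable the Kubernetes auth method and tell Vault how to validate
# ServiceAccount tokens against the cluster's API server.
vault auth enable kubernetes

vault write auth/kubernetes/config \
    kubernetes_host=https://kubernetes.default.svc \
    kubernetes_ca_cert=@ca.crt

# Map a specific ServiceAccount (in a specific namespace) to a
# Vault policy, with a bounded token lifetime.
vault write auth/kubernetes/role/payments-app \
    bound_service_account_names=payments-app \
    bound_service_account_namespaces=production \
    policies=payments-read \
    ttl=1h
```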

The resulting combination provides an API to store, rotate, and revoke credentials through Vault, and to connect them to the applications (and RBAC Roles) already modeled in Kubernetes. This means our application can start up, present its Service Account credentials to Vault, and receive secrets scoped down to the desired subset, all through automation.
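At runtime, that startup exchange is a single call: the Pod trades the ServiceAccount token that Kubernetes mounts into every container for a scoped Vault token. A sketch, assuming the role name from the configuration step:

```shell
# Exchange the Pod's mounted ServiceAccount JWT for a Vault token
# limited to the policies bound to the "payments-app" role.
vault write auth/kubernetes/login \
    role=payments-app \
    jwt=@/var/run/secrets/kubernetes.io/serviceaccount/token
```

The returned Vault token can then be used to read only the secrets the bound policy allows, and it expires on its own when the lease runs out.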

Tying it all together

There is more than one way to integrate Vault with Kubernetes clusters. Below are two example architectures for using the Vault Open Cloud Service within an enterprise, both of which use the authorization plugin.

Running a Vault cluster per application

In the above architecture, a Vault instance dedicated to a single app team is running in a tools namespace, alongside production, staging, and so on. A Service Account is used for authentication and authorization to view production Secrets.

This architecture allows each engineering team to manage and configure Vault to their liking, with the Open Cloud Service providing automated operations for high availability and secure configuration.

SRE provides Vault for the cluster

On the other hand, if you have a centralized site reliability engineering (SRE) team that wants to manage and expose Vault to many namespaces, the Open Cloud Service can operate in this fashion as well. This method would employ a larger set of Roles, one mapped to each individual team. The Vault keyspace could also be subdivided as needed.

Exposing Vault outside of the Tectonic cluster is also straightforward using the built-in Ingress routing. All that is needed are TLS certificates to secure the connection to the cluster.
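An Ingress for this might look like the following. The hostname, Secret name, and backend Service name are placeholders, and the API version reflects Kubernetes releases current at the time of writing:

```yaml
# Illustrative Ingress terminating TLS for the Vault Service so
# clients outside the cluster can reach it securely.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: vault
  namespace: tools
spec:
  tls:
  - hosts:
    - vault.example.com
    secretName: vault-tls   # TLS certificate stored as a Secret
  rules:
  - host: vault.example.com
    http:
      paths:
      - backend:
          serviceName: example-vault
          servicePort: 8200  # Vault's default listener port
```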

Using the Vault Open Cloud Service on Tectonic

If you'd like to see these features in action, we've prepared a one-hour on-demand webinar that walks through the Open Cloud Services concept and features a live demo of the Vault Open Cloud Service.

Future Ambitions

While we're on the subject, work on Kubernetes Secrets and Service Accounts is far from over. The Kubernetes community has a number of proposals in flight to improve and extend these features to be even more powerful.

CoreOS is involved in these efforts through our leadership in Kubernetes SIG Authentication, but we would love to see you involved, too! SIG Auth meetings take place biweekly on Wednesdays at 10:00 a.m. PT – join in to see how you can best contribute.