Photo by Jon Moore on Unsplash

How do you ensure encrypted connections between your end users and your services running in Kubernetes?

The common solution to this problem is to use a reverse proxy or API Gateway. Clients send encrypted requests over TLS/SSL to the reverse proxy, which handles TLS termination.

In building TLS support into Ambassador, we’ve encountered a wide range of use cases for TLS/SSL termination in Kubernetes. This post outlines the common approaches that we’ve seen.

Terminating at an external load balancer

A common strategy for TLS/SSL termination and Kubernetes is to use an external load balancer such as an AWS Elastic Load Balancer or Google Cloud Load Balancer. This approach offloads the computation and management of TLS/SSL to another system.
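As a sketch of this approach on AWS, a Kubernetes Service of type LoadBalancer can ask the provisioned ELB to terminate TLS with a certificate from AWS Certificate Manager. The service name, port numbers, and certificate ARN below are placeholders:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-gateway   # hypothetical service name
  annotations:
    # Terminate TLS at the ELB using a certificate stored in AWS ACM
    # (the ARN below is a placeholder)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:us-east-1:123456789012:certificate/example
    # Traffic between the ELB and the pods is plain HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: http
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  type: LoadBalancer
  selector:
    app: my-gateway
  ports:
  - port: 443
    targetPort: 8080
```

With this configuration, TLS is handled entirely by AWS; the pods behind the Service never see encrypted traffic.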

While terminating TLS with an external load balancer may simplify your architecture, there are a few limitations:

Many load balancers don’t support redirecting HTTP to HTTPS, so you’ll still need something inside Kubernetes to handle the redirect.

You’ll still need an L7 load balancer / proxy behind the external load balancer to properly load balance traffic to your Kubernetes services.

Your L7 load balancer will need to support the PROXY protocol and/or the X-Forwarded-Proto header to correctly redirect from cleartext to TLS and to recover the original client IP address (otherwise, every request appears to come from the external load balancer!).
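On AWS, for example, the PROXY protocol can be enabled on the ELB through a Service annotation, so the in-cluster proxy can see the original client address. This is a minimal sketch; the service name is a placeholder:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-gateway   # hypothetical service name
  annotations:
    # Ask the AWS ELB to prepend the PROXY protocol header on every
    # connection, so the in-cluster L7 proxy can recover the real
    # client IP instead of seeing the load balancer's address
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
spec:
  type: LoadBalancer
  selector:
    app: my-gateway
  ports:
  - port: 443
    targetPort: 8080
```

Note that the L7 proxy behind the load balancer must also be configured to parse the PROXY protocol header, or it will reject the connections.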

TLS/SSL termination in the cluster

You can also terminate TLS/SSL in the cluster with a Kubernetes ingress or API Gateway. This approach gives you more control and flexibility (e.g., support client certificates or Server Name Indication). In addition, if your API Gateway supports Kubernetes, configuring the API Gateway can be done with the same workflow as your other services.
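For the ingress route, TLS termination is typically configured by pointing the Ingress at a Kubernetes Secret containing the certificate and key. A minimal sketch, using placeholder host, service, and secret names:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  tls:
  - hosts:
    - example.com
    # Secret of type kubernetes.io/tls holding tls.crt and tls.key
    secretName: example-com-tls
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service   # hypothetical backend service
            port:
              number: 80
```

The ingress controller watches this resource, loads the certificate from the Secret, and terminates TLS before forwarding plaintext traffic to the backend service.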

Using a project such as JetStack’s cert-manager simplifies the workflow for managing and provisioning TLS certificates.
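With cert-manager, a certificate is requested declaratively and the issued key pair lands in a Secret that an ingress or API Gateway can reference. A sketch of a Certificate resource, assuming a ClusterIssuer named letsencrypt-prod has been configured elsewhere:

```yaml
apiVersion: cert-manager.io/v1
kind: Certificate
metadata:
  name: example-com
spec:
  # cert-manager writes the issued certificate and key into this Secret
  secretName: example-com-tls
  dnsNames:
  - example.com
  issuerRef:
    # Assumes a ClusterIssuer configured separately (e.g. for Let's Encrypt)
    name: letsencrypt-prod
    kind: ClusterIssuer
```

cert-manager then handles the ACME challenge, stores the result in the named Secret, and renews the certificate automatically before it expires.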

Ambassador and TLS