A few weeks ago we released Heptio Contour, a Kubernetes ingress controller powered by Lyft’s Envoy proxy. In this post I explain why Heptio chose to use Envoy for Contour, how Contour and Envoy work together to provide an ingress controller for your Kubernetes cluster, and what’s next for Contour.

Why Envoy?

Why did we choose to build Contour on Envoy?

Envoy is a traffic proxy designed for dynamic configuration. This feature makes Envoy exceptionally programmable, and has made Envoy the default choice for projects like Istio Pilot.

Contour is not a competitor in the service mesh space, but Heptio recognizes the need for a robust, straightforward ingress controller that users can deploy when standing up a Kubernetes cluster. There are strong parallels between Envoy’s dynamic configuration support and the dynamic nature of the Kubernetes API server. Combined with Envoy’s active development community, this made Envoy a natural fit.

How do Contour and Envoy work together?

At a high level, Contour can be thought of as a translator of Kubernetes API objects to an Envoy configuration. Contour watches the Kubernetes API for Ingress, Service, and Endpoint objects, and translates them into JSON fragments that it serves up to Envoy. In turn, Envoy does what it does best: direct network traffic.

Envoy relies on a management server for its dynamic configuration. This management server provides a web service that responds to the REST API calls that Envoy makes to learn about vhosts to proxy, clusters to send traffic to, their members, and so on. A Contour container fills the role of management server for its sidecar Envoy container, watching the Kubernetes API and forwarding changes to Envoy.

What’s next for Contour?

It’s still early days for Contour, but I want to call out a few important improvements that will land soon.

TLS support. At the moment, serving your Ingress objects over HTTPS requires coordinating with your cloud provider or IT department to handle TLS offload before traffic reaches Contour. This is because Envoy doesn’t currently support Server Name Indication (SNI). SNI allows multiple HTTPS virtual hosts to share a single IP address, which is important for deploying multiple services behind a single ingress controller. Envoy’s SNI support is under way now, and when it lands we’ll roll HTTPS support into Contour.

Envoy v2 API. One of Contour’s current limitations exists because Envoy’s v1 REST API is polling based. Contour sets the polling interval very low (a few seconds) to reduce the latency between an object appearing in Kubernetes and Envoy applying the configuration change, but a short polling interval isn’t ideal for performance or resource utilization. Fortunately, the Envoy developers have a solution in the form of their new gRPC-based v2 API. Supporting the v2 API is planned as the next major feature for Contour. This will allow Contour to push changes to Envoy as soon as it learns of them, eliminating the polling entirely.

Support for more annotations. The Ingress object, which is still in beta, has limited expressiveness. While addressing those limits fully is the work of SIG-Networking, ingress controllers have long used annotations on Ingress and Service objects to express configuration that isn’t possible in the Ingress object itself. Adding support for at least a subset of the most popular annotations is high on our priority list, so that Kubernetes users can move transparently from their current ingress controller to Contour.

Check out Contour on GitHub. Try it out, and let us know what you think!