Authored by Dave Cheney, Steve Sloka, Nick Young, and James Peach

Exactly two years ago, we launched a new open-source ingress controller. Contour was the first ingress controller to take advantage of the growing popularity of Envoy to create a layer 7 load-balancing solution for Kubernetes users. Contour was also the first ingress controller to make use of Envoy’s gRPC API, which allowed configuration changes to be streamed to Envoy directly from Kubernetes.

Today, we are very happy to announce Contour 1.0. We’ve come a long way since that first commit two years ago! This blog post takes stock of our journey to Contour 1.0 and describes some of the new features in this milestone release.

The Journey from 0.1 to 1.0

Since its launch, Contour has held to a monthly release cadence. This pace allowed us to explore the problems that administrators and development teams were struggling with when deploying modern web applications on top of Kubernetes. In the process, we’ve iterated rapidly to introduce a compelling set of features. After two years, we’ve decided it’s time to move out of the shadow of perpetual beta and commit to a stable version of Contour and its CRD types.

Contour 1.0 moves the HTTPProxy CRD to version 1 and represents our commitment to evolve that API in a backward-compatible manner.

From Ingress to IngressRoute to HTTPProxy

The Kubernetes Ingress object has many limitations. Some of those are being addressed as part of SIG-Networking’s work to bring Ingress to v2, but other limitations that prevent Ingress from being used safely inside multi-tenant clusters remain unsolved.

In this spirit, Contour 0.6, released in September of 2018, introduced a new CRD, IngressRoute. IngressRoute was our attempt to address the issues preventing application developers from utilizing modern web development patterns in multi-tenant Kubernetes environments.

As part of preparations for bringing IngressRoute from beta to v1, it has been renamed HTTPProxy. This new name reflects the desire to clarify Contour’s role in the crowded Kubernetes networking space by setting it apart from other networking tools.

HTTPProxy brings with it two new concepts: inclusion and conditions. Like the transition from IngressRoute to HTTPProxy itself, both are evolutions of IngressRoute’s delegation model and its limited support for prefix-based matching.
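As a sketch of how the two concepts fit together (field names follow the projectcontour.io/v1 API; the namespaces and service names here are illustrative), a root HTTPProxy can include a child proxy in another namespace under a path prefix condition:

```yaml
# Root proxy: owns the virtual host and delegates /blog to another namespace.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: root
  namespace: default
spec:
  virtualhost:
    fqdn: www.example.com
  includes:
    - name: blog            # child HTTPProxy to include
      namespace: marketing
      conditions:
        - prefix: /blog     # the child's routes match under this prefix
---
# Child proxy: no virtualhost; its routes apply beneath the parent's prefix.
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: blog
  namespace: marketing
spec:
  routes:
    - services:
        - name: blog
          port: 80
```

The inclusion model lets a cluster administrator own the virtual host while a team in another namespace owns the routes beneath it, which is what makes HTTPProxy safe for multi-tenant clusters.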

Decoupled Deployment

Originally, our recommended deployment model for Contour was for the Contour and Envoy containers to share a pod, controlled by a DaemonSet. However, this model linked the lifecycles of Contour and Envoy. You could not, for instance, update Contour’s configuration without having to take an outage of the co-located Envoy instances.

In order to change this model, we needed to make it straightforward to secure your xDS traffic, because the TLS certificates and keys used for serving traffic must be transmitted to Envoy across the xDS connection. To make sure that these can’t leak out, we introduced secure gRPC over TLS and a subcommand, contour certgen, that generates a set of self-signed keypairs for you. When you install Contour 1.0 using our example deployment, this is all done automatically for you.

Splitting Contour and Envoy apart also allowed us to enable leader election for Contour. Kubernetes leader election is a standard feature that allows you to use Kubernetes primitives as a distributed locking mechanism, and designate one Contour as the leader. In Contour 1.0, this leadership is what controls which Contour instance can write status updates back to HTTPProxy objects.

Notable Features

Contour 1.0 supports outputting HTTP request logs in a configurable structured JSON format.
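As a sketch of what that looks like in Contour’s configuration file (key names follow Contour 1.0’s example configuration; the field list shown is a small, illustrative subset of what is supported — consult the documentation for the full set):

```yaml
# contour.yaml, mounted into the Contour pod as its configuration file.
accesslog-format: json   # switch from the default Envoy text format to JSON
json-fields:             # choose which fields appear in each log entry
  - "@timestamp"
  - "method"
  - "path"
  - "response_code"
  - "duration"
```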

Under certain circumstances, it is now possible to combine TLS pass-through on port 443 with plain HTTP served on port 80 from the same service. The use case for this feature is that the application on port 80 can provide a helpful message when clients connect without HTTPS.
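A hedged sketch of this combination (the service names are illustrative, and the exact circumstances under which routes can accompany a tcpproxy stanza are described in the HTTPProxy documentation):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: passthrough-example
spec:
  virtualhost:
    fqdn: app.example.com
    tls:
      passthrough: true        # TLS is terminated by the backend, not by Envoy
  tcpproxy:
    services:
      - name: secure-app       # receives the raw TLS stream arriving on port 443
        port: 8443
  routes:                      # plain HTTP on port 80 for the same virtual host
    - services:
        - name: secure-app     # e.g. responds with a "please use HTTPS" message
          port: 80
```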

One service per route can be nominated as a mirror. The mirror service receives a copy of each request sent to the non-mirror services. Mirrored traffic is fire-and-forget: any response from the mirror is discarded.
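Nominating a mirror is a one-line change on the route (service names here are illustrative):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: mirror-example
spec:
  virtualhost:
    fqdn: www.example.com
  routes:
    - services:
        - name: primary       # serves the real response to the client
          port: 80
        - name: shadow        # receives a copy of every request
          port: 80
          mirror: true        # responses from this service are discarded
```

This is useful for testing a new build against live traffic without risking client-visible errors.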

Per-route idle timeouts can be configured with the HTTPProxy CRD.
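For example, a route can override the idle timeout via its timeoutPolicy (the 60s value and service name are illustrative):

```yaml
apiVersion: projectcontour.io/v1
kind: HTTPProxy
metadata:
  name: timeout-example
spec:
  virtualhost:
    fqdn: www.example.com
  routes:
    - timeoutPolicy:
        idle: 60s             # close the connection after 60s with no activity
      services:
        - name: app
          port: 80
```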

What’s Next after 1.0?

What does the future hold for Contour? In a word (if this is a word): HTTPProxy. We plan to continue to explore the ways Contour can complement the work of application teams deploying web application workloads on Kubernetes, and the operations teams responsible for supporting those applications in production.

We will continue to follow the progression of discussions about Ingress v1 and v2 in the Kubernetes community, and at this time, we expect to add support for those objects when they become available.

We’re also mindful of the ever-present feature backlog we’ve accrued while delivering Contour 1.0. The backlog will require careful prioritization: we will have to walk the line between adding endless configuration knobs and recognizing that no two applications, deployments, or clusters are identical.

Contributor Shoutouts

We’re immensely grateful for all the community contributions that help make Contour even better! The lifeblood of any open source project is its community.

A strong community shows itself in how users communicate through Slack and GitHub issues, and in the contributions they make back to the project. We couldn’t have made it to 1.0 without you. Thank you!

Note: Stats above were taken on Oct. 31, 2019.

Join the Contour Community!