In a microservice architecture, services communicate with each other through Layer 7 (L7) protocols such as gRPC and HTTP. Since the network is not reliable (and services can go down!), managing L7 communication is critical for reliability and scale.

Smart RPC

The first efforts at managing L7 came around 2010 in the form of smart RPC libraries. The team at Twitter created Finagle, the team at Netflix created Hystrix, and Google later introduced gRPC. The library approach had a fundamental drawback, though: the libraries were difficult to port and maintain across multiple languages. This problem only grew as polyglot architectures became more common.

Smart Proxies

In 2013, Airbnb announced SmartStack, which combined HAProxy and Apache ZooKeeper. Quickly adopted by other companies such as Yelp, SmartStack was the spiritual ancestor of the modern-day service mesh. SmartStack was designed as a sidecar, deployed alongside each service. All service egress traffic was routed through SmartStack, which introduced client-side load balancing and resiliency patterns.
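The client-side load balancing and resiliency patterns that sidecars like SmartStack popularized can be illustrated with a toy sketch. This is not SmartStack's actual API; the function names and data shapes below are purely illustrative:

```python
import random

def pick_backend(backends):
    """Client-side load balancing: choose one healthy endpoint at random."""
    healthy = [b for b in backends if b["healthy"]]
    if not healthy:
        raise RuntimeError("no healthy backends")
    return random.choice(healthy)

def call_with_retries(backends, send, max_attempts=3):
    """Resiliency: retry a failed request, ejecting the failing endpoint
    so the next attempt is routed to a different backend."""
    last_error = None
    for _ in range(max_attempts):
        backend = pick_backend(backends)
        try:
            return send(backend)
        except ConnectionError as err:
            last_error = err
            backend["healthy"] = False  # passive health check: eject on failure
    raise last_error
```

The key idea is that this logic lives next to the application (in a sidecar process) rather than inside it, so every service gets it for free, regardless of language.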

2016

2016 was a major year for proxies and service meshes. In early 2016, Buoyant announced Linkerd, which implemented Finagle as a sidecar proxy. This model let non-JVM services benefit from Finagle's resilience and observability features without embedding a library, and it helped popularize the service mesh concept.

In September 2016, Lyft announced Envoy. Envoy, written in C++, provided rich L7 management capabilities (resilience, observability). Designed with microservices in mind, Envoy has a small memory footprint, broad protocol support (e.g., gRPC and HTTP/2), and zero-downtime reloads.
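To make this concrete, here is a minimal sketch of an Envoy static configuration that listens on one port and proxies HTTP to a single upstream cluster. The names, addresses, and ports are placeholders, and this uses the current v3 API rather than the API that shipped when Envoy was first announced:

```yaml
static_resources:
  listeners:
  - name: ingress_listener
    address:
      socket_address: { address: 0.0.0.0, port_value: 10000 }
    filter_chains:
    - filters:
      - name: envoy.filters.network.http_connection_manager
        typed_config:
          "@type": type.googleapis.com/envoy.extensions.filters.network.http_connection_manager.v3.HttpConnectionManager
          stat_prefix: ingress_http
          route_config:
            name: local_route
            virtual_hosts:
            - name: backend
              domains: ["*"]
              routes:
              - match: { prefix: "/" }
                route: { cluster: service_backend }
          http_filters:
          - name: envoy.filters.http.router
            typed_config:
              "@type": type.googleapis.com/envoy.extensions.filters.http.router.v3.Router
  clusters:
  - name: service_backend
    type: STRICT_DNS
    load_assignment:
      cluster_name: service_backend
      endpoints:
      - lb_endpoints:
        - endpoint:
            address:
              socket_address: { address: backend.local, port_value: 8080 }
```

In practice, most deployments replace these static resources with dynamic ones fetched from a control plane at runtime.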

The incumbents respond

NGINX and HAProxy weren’t going to take the challenge from Envoy Proxy lying down. NGINX released NGINX Plus R13 less than a year after Envoy was announced, adding a runtime API for dynamic configuration and traffic shadowing. HAProxy released 1.8 soon thereafter, adding support for hitless reloads (finally!), HTTP/2, and a runtime API.

The proxy landscape today

Envoy Proxy is now a full Cloud Native Computing Foundation project, with a broad and diverse community. Of the big three proxies, Envoy is the only project that does not have a dominant commercial vendor. (We've written about how this was one of the drivers for us to adopt Envoy in Ambassador.)

Envoy pioneered the use of dynamic configuration APIs (collectively known as xDS), and an ecosystem of open source projects built on Envoy has evolved. These projects generally function as so-called control planes that manage Envoy. Projects that use Envoy Proxy include Consul Connect, Istio, and Ambassador.

Summary

Managing L7 is critical to modern cloud-native applications.

HAProxy, NGINX, and Envoy Proxy are evolving to meet these new requirements.

With neutral governance and the fastest-growing community, Envoy Proxy looks to be the new standard for L7 proxies.

Most users don't use Envoy directly; they use a control plane such as Ambassador.

Learn More

Flynn gave a talk on this subject at DevOps Days Boston this year. You can check out the slides below.

If you have any questions or feedback on this topic, we’d love to hear it. Feel free to drop a line in the comments, join our Slack channel, or follow Flynn (@_flynn) on Twitter.

If you’d like to learn more about Envoy, check out "Envoy Proxy 101: What it is, and why it matters" on our resources page. To try Ambassador yourself, you can learn more at https://www.getambassador.io or visit the documentation to install Ambassador now.