Rotor: A Complete Control Plane for Envoy

Rotor now supports user-defined routes, custom domains, and gRPC

Today we’ve released an important update to Rotor. In addition to its existing integrations with Kubernetes, Consul, AWS, and more, Rotor now supports fully custom domains, routes, and protocols from a central configuration file. This means you can take full advantage of Envoy’s dynamic service discovery integrations without rewriting any of your existing Envoy listeners or routes. Simply put, Rotor is now the only full-featured control plane for Envoy with a full suite of service discovery integrations.

Why do you need a control plane? In modern environments, hosts don’t live for months or years. Cloud VMs may live for days, and Kubernetes pods may only live for minutes. Envoy’s dynamic configuration lets it stay up to date on what infrastructure is available and healthy. Modern apps at any scale need a standardized approach for dealing with this ever-changing infrastructure. Rotor bridges your service discovery and Envoy, turning discovery data into a control plane Envoy can act on.

But what about everything else? To unlock the full power of Envoy, Rotor’s new static configuration supports four specific use cases:

Custom domains and routes

Additional static clusters

gRPC and other cluster options

A path to a more dynamic control plane

Domains and Routes

Envoy defines the routes it can serve via the Listener Discovery Service (which host and port each domain is served on) and the Route Discovery Service (which paths within that domain map to each upstream). Previously, Rotor defined a default set of domains and routes, but these defaults rarely matched what customers had in production. Particularly when migrating from a web server or load balancer like NGINX or HAProxy, there are pre-existing routes that must be ported over to make everything work.

Rotor now picks up listeners and routes in the same format as Envoy’s static config files: point the ROTOR_XDS_STATIC_RESOURCES_FILENAME environment variable at a config file containing them. A simple set of routes might look like:

listeners:
- address:
    socket_address:
      address: 0.0.0.0
      port_value: 80
  filter_chains:
  - filters:
    - name: envoy.http_connection_manager
      config:
        codec_type: AUTO
        stat_prefix: ingress_http
        route_config:
          virtual_hosts:
          - name: backend
            domains:
            - "example.com"
            routes:
            - match:
                prefix: "/service/1"
              route:
                cluster: service1
            - match:
                prefix: "/service/2"
              route:
                cluster: service2
        http_filters:
        - name: envoy.router
          config: {}

The cluster names (service1 and service2) must match the label on those hosts (e.g. tbn_cluster: service1). See the Rotor documentation for more information on connecting your service discovery.
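
For instance, with the Kubernetes collector, Rotor groups pods into clusters by that label. Here’s a minimal sketch of a pod that would join service1 (the pod name, image, and port are hypothetical, not Rotor requirements):

apiVersion: v1
kind: Pod
metadata:
  name: service1-0            # hypothetical pod name
  labels:
    tbn_cluster: service1     # Rotor uses this label to assign the pod to a cluster
spec:
  containers:
  - name: app
    image: example.com/service1:latest   # hypothetical image
    ports:
    - containerPort: 8080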

Additional Static Clusters

In an ideal world, every service uses the same template, the same format, and behaves the same way. In a real app, there are sometimes exceptions that must be hardcoded. Maybe there’s a service that still lives in a different cloud provider, or one that hasn’t been added to service discovery yet. Rotor makes it easy to add a set of statically defined clusters. In the same file referenced by the ROTOR_XDS_STATIC_RESOURCES_FILENAME=<config file> environment variable, add a set of Envoy clusters under a clusters key:

# A cluster with a hardcoded DNS
clusters:
- name: service1
  connect_timeout: 0.25s
  type: LOGICAL_DNS
  lb_policy: ROUND_ROBIN
  http2_protocol_options: {}
  hosts:
  - socket_address:
      address: olddomain.example.com
      port_value: 8888

By default, static clusters are merged with any clusters read from service discovery. In the case of conflicts, the static file wins.
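
To hand Rotor the static file, set the environment variable on the Rotor process itself. Here’s a minimal sketch, assuming Rotor runs as a Kubernetes Deployment with the file mounted from a ConfigMap (the image tag, file path, and resource names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: rotor
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rotor
  template:
    metadata:
      labels:
        app: rotor
    spec:
      containers:
      - name: rotor
        image: turbinelabs/rotor:latest              # assumed image name
        env:
        - name: ROTOR_XDS_STATIC_RESOURCES_FILENAME
          value: /etc/rotor/static-resources.yaml    # points at the mounted file
        volumeMounts:
        - name: static-resources
          mountPath: /etc/rotor
      volumes:
      - name: static-resources
        configMap:
          name: rotor-static-resources               # holds the YAML shown above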

gRPC and Other Options

Sometimes it’s best to let all clusters be dynamically defined, but with a few overrides. But how do you set options on a cluster without knowing the cluster beforehand?

Templates! By setting a cluster_template in the static file defined by the ROTOR_XDS_STATIC_RESOURCES_FILENAME=<config file> variable, you can set options on every discovered cluster. For instance, setting http2_protocol_options: {} enables the HTTP/2 support that gRPC requires:

cluster_template:
  connect_timeout: 0.25s
  lb_policy: RING_HASH
  type: EDS
  http2_protocol_options: {}   # enable HTTP/2, which gRPC requires
  eds_cluster_config:
    eds_config:
      api_config_source:
        api_type: GRPC
        cluster_names:
        - tbn-xds
        refresh_delay: 120.000s

Now Rotor will override the dynamic clusters with these values, letting you use dynamic EDS with pre-set cluster configuration. In this example, the template points every cluster at Rotor for EDS while overriding the connection timeout, load-balancing policy, and refresh delay, and enabling the HTTP/2 support gRPC needs.
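
Note that the tbn-xds name in the template has to resolve to a cluster Envoy already knows about, so Envoy’s bootstrap config needs a static cluster pointing at Rotor. Here’s a minimal sketch, assuming Rotor is reachable at rotor.example.internal on port 50000 (both are assumptions; substitute your own address and Rotor’s configured xDS port):

static_resources:
  clusters:
  - name: tbn-xds
    connect_timeout: 1s
    type: STRICT_DNS
    lb_policy: ROUND_ROBIN
    http2_protocol_options: {}   # xDS over gRPC requires HTTP/2
    hosts:
    - socket_address:
        address: rotor.example.internal   # hypothetical Rotor address
        port_value: 50000                 # assumed xDS port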

A Path to a Dynamic Control Plane

While specifying routes in a config file is a great place to start, there are three paths to make your routing definitions as dynamic and flexible as your endpoint definitions:

1. Add your config file to version control and periodically re-deploy Rotor with config changes. Envoy will pick up the change and reload the configuration with no downtime.

2. Use Rotor as a starting point for your own dynamic control plane. For instance, you could store the routes in a MySQL database and provide an API to change them.

3. Get the best of both worlds and use Houston, Turbine Labs’ management plane. Just add a Houston API key (see the sketch below) and Rotor will serve the routing configuration you define in Houston. You can even run multiple Rotors to bridge multiple types of infrastructure, which helps with projects like migrating safely to Kubernetes.
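
For the Houston option, configuration is again just environment variables on Rotor. Here’s a sketch of the relevant container env, assuming the ROTOR_API_KEY and ROTOR_API_ZONE_NAME variable names (verify them against the Rotor README for your version):

env:
- name: ROTOR_API_KEY
  value: <your Houston API key>   # placeholder
- name: ROTOR_API_ZONE_NAME
  value: production               # hypothetical zone name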

Check out the project on GitHub, or sign up for Houston today!