Managing Ingress Controllers on Kubernetes: Part 2

Making it Actually Happen: The Various Ingress Controllers


Creating an Ingress resource doesn’t actually establish any routing capability. For that, we need an Ingress controller. Like the other controllers deployed on a Kubernetes cluster, an Ingress controller watches for changes to Ingress resources, and then uses the rule definitions to route incoming traffic.

Ingress controllers are usually implemented in-cluster as pods, and are provided by the community and by third parties. As a result, there are plenty of different implementations, some of which provide similar capabilities. This means there is a choice to be made, and where there’s choice, it can be difficult to cut through to the best option.

So to help you make that choice, let’s compare some of the more popular Ingress controllers, starting with the Nginx Ingress controller.

The Nginx Ingress Controller

The first Ingress controller we’ll take a look at is the Nginx-based Ingress controller maintained by the Kubernetes community (hereinafter referred to as the k8s-ingress-nginx controller). This is not to be confused with the Nginx-based Ingress controller maintained by the Nginx community, or the commercially available Ingress controller based on Nginx Plus. They are similar in nature, but the k8s-ingress-nginx controller supports more advanced configurations than the Nginx community-supported controller, whilst the Nginx Plus controller must be licensed with a subscription.

The k8s-ingress-nginx controller is based on the open source version of the Nginx reverse proxy, but with some additional third-party modules compiled in. Let’s look at some of the characteristics that set it apart.

Highly Configurable

Nginx is a highly-configurable reverse proxy, and the many different configuration options are directly available as part of the k8s-ingress-nginx controller. The controller comes with a comprehensive default configuration embodied in the controller itself, which can be overridden in several ways.

To fine-tune the Nginx configuration applied to traffic matching the rules in any Ingress resource definition associated with a k8s-ingress-nginx controller, the configuration must be defined and deployed as a ConfigMap, which is referenced by the controller’s --configmap command line argument in its Deployment resource definition. For example, to change the default keep-alive value from 75 to 60, the ConfigMap must contain the following snippet:

data:
  keep-alive: "60"
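
To put that snippet in context, a complete ConfigMap might look like the following minimal sketch. The name, namespace and flag value below mirror the controller’s standard deployment manifests, but treat them as illustrative:

apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration      # illustrative name
  namespace: ingress-nginx
data:
  keep-alive: "60"               # overrides the default of 75 seconds

The controller’s Deployment would then reference it with an argument along the lines of --configmap=$(POD_NAMESPACE)/nginx-configuration.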

If the preference is to provide the configuration customization at the Ingress resource definition level, rather than at the controller level, annotations can be applied to a specific Ingress resource definition, specifying the relevant configuration. Let’s assume we have an Ingress resource definition that is one of many associated with a k8s-ingress-nginx controller, and we need to authenticate users accessing a service backend defined in a rule; we might use the following annotations as part of the Ingress resource definition:

metadata:
  name: important-service
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: users
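
For context, here is how those annotations might sit inside a complete Ingress definition. This is a sketch only: the host, path and backend are illustrative, and it assumes a Secret named users exists in the same namespace containing htpasswd data (conventionally under a key named auth):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: important-service
  annotations:
    nginx.ingress.kubernetes.io/auth-type: basic
    nginx.ingress.kubernetes.io/auth-secret: users
spec:
  rules:
  - host: important.example.com  # illustrative host
    http:
      paths:
      - path: /
        backend:
          serviceName: important-service
          servicePort: 80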

The final means of applying configuration is through the use of a template file, specifying configuration using the Golang text/template syntax. This is a more advanced technique for customization, but it is vulnerable to breaking changes in future controller updates.

All in all, the k8s-ingress-nginx controller provides a comprehensive Ingress capability, with a sane default configuration, but with the ability to fine-tune that configuration to suit practically any use case.

Efficient Configuration Reload

In a dynamic, cloud-native environment such as Kubernetes, workloads get deployed, scaled and removed continually. To route traffic to the pods that make up a service, the k8s-ingress-nginx controller uses the service’s endpoints instead of its virtual IP address. This means that the underlying Nginx configuration requires continual refreshing, so that the controller knows where to route traffic. If constant endpoint changes forced an Nginx reload on every occasion, this could significantly affect the performance of the controller, and its ability to efficiently route Ingress traffic.

Luckily, the k8s-ingress-nginx controller side-steps the need for a reload on endpoint changes, thanks to the inclusion of the OpenResty lua_nginx_module. The controller watches for endpoint changes and provides the updated list to a Lua handler, which determines which backend peer, or endpoint, an incoming request should be sent to. Nginx then takes care of routing the request.

In a cluster with a large deployment of services, removing the need for constant configuration reloads makes the k8s-ingress-nginx controller a viable option for managing Ingress. It should be noted that whilst dynamic configuration reloading has been present in the k8s-ingress-nginx controller since version 0.12, it has only been turned on by default since version 0.18.

Support for TCP/UDP Services

Kubernetes Ingress is generally limited to layer 7 HTTP routing, and doesn’t provide any ‘native’ capability to route traffic based on addressed port and protocol. However, the k8s-ingress-nginx controller can be configured with a ConfigMap for both TCP and UDP-based services, which enables it to map a client-facing port to a namespaced service and port. The ConfigMap for each protocol needs to be supplied as an argument to the controller at start-up. A snippet from a ConfigMap to expose a service might look something like this:

data:
  6379: "default/redis:6379"
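
To put that in context, a minimal sketch of the full ConfigMap might look like this (the name and namespace are illustrative). It would be supplied to the controller via its --tcp-services-configmap start-up flag, with an equivalent --udp-services-configmap flag for UDP:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services             # illustrative name
  namespace: ingress-nginx
data:
  6379: "default/redis:6379"     # client-facing port 6379 maps to the redis service in the default namespace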

Technically, this is a workaround for the lack of support for layer 4 load balancing in the Kubernetes Ingress API object, but it is nevertheless a useful feature of the k8s-ingress-nginx controller.

Session Affinity

We’ve already discussed that the k8s-ingress-nginx controller uses the Endpoints API to watch for changes to pods, instead of routing traffic based on a service’s virtual IP address, which brings some useful benefits. Bypassing the normal proxying of service requests via kube-proxy allows for the use of session affinity for client connections, which wouldn’t be possible when routing to the service’s virtual IP.
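
Cookie-based affinity is enabled with annotations on the Ingress resource definition. As a rough sketch (the cookie name is an arbitrary choice):

metadata:
  annotations:
    nginx.ingress.kubernetes.io/affinity: "cookie"
    nginx.ingress.kubernetes.io/session-cookie-name: "route"  # illustrative cookie name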

Contour Ingress Controller

Contour is an open source Ingress controller that makes use of another well-regarded cloud-native proxy: Envoy. Envoy has a number of applications beyond being just a reverse proxy, but it is ideally suited to the task of being an Ingress controller.

Typically, the Contour Ingress controller is deployed as a single pod comprising an Envoy container and a Contour container, which acts as Envoy’s ‘management server’. The purpose of the Contour component is two-fold. Firstly, it watches for changes to the various Kubernetes API objects associated with Ingress, and translates them into the equivalent resource objects used by Envoy. Secondly, it responds to polling by Envoy, implementing the various discovery service (xDS) APIs that Envoy uses, and returning the configuration Envoy needs. Contour serves the xDS APIs over gRPC, and Envoy consumes them. A rough sketch of this two-container deployment follows, after which we’ll look at the features that set this Ingress controller apart.
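
This is a minimal sketch only, loosely based on Contour’s example manifests; the image references, commands, and the bootstrap step are assumptions rather than a definitive deployment:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: contour
spec:
  replicas: 2
  selector:
    matchLabels:
      app: contour
  template:
    metadata:
      labels:
        app: contour
    spec:
      initContainers:
      - name: envoy-initconfig           # assumed step: writes Envoy’s bootstrap configuration
        image: gcr.io/heptio-images/contour:latest
        command: ["contour", "bootstrap", "/config/contour.json"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      containers:
      - name: contour                    # the ‘management server’, serving the xDS APIs over gRPC
        image: gcr.io/heptio-images/contour:latest
        command: ["contour", "serve", "--incluster"]
      - name: envoy                      # the data plane, polling Contour for configuration
        image: envoyproxy/envoy:latest
        command: ["envoy", "-c", "/config/contour.json", "--service-cluster", "cluster0", "--service-node", "node0"]
        volumeMounts:
        - name: contour-config
          mountPath: /config
      volumes:
      - name: contour-config
        emptyDir: {}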

Object Translation

Kubernetes and Envoy do not speak the same language. Kubernetes relies on resources like Ingress, Services, Endpoints and Secrets. The Contour Ingress controller achieves dynamic configuration changes by performing the translation between these Kubernetes resources and the corresponding Envoy resources.

An Envoy cluster is roughly equivalent to a Kubernetes Service, and the communication between Contour and Envoy is carried out using Envoy’s cluster discovery service (CDS) API. The listener discovery service (LDS) API is used to configure Envoy listeners, with data provided by Ingress objects and Secret objects containing TLS artifacts. Ingress objects also provide the routing configuration, which is communicated via the route configuration discovery service (RDS) API. Finally, Kubernetes Endpoint objects are translated into Envoy ClusterLoadAssignment objects using the endpoint discovery service (EDS) API.

API Driven Configuration

Whilst the k8s-ingress-nginx controller avoids configuration reloads with its inclusion of the OpenResty lua_nginx_module, the Contour Ingress controller benefits from Envoy’s dynamic, API-driven configuration. This means that Envoy doesn’t require configuration reloads, which ensures that the controller remains performant, even when there is continual change in configuration, as we would expect in a cloud-native, containerized environment.

IngressRoute Custom Resource Definition

Whilst the Contour Ingress controller works perfectly well with the standard Ingress API resource, it can also make use of a custom resource called an IngressRoute. The standard Ingress API resource was introduced in Kubernetes 1.1, towards the end of 2015. It has changed very little since, and doesn’t fully model the behavior required for many Ingress scenarios. As a result, many Ingress controllers make use of annotations in order to overcome the deficiencies of the default Ingress API resource.

To improve on this, the IngressRoute CRD aims to enrich the Kubernetes Ingress experience with a more comprehensive capability. Some of the additional capabilities include: definition of weighting and load balancing strategies, load balanced routing to multiple upstream services for a specific route, health checking with upstream service granularity, and much more.
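
For illustration, an IngressRoute that load balances a route across two upstream services with different weights might look like the following sketch (the API group reflects the CRD as originally shipped with Contour; names, host and weights are illustrative):

apiVersion: contour.heptio.com/v1beta1
kind: IngressRoute
metadata:
  name: important-service
  namespace: default
spec:
  virtualhost:
    fqdn: important.example.com  # illustrative host
  routes:
  - match: /
    services:
    - name: important-service-v1 # receives 90% of the traffic
      port: 80
      weight: 90
    - name: important-service-v2 # canary receives the remaining 10%
      port: 80
      weight: 10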

The API driven configuration provided by Envoy is a perfect fit for an Ingress controller, and the capabilities afforded by the IngressRoute CRD make the Contour Ingress controller a formidable resource for managing Ingress to a Kubernetes cluster.

Let’s now take a look at the final Ingress controller in this series, the Traefik Ingress Controller:

Traefik Ingress Controller

It won’t be a surprise to learn that the Traefik Ingress controller is based on the Traefik reverse proxy and load balancer. Much like Envoy, Traefik was conceived during the recent container-driven, cloud-native movement. It is open source software with commercial support provided by its creators, Containous. Traefik is general purpose in nature, and supports a variety of different infrastructure environments, or ‘providers’, in addition to Kubernetes.

The Traefik Ingress controller works much like the Ingress controllers already discussed, making use of standard Ingress API objects and a controller that is contained within a binary, which is deployed from a minimal Docker image. The controller dynamically updates its configuration based on watched Kubernetes API objects, and is provided with a default, built-in Traefik configuration, which can be overridden if desired.

The Traefik Ingress controller implements the behavior associated with the Ingress API object, providing the host and path-based routing for services deployed in the cluster. As a comprehensive reverse proxy, however, the Traefik Ingress controller offers many interesting configuration options via a set of optional annotations. Let’s see some of the more interesting capabilities.

Rate Limiting

The Traefik Ingress controller can limit the number of client requests sent to a backend service over a given time period by specifying both an average rate and a burst rate. This limit is applied per client IP address. The rate limit is set using the traefik.ingress.kubernetes.io/rate-limit annotation, whose YAML value specifies the time period, average, and burst number of requests. This is particularly useful for protecting the backend service from being overwhelmed by requests, and can offer some protection against denial-of-service attacks.
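
As a sketch, the annotation might look like this on the Ingress resource definition (the period and counts are illustrative; client.ip is the extractor function that groups requests by client IP address):

metadata:
  annotations:
    traefik.ingress.kubernetes.io/rate-limit: |
      extractorfunc: client.ip   # apply limits per client IP address
      rateset:
        api-rate-limit:          # illustrative rate set name
          period: 10s
          average: 100           # sustained requests allowed per period
          burst: 200             # short-term ceiling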

Circuit Breaking

Circuit breaking is a fairly standard feature of most modern proxies, and allows a system to shut down the route to a backend when the backend is considered to be in a failed or failing state. The Traefik Ingress controller provides this capability with an expression that defines latency and/or network errors and/or HTTP response code ratios. The circuit breaker expression is specified using the traefik.ingress.kubernetes.io/circuit-breaker-expression annotation, provided as part of the backend Service definition.
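
A sketch of what this might look like on the backend Service, using an illustrative expression that trips the breaker when more than half of requests result in network errors:

apiVersion: v1
kind: Service
metadata:
  name: important-service
  annotations:
    traefik.ingress.kubernetes.io/circuit-breaker-expression: NetworkErrorRatio() > 0.5
spec:
  selector:
    app: important-service
  ports:
  - port: 80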

Session Affinity

Sticky sessions can be enabled by setting the traefik.ingress.kubernetes.io/affinity annotation to “true” on the backend Service. A further annotation allows for setting the name of the cookie.
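
A minimal sketch, with an arbitrary cookie name:

metadata:
  annotations:
    traefik.ingress.kubernetes.io/affinity: "true"
    traefik.ingress.kubernetes.io/session-cookie-name: "sticky"  # illustrative cookie name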

Traffic Splitting with Service Weights

The traefik.ingress.kubernetes.io/service-weights annotation allows traffic addressed to a specific host and path to be split between different backend services, according to weights defined per service. The value of the key is YAML specifying a percentage for each contributing service. The perfect use case for traffic splitting in this way is performing canary releases, where a proportion of the traffic can be directed to the canary service, with gradual increments as confidence is gained in the newer release. Service weights are a feature of Traefik’s upcoming 1.7 release; they are not available in releases prior to this.
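
A sketch of a canary split might look like this (service names and percentages are illustrative; both services would also be referenced as backends in the Ingress rules):

metadata:
  annotations:
    traefik.ingress.kubernetes.io/service-weights: |
      important-service-canary: 10%  # the newer release receives a tenth of the traffic
      important-service: 90%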

This is just a subset of the features that are provided by the Traefik Ingress controller, with many more advanced features available with the use of Ingress or Service object annotations.

Written by Puja Abbassi — Developer Advocate @ Giant Swarm

Ready to get your cloud native project into production? Simply request your free trial at https://giantswarm.io/