Ingress

Real-life cluster setup

When we get into the space of managing more than one web server with multiple different sets of pods, the services mentioned above turn out to be quite complex to manage in most real-life cases.

Let’s review the example we had before (2 APIs, redis and frontend) and imagine that the APIs have more consumers than just the frontend service, so they need to be exposed to the open internet.

Requirements are as follows:

- frontend lives on www.example.com
- API 1 is the search API at www.example.com/api/search
- API 2 is the general (everything else) API that lives on www.example.com/api

Setup needed using the above services:

- ClusterIP service to make components easily accessible to each other within the cluster
- NodePort service to expose some of the services outside of the node, or maybe
- LoadBalancer service if in the cloud, or
- proxy server like nginx, to connect and route everything properly (30xxx ports to port 80, different services to paths on the proxy, etc.)
- Deciding where to implement SSL and maintaining it across services

So:

ClusterIP is necessary, and we know it has to be there: it is the only one handling internal networking, so it is as simple as it can be.

External traffic, however, is a different story. We have to set up at least one service per component, plus one or more supplementary services (load balancers and proxies), to meet the requirements.

The number of configs/definitions to be maintained skyrockets, entropy rises, and the infrastructure setup drowns in complexity…

Solution

A Kubernetes cluster has ingress as a solution to the above complexity. Ingress is essentially a layer 7 load balancer.

A layer 7 load balancer is the name for a type of load balancer that covers layers 5, 6 and 7 of the networking stack: session, presentation and application.

Ingress can provide load balancing, SSL termination, and name-based virtual hosting.

It covers HTTP and HTTPS.

For anything other than HTTP and HTTPS, a service will have to be published differently: through a special ingress setup, or via a NodePort or LoadBalancer. But that is now a single place, one-time configuration.
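As a sketch of what SSL termination at the ingress looks like, the ingress resource can reference a TLS certificate stored in a Kubernetes Secret (the names example-tls and frontend-service here are illustrative assumptions, not from the setup below):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-example-ingress
spec:
  tls:
  - hosts:
    - www.example.com
    # Secret of type kubernetes.io/tls, created beforehand, e.g.:
    # kubectl create secret tls example-tls --cert=tls.crt --key=tls.key
    secretName: example-tls
  backend:
    serviceName: frontend-service
    servicePort: 8082
```

With this in place, the ingress controller terminates HTTPS at the edge and forwards plain HTTP to the backing service inside the cluster.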

Ingress setup

In order to set up ingress, we need two components:

- Ingress controller — a component that manages ingress based on provided rules
- Ingress resources — ingress HTTP rules

Ingress controller

There are a few options to choose from, among them nginx, GCE (Google Cloud) and Istio. Only two are officially supported by k8s for now: nginx and GCE.

We are going to go with nginx as the ingress controller solution. For this we, of course, need a new deployment.

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller
spec:
  replicas: 1
  selector:
    matchLabels:
      name: nginx-ingress
  template:
    metadata:
      labels:
        name: nginx-ingress
    spec:
      containers:
      - name: nginx-ingress-controller
        image: quay.io/kubernetes-ingress-controller/nginx-ingress-controller
        args:
        - /nginx-ingress-controller
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
```

Deploy a ConfigMap in order to control ingress parameters more easily:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration
```

Now, with the basic deployment in place and a ConfigMap to make it easier for us to control the ingress parameters, we need to set up a service to expose the ingress to the open internet (or some other, smaller network).

For this we set up a NodePort service with a proxy/load balancer on top (bare-metal/on-prem example), or a LoadBalancer service (cloud example).

In both cases, there is a need for a layer 4 and a layer 7 load balancer:

- NodePort, possibly with a custom load balancer on top, as L4, and ingress as L7 (bare-metal/on-prem)
- LoadBalancer as L4 and ingress as L7 (cloud)

Layer 4 load balancer — directs traffic at the transport layer, based on IP addresses and TCP ports; also referred to as a transport layer load balancer.

NodePort for the ingress, in YAML, to illustrate the above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress
spec:
  type: NodePort
  ports:
  - targetPort: 80
    port: 80
    protocol: TCP
    name: http
  - targetPort: 443
    port: 443
    protocol: TCP
    name: https
  selector:
    name: nginx-ingress
```

This NodePort service gets deployed to each node containing the ingress deployment, and the load balancer then distributes traffic between the nodes.

What separates an ingress controller from a regular proxy or load balancer is the additional underlying functionality that monitors the cluster for ingress resources and adjusts nginx accordingly. For the ingress controller to be able to do this, a service account with the right permissions is needed.

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nginx-ingress-serviceaccount
```

The above service account needs specific permissions on the cluster and namespace for the ingress to operate correctly. For the particulars of the permission setup on an RBAC-enabled cluster, see the RBAC document in the official nginx ingress docs.
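As an illustrative, abridged sketch of what those permissions look like (the real rule list is longer; the names below are assumptions modeled on the official docs), the service account is typically granted read/watch access via a ClusterRole and a ClusterRoleBinding:

```yaml
# Abridged example — see the official nginx ingress RBAC docs
# for the complete rule set.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: nginx-ingress-clusterrole
rules:
- apiGroups: [""]
  resources: ["services", "endpoints", "secrets", "pods"]
  verbs: ["get", "list", "watch"]
- apiGroups: ["extensions"]
  resources: ["ingresses"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: nginx-ingress-clusterrole-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: nginx-ingress-clusterrole
subjects:
- kind: ServiceAccount
  name: nginx-ingress-serviceaccount
  namespace: default
```

The read/watch verbs are what let the controller observe ingress resources and reconfigure nginx when they change.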

When we have all the permissions set up, we are ready to start working on our application ingress setup.

Ingress resources

Ingress resource configuration lets you fine-tune (or rather, fine-route) incoming traffic.

Let’s first take a simple API example. Assuming we have just one set of pods deployed and exposed through a service named simple-api-service on port 8080, we can create simple-api-ingress.yaml:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: simple-api-ingress
spec:
  backend:
    serviceName: simple-api-service
    servicePort: 8080
```

When we run kubectl create -f simple-api-ingress.yaml, we set up an ingress that routes all incoming traffic to simple-api-service.

Rules

Rules provide configuration to route incoming traffic based on certain conditions: for example, routing traffic to different services within the cluster based on a subdomain or a path.
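As a sketch of the subdomain case, routing by host uses the host field of a rule (the hostnames and service names here are illustrative assumptions):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: host-based-ingress
spec:
  rules:
  # Each rule matches on the Host header of the incoming request.
  - host: www.example.com
    http:
      paths:
      - backend:
          serviceName: frontend-service
          servicePort: 8082
  - host: api.example.com
    http:
      paths:
      - backend:
          serviceName: api-service
          servicePort: 8080
```

Path-based routing, used in the example below, works the same way but matches on the request path instead of the host.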

Let us now get back to the initial example:

- frontend lives on www.example.com and serves everything not under /api
- API 1 is the search API at www.example.com/api/search
- API 2 is the general (everything else) API that lives on www.example.com/api

Since everything is on the same domain, we can handle it all through one rule:

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: proper-api-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api/search
        backend:
          serviceName: search-api-service
          servicePort: 8081
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 8080
      - path: /
        backend:
          serviceName: frontend-service
          servicePort: 8082
```

There is also a default backend that serves default pages (like 404s), and it can be deployed separately. In this case we will not need it, since the frontend will cover 404s.
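If we did want a dedicated default backend, one sketch is to point the ingress spec’s top-level backend at it while rules handle everything else (the service name default-http-backend is an assumption, not part of the setup above):

```yaml
# Illustrative sketch: requests matching no rule fall through
# to the default backend service.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-with-default
spec:
  backend:
    serviceName: default-http-backend
    servicePort: 80
  rules:
  - http:
      paths:
      - path: /api
        backend:
          serviceName: api-service
          servicePort: 8080
```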

You can read more at https://kubernetes.io/docs/concepts/services-networking/ingress/