Building cloud-native applications and running your code in containers deployed on Kubernetes has become the preferred runtime environment at many organizations. Leveraging the process and network level isolation of containers decreases the maintenance overhead of application configuration and increases security, but it makes accessing your services from outside the cluster a bit tricky.

Red Hat OpenShift uses the concept of Routes to direct ingress traffic to applications deployed on the cluster. The solution is based on HAProxy instances running on 1–3 dedicated nodes (infrastructure nodes) that take care of virtual hosting. This means that we'll have multiple services behind the same IP address and port, distinguished only by hostname (e.g. orders.apps.mycompany.com, stock.apps.mycompany.com).

Hostnames are not part of the basic TCP/IP stack, but the HAProxy Router needs to know which service the client wants to access. There are two places the proxy checks for the hostname:

HTTP Host header

TLS Client Hello message

Using the Host header obviously works only for HTTP/S traffic, but typically that's exactly what we want to expose. It's automatically added by most HTTP clients. It looks like this:

$ curl -v http://fuse7-hello-plain.192.168.99.100.nip.io/api/hello

...

> GET /api/hello HTTP/1.1

> Host: fuse7-hello-plain.192.168.99.100.nip.io

> User-Agent: curl/7.54.0

> Accept: */*

>

< HTTP/1.1 200 OK

...

TLS Client Hello is a more generic solution that works for any TLS connection (including HTTPS) using Server Name Indication (SNI). The hostname is sent unencrypted, so the proxy can decide where to forward the encrypted traffic:

curl -k https://fuse7-hello-passthrough.192.168.99.100.nip.io/

Route types

There are four different types of routes in OpenShift based on TLS offloading:

No TLS (port 80): Non-encrypted HTTP traffic.

Edge (port 443): Encrypted HTTPS traffic between the client and the router proxy. The pod exposes a non-encrypted HTTP endpoint.

Re-Encrypt (port 443): Encrypted traffic is terminated by the router proxy just like for edge routes, but the pod also exposes an HTTPS endpoint, so there is another TLS connection between the proxy and the pod.

Passthrough (port 443): The router is not involved in TLS offloading. The traffic is encrypted end-to-end between the client and the pod. This type can be used for non-HTTP TLS endpoints as well.
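As a quick sketch, the four types map to `oc` commands like these (assuming a Service named fuse7-hello already exists; the route names are illustrative):

```shell
# No TLS: plain HTTP route exposing the service
oc expose service fuse7-hello --name=fuse7-hello-plain

# Edge: router terminates TLS, talks plain HTTP to the pod
oc create route edge fuse7-hello-edge --service=fuse7-hello

# Re-encrypt: router terminates TLS and opens a new TLS connection to the pod
oc create route reencrypt fuse7-hello-reencrypt --service=fuse7-hello

# Passthrough: TLS is terminated by the pod itself
oc create route passthrough fuse7-hello-passthrough --service=fuse7-hello
```

Each command accepts an optional --hostname flag; without it the default *.apps wildcard naming is used.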

The route type determines whether the proxy checks the HTTP Host header or the hostname in the TLS Client Hello. The certificate shown to the client hitting the route's endpoint also depends on the route's configuration. In the following we'll see how to verify the behavior of the different route types. For those less interested in the details, let's start with a quick summary:

In the case of edge and re-encrypt routes, TLS is terminated by the router proxy, so it can access the unencrypted HTTP traffic. The hostname is expected in the HTTP Host header. The individual certificate configured for the route, or in most cases the installed default wildcard certificate (e.g. *.apps.mycompany.com), is used.

In the case of passthrough routes the proxy can't access the unencrypted traffic (which may not even be HTTP), so the hostname is picked from the TLS Client Hello message, and the client sees the certificate on the pod's endpoint.

Does this even matter? Well, in most cases it does not: an HTTP client works whichever route type it hits. Understanding the proxy can be important, for example, if you have to set up a health check in an external load balancer that hits the infrastructure nodes by their IP address to check whether an application is deployed on that OpenShift cluster.
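For instance, a health check probing an infrastructure node directly by IP must still present the route's hostname, otherwise HAProxy answers 503. A sketch with curl (the node IP, hostname and path here are hypothetical):

```shell
# Hit the router on the infra node (10.0.0.5) directly, but present the
# route's hostname so HAProxy can map the probe to the right backend.
curl -sk --resolve orders.apps.mycompany.com:443:10.0.0.5 \
  https://orders.apps.mycompany.com/api/health
```

For a passthrough route the same trick works, but there the hostname must be correct in the TLS Client Hello (which --resolve achieves), not in the Host header.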

Environment

For a simple test we can use MiniShift (see Red Hat CDK), a local one-node OpenShift virtual machine. These are the versions used for this blog:

$ minishift version

minishift v1.27.0+5981f996

CDK v3.7.0-1

$ oc version

oc v3.11.69

kubernetes v1.11.0+d4cacc0

features: Basic-Auth

Server https://192.168.99.100:8443

kubernetes v1.11.0+d4cacc0

$ minishift config view

- iso-url : file:///Users/bszeti/.minishift/cache/iso/minishift-rhel7.iso

- memory : 6GB

- openshift-version : v3.11.82

- vm-driver : virtualbox

Let’s create the following routes in OpenShift for our simple Hello World API application (see the Appendix with the commands):

There is a route created for each route type following MiniShift's default *.192.168.99.100.nip.io naming convention, which uses nip.io to have the domain names resolved to the VM's IP. We also created an extra route with a custom name to show that it's not required to stick with this naming pattern.

Tools

OpenSSL is usually available on Linux or Mac. It can be used to open a TLS connection, print the certificate and send HTTP commands manually:

$ openssl s_client -showcerts -connect fuse7-hello-edge.192.168.99.100.nip.io:443

CONNECTED(00000005)

depth=1 CN = openshift-signer@1551218868

verify error:num=19:self signed certificate in certificate chain

verify return:0

---

Certificate chain

0 s:/CN=*.router.default.svc.cluster.local

i:/CN=openshift-signer@1551218868

-----BEGIN CERTIFICATE-----

...

---

GET /api/hello HTTP/1.1

Host: fuse7-hello-edge.192.168.99.100.nip.io

HTTP/1.1 200 OK

...

By default the hostname used in the command is added to the TLS Client Hello, but it can be set manually:

$ openssl s_client -connect 192.168.99.100:443 -servername any.custom.name

CONNECTED(00000003)

...

---

GET /api/hello HTTP/1.0

HTTP/1.1 200 OK

...

Curl sets the hostname in the HTTP Host header as well as in the TLS Client Hello automatically. The header can be changed easily, but setting the hostname in the TLS message needs a little trick around DNS resolution:



$ curl -vk --resolve any.custom.name:443:192.168.99.100 -H 'Host: myhost' https://any.custom.name/api/hello

* Added any.custom.name:443:192.168.99.100 to DNS cache

* Hostname any.custom.name was found in DNS cache

* Trying 192.168.99.100...

* TCP_NODELAY set

* Connected to any.custom.name (192.168.99.100) port 443 (#0)

...

> GET /api/hello HTTP/1.1

> Host: myhost

> User-Agent: curl/7.54.0

> Accept: */*

>

< HTTP/1.1 200 OK

# Flag '-k' is used to skip certificate verification.

Let’s dance

As we have our environment and tools ready, let's have a quick look at how the different route types behave.

No TLS

There is not much to see around plain HTTP routes. The router proxy decides which pods to hit based on the HTTP Host header.

$ curl -v http://fuse7-hello-plain.192.168.99.100.nip.io/api/hello

> GET /api/hello HTTP/1.1

> Host: fuse7-hello-plain.192.168.99.100.nip.io

> User-Agent: curl/7.54.0

> Accept: */*

>

< HTTP/1.1 200 OK

# If the Host header is incorrect, the service is not found

$ curl -v http://fuse7-hello-plain.192.168.99.100.nip.io/api/hello -H 'Host: xxx'

> GET /api/hello HTTP/1.1

> Host: xxx

> User-Agent: curl/7.54.0

> Accept: */*

>

< HTTP/1.0 503 Service Unavailable

Edge

The routing decision is based on the HTTP Host header; the hostname in the TLS Client Hello is ignored. The router's default wildcard certificate (or the route's individual certificate, if set) is used.

$ curl -vk https://fuse7-hello-edge.192.168.99.100.nip.io/api/hello

> GET /api/hello HTTP/1.1

> Host: fuse7-hello-edge.192.168.99.100.nip.io

>...

< HTTP/1.1 200 OK

# Hostname in TLS Client Hello is ignored

$ curl -vk --resolve nonexistinghost:443:192.168.99.100 https://nonexistinghost/api/hello -H 'Host: fuse7-hello-edge.192.168.99.100.nip.io'

> GET /api/hello HTTP/1.1

> Host: fuse7-hello-edge.192.168.99.100.nip.io

> ...

< HTTP/1.1 200 OK

# If the Host header is incorrect, the service is not found

$ curl -vk https://fuse7-hello-edge.192.168.99.100.nip.io/api/hello -H 'Host: xxx'

> GET /api/hello HTTP/1.1

> Host: xxx

>...

< HTTP/1.0 503 Service Unavailable

Re-encrypt

Just like in the case of edge routes, the HTTP Host header matters. The client sees the router's (default or route-specific) certificate. It's important that the router proxy trusts the certificate provided by the pod, so destinationCACertificate must be set accordingly on the route. To trust a self-signed certificate, simply add that certificate here; for a certificate signed by a CA, add the CA's root (or intermediate) certificate. The CN (hostname) on the pod's certificate is not verified.
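A minimal sketch of creating such a route from the command line, passing the CA certificate that signed the pod's endpoint (ca.crt is a hypothetical file name):

```shell
# The router will trust pod serving certificates signed by the CA in ca.crt;
# it ends up in the route's destinationCACertificate field.
oc create route reencrypt fuse7-hello-reencrypt \
  --service=fuse7-hello \
  --dest-ca-cert=ca.crt
```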

$ curl -vk https://fuse7-hello-reencrypt.192.168.99.100.nip.io/api/hello

...

* Server certificate:

* subject: CN=*.router.default.svc.cluster.local

...

> GET /api/hello HTTP/1.1

> Host: fuse7-hello-reencrypt.192.168.99.100.nip.io

>...

< HTTP/1.1 200 OK

# Hostname in TLS Client Hello is ignored

$ curl -vk --resolve nonexistinghost:443:192.168.99.100 https://nonexistinghost/api/hello -H 'Host: fuse7-hello-reencrypt.192.168.99.100.nip.io'

> GET /api/hello HTTP/1.1

> Host: fuse7-hello-reencrypt.192.168.99.100.nip.io

> ...

< HTTP/1.1 200 OK

# If the Host header is incorrect, the service is not found

$ curl -vk https://fuse7-hello-reencrypt.192.168.99.100.nip.io/api/hello -H 'Host: xxx'

> GET /api/hello HTTP/1.1

> Host: xxx

>...

< HTTP/1.0 503 Service Unavailable

Passthrough

TLS is terminated by the pod, so the proxy can't access the unencrypted traffic. The routing decision is based on the hostname in the TLS Client Hello; the Host header is ignored. Also, the traffic doesn't have to be HTTPS, as the protocol wrapped by TLS is handled only by the pod.

$ curl -vk https://fuse7-hello-passthrough.192.168.99.100.nip.io/api/hello

> GET /api/hello HTTP/1.1

> Host: fuse7-hello-passthrough.192.168.99.100.nip.io

> ...

< HTTP/1.1 200 OK

# Incorrect Host header causes no problem

$ curl -vk https://fuse7-hello-passthrough.192.168.99.100.nip.io/api/hello -H 'Host: xxx'

> GET /api/hello HTTP/1.1

> Host: xxx

>...

< HTTP/1.1 200 OK

# If the TLS Client Hello is incorrect, the service is not found

$ curl -vk --resolve nonexistinghost:443:192.168.99.100 https://nonexistinghost/api/hello -H 'Host: fuse7-hello-passthrough.192.168.99.100.nip.io'

> GET /api/hello HTTP/1.1

> Host: fuse7-hello-passthrough.192.168.99.100.nip.io

> ...

< HTTP/1.0 503 Service Unavailable

Appendix

Explaining how to set up MiniShift, build and deploy an app, and create the OpenShift resources is out of the scope of this post. As a guideline, see the commands used to prepare the environment for the tests above.

Build app and create image

$ oc project openshift

$ oc new-build java:8~https://github.com/bszeti/camel-springboot.git --context-dir=fuse7-hello

$ oc logs bc/camel-springboot -f

...

Running 'mvn -e -Popenshift -DskipTests -Dcom.redhat.xpaas.repo.redhatga -Dfabric8.skip=true --batch-mode -Djava.net.preferIPv4Stack=true -s /tmp/src/configuration/settings.xml -Dmaven.repo.local=/tmp/artifacts/m2 package'

...

$ oc get is camel-springboot -n openshift

camel-springboot 172.30.1.1:5000/openshift/camel-springboot

Start app with HTTP

$ oc new-project hello-http

# Required only to read secrets and configMaps
$ oc policy add-role-to-user edit -z default

$ cat <<EOF | oc apply -f -

apiVersion: apps.openshift.io/v1
kind: DeploymentConfig
metadata:
  name: fuse7-hello
  labels:
    app: fuse7-hello
spec:
  replicas: 1
  selector:
    app: fuse7-hello
  template:
    metadata:
      labels:
        app: fuse7-hello
    spec:
      containers:
      - name: default-container
        image: 172.30.1.1:5000/openshift/camel-springboot:latest
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /health
            port: 8080
          initialDelaySeconds: 10
          timeoutSeconds: 1
        resources:
          limits:
            memory: 512Mi
EOF

$ oc get pod -n hello-http

$ oc create service clusterip fuse7-hello --tcp=8080:8080

$ cat <<EOF | oc apply -f -

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: fuse7-hello
  name: fuse7-hello-plain
spec:
  host: fuse7-hello-plain.192.168.99.100.nip.io
  port:
    targetPort: 8080-8080
  to:
    kind: Service
    name: fuse7-hello
EOF

$ curl -k http://fuse7-hello-plain.192.168.99.100.nip.io/api/hello

{"message":"Hello World!"}

$ cat <<EOF | oc apply -f -

apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels:
    app: fuse7-hello
  name: fuse7-hello-edge
spec:
  host: fuse7-hello-edge.192.168.99.100.nip.io
  port:
    targetPort: 8080-8080
  tls:
    termination: edge
  to:
    kind: Service
    name: fuse7-hello
EOF

$ curl -k https://fuse7-hello-edge.192.168.99.100.nip.io/api/hello

{"message":"Hello World!"}

Start app with HTTPS