
Lately I have been deploying applications on Kubernetes. In most of these Kubernetes environments I ended up configuring my applications with the Istio service mesh. Using Istio with Kubernetes helps address many cross-cutting concerns like rate limiting, retries, mutual TLS, and many others, but it introduces additional complexity into the ecosystem.

In a Kubernetes environment we configure an Ingress resource to make applications available to the external world. In this blog, I will help you build an understanding of how a request flows through a Kubernetes-Istio cluster. This understanding helps with day-to-day tasks like rolling updates, canary deployments, etc.

Let’s say we have a simple Python-Flask application, with the following code:
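The original snippet is not reproduced here, so here is a minimal sketch of what such an app could look like (the handler name and response format are my assumptions); running it with `flask run` serves it on port 5000, Flask’s default:

```python
from datetime import datetime

from flask import Flask

app = Flask(__name__)
VERSION = "V1"  # the V2 build differs only in this value (an assumption)

@app.route("/")
def index():
    # Respond with the application version and the current time
    return "Version {} - {}".format(VERSION, datetime.now().isoformat())
```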

The above application exposes a root route / that responds to GET requests on port 5000 (Flask’s default). It outputs the current time along with the version V1. I have a V2 version as well, which does the same but reports version V2. I will deploy the above application using the following deployment.
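The original gsapp.yaml is not shown here; a sketch of such a manifest might look like the following (the Deployment name, labels, and image are assumptions — only the Service name gsservice and its port 80, which appear later in the Envoy cluster name, come from the article). The V2 deployment would be identical except for the version label and image tag:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gsapp-v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gsapp
      version: v1
  template:
    metadata:
      labels:
        app: gsapp
        version: v1
    spec:
      containers:
      - name: gsapp
        image: gsapp:v1        # assumed image name
        ports:
        - containerPort: 5000  # Flask's default port
---
apiVersion: v1
kind: Service
metadata:
  name: gsservice
spec:
  selector:
    app: gsapp
  ports:
  - name: http
    port: 80
    targetPort: 5000
```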

$ kubectl create -f gsapp.yaml

The following figure depicts the state of my ecosystem.

Now I can validate my gsservice by making a few requests.
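For example, a throwaway curl pod can reach the ClusterIP service from inside the cluster (assuming the Service is named gsservice and listens on port 80):

```shell
kubectl run curl --rm -it --restart=Never --image=curlimages/curl -- \
  curl -s http://gsservice.default.svc.cluster.local/
```

Repeating this a few times should alternate between the V1 and V2 responses, since the plain Kubernetes Service load-balances across both deployments.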

Since I have enabled Istio sidecar injection on the default namespace, the application pods are accompanied by sidecar containers. I can validate the above requests in the sidecar logs.
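The sidecar’s access log can be read from the istio-proxy container (the label app=gsapp is an assumption from the manifests above being hypothetical; substitute your actual pod name):

```shell
kubectl get pods -l app=gsapp
kubectl logs <gsapp-pod-name> -c istio-proxy
```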

I need to expose the service via the Ingress Gateway so that it can be accessed from outside the cluster. For this I need to build a VirtualService and a DestinationRule. In this example I will build a VirtualService which sends all requests to the V2 version only.
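The original gs-gw.yaml is not shown; a sketch of such a configuration could look like this (the resource names and the Gateway definition are assumptions — the host gs.xebia.com, port 80, and the v2 subset come from the article):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: gs-gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "gs.xebia.com"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: gsservice
spec:
  hosts:
  - "gs.xebia.com"
  gateways:
  - gs-gateway
  http:
  - route:
    - destination:
        host: gsservice
        subset: v2   # all traffic goes to V2 only
        port:
          number: 80
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: gsservice
spec:
  host: gsservice
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
```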

$ kubectl create -f gs-gw.yaml

After applying the above configuration, we can validate that all requests through our gateway are served by the V2 version.

$ curl -H "Host: gs.xebia.com" http://127.0.0.1:31380/

Thinking about how my request is being served, I would sketch the following request flow:

Analysing Request Flow

Now, let’s look into the details and try to match the understanding (built above) with how the request is actually served. Each istio-proxy sidecar runs with a set of routes/endpoints configuration. Istio Pilot generates this configuration from the specified VirtualService and then pushes it to all the sidecars. We can check whether all sidecars are updated with the current configuration by using the command shown below:
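The sync status of every proxy Pilot knows about can be listed with:

```shell
istioctl proxy-status
```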

Looking at the above response, the xDS headers (CDS, LDS, EDS, RDS) each specify a discovery service, and I can see which proxies are in SYNC for each of them. In my case I have a gateway which is listening for my requests and then delegating them to the V2 service. In order to build the flow, I first need to determine which ports the ingress gateway is listening on:
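The listener configuration of the ingress gateway can be dumped with istioctl (the pod name below is a placeholder; look it up with `kubectl get pods -n istio-system`):

```shell
istioctl proxy-config listeners <istio-ingressgateway-pod> -n istio-system
```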

I can run this with the -o json option as well. It shows the complete configuration that is executed for a request arriving on port 80, including a few important attributes like the Envoy log format, Mixer attributes, etc.

Next, let’s look at the routes which are configured for this listener. This can be done using the following command:
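The route configuration of the same gateway pod can be dumped in full with (pod name again a placeholder):

```shell
istioctl proxy-config routes <istio-ingressgateway-pod> -n istio-system -o json
```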

The virtualHosts section specifies that it can handle the gs.xebia.com host. The routes section defines which routes are matched and to which cluster matching requests are sent. In my case, / is sent to “outbound|80|v2|gsservice.default.svc.cluster.local”.

Now the last part is to resolve the address behind “outbound|80|v2|gsservice.default.svc.cluster.local”. This can be done using the endpoints option.
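The endpoints behind that cluster name can be listed by filtering on it (pod name a placeholder):

```shell
istioctl proxy-config endpoints <istio-ingressgateway-pod> -n istio-system \
  --cluster "outbound|80|v2|gsservice.default.svc.cluster.local"
```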

The endpoint specifies the address where the request is sent. This is the address of my application pod’s sidecar, 10.1.83.52. Now that I have the complete configuration, I can look up the ingress access log to determine how the request is forwarded.
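The access log of the ingress gateway can be tailed while sending a request (assuming the standard istio=ingressgateway label on the gateway pods):

```shell
kubectl logs -n istio-system -l istio=ingressgateway -f
```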

The above log is in the format specified in the listener configuration. I can match the different pieces of information to determine the address to which the request is forwarded.

Similarly, I can look into the pod sidecar for its Envoy configuration. Lastly, looking into the sidecar logs, I can see all the incoming requests.
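The same istioctl inspection works against the application pod’s sidecar, and its access log confirms the incoming requests (pod name a placeholder):

```shell
istioctl proxy-config listeners <gsapp-pod-name>
kubectl logs <gsapp-pod-name> -c istio-proxy -f
```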

Finally, I have been able to match the actual request flow with the understanding built earlier.