Automated Observability

There is a quote I particularly like from Paul Graham.

“To get rich you need to get yourself in a situation with two things, measurement and leverage.”

Sadly, this post is not about how to get rich (please leave a comment if you know how :)). I am just pointing out the importance of measurement (observability) in a totally unrelated manner.

Kubernetes is Greek for the helmsman of a ship, and Istio is Greek for sail.

Coincidence? Nah. From the naming alone, we can infer that Istio is meant to be the logical next step after Kubernetes. At least, that is the plan, I guess.

At its core, all the fancy Kubernetes objects can be roughly classified into two big categories. Category I: what is being served (Deployment, StatefulSet, …). Category II: how it is served (Service, Ingress, …). And Istio is a delightful Category II framework that comes with many great features.

For the past couple of months, I’ve used Nginx as an Ingress controller at work. It has served us well, but it is an awkward fit, to say the least.

Nginx is a brilliant piece of software, and 31% of all websites are served by Nginx. However, if you are running a microservice-architected application on Kubernetes with Nginx as the ingress controller/load balancer/proxy, it takes a lot of work to get things right.

There is an excellent article about proxies and load balancers by Matt Klein, who is also the author of Envoy. After reading it, it is not difficult to realize that if you want rich protocol support, dynamic configuration, advanced load balancing, observability, extensibility, fault tolerance, etc. at minimum cost, maybe Nginx (Engine X) is not the right engine for the job.

Without further ado, let’s walk through how to get started with Istio.

Installation



# download Istio from https://github.com/istio/istio/releases
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f install/kubernetes/istio-demo.yaml

# or, with mutual TLS
kubectl apply -f install/kubernetes/istio-demo-auth.yaml

If you are using GKE, there is one extra step: create a ClusterRoleBinding granting yourself cluster admin.

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)

Installation completes within 2–3 minutes, and the following command can be used to check whether the Istio deployments are ready.

kubectl -n istio-system get pods
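If you'd rather block until everything is up instead of polling `get pods` by hand, `kubectl wait` can do it. This one-liner is a convenience sketch, not part of the official install steps:

```shell
# wait up to 5 minutes for every deployment in istio-system to report Available
kubectl -n istio-system wait --for=condition=available deployment --all --timeout=300s
```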

Demo



kubectl create namespace demo
kubectl -n demo apply -f samples/bookinfo/platform/kube/bookinfo.yaml

To keep things clean, create a demo namespace, and then deploy the bookinfo application provided under the samples directory.

╰─λ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
details-v1-6865b9b99d-dr9td    1/1     Running   0          24s
productpage-v1-f8c8fb8-lgvhs   1/1     Running   0          22s
ratings-v1-77f657f55d-tfk5z    1/1     Running   0          23s
reviews-v1-6b7f6db5c5-62kwk    1/1     Running   0          23s
reviews-v2-7ff5966b99-5pqgj    1/1     Running   0          23s
reviews-v3-5df889bcff-qx46s    1/1     Running   0          22s

Without Istio being enabled, it is a pretty typical application.

╰─λ kubectl port-forward productpage-v1-f8c8fb8-lgvhs 9080:9080
Forwarding from 127.0.0.1:9080 -> 9080
Handling connection for 9080
Handling connection for 9080

Once the port is forwarded, open http://localhost:9080 to see what it is about.

It is just a bland boring app, nothing interesting. 😄

Now turn the Istio switches on, and recreate bookinfo.



kubectl label namespace demo istio-injection=enabled
kubectl -n demo delete -f samples/bookinfo/platform/kube/bookinfo.yaml
kubectl -n demo apply -f samples/bookinfo/platform/kube/bookinfo.yaml

Let’s see what is different now.

╰─λ kubectl get pods
NAME                           READY   STATUS    RESTARTS   AGE
details-v1-6865b9b99d-ss8h7    2/2     Running   0          44s
productpage-v1-f8c8fb8-xrxlq   2/2     Running   0          41s
ratings-v1-77f657f55d-4cmsw    2/2     Running   0          43s
reviews-v1-6b7f6db5c5-tqn8r    2/2     Running   0          43s
reviews-v2-7ff5966b99-wb94z    2/2     Running   0          41s
reviews-v3-5df889bcff-4n9lm    2/2     Running   0          42s

We can port-forward once more to check, but it is the same old application. However, READY has changed from 1/1 to 2/2, because Istio injects a sidecar container into each pod when the namespace has istio-injection enabled.
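To see the injected sidecar with your own eyes, you can list the container names inside one of the pods. Assuming the stock bookinfo labels, something like this should print the app container alongside istio-proxy:

```shell
# print the container names of the productpage pod (assumes the app=productpage label)
kubectl -n demo get pods -l app=productpage \
  -o jsonpath='{.items[0].spec.containers[*].name}'
```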

Now expose the service.

Disclaimer: I use GKE for personal applications
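The exact exposing step is worth spelling out. Assuming the stock Istio release layout, the bookinfo sample ships a Gateway and VirtualService definition that routes external traffic through istio-ingressgateway:

```shell
# bind bookinfo to the ingress gateway (file path as laid out in the Istio release tarball)
kubectl -n demo apply -f samples/bookinfo/networking/bookinfo-gateway.yaml
```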

Let’s check out the istio-ingressgateway service in the istio-system namespace.

╰─λ k -n istio-system get svc
NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)                                                                                                                   AGE
grafana                ClusterIP      10.3.243.175   <none>         3000/TCP                                                                                                                  8m
istio-citadel          ClusterIP      10.3.240.31    <none>         8060/TCP,9093/TCP                                                                                                         8m
istio-egressgateway    ClusterIP      10.3.245.53    <none>         80/TCP,443/TCP                                                                                                            8m
istio-galley           ClusterIP      10.3.246.144   <none>         443/TCP,9093/TCP                                                                                                          8m
istio-ingressgateway   LoadBalancer   10.3.242.19    35.225.71.94   80:31380/TCP,443:31390/TCP,31400:31400/TCP,15011:31132/TCP,8060:31014/TCP,853:32122/TCP,15030:30811/TCP,15031:31339/TCP   8m
istio-pilot            ClusterIP      10.3.248.83    <none>         15010/TCP,15011/TCP,8080/TCP,9093/TCP                                                                                     8m
....
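Rather than copying the EXTERNAL-IP by hand, you can capture it with jsonpath. A small convenience sketch (on GKE the load balancer exposes an IP; other clouds may expose a hostname instead):

```shell
# grab the ingress gateway's external IP and stash it for later commands
export GATEWAY_IP=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo "$GATEWAY_IP"
```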

Let’s generate some fake traffic. Run the following in a terminal. I am lazy 😂
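The snippet originally embedded here did not survive, so here is a minimal stand-in that hits the product page in a loop (GATEWAY_IP is assumed to hold the ingress gateway’s external IP):

```shell
# fire a request at /productpage every half second, forever (Ctrl-C to stop)
while true; do
  curl -s -o /dev/null "http://$GATEWAY_IP/productpage"
  sleep 0.5
done
```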

Since Istio intercepts all the traffic, it can do a lot with it.

╰─λ kubectl -n istio-system get pods
NAME                                        READY   STATUS    RESTARTS   AGE
grafana-65554f9fd6-l7dl2                    1/1     Running   0          25m
istio-citadel-5dcc5c9f6d-6h547              1/1     Running   0          25m
istio-egressgateway-7b5468cd87-d7h9x        1/1     Running   0          25m
istio-galley-5bc4b47745-2cq4n               1/1     Running   0          25m
istio-ingressgateway-694fbc46dd-mb7c2       1/1     Running   0          25m
istio-pilot-66897c47dd-4xnmp                2/2     Running   0          25m
istio-policy-5464fb49fd-p2kvj               2/2     Running   0          25m
istio-sidecar-injector-5598fb559c-tqz6w     1/1     Running   0          25m
istio-statsd-prom-bridge-7f44bb5ddb-hwglg   1/1     Running   0          25m
istio-telemetry-59fc577dc9-t4qrw            2/2     Running   0          25m
istio-tracing-ff94688bb-j74xj               1/1     Running   0          25m
prometheus-84bd4b9796-8t79g                 1/1     Running   0          25m
servicegraph-65894499c8-xtsvq               1/1     Running   0          25m

Service Graph

kubectl -n istio-system port-forward servicegraph-65894499c8-xtsvq 8088:8088

Grafana

╰─λ kubectl -n istio-system port-forward grafana-65554f9fd6-l7dl2 3000:3000
Forwarding from 127.0.0.1:3000 -> 3000
Handling connection for 3000
Handling connection for 3000
Handling connection for 3000

Tracing

╰─λ kubectl -n istio-system port-forward istio-tracing-ff94688bb-j74xj 16686:16686
Forwarding from 127.0.0.1:16686 -> 16686
Handling connection for 16686
Handling connection for 16686

This just scratches the surface of what Istio can do. It offers much more, especially around traffic management and security. Check out the official documentation: https://istio.io/docs/concepts/what-is-istio/.

It is time to sail :)