To all who are serving their duties or staying at home, I hope you are doing well during the coronavirus epidemic.

As a Kubernetes developer, the Istio service mesh might have caught your attention. You might be eager to explore it, only to find that you need a Kubernetes cluster first. The easiest options are Minikube or MicroK8s, both of which are fully supported by Istio. Beyond that, you might want to try it on a multi-node cluster, which is closer to your future production environment.

Setting up a multi-node cluster is not a single-command task, but it is practically achievable and affordable. I discussed how to set up my home lab with the Rancher server in my previous post, and here I will share how I learned Istio with that home lab.

A brief introduction

Istio Service Mesh improves service-to-service communication in a Kubernetes cluster by allowing fine-grained control and monitoring of traffic. While a Kubernetes Service provides basic load balancing across Pods, Istio goes one step further: by injecting an Envoy proxy sidecar into each Pod, it makes both the incoming and outgoing traffic of the Pod manageable and configurable.

The advantage of the sidecar pattern is that it is transparent to applications; in other words, you don't need a dedicated client library in your code. A call to another service within the mesh is just a plain REST call, while service discovery, load balancing, circuit breaking, and retry policies are decoupled from your application. This independence from the application gives you the freedom to choose programming languages and tools, and you can still enjoy the benefits of a service mesh.

Install Istio

Installing Istio is pretty straightforward; the Istio Getting Started guide is more than enough, but first you need to have your Kubernetes cluster ready. I prepared my cluster with the Rancher server, and Istio also supports other Kubernetes platforms like MicroK8s and Minikube.

After downloading the package and adding the istioctl CLI to the PATH variable, I installed Istio with the demo profile, which includes the istio-ingressgateway service for later use. I also added the istio-injection label to the default namespace, so that afterward an Envoy proxy is injected into every Pod created in that namespace.

istioctl manifest apply --set profile=demo

kubectl label namespace default istio-injection=enabled
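To double-check the result, a couple of kubectl commands like the following can help (assuming kubectl points at the same cluster you installed Istio into):

```shell
# List the Istio control-plane pods; they should all reach Running status
kubectl get pods -n istio-system

# Show the istio-injection label column; default should read "enabled"
kubectl get namespace -L istio-injection
```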

My example setup

Referring to the helloworld example from Istio, I prepared my own example starting from an empty yaml file, which I believe is the best way to verify my understanding.

The example aims to expose a simple Nginx service outside of the Kubernetes cluster and assumes that the service is rolling out a new version: half of the incoming requests are routed to the new version, while the other half are still handled by the old one. The following diagram shows the deployment overview and the relationships among the different resources; all yaml files can be found in my GitHub repo.
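As a sketch of the traffic-splitting part, the Gateway, VirtualService, and DestinationRule could look roughly like this (the nginx host name and the version: v1/v2 subset labels are assumptions for illustration; the actual files are in the repo):

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: nginx-gateway
spec:
  selector:
    istio: ingressgateway   # bind to the default istio-ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: nginx
spec:
  hosts:
  - "*"
  gateways:
  - nginx-gateway
  http:
  - route:
    - destination:
        host: nginx         # the Kubernetes Service name
        subset: v1
      weight: 50            # half of the requests go to the old version
    - destination:
        host: nginx
        subset: v2
      weight: 50            # the other half go to the new version
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: nginx
spec:
  host: nginx
  subsets:
  - name: v1
    labels:
      version: v1           # matches Pods of the old Deployment
  - name: v2
    labels:
      version: v2           # matches Pods of the new Deployment
```

The DestinationRule maps each subset to Pods by their version label, and the VirtualService splits traffic between the subsets by weight.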