By default, Istio requires more permissions than most users are willing to grant their pods and containers (check out this presentation from ContainerDays 2019). Fortunately, since Istio 1.2 a solution is available. This post explains the mechanism that reduces the required permissions and thus increases the security of your mesh.

Let’s first have a look at the situation as it was before the changes:

The classic setup with a sidecar in the pod.

You can see that outside traffic (red arrow) comes into the pod to the sidecar (Envoy proxy), which then forwards requests to the business container. For this to work, there is an init container that modifies the network routing rules (iptables) so that outside traffic is directed to the sidecar, which then talks to the business container. When you just run kubectl get pod, you don’t see the init container listed, only the application containers:

Output of kubectl get pod — pods have 2 containers for business and sidecar

You can, though, use kubectl with an output format (or kubectl describe pod) to see the init container:

Getting pod details to list the init-container
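As a concrete sketch, either of the following commands will reveal the init container; the pod name details-v1-xxxx is just a placeholder for one of your mesh pods:

```shell
# List only the init container names of a pod (pod name is a placeholder)
kubectl get pod details-v1-xxxx -o jsonpath='{.spec.initContainers[*].name}'

# Or look for the "Init Containers" section in the describe output
kubectl describe pod details-v1-xxxx
```

In a classic Istio installation the first command would typically print istio-init.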

The modification of iptables only works when the init container (and thus the entire pod) runs with the NET_ADMIN capability. The downside is that every container in the pod then has NET_ADMIN, so malicious business code could also modify the iptables rules. This is not desired, as it could allow the pod to talk to other pods that it was originally forbidden to reach by the Kubernetes network configuration.
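To illustrate what the init container does, the redirect rules it installs look roughly like the following simplified sketch; the actual rules generated by Istio’s iptables script are more involved, and port 15001 is Envoy’s default capture port in this Istio version:

```shell
# Simplified sketch of the redirect rules the init container sets up.
# Redirect inbound and outbound TCP traffic to the Envoy sidecar on port 15001:
iptables -t nat -A PREROUTING -p tcp -j REDIRECT --to-port 15001
iptables -t nat -A OUTPUT     -p tcp -j REDIRECT --to-port 15001
# Running these commands requires the NET_ADMIN capability.
```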

On top of that, the init container insists on running under a specific user id, which OpenShift in particular does not like, so one would need to grant additional rights for the application to function properly:

Setting Security Context Constraints for the users namespace
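On OpenShift, granting those rights typically means adding security context constraints for the service accounts in the application namespace, for example like this (myproject is a placeholder namespace, and whether you need anyuid, privileged, or both depends on your setup):

```shell
# Allow pods in the namespace to run under an arbitrary user id (placeholder namespace)
oc adm policy add-scc-to-group anyuid system:serviceaccounts:myproject

# The init container may additionally require the privileged SCC in some setups
oc adm policy add-scc-to-group privileged system:serviceaccounts:myproject
```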

Introducing Istio-CNI

To fix the above, we need to remove the init container from the application pod while still ensuring that the iptables rules get applied.

We use a node-level helper pod

A new component, the Istio-CNI plugin, represented by the istio-cni-node DaemonSet (in Maistra it is called istio-node), has been introduced for this purpose (some colleagues of mine wrote about that idea last year). Istio-CNI watches for new pods and determines whether they should be part of the mesh by checking criteria such as whether the pod has an istio-proxy container and whether its namespace is on the excluded_namespaces list. If the pod qualifies, the helper pod on that node updates the iptables rules.

During sidecar injection the pod has been annotated with k8s.v1.cni.cncf.io/networks, which tells Kubernetes to attach the istio-system-istio-cni network to the pod. This makes the pod wait until the Istio-CNI network is present (i.e. the istio-cni-node pod has done its work). Kubernetes also updates the pod’s configuration to record the status of the attached networks.
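The resulting pod metadata looks roughly like the following fragment; the exact network name and status payload depend on your installation, so treat this as illustrative:

```yaml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    # Added at sidecar-injection time; makes the pod wait for the Istio-CNI network
    k8s.v1.cni.cncf.io/networks: istio-system-istio-cni
    # Filled in by Kubernetes once the network has been attached (simplified)
    k8s.v1.cni.cncf.io/networks-status: |
      [{ "name": "istio-system-istio-cni" }]
```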

Once the network is set up, the pod can continue with its normal startup, and the sidecar and the business container start as usual.

Inspecting the pod again, we can see that there is no init container in the pod anymore:

Installation

Istio-CNI does not come enabled out of the box; you have to install it explicitly.
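For upstream Istio 1.2 with a Helm-based installation, this means installing the istio-cni chart and enabling it in the main chart; the release names and namespaces below are the conventional ones from the Istio documentation of that time and may differ in your setup:

```shell
# Install the Istio-CNI plugin (runs as the istio-cni-node DaemonSet)
helm install install/kubernetes/helm/istio-cni --name istio-cni --namespace kube-system

# Install (or upgrade) Istio with CNI enabled, so sidecar injection
# no longer adds the privileged init container
helm install install/kubernetes/helm/istio --name istio --namespace istio-system \
  --set istio_cni.enabled=true
```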

Enable Istio-CNI on Maistra at install time

The Maistra downstream project lets you switch on Istio-CNI in TP12 and will have it enabled by default in later releases. For TP12 you just need to add two lines of YAML to the control plane configuration, as shown in the box above.
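In the control plane custom resource this fragment looks roughly as follows; the exact field names and nesting are taken from the Maistra operator’s Helm-style values of that release and may change in later versions:

```yaml
# Illustrative fragment of the Maistra control plane resource
istio:
  istio_cni:
    enabled: true
```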

Conclusion

With the introduction and use of Istio-CNI, we no longer need any elevated privileges for the user’s business code. The istio-cni-node pod still needs NET_ADMIN rights, but it is considered a platform pod like many others in Kubernetes; there are no parts of it that a user could directly modify.