
Here are some of the practices I used in my cluster. There is very little documentation, and there are few tutorials, around this topic.

I’m running on AWS using Kops to create and manage my Kubernetes cluster.

Use the latest Kubernetes version

Make sure you use the latest Kubernetes version (today it’s 1.9.0). Add the option --kubernetes-version=1.9.0 to the kops create cluster command when creating the cluster for the first time.

A later note: as I write these words, Kops 1.9 is not out yet, so some things might not work well (I haven’t encountered any issues yet; I will update here if I do). You can check the version compatibility matrix on the Kops site.
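For reference, a minimal create command could look like the following sketch; the cluster name, state store and zones are placeholders, adjust them to your environment:

kops create cluster --name=k8s.example.com --state=s3://my-kops-state-store --zones=us-east-1a --kubernetes-version=1.9.0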

Private topology with Calico

Calico is an open-source project that manages and enforces network policy in the cluster, and it comes built in with the latest Google container (GKE) releases.

A later note: it’s important to say that there are other network management tools besides Calico out there; you can read more here.

Add the options --topology private --networking calico to the kops create cluster command when creating the cluster for the first time.
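Once the cluster is up, a quick sanity check is to confirm the Calico pods are running in kube-system (the exact pod names may differ between versions):

kubectl get pods -n kube-system | grep calico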

Network Policies

If you created your cluster with a private topology and Calico (as above) you can use NetworkPolicies. Set up your network policies to explicitly allow/deny connections between the elements in the cluster.

So, make sure you have some separation logic (app, role, etc.) expressed as labels on your pods. I used a few external guides and examples when I created the NetworkPolicies.
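As an illustration, here is a minimal NetworkPolicy sketch that only allows pods labeled app: frontend to reach pods labeled app: backend; the labels and namespace are placeholders, not taken from any specific setup:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backend-allow-frontend
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: backend
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: frontend

Once a pod is selected by a policy of this kind, any ingress that is not explicitly allowed is denied, so roll the policies out gradually and test as you go.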

You can test your policies with simple Debian images that you run on your cluster, using curl/wget to check the connections.

kubectl run --image=debian mycontainer -- sleep infinity

# find the generated pod name and open a shell inside it
kubectl exec -ti mycontainer-*** bash

# inside the container:
apt-get update
apt-get install curl -y
curl -Lk -X GET http://***

Bastion

Access with SSH through a single point-of-contact: Bastion.

By default, all nodes have a public IP and are reachable over SSH from the outside world. With a bastion you reduce that exposure to a single, controlled entry point.

Follow the instructions on https://github.com/kubernetes/kops/blob/master/docs/bastion.md
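If you are building the cluster from scratch, kops can provision the bastion for you at create time with the --bastion flag (a sketch with placeholder names; the linked doc also covers adding a bastion to an existing cluster):

kops create cluster --name=k8s.example.com --state=s3://my-kops-state-store --zones=us-east-1a --topology private --networking calico --bastion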

# Verify you have an SSH agent running. This should match whatever you built your cluster with.

ssh-add -l

# If you need to add the key to your agent:

ssh-add path/to/private/key



# Now you can SSH into the bastion

ssh -A admin@<bastion-ELB-address>



# Where <bastion-ELB-address> is usually bastion.$clustername (bastion.example.kubernetes.cluster) unless otherwise specified

Default Authorization with RBAC

Add the option --authorization=RBAC to the kops create cluster command when creating the cluster for the first time.

Notice that some services that used to work (the cluster autoscaler, for example) will crash. Check the logs; you will see “forbidden” messages. You need to assign RBAC policies to those workloads.
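As a sketch of such a fix, the commands below grant a workload’s service account read-only cluster access using the built-in view ClusterRole; the service account name here is illustrative, and many addons (the autoscaler included) ship their own RBAC manifests that you should prefer:

kubectl create serviceaccount --namespace kube-system my-addon
kubectl create clusterrolebinding my-addon-view --clusterrole=view --serviceaccount=kube-system:my-addon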

Dangerous pod IAM credentials

By default, every pod has the powers of its hosting node in terms of AWS access (IAM). To fix this, you will install kube2iam, a DaemonSet that runs on each instance and acts as a firewall for the IAM credential requests coming from the containers on those instances. This way we do not fall back to the default behaviour of containers receiving wide IAM credentials.

We are going to install the kube2iam DaemonSet with Helm (a package manager for Kubernetes). Assuming you created your cluster with RBAC enabled as above, we need to provide the Helm tiller (the pod issuing the requests we make to Helm on our cluster) with the appropriate RBAC permissions to operate. This snippet is taken from here.

(make sure you have helm installed first)

kubectl create serviceaccount --namespace kube-system tiller

kubectl create clusterrolebinding tiller-cluster-rule --clusterrole=cluster-admin --serviceaccount=kube-system:tiller

kubectl patch deploy --namespace kube-system tiller-deploy -p '{"spec":{"template":{"spec":{"serviceAccount":"tiller"}}}}'

helm init --service-account tiller --upgrade

Now, install kube2iam:

helm install stable/kube2iam --name kube2iam --namespace kube-system --set host.iptables=true --set rbac.create=true --set host.interface=cali+

host.iptables: I don’t think this feature should have been added to kube2iam, but since we do have it I’m going to use it. Read about it here. This feature adds a special rule to each node that blocks URL requests to the special EC2 metadata API at 169.254.169.254. That is a URL any pod can query with curl/wget to get the full IAM role credentials of its hosting instance, which is considered very dangerous!

After creating the kube2iam setup you need to verify that the iptables rule was created correctly. To do that, log in to a pod and try to reach the EC2 metadata API, as shown below.
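A minimal check, assuming kube2iam was installed with host.iptables=true as above:

# from inside a pod; this must no longer return the hosting node's IAM role credentials
curl -sS --max-time 5 http://169.254.169.254/latest/meta-data/iam/security-credentials/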

You need to see an error message; that is how you know the iptables rule is OK. If not, you may need to change the arguments in the helm command above according to your setup; for example, if you are using a non-Calico network you need to specify a different host.interface. All the possible parameters are in the chart’s values.yaml.

I found some practical ways to test that these changes took effect in an excellent article on the subject.
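From that point on, a pod that genuinely needs AWS access declares the IAM role it is allowed to assume through the kube2iam annotation. A sketch, where the role name is a placeholder for a pre-created, narrowly scoped role:

apiVersion: v1
kind: Pod
metadata:
  name: aws-consumer
  annotations:
    iam.amazonaws.com/role: my-limited-role
spec:
  containers:
  - name: app
    image: debian
    command: ["sleep", "infinity"]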

Avoid default credentials injection into pods

As you can read in the documentation: “When you create a pod, if you do not specify a service account, it is automatically assigned the default service account in the same namespace… You can access the API from inside a pod using automatically mounted service account credentials.”

You can avoid this default behaviour by using:

kubectl patch serviceaccount default -p "automountServiceAccountToken: false"
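The same field also exists on the pod spec and takes precedence over the service account setting, so you can opt individual pods out (or back in) explicitly; a minimal sketch:

apiVersion: v1
kind: Pod
metadata:
  name: no-token-pod
spec:
  automountServiceAccountToken: false
  containers:
  - name: app
    image: debian
    command: ["sleep", "infinity"]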

sources: