A lightweight Kubernetes cluster can be set up with the kubeadm tool. As of today, 2017.03.26, Kubernetes v1.6.0 is not yet released, and deploying a release candidate to play with turned up a few issues.

Where and what is my playground:

Cloud: AWS (or any hosting provider)

OS: CentOS Linux release 7.3.1611 (Core)

Docker: v1.12.6

I set up all my machines using Packer (to build images) and Terraform. For this article I will omit those tools.

Prepare the packages

For stable Kubernetes builds I usually use the original repo yum.kubernetes.io from https://kubernetes.io/docs/getting-started-guides/kubeadm/. Here is the first problem: I could not find release candidate builds in the repo. I suppose they should be somewhere, but it was not my day to find them. Instead I found the beautiful release builder https://github.com/kubernetes/release. Small changes in kubelet.spec give exactly the build version I need:

%global KUBE_VERSION 1.6.0-rc.1

%global KUBE_VERSION_MAJOR 1.6.0

then replaced all Version: %{KUBE_VERSION} entries with Version: %{KUBE_VERSION_MAJOR}, because of https://github.com/kubernetes/release/issues/290.
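The replacement can be scripted; here is a sketch with sed, demonstrated on a sample line (in the real checkout you would point it at rpm/kubelet.spec):

```shell
# Demonstrate the Version: field replacement on a sample spec line.
printf 'Version: %%{KUBE_VERSION}\n' > kubelet.spec.sample
sed -i 's/Version: %{KUBE_VERSION}/Version: %{KUBE_VERSION_MAJOR}/g' kubelet.spec.sample
cat kubelet.spec.sample   # -> Version: %{KUBE_VERSION_MAJOR}
```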

cd ./rpm

./docker-build.sh

And voilà: we have nice RPMs in the output folder:

$ ls output/x86_64

kubeadm-1.6.0-0.x86_64.rpm
kubectl-1.6.0-0.x86_64.rpm
kubelet-1.6.0-0.x86_64.rpm
kubernetes-cni-0.3.0.1-0.07a8a2.x86_64.rpm
repodata

I uploaded the packages to https://packagecloud.io/miry/kubernetes.

Node setup

All nodes (master and slaves) in the cluster should have the same versions of the Kubernetes packages.

https://packagecloud.io/miry/kubernetes/install

curl -s https://packagecloud.io/install/repositories/miry/kubernetes/script.rpm.sh | sudo bash
sudo setenforce 0 || true # Required for K8S
yum install -y docker kubelet kubeadm kubectl kubernetes-cni



There are changes in the systemd service file for kubelet.

The difference from the original is that I added --cgroup-driver=systemd to the kubelet arguments: Docker and the kubelet must use the same cgroup driver.
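For reference, the change can also be made as a systemd drop-in instead of editing the unit file itself. This is only a sketch: the drop-in file name is my choice, and I am assuming the stock kubeadm unit passes $KUBELET_EXTRA_ARGS to the kubelet:

```
# /etc/systemd/system/kubelet.service.d/20-cgroup-driver.conf (sketch)
[Service]
Environment="KUBELET_EXTRA_ARGS=--cgroup-driver=systemd"
```

Run sudo systemctl daemon-reload afterwards so systemd picks up the drop-in.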

Enable and run services:

sudo systemctl enable docker && sudo systemctl start docker
sudo systemctl enable kubelet && sudo systemctl start kubelet

Master

kubeadm is still in development, and as I expected there are changes in the flags and options.

kubeadm init --apiserver-advertise-address=$NODE_PRIVATE_IP --apiserver-cert-extra-sans="my.example.com" --kubernetes-version="v1.6.0-rc.1"

Besides the changes in the flags, kubeadm now waits until the node comes up and checks for the DNS pod. That is strange, because that pod cannot run without a network add-on. So I opened a new connection and started to install a network add-on. There were some changes here as well.

For k8s v1.6 the network add-on installation file is different, because of changes in the DaemonSet spec. Since kubeadm had not finished, the kubeconfig file is not yet in its default path. By default, authorization is enabled and the insecure API port 8080 is not available, even on localhost.

I used the following command to install the network add-on:

kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f https://raw.githubusercontent.com/projectcalico/canal/master/k8s-install/kubeadm/1.6/canal.yaml

After that, kubeadm finished the process in the first session.

To access the cluster from a local computer, copy /etc/kubernetes/admin.conf to ~/.kube/config. Verify that it works: kubectl get nodes. If it does not work, check that the IP address is correct and that port 6443 is reachable.

Slave

The slave join also changed a bit. You now always need to provide the discovery port. The Kubernetes API server secure port is now 6443. A typical join looks like:

kubeadm join --token="${k8s_token}" ${master_ip}:6443
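The token shape itself is fixed: kubeadm expects six lowercase alphanumeric characters, a dot, then sixteen more. A quick sanity check before joining (the token below is a made-up placeholder, not a real one):

```shell
# Validate the kubeadm token shape [a-z0-9]{6}.[a-z0-9]{16} before joining.
k8s_token="abcdef.0123456789abcdef"   # placeholder value
echo "$k8s_token" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$' \
  && echo "token format ok" \
  || echo "token format invalid"
```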

Dashboard

I thought the hardest work was finished, and continued playing with k8s. The dashboard installed as usual without any problems.

I checked that the kube config was valid and ran a small Ruby and bash script to import the client certificate into the macOS Keychain.

I opened https://<master-ip>:6443/ui in the browser and got an error page: permission denied to get the list of pods. I investigated ServiceAccounts a bit and came to https://blog.kcluster.io/setup-role-based-access-control-rbac-and-audit-logs-for-kubernetes-clusters-9e4db5b67020#.ghvjfwjhr. Because of authentication, there is only one admin user.

Granted permissions to the default service account of the UI:
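I do not have the original command at hand, but a hedged sketch of such a grant as an RBAC manifest could look like this (the binding name "dashboard-admin" is my invention; it assumes the dashboard runs under the kube-system default service account, and would be applied with kubectl --kubeconfig=/etc/kubernetes/admin.conf apply -f):

```yaml
# Sketch: bind cluster-admin to the kube-system default service account.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: dashboard-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```

Note that granting cluster-admin to the default service account is fine for a playground, but far too broad for anything real.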

After granting the permissions, the UI should work properly.