In an earlier post, I provided instructions on how to deploy Kubernetes (K8S) on Photon Controller using the built-in photon controller CLI. In this post, I want to show you how Kubernetes can be deployed natively, using kube-up and kube-down, on Photon Controller v0.9.

In this example I am using Photon OS, a minimal Linux container host, optimized to run on vSphere (and other VMware products). In order to deploy K8S, some additional tooling needs to be added to Photon OS. The requirements are highlighted in this earlier blog post. Once all the necessary components are in place, we are ready to deploy Kubernetes.

*** Please note that at the time of writing, Photon Controller is still not GA ***

Step 1: Download Kubernetes

You can get it from GitHub:

# git clone https://github.com/kubernetes/kubernetes.git
Cloning into 'kubernetes'...
remote: Counting objects: 275681, done.
remote: Compressing objects: 100% (35/35), done.
remote: Total 275681 (delta 18), reused 4 (delta 4), pack-reused 275642
Receiving objects: 100% (275681/275681), 240.78 MiB | 3.02 MiB/s, done.
Resolving deltas: 100% (180982/180982), done.
Checking connectivity... done.
#

Step 2: Build Kubernetes

Now that it is downloaded, we need to build it. Remember that if you are doing this in a VM running Photon OS, you need to make sure that you have the prerequisites in place, such as docker running and "awk" installed:
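As a quick sanity check before kicking off the build, something along these lines can confirm the tools are present. This is just a minimal sketch; tdnf is Photon OS's package manager, and the package names in the comment are the usual ones:

```shell
# Minimal prerequisite check (sketch). On Photon OS, missing tools can
# typically be installed with tdnf, e.g. "tdnf install -y docker gawk".
for tool in docker awk git make; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: ok"
  else
    echo "$tool: MISSING"
  fi
done
```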

# cd kubernetes
# make quick-release
KUBE_RELEASE_RUN_TESTS=n KUBE_FASTBUILD=true build/release.sh
+++ [0601 11:51:50] Verifying Prerequisites....
+++ [0601 11:52:00] Building Docker image kube-build:build-8a869d1de6.
+++ [0601 11:52:15] Running build command....
Go version: go version go1.6.2 linux/amd64
+++ [0601 11:52:15] Building the toolchain targets: k8s.io/kubernetes/hack/cmd/teststale
+++ [0601 11:52:15] Building go targets for linux/amd64: cmd/kube-dns cmd/kube-proxy cmd/kube-apiserver cmd/kube-controller-manager cmd/kubelet cmd/kubemark cmd/hyperkube federation/cmd/federation-apiserver federation/cmd/federation-controller-manager plugin/cmd/kube-scheduler
Go version: go version go1.6.2 linux/amd64
+++ [0601 11:54:49] Building the toolchain targets: k8s.io/kubernetes/hack/cmd/teststale
+++ [0601 11:54:49] Building go targets for linux/amd64: cmd/kubectl
Go version: go version go1.6.2 linux/amd64
+++ [0601 11:55:23] Building the toolchain targets: k8s.io/kubernetes/hack/cmd/teststale
+++ [0601 11:55:23] Building go targets for linux/amd64: cmd/integration cmd/gendocs cmd/genkubedocs cmd/genman cmd/genyaml cmd/mungedocs cmd/genswaggertypedocs cmd/linkcheck examples/k8petstore/web-server/src federation/cmd/genfeddocs vendor/github.com/onsi/ginkgo/ginkgo test/e2e/e2e.test test/e2e_node/e2e_node.test
+++ [0601 11:55:34] Placing binaries
+++ [0601 11:55:41] Running build command....
+++ [0601 11:55:42] Output directory is local. No need to copy results out.
+++ [0601 11:55:42] Building tarball: salt
+++ [0601 11:55:42] Building tarball: manifests
+++ [0601 11:55:42] Starting tarball: client linux-amd64
+++ [0601 11:55:42] Building tarball: src
+++ [0601 11:55:42] Waiting on tarballs
+++ [0601 11:55:42] Building tarball: server linux-amd64
+++ [0601 11:55:43] Starting Docker build for image: kube-apiserver
+++ [0601 11:55:43] Starting Docker build for image: kube-controller-manager
+++ [0601 11:55:43] Starting Docker build for image: kube-scheduler
+++ [0601 11:55:43] Starting Docker build for image: kube-proxy
+++ [0601 11:55:43] Starting Docker build for image: federation-apiserver
+++ [0601 11:55:49] Deleting docker image gcr.io/google_containers/kube-scheduler:8d9c04654bf89c345123c4d0aa7a4b86
Untagged: gcr.io/google_containers/kube-scheduler:8d9c04654bf89c345123c4d0aa7a4b86
Deleted: sha256:69c17746922e13b0361e229941f6b6924f1168ab622903d139f6de2598a383ff
Deleted: sha256:8441af9c75eeba75314d251dec93abf69a8723f170ba35b5b06b88d3fdbee212
+++ [0601 11:55:50] Deleting docker image gcr.io/google_containers/kube-controller-manager:9a068086c7dc01f44243d58772fe59a6
Untagged: gcr.io/google_containers/kube-controller-manager:9a068086c7dc01f44243d58772fe59a6
Deleted: sha256:820d6ef7a1716b6b58c354d294f5e3889f8d91fa2356610fb5f2d618d7ac504f
Deleted: sha256:fd3408ba3ab5127a99ef1cc004dcf4cc313cd7164b8c84dfee4f21ee5076942d
+++ [0601 11:55:51] Deleting docker image gcr.io/google_containers/federation-apiserver:31b11e174e8509209a235b3f7ab62b42
Untagged: gcr.io/google_containers/federation-apiserver:31b11e174e8509209a235b3f7ab62b42
Deleted: sha256:f99aeff02124b2726d3a211d97742ef40fc76703ec90a8125e5cc7972d3f372e
Deleted: sha256:77fc7de4a144f1e454a121f5371fdab5d113a8f9f143d5e9d548cd1df6840a7e
+++ [0601 11:55:51] Deleting docker image gcr.io/google_containers/kube-apiserver:8abeb14a2ac02a7ad0eac43cd6762930
Untagged: gcr.io/google_containers/kube-apiserver:8abeb14a2ac02a7ad0eac43cd6762930
Deleted: sha256:2415fedfd62c8e90328356a9abd3e00786787e56776e5f82ae391e8683eaeb8c
Deleted: sha256:96e96720f8ac5834fc648c8b235e5c60bc7d8d6b3794866bd192062388b43f28
+++ [0601 11:56:14] Deleting docker image gcr.io/google_containers/kube-proxy:91a356e7e28ec2dad4664f9931058a97
Untagged: gcr.io/google_containers/kube-proxy:91a356e7e28ec2dad4664f9931058a97
Deleted: sha256:e4442e192e652189bd9cba9cc0c9bbe7bf48ac6364575d0209334f99a78cccde
Deleted: sha256:c3fa6304ddc47504a59b9632cce4fa44d36b89895bf04ac5d1ff59366be6d339
+++ [0601 11:56:14] Docker builds done
+++ [0601 11:57:17] Building tarball: full
+++ [0601 11:57:17] Building tarball: test
#

Step 3: Install an SSH Public Key

We need to have an SSH public key installed. This will be used to gain access to the K8S control plane and worker VMs using the K8S user account, `kube`.

root@photon-machine [ ~ ]# ssh-keygen -t rsa -b 4096 -C "account-details"
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:xxxxxxxxxxxxxxxxxxxxxx xxxxx@xxxxxx.com
The key's randomart image is:
+---[RSA 4096]----+
xxxxxxxxxxxxxxxxxxx
+----[SHA256]-----+
root@photon-machine [ ~ ]#
root@photon-machine [ ~ ]# eval $(ssh-agent)
Agent pid 17674
root@photon-machine [ ~ ]# ssh-add ~/.ssh/id_rsa
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
root@photon-machine [ ~ ]#

Step 4: Prepare Photon Controller

Because support for Photon Controller v0.9 is now fully baked into K8S, there are some very helpful scripts provided to set up the Photon Controller environment (tenant, project, and so on). If you change directory to kubernetes/cluster/photon-controller, you will see a set of scripts for ease of setup.

# ls
config-common.sh  config-default.sh  config-test.sh  setup-prereq.sh  templates  util.sh

If you wish to change the names of the tenant, project, etc. from the default of "kube", edit the config-common.sh script. The script that creates the environment is setup-prereq.sh. However, before we run it, we need to provide an image for the K8S control plane and workers. VMware has provided one (a Debian 8 image) which you can get from bintray here. I placed it in the folder /workspace/images. The URL provided below as the first argument to setup-prereq.sh is the Photon Controller IP address and load-balancer port.
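For illustration, the overrides in config-common.sh look something like the following. The variable names here are hypothetical; check the script in your own checkout for the actual names before editing:

```shell
# Hypothetical excerpt in the style of
# cluster/photon-controller/config-common.sh -- verify the real variable
# names in your own checkout before changing anything.
PHOTON_TENANT=kube-tenant     # tenant created by setup-prereq.sh
PHOTON_PROJECT=kube-project   # project within that tenant
PHOTON_VM_FLAVOR=kube-vm      # flavor used for the master/worker VMs
PHOTON_DISK_FLAVOR=kube-disk  # flavor used for the boot disks
```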

# ./setup-prereq.sh http://10.27.44.34:28080 /workspace/images/debian-8.2.vmdk
Photon Target: http://10.27.44.34:28080
Photon VMDK: /workspace/images/debian-8.2.vmdk
Making tenant kube-tenant
Making project kube-project
Making VM flavor kube-vm
Making disk flavor kube-disk
Uploading image /workspace/images/debian-8.2.vmdk

Step 5: Run kube-up

As mentioned previously, the upload of the image can take some time, so log on to the host client and verify that the images are in place before proceeding with the deployment.
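If you prefer to check from the command line, something along these lines should work with the photon CLI (a sketch; it assumes the CLI is installed and pointed at the same load-balancer endpoint used earlier):

```shell
# Point the photon CLI at the Photon Controller endpoint used earlier
photon target set http://10.27.44.34:28080

# List the uploaded images and confirm the Debian image has finished uploading
photon image list
```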

We are now ready to run kube-up. You need to change directory to the kubernetes/cluster folder, and run the script kube-up.sh. Note that you need to precede it with KUBERNETES_PROVIDER=photon-controller to ensure that the correct platform is chosen:

# KUBERNETES_PROVIDER=photon-controller ./kube-up.sh
... Starting cluster using provider: photon-controller
... calling verify-prereqs
... calling kube-up
+++ [0603 11:52:34] Validating SSH configuration...
Identity added: /root/.ssh/id_rsa (/root/.ssh/id_rsa)
+++ [0603 11:52:34] Validating Photon configuration...
+++ [0603 11:52:35] Starting master VM...
+++ [0603 11:57:16] Created VM kubernetes-master: b08b43b9-b52b-4975-88d4-1cb30bfecfa8
+++ [0603 11:57:20] Started VM kubernetes-master, waiting for network address...
+++ [0603 11:58:09] VM kubernetes-master has IP: 10.27.33.75
+++ [0603 11:58:09] Installing salt on master...
+++ [0603 11:58:15] Uploading kubernetes-server-linux-amd64.tar.gz to kubernetes-master...
+++ [0603 11:58:22] Uploading kubernetes-salt.tar.gz to kubernetes-master...
+++ [0603 11:58:43] Creating nodes and installing salt on them...
+++ [0603 11:58:45] Created VM kubernetes-node-1: 4beb7dcf-739f-4a35-9d09-0974fdb86280
+++ [0603 11:58:45] Created VM kubernetes-node-3: ee189cc2-8a8a-4937-8a2a-753aceac018f
+++ [0603 11:58:45] Created VM kubernetes-node-2: 8b0dcf2d-d7f5-4dd7-b589-cdbb72f31c24
+++ [0603 11:58:47] Started VM kubernetes-node-1, waiting for network address...
+++ [0603 11:58:49] Started VM kubernetes-node-2, waiting for network address...
+++ [0603 11:58:49] Started VM kubernetes-node-3, waiting for network address...
+++ [0603 11:59:39] VM kubernetes-node-2 has IP: 10.27.33.112
+++ [0603 11:59:40] VM kubernetes-node-3 has IP: 10.27.33.115
+++ [0603 11:59:47] VM kubernetes-node-1 has IP: 10.27.33.120
+++ [0603 12:00:06] Waiting for salt-master to start on kubernetes-master for up to 10 minutes...
+++ [0603 12:01:08] Installing Kubernetes on kubernetes-master via salt for up to 10 minutes...
+++ [0603 12:05:44] Waiting for salt-master to start on kubernetes-node-3 for up to 10 minutes...
+++ [0603 12:05:44] Waiting for salt-master to start on kubernetes-node-1 for up to 10 minutes...
+++ [0603 12:05:44] Waiting for salt-master to start on kubernetes-node-2 for up to 10 minutes...
+++ [0603 12:05:49] Installing Kubernetes on kubernetes-node-3 via salt for up to 10 minutes...
+++ [0603 12:05:49] Installing Kubernetes on kubernetes-node-1 via salt for up to 10 minutes...
+++ [0603 12:05:49] Installing Kubernetes on kubernetes-node-2 via salt for up to 10 minutes...
+++ [0603 12:07:04] Waiting for Kubernetes API on kubernetes-master for up to 10 minutes...
+++ [0603 12:08:13] Waiting for Kubernetes API on kubernetes-node-1... for up to 10 minutes...
+++ [0603 12:08:13] Waiting for Kubernetes API on kubernetes-node-2... for up to 10 minutes...
+++ [0603 12:08:13] Waiting for Kubernetes API on kubernetes-node-3... for up to 10 minutes...
+++ [0603 12:08:13] Waiting for cbr0 bridge on kubernetes-node-1 to have an address for up to 10 minutes...
+++ [0603 12:08:18] Waiting for cbr0 bridge on kubernetes-node-1 to have correct address for up to 10 minutes...
+++ [0603 12:09:21] cbr0 on kubernetes-node-1 is 10.244.1.0/24
+++ [0603 12:09:21] Waiting for cbr0 bridge on kubernetes-node-2 to have an address for up to 10 minutes...
+++ [0603 12:09:26] Waiting for cbr0 bridge on kubernetes-node-2 to have correct address for up to 10 minutes...
+++ [0603 12:09:36] cbr0 on kubernetes-node-2 is 10.244.2.0/24
+++ [0603 12:09:36] Waiting for cbr0 bridge on kubernetes-node-3 to have an address for up to 10 minutes...
+++ [0603 12:09:41] Waiting for cbr0 bridge on kubernetes-node-3 to have correct address for up to 10 minutes...
+++ [0603 12:09:51] cbr0 on kubernetes-node-3 is 10.244.0.0/24
+++ [0603 12:09:51] Configuring pod routes on kubernetes-node-1...
+++ [0603 12:10:07] Configuring pod routes on kubernetes-node-2...
+++ [0603 12:10:27] Configuring pod routes on kubernetes-node-3...
+++ [0603 12:10:43] Copying credentials from kubernetes-master
+++ [0603 12:11:29] Creating kubeconfig...
cluster "photon-kubernetes" set.
user "photon-kubernetes" set.
context "photon-kubernetes" set.
switched to context "photon-kubernetes".
Wrote config for photon-kubernetes to /root/.kube/config
... calling validate-cluster
Found 3 node(s).
NAME           STATUS    AGE
10.27.33.112   Ready     3m
10.27.33.115   Ready     3m
10.27.33.120   Ready     3m
Validate output:
NAME                 STATUS    MESSAGE              ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health": "true"}
etcd-1               Healthy   {"health": "true"}
Cluster validation succeeded
Done, listing cluster services:

Kubernetes master is running at https://10.27.33.75
KubeDNS is running at https://10.27.33.75/api/v1/proxy/namespaces/kube-system/services/kube-dns
kubernetes-dashboard is running at https://10.27.33.75/api/v1/proxy/namespaces/kube-system/services/kubernetes-dashboard

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
#

And there you have it, K8S is now up and running.

Step 6: Explore Kubernetes with kubectl.sh

I already wrote a bit on how you can familiarize yourself with K8S using the kubectl command in my earlier K8S post. This is pretty much the same, but you need to precede the kubectl command with KUBERNETES_PROVIDER=photon-controller as follows:

# KUBERNETES_PROVIDER=photon-controller ./kubectl.sh get pods
NAME               READY     STATUS         RESTARTS   AGE
web-server-6v84d   1/1       Running        0          4m
web-server-aw6rx   0/1       ErrImagePull   0          23s

# KUBERNETES_PROVIDER=photon-controller ./kubectl.sh describe rc,svc
Name:          web-server
Namespace:     default
Image(s):      nginx
Selector:      app=web-server
Labels:        app=web-server
Replicas:      2 current / 2 desired
Pods Status:   1 Running / 1 Waiting / 0 Succeeded / 0 Failed
No volumes.
Events:
  FirstSeen   LastSeen   Count   From                        SubobjectPath   Type     Reason             Message
  ---------   --------   -----   ----                        -------------   ------   ------             -------
  4m          4m         1       {replication-controller }                   Normal   SuccessfulCreate   Created pod: web-server-6v84d
  50s         50s        1       {replication-controller }                   Normal   SuccessfulCreate   Created pod: web-server-aw6rx

Name:               kubernetes
Namespace:          default
Labels:             component=apiserver provider=kubernetes
Selector:           <none>
Type:               ClusterIP
IP:                 10.244.240.1
Port:               https   443/TCP
Endpoints:          10.27.32.141:443
Session Affinity:   ClientIP
No events.

Name:               web-server
Namespace:          default
Labels:             app=web-server
Selector:           app=web-server
Type:               LoadBalancer
IP:                 10.244.241.118
Port:               tcp-80-80-emh7z   80/TCP
NodePort:           tcp-80-80-emh7z   30937/TCP
Endpoints:          10.244.1.2:80
Session Affinity:   None
No events.
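For context, a replication controller and service like the ones described above could have been created with commands along these lines (a sketch; the --generator=run/v1 flag asks the kubectl of that era for a replication controller rather than a deployment, so verify the flags against your kubectl version):

```shell
# Create a replication controller running two nginx pods (illustrative)
KUBERNETES_PROVIDER=photon-controller ./kubectl.sh run web-server \
    --image=nginx --replicas=2 --generator=run/v1

# Expose it as a LoadBalancer service on port 80
KUBERNETES_PROVIDER=photon-controller ./kubectl.sh expose rc web-server \
    --port=80 --type=LoadBalancer
```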

Step 7: Run kube-down

Shutting down Kubernetes is also straightforward, using the kube-down.sh command. Note that it did not always manage to shut down all of the control plane and worker components. I get this on occasion:

# KUBERNETES_PROVIDER=photon-controller ./kube-down.sh
Bringing down cluster using provider: photon-controller
+++ [0614 13:12:20] Master: kubernetes-master (10.27.33.225)
+++ [0614 13:12:21] Node: kubernetes-node-1 (10.27.33.228)
+++ [0614 13:12:22] Node: kubernetes-node-2 (10.27.33.233)
+++ [0614 13:12:23] Node: kubernetes-node-3 (10.27.33.226)
+++ [0614 13:12:23] Deleting VM kubernetes-master
+++ [0614 13:12:26] Deleting VM kubernetes-node-1
+++ [0614 13:12:28] Deleting VM kubernetes-node-2
+++ [0614 13:12:34] Deleting VM kubernetes-node-3
!!! [0614 13:12:34] Error: could not delete kubernetes-node-3 (86680f49-222e-4793-8fe7-c4753a704311)
!!! [0614 13:12:34] Please investigate and delete manually

If this occurs, you need to use the photon controller CLI to list, stop, and delete any Kubernetes VMs left hanging around: the commands are "photon vm list", "photon vm stop <vm-id>" and "photon vm delete <vm-id>". If you do not clean up the VMs, subsequent K8S deployments may not succeed.
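A small sketch of that manual cleanup is below. It assumes "photon vm list" prints one VM per line with the ID in the first column and the name in the second; check the output format of your CLI version before running anything like this:

```shell
# Stop and delete any leftover kubernetes-* VMs (sketch -- verify the
# column layout of "photon vm list" on your CLI version first).
for id in $(photon vm list | awk '$2 ~ /^kubernetes-/ {print $1}'); do
  photon vm stop "$id"
  photon vm delete "$id"
done
```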

Troubleshooting

If you run into any other issues during the K8S deployment, you now have the ability to log in to any of the K8S nodes using the "kube" username. The SSH key that we added earlier gives you access, so no password is required. Deployment logs can be found in /tmp on the VMs/nodes.