Here we continue the CKA reboot of the existing CKAD challenge series.

Series Content

All CKA challenges

Rules!

Be fast and avoid creating YAML manually from scratch. Use only kubernetes.io/docs for help. Check my solution after you did yours. You probably have a better one!

Notices

This challenge was tested on k8s 1.18. Please let us know in the comments should you encounter any issues.

How to be fast with kubectl ≥ 1.18

Scenario Setup

You will start a two-node cluster on your machine, one master and one worker. For this you need to install VirtualBox and Vagrant, then:



git clone git@github.com:wuestkamp/cka-example-environments.git

cd cka-example-environments/cluster1

./up.sh

vagrant ssh cluster1-master1

vagrant@cluster1-master1:~$ sudo -i

root@cluster1-master1:~# kubectl get node

You should be connected as root@cluster1-master1. You can connect to worker nodes as root, like ssh root@cluster1-worker1.

If you want to destroy the environment again, run ./down.sh. You should destroy the environment after usage so no more resources are used!

Today's Task: Investigate a Multi-Container Pod Issue

Get the number of nodes plus their status and all available kubectl contexts. In namespace management there is a pod named web-server; check its status. Find the reason / error in the pod logs. Directly gather the logs of the Docker containers and check for issues. Fix the pod and ensure it's running.

Solution

The following commands will be executed as root@cluster1-master1:

alias k=kubectl

1.

We are connected to a new cluster, so first we get an overview:

k get node # should show one master and one worker

k config get-contexts # only one context
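On this environment the node output should look roughly like the following (ages and exact versions are assumed and will differ):

NAME               STATUS   ROLES    AGE   VERSION
cluster1-master1   Ready    master   10m   v1.18.6
cluster1-worker1   Ready    <none>   9m    v1.18.6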

2.

k -n management get pod # there are other resources

k -n management get pod web-server # shows ERROR

k -n management describe pod web-server # doesn't show much
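When describe is not conclusive, the namespace events can be another quick check (an extra step, not strictly needed here):

k -n management get events --sort-by=.metadata.creationTimestamp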

3.

k -n management logs web-server -c nginx # nothing

k -n management logs web-server -c httpd

The logs from the httpd container show:

(98)Address already in use: AH00072: make_sock: could not bind to address 0.0.0.0:80

The error is that both containers try to bind port 80, and since all containers in a pod share the same Linux kernel network namespace, this is not possible.
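To illustrate the conflict, the relevant part of the pod spec looks roughly like this (a minimal sketch; the container names come from above, the images are assumed):

apiVersion: v1
kind: Pod
metadata:
  name: web-server
  namespace: management
spec:
  containers:
  - name: nginx
    image: nginx   # binds 0.0.0.0:80 in the shared network namespace
  - name: httpd
    image: httpd   # also tries to bind 0.0.0.0:80 and crashes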

4.

For this we need to check where the pod is scheduled:

k -n management get pod web-server -o wide

It's scheduled on cluster1-worker1, so we ssh into it:

ssh root@cluster1-worker1

root@cluster1-worker1:~# docker ps | grep web-server

This lists the pause container, which is always created for a pod, and the nginx container.
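Instead of grepping you can also let docker filter and format the list directly, for example:

root@cluster1-worker1:~# docker ps --filter name=web-server --format '{{.ID}} {{.Names}} {{.Status}}'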

# use the nginx container ID

root@cluster1-worker1:~# docker logs e212e6b1788f

This shows the same empty response as using k -n management logs web-server -c nginx.

The httpd container could not be created, hence we cannot gather its logs via docker. But we can check docker events:

docker events --until 0m | grep web-server

docker events --until 0m | grep web-server | grep die

This shows us some information about the container creation and deletion.
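docker events can also filter by event type directly, so the second grep can be replaced (the same check in a different form):

docker events --until 0m --filter event=die | grep web-server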

5.

Well, this was a bit of a broad request :) First of all, there should probably never be nginx+httpd in one pod, as both do the same thing: they are webservers.

We could try to run one of the containers on a different port, but this would require adding to and altering the default configuration. The simpler fix is to remove one of the two containers from the pod, as sketched below.
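A sketch of that simpler fix (which container to drop is a judgment call; here we keep nginx). Since the containers list of a running pod cannot be edited in place, we export, adjust and recreate:

k -n management get pod web-server -o yaml > web-server.yaml

# edit web-server.yaml and remove the httpd entry from spec.containers

k -n management delete pod web-server

k -n management apply -f web-server.yaml

k -n management get pod web-server # should be Running now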

Clean up

Run: ./down.sh

All CKA challenges
