This will be the last article in the 101 series, as I think I have covered most of the introductory storage-related items at this point. One object that came up time and again during the series was the Service. While not specifically a storage item, it is a fundamental building block of Kubernetes applications. In the 101 series, we came across a “headless” service with the Cassandra StatefulSet demo; this was where the service clusterIP was set to None. When we started to look at ReadWriteMany volumes, we used NFS to demonstrate these volumes in action. In the first NFS example, we came across a blank clusterIP entry. This was the service type used when NFS client Pods were mounting file shares from an NFS server Pod. We then looked at a LoadBalancer type service, which we used to allow external NFS clients outside of the K8s cluster to mount a file share from an NFS server Pod.

When a service is created, it typically gets (1) a virtual IP address, (2) a DNS entry, and (3) networking rules that ‘proxy’ or redirect network traffic to the Pod/Endpoint that actually provides the service. When that virtual IP address receives traffic, kube-proxy is responsible for redirecting the traffic to the correct back-end Pod/Endpoint. You might ask what the point of a service is. Well, services address the issue that Pods come and go, and each time a Pod is restarted, it most likely gets a new IP address. This makes it difficult to maintain connectivity/communication with the Pods, especially for clients. Through services, K8s provides a mechanism to maintain a unique IP address for the lifespan of the service. Clients can then be configured to talk to the service, and traffic to the service will be load balanced across all the Pods that are connected to it.

At this point, let’s revisit some of the internal K8s components that we have already come across. It will be useful to appreciate the purpose of each in the context of services.

kubeDNS / coreDNS revisited

In the failure scenarios post, we talked a little about some of the internal components of a K8s cluster. When a service is created, it is assigned a virtual IP address. This IP address is added to DNS to make service discovery easier. The DNS name-service is implemented by coreDNS or kubeDNS; the name server implementation depends on your distribution. PKS uses coreDNS whilst upstream K8s distros use kubeDNS.
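As an aside, the DNS name that a service receives follows a predictable pattern. This little Python sketch (illustrative only; the function name is my own, and cluster.local is only the default cluster domain, which can be changed at cluster build time) shows how the fully qualified name is composed:

```python
def service_fqdn(name, namespace, cluster_domain="cluster.local"):
    """Build the in-cluster DNS name for a Kubernetes service:
    <service>.<namespace>.svc.<cluster-domain>"""
    return f"{name}.{namespace}.svc.{cluster_domain}"

# The nginx service used later in this post lives in the svc-demo namespace:
print(service_fqdn("nginx-svc", "svc-demo"))
# → nginx-svc.svc-demo.svc.cluster.local
```

This matches the name we will see resolved by nslookup later in the post.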

kube-proxy revisited

Another one of the internal components we touched on was kube-proxy. I described kube-proxy as the component that configures networking on the K8s nodes. It routes network requests from the virtual IP address of a service to the endpoint (Pod) implementing the service, anywhere in the cluster. Thus, a front-end Pod on one K8s node is able to seamlessly communicate with a back-end Pod on a completely different K8s node in the same K8s cluster.

Why don’t we go ahead and tease out these services in some further detail, and look at some of the possible service types that you may come across. I am going to use the following manifest files for my testing. The first is an nginx web server deployment with 2 replicas, so there will be two back-end (behind a service) Pods deployed.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80

I am also using this manifest for another simple busybox Pod to allow me to do Pod to Pod testing in the cluster.

apiVersion: v1
kind: Pod
metadata:
  name: demo-nginx-pod
spec:
  containers:
  - name: busybox
    image: "k8s.gcr.io/busybox"
    # keep the container running so we can exec into it
    # (assumed; a command was not shown in the original manifest)
    command: ["sleep", "3600"]

Now we need to create a Service manifest, but we will be modifying this YAML file with each test. Let’s start our testing of Kubernetes Services with a look at clusterIP.

1. clusterIP

clusterIP can have a number of different values when it comes to services. Let’s look at the most common.

1.1 clusterIP set to “” (blank)

This is the default service type in Kubernetes. With clusterIP set to “” (blank), the service is accessible within the cluster only – no external access is allowed from outside of the cluster. There is also no direct access to the back-end Pods via DNS, as the Pods are not added to DNS. Instead, there is a single DNS name for the group of back-end Pods (of course, the Pods are still accessible via IP address).

Let’s assume that this service has been assigned to a group of Pods running some application. Access to the application is available via the virtual IP address of the service, or via the DNS name assigned to the service. When a client accesses the service via the virtual IP address or DNS name for the group of Pods, the first request is proxied (by kube-proxy) to the first Pod, the second request goes to the second Pod, and so on. Requests are load balanced across all Pods in the group.
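The behaviour just described can be modelled in a few lines. This is a toy sketch (class and variable names are my own), assuming the simple sequential distribution described above; kube-proxy’s real distribution depends on its mode (iptables mode, for example, picks back-ends at random rather than strictly in turn):

```python
import itertools

class ServiceProxy:
    """Toy model of a ClusterIP service: one virtual IP fronting
    several Pod endpoints, distributing requests round-robin."""
    def __init__(self, cluster_ip, endpoints):
        self.cluster_ip = cluster_ip
        self._cycle = itertools.cycle(endpoints)

    def route(self):
        # Each request sent to the virtual IP lands on the next endpoint.
        return next(self._cycle)

svc = ServiceProxy("10.100.200.20", ["172.16.6.2:80", "172.16.6.3:80"])
print([svc.route() for _ in range(4)])
# → ['172.16.6.2:80', '172.16.6.3:80', '172.16.6.2:80', '172.16.6.3:80']
```

The key point is that clients only ever see the one stable virtual IP; which Pod answers is decided behind it.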

Let’s now deploy our Pods and service (using the below manifest), and look at the behaviour in more detail. The service manifest has clusterIP set to “” (blank). Note also that the port matches the container port in the deployment (80), and that the selector for the service is the same as the label of the deployment (nginx).

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-svc
spec:
  clusterIP: ""
  ports:
  - name: http
    port: 80
  selector:
    app: nginx

Now we shall deploy the ‘deployment’ and the ‘service’ manifests. Once these have been deployed, we will look at the service, and its endpoints. The endpoints should be the two Pods which are part of the deployment. We will also see that there is no external IP associated with the service. It is internal only.

$ kubectl get deploy
NAME               DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-deployment   2         2         2            2           103m

$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE    IP           NODE                                   NOMINATED NODE
nginx-deployment-8b8f7ccb4-qct48   1/1     Running   0          104m   172.16.6.3   6ac7f51f-af3f-4b55-8f47-6449a8a7c365   <none>
nginx-deployment-8b8f7ccb4-xpk8p   1/1     Running   0          104m   172.16.6.2   2164e6f0-1b8a-4edd-b268-caaf26792dd4   <none>

$ kubectl get svc
NAME        TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
nginx-svc   ClusterIP   10.100.200.20   <none>        80/TCP    3m31s

$ kubectl get endpoints
NAME        ENDPOINTS                     AGE
nginx-svc   172.16.6.2:80,172.16.6.3:80   3m36s

$ kubectl describe endpoints
Name:         nginx-svc
Namespace:    svc-demo
Labels:       app=nginx
Annotations:  <none>
Subsets:
  Addresses:          172.16.6.2,172.16.6.3
  NotReadyAddresses:  <none>
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  80    TCP
Events:  <none>

We will now deploy the simple busybox Pod (called demo-nginx-pod) in the same namespace as the nginx deployment, and after opening a shell to the Pod, we will try to reach the nginx service. This should be possible, using both the service name nginx-svc, and also the assigned Cluster-IP address of 10.100.200.20. Note also how the service name resolves to the IP address. I will use the wget command to verify that I can pull down the nginx landing page (which is a simple welcome message) from the back-end Pods.

$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE    IP           NODE                                   NOMINATED NODE
demo-nginx-pod                     1/1     Running   0          3m     172.16.6.4   2164e6f0-1b8a-4edd-b268-caaf26792dd4   <none>
nginx-deployment-8b8f7ccb4-qct48   1/1     Running   0          104m   172.16.6.3   6ac7f51f-af3f-4b55-8f47-6449a8a7c365   <none>
nginx-deployment-8b8f7ccb4-xpk8p   1/1     Running   0          104m   172.16.6.2   2164e6f0-1b8a-4edd-b268-caaf26792dd4   <none>

$ kubectl exec -it demo-nginx-pod -- /bin/sh
/ # cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
fe00::0	ip6-mcastprefix
fe00::1	ip6-allnodes
fe00::2	ip6-allrouters
172.16.6.4	demo-nginx-pod
/ # nslookup nginx-svc
Server:    10.100.200.10
Address 1: 10.100.200.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-svc
Address 1: 10.100.200.20 nginx-svc.svc-demo.svc.cluster.local
/ # wget -O - nginx-svc
Connecting to nginx-svc (10.100.200.20:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
-                    100% |***********************************|   612  0:00:00 ETA
/ #

This all looks very good. I can reach the nginx service running on the deployment Pods via the DNS name. Now the final point to make is that because clusterIP is set to “” (blank), the Pods that are the endpoints for the service are not added to DNS. However, the busybox Pod (demo-nginx-pod) which we are using to test the service has been added to DNS. We can verify this once again from the busybox Pod, where we can see it resolve itself, but not the deployment Pods.

/ # nslookup demo-nginx-pod
Server:    10.100.200.10
Address 1: 10.100.200.10 kube-dns.kube-system.svc.cluster.local

Name:      demo-nginx-pod
Address 1: 172.16.6.4 demo-nginx-pod
/ # nslookup nginx-deployment-8b8f7ccb4-qct48
Server:    10.100.200.10
Address 1: 10.100.200.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'nginx-deployment-8b8f7ccb4-qct48'
/ # nslookup 172.16.6.3
Server:    10.100.200.10
Address 1: 10.100.200.10 kube-dns.kube-system.svc.cluster.local

Name:      172.16.6.3
Address 1: 172.16.6.3
/ #

That completes our first look at the clusterIP setting of “” (blank). Let’s now look at the subtle differences with a headless service where clusterIP is set to “None”.

1.2 clusterIP set to “None” (aka headless)

With clusterIP explicitly set to “None“, the service is once again accessible within the cluster only. However, the difference compared to the previous setting is that the DNS name of the service resolves to the IP addresses of the individual Pods, rather than to a virtual IP address of its own. This type of service is typically used when you want to control which specific Pod or Pods you communicate with, rather than simply having requests distributed across them. Let’s look at this in more detail now.
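The contrast with the previous section can be sketched as follows. With a headless service, the DNS lookup itself hands back all of the Pod IPs, and the choice of Pod is left to the client (function and variable names below are my own, for illustration):

```python
import random

def headless_lookup(endpoints):
    """Toy model of a headless-service DNS query: there is no
    virtual IP, so the query returns the Pod IPs directly."""
    return list(endpoints)

pod_ips = headless_lookup(["172.16.6.2", "172.16.6.3"])
# The client, not kube-proxy, decides which Pod to talk to, e.g. by
# picking one at random, or by selecting a specific known Pod:
chosen = random.choice(pod_ips)
print(chosen in pod_ips)
# → True
```

This is exactly why StatefulSets (like the Cassandra demo earlier in the series) use headless services: clients and peers need to address individual, stable Pods.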

We will use the same setup as before. The only difference this time is that the service manifest has a single change: clusterIP is now set to “None”.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-svc
spec:
  clusterIP: "None"
  ports:
  - name: http
    port: 80
  selector:
    app: nginx

We will now run the same set of tests as before, and note the differences. One major difference is that there is no internal Cluster IP address associated with the service; this now appears as None.

$ kubectl create -f nginx-service.yaml
service/nginx-svc created

$ kubectl get svc
NAME        TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
nginx-svc   ClusterIP   None         <none>        80/TCP    5s

$ kubectl get endpoints
NAME        ENDPOINTS                     AGE
nginx-svc   172.16.6.2:80,172.16.6.3:80   10s

$ kubectl describe endpoints
Name:         nginx-svc
Namespace:    svc-demo
Labels:       app=nginx
Annotations:  <none>
Subsets:
  Addresses:          172.16.6.2,172.16.6.3
  NotReadyAddresses:  <none>
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  80    TCP
Events:  <none>

$ kubectl exec -it demo-nginx-pod -- /bin/sh
/ # cat /etc/hosts
# Kubernetes-managed hosts file.
127.0.0.1	localhost
::1	localhost ip6-localhost ip6-loopback
fe00::0	ip6-localnet
fe00::0	ip6-mcastprefix
fe00::1	ip6-allnodes
fe00::2	ip6-allrouters
172.16.6.4	demo-nginx-pod
/ # nslookup demo-nginx-pod
Server:    10.100.200.10
Address 1: 10.100.200.10 kube-dns.kube-system.svc.cluster.local

Name:      demo-nginx-pod
Address 1: 172.16.6.4 demo-nginx-pod

Now we get to the interesting part of the headless service (clusterIP set to “None”). When I resolve the service name from the busybox Pod, I get back the list of IP addresses of the back-end Pods, rather than a unique IP for the service itself. You can also see requests going to the different Pods on a round-robin basis. (Note: I had read that headless services always send requests to the first Pod, but that does not seem to be the case in my testing – perhaps this behaviour changed in a later version of K8s.)

/ # nslookup nginx-svc
Server:    10.100.200.10
Address 1: 10.100.200.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-svc
Address 1: 172.16.6.2
Address 2: 172.16.6.3
/ # ping nginx-svc
PING nginx-svc (172.16.6.3): 56 data bytes
64 bytes from 172.16.6.3: seq=0 ttl=64 time=1.792 ms
64 bytes from 172.16.6.3: seq=1 ttl=64 time=0.284 ms
64 bytes from 172.16.6.3: seq=2 ttl=64 time=0.332 ms
64 bytes from 172.16.6.3: seq=3 ttl=64 time=0.384 ms
^C
--- nginx-svc ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.284/0.698/1.792 ms
/ # ping nginx-svc
PING nginx-svc (172.16.6.2): 56 data bytes
64 bytes from 172.16.6.2: seq=0 ttl=64 time=1.022 ms
64 bytes from 172.16.6.2: seq=1 ttl=64 time=0.218 ms
64 bytes from 172.16.6.2: seq=2 ttl=64 time=0.231 ms
64 bytes from 172.16.6.2: seq=3 ttl=64 time=0.217 ms
^C
--- nginx-svc ping statistics ---
4 packets transmitted, 4 packets received, 0% packet loss
round-trip min/avg/max = 0.217/0.422/1.022 ms
/ # wget -O - nginx-svc
Connecting to nginx-svc (172.16.6.3:80)
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
-                    100% |***********************************|   612  0:00:00 ETA
/ #

One final point to note is that the deployment Pods are once again not added to DNS when using this service type.

/ # nslookup demo-nginx-pod
Server:    10.100.200.10
Address 1: 10.100.200.10 kube-dns.kube-system.svc.cluster.local

Name:      demo-nginx-pod
Address 1: 172.16.6.4 demo-nginx-pod

/ # nslookup nginx-deployment-8b8f7ccb4-xpk8p
Server:    10.100.200.10
Address 1: 10.100.200.10 kube-dns.kube-system.svc.cluster.local

nslookup: can't resolve 'nginx-deployment-8b8f7ccb4-xpk8p'
/ #

1.3 clusterIP set to “X.X.X.X” IP Address

Let’s briefly look at one last setting. Another option available with clusterIP is to set your own IP address. While I have never needed to use this, according to the official documentation it could be useful if you have to reuse an existing DNS entry, or if you have legacy systems tied to a specific IP address that you can’t reconfigure.
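One thing to be aware of if you do set your own address: the chosen IP must fall inside the cluster’s service CIDR, or the API server will reject the Service. A small sketch of that check (the function name and the example service CIDR below are my own assumptions, not values from this cluster):

```python
import ipaddress

def valid_cluster_ip(ip, service_cidr):
    """Return True if a manually chosen clusterIP falls inside the
    cluster's service CIDR (the API server enforces this)."""
    return ipaddress.ip_address(ip) in ipaddress.ip_network(service_cidr)

print(valid_cluster_ip("10.100.200.50", "10.100.200.0/24"))  # → True
print(valid_cluster_ip("192.168.0.5", "10.100.200.0/24"))    # → False
```

The service CIDR is set cluster-wide (for example via the kube-apiserver's service cluster IP range setting), so check with your cluster administrator before hard-coding an address.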

2. LoadBalancer

As we have seen, ClusterIP services are only accessible from within the cluster. LoadBalancer services expose the service externally. Kubernetes provides functionality similar to the blank clusterIP case, in that incoming requests are load balanced across all back-end Pods. However, the external load balancer functionality is provided by a third-party cloud load balancer provider; in my case, this is provided by NSX-T. As soon as I specify type: LoadBalancer in my service manifest file, NSX-T retrieves an available address from the preconfigured pool of floating IP addresses and allocates it to my service. When the service receives client requests on that external IP address, the load balancer (which has been updated with entries for the Kubernetes Pods) redirects or proxies these requests to the back-end Pods.
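The allocation step just described can be modelled as pulling an address out of a preconfigured pool. This is a toy sketch of that idea only (the class name and pool addresses are made up for illustration; it is not how NSX-T is implemented):

```python
class FloatingIPPool:
    """Toy model of an external load balancer provider handing out a
    floating IP when a type: LoadBalancer service is created."""
    def __init__(self, addresses):
        self._free = list(addresses)
        self.allocated = {}

    def allocate(self, service_name):
        # Take the next free address and record which service owns it.
        ip = self._free.pop(0)
        self.allocated[service_name] = ip
        return ip

pool = FloatingIPPool(["192.168.191.69", "192.168.191.70"])
print(pool.allocate("nginx-svc"))
# → 192.168.191.69
```

The important part is that the external address comes from infrastructure outside Kubernetes; the cluster merely asks the provider for one.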

Let’s begin with a look at the modified service manifest: the clusterIP entry has been removed, and the spec now includes type: LoadBalancer.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-svc
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: nginx
  type: LoadBalancer

After deploying the service, we see the external IP address populated. Now, I’m not 100% sure why two IP addresses show up below. One of them (100.64.0.1) is related to the NSX-T Logical Router, whilst the other (192.168.191.69) is the IP address allocated from the floating IP pool configured in NSX-T. I’m assuming this is a nuance of the NSX-T implementation.

$ kubectl get svc
NAME        TYPE           CLUSTER-IP       EXTERNAL-IP                 PORT(S)        AGE
nginx-svc   LoadBalancer   10.100.200.178   100.64.0.1,192.168.191.69   80:31467/TCP   8m27s

$ kubectl get endpoints
NAME        ENDPOINTS                     AGE
nginx-svc   172.16.6.2:80,172.16.6.3:80   8m31s

$ kubectl describe endpoints
Name:         nginx-svc
Namespace:    svc-demo
Labels:       app=nginx
Annotations:  <none>
Subsets:
  Addresses:          172.16.6.2,172.16.6.3
  NotReadyAddresses:  <none>
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  80    TCP
Events:  <none>

And now for the external test. Can I reach the nginx server on the Pods from outside the cluster? Let’s try a wget from my desktop:

$ wget -O - 192.168.191.69
--2019-07-02 10:48:15--  http://192.168.191.69/
Connecting to 192.168.191.69:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 [text/html]
Saving to: ‘STDOUT’

-                      0%[                    ]       0  --.-KB/s
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
-                    100%[===================>]     612  --.-KB/s    in 0s

2019-07-02 10:48:15 (123 MB/s) - written to stdout [612/612]

Looks like it is working. You could of course also open a browser and point it at the external IP address; you should see the ‘Welcome to nginx!’ page rendered. One last point – if you query this service from inside the cluster, you will continue to see the internal IP address, as follows:

$ kubectl exec -it demo-nginx-pod -- /bin/sh
/ # nslookup nginx-svc
Server:    10.100.200.10
Address 1: 10.100.200.10 kube-dns.kube-system.svc.cluster.local

Name:      nginx-svc
Address 1: 10.100.200.178 nginx-svc.svc-demo.svc.cluster.local
/ #

3. NodePort

The last service I want to discuss is NodePort, something I have used often in the past when I have not had an external load balancer available to my cluster. This is another method of exposing a service outside of the cluster, but rather than using a dedicated virtual IP address, it exposes a port on every K8s node in the cluster. Access to the service is then made via a reference to a node IP address plus the exposed port. Let’s look at an example of that next. First, let’s look at the manifest file, which now sets the type to NodePort.

apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx-svc
spec:
  ports:
  - name: http
    port: 80
  selector:
    app: nginx
  type: NodePort

When this service is deployed, you will notice that it continues to be allocated a cluster IP address, but the type is now NodePort. There is also no external IP address. The PORT(S) field below tells us that port 80 on the nginx server/Pod is accessible via node port 31027 (in this example).

$ kubectl get svc
NAME        TYPE       CLUSTER-IP       EXTERNAL-IP   PORT(S)        AGE
nginx-svc   NodePort   10.100.200.119   <none>        80:31027/TCP   3m29s

$ kubectl get endpoints nginx-svc
NAME        ENDPOINTS                     AGE
nginx-svc   172.16.6.2:80,172.16.6.3:80   3m50s

$ kubectl describe endpoints nginx-svc
Name:         nginx-svc
Namespace:    svc-demo
Labels:       app=nginx
Annotations:  <none>
Subsets:
  Addresses:          172.16.6.2,172.16.6.3
  NotReadyAddresses:  <none>
  Ports:
    Name  Port  Protocol
    ----  ----  --------
    http  80    TCP
Events:  <none>

Now, in order to access the nginx server/Pod, we need to know the port on which it is exposed, and a node to connect to. The port information is available above. We can get the node information using a combination of kubectl Pod and node commands. First, I can choose one of the nginx Pods and determine which node it is running on. Then I can get the IP address of that node, and use it (along with the port number) to run a wget command against the nginx server/Pod.

$ kubectl get pods -o wide
NAME                               READY   STATUS    RESTARTS   AGE   IP           NODE                                   NOMINATED NODE
nginx-deployment-8b8f7ccb4-qct48   1/1     Running   0          19h   172.16.6.3   6ac7f51f-af3f-4b55-8f47-6449a8a7c365   <none>
nginx-deployment-8b8f7ccb4-xpk8p   1/1     Running   0          19h   172.16.6.2   2164e6f0-1b8a-4edd-b268-caaf26792dd4   <none>

$ kubectl get nodes -o wide
NAME                                   STATUS   ROLES    AGE     VERSION   INTERNAL-IP     EXTERNAL-IP     OS-IMAGE             KERNEL-VERSION      CONTAINER-RUNTIME
2164e6f0-1b8a-4edd-b268-caaf26792dd4   Ready    <none>   6d23h   v1.12.4   192.168.192.4   192.168.192.4   Ubuntu 16.04.5 LTS   4.15.0-43-generic   docker://18.6.1
6ac7f51f-af3f-4b55-8f47-6449a8a7c365   Ready    <none>   6d23h   v1.12.4   192.168.192.5   192.168.192.5   Ubuntu 16.04.5 LTS   4.15.0-43-generic   docker://18.6.1
aaab83d2-b2c4-4c09-a0f4-14c3c234aa7b   Ready    <none>   6d22h   v1.12.4   192.168.192.6   192.168.192.6   Ubuntu 16.04.5 LTS   4.15.0-43-generic   docker://18.6.1
cba0db9c-eb9e-41e3-ba5a-916017af1c98   Ready    <none>   6d23h   v1.12.4   192.168.192.3   192.168.192.3   Ubuntu 16.04.5 LTS   4.15.0-43-generic   docker://18.6.1

$ wget -O - 192.168.192.5:31027
--2019-07-02 11:30:08--  http://192.168.192.5:31027/
Connecting to 192.168.192.5:31027... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 [text/html]
Saving to: ‘STDOUT’

-                      0%[                    ]       0  --.-KB/s
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
-                    100%[===================>]     612  --.-KB/s    in 0s

2019-07-02 11:30:08 (48.3 MB/s) - written to stdout [612/612]

And in fact, you can connect to that port on any of the nodes. Even if you connect to a node that is not running an nginx Pod, kube-proxy on every node in the cluster forwards that port to the service, so the request is proxied/routed/redirected to a back-end Pod. Therefore I could run the wget against any of the nodes in the cluster, and as long as I used the correct port number, my wget request would succeed.
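The access rule above can be sketched in a few lines. Note that 30000–32767 is the kube-apiserver’s default allocatable NodePort range (configurable per cluster), and the helper name below is my own:

```python
# kube-apiserver's default NodePort range (configurable per cluster)
DEFAULT_NODE_PORT_RANGE = range(30000, 32768)

def node_port_url(node_ip, node_port):
    """A NodePort service is reachable at <any node IP>:<node port>."""
    if node_port not in DEFAULT_NODE_PORT_RANGE:
        raise ValueError(f"{node_port} is outside the default NodePort range")
    return f"http://{node_ip}:{node_port}"

# Any node IP from 'kubectl get nodes -o wide' works equally well:
for ip in ["192.168.192.3", "192.168.192.4", "192.168.192.5"]:
    print(node_port_url(ip, 31027))
```

Every URL printed above reaches the same nginx service, regardless of which node actually hosts the Pods.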

There are some other service options, such as ExternalName and ExternalIPs, that I have never used, but I will add a short note for completeness. My understanding is that ExternalName is used when you want to map your service to an external DNS name, while ExternalIPs is used when your nodes have external IP addresses and you want to use those to reach your service. From what I have read, there is no proxying done by Kubernetes for these. You can read more about them in the official documentation.

That completes my overview of services in Kubernetes. These are the ones that I have come across most often when working with Kubernetes on vSphere.