Kubernetes architecture: a quick insight

The master node is responsible for managing the entire cluster and monitors the health of all nodes. When a worker node fails, the master moves the workload from the failed node to a healthy one. A master node generally doesn't run any application pods directly.

It also handles scheduling, provisioning, and controlling, and it exposes the APIs that clients use to interact with the cluster.
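As a quick illustration of that API, here is a minimal sketch using the official `kubernetes` Python client; it assumes the client is installed (`pip install kubernetes`) and that a local kubeconfig points at a running cluster. It asks the API server on the master for the health of every node:

```python
from kubernetes import client, config

config.load_kube_config()   # reads credentials from ~/.kube/config
v1 = client.CoreV1Api()     # core API served by the master's API server

# The master monitors node health; the Ready condition reflects it.
for node in v1.list_node().items:
    ready = next(c.status for c in node.status.conditions if c.type == "Ready")
    print(f"{node.metadata.name}: Ready={ready}")
```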

A Kubernetes cluster must have at least one master node. In production systems, there is usually more than one master node for high availability and fault tolerance.

In the real world, a cluster could have 10, 100, or 1000+ worker nodes; Kubernetes supports up to 5,000 nodes per cluster.

The scheduler inside the master node is responsible for placing pods across the worker nodes.
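You can observe the scheduler's decisions by reading back the node each pod was bound to; `spec.nodeName` (exposed as `spec.node_name` in the Python client) stays empty until the scheduler assigns the pod. A small sketch, under the same assumptions as above:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# spec.node_name is empty until the scheduler binds the pod to a node.
for pod in v1.list_pod_for_all_namespaces().items:
    print(f"{pod.metadata.namespace}/{pod.metadata.name} -> {pod.spec.node_name}")
```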

Controller Manager — there are four types of controllers available:

Node Controller: Responsible for noticing and responding when nodes go down.

Replication Controller: Responsible for maintaining the correct number of pods for every replication controller object in the system (see the sketch after this list).

Endpoints Controller: Populates the Endpoints object (that is, joins Services & Pods).

Service Account & Token Controllers: Create default accounts and API access tokens for new namespaces.
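As an example of the replication controller's job, here is a hedged sketch that declares a desired replica count with the Python client; the ReplicaSet used here is the modern successor to the ReplicationController, and the name `web-rs`, the labels, and the image are illustrative assumptions. The controller then keeps the actual pod count at the desired number:

```python
from kubernetes import client, config

config.load_kube_config()
apps = client.AppsV1Api()

# Desired state: 3 replicas of an nginx pod. The controller keeps
# the actual count at 3, recreating pods that die.
rs = client.V1ReplicaSet(
    metadata=client.V1ObjectMeta(name="web-rs"),
    spec=client.V1ReplicaSetSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[client.V1Container(name="nginx", image="nginx:1.25")]
            ),
        ),
    ),
)
apps.create_namespaced_replica_set(namespace="default", body=rs)
```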

Etcd — a highly available, distributed key-value database in which Kubernetes stores the current cluster state at any point in time.
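For illustration only: Kubernetes persists its objects under keys such as `/registry/...`. Assuming direct access to the etcd endpoint (production clusters restrict this, and the stored values are binary-encoded), the third-party `python-etcd3` package could list those keys:

```python
import etcd3  # pip install etcd3; assumes etcd is reachable on localhost

etcd = etcd3.client(host="127.0.0.1", port=2379)

# Kubernetes keeps cluster state under /registry/...; values are
# protobuf-encoded, so we only print the keys here.
for _value, meta in etcd.get_prefix("/registry/pods/"):
    print(meta.key.decode())
```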

Pods — each pod consists of one or more containers. By default, a node can run a maximum of 110 pods. Each pod has a unique IP address inside the k8s cluster.

A pod should be controlled by a ReplicaSet or a ReplicationController; otherwise, when such a bare pod dies, nothing recreates it.

Containers exist inside the pod and hold the containerized application, its libraries, and their dependencies.
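Putting the last three points together, here is a minimal sketch that creates a bare pod with one container and reads back its cluster IP; the name `demo-pod` and the nginx image are illustrative assumptions:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A bare pod with a single container (name and image are illustrative).
pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="demo-pod", labels={"app": "demo"}),
    spec=client.V1PodSpec(
        restart_policy="Always",  # kubelet restarts failed containers in place
        containers=[client.V1Container(name="nginx", image="nginx:1.25")],
    ),
)
v1.create_namespaced_pod(namespace="default", body=pod)

# Once running, the pod gets its cluster-unique IP in status.pod_ip.
created = v1.read_namespaced_pod(name="demo-pod", namespace="default")
print(created.status.pod_ip)  # may be None until the pod is scheduled
```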

Kubelet — the primary node agent that runs on every worker node in the cluster. If the kubelet finds an issue within a pod, it tries to restart the pod on the same worker node. If the issue is with the worker node itself, the K8s master detects the node failure and recreates the pods on another healthy worker node.

Kube-proxy — a critical element inside a Kubernetes cluster, running on each node, that maintains the network rules which route traffic to Services. It also helps expose Services to the outside world.
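As a closing sketch, a NodePort Service is one way to expose pods outside the cluster; kube-proxy programs every node to forward the chosen node port to the matching pods. The names and port numbers here are illustrative assumptions:

```python
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

# A NodePort Service: kube-proxy forwards port 30080 on every node
# to port 80 of the pods labeled app=demo.
svc = client.V1Service(
    metadata=client.V1ObjectMeta(name="demo-svc"),
    spec=client.V1ServiceSpec(
        type="NodePort",
        selector={"app": "demo"},
        ports=[client.V1ServicePort(port=80, target_port=80, node_port=30080)],
    ),
)
v1.create_namespaced_service(namespace="default", body=svc)
```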