Kubernetes basic glossary

Must-know terminology to understand Kubernetes concepts

When I was starting to learn Kubernetes, I was overwhelmed by all of the elements it introduces. The deeper I dove into it, the more new aspects I had to familiarize myself with. After a while, though, I realized I wouldn’t leverage some elements at all, a couple of them I simply didn’t need yet, and only a few were actually useful for my current requirements.

I strongly believe this might be true in your case as well, and I see lots of people struggling with the same issues I described. That’s why I decided to present here the minimal stack required to deploy your own application on Kubernetes and expose it externally.

Prerequisites

I assume you already have a Kubernetes cluster available. It can be installed locally or hosted, among other options, on Google Cloud Platform.

Additionally, you may have a look at my recent article, which will help you set everything up.

Glossary

Let’s go through all the components we are planning to use in our setup. Once you are familiar with them, you’ll be able to start configuring them properly for your application.

Image

An image is a lightweight, standalone, executable package of software that includes everything needed to run an application — its code, a runtime, system libraries and tools, environment variables, and configuration files.

Container

A container is launched by running an image: it is a runtime instance of an image, i.e. what the image becomes in memory when executed.

Containers are an abstraction at the app layer. They can run on the same machine and share the OS kernel with other containers, each running as an isolated process in user space.
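To tie images and containers together in Kubernetes terms, here is a minimal sketch of a Pod manifest — the smallest unit Kubernetes uses to run containers. The name `my-pod` and the `nginx:1.25` image are placeholders chosen purely for illustration:

```yaml
# A minimal Pod wrapping a single container started from an image.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod              # hypothetical name — choose your own
spec:
  containers:
    - name: web
      image: nginx:1.25     # the image this container is a running instance of
      ports:
        - containerPort: 80
```

Applying a manifest like this (e.g. `kubectl apply -f pod.yaml`) instructs Kubernetes to pull the image and launch it as a container on one of the cluster’s nodes.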

Node

A node is a representation of a VM or a physical machine in Kubernetes where containers are deployed. Each node is managed by master components.

A node is not inherently created by Kubernetes: it is created externally by cloud providers or it exists in your pool of physical or virtual machines. Hence when Kubernetes creates a node, it creates an object that represents the node.
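As a sketch of that idea, this is roughly what a Node object looks like in the API — the machine name and label below are made up for illustration:

```yaml
# A Node object: a representation of a machine, not the machine itself.
apiVersion: v1
kind: Node
metadata:
  name: worker-1                      # hypothetical node name
  labels:
    kubernetes.io/hostname: worker-1
```

Creating this object only tells the API server that such a machine exists; Kubernetes then health-checks it, but it does not provision the underlying VM or physical host.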

The services on each worker node include:

a container runtime (such as Docker or containerd), which is responsible for running containers

the kubelet, a node agent which communicates with the Kubernetes master

kube-proxy, a network proxy which reflects Kubernetes networking services

Cluster

A cluster consists of at least one master node and multiple worker nodes. These machines run the Kubernetes cluster orchestration system. Once you have a running Kubernetes cluster, you can deploy your containerized applications on top of it. The actual application is deployed dynamically across the cluster.

The master is the unified endpoint for your cluster. All interactions with the cluster are done via Kubernetes API calls, and the master runs the Kubernetes API Server process to handle those requests. The cluster master is responsible for deciding what runs on all of the cluster’s nodes. Thus, each node is managed from the master.

Kubernetes

Kubernetes coordinates a cluster of nodes that are connected to work as a single unit. The abstractions in Kubernetes allow you to deploy containerized applications to a cluster without tying them to specific individual machines. Kubernetes automates the distribution and scheduling of application containers across the cluster far more efficiently than placing them on machines by hand.

End users can use the Kubernetes API directly to interact with the cluster. Kubernetes helps you make sure those containerized applications run where and when you want and helps them find the resources and tools they need to work.

Here is the hierarchy of components we described: