The fundamental concept behind a microservices-based architecture is also its biggest advantage. Rather than having a single monolithic app running everything internally, you can set up multiple microservices, each handling its own specific tasks, and divide that big lump of an app into small, manageable, independently deployable services. This approach makes the app itself more reliable: if built right, a failure in one service doesn't take down the entire system.

Microservices run well in containers, and Kubernetes offers one of the best environments of them all for container orchestration. Once again, it is the way Kubernetes is designed that makes the ecosystem a natural fit for apps built from many microservices.

Microservices in Kubernetes

Before we can explore how the two work so well together, we need to take a closer look at Kubernetes itself. In a more conventional container setup, you use containers to separate certain functions within a single server or environment.

Using LAMP as an example, you can put MySQL in one container, and the web server and PHP in another. The two containers can then communicate with each other, creating a usable web server capable of running PHP-based apps. With this setup, however, production-grade features like scaling and fault tolerance become incredibly complex to implement.
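The two-container LAMP split above might look like this as a docker-compose sketch. The image tags, service names, and credentials are illustrative assumptions, not a production configuration:

```yaml
# Minimal sketch of the LAMP split: one container for MySQL,
# one for the web server + PHP. Values are placeholders.
version: "3.8"
services:
  db:
    image: mysql:8.0
    environment:
      MYSQL_ROOT_PASSWORD: example   # placeholder credential
      MYSQL_DATABASE: app
  web:
    image: php:8.2-apache            # Apache and PHP in a single container
    ports:
      - "8080:80"                    # expose the web server on the host
    depends_on:
      - db                           # web talks to db over the compose network
```

Notice that scaling either service or surviving a host failure is entirely up to you here; that is the gap Kubernetes fills.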

Kubernetes simplifies the whole thing. Pods function the way standard containers do, with each Pod running as an independent unit. What's different is that you can run Pods across multiple servers in a cluster. Other than that, Pods are as easy to set up as standard containers.

It is then up to Deployments to govern how each Pod instance behaves. To top it all off, you can add Kubernetes Services to control traffic to groups of Pods, all without having to worry about where the Pods are located and how many Pods you need to control. Services handle things such as port management and load balancing.

So, how does the way Kubernetes functions help microservices orchestration?

With an environment made for microservices, the possibilities are endless. You can have functions such as authentication, API gateways, and data management (each a microservice in its own right) run from their own Pods. Implementing a complex structure of microservices becomes straightforward.

Here’s another big advantage of pairing microservices with Kubernetes: unrivaled flexibility. Each Pod, and the services behind it, can be developed, tested, and maintained by separate, distributed teams. They can be scaled up or down independently. You can even configure the API gateway to transform requests and response formats.

Getting Started with Microservices and Kubernetes

A system that uses microservices is never simple, so setting one up using containers takes some work. That said, there are some basic steps you can take to get the right environment for your microservices up and running in no time.

You can start by setting up and running Kubernetes on your own system, and Minikube is the quickest way to do so. It makes it easy to run a single-node Kubernetes cluster locally and is ideal for users looking to try out Kubernetes or develop with it day-to-day.

There are several ways to deploy Kubernetes clusters in a cloud-based environment:

With native cloud-provider tools (e.g., GKE, AKS, and EKS)

Or using third-party tools (like Canonical conjure-up or Kops)

Once we have a Kube cluster up and running, it’s time to deploy the microservices (each one encapsulated in its own container) on top of it. Through YAML files, we describe the desired state of every Kubernetes object in the cluster (Pods, Deployments, Services, etc.). Kubernetes then works to make the current cluster state of those objects match the desired state we defined, performing a variety of automated tasks to achieve it.

Kube Objects

Some of the main Kube objects we need to be aware of when working with microservices are:

> Pods: The smallest deployable unit in Kubernetes. A Pod is a group of one or more containers that must run together; it usually contains just one container.

This object represents your microservice running on K8s.
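A minimal Pod manifest might look like the following. The service name, labels, and image are hypothetical placeholders for one of your microservices:

```yaml
# Sketch of a single-container Pod for a hypothetical "auth" microservice.
apiVersion: v1
kind: Pod
metadata:
  name: auth-service            # illustrative name
  labels:
    app: auth                   # label used later by Services to select this Pod
spec:
  containers:
    - name: auth
      image: registry.example.com/auth-service:1.0   # placeholder image
      ports:
        - containerPort: 8080   # port the container listens on
```

In practice you rarely create bare Pods like this; Deployments (below) create and manage them for you.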

> ReplicaSet: Controls how many identical copies of a pod should be running somewhere on the cluster.

> Deployment: An object that represents an application module running on your cluster. When we create a Deployment object, we set specifications such as the container image to run (Pod), the number of replicas (ReplicaSet), and the deployment strategy to use when adding or removing Pods.
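Those specifications come together in a manifest like this sketch, where the names and image are illustrative assumptions:

```yaml
# Sketch of a Deployment: image to run, replica count, and update strategy.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-deployment
spec:
  replicas: 3                   # the underlying ReplicaSet keeps 3 Pods running
  selector:
    matchLabels:
      app: auth                 # Pods this Deployment manages
  strategy:
    type: RollingUpdate         # replace Pods gradually when updating
  template:                     # the Pod template stamped out for each replica
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: registry.example.com/auth-service:1.0   # placeholder image
          ports:
            - containerPort: 8080
```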

> Services: Pods are ephemeral. They can be launched or killed at any time (e.g., when scaling up or down) and are constantly being assigned new internal IPs. Service objects offer a stable, well-known endpoint for a set of Pods and also act as a load balancer. For example, the Pods that compose a frontend interact with the backends through the backend Service, meaning the frontend Pods don’t need to be aware of which specific backend Pod they are talking to. The Service abstraction enables this decoupling.

There are several Service types, the most common being:

LoadBalancer: Which creates a load balancer in the cloud provider, useful for exposing the Service’s Pods to components outside the cluster

ClusterIP: Which exposes the service on a cluster-internal IP, making the Service only reachable from within the cluster
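The frontend-to-backend example above could be wired up with a manifest like this sketch, where the names and ports are assumptions:

```yaml
# Sketch of a ClusterIP Service giving backend Pods a stable internal endpoint.
apiVersion: v1
kind: Service
metadata:
  name: backend                 # frontend Pods simply call "backend"
spec:
  type: ClusterIP               # internal-only; swap for LoadBalancer to expose externally
  selector:
    app: backend                # routes traffic to any Pod carrying this label
  ports:
    - port: 80                  # port the Service exposes inside the cluster
      targetPort: 8080          # port the backend containers actually listen on
```

Because the Service selects Pods by label, backends can come and go (and change IPs) without the frontend noticing.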

> ConfigMaps: Objects which allow us to decouple configuration artifacts from the container image content to keep containerized applications portable. We can pass that configuration to Pods as config files they read or as environment variables.
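A ConfigMap is just a bag of key-value data; the keys and values in this sketch are hypothetical:

```yaml
# Sketch of a ConfigMap holding both simple keys and a whole config file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  LOG_LEVEL: "info"             # can be injected as an environment variable
  app.properties: |             # or mounted into the Pod as a config file
    cache.enabled=true
    cache.ttl=300
```

A Pod spec can then reference `app-config` via `envFrom`/`configMapKeyRef` for environment variables or via a volume for the file form.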

> Secrets: Which are intended to hold sensitive data, such as passwords or keys. Putting this information in a secret is safer and more flexible than holding it in a container image.
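A Secret looks much like a ConfigMap; the name and value here are placeholders:

```yaml
# Sketch of a Secret for database credentials.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                     # convenience field; Kubernetes stores it base64-encoded
  DB_PASSWORD: "change-me"      # placeholder value, never commit real secrets
```

Pods consume Secrets the same way they consume ConfigMaps, as environment variables or mounted files.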

Benefits in Every Use Case

There are many situations where microservices and Kubernetes are the perfect combination. The setup brings many benefits that conventional infrastructures don’t offer. At the top of that list, we have speed.

Microservices and Kubernetes offer the kind of velocity in both development and deployment that is difficult to match. The way Kubernetes is set up allows for better updates, faster deployments, and rapid iterations, all without taking the entire server down. You even have the option to update individual Pods or commit changes to your Deployments.

Kubernetes also handles iterations better. Rather than applying updates on top of the entire environment, you take a more immutable approach to the system. When an update needs to be deployed, you build a new container image with a distinct tag, push it to the corresponding container registry, and simply update the Deployment definition by editing the image tag in the Pod specification. Kubernetes then automatically adjusts the ReplicaSets according to the deployment strategy, making it possible to perform updates without affecting application availability. When an update doesn’t work as expected, rolling back to the old version is easy.
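The rollout behavior is controlled by the strategy section of the Deployment. This fragment is a sketch of typical rolling-update settings, with the limits chosen purely for illustration:

```yaml
# Deployment fragment: how replicas are swapped during a rolling update.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one Pod may be down at any moment
      maxSurge: 1         # at most one extra Pod may run during the rollout
```

Re-applying the manifest with the new image tag triggers the rollout, and `kubectl rollout undo` reverts to the previous revision if the update misbehaves.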

And then there is the fact that the whole system is completely scalable. You can decouple components, scale individual parts of the system based on specific needs (e.g., expanding the system’s database capacity without touching the rest), and further use Services to boost the flexibility of the entire system.
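Scaling a single component can even be automated with a HorizontalPodAutoscaler. This sketch assumes a Deployment named `auth-deployment`; the thresholds are illustrative:

```yaml
# Sketch of an HPA scaling one microservice on CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: auth-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: auth-deployment       # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above 70% average CPU
```

The rest of the system is untouched; only this one service grows and shrinks with load.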

Resources for Developers

Kubernetes comes with its own set of tools for developers, and there is now a large ecosystem of third-party tools that make working with it easier; we’ve compiled a list here. You can even run the Kubernetes Dashboard if you prefer a GUI for managing the cluster.

Let’s not forget that there are also Helm charts. What are they and how can they be used? Let’s save that discussion for another article, shall we?

Caylent provides a critical DevOps-as-a-Service function to high growth companies looking for expert support with microservices, containers, cloud infrastructure, and CI/CD deployments. Our managed and consulting services are a more cost-effective option than hiring in-house, and we scale as your team and company grow. Check out some of the use cases, learn how we work with clients, and profit from our DevOps-as-a-Service offering too.
