Part 1 of this series discussed some of the main advantages of microservices, and touched on some areas to consider when working with microservices.

In this second part we will look into how containers fit into the microservices story. For some developers and architects, microservices and containers are still fairly new territory, and there is some confusion around the two terms: they are sometimes used interchangeably, even though they are actually two different things. Microservices is an architectural style; containers are a "tool" that often helps with building microservices-based applications.

This blog post aims to provide some insights into how containers can help with microservices, as well as some of the high-level concepts involved in running containerized microservices in a cluster using an orchestrator. Those who are new to containers, or who want to gain a deeper understanding of their inner workings, may want to start with this article.

Containers and Microservices

For the purpose of this blog, you can think of containers as encapsulated, individually deployable components that run as isolated instances on the same kernel, leveraging operating-system-level virtualization. As a result they boot really fast, typically in the low single digits of seconds.

Isolation and Density

The isolation level of containers sits between that of a virtual machine (VM) and that of a process. From a microservices perspective, running services inside VMs offers stronger isolation, but at the cost of not being able to scale out quickly, as VMs usually take a while to boot. Containers boot within seconds, which lets you react to increased load or traffic almost instantaneously.

Processes, on the other hand, start fast, dynamically allocate resources such as RAM, and are efficient at sharing them. From a density perspective, running multiple service instances per process or process group can be beneficial as well. The downside of running one or more microservice instances per process is that they are not well isolated from the rest of the environment, which can quickly lead to noisy-neighbor situations and, if the code is not written carefully, can compromise the entire virtual machine. With containers, the service code, its runtime, dependencies, system libraries, and so on are packaged together and run with their own private, isolated view of operating system constructs (process tree, file system, network, and so on), which addresses some of the isolation concerns you may encounter when running services as plain processes.

Figure 1: the different isolation levels

Benefits in DevOps

Good devops practices are key to successful microservices applications, and containers can be very beneficial here. Consider a microservices-based application that consists of services written in a variety of languages. In this case the devops team needs special knowledge of how to deploy each of the services, which ultimately adds to the operational complexity. If you package microservices as containers, the container image becomes the unit of deployment, and your devops team only needs to know how to deploy containers, irrespective of what type of application runs inside.

You can also avoid potential service failures due to missing dependencies or version mismatches on the host system, as everything your service needs (frameworks, dependencies, etc.) is packaged together into an immutable environment, typically as part of continuous integration. From an operations perspective, containers allow you to tie the collected system telemetry (CPU, memory, etc.) to the service itself, as you typically run one service instance per container. Containers also shield developers and operations from the specific details of the machine and OS. For example, if your infrastructure team decides to switch the Linux distribution on the host, this has no impact on the application, as containers run on any Linux distro.
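To make the "container image as the unit of deployment" idea more concrete, here is a minimal sketch of how a service might be packaged with Docker. The base image, file names, and service are hypothetical; the point is that the runtime, dependencies, and code all travel together in one image.

```dockerfile
# Hypothetical packaging of a Node.js-based service as a container image.
# Everything the service needs (runtime, dependencies, code) is baked in.
FROM node:18-slim

WORKDIR /app

# Install the service's dependencies from its manifest.
COPY package.json package-lock.json ./
RUN npm ci --omit=dev

# Copy the service code itself.
COPY . .

# The image is now the unit of deployment: the devops team only needs
# to know how to run a container, not how to run Node.js applications.
EXPOSE 8080
CMD ["node", "server.js"]
```

Built once as part of continuous integration (e.g. `docker build -t order:1.0 .`), the same immutable image then runs unchanged on any host with a container runtime.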

Those are just some advantages of using containers. From a microservices view you can think of them as the bridge between dev and ops, which makes your life a lot easier with regard to the operational aspects of your microservices environment.

Let's have a look at an example. Figure 2 shows a microservices application running in containers. The application consists of three microservices: an order service, a profile service, and a catalog service. Each service, its dependencies, and its runtime are packaged inside a container. Thanks to the encapsulation and isolation that containers provide, you can run two versions of a service (e.g. Order V1 and V2), or services with different runtimes or conflicting dependencies (the Profile service depends on LibraryC V1 while the Catalog service depends on LibraryC V2), side by side on the same host. If one of the services misbehaves and tries to use up all the available resources, such as CPU or RAM, this has no impact on the other services, because each service can only use the resources assigned to its container.

Figure 2: a microservices application running in containers
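As a sketch of the resource-capping behavior described above, Docker lets you assign explicit CPU and memory limits per container at run time. The image names below are hypothetical:

```shell
# Run two versions of the Order service side by side on the same host.
# Each container carries its own dependencies and gets an explicit
# resource cap, so a runaway instance cannot starve its neighbors.
docker run -d --name order-v1 --cpus="0.5" --memory="256m" myregistry/order:1.0
docker run -d --name order-v2 --cpus="0.5" --memory="256m" myregistry/order:2.0
```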

Orchestration of Microservices

This all sounds pretty straightforward if you run on one host, but a single host is not a production-realistic scenario for various reasons; for example, to provide high availability (HA) you need at least two hosts. As microservices applications are essentially distributed applications, you typically run them on a cluster: a set of coupled computers (usually called nodes), connected through a network, that can be seen as one single system.

Scheduling new services onto a cluster seems simple. However, you also need something that keeps the services alive if they fail, that moves services to other nodes if those nodes fail or are being serviced, and that enables rolling upgrades so that your services are "always on". In addition, it needs to understand dependencies, placement and resource constraints, optimizations, and different types of resources and their requirements, just to name a few of the challenges. The good news is that you don't have to worry about those things when running in containers, since container orchestrators, sometimes also referred to as schedulers, take care of them.

The most popular orchestrators that you can install yourself are Kubernetes, DC/OS, and Docker Swarm. You can also use a managed container service such as the Oracle Container Cloud Service or another flavor of managed container service. The advantage of a fully managed service is that you don't need to worry about the underlying infrastructure at all. In terms of core functionality, all container orchestration solutions, managed or not, offer a similar set of features.
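As one concrete (and deliberately simplified) example, this is roughly what asking Kubernetes, one of the orchestrators mentioned above, to keep three instances of a service alive and to roll out upgrades gradually might look like. The service and image names are hypothetical:

```yaml
# Simplified sketch of a Kubernetes Deployment manifest.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-service
spec:
  replicas: 3               # the orchestrator keeps 3 instances alive,
                            # rescheduling them if a container or node fails
  selector:
    matchLabels:
      app: order
  strategy:
    type: RollingUpdate     # replace instances gradually so the
    rollingUpdate:          # service stays "always on" during upgrades
      maxUnavailable: 1
  template:
    metadata:
      labels:
        app: order
    spec:
      containers:
      - name: order
        image: myregistry/order:1.0
        resources:
          requests:         # scheduling input: the orchestrator uses
            cpu: "250m"     # this to decide where the instance fits
            memory: "128Mi"
```

The declarative shape matters here: you describe the desired state (three healthy instances), and the orchestrator continuously works to maintain it.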

From a microservices perspective you are dealing with a very dynamic environment. You usually don't know where your services (containers) are located within a cluster, as you rely on the orchestrator to find the best place for each service based on resource availability, placement constraints, and so on. Service registry and service discovery solve this problem by allowing services to be registered and easily discovered by other services within the system. Service discovery within a cluster is also sometimes referred to as east-west routing. Many service discovery solutions go beyond just storing the endpoints of services; quite often additional service metadata is stored along with the endpoint information, and/or health checks are performed so that the health state of service instances is maintained. etcd, Consul, and ZooKeeper are very popular service registry and discovery solutions.
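To make the registry and discovery idea tangible, here is a deliberately simplified, in-memory sketch in Python. Real systems such as Consul, etcd, or ZooKeeper add persistence, health checking, and distributed consensus on top of this basic shape; all names and endpoints below are hypothetical.

```python
import itertools
from collections import defaultdict


class ServiceRegistry:
    """Toy in-memory service registry: maps service names to endpoints
    and hands them out round-robin, the way a local discovery proxy might."""

    def __init__(self):
        self._endpoints = defaultdict(list)
        self._cursors = {}

    def register(self, service, endpoint):
        # A service instance announces itself on startup.
        self._endpoints[service].append(endpoint)
        # Rebuild the round-robin cursor over the updated instance list.
        self._cursors[service] = itertools.cycle(self._endpoints[service])

    def discover(self, service):
        # A caller asks "where is an instance of this service right now?"
        if service not in self._endpoints:
            raise LookupError(f"no instances registered for {service!r}")
        return next(self._cursors[service])


registry = ServiceRegistry()
registry.register("catalog", "10.0.0.5:8080")
registry.register("catalog", "10.0.0.7:8080")

# The order service looks up a catalog endpoint per call; load is
# spread across the registered instances.
print(registry.discover("catalog"))  # → 10.0.0.5:8080
print(registry.discover("catalog"))  # → 10.0.0.7:8080
```

In a cluster, the lookup would typically be hidden behind a local proxy on each node, as in Figure 3, so that callers never deal with the registry directly.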

Figure 3 shows a high-level view of a cluster, with microservices instances and their endpoints stored in a service registry, where one instance of the Order service is calling the Catalog service. The Order service calls a well-known endpoint on the node, and the local proxy services handle the lookup for the target service. Please note that there are multiple solutions to this, and some of the managed container services and orchestration solutions even have built-in service registry and discovery. This sample is meant to offer a simplified view of how some service discovery solutions work.

Figure 3: service discovery solution - high-level view

What you also need to consider is how to route traffic from a client outside your cluster to your services inside the cluster. This is sometimes referred to as north-south routing.

A very common pattern is to use an application gateway, also commonly referred to as an API gateway. In a microservices architecture, an API gateway is used for traffic aggregation and for routing client requests to the appropriate services. Additional use cases for API gateways are authentication offload, SSL offload, quality-of-service throttling, and monitoring, just to name a few. NGINX and HAProxy are two of the more popular technologies used as application gateways today.

Typical deployment patterns for gateways include peer gateway routing, which basically means that you run the gateway on every node in the cluster; this pattern is usually used with smaller cluster sizes. The other pattern is to have dedicated or even separate gateway nodes. "Dedicated" means that the gateway service is placed on specific nodes in the cluster through placement constraints, and the public load balancer is set up to direct traffic only to those nodes. The standalone gateway nodes pattern works similarly, except that the nodes hosting the gateway services are not part of the cluster's scheduling.

Figure 4 shows an architecture with a cluster and dedicated gateway nodes, where a user request is made to the Order service. The load balancer directs the traffic to one of the gateway instances, and the gateway routes the request, based on a path (/order), to one of the Order service instances returned by the service registry.

Figure 4: user request to Order service
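As a simplified sketch of the path-based routing just described, an NGINX gateway configuration might look roughly like this. In practice the upstream addresses would be kept in sync with the service registry (for example via a templating tool such as consul-template); all addresses here are hypothetical:

```nginx
# Hypothetical gateway config: route /order traffic to the
# Order service instances known to the registry.
upstream order_service {
    server 10.0.0.11:8080;   # Order service instance 1
    server 10.0.0.12:8080;   # Order service instance 2
}

server {
    listen 80;

    location /order {
        proxy_pass http://order_service;
        proxy_set_header Host $host;
    }
}
```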

Summary

In addition to service registry and discovery and gateway services, developers and devops personas have had to think about avoiding potential port conflicts, assigning an IP address per container, and so on. This has forced many developers to learn more about infrastructure and networking. The good news is that container orchestrators have been evolving very fast, and almost every one of them now offers functionality that handles those concerns at the infrastructure level. That said, there are still many things developers need to consider when building microservices-based applications. The next couple of blog posts will cover microservices patterns in more detail, as well as the actual "how to" of packaging and devops for containerized microservices. Stay tuned.