The Deploy Factors

A build is only as valuable as the success of its deployment. Within 12-factor, a majority of the factors describe best practices for how microservices should be deployed and how they should declare and resolve their dependencies on other microservices.


Let’s see what this means in terms of Kubernetes architecture in the following image:

The Pod and related Kubernetes objects.

Factor II: Dependencies

A microservice is only as reliable as its most unreliable dependency. We typically think of dependencies in terms of builds, and the 12-factor Dependencies factor is indeed about declaring and isolating build-time dependencies. Still, I like to group the Dependencies factor with the Deploy factors because a service’s runtime dependencies on other APIs or datastores have such a broad impact on the reliability of your microservices. Kubernetes includes readinessProbes and livenessProbes that enable you to do ongoing dependency checking. The readinessProbe lets you validate that your backing services (more on those in a moment) are healthy and that you are able to accept requests; if it fails, the Pod is temporarily removed from the Service endpoints so it stops receiving traffic. The livenessProbe lets you confirm that your microservice is healthy on its own; if it fails beyond the configured threshold over a given window of time, the container is restarted.
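As a minimal sketch of how these probes might be declared on a container (the image name and the /ready and /healthz endpoints are hypothetical; the probe fields follow the standard Pod API):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: hello-service
spec:
  containers:
  - name: hello-service
    image: example/hello-service:1.0   # hypothetical image
    ports:
    - containerPort: 8080
    readinessProbe:                    # checks the service can accept traffic
      httpGet:
        path: /ready                   # hypothetical endpoint that also verifies backing services
        port: 8080
      initialDelaySeconds: 5
      periodSeconds: 10
    livenessProbe:                     # checks the container itself is healthy
      httpGet:
        path: /healthz                 # hypothetical endpoint
        port: 8080
      initialDelaySeconds: 15
      periodSeconds: 20
      failureThreshold: 3              # restart the container after 3 consecutive failures
```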

If you haven’t had a chance to consume the wisdom of Release It! (https://pragprog.com/book/mnee/release-it), then consider making time to read it and apply the architectural patterns it describes to improve your own applications’ reliability with approaches like Circuit Breaker, Fail Fast, and Timeouts.

Factor III: Config

The Config factor calls for storing configuration in your process environment (e.g., ENV VARs). By separating the configuration from the code, the microservice becomes completely independent of its environment and can be moved to another environment with no source code changes. Kubernetes provides ConfigMaps and Secrets that can be managed in source repositories (although Secrets should never be source controlled without an additional layer of encryption), and containers can retrieve the config details at runtime. Storing configuration in the environment also scales well: as the number of services and deployment environments grows, you avoid baking per-environment config files into the code.
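As a sketch, a ConfigMap can be injected into a container’s environment at runtime (the names and values here are illustrative):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: hello-config                   # hypothetical name
data:
  DATABASE_HOST: orders-db             # illustrative values
  LOG_LEVEL: info
---
apiVersion: v1
kind: Pod
metadata:
  name: hello-service
spec:
  containers:
  - name: hello-service
    image: example/hello-service:1.0   # hypothetical image
    envFrom:
    - configMapRef:
        name: hello-config             # every key becomes an environment variable
```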

Factor VI: Processes

In Kubernetes, a container image runs as a container process within a Pod. The observation from 12-factor is that the Linux kernel has already done a great deal of optimization around resource sharing for the process model. Kubernetes (and containers in general) simply adds a facade that better isolates the container process from other containers running on the same host. Using a process model enables easier management for scaling and failure recovery (e.g., restarts). Typically, the process should be stateless to support scaling the workload out through replication, although Kubernetes can also run stateful workloads such as databases and caches.

For any state used by the application, you should use a persistent datastore that all instances of your application process discover via your Config. In Kubernetes-based applications where multiple copies of a Pod are running, requests can go to any Pod, so the microservice cannot assume sticky sessions.

By adhering to the process model, you can scale your services simply by creating more instances of the process using Kubernetes controllers such as ReplicaSets, Deployments, StatefulSets, and ReplicationControllers.

See the following code snippet, with replicas set to 2 on line 7.
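A minimal sketch of such a Deployment manifest (the hello-service name and image are hypothetical), with the replica count declared on line 7:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-service
  namespace: default
spec:
  replicas: 2                          # line 7: desired number of Pod replicas
  selector:
    matchLabels:
      app: hello-service
  template:
    metadata:
      labels:
        app: hello-service
    spec:
      containers:
      - name: hello-service
        image: example/hello-service:1.0   # hypothetical image
        ports:
        - containerPort: 8080
```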

A Kubernetes Deployment specifies the desired number of replicas declaratively (line 7).

Factor IV: Backing Services

When you have network dependencies, treat each dependency as a “backing service.” Think about the lifecycle of these backing services as independent of the lifecycle of your microservice. At any time, a backing service could be attached or detached, and your microservice must be able to respond appropriately.

For example, if you have an application that interacts with a database, you should isolate all interaction with that database behind a set of connection details (obtained through dynamic service discovery or via Config in a Kubernetes Secret). Then consider whether your network requests implement fault tolerance, so that if the backing service fails at runtime, your microservice does not trigger a cascading failure (more on cascading failures in Release It!). The database may be running in a separate container or somewhere off-cluster; your microservice should not care, because all interaction with it happens through the same API regardless of where it runs.
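As a sketch of that isolation, the connection details can live in a Kubernetes Secret and be injected into the container at runtime (all names and values here are illustrative):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: orders-db-credentials          # hypothetical name
type: Opaque
stringData:
  DATABASE_URL: "postgres://orders:s3cr3t@orders-db:5432/orders"   # illustrative value
---
apiVersion: v1
kind: Pod
metadata:
  name: orders-service
spec:
  containers:
  - name: orders-service
    image: example/orders-service:1.0  # hypothetical image
    env:
    - name: DATABASE_URL
      valueFrom:
        secretKeyRef:
          name: orders-db-credentials
          key: DATABASE_URL            # the service reads only this URL; the database can move without code changes
```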

Factor VII: Port Binding

In a production environment where multiple microservices provide different functionality, you need those microservices to communicate over well-defined ports and protocols. You can use Kubernetes Service objects to declare the network endpoints of your microservices and to resolve the network endpoints of other services, whether in the cluster or off-cluster.

Without containers, whenever you deployed a new service (or a new version of one), you would have to perform some amount of collision avoidance for ports already in use on each host. Container isolation allows you to run every process (including multiple versions of the same microservice) on the same port on a single host, by using network namespaces in the Linux kernel. The Service object then exposes the pool of Pods across all hosts and performs rudimentary load balancing of incoming requests.
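A minimal sketch of a Service that binds a stable, well-known port to the container port that the hello-service Pods above listen on (names and ports are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service          # other workloads resolve this name via cluster DNS
spec:
  selector:
    app: hello-service         # matches the Pods created by the Deployment above
  ports:
  - port: 80                   # port exposed to the rest of the cluster
    targetPort: 8080           # port the container process binds to
```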

Services are declarative in Kubernetes and automatically handle load balancing requests across Pods.