Image credit: Evan Lovely

Kubernetes: Always Powerful, Occasionally Unwieldy

Kubernetes’ gravity as the container orchestrator of choice continues to grow, and for good reason: it has the broadest capabilities of any container orchestrator available today. But all that power comes at a price; jumping into the cockpit of a state-of-the-art jet puts a lot of power under you, but how to actually fly the thing is not obvious.



Kubernetes’ complexity can be overwhelming for people jumping in for the first time. In this blog series, I’m going to walk you through the basics of architecting an application for Kubernetes, with a tactical focus on the actual Kubernetes objects you’re going to need. I’m not, however, going to spend much time reviewing 12-factor design principles or microservice architecture; those sorts of strategic discussions contain excellent ideas that anyone designing an application should be familiar with, but here on the Docker Training Team I like to keep the focus on concrete, hands-on-keyboard implementation as much as possible.

Furthermore, while my focus is on application architecture, I would strongly encourage devops engineers and developers building to Kubernetes to follow along, in addition to readers in application architecture roles. As container orchestration becomes mainstream, devops teams will need to anticipate the architectural patterns that application architects will need them to support, while developers need to be aware of the orchestrator features that directly affect application logic, especially around networking and configuration consumption.

Just Enough Kube

When starting out with a machine as rich as Kubernetes, I like to identify the absolute minimum set of things we’ll need to understand in order to be successful; there’ll be time to learn about all the other bells and whistles another day, after we master the core ideas. No matter where your application runs, in Kubernetes or anywhere else, there are four concerns we are going to have to address:



Processes: Your actual running code, compiled or interpreted, is the core of your application. We’re going to need a set of tools not only to schedule these processes, but to maintain and scale them over time. For this, we’re going to use pods and controllers.

Networking: The processes that make up your application will likely need to talk to each other, to external resources, and to the outside world. We’re going to need tooling for service discovery, load balancing, and routing between all the components of our application. For this, we’re going to use Kubernetes services.

Configuration: A well-written application factors out its configuration rather than hard-coding it. This is a direct consequence of applying the Don’t Repeat Yourself principle when coding; things that may change based on context, like access tokens, external resource locations, and environment variables, should be defined in exactly one place that can be both read from and updated as needed. An orchestrator should be able to provision configuration in a modular fashion, and for this we’re going to use volumes and configMaps.

Storage: Well-built applications always assume their containers will be short-lived, and that their filesystems can be destroyed without warning. Any data collected or generated by a container, as well as any data that needs to be provisioned to a container, should be offloaded to some sort of external storage. For this, we’ll look at Container Storage Interface plugins and persistentVolumes.
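To make a couple of these objects concrete, here’s a minimal sketch of a pod that runs one container and consumes configuration from a configMap mounted as a volume. All of the names here (api-config, my-api, the registry path and image tag) are hypothetical placeholders; later posts in this series cover these objects properly.

```yaml
# A hypothetical configMap holding one piece of factored-out configuration.
apiVersion: v1
kind: ConfigMap
metadata:
  name: api-config
data:
  DB_HOST: "db.internal.example.com"
---
# A minimal pod that runs one container and mounts the configMap as files.
apiVersion: v1
kind: Pod
metadata:
  name: my-api
spec:
  containers:
    - name: api
      image: my-registry/api:1.0
      volumeMounts:
        - name: config
          mountPath: /etc/api      # the configMap keys appear as files here
  volumes:
    - name: config
      configMap:
        name: api-config
```

Note that the configuration lives in its own object, defined exactly once, and is provisioned to the container at runtime rather than baked into the image.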

And that’s it. I’ll provide some ‘advanced topics’ pointers throughout the blog series to give you some ideas on what to study after you’ve mastered the basics in order to take your Kubernetes apps even further. When you’re starting out, focus on the components mentioned above and detailed in this series.

Just Enough High-Level Design

I promised above to keep this series more tactical than strategic, but there are a few high-level design points we absolutely need in order to understand the engineering decisions that follow, and to make sure we’re getting the maximum benefit from our containerization platform. Regardless of which orchestrator we’re using, three key principles set the standard for what we’re trying to achieve when containerizing applications: portability, scalability, and shareability.

Portability: Whatever we build, we should be able to deploy it on any Kubernetes cluster; this means having no hard dependencies on any feature or configuration of the underlying host or its filesystem. If the idea of moving your app from your dev machine to a testing server sounds stressful, something probably needs to be rethought.

Scalability: Containerized applications scale best when they scale horizontally: by adding more containers, not just containers with more compute resources. No matter how many resources are allocated to a container, it remains a mortal, often short-lived object managed by your orchestrator as it adapts to changing cluster conditions and load. Therefore, we’re going to need to arrange our applications to easily leverage more copies of the containers they run, typically by using the routing and load-balancing features of our orchestrator, and by making our containers stateless wherever possible.

Shareability: We don’t want to be trapped maintaining and consulting on every app we build forever. It’s crucial that we’re able to share our apps with other developers we may hand them off to in the future, with the operators who have to manage them in production, and with third parties who may be able to leverage them in an open-source context. Portability gets us halfway there by making it possible to move our app from cluster to cluster, but shareability emphasizes that, beyond being technically possible, the handoff should also be easy and reliable: standing up our app on a new cluster should be as foolproof as possible, at least for a first pass.
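As a sketch of what horizontal scaling looks like in practice (the names and image here are hypothetical), a Deployment controller lets you declare how many identical, stateless copies of a container should run; scaling then becomes nothing more than a change to one number.

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3                # scale horizontally by raising this number
  selector:
    matchLabels:
      app: frontend          # manage every pod carrying this label
  template:                  # blueprint for each identical pod copy
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend
          image: my-registry/frontend:1.0
```

Because every replica is interchangeable, the orchestrator is free to kill, reschedule, or add copies as cluster conditions change, which is exactly why statelessness matters.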

Thinking Through Your First Application on Kubernetes

For the rest of this series, let’s think through containerizing a simple three-tier web app for Kubernetes, with the following typical components:

A database for holding all the data required by the application

An API which is allowed to access the database

A frontend which is reachable by users on the web, and which uses the API to interact with the database

Even if applications like these are far from what you work with, the example is instructive: the decision points we’ll walk through apply to many different kinds of applications, and this generic three-tier app is just a vehicle for touring them.
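For instance, the frontend’s need to reach the API is a service-discovery problem of the kind we’ll solve with Kubernetes services. A sketch of such a service, with hypothetical names and ports, gives the API pods one stable DNS name and load-balances across however many replicas exist:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: api            # pods in the same namespace can reach this as http://api
spec:
  selector:
    app: api           # route traffic to any pod carrying this label
  ports:
    - port: 80         # port the service exposes to clients
      targetPort: 8080 # port the API container actually listens on
```

The frontend never needs to know how many API pods are running or where they live; it just talks to the service.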

Let’s begin by imagining that you’ve already created Docker images for each component of your application, whether they resemble the components listed above or are completely different. If you’d like a primer on designing and building Docker images, see my colleague Tibor Vass’s excellent blog post on Dockerfile best practices.

Checkpoint #1: Make your images.

Before you’ll be able to orchestrate anything, you’ll need images built for every type of container you want to run in your application.



Also note that we’re going to consider some of the simplest cases for each concern; start with these, and once you’ve mastered them, see the Advanced Topics subsection in each step for pointers on what to explore next.



You’ve made it this far! In the next post, I explore setting up processes as pods and controllers. You can read part 2 here.
