In the previous article in this series, I gushed over how much money a business can save on Kubernetes with Supergiant. In this article, I want to dive deeper into container strategies and give you a look at the business end of container architecture so you can learn how to achieve these savings.

Containers are an awesome tool, but success with containers is directly related to an engineer’s or a business leader’s ability to understand them and use them correctly.

The most progressive organizations have already used containers to reduce costs dramatically and improve their infrastructure performance. The container space has matured at a prodigious rate, and many organizations, like Netflix and Spotify, have already invested in and reaped the benefits of containerized infrastructure.

It is important to be aware that your competition could be developing or using a containerization strategy. If you find yourself wondering how your competition has such nimble technical ability, it’s a good bet they already are using one.

The key to using containers effectively is having a great container management system. As stated in our previous article, we believe the best solution today is Kubernetes, which is now stable and production-ready.

If you have not read our previous article, I encourage you to glance over the resource allocation features of Kubernetes and Supergiant because I will only be covering architectural concerns in this article.

I’ve picked my container management system. Now what?

In this article, I am going to summarize different types of containers and how to organize them in Kubernetes.

Resource allocation is only 50% of the money saving formula. The other 50% comes from knowledge of how to architect your applications in containers properly.

Container Architecture Types

First, let’s go over what a good container looks like. I will list containerized application architectural types from most to least desirable. There is no direct right or wrong here. Some applications simply do not allow you to use the more desirable container architectures, but we can try.

The Scratch Container (The Holy Grail)

This is the most desirable type of container. It is based on a “scratch” image, which is essentially empty: no shell, no SSH, and none of the higher-level userland you would find in a full operating system. Containers share the host machine’s Linux kernel, so the image needs to hold nothing but your application.

This option is the most desirable because it is fast. Blazing fast. These containers are also usually no larger than a couple of megabytes, which translates to nimble applications that can boot quickly to cope with demand.

The downside of this container type is that, since the scratch image is so bare, it contains none of the usual higher-level components expected in an operating system. If your application requires SSL certificates, for example, you must ensure those dependencies are packed with the app.
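To make this concrete, here is a minimal sketch of a multi-stage Dockerfile that produces a scratch image. It assumes a hypothetical, statically linked Go application (the binary name and paths are illustrative, not from the original article); the CA-certificate copy shows how a dependency like SSL certs gets packed with the app:

```dockerfile
# Build stage: compile a statically linked binary (hypothetical app)
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
# CGO disabled so the binary carries no libc dependency
RUN CGO_ENABLED=0 go build -o /server .

# Final stage: an empty image holding only the binary and CA certs
FROM scratch
COPY --from=build /etc/ssl/certs/ca-certificates.crt /etc/ssl/certs/
COPY --from=build /server /server
ENTRYPOINT ["/server"]
```

The resulting image contains exactly two files, which is why scratch images stay in the low-megabyte range.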

With some creativity, it is possible to get a complicated application written in a non-compiled language to play well with scratch containers.

I like to use the original Kubernetes dashboard as a prime example. This container is a NodeJS application that uses a very simple Golang web server. The Golang web server allows the NodeJS application to be run in a scratch environment, even though NodeJS is not typically able to run in such a minimal environment.

Here is a quick diagram of what this container looks like. You can see here that we are able to run our whole application in one very fast scratch container:

The “Container OS” Container (The Next-Best Thing)

These types of containers are a great option if you can’t get your application running in a scratch environment.

It’s important to remember that a container should ideally do just one thing well: run a database, a web server, etc. When running single applications like this, we have no need for many of the default higher-level operating system functions that can steal CPU and RAM we would otherwise like devoted to our application.

Unlike scratch containers, most of these container images provide a package management tool that allows you to install dependencies. A great example of one of these operating systems is Alpine Linux.

These containers also have the virtue of being very small, usually 5-8 megabytes. However, unchecked dependencies can make these images balloon quickly. After installing a typical NodeJS or Rails environment, it is not uncommon to see your image grow to 50 or 100 megabytes or more, which slows down image pulls and container start times.
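A minimal sketch of an Alpine-based image for a hypothetical NodeJS app (names and files are illustrative) shows where that growth comes from, and how Alpine's `apk` package manager fits in:

```dockerfile
# Hypothetical NodeJS app on Alpine; apk is Alpine's package manager
FROM alpine:3.19
# --no-cache avoids baking the apk package index into the image
RUN apk add --no-cache nodejs npm
WORKDIR /app
COPY package.json .
# Every installed dependency grows the image layer by layer
RUN npm install --omit=dev
COPY . .
CMD ["node", "index.js"]
```

The base image is a few megabytes; the `npm install` layer is usually where the size balloons, so pruning dev dependencies matters.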

Let’s use our dashboard example from the scratch container section to see how it would look in this type. Remember, we only run one application per container, but we can use Docker container linking to link them together into one object. Things get a bit more complicated, but our application is still fairly fast:
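In Kubernetes, this kind of linking is expressed as a multi-container Pod. Here is a sketch of what the dashboard arrangement might look like as a Pod spec; the names and images are hypothetical, not the actual dashboard manifest:

```yaml
# Sketch: two cooperating containers grouped into one Pod
apiVersion: v1
kind: Pod
metadata:
  name: dashboard
spec:
  containers:
    - name: web-server        # small Go web server (could be a scratch image)
      image: example/go-server:latest
      ports:
        - containerPort: 8080
    - name: node-app          # the NodeJS application it fronts
      image: example/node-app:latest
```

Containers in one Pod share a network namespace, so the Go server can reach the NodeJS app over localhost.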

The “Full OS” Container (The Least Desirable)

The Full Operating System container is similar to the Container OS container, but it has all of the features you expect of a full OS — access to SSH, init, multiple shells, etc. These come at a steep cost to performance.

Examples of these operating systems include Ubuntu and Red Hat Enterprise Linux. These containers can range from 300-800 megabytes in size. When your container management system deploys such a container hundreds or thousands of times a week, you can see how the data transfer can become cumbersome.

The good news is major operating system providers are aware of these issues, and many are releasing smaller and more compact container-specific versions. These new versions can range from 50-200 megabytes in size, and it appears they are making them smaller and smaller as the container space matures.

I am not going to include a diagram here because it would look the same as the Alpine diagram above; the only difference is that this type of container is the least performant of the bunch.

Put It to Work

So now you know a few container types you want to have in your architecture, but how should you organize your containers so Kubernetes can make the most of them?

Let’s lay out a web application that requires a database to run:

Okay… don’t run away screaming just yet. I promise I will break this down for you.

Let’s follow the user’s path through this containerized Web Application on Kubernetes. I will explain what everything does as we step through each part.

Load Balancer/Kube Proxy: This is the entrypoint to the Kubernetes cluster, and I like to add a little more network security here in a Load Balancer configured to work with the Kube Proxy. The Kube Proxy’s job is to route traffic from the outside to the correct containers inside.

Namespace: A Kubernetes Namespace logically groups resources. You create these Namespaces to share services or manage resources within the Namespace.

Web App Service and Database Service: A service is what you would have if a load balancer, a network switch, and a router all had a baby. It can be configured to do a lot of things with traffic, but its basic purpose is to route traffic from an IP address or DNS name to the containerized apps that do the work. In this example, the “Web App Service” routes traffic to each of the “Web App” containers using a metadata tag called “website” (or whatever works for you). All your users see is a single, speedy “website,” but behind the scenes there could be multiple instances of the app container crunching on problems.

Replication Controller: A replication controller has a pretty simple function: it works as an autoscaler for your containers. It contains a blueprint of what a replica should look like, so when a containerized app is in trouble, the Replication Controller will simply spin up what it needs to keep a service running.

Pods: Pods are groups of containers that may be “linked” together, like in our earlier Alpine OS diagram. If your container is not linked, it will just be in a Pod all by itself. No need to get too far into the weeds here. In this example, our “Web App” is technically a Pod.
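The pieces above can be sketched as two Kubernetes manifests. The names, images, and ports here are hypothetical; the point is how the “website” label ties the Service to the Pods that the Replication Controller maintains:

```yaml
# Sketch: a Service routing to Pods by label
apiVersion: v1
kind: Service
metadata:
  name: web-app-service
spec:
  selector:
    app: website          # route traffic to any Pod carrying this label
  ports:
    - port: 80
      targetPort: 8080
---
# Sketch: the Replication Controller's blueprint for those Pods
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-app
spec:
  replicas: 3             # keep three copies of the Pod running
  selector:
    app: website
  template:
    metadata:
      labels:
        app: website      # each replica gets the label the Service selects on
    spec:
      containers:
        - name: web-app
          image: example/web-app:latest
          ports:
            - containerPort: 8080
```

If a Pod or its node dies, the controller notices the replica count dropped below three and spins up a replacement, and the Service automatically starts routing to it.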

Inside this arrangement, each service sees the others as network resources. This allows services to consume one another, and, well, it’s services all the way down.

What do I do now, with my new degree in container architecture?

Now that I have made your eyes bleed for a few minutes and you have your new honorary degree in container architecture, let’s get down to why this setup and others like it are so great.

This web app isn’t just a few programs running on a few big, expensive boxes. Instead, it’s spread over smaller, clustered servers that do the same job, but they have the ability to spin down and cost far less when not needed.

And in the event there is a failed physical server, or a failed Pod, Kubernetes immediately replicates or moves that Pod to another physical server.

The Replication Controller always ensures the Pod is running, and if your containers are engineered with the methodology I describe here, you can expect this process to be very quick. In most cases, nobody but the logs will even notice when an issue occurs.

So what would this application look like in Supergiant?

I have to say, the above is a lot to absorb. Just in case some of it isn’t sticking, I have good news.

One of Supergiant’s jobs is to abstract Services, Replication Controllers, and Pods away into a single object called a Component. We simply felt that abstracting this into a simple concept would be more welcome than repeating complicated configurations for Kubernetes applications.

For users who still like dealing with that sort of thing (we get it — really), we have not obscured the guts at all. The nitty-gritty features are all still there and can be incorporated or tweaked within Supergiant Components to suit your needs.

However, our hope is that this abstraction will help clear up those bleeding eyes a bit.

This is the same Web Application as above except this is how it looks to Supergiant users:

Now let’s follow the user’s path through this containerized Application on Supergiant:

Entrypoint: An Entrypoint wraps up the Load Balancer and Kube Proxy settings into one configurable object, and Supergiant sets this up for you. It is important to note here that Kubernetes currently does not allow multiple services to share an external load balancer. A load balancer for every service can get expensive, so we made Entrypoints “sharable” between components and apps, and it is far less complicated than trying to figure out how to configure the Kube Proxy and Load Balancer to play well together.

Application: This is the same as a Kubernetes Namespace, but we felt Application better conveys the purpose of the Namespace, and it’s easier to think of structures in the Supergiant API this way.

Component: This is the coolest part! A Component is a service, replication controller, and pods all wrapped up into one object.

Most of the time, you will find that configuration of Kubernetes services in relation to replication controllers and pods can be complicated or repetitive. The Supergiant API puts all configuration into one simple config. This can then be exported, imported, saved away, or shared with other Supergiant users. And since the Component contains everything you need for a running application, the person you share your Component with does not really need to know anything about your application or how it works.

One of Supergiant’s goals is to lower the entry-level knowledge required by businesses wanting to use Kubernetes, and for those who have the knowledge, to make things fast and simple to configure.

Supergiant gives you the tools to switch to a containerization strategy in no time, so you can start saving your business money.

This was a dense article.

I am trying to walk the line between showing business readers why Kubernetes container management is such a win and explaining the technical aspects that make it so. If you have any questions about what you have read here today, I encourage you to comment below or hit us up on our Slack, Twitter, or Reddit.

For more information on Supergiant, visit our Toolkit page. We also welcome your contributions on GitHub.