Every day, millions of Docker containers are spun up, used for their resources, and then killed once they’ve served their purpose and are no longer useful.

It’s a brutal existence but one that is extremely beneficial to you and me as developers.

You see, in the past few years, containers have dramatically changed the way software companies build, ship, and maintain their applications.

That’s because containers allow us to package an application’s code and all of its dependencies so it can run smoothly and quickly regardless of the computing environment it’s on.

Not only that, but containers lend themselves beautifully to a Continuous Integration/Continuous Deployment (CI/CD) methodology — allowing you to get new features, enhancements, or bug fixes out to your customers as quickly as possible which, in turn, drives the growth and improvement of the software you’re delivering.

And those are just the workflow benefits.

Remember the “birthing and murdering” of containers I told you about before? Well, that cycle is called horizontal scaling — spinning up extra containers when your application is seeing more traffic than your current resources can handle… and then killing off those containers once traffic has died down.

When you’re operating at scale, this whole deployment automation process becomes essential.

Now, you might be thinking, “Whoa! That sounds like a TON of work!”…

And you’re right. But here’s the good news. Container orchestration systems do all the heavy lifting for you.

And while you do have options for picking a container orchestration system — Docker Swarm, Apache Mesos — nothing on the market comes close to the popularity of Kubernetes.

And for good reason.

Kubernetes is an open-source tool that allows you to take advantage of on-premises, hybrid, or public cloud infrastructure, giving you the freedom to move workloads wherever you want. It offers security, networking, and storage services and can manage more than one cluster at a time.

Furthermore, it automates many processes that you used to have to carry out manually. For instance:

Decides which server will host each of your containers

Lets you quickly and easily scale your resources up or down

Automates rollbacks in case something goes wrong with your application

Calculates the best placement for your containers so the load is balanced across your cluster

Scales resources and applications in real time
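To make that last point concrete, here’s a minimal sketch of a HorizontalPodAutoscaler manifest — the Deployment name and the thresholds are placeholders, not anything from a real cluster — that tells Kubernetes to add or remove replicas based on CPU usage:

```yaml
# Hypothetical example: autoscale a Deployment named "my-app"
# between 2 and 10 replicas, targeting ~70% average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

With a resource like this in place, Kubernetes handles the “birthing and murdering” for you — no manual intervention required.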

Pretty cool, right!?

Ultimately, Kubernetes makes more efficient use of hardware, allowing you to maximize your resources and save money.

But here’s where things get tricky.

You see, when you use a container orchestration tool like Kubernetes, you describe the configuration of your application in a YAML file.

This configuration file is where you tell Kubernetes things like which container images to pull, how to establish networking between containers, how to mount storage volumes, and where to store logs for each container.
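As a rough sketch of what that looks like — the names, image, and port here are all placeholders — a minimal Deployment manifest covering those pieces might be:

```yaml
# Illustrative Deployment: which image to pull, which port to expose,
# and where to mount storage. All names are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0  # image to pull
          ports:
            - containerPort: 8080                 # networking
          volumeMounts:
            - name: data
              mountPath: /var/lib/my-app          # mounted storage
      volumes:
        - name: data
          emptyDir: {}
```

Everything Kubernetes does downstream — scheduling, scaling, restarts — flows from a spec like this one.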

Containers are deployed onto hosts, usually in replicated groups. And when it’s time to deploy a new container into a cluster, Kubernetes schedules the deployment and looks for the most appropriate host to place the container based on predefined constraints of your choosing, like CPU or memory availability.
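Those constraints are typically expressed as resource requests on the container itself. A hedged sketch (the values are purely illustrative):

```yaml
# Illustrative snippet: the scheduler will only place this pod on a
# node with at least 250m CPU and 256Mi of memory available.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-pod          # placeholder name
spec:
  containers:
    - name: my-app
      image: registry.example.com/my-app:1.0
      resources:
        requests:           # minimums the scheduler must find on a node
          cpu: "250m"
          memory: "256Mi"
        limits:             # hard caps enforced at runtime
          cpu: "500m"
          memory: "512Mi"
```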

Basically, once the container is running on the host, Kubernetes manages its lifecycle according to the specifications you laid out in your deployment’s YAML configuration.

Which means that Kubernetes is automating all of these tasks for you… but it does so based on the configuration YOU set up as the developer.

And while you may be a crack shot engineer, chances are you don’t know EXACTLY how much traffic you’re going to get within the first month of deployment — or how your application will behave.

That’s why, especially for those first couple of months, monitoring your Kubernetes clusters is super important.

Now, there are some really good open-source monitoring tools out there, designed to be used from your desktop.

For instance, Prometheus scrapes metrics from your Kubernetes clusters and records them in a time-series database — and its powerful, flexible query language, PromQL, lets you pull insightful, real-time metrics back out.
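For a taste of what PromQL looks like in practice, here’s a hypothetical recording rule (the rule and group names are placeholders) that precomputes per-pod CPU usage from cAdvisor metrics, which most Kubernetes Prometheus setups scrape:

```yaml
# Hypothetical Prometheus recording rule: precompute per-pod CPU
# usage (in cores) over the last 5 minutes from cAdvisor metrics.
groups:
  - name: example-rules            # placeholder group name
    rules:
      - record: pod:cpu_usage:rate5m
        expr: sum by (namespace, pod) (rate(container_cpu_usage_seconds_total[5m]))
```

Recording rules like this keep frequently-used queries cheap, which matters when a dashboard is refreshing them every few seconds.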

And when you pair Prometheus with Grafana — a data visualization tool — the result is beautifully displayed metrics in easy-to-reason-about charts.


But these tools are built for the desktop browser.

Which means for that first month or so — when you’re still fine-tuning things and getting to know how your cluster will behave or how much traffic your application sees — you’re tethered to your desk.

In fact, until recently there weren’t any great mobile solutions for checking in on your cluster’s metrics when you’re on the go.

Fortunately, if you’re running the Prometheus/Grafana monitoring stack on your Kubernetes cluster, there’s now a mobile application called Aetos that allows you to keep an eye on the health & performance of your Kubernetes cluster directly from your phone.

This is super helpful when you’re in that first month or so of your launch, especially if you don’t want to be tethered to your desk the whole time.

And the application is really easy to use. Just pop your Grafana URL into the app as well as your Grafana API key and Aetos gives you real-time data in easy-to-read graphs.

Right now the app offers metrics for CPU Usage, Mem Usage, Network Saturation, and System Saturation in an all-in-one scrollable view.

The results are presented in a few different charts, allowing you to get an instant glimpse of performance and move between them with a simple swipe of your thumb.

Plus, it’s built on an open-source stack so there’s no need to worry about pricing.

Needless to say, it has its benefits.

But regardless of HOW you monitor your clusters, if you’ve got a legacy codebase that you haven’t containerized yet, you might want to start thinking about making it happen.

After all, containerization is the way of the future.

Plus, when it comes down to killing off containers that have reached the end of their lifecycle, I think you’ll find it easier to let Kubernetes do the dirty work for you… rather than stare them in the face and pull the trigger yourself.

… Unless you’re into that sort of thing (you heartless monster).