Kubernetes: your base setup for container orchestration

Your takeaways from this blog:

Get a full open-source orchestration solution to manage your containers

Easy deployment and in-place version migration with the Kargo project

Understand Kubernetes and get it running, together with its free monitoring and logging tools

Be ready to build on top of it and deploy your own applications

Who should keep reading this post?

DevOps and system infrastructure engineers looking for a solution to manage 10+ containers on Linux

IT engineers evaluating alternatives such as Docker Swarm, Mesos, Nomad, OpenShift...

Anyone who loves containers and powerful open source solutions!

Why Kubernetes?

When you start working with more than 10 containers, you realize that managing each container on each server by hand is far from optimal:

Containers usually don't use all of a server's resources, and when a load peak occurs they are physically stuck on the same hardware

If a server crashes, its containers are not restarted on another host

Scaling similar containers up or down across multiple hosts requires many manual steps

Kubernetes brings solutions to these problems with the following characteristics:

Portable: public, private, hybrid, multi-cloud

Extensible: modular, pluggable, hookable, composable

Self-healing: auto-placement, auto-restart, auto-replication, auto-scaling

Even if other solutions exist (Docker Swarm, Mesos, Nomad...), Kubernetes (or k8s), an all-star open-source project from Google on GitHub, is becoming the reference for scheduling and managing containers. Many companies that were already offering their own orchestration solutions are now integrating Kubernetes as their container engine.

A serious contender worth citing is OpenShift. This Red Hat product layers additional features on top of the Kubernetes engine to deliver a full PaaS. While it wins on ease of use (a GUI to pilot the cluster, auto-deploy from the registry), it loses on flexibility (slower Kubernetes updates, integration with other solutions) and community size.

Kubernetes alone is thus very versatile and powerful, but it comes with a learning curve: you need to understand its concepts and get it running. This post will help you build a base working setup on which you can start running your own containers.

More info on k8s: http://kubernetes.io/

Architecture and components of our base setup

This schema explains how it all works together. Let's break down each component:

Kargo: this project uses Ansible playbooks to deploy Kubernetes, and to migrate between Kubernetes versions, on many types of servers (CoreOS, Ubuntu, CentOS) and clouds (AWS, Azure, OpenStack, bare metal). So, the choice is yours. In this guide, I will simply deploy CoreOS servers manually and give their IP info to Kargo, but you could use Kargo to provision AWS or Google Cloud servers in one command line.

1 master node running the Kubernetes components for container orchestration; it pilots the cluster and gives work to the minions

1 master/minion node, to provide master redundancy while also running containers

2 (or more) minion nodes running the actual containers and doing the actual work

CoreOS: this minimal, secure OS is perfect for running Kubernetes masters and nodes.

EFK (logging): we will send all Kubernetes container logs to an Elasticsearch database via Fluentd, and visualize them with Kibana dashboards

Prometheus (monitoring): will watch this whole infrastructure, with Grafana dashboards

Kubernetes dashboard add-on (not the EFK dashboards), where you can visualize Kubernetes components in a GUI

Service load balancer: the public gateway to access your internal Kubernetes services (Kibana, Grafana). In the setup later, you will have two load-balancer choices: a static one (HAProxy) and a dynamic one (Traefik)

Registry: a private Docker registry deployed in the Kubernetes cluster
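The four-node layout above is described to Kargo through a standard Ansible inventory. Here is a hypothetical sketch: the hostnames and IPs are placeholders to adapt to your own servers, and the group names follow the ones the Kargo project uses.

```ini
# Hypothetical inventory for the layout above -- adapt hostnames/IPs
node1 ansible_ssh_host=10.0.0.1   # master
node2 ansible_ssh_host=10.0.0.2   # master + minion
node3 ansible_ssh_host=10.0.0.3   # minion
node4 ansible_ssh_host=10.0.0.4   # minion

[kube-master]
node1
node2

[etcd]
node1
node2
node3

[kube-node]
node2
node3
node4

[k8s-cluster:children]
kube-master
kube-node
```

With such an inventory in place, a single run of Kargo's cluster playbook deploys the whole stack on those machines.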

The heart of the Kubernetes machine

This schema shows the internal Kubernetes components after the Kargo install. Nearly all of them run as containers. You will be able to adjust the number of masters, minions and etcd members to fit your needs.
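Once the install finishes, you can see those components for yourself with a few standard kubectl commands (a sketch; it assumes kubectl on your machine is already configured to reach the new cluster):

```shell
kubectl get nodes                          # masters and minions, with their status
kubectl get pods --namespace=kube-system   # control-plane components running as containers
kubectl cluster-info                       # URLs of the main cluster services
```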

The whole status of your infra in a few dashboards

Below is a visual preview of what you will get with this setup, using open source tools that integrate perfectly with Kubernetes.

Logging with EFK:

First you will collect container logs with EFK, so you can see who is very talkative, or which application is in pain and sending errors or timeouts. Once set up, it is all automatic: any newly created container will send its logs to EFK.
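A quick way to check that the pipeline is wired up is to look for the logging pods and services (a sketch; the label and service names below are the usual ones from the Kubernetes EFK add-on and may differ in your deployment):

```shell
kubectl get pods --namespace=kube-system -l k8s-app=fluentd-es    # one Fluentd pod per node
kubectl get svc --namespace=kube-system elasticsearch-logging kibana-logging
```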

Monitoring with Prometheus

Then we will dig deeper into the stats and counters (CPU, RAM, disk, network), where we can investigate bottlenecks and memory leaks, and plan for capacity management.

You can already enjoy the detailed dashboards loaded with this setup, and you can find a lot more online (thanks to the community!)
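To go beyond the pre-built dashboards, you can write your own PromQL queries. Two illustrative examples (the metric names come from cAdvisor, which the kubelet exposes by default; the label names can vary between versions):

```promql
# CPU usage per pod, averaged over the last 5 minutes
sum(rate(container_cpu_usage_seconds_total[5m])) by (pod_name)

# Current memory working set per pod
sum(container_memory_working_set_bytes) by (pod_name)
```

Paste these into the Prometheus expression browser, or use them as the query of a Grafana panel.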

Ready? Get the code and deploy

Follow this GitHub repo to get a full Kubernetes stack running.

I added lots of explanations to this setup on how to launch services and access them, plus an extra monitoring tool (Heapster), a demo of GitLab CI/CD, and some troubleshooting tips to fix headaches you may run into.

A previous setup, for a deployment on CloudStack at Exoscale, is available here. It is less flexible, as it doesn't use Kargo and so can't migrate between Kubernetes versions, but it does take care of firewalls.

Now that you have a working setup, it is time to run your own containers in Kubernetes.

Start building your YAML manifest files based on the examples provided. If you want to start from an existing docker-compose.yml file, use this fast and easy converter: Kompose.
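As a starting point, here is a minimal manifest sketch for a two-replica web deployment. The names and image are placeholders, and note that the Deployment apiVersion depends on your Kubernetes version (e.g. extensions/v1beta1 on older clusters vs apps/v1 on recent ones):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web            # placeholder name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: web
        image: nginx:stable  # placeholder image
        ports:
        - containerPort: 80
```

Save it as hello-web.yml and load it with kubectl create -f hello-web.yml. If you go the Kompose route instead, kompose convert -f docker-compose.yml generates comparable manifests for you.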

Some other cool Kubernetes projects to try:

Kubernetes can run locally on your laptop with Minikube

You can pilot it from your desktop (Windows/Mac) via a GUI: Skippbox

Even from your mobile phone: check out Cabin ;-)
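For the Minikube option, the local workflow boils down to a handful of commands (a sketch; it assumes Minikube and a supported VM driver such as VirtualBox are installed):

```shell
minikube start        # boot a single-node Kubernetes cluster in a local VM
kubectl get nodes     # kubectl now targets the local cluster
minikube dashboard    # open the Kubernetes dashboard in your browser
minikube stop         # shut the VM down when you are done
```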

Thank you for reading :-) See you in the next post!