If you haven’t seen it, this is a very accurate and smart Downfall satire on Docker’s ecosystem, technology, and culture.

It’s so good that I thought it would be instructive to annotate it so that the state of the art and some technical details of Docker could be better explained.

Tweet me @ianmiell if you spot a problem or want to suggest an improvement.

Annotated script

Henchman: We pushed the images to DockerHub, then used docker-compose to deploy to the cluster

Docker Hub is the public registry for Docker images, analogous to GitHub. docker-compose is a tool for managing multiple containers as a unit on a single machine. To deploy to a cluster you'd more likely use Docker Swarm.
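A minimal sketch of the compose workflow the henchman describes. The service names and image (`myorg/web`) are placeholders, not anything from the video:

```shell
# Hypothetical two-service app defined in docker-compose.yml and
# brought up as a unit on a single machine.
cat > docker-compose.yml <<'EOF'
version: '2'
services:
  web:
    image: myorg/web        # placeholder image, pulled from Docker Hub
    ports:
      - "80:5000"
  redis:
    image: redis
EOF
docker-compose up -d        # start both containers together
```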

Henchman: We mounted data volumes on these nodes, and linked the app container here. Finally we updated the dns records.

A data volume is the persistent store of data for containers. Since containers are generally ephemeral, persistent data is ‘mounted’ into containers on startup.
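A sketch of the idea, assuming a local Docker daemon and the `ubuntu` image; the volume name `appdata` is illustrative:

```shell
# A named volume outlives the containers that use it. Here /data inside
# the container is backed by the 'appdata' volume on the host.
docker volume create appdata
docker run --rm -v appdata:/data ubuntu \
    bash -c 'echo hello > /data/file'
# A later container mounting the same volume sees the same data:
docker run --rm -v appdata:/data ubuntu cat /data/file
```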

Hitler: So we’re running 20 containers on every node now. When can we get rid of the excess servers?

A promising aspect of Docker is that it can reduce your physical server footprint by ‘max-packing’ containers onto physical tin, through reduced resource claims and not needing to run multiple kernels (the so-called ‘hypervisor tax’).

Henchman: Mein Fuerer, the kernel… A third party container caused a kernel panic.

Docker containers talk to the Linux kernel API. If a workload can cause a kernel panic outside of Docker, then it can inside Docker also. Docker allows some safeguards against this, for example by reducing the capabilities and/or system calls the container can use.
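A sketch of those safeguards, assuming a local daemon; the seccomp profile path is a placeholder:

```shell
# Run a container with a reduced attack surface:
# --cap-drop removes Linux capabilities; --security-opt applies a
# seccomp profile restricting which system calls may be made.
docker run --rm \
  --cap-drop ALL \
  --cap-add NET_BIND_SERVICE \
  --security-opt seccomp=/path/to/profile.json \
  nginx
```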

Henchman: We’ve lost 70% of the cluster and the data volumes

Presumably the kernel panic caused this loss. Presumably, too, Hitler blames Docker users’ enthusiasm for freely downloading containers as the cause (see below).

Hitler: If you never used Docker in production, leave the room now

Figures are hard to come by, but Docker use in production heavily lags its use in development and test. This is to be expected, since most new technologies bubble up through developers to production.

Hitler: What were you thinking? Who the hell uses public containers from DockerHub? For all you know they were made by Russian hackers!

Container security is a hot topic.

Hitler: You might as well use `curl | sudo bash`!

A convenient means of distributing software that effectively hands over control of your machine to the internet.
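The pattern, with a harmless local demonstration of why it is so dangerous; the install URL is a placeholder:

```shell
# The pattern in question: fetch a script and execute it immediately
# as root (placeholder URL - do not run blindly):
#   curl -sSL https://example.com/install.sh | sudo bash
#
# A harmless local demonstration: whatever arrives on stdin is
# executed, sight unseen.
echo 'echo "I now control this shell"' | sh

# A safer habit: download first, inspect, then run.
#   curl -sSLo install.sh https://example.com/install.sh
#   less install.sh   # read it!
#   sudo bash install.sh
```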

Hitler: You think anything in public repos is secure because it’s OSS? You’re a bunch of node.js hipsters that just HAVE to install everything you read on Hacker News!

Guilty (but not the node.js bit).

Henchman: But Docker allows us to run our application anywhere!

A paraphrase of Docker’s slogan: ‘Build, ship, and run any app, anywhere.’

Hitler: You use a VM just to run Docker on your laptop!

Many users can’t run Docker directly on their machines, for example if they use Windows or OS X. I use Docker on an Ubuntu VM running on a Mac. I’m not sure what Hitler would make of this.

Henchman: Mein Fuerer, docker-machine uses a lightweight VM!

docker-machine provisions a lightweight Linux distro in a VM so that Docker can run on OS X/Windows.
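A hypothetical session, assuming VirtualBox is installed; the machine name `default` is the tool's convention:

```shell
# docker-machine creates a small Linux VM and points the local
# docker CLI at the daemon running inside it.
docker-machine create --driver virtualbox default
eval "$(docker-machine env default)"   # exports DOCKER_HOST etc.
docker ps                              # now talks to the VM's daemon
```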

Hitler: Do you hear yourself? Why do we need docker if we’re running a VM? A container inside a container!!!

In this context, a VM can be used for isolation, so is considered a container also. A Docker advocate would argue that the image is lightweight and easier to deploy than a VM. Or, as I like to say, Docker doesn’t give you anything a VM can’t, but then a computer gives you nothing an abacus can’t – user experience is key.

Hitler: You archived a whole Linux O/S then used CoW storage because it’s too big.

By ‘Linux O/S’ Hitler here means an operating system’s filesystem, which makes up most of a Docker image. CoW (copy-on-write) storage is a feature of Docker whereby changes to the filesystem are copied on write to make a new ‘layer’, ready for committing as a new image. Images are made up of these layers, which can be shared between containers, reducing disk usage. Hitler’s point here is that the images contain a lot of data, which can be wasteful.
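You can see the layers for yourself, assuming a local daemon with the `ubuntu` image pulled:

```shell
# Each Dockerfile instruction typically produces a layer; docker history
# lists them, newest first, with their sizes. Shared base layers are
# stored once on disk, however many images build on them.
docker history ubuntu
```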

Hitler: Just so you can deploy a 10MB go binary!

Docker is written in Go, a fashionable language. One of Go’s features is that it generates portable binaries that can be run across different distributions with little difficulty. Hitler’s point is that if the Go binary is already portable, why bother with Docker at all?
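For what it's worth, a static Go binary also lets you sidestep the ‘whole Linux O/S’ complaint. A hypothetical sketch (`myapp` is a placeholder project):

```shell
# A statically linked Go binary needs almost nothing from the OS, so
# its image can start from the empty 'scratch' base rather than
# archiving a whole distribution.
cat > Dockerfile <<'EOF'
FROM scratch
COPY myapp /myapp
ENTRYPOINT ["/myapp"]
EOF
CGO_ENABLED=0 go build -o myapp .   # static build, no libc dependency
docker build -t myapp:tiny .        # image is roughly the binary's size
```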

Hitler: Don’t even talk to me about resource constraints. All that cgroups magic and it still can’t stop a simple fork bomb!

cgroups is a Linux kernel technology used by Docker and others to attempt to ensure fair (or guaranteed) resource allocation. It can be tricky to learn. Fork-bomb attacks have been known to work on Docker, but work has been done on this recently.
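One example of that recent work is the `--pids-limit` flag (backed by the cgroup pids controller), which caps how many processes a container may create. A sketch, assuming a daemon recent enough to support it:

```shell
# A fork bomb exhausts the process table by forking endlessly;
# --pids-limit caps the container at 100 processes so the host survives.
docker run --rm --pids-limit 100 ubuntu \
    bash -c ':(){ :|:& };:'   # the classic fork bomb, now contained
```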

Hitler: And if the database needs all the resources on the server how exactly will Docker allow you to run more programs on it!? Before Docker I just picked the right size VMs.

Docker is not magic. If you need the tin for your application, it won’t help you get more resources.

Hitler: Suddenly people talk to me about datacenter efficiency and “hyperconvergence”. Everybody thinks they’re Google!

Far too many organisations act like they are running at Google scale when they are not.

Hitler: You don’t even run your own machines anymore! People run on GCE, in VM instances that run in Linux containers on Borg!

Google Compute Engine is Google’s alternative to Amazon Web Services. They run VMs within Linux containers that themselves run Docker, which presumably Hitler thinks is laughable, but it is there to provide greater levels of security, and likely because Google is not short of compute! Borg is Google’s cluster management software, on which Kubernetes is based.

Hitler: People even think Docker is configuration management. They think Docker solves everything!

If Docker is anything, it’s package management. You might use Dockerfiles for primitive configuration management, but you can equally use traditional CM tools like Chef and Puppet to provision your images.
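What ‘primitive configuration management’ looks like in practice, as an illustrative sketch (the package and config file are placeholders):

```shell
# A Dockerfile as primitive CM: package installs and config files are
# baked into the image at build time rather than converged at runtime.
cat > Dockerfile <<'EOF'
FROM ubuntu
RUN apt-get update && apt-get install -y nginx
COPY nginx.conf /etc/nginx/nginx.conf
EOF
# Alternatively, a RUN step could invoke Chef or Puppet to provision
# the image using your existing CM code.
```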

Hitler: Even Microsoft has containers now. I’m moving everyone to Windows!

The Windows picture is quite complicated. You can:

– Run Docker within a VM running on Windows (see above)

– Run a Windows container (not widely available yet) that implements the Docker API. This will talk to the Windows OS API (I assume) rather than the Linux kernel API, so the images built will not run across the two systems.

– Run bash on Windows natively. See below.

Henchwoman: Don’t cry, you can run bash on windows 10 now.

You can, via the Windows Subsystem for Linux. This is not a VM technology.

Hitler: Docker is supposed to have better performance yet that userland proxy is slower than a 28.8k modem and for what, just bind on port 0.

A userland proxy is one written in software, outside the kernel; in-kernel proxies are much faster. Binding on port 0 gets any available port from the OS, and Docker does something similar by default. Docker performance is not better than natively-run software, but in some cases is arguably better than VMs.
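The port-0-style behaviour and the proxy in question, sketched assuming a local daemon:

```shell
# Publishing with -P asks the OS for ephemeral host ports, much like
# an application binding to port 0.
docker run -d -P --name web nginx
docker port web 80        # shows which host port the OS handed out
# The userland proxy can be disabled daemon-wide, leaving iptables
# rules to forward published ports instead:
#   dockerd --userland-proxy=false
```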

Hitler: Even enterprises want to run Docker now and they still have Red Hat 5 installed.

This happens. Red Hat is an enterprise-supported distribution of Linux. Red Hat Enterprise Linux 5 was released in 2007.

Hitler: You idiots think that Docker will help your application scale.

It won’t. It can allow you to run more instances of your application, which is not the same thing.

Hitler: Use Openstack for all I care.

Openstack is an open-source cloud technology, which is powerful but costly to manage, and somewhat out of favour now.

The author is currently working on the second edition of Docker in Practice.

Get 39% off with the code: 39miell2