One of the insights from Google's experience running containers at scale, reported in the Borg paper, led to the introduction of the pod concept. Another way to put it: having a mechanism that allows you to co-locate containers on one host and that guarantees a number of 'local' communication and sharing mechanisms, such as network and data, is a great thing to have. Or is it?

In the following I’ll challenge this (research) outcome.

First, what makes sense for Google, that is, if you have Google's infrastructure and their workload, does not necessarily make sense for you (since, by definition, you're not Google). So, while the lesson learned (pods are necessary) might apply to Google's problems and might solve certain cases for them, this doesn't necessarily imply it does for you and your workload.

OK, this is a weak argument and can be rebutted, I suppose. But there’s more.

So, another one: there are plenty of great Kubernetes overview and introduction presentations. I've given a couple myself, and whenever I did, or sat in a presentation of a Googler motivating the pod construct, the same use case was given (and yes, I used it myself, mea culpa): think of a render process, say a web server, and a fetcher process that delivers some data to the render process.
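For concreteness, this canonical use case is usually shown as a two-container pod sharing a volume. A minimal sketch might look like the following; the image names, the fetch URL, and the polling loop are purely illustrative assumptions on my part, not taken from any particular presentation:

```yaml
# Sketch of the canonical two-container pod: a fetcher writes data
# into a shared emptyDir volume that the web server then serves.
apiVersion: v1
kind: Pod
metadata:
  name: webserver-with-fetcher
spec:
  volumes:
    - name: shared-data
      emptyDir: {}
  containers:
    - name: webserver
      image: nginx:1.25          # illustrative image choice
      volumeMounts:
        - name: shared-data
          mountPath: /usr/share/nginx/html
    - name: fetcher
      image: busybox:1.36        # illustrative image choice
      # Hypothetical fetch loop: refresh the served page every minute.
      command:
        - sh
        - -c
        - "while true; do wget -qO /data/index.html http://example.com; sleep 60; done"
      volumeMounts:
        - name: shared-data
          mountPath: /data
```

The point of the construct is that both containers see the same network namespace and the same volume, so the fetcher and the web server can cooperate without any cross-host coordination.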

OK. And? What else is out there where pods are indispensable? Right. They are syntactic sugar, a convenience, but (again: if you're not Google) something you can probably do without.

But where’s the data, Michael, give me the data!

Let’s have a look at a popular repository of Kubernetes runtime definitions, the helm/charts GitHub repository. If multi-container pods are at all useful, you’d expect to see them used in, say, at least 20% to 30% of the cases, right? I was unable to find a single example in that repository: not once does an RC (ReplicationController) define more than one Docker image. I wonder why …
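If you want to check this yourself, a rough sketch of the approach is below. It assumes a local clone of the helm/charts repository laid out as `charts/<name>/templates/*.yaml`; for the sake of a self-contained example, it is demonstrated on two tiny stand-in manifests rather than the real repo, and simply counts `image:` lines per template as a proxy for multi-container definitions:

```shell
# Stand-in for a cloned chart repo: one single-image and one
# multi-image manifest (hypothetical contents for demonstration).
mkdir -p demo/templates
cat > demo/templates/single.yaml <<'EOF'
containers:
  - image: nginx:1.25
EOF
cat > demo/templates/multi.yaml <<'EOF'
containers:
  - image: nginx:1.25
  - image: busybox:1.36
EOF

# Report every template that declares more than one container image.
# Against a real clone you would glob charts/*/templates/*.yaml instead.
for f in demo/templates/*.yaml; do
  n=$(grep -c 'image:' "$f")
  if [ "$n" -gt 1 ]; then
    echo "$f declares $n container images"
  fi
done
# prints: demo/templates/multi.yaml declares 2 container images
```

Counting `image:` lines is a crude heuristic (it ignores init containers, templating, and comments), but it is enough to spot whether multi-container pod templates appear at all.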