Much commotion has surrounded this column in the past few weeks. Not even counting the systemd discussion, my call for a server-only Linux distribution that does not support any desktop applications or frameworks caused a tizzy, mostly from folks who couldn't quite grasp that I meant more than simply not selecting desktop packages during installation.

You may also have noticed that InfoWorld underwent a massive redesign last week, and unfortunately part of that required discarding the previous Disqus commenting system, so the hundreds of comments on those pieces have been lost. It's probably just as well. I hope that with enough time, what I was actually discussing will have more mind share than the explosion of misunderstanding that ensued.


Let's put that mostly to bed. Those of us who build and maintain large-scale Linux infrastructures would be happy to see a highly specific, highly stable mainstream distro that had no desktop package or dependency support whatsoever, and thus wasn't beholden to architectural changes driven by desktop requirements. When you're rolling out a few hundred Linux VMs locally, in the cloud, or both, you won't manually log into them, much less need any type of graphical support. Frankly, you could lose the framebuffer too; it wouldn't matter unless you were running certain tests. They're all going to be managed by Puppet, Chef, Salt, or Ansible, and they're completely expendable.
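To make that model concrete, here's a minimal sketch of hands-off administration: an Ansible playbook that configures a fleet entirely over SSH, with no console, framebuffer, or GUI involved. The "webservers" group name and the package choice are illustrative assumptions, not a prescription.

```yaml
# Hypothetical playbook: configure a fleet of headless servers over SSH only.
# The "webservers" inventory group and nginx package are illustrative.
- hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      service:
        name: nginx
        state: started
        enabled: true
```

Because tasks like these are idempotent, the same playbook can be pointed at a hundred freshly provisioned VMs or at one, and no human ever needs an interactive session on any of them.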

Let's explore that a bit. I think perhaps there's a growing gulf not only between Linux desktop users and server admins, but also between Linux server use cases. I can clearly see how someone knowledgeable in Linux to the point where they are running a handful of Linux servers mostly via GUI tools would be perplexed at the idea of a server that doesn't have a framebuffer, but believe me, there are plenty of examples where that is absolutely the case. In fact, many physical servers built by Sun had no framebuffer — all local administration was done via serial console, as preferred by many admins.

Now with VMs, the lack of framebuffer support is somewhat immaterial because it's not a hardware consideration anymore. But the overall concept still applies — in many cases, any interactive administrative access to Linux servers other than SSH is simply not useful.

This, again, is at scale and for certain use cases. It is, however, the predominant way that cloud server instances are administered. In fact, at scale, most cloud instances are never interactively accessed at all. They are built on the fly from gold images and turned up and down as load requires.

Further, these instances are usually one-trick ponies. They perform one task, with one service, and that's it. This is one of the reasons that Docker and other container technologies are gaining traction: They are designed to do one thing quickly and easily, with portability, and to disappear once they are no longer needed.

Even a minimal installation of a major distribution carries a high number of extraneous packages for this type of system. These systems can be pared down to the barest of bare bones because they're running Memcached or Nginx. They're doing nothing else, and they never will. This is a vastly different use case than most other types of Linux servers running today, but it's an example of why the inclusion of desktop packages in a distribution can be less than useless.
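As a sketch of how far the paring down can go, a container image for one of those single-service systems needs little more than the service binary itself. This hypothetical Dockerfile (the Alpine base, version tag, and the package's unprivileged user are assumptions for illustration) runs Memcached and nothing else:

```dockerfile
# Hypothetical single-service image: Memcached on a minimal base, nothing else.
# Base image and version tag are illustrative assumptions.
FROM alpine:3.19
RUN apk add --no-cache memcached
# Drop privileges to the service account (assumed to be created by the package).
USER memcached
EXPOSE 11211
CMD ["memcached", "-m", "64"]
```

No desktop libraries, no login environment, not even a full init system: one process, one port, disposable on demand.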

If we rewind even a few years, the idea of running server instances for single services and processes was possible, but generally wasteful. You could run a ton of individual VMs, each powering a single service, but you would require more RAM and disk space than would be needed for beefier VMs handling more load on the same hardware. You also had load-balancing considerations, slower storage, and a host of other dependencies.

Cloud services were lighter on all resources, and thus you generally needed server instances to handle multiple tasks at once for Web-scale loads, and implementations that weren't Web-scale were nowhere near the cloud for the most part. That's been changing rapidly, and now we're moving into use cases where a Linux server distribution is almost quaint, because it doesn't really matter what's in use in these containers, or in many paravirtualized VMs or cloud server instances.

In many implementations, we're not talking about a Linux distribution as much as we're talking about embedded servers, albeit embedded in containers or virtualized frameworks that can give those servers extensive resources. In those arenas and at that scale, the thinner, lighter, and simpler the distro, the better. Distributions like CoreOS, which is designed to drive clusters running Docker containers, are a step in this direction.

To create such a beast, most vendors have taken existing distributions, excised as much as possible, and tuned them for their infrastructure. They then offer these stripped-down images as base images for provisioning. It's only a matter of time before a Linux distribution that caters solely to these considerations becomes mainstream and is offered alongside more traditional distributions.

The face and function of Linux servers have always been evolving, and this is the next step.

I'd imagine that on the other side of containers and cloud-scale service instances, we'll see something even more streamlined and simple, because we'll see a complete reorganization of how we deliver services. But that's a concept for another discussion.