I was asked recently to write a containers-versus-unikernels article, and I said, “Sure, but it won’t be the article you think it is, because I share Per Buer’s sentiment that unikernels are not simply containers 2.0.” I see them as apples and oranges. I think a lot of the confusion stemmed from Docker’s acquisition of Unikernel Systems a few years ago; they were the team that coined the term, after all. What the company might not have intended was to spawn the more than 10 different unikernel implementations that exist today. Indeed, there were already projects that could’ve been called a “unikernel” before their papers came out, and some projects, while not adopting the moniker, walk, talk and act like a unikernel, so here we are today.

Let’s start with a few quick descriptions. First, a container is a set of Linux kernel primitives that lets you package an application in a form that makes it harder for it to interfere with other applications. Chroot has been used for decades to perform “clean room”-style packaging and to build things such as Debian packages. It was only when people started building networking and filesystem abstractions on top of these primitives and using them as an application delivery mechanism that we got the ecosystem we know as containers today. There’s been an awful lot of hand-wringing over trying to create infrastructure out of containers, with Kubernetes being the de facto stack; however, we’ve yet to see an AWS or Google cloud built out of containers, so as raw infrastructure they still live a layer above.
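To make those kernel primitives concrete: every Linux process belongs to a set of namespaces (pid, mnt, net and so on), and container runtimes isolate an application by giving it fresh copies of them. A minimal sketch, assuming a Linux host with /proc mounted, just lists the namespaces the current process lives in:

```python
import os

def current_namespaces():
    # Each namespace a process belongs to shows up as an entry under
    # /proc/self/ns (e.g. "pid", "mnt", "net", "uts", "ipc", "user").
    # Container runtimes create new namespaces; here we only observe ours.
    return sorted(os.listdir("/proc/self/ns"))

if __name__ == "__main__":
    for ns in current_namespaces():
        print(ns)
```

Running this inside a container versus on the host prints the same namespace types; what differs is which namespace instances the process is in, which is exactly the isolation containers are built from.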

A unikernel is an entirely different beast. If you simply read an article or two, a unikernel looks on the surface like an “improved container” because of the security and performance benefits it brings. Indeed, many people have called them “containers 2.0,” but that simply isn’t the case. While many pundits, myself included, have described unikernels as an application running without an operating system, that’s not technically true; we just say that to non-developers because it’s easier than explaining what exactly is going on. Unikernels do have operating systems, but not general-purpose ones like Linux. They are single-purpose. Containers, on the other hand, have a shared kernel: each unikernel has its own kernel, while all the containers on one host share the host’s. This is a fairly large distinction. Most unikernels descend from the microkernel family tree of operating-system design, and most fully embrace the single-process model.

Let’s pause right there and talk about that. A typical Linux system will have a hundred or so processes running as soon as you boot it, before you install a single thing. A unikernel has only one process running: the application it was built to run. Containers are not like this; while it’s considered best practice to run only one process per container, it’s trivial, and sometimes encouraged, to run other processes inside as well.
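You can take that process census yourself on any Linux box. A quick sketch (Linux-only, since it relies on /proc) counts the numeric directories under /proc, which is roughly what `ps -e | wc -l` reports:

```python
import os

def process_count():
    # Every running process appears as a directory named by its PID
    # under /proc, so counting the numeric entries counts processes.
    return sum(1 for name in os.listdir("/proc") if name.isdigit())

if __name__ == "__main__":
    print(f"{process_count()} processes running")
```

On a freshly booted desktop distribution, expect a triple-digit number; a unikernel, by definition, would report one.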

Also, I’ve been using the word “process” intentionally. A program installed in a container could potentially spawn many processes. Forking new processes was a common way to scale in the ’90s, but it’s much slower than threading. Many interpreted languages scale by pre-forking through a web server or by running many app instances behind a load balancer; that’s because a lot of interpreted languages don’t have true threading support, and many offer only “green threads,” if that. A container is more like a padded room in which you can do whatever you want without disturbing your neighbors (although this is proving to be mostly untrue), whereas a unikernel will execute only one process; if you want another one, you spin up a new unikernel. Many unikernels do support multi-threading, though, and that’s a good thing. This single-process nature is where unikernels get a lot of their security and performance.
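The fork-versus-thread cost difference is easy to observe. A sketch using Python’s standard `concurrent.futures` pools runs the same trivial workload once on a pool of forked worker processes and once on a pool of threads, timing pool startup, work and teardown; absolute numbers will vary by machine, but process pools pay for address-space duplication that thread pools don’t:

```python
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor

def square(n):
    return n * n

def time_pool(pool_cls, jobs):
    # Measure wall time to spin up the pool, run all jobs and tear it down.
    start = time.perf_counter()
    with pool_cls(max_workers=4) as pool:
        results = list(pool.map(square, jobs))
    return results, time.perf_counter() - start

if __name__ == "__main__":
    jobs = range(100)
    _, fork_time = time_pool(ProcessPoolExecutor, jobs)
    _, thread_time = time_pool(ThreadPoolExecutor, jobs)
    print(f"processes: {fork_time:.4f}s  threads: {thread_time:.4f}s")
```

This is also why the pre-fork pattern pushes the expensive part to startup: the workers are forked once, ahead of time, instead of per request.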

What’s interesting about a unikernel, versus a normal program running by itself in a Linux VM, is that there is typically little, if any, separation between the running process and its host operating system. (This statement and the next one are extremely nuanced, and I’ll explain what I mean shortly.) Indeed, the majority of unikernels embrace what’s called a single-address-space model, which allows them to run at screaming speeds. There are trade-offs to this model, but they aren’t as dangerous as some people make them out to be. It’s assumed that unikernels are single-process by nature and that they reside in a VM. For instance, in the shared model, if the application accidentally maps or writes to a memory address it’s not supposed to touch, it could kill the instance; but is that a problem with the system as a whole, or with the app? The app would’ve segfaulted under Linux anyway. However, I am not the OSI, and since there exist multi-process unikernels and unikernels with address-space separation between kernel and user, this is now officially a muddy area. Thank our lucky stars there is no Unikernel Foundation offering “certified unikernel” designations yet, although there are unikernel foundations. 😀

SMP further complicates end users’ understanding of how all of this works. Some unikernels employ syscalls and some don’t. If you’re running 64-bit applications (which the majority of those reading this article will be), the ABI dictates that you use virtual memory, and if that’s the case, then you also have paging. 64-bit is a necessity for doing anything “modern,” such as running a JVM app or a database, because those are guaranteed to chew through more memory than you’d get in 32-bit land. A lot of what you’ll find on the internet about all of this is simply outdated, and you’ll need to consult the gospel (the Intel or AMD manuals; but beware, there are snakes inside those hallowed documents as well!).
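Two numbers summarize where your program sits in that landscape: the pointer width (32- versus 64-bit land) and the page size, the granularity at which virtual memory is mapped. A small sketch using only the standard library reports both:

```python
import mmap
import struct

# struct's "P" format is a native pointer, so its size reveals the
# ABI's word width; mmap.PAGESIZE is the kernel's virtual-memory
# page size (typically 4096 bytes on x86-64).
pointer_bits = struct.calcsize("P") * 8
page_size = mmap.PAGESIZE

print(f"{pointer_bits}-bit pointers, {page_size}-byte pages")
```

On a typical x86-64 system this reports 64-bit pointers and 4096-byte pages, though some architectures use larger pages.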

So will unikernels replace Docker containers? Probably not. The container ecosystem has a wealth of software built around it that simply doesn’t transfer to unikernels, because of how differently the two are implemented. For instance, I get asked a ton whether unikernels can run under Kubernetes, and my answer is that they can, but why would you, considering the environments they’d be running in?

Will unikernels steal a portion of the server-side Linux market? Absolutely. Among other companies and projects, Ulrich Drepper, the well-known glibc maintainer, is behind a unikernel project. Linux security has never been good, and while unikernels are still subject to things such as RFI (remote file inclusion) attacks, they are highly resilient to the various RCE (remote code execution) attacks that depend on the ability to run entirely different programs, something unikernels simply don’t support. One could argue that the hypervisor itself would then become the attack target, and that’s definitely not impossible; but if we start seeing attacks on hypervisors at the frequency and scale we see against Linux servers and Docker containers on the public clouds, then public clouds as an infrastructure option just might go away unless a newer technology addresses that. The implied deal the public clouds brokered with us was: if you let hackers into your Linux VMs, that’s your fault; if we let them inside our machines, that’s our fault. The barrier that keeps that line of demarcation in place is the venerable virtual machine.

To sum up, asking whether unikernels are like containers is kind of like asking if unikernels are like Linux. In some ways, yes, but in many ways, no.