It all started when the chroot jail and the chroot system call were introduced during the development of Version 7 Unix in 1979. Chroot stands for "change root", and it is considered one of the first containerization technologies: it allows you to isolate a process and its children from the rest of the operating system. The only problem with this isolation is that a root process can easily exit the chroot; it was never intended as a security mechanism. The FreeBSD Jail was introduced in FreeBSD in 2000 to bring more security to the simple chroot file isolation. Unlike chroot, the FreeBSD implementation also isolates processes and their activities within a particular view of the filesystem.
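As a minimal sketch of what the chroot system call does (not the historical V7 code), the idea can be demonstrated from Python on a Linux host. The helper name `run_in_chroot` and the throwaway jail directory are illustrative; chroot(2) needs root privileges, so the sketch returns `None` instead of failing when unprivileged:

```python
import os
import tempfile

def run_in_chroot(new_root, path):
    """Fork a child, confine it to new_root with chroot(2), and read
    `path` from inside the jail.  chroot needs root (CAP_SYS_CHROOT),
    so we return None when unprivileged instead of raising."""
    if os.geteuid() != 0:
        return None
    r, w = os.pipe()
    pid = os.fork()
    if pid == 0:                          # child: jailed from here on
        try:
            os.close(r)
            os.chroot(new_root)           # "/" now means new_root for this process
            os.chdir("/")
            with open(path, "rb") as f:   # path is resolved inside the jail
                os.write(w, f.read())
        finally:
            os._exit(0)
    os.close(w)
    data = os.read(r, 4096)
    os.waitpid(pid, 0)
    return data.decode() or None

# Build a throwaway root containing a single file, then read it from the jail.
jail = tempfile.mkdtemp()
with open(os.path.join(jail, "hello.txt"), "w") as f:
    f.write("inside the jail")
print(run_in_chroot(jail, "/hello.txt"))
```

The child process sees `jail` as its filesystem root, so `/hello.txt` resolves to the file we created there, while the parent's view is untouched. This is the whole of the isolation chroot provides: a filesystem view, nothing more.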

A Chroot Jail. Source: https://linuxhill.wordpress.com/2014/08/09/014-setting-up-a-chroot-jail-in-crunchbang-11debian-wheezy

When operating-system-level virtualization capabilities were added to the Linux kernel, Linux VServer was introduced (in 2001). It combined a chroot-like mechanism with "security contexts" and operating-system-level virtualization (containerization) to provide a virtualization solution. It is more advanced than a simple chroot and lets you run multiple Linux distributions on a single host, each as a "virtual private server" (VPS).

In February 2004, Sun (later acquired by Oracle) released Solaris Containers, its own operating-system-level virtualization technology for x86 and SPARC processors.

SPARC is a RISC (reduced instruction set computing) architecture developed by Sun Microsystems.

A Solaris Container is a combination of system resource controls and the boundary separation provided by zones.

Oracle Solaris 11.3

Similar to Solaris Containers, the first version of OpenVZ was introduced in 2005. OpenVZ, like Linux-VServer, uses OS-level virtualization, and it was adopted by many hosting companies to isolate and sell VPSs. OS-level virtualization has its limits: since containers share the host's architecture and kernel version, they cannot accommodate guests that require a different kernel version than the host's.

Linux-VServer and OpenVZ both require patching the kernel to add the control mechanisms used to create isolated containers. The OpenVZ patches were never integrated into the mainline kernel.

In 2007, Google released cgroups, a mechanism that limits and isolates the resource usage (CPU, memory, disk I/O, network, etc.) of a collection of processes. Unlike the OpenVZ patches, cgroups was mainlined into the Linux kernel.
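To make the mechanism concrete, here is a small sketch of capping memory for a group of processes. It uses today's cgroup-v2 filesystem interface (not the original 2007 one); the group name `demo`, the helper `limit_memory`, and the 64 MiB figure are illustrative, and the function returns `False` rather than failing when run without root or without a cgroup-v2 mount at `/sys/fs/cgroup`:

```python
import os

CGROUP_ROOT = "/sys/fs/cgroup"   # assumes a cgroup-v2 hierarchy mounted here

def limit_memory(group, max_bytes):
    """Sketch: create a cgroup and cap the memory of processes placed in it.
    Needs root and cgroup v2; returns False (instead of failing) otherwise."""
    if os.geteuid() != 0:
        return False
    if not os.path.exists(os.path.join(CGROUP_ROOT, "cgroup.controllers")):
        return False             # no cgroup-v2 hierarchy available
    path = os.path.join(CGROUP_ROOT, group)
    try:
        os.makedirs(path, exist_ok=True)
        with open(os.path.join(path, "memory.max"), "w") as f:
            f.write(str(max_bytes))       # hard memory limit for the group
        with open(os.path.join(path, "cgroup.procs"), "w") as f:
            f.write(str(os.getpid()))     # move this process into the group
    except OSError:
        return False
    return True

print(limit_memory("demo", 64 * 1024 * 1024))  # cap the group at 64 MiB
```

Once a process is in the group, the kernel enforces the limit on it and every child it forks — this is the resource-control half of what container runtimes build on, with namespaces providing the isolation half.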

In 2008, the first version of LXC (Linux Containers) was released. LXC is similar to OpenVZ, Solaris Containers, and Linux-VServer; however, it uses cgroups, which is already implemented in the Linux kernel. Then, in 2013, CloudFoundry created Warden, an API for managing isolated, ephemeral, and resource-controlled environments. In its first versions, Warden used LXC.

In 2013, the first version of Docker was introduced. Like OpenVZ and Solaris Containers, it performs operating-system-level virtualization.

In 2014, Google introduced LMCTFY ("Let Me Contain That For You"), the open-source version of Google's container stack, which provides Linux application containers. Google engineers have been collaborating with Docker on libcontainer, porting LMCTFY's core concepts and abstractions to it. As a result, the project is no longer actively developed, and its core will probably be replaced by libcontainer.

LMCTFY runs applications in isolated environments on the same kernel, without patching it, since it uses cgroups, namespaces, and other Linux kernel features.

Google is a leader in the container industry: everything at Google runs in containers, and more than 2 billion containers are started on Google infrastructure every week.

In December 2014, CoreOS released and began supporting rkt (initially named Rocket) as an alternative to Docker.

Jails, Virtual Private Servers, Zones, Containers, and VMs

Isolation and resource control are the common goals behind using jails, zones, VPSs, VMs, and containers, but each technology achieves them differently and has its own limits and advantages.

Until now, we have briefly seen how a jail works, and we introduced how Linux-VServer allows running isolated user spaces in which programs run directly on the host operating system's kernel but only have access to a restricted subset of its resources.

Linux-VServer allows running "virtual private servers", and the host kernel must be patched to use it. (Consider "VPS" the commercial name.)

Solaris containers are called Zones.

A "virtual machine" is a generic term for an emulated machine running on top of a real hardware machine. The term was originally defined by Popek and Goldberg as "an efficient, isolated duplicate of a real computer machine".

Virtual machines can be either "system virtual machines" or "process virtual machines". In everyday use of the term, we usually mean system virtual machines, which emulate the host hardware in order to run an entire operating system. "Process virtual machines", sometimes called "application virtual machines", instead emulate the programming environment needed to execute an individual process: the Java Virtual Machine is an example.

OS-level virtualization is also called containerization. Technologies like Linux-VServer and OpenVZ can run multiple isolated operating-system instances that share the host's architecture and kernel version.

Sharing the same architecture and kernel has limitations and disadvantages in situations where guests require a different kernel version than the host's.

System containers (e.g., LXC) offer an environment as close as possible to the one you'd get from a VM, but without the overhead of running a separate kernel and simulating all the hardware.

VM vs Container. Source: Docker Blog

OS Containers vs App Containers

OS-level virtualization lets us create containers. Technologies like LXC and Docker use this type of isolation. There are two types of containers here:

OS containers, which package the operating system together with the whole application stack (for example, a complete LEMP server).

Application containers, which usually run a single process per container.

In the case of application containers, we would have 3 containers to create a LEMP stack: