Learn and use LXD system containers — especially for development and testing

Cristian Posoiu · Mar 18 · 14 min read

A virtual-machine-like experience, but with much smaller overhead! Many use cases, even if you’re also using application containers!

Photo by Manuel Geissinger from Pexels

Summary

Very short intro to virtualization.

How lightweight LXD is.

LXD — some quick facts/features.

When to use — a quick short list.

Use case examples — with more details.

Lightweight … intro to containers and virtualization

When we talk about virtualization we usually imply running an entire operating system inside another operating system, in a secured manner, with resource isolation.

When we refer to containers, we talk about running some application(s) in constrained, secured and separate environments, with much lighter resource consumption.

If we go deeper into the technicalities, we have:

A) Full virtualization engines — VirtualBox, KVM, Xen, VMWare, etc.

You run a full operating system, with device drivers, disks, and so on. There are no limits on which OS (operating system) you can run, but resource consumption (memory, CPU) is fairly high.

One step further, for a speed increase, is paravirtualization, where the host OS runs a hypervisor and the guest OS contains dedicated code that communicates with that hypervisor in order to run some of its components more efficiently.

B) OS level virtualization

The host OS kernel allows the existence of multiple isolated user space instances — whether they are called containers, virtual private servers, jails, and so on.

Examples — Docker, OpenVZ, chroot, LXC/LXD.

We can consider here the subgroup of ‘application containers’ (Docker, rkt), where you run one single application in that isolated environment.

While you can also define various resource (CPU, memory, storage) and security constraints, they are still far lighter than a full virtual machine (VM), meaning you can pack more of them onto the same physical hardware.

As you can see, application containers are a form of virtualization as well, but they are lightweight and concerned with the application(s) they run, not with a full OS.

A short version of Docker “versus” LXC/LXD

First, there is LXC — i.e. the userspace interface for the Linux kernel containment features. It is an API and a simple tool to manage system or application containers.

Initially, Docker also used LXC as its default execution environment, but at some point it replaced it with its own component.

LXD comes into play as a system container manager, providing a virtual-machine-like experience using Linux containers. That is, it uses LXC, builds on it, and adds many more features, including a client-server architecture, so that you can control remote LXD servers/containers.

What comes up as confusing initially :-) is that LXD’s own CLI tool is actually named ‘lxc’ (LX Client? 😃). The original LXC CLI tools all start with the ‘lxc-’ prefix.

Docker delivers application(s) as containers while LXD is for providing “system containers”.

One way you could think about LXD is that it runs a very lightweight virtualized Linux operating system.

So Docker focuses on/contains one individual application (ahem.. most of the time), while LXD focuses on full but lightweight Linux OS.

Notice: while Docker can also run on non-Linux systems, at this time LXD (the ‘server’ part) is Linux only. LXD clients also exist for Windows and macOS.

LXD — lightweight

Let’s reiterate one of the advantages of LXD: a very lightweight Linux virtual environment. That means that whenever you need something to run in a Linux virtual machine, or in a more secured environment, you don’t need to think that much about resource usage and you can run it very easily.

Installation is very simple, usually a single command — see their documentation.

With a local Ubuntu 18.04 image, on a desktop with an Intel E3–1240 V2 CPU @ 3.40GHz, 32 GB RAM and an SSD as storage, while also doing other mundane activities, let’s start a new container, stop it, and restart it — and remember, confusingly :-), the CLI is named ‘lxc’:
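A session like the following illustrates it (the container name ‘t1’ is my own example; the timings are from my machine and yours will vary):

```shell
# create and start a new container from the Ubuntu 18.04 image
time lxc launch ubuntu:18.04 t1   # ~6-12s (image already downloaded locally)

# stop it
time lxc stop t1                  # ~1-2s

# start it again -- the container already exists, so this is fast
time lxc start t1                 # <1s
```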

Recap: 6–12 seconds to start a new container, 1–2 seconds to stop, less than 1 second to start if it was already created!

Ah, yes, they call the instances ‘containers’ as well, even if you have a full working Linux OS in it :)

What is inside?

As you can see, unlike a Docker container, but like a normal Linux system, it runs a handful of processes, starting with the PID 1 ‘init’ program (systemd), plus cron and so on.

Memory-wise, being OS-level virtualization — that is, running on an already running Linux kernel — it shouldn’t consume much beyond whatever the programs running in the container use.

If you ran a full virtual machine, it would use far more CPU and memory. And I don’t think you’d beat those start/stop times. Wasn’t it easy to start and stop?

I don’t know about your experiences, but for me, sometimes, a virtual KVM machine can end up eating 100% of one of my CPUs, doing … nothing.

LXD — A few facts that could be of interest

it is image based. That is, you usually start by running pre-existing images — like Ubuntu, Arch Linux, CentOS, etc.

containers usually start unprivileged — good for security, but this can be tuned

it uses storage from so-called “storage pools”, which can be provided by various backends — directory, LVM, Btrfs, ZFS, Ceph — each with its own list of features and capabilities

you can configure networking devices for each container (device names, MAC addresses, bridges, etc.)

- this can help if you want a fixed IP, want to simulate a specific device, or want to get an IP from an external DHCP server

you can add “proxies” — for example, to expose services from the container to the outside — in case you don’t want to use the container’s IP

you can mount various directories from the host system inside the container

- super useful for development environments!

it is structured in two parts: the server, which manages the actual containers, storage, etc., and offers an API; and a client for easy access to all of it

Because of this separation, you can easily manage containers over the network, on remote nodes.

it offers container migration between nodes, including live migration

resource allocation control (cpu, memory, etc)

device passthrough — usb, gpu, block devices, etc

you do not need an SSH server to get access into a container (you run ‘lxc exec [remote:]container’)

you can create ‘profiles’ where you define commonly used settings, and then attach them to containers. You could have a profile that grants a container more privileges, one that allows a program to access your X server, one that configures the ethernet device to get its IP from an external DHCP server, etc.

Advice: while the client ‘lxc’ tool offers everything, if you want to script things, use the query API for retrieving data. The client program’s output is meant for human consumption; compare the output of ‘lxc query’:

lxc query "/1.0/containers/t4" 2> /dev/null | jq -r '.status'

versus

lxc info t4 | grep 'Status:' | cut -d ' ' -f 2

When to use — quick list

Today you might work on your machine with Docker containers, with docker-compose, or you might even have a local Kubernetes node. But there are still cases where using a virtual machine instead of an application container makes sense. And given that you can run a VM-like environment with low overhead and quick start/stop times (LXD), even more cases pop up.

Let’s see some examples:

you want to learn about Linux but in a safer way

you need to test a new program before installing it on your machine/server

you need constrained or separate development environment(s) for whatever system you are creating

when hardware resources are low for a full virtual machine (or too low to host many VMs)

you want to test a new Linux distribution — usually console based

you need to experiment with Linux package management

you have to run an application made of multiple components, for which a set of Docker images and setup does not exist yet or is hard to create

similar to the one above: you need to run some legacy application(s)

you need to create a docker container 😃

you need to have a more complex setup, closer to providing infrastructure services. And maybe even provide them to Docker/Kubernetes — like some storage cluster

as an alternative to VirtualBox/Vagrant for development environments

for hosting providers — offering their products inside virtual OSes — better WordPress hosting, for example

offering lightweight virtual Linux systems, in addition to normal VMs, in dedicated cloud management interfaces (OpenNebula, OpenStack)

running a multi-node, multi-version Kubernetes cluster on your local machine

run Steam games in a more controlled environment — see this

I’ll take some of the examples and get into more details below.

Source directory mounting example

This is not actually a use case in itself, but it is needed for many of them.

Let’s say you want to test some scripts, from your machine, but you don’t want them to be copied into the container (you’re having them open in the editor, for example):
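A minimal sketch of this (the container name ‘dev’ and the paths are my own examples, using the same ‘lxc config device add’ mechanism shown later in this article):

```shell
# launch a container
lxc launch ubuntu:18.04 dev

# mount the host's ~/scripts directory inside the container;
# 'scripts' is just the device name, 'disk' is the device type
lxc config device add dev scripts disk \
    source=$HOME/scripts path=/home/ubuntu/scripts

# edits made on the host are visible inside immediately
lxc exec dev -- ls /home/ubuntu/scripts
```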

Tip: when running Ubuntu images, if you run an ‘apt’ command but get an error like “E: Could not get lock /var/lib/dpkg/lock-frontend — open (11: Resource temporarily unavailable)”, this is because the default images come with an automatic package-update program that starts immediately after the container starts. Just retry your command.

Replacing Vagrant

Many people are using Vagrant as a way to start a development environment. They start a Linux image, with some setup, and source code mounted in it.

But usually it is using Virtualbox as the VM engine (heavy) and maybe even NFS for exposing source directories.

For me at least, whenever I wanted to use Vagrant after not using it for a while, it always gave me errors, especially from plugins, with cryptic messages. And many times, fixing those errors is not trivial. Even now, preparing for this post, it happened to me again, and after I tried what it suggested, I ended up doing a “rm -rf ~/.vagrant.d”.

To replace Vagrant you might need to create a small shell script as a stand-in for the Vagrantfile, so that it launches the appropriate image, sets resource limits (I usually skip this extra step for dev environments), mounts the source directory, and so on. But after this, it will be quick to run/stop and easier on your computer’s resources.
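A minimal sketch of such a script (the container name, limits, and paths are illustrative assumptions, not from any real project):

```shell
#!/bin/sh
# up.sh -- a hypothetical Vagrantfile replacement for an LXD dev container
set -e

NAME=myproject-dev

# create the container only if it does not exist yet
if ! lxc info "$NAME" >/dev/null 2>&1; then
    lxc launch ubuntu:18.04 "$NAME"
    # optional resource limits
    lxc config set "$NAME" limits.cpu 2
    lxc config set "$NAME" limits.memory 2GB
    # mount the project source tree into the container
    lxc config device add "$NAME" src disk source="$PWD" path=/srv/app
else
    lxc start "$NAME"
fi
```

A matching `lxc stop "$NAME"` one-liner covers the teardown side.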

A few years ago, I had to create a development environment for a LAMP setup. Initially I used Vagrant, but I encountered various impediments, so I made the Vagrant setup work for the other user (on macOS), while I used LXD containers myself. Way faster and easier. The container setup was the same as inside Vagrant — Saltstack based.

Theoretically you can add a plugin so that Vagrant uses LXD instead of VirtualBox, but it is hard to use, and after it gave me errors about NFS, I gave up (why NFS, when you can just tell LXD to mount directories?). I have happily coded without Vagrant for many years 👍

I also remember that when I had to use it recently, it forced me to use 15 GB of disk storage for one Linux distribution — probably a setting inside the distribution’s image. On a laptop with an SSD, depending on what else is on it, allocating 15 GB for a simple test VM doesn’t seem right. With LXD I never ran into such disk allocation schemes; the images are usually small — 80–400 MB — and your disk usage starts from there.

Learning

Do you want to learn some Linux commands? Why learn/test directly on your own machine and imperil your system? You really don’t want to test some “rm -rf /” command on your box, right? 😃 What if you want to learn how to play with Ubuntu/CentOS/etc. package management? Do you really want to install packages on your own machine? What if you are running Ubuntu but need to learn some CentOS package management? You don’t need a full virtual machine for that.
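For instance, playing with CentOS package management from an Ubuntu host can be a throwaway container (the name ‘sandbox’ is my example; the ‘images:’ remote ships community images for many distributions):

```shell
# start a disposable CentOS container
lxc launch images:centos/7 sandbox

# experiment with yum inside it, without touching the host
lxc exec sandbox -- yum install -y htop

# throw the whole thing away when done
lxc delete --force sandbox
```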

Testing software before adopting it (gitlab in this case)

You’re searching for a piece of software that does something specific. As is normal, you’d want to test it before deciding on its adoption. Most of the time such applications have either dedicated Linux distribution packages or a setup script, and can come with a lot of dependencies. You probably don’t want them all installed on your machine before even knowing whether the application really fits your needs.

Let’s take an example — we’d like a ‘git’ repository “server” installed on premises. Several variants are available, like Bitbucket, gitlab, etc. Let’s try ‘gitlab’. We know it is a big piece of software, and we do not want to clutter our own machine and then spend time cleaning up. So we’ll want a VM or, better, LXD!

We go to their install documentation and select Ubuntu and follow the few setup instructions.
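The sequence is roughly the following (the container name ‘gitlab’ is my choice; the install commands mirror GitLab’s published Ubuntu instructions at the time, so check their documentation for the current form):

```shell
# a dedicated container for the test
lxc launch ubuntu:18.04 gitlab

# enter the container and run GitLab's Ubuntu install steps inside it
lxc exec gitlab -- bash
apt update && apt install -y curl openssh-server ca-certificates
curl https://packages.gitlab.com/install/repositories/gitlab/gitlab-ee/script.deb.sh | bash
EXTERNAL_URL="http://10.207.127.224" apt install -y gitlab-ee
```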

Now point your browser to http://10.207.127.224 (or whatever IP you got), change the password as asked, then log in with ‘root’ and the new password. Then go and create a new project, ‘test1’.

Cool — it worked! Now explore further! If you like it, then you need to actually check all the install variants (because there are many) and see which ones fit your environment.

If you are not decided or do not have time to continue testing at that moment, remember that starting/stopping a LXD container is quick:

> time lxc stop gitlab
0.03s user 0.07s system 2% cpu 4.486 total

# and when you want to continue exploring:
> time lxc start gitlab
0.05s user 0.07s system 23% cpu 0.510 total

# let's see what is running inside!
> lxc exec -t gitlab bash
ps aux
# wow... a lot: gitaly, postgresql, unicorn, sidekiq, nginx, some *-exporter, grafana, redis, etc.

As a side note: gitlab in particular can also be run as a Docker image. But starting that image also takes a lot of time (the same setup seems to happen), and inside, the same big bunch of processes runs. Not exactly an ‘application’ container 😃, nor what Docker was built for. Personally, I run the Helm chart variant.

Run a graphical program — general setup

When you want to run a graphical program you will need to let the container access some of the resources of your host. Usually this means access to your GUI (X server) and sound — typically PulseAudio.

We’re going to create a LXD profile for that, which will allow it to be easily reused.
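A sketch of creating such a profile (this follows the commonly published recipe rather than an official one; the paths assume an X server on display :0 and PulseAudio under the host’s /run/user/1000, so adjust them to your system):

```shell
lxc profile create gui

# let programs in the container find the host's X display
lxc profile set gui environment.DISPLAY :0

# share the X server socket with the container
lxc profile device add gui x11 disk \
    source=/tmp/.X11-unix/X0 path=/tmp/.X11-unix/X0

# share the PulseAudio socket for sound
lxc profile device add gui pulse disk \
    source=/run/user/1000/pulse/native path=/tmp/pulse-native
```

You will typically also need to allow local connections to the X server on the host (via xhost) and point PULSE_SERVER inside the container at the mounted socket; the exact details vary per distribution.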

Run a graphical program — Kodi

Kodi = “ultimate entertainment center”. This text from their site made my evening: “Kodi puts your smart TV to shame”. Funny. True? Probably 😉

This use case is a combination of running or testing a program which could have a lot of dependencies — especially if you want to test various Kodi plugins, but also with a Graphical User Interface (GUI).

Being a graphical program, we’ll need to let it access some of our host’s resources, so first go through the previous chapter and create the ‘gui’ LXD profile. After that:
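Something along these lines (the container name ‘kodi’ is mine; the ‘gui’ profile is the one from the previous chapter, and Ubuntu’s archive carries a ‘kodi’ package):

```shell
# launch a container with both the default and the gui profiles
lxc launch ubuntu:18.04 kodi --profile default --profile gui

# install Kodi inside
lxc exec kodi -- apt update
lxc exec kodi -- apt install -y kodi

# run it as the container's unprivileged default user
lxc exec kodi -- sudo -u ubuntu -i kodi
```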

You might actually want to mount a directory with your own media now. Something like:

lxc config device add kodi src disk source=$HOME/Pictures path=/pics

Then, inside Kodi, you can add the “/pics” folder as a media folder.

You can also search on Google for ‘good Kodi addons’.

Run a program within a more secure environment — Firefox

Maybe you want to run a program in a more constrained environment, just to be on the safe side — i.e. so it does not change other things you don’t want changed. Or maybe you want to be sure it does not spy on you or your data.

For example, you might want to run a Firefox browser to really be sure you’re using a profile that does not interfere with your day by day Firefox use.
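With the ‘gui’ profile from earlier, this might look like the following (the container name ‘browser’ is my example):

```shell
lxc launch ubuntu:18.04 browser --profile default --profile gui
lxc exec browser -- apt update
lxc exec browser -- apt install -y firefox

# run Firefox as the container's default user,
# fully isolated from your host profile and data
lxc exec browser -- sudo -u ubuntu -i firefox
```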

You saw how simple it is, now that you also have the ‘gui’ LXD profile?

Run existing eCommerce software, on local machine

I had to work at some point with a Magento setup running on AWS. The setup, at that time, was not using Docker images. It consisted of several machines, with nginx, php-fpm, mysql, redis, inside a VPC and using a load balancer.

For what I needed to do, I used a setup of a few LXD containers: a haproxy container (the “AWS” load balancer), one MySQL container (the AWS RDS equivalent), one Redis container, and N web servers with nginx + php-fpm + Magento. Quick to start and stop, easy on resources, and using almost the same configuration files as in production (that was one of the motivations). No money required for a separate development environment on AWS; it can run fully disconnected — say, if you want to work on a plane 😄 — and you can edit with your preferred IDE and see the results instantly.

While such a specific Magento setup can now also be done with containers or even Kubernetes, this can still serve as an example of how to mirror a live ‘old’ production setup on your local machine.

Migrating legacy setups (into a more dense setup)

This is a theoretical discussion.

You have several separate legacy systems, each on its own physical machine, maybe not even using all their resources (CPU, RAM).

You need to migrate for whatever reasons — maybe just to make better use of the hardware. Or maybe, even if you wanted to, you can’t upgrade them to a new architecture like containers/Kubernetes, because they’re too old or it is too expensive. So you’re mostly stuck with the way they’re set up.

You can try migrating them into LXD containers. If you can’t find an LXD image to use as the application’s base, you could also try to make one from the Linux system they’re running on. Then you can run these containers on whatever new machines you want. And since this is lightweight, you can even run multiple instances of your applications on a single physical machine, achieving a denser setup with better resource utilization.
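The rough mechanics of turning an existing system into an LXD image look like this (a heavily simplified sketch; the metadata fields shown are minimal, and real migrations need more care around excluded paths and drivers — see the LXD image documentation):

```shell
# on the legacy machine: archive the root filesystem
tar -czf rootfs.tar.gz -C / \
    --exclude=proc --exclude=sys --exclude=dev .

# a minimal metadata.yaml describing the image
cat > metadata.yaml <<EOF
architecture: x86_64
creation_date: $(date +%s)
properties:
  description: Legacy app server
EOF
tar -czf metadata.tar.gz metadata.yaml

# import the pair as a local image, then run it
lxc image import metadata.tar.gz rootfs.tar.gz --alias legacy-app
lxc launch legacy-app legacy1
```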

Note that not every legacy setup can run easily into a LXD container — you might have to run some of them in real full VMs.

If you’re migrating to a cloud system, you can probably also just generate images (like Amazon AMIs) from the old systems and run each one in its own instance. But you can still choose to use LXD containers for them, if, for whatever reasons (higher density, powerful reserved instances) you want to have more of them running on the same cloud instance. Yes, LXD seems to work on EC2.

Local, multi-node, multi-version kubernetes cluster

You most probably had/have the need to run a Kubernetes cluster on your own machine — for development or testing purposes. There are many solutions that help you with that, but so far, I found the following issues:

some of them use full VMs — meaning slow, high resource consumption, or even a strange 100% CPU load for nothing from time to time

some are very light — no virtualization — but you get a single cluster made of a single node

If you want to test an application’s HA (High Availability) setup in a multi-node cluster, or just plainly see what happens when a Kubernetes node gets stopped, or even keep a few clusters readily available for different testing purposes, it is going to be hard.

What can you do? Use LXD containers as individual cluster nodes :-)

Unfortunately the setup is not that straightforward, but it is doable.
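The core trick is that Kubernetes nodes need to run containers themselves, so the LXD containers acting as nodes must allow nesting. A heavily simplified sketch of just that first step (node names are mine; the rest of the kubeadm-style setup is the fiddly part alluded to above):

```shell
# containers that will host Kubernetes need nested containers enabled
lxc launch ubuntu:18.04 kube-master  -c security.nesting=true
lxc launch ubuntu:18.04 kube-worker1 -c security.nesting=true
lxc launch ubuntu:18.04 kube-worker2 -c security.nesting=true

# from here you would install a container runtime and kubeadm in
# each node, init the master, and join the workers to it
```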

I’ll publish soon a package that should help with that, so stay tuned.

Wordpress/other service — hoster

Instead of using full VMs or a single machine in a shared setup, you could use an LXD container for each of your clients. You achieve better security as well as higher density. You can also set resource limits based on each client’s needs/subscription plan! I know of at least one hoster that is using LXD for WordPress.

You need to create a docker container

Wait — what? Yes, sometimes you have to create a more complex Dockerfile — installing packages, configuring files, replacing bits and pieces here and there; you need a dedicated startup script, and so on.

In those cases, instead of trying to mostly guess (I’m exaggerating a bit) what to put in the Dockerfile and continuously rebuilding the Docker image, you can quickly start an LXD container based on an image as close as possible to your Docker base image, and practice the install steps, the configuration, and the startup script in there.
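For example, to prototype steps destined for an Ubuntu-based Dockerfile (the container name is my own example):

```shell
# a scratch container matching the Docker base image
lxc launch ubuntu:18.04 dockerfile-lab

# try out the install/config steps interactively...
lxc exec dockerfile-lab -- bash

# ...and once each step works, transcribe it into a RUN line of
# the Dockerfile; throw the container away afterwards
lxc delete --force dockerfile-lab
```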

Final words

I feel that LXD has its place in day-to-day use, even if you’re also using application containers, and that it can make your tech life easier. But somehow it is not well known enough — hence this article 😄

I hope you enjoyed it and learned a few things. Also, feel free to share your use cases!