A quick-start guide to using the awesome ZFS file system as a storage pool for your LXC containers, via LXD.

There are different storage types for LXC containers, from a basic storage directory to LVM volumes and more complex file systems like Ceph, Btrfs, or ZFS.

In this post, we're going to set up a ZFS pool for our LXC containers, via LXD.

Why ZFS?

ZFS is an awesome file system. It's a 128-bit file system, meaning it can store a nearly unlimited amount of data (no one will ever reach its limit). It replaces RAID arrays with much simpler, safer, and faster "pools", and offers very good performance thanks to compression, copy-on-write, dynamic block sizes, dynamic striping, and extensive use of RAM as cache.

The latter means it uses quite a lot of RAM, so I don't recommend using it on small devices.

See this page for details on these features.
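As a quick taste, compression is a simple per-dataset property you can inspect and tune with one command each (a sketch, assuming a pool named zfs_lxd like the one we create below):

```shell
# Check the current compression setting on a pool/dataset
zfs get compression zfs_lxd

# Enable LZ4 compression (applies only to newly written data)
zfs set compression=lz4 zfs_lxd

# See how well the stored data actually compresses
zfs get compressratio zfs_lxd
```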

Install ZFS

On Ubuntu

ZFS should be installed by default on Ubuntu Server. If it isn't, install the zfsutils-linux package:

apt install zfsutils-linux

On Debian

ZFS is available through the contrib repository on Debian.

For Stretch, make sure contrib is enabled for the main repository in /etc/apt/sources.list, e.g.:

deb http://deb.debian.org/debian stretch main contrib

For Jessie, it's available in the contrib backports:

deb http://deb.debian.org/debian jessie-backports main contrib

First, install the kernel headers. They will allow us to compile and install kernel modules.

apt install linux-headers-$(uname -r)

Then install ZFS and its DKMS module:

apt install zfs-dkms zfsutils-linux

The installation can take quite some time because it builds the ZFS kernel module with DKMS.

When it's done, you will need to load the module:

modprobe zfs

By default you will have to do this after each reboot. To load the module automatically, add it to /etc/modules:

echo "zfs" >> /etc/modules

You should reboot to make sure all the ZFS services are running.
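To verify that everything is in place, you can check that the module is loaded and that the userland tools can reach it (a minimal sketch):

```shell
# The zfs module should appear in the loaded-modules list
lsmod | grep zfs

# With no pools created yet, this should report "no pools available"
zpool status
```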

Setup the ZFS pool with LXD

Creating and using a ZFS pool with LXD is super easy. Just run the lxd init command and choose to configure a new storage pool.

root@host:~# lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]?
Name of the new storage pool [default=default]: zfs_lxd
Name of the storage backend to use (dir, btrfs, ceph, lvm, zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]?
Would you like to use an existing block device (yes/no) [default=no]?
Size in GB of the new loop device (1GB minimum) [default=15GB]: 20

If it's your first time running lxd init, you may want to setup a network bridge afterwards. Otherwise, just skip it:

Would you like to create a new network bridge (yes/no) [default=yes]? n
LXD has been successfully configured.

Use a pool on a real device

Note that by default, LXD creates a loop device for your ZFS pool, which means we're running ZFS on top of our existing filesystem.

This works, but it is not ideal, so you may want to create the pool on a partition, a whole disk, or even multiple disks.

To do that, answer yes to Would you like to use an existing block device during lxd init. You can then enter the name of your drive (/dev/sdX) or partition (/dev/sdaX). Beware: this will erase all data on the selected device.

You can also create the pool manually with this command:

zpool create zpool_name /dev/sdX
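If you have several drives, you can instead build a redundant pool; /dev/sdX, /dev/sdY, and /dev/sdZ below are placeholders for your actual devices (and, as above, their data will be erased):

```shell
# Two-disk mirror (RAID 1-like)
zpool create zpool_name mirror /dev/sdX /dev/sdY

# Or single-parity raidz (RAID 5-like) across three disks
zpool create zpool_name raidz /dev/sdX /dev/sdY /dev/sdZ
```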

Then select it:

root@host:~# lxd init
Do you want to configure a new storage pool (yes/no) [default=yes]?
Name of the new storage pool [default=default]: lxd_zfs
Name of the storage backend to use (dir, btrfs, ceph, lvm, zfs) [default=zfs]:
Create a new ZFS pool (yes/no) [default=yes]? n
Name of the existing ZFS pool or dataset: zpool_name

Using the ZFS pool

To check our newly created pool, run:

root@host:~# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs_lxd  19.9G   267K  19.9G         -     0%     0%  1.00x  ONLINE  -

You can also use:

root@host:~# zpool status
  pool: zfs_lxd
 state: ONLINE
  scan: none requested
config:

	NAME                                          STATE     READ WRITE CKSUM
	zfs_lxd                                       ONLINE       0     0     0
	  /var/snap/lxd/common/lxd/disks/zfs_lxd.img  ONLINE       0     0     0

errors: No known data errors

Here, the whole ZFS filesystem is stored in /var/snap/lxd/common/lxd/disks/zfs_lxd.img . If you're running the pool on a drive, it'll look like this:

root@host:~# zpool status
  pool: zfs_lxd
 state: ONLINE
  scan: none requested
config:

	NAME        STATE     READ WRITE CKSUM
	zfs_lxd     ONLINE       0     0     0
	  sdb       ONLINE       0     0     0

errors: No known data errors

The default profile is set automatically by lxd init to use the ZFS pool:

root@host:~# lxc profile show default
config: {}
description: Default LXD profile
devices:
  eth0:
    nictype: bridged
    parent: lxdbr0
    type: nic
  root:
    path: /
    pool: zfs_lxd
    type: disk
name: default

We can test it by creating a new container:

root@host:~# lxc launch images:debian/9 c1
Creating c1
Starting c1

We can see it uses some space in our pool:

root@host:~# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs_lxd  19.9G   174M  19.7G         -     0%     0%  1.00x  ONLINE  -

Experience the magic of COW. 🐄

root@host:~# lxc launch images:debian/9 c2
Creating c2
Starting c2

root@host:~# zpool list
NAME      SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
zfs_lxd  19.9G   176M  19.7G         -     0%     0%  1.00x  ONLINE  -

Awesome, isn't it?
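Under the hood, LXD clones each container from the image's dataset, so a new container initially shares nearly all of its blocks. You can list the per-container datasets yourself (exact dataset names depend on your LXD version):

```shell
# List the datasets LXD created in the pool, showing space actually
# consumed (used) vs. data referenced (refer)
zfs list -r -o name,used,refer zfs_lxd
```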

Enjoy!

You now have a high-performance ZFS pool for your LXC containers. ZFS has a lot of features that I didn't cover here, as this is meant to be a quick-start guide for LXD + ZFS, but even the defaults give you a lot of benefits!
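For instance, ZFS-backed container snapshots through LXD are nearly instant and almost free, thanks to copy-on-write (a sketch using the c1 container created above):

```shell
# Take a snapshot of the container (backed by a ZFS snapshot)
lxc snapshot c1 before-upgrade

# List the container's snapshots
lxc info c1

# Roll back if something goes wrong
lxc restore c1 before-upgrade
```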
