One of my recent personal interest projects was to get OpenBSD cloud images running on our OpenStack cluster. I used and extended the same pcib software we use for building our Linux images. In doing so, I learned some cool new things about OpenBSD, and more about its limitations.

Overall, I found adapting OpenBSD to the cloud to be a surprisingly straightforward experience, given that the OpenBSD developers eschew the complexity of x86 virtualisation. I credit this to the OpenBSD project’s approach of emphasising simplicity, correctness and portability in its design choices.

Bootstrapping OpenBSD

We begin with the tedious yet rewarding task of putting into place all the bits that extract an OpenBSD filesystem tree into a chroot and make it bootable. The bulk of this work happened in three new pcib plugins:

partitioner/disklabel: Partitions an image in a way that OpenBSD understands.

fs/ffs: Provides a Berkeley Fast Filesystem to install into.

os/openbsd: Handles installing OpenBSD into a chroot.

Given that we’re already well past the point of disregarding the OpenBSD project’s security advice by running OpenBSD on top of QEMU, I decided to simplify partitioning by also ignoring their partitioning advice and using one large filesystem. (That’s not to suggest their advice is bad; it just provides questionable benefits in the cloud.) This makes the implementations of partitioner/disklabel and fs/ffs trivial.

The juicy part is all in os/openbsd. In the BSD world, the package manager is only used to install third-party software, not the base OS; bootstrapping OpenBSD consists of extracting tarballs (“sets”) into a directory. Because there is no package manager to handle signature verification for these sets, I perform the verification manually at the very start of the image build. That way, the rest of the plugin can assume the sets are trustworthy.
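OpenBSD's signify(1) makes that up-front check short. It might look something like this (release number, key path, and set names are illustrative, not the plugin's exact invocation):

```
# verify the signature on the SHA256 list, then check each set against it
signify -C -p /etc/signify/openbsd-57-base.pub -x SHA256.sig \
    base57.tgz comp57.tgz man57.tgz
```

signify exits non-zero if either the signature or any checksum fails, so the build can simply abort at that point.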

The last step is to install the bootloader, which involves far less pain than it does for our Linux images. Compare the one-liner for installing OpenBSD’s bootloader with the tangled mess for installing and configuring GRUB.
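For the curious, that one-liner is an installboot(8) invocation along these lines (the mount point and vnd device are illustrative; they depend on how the image is attached during the build):

```
# install OpenBSD's first- and second-stage bootloaders onto the image,
# with the image's root filesystem mounted at /mnt on vnode disk vnd0
installboot -v -r /mnt vnd0
```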

SSH host keys

If there are no SSH host keys at boot time, /etc/rc will create them, so all I had to do was wrap the part of pcib that handles host keys in an if block that excludes OpenBSD. Simple.

The objective of this design choice in OpenBSD, dating back to the introduction of sshd as a standard daemon in 1999, is to ensure that a host always has a set of keys appropriate to its installed SSH server. For example, when ECDSA keys were added to OpenSSH, one would have been automatically generated upon reboot without any special action needed, either from the sysadmin or by some upgrade automation. Happily, this simple approach makes it just work for our use case, too.

Growing the root filesystem

OpenBSD’s uncloudiness shows its true colours once you create a small image for use on a variable-sized instance. Resizing filesystems has never been treated as a common task on OpenBSD; while there is a utility (growfs) for doing so, it cannot be run on a mounted filesystem and is very slow (around 5 minutes to expand a 1 GB filesystem to 40 GB).

The first limitation is what makes things difficult for a cloud image; since the root filesystem cannot be mounted when it is resized, the resize operation must be done from a ramdisk kernel. The pcib task for building this ramdisk kernel is fairly straightforward, if somewhat paranoid. It first nukes all results of prior builds, then copies the custom ramdisk config into place and builds a fresh kernel. The final step is to install the kernel as bsd.gf (for “growfs”), and configure the bootloader to boot into it by default.
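Pointing the bootloader at the new kernel is just a boot.conf(5) entry written into the image, something like:

```
# /etc/boot.conf: make the growfs ramdisk kernel the default on next boot
set image /bsd.gf
```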

The actual ramdisk kernel config is just as straightforward. It consists of a Makefile that does little other than leverage the existing build infrastructure; a list.local that tells the Makefile to include a growfs binary in the ramdisk; and a .profile that does all the heavy lifting.

Because a ramdisk kernel boots into single-user mode by default, anything in /.profile in the ramdisk will be run automatically. This mechanism is how the OpenBSD installer boots, too. In our case, .profile is just a shell script which removes /etc/boot.conf (so that the instance will boot into its real OS on the second boot), then resizes the first partition to occupy the entire disk and grows the filesystem in it. Finally, it reboots into the real OS.
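A sketch of what such a /.profile might contain (device names and the disklabel editor commands here are illustrative, not the exact script):

```
# grow the 'a' partition to the end of the disk using disklabel's editor:
# c a = change size of partition a, * = use all remaining space
printf 'c a\n*\nw\nq\n' | disklabel -E sd0
# grow the filesystem into the new space, then check it
growfs -y /dev/rsd0a
fsck -y /dev/sd0a
# drop the boot.conf override so the next boot loads the real kernel
mount /dev/sd0a /mnt
rm /mnt/etc/boot.conf
umount /mnt
reboot
```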

What really struck me about all this is how easy it was to modify OpenBSD to fit my use case. I went into this expecting to have to patch the existing build infrastructure, or at least duplicate a large portion of it. Instead, I found it very easy to produce a customised ramdisk kernel in a short amount of time.

EC2 metadata

Mercifully, cloud-init is not available for OpenBSD. This afforded me the opportunity to write my own EC2 metadata plugin for pcib, making use of OpenBSD’s /etc/rc.firsttime feature. If it exists, /etc/rc.firsttime is guaranteed to only ever be run once, on first boot. /etc/rc invokes it as follows:

# If rc.firsttime exists, run it just once, and make sure it is deleted
if [ -f /etc/rc.firsttime ]; then
	mv /etc/rc.firsttime /etc/rc.firsttime.run
	. /etc/rc.firsttime.run 2>&1 | tee /dev/tty |
	    mail -Es "`hostname` rc.firsttime output" root >/dev/null
fi
rm -f /etc/rc.firsttime.run

This makes it the perfect mechanism for initialising things like the hostname and SSH keys from the EC2-compatible metadata service we use with OpenStack. The implementation is around 40 lines of shell, which is shorter than the configuration file for cloud-init.
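A hypothetical excerpt of such an rc.firsttime (the endpoint and paths follow the standard EC2 metadata API; the actual pcib-generated script may differ, and OpenBSD's ftp handles the HTTP fetches):

```
MD=http://169.254.169.254/latest/meta-data

# set the hostname from the metadata service, and persist it
ftp -o - "$MD/local-hostname" > /etc/myname
hostname "$(cat /etc/myname)"

# install the instance's public SSH key for root
install -d -m 700 /root/.ssh
ftp -o - "$MD/public-keys/0/openssh-key" >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
```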

Stable release updates

OpenBSD does not provide builds of stable release updates (for the simple reason that nobody has stepped up to do the builds). Rather, updates are provided in the form of source patches only.

However, there is a third-party service from M:Tier, which provides “binpatches” containing stable release updates for amd64 and i386. These take the form of packages that update only the changed binaries in the system with the latest patches.

In order to ensure that our OpenStack images aren’t lagging behind in security fixes the moment they are booted, we have a repo/m-tier pcib plugin which is responsible for:

configuring M:Tier’s updates repo;
installing available updates at image build time; and
configuring the image to install available updates on first boot.

The last point is achieved, once again, through the use of /etc/rc.firsttime.
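The build-time side of that last point can be sketched roughly as follows. Here $ROOT stands in for the image's mounted root, and MTIER_REPO is a placeholder rather than M:Tier's real repository URL; the real plugin's names and commands may differ:

```shell
# create a stand-in for the image's root filesystem
ROOT=$(mktemp -d)
mkdir -p "$ROOT/etc"

# append a run-once update step to the image's rc.firsttime;
# on first boot, /etc/rc will execute this and then delete it
cat >> "$ROOT/etc/rc.firsttime" <<'EOF'
# pull in any binpatches published since the image was built
PKG_PATH=$MTIER_REPO pkg_add -u
EOF

cat "$ROOT/etc/rc.firsttime"
```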

The future

All known major bugs have been ironed out, and the latest images I’ve built boot and run smoothly (modulo the 5-minute wait for growfs to do its thing). The next non-trivial work will come in November, when OpenBSD 5.8 is released.

Our images are built without swap at the moment, partly to avoid unnecessary load on our Ceph cluster. Since we haven’t needed it, pcib is currently unable to build OpenBSD images with a swap partition, but that will be fairly easy to add now that the basics are operational.

There is a GSoC project to port the HAMMER2 filesystem to OpenBSD this year. While bootloader support is an explicit non-goal of that project, HAMMER2 is likely to appear in OpenBSD in the next few years, given that HAMMER is the only modern filesystem with a licence acceptable to the OpenBSD project. If it does, extending pcib to build OpenBSD images on HAMMER2 instead of FFS would definitely be a priority.

Closing thoughts

Given the OpenBSD project’s aversion to x86 virtualisation, I expected the process of building cloud images to be a lot more challenging than it actually was. I would put this down to three aspects of OpenBSD’s design:

Simplicity: A repeating theme throughout the OpenBSD source tree is clean, simple code that just does what it says. Complemented by OpenBSD’s comprehensive man pages, it’s hard to find a part of the system that can’t be easily understood.

Correctness: OpenBSD’s hardcore approach to correctness includes taking a firm stance against binary blobs and NDAs. They don’t only want their source code to be free, they want it to be fixable, and to achieve that it (and the hardware it runs on) needs to be documented. The result is a slower development pace than Linux users are accustomed to, but software that just does the right thing.

Portability: Despite the fact that we’re all amd64 here, I believe the historical portability of the source tree (dating back to 4.3BSD-Tahoe in 1988) goes a long way to improving its reliability. The OpenBSD developers are used to testing their changes across many different platforms, which expose many different sorts of bugs, usually before those changes get committed to the tree. The result is fewer bugs and better adaptability for everyone.

At the end of it all, I set out dabbling with my favourite OS on a personal interest project and ended up open-sourcing a tool for building OpenBSD cloud images. I count that as a win.