FreeBSD now on all EC2 instance types

Six months ago I announced here that I had managed to get FreeBSD running on 64-bit Amazon EC2 instances by defenestrating Windows AMIs. That took the set of EC2 instance types FreeBSD could run on from three (t1.micro and c[cg]1.4xlarge) up to nine by adding all of the large and extra-large instance types; but FreeBSD still couldn't boot on "high-CPU medium" instances or on the "standard small" instance type — the one which got EC2 started, and which I suspect is still the most popular of all the options. Today I am pleased to announce that FreeBSD 9.0-RELEASE AMIs are now available for all EC2 instance types.

I tried building FreeBSD AMIs for the 32-bit EC2 instance types the same way as I did the 64-bit instances — by defenestrating Windows AMIs — but there was a catch: They wouldn't boot. In fact, when I tried this six months ago, they not only didn't boot, but they didn't even produce any console output to help me figure out what was going wrong. A few days ago I tried again and found that while FreeBSD still wasn't booting, it was now at least producing console output (I'm guessing Amazon changed something, but I couldn't say what) and I was able to see where the boot process was failing: It was at the point when FreeBSD launched its paravirtualized drivers.

Disabling the paravirtualized drivers and running FreeBSD in "pure" HVM (aka. a "GENERIC" kernel) got it booting, but wasn't a useful solution: The EC2 network is only available as a paravirtual device. Talking to other FreeBSD developers, I confirmed that the non-functionality of PV drivers under i386/HVM was a known issue, but nobody had managed to track it down yet. I started digging — a process which involved building FreeBSD kernels on a 64-bit EC2 instance and installing them onto an Elastic Block Store volume which I moved back and forth between that instance and a 32-bit instance; starting and stopping the 32-bit instance to trigger a boot attempt; and reading the output of my debugging printfs from the EC2 console.

And it turned out that the bug was embarrassingly trivial. When the HVM framework for paravirtualized drivers was ported from 64 bits to 32 bits, a definition needed to be added for the __ffs function; we naively assumed that FreeBSD's ffs function would do the trick. Sadly no; while both functions have the same basic functionality (finding the first set bit in an integer) they have one critical difference: ffs counts from one, while __ffs counts from zero. Fix one line, and FreeBSD could boot under HVM with paravirtualized drivers enabled. Run through my Windows-AMI-defenestrating scripts, and I had a FreeBSD AMI which worked on 32-bit instances. From there it was all straightforward. Some minor reorganization of my patches; the final AMI build; and the slow process of copying the AMI I built in the US-East region out to the six other regions — that last step fortunately being made considerably less painful by the scripts I wrote yesterday for loading host keys into .ssh/known_hosts based on fingerprints printed to the EC2 console.

What's next for FreeBSD/EC2? Well, the technical issues have been resolved, and FreeBSD is available everywhere; but there are still a few non-technical issues to handle. On FreeBSD's side, I need to merge my patches into the main tree, and we need to build an EC2-compatible kernel (aka. the XENHVM kernel configuration) as part of the release process. On Amazon's side, I'm hoping that at some point they'll eliminate the 'Windows tax' by providing a mechanism for running in HVM mode without being labelled as a "Windows" instance; and I'd love to see the FreeBSD logo showing up in the EC2 Management Console instead of the Windows logo.

But those are all minor problems. The hard work is done, and for now — after five years of trying — I'm going to enjoy having an EC2 small instance run my operating system of choice.