Canonical just announced a major addition that will please anyone interested in file storage: Ubuntu 16.04 will include the ZFS filesystem module by default, and the OpenZFS-based implementation will get official support from Canonical.

ZFS support was already available "as a technology preview" in Ubuntu 15.10, where it could be installed via an apt-get command but had to be compiled from source first. That's no longer necessary in 16.04, though you'll still need to install the zfsutils-linux package to create and manage ZFS volumes. Putting an official, installed-by-default, fully supported version into an LTS release of Ubuntu is a big vote of confidence, especially since people running Ubuntu-based servers often stick to LTS releases for maximum stability.
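For readers who want to try it, a minimal session on 16.04 might look like the following sketch. The pool name `tank` and the device paths are placeholders, not anything from Canonical's announcement; check your actual disk names with `lsblk` before running anything like this.

```shell
# Install the ZFS userspace tools (the kernel module ships with Ubuntu 16.04)
sudo apt install zfsutils-linux

# Create a mirrored pool from two spare disks -- device names are examples only
sudo zpool create tank mirror /dev/sdb /dev/sdc

# Check pool health and the datasets (pools mount at /tank by default)
zpool status tank
zfs list
```

A mirror is the simplest layout that lets ZFS actually repair corrupted blocks rather than just detect them, which is why it's used in the example.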

ZFS is used primarily in cases where data integrity is important—it's designed not just to store data but to continually check on that data to make sure it hasn't been corrupted. The oversimplified version is that the filesystem generates a checksum for each block of data. That checksum is then saved in the pointer for that block, and the pointer itself is also checksummed. This process continues all the way up the filesystem tree to the root node, and when any data on the disk is accessed, its checksum is calculated again and compared against the stored checksum to make sure that the data hasn't been corrupted or changed. If you have mirrored storage, the filesystem can seamlessly and invisibly overwrite the corrupted data with correct data.
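The checksum-tree idea described above can be sketched in a few lines. This is an illustration of the concept, not ZFS code: block checksums are stored in the pointers that reference the blocks, pointer records are checksummed in turn, and a mismatch on read reveals silent corruption.

```python
import hashlib

def checksum(data: bytes) -> str:
    """Hash a block; ZFS uses configurable checksums, SHA-256 here for illustration."""
    return hashlib.sha256(data).hexdigest()

# Four data blocks on "disk"
blocks = [b"block A", b"block B", b"block C", b"block D"]

# Leaf level: each pointer record stores the checksum of the block it references
pointers = [checksum(b) for b in blocks]

# Parent level: the pointer records are themselves checksummed,
# continuing up the tree to a single root checksum
root = checksum("".join(pointers).encode())

def verify(block: bytes, stored: str) -> bool:
    """On every read, recompute the checksum and compare to the stored one."""
    return checksum(block) == stored

# Intact data verifies cleanly against the stored checksums
assert all(verify(b, p) for b, p in zip(blocks, pointers))

# Simulated bit rot: the stored checksum no longer matches, so the
# filesystem knows the block is bad and can rewrite it from a mirror
assert not verify(b"block X", pointers[0])
```

Because each level validates the one below it, a corrupted block can't go unnoticed without the mismatch propagating all the way to the root.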

This and other resiliency features make ZFS a popular filesystem for file storage, and it's the default filesystem for storage-oriented operating systems like FreeNAS. A potential downside is that the filesystem really likes RAM, which is used as a read and write cache and for verifying checksums and parity data. 1GB of RAM per TB of storage is the generally recommended rule of thumb. If you decide to use its space-saving data deduplication features (not likely to be very beneficial for home use, though much more valuable in enterprise scenarios), the recommended RAM requirements go up even more.

More details and instructions for getting started with ZFS in the new version of Ubuntu are available in the announcement post.

Update: Richard Yao, a ZFS developer, contacted us after this article was published to clarify some points about ZFS' RAM requirements. "The code base [for non-deduplication cases] would work fine with 1GB of system RAM for any amount of storage," Yao told Ars, adding that "the only downside is that the cache hit rate declines if the working set is big." The 1GB of RAM per 1TB of storage requirement is actually related to ZFS' data deduplication features, but according to Yao the math required to calculate the ideal amount of RAM is so highly variable that it defies easy rules of thumb. RAM requirements are tied directly to the amount of duplicated data stored on your volumes and a variety of other factors.

"The '1GB rule of thumb' is commonly cited as describing memory requirements for data deduplication and is unfortunately wrong," said Yao. "It started many years ago and despite my best efforts to kill that misinformation, people keep spreading it faster than I can inform them of the actual requirements."