UPDATE: The URLs below are dead. I no longer work at Canonical, and don’t know if file system benchmarking is still part of their kernel testing process.

I’ve been working to implement file system benchmarking as part of the test process that the kernel team applies to every kernel update. These benchmarks are intended to help us spot performance issues. The following announcement, which I just sent to the Ubuntu kernel mailing list, covers the specifics:

[EDIT] Fixed tags to enable the copied email text to flow.

——————————————————————————————

The Ubuntu kernel team has implemented the first of what we hope will be a growing set of benchmarks which are run against Ubuntu kernel releases. The first two benchmarks to be included are iozone file system tests, with and without fsync enabled. These are being run as part of the testing applied to all kernel releases.
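The announcement doesn’t publish the exact commands, but a minimal sketch of what runs with and without fsync timing might look like, using iozone’s standard -a (full automatic mode), -e (include fsync/fflush time in the results), and -g (maximum file size) options — the flag choices and the 4G size here are assumptions, not the team’s actual configuration:

```shell
# Hypothetical iozone invocations -- illustrative only; the team's real
# flags and sizes are not given in the announcement.

# Automatic mode sweep, fsync time NOT included, files up to 4 GB:
plain_run="iozone -a -g 4G"

# The same sweep with -e, so fsync/fflush cost is charged to each test:
fsync_run="iozone -a -e -g 4G"

# Print rather than execute, since a full sweep can take hours:
echo "$plain_run"
echo "$fsync_run"
```

The -e flag is what separates the two benchmarks: without it, writes that land only in the page cache look artificially fast; with it, the flush cost is part of the measured throughput.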

== Disclaimers ==

READ THESE CAREFULLY

1. These benchmarks are not intended to indicate performance in any real-world or end-user situation. They are intended to expose possible performance differences between releases, not to reflect any particular use case.

2. Fixes for file system bugs reduce performance in some cases. Performance decreases between releases may be a side effect of fixing bugs, and not bugs in themselves.

3. While assessments of performance are valuable, they are not the only criteria that should be used to select a file system. In addition to benchmarks, file systems must be tested for a variety of use cases and verified for correctness under a variety of conditions.

== General Information ==

1. The top level benchmarking results page is located here:

http://kernel.ubuntu.com/benchmarking/

This page is linked from the top level index at kernel.ubuntu.com.

2. The tests are run on the same bare-metal hardware for each release, on spinning magnetic media.

3. Test partitions are sized at twice system memory size to prevent the entire test data set from being cached.

4. File systems tested are ext2, ext3, ext4, xfs, and btrfs.

5. For each release, each test is run on each file system five times, and then the results are averaged.
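The sizing rule in point 3 and the averaging in point 5 are simple arithmetic; a sketch in shell, where the 8 GiB fallback and the throughput figures are made-up illustrative numbers:

```shell
# Point 3: size the test partition at twice system memory, so the whole
# data set cannot sit in the page cache.
if [ -r /proc/meminfo ]; then
    mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
else
    mem_kb=8388608   # assumed 8 GiB, for illustration on non-Linux hosts
fi
part_kb=$((mem_kb * 2))
echo "minimum test partition size: ${part_kb} kB"

# Point 5: five runs per file system per release, reported as the mean.
# The throughput figures (kB/s) below are invented for the example.
runs="41200 39800 40550 41010 40140"
avg=$(echo "$runs" | awk '{ s = 0; for (i = 1; i <= NF; i++) s += $i; print s / NF }')
echo "mean throughput: ${avg} kB/s"
```

If the partition were sized at or below memory size, repeated reads would be served from cache and the benchmark would measure RAM, not the disk or the file system.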

== Types of results ==

There are three types of results. To find performance regressions, we (the Ubuntu kernel team) are primarily interested in the second and third types.

1. The Iozone test generates charts of the data for each individual file system type. To navigate to these, select the links under the “Ran” or “Passed” columns in the list of results for each benchmark, then select the test name (“iozone”, for example) from that page. The graphs for each run for each file system type will be available from that page in the “Graphs” column.

The second and third result sets are generated by the iozone-results-comparator tool, located here:

http://code.google.com/p/iozone-results-comparator/

2. Charts comparing performance among all tested file systems for each individual release. To navigate to these, select the links under the “Ran” or “Passed” columns in the list of results, then select the “charts” link at the top of that page.

3. Charts comparing different releases to each other. These comparisons are generated for each file system type, and are linked at the bottom of the index page for each benchmark. These comparisons include:

3A. Comparison between the latest kernel for each Ubuntu series (e.g. raring, saucy, etc.).

3B. Comparison between the latest kernel for each LTS release.

3C. Comparison of successive versions within each series.