The IOPS limitation of GlusterFS has little to do with the lower-level storage. It is entirely down to the overhead of fetching a file from the cluster when the location of that file is not in the DHT and every node needs to be queried.

During the R&D phase I tested single 250GB drives, and then 2x250GB drives in a RAID0 stripe. The difference in performance for Gluster was almost nothing, despite there being double the spindles available, and despite local (non-Gluster, standard local XFS) per-node IOPS testing reflecting the extra spindles.

Additionally, each system has three levels of cache:

* Linux file system cache (32GB RAM per node)
* LSI Controller RAM cache (1GB RAM per controller)
* LSI Controller SSD cache (512GB Intel SSD per node)

All three of these buffer random IOPS and serialise them into a stream to the platter drives on page flush. The OS itself is set to use the elevator=noop scheduler for all disks, allowing the controller to receive data directly without interference or processing from the OS. That works better on file storage systems (doing the same on your laptop or desktop would be worse, but that's a different use case).

With 16 drives per node, RAID6+1 is purely for redundancy and maximising available disk. RAID10 wouldn't give us much benefit at all, given how little extra IOPS does for Gluster, the existing triple cache layer, and the disk it would waste with 16 drives per node.
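
To make the DHT point concrete, here's a rough Python sketch (illustrative only, not Gluster's actual code, and the node names are made up): placement is a pure hash of the filename, so a lookup is one round trip when the hash is right, but a miss (stale layout, renamed file) degenerates into querying every brick, and that cost grows with cluster size:

```python
import hashlib

BRICKS = ["node1", "node2", "node3", "node4"]  # hypothetical 4-node cluster

def hashed_brick(filename):
    """Pure hash placement: no lookup table, just arithmetic on the name."""
    digest = int(hashlib.md5(filename.encode()).hexdigest(), 16)
    return BRICKS[digest % len(BRICKS)]

def lookup_cost(filename, actual_location):
    """One network round trip on a hash hit; N round trips on a miss."""
    if hashed_brick(filename) == actual_location:
        return 1                # single brick queried
    return len(BRICKS)          # broadcast: every brick queried

# A renamed file no longer lives where its new name hashes to, so every
# node gets asked -- that's the IOPS ceiling, not the spindles underneath.
print(lookup_cost("report.pdf", hashed_brick("report.pdf")))      # -> 1
print(lookup_cost("report-old.pdf", hashed_brick("report.pdf")))  # -> 4 (likely miss)
```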
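
The "buffer random IOPS and serialise them" behaviour is essentially write coalescing. A toy model of the idea (obviously not the controller's firmware): absorb random writes into fast memory, then flush them sorted by offset so the platters see one near-sequential stream instead of seeks:

```python
def write_to_platter(offset, data):
    print(f"platter write @ {offset}")  # stand-in for the real device write

class CoalescingCache:
    """Toy write-back cache layer (RAM or SSD sitting in front of platters)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.dirty = {}  # offset -> data, absorbed in any order

    def write(self, offset, data):
        self.dirty[offset] = data          # random write lands in fast memory
        if len(self.dirty) >= self.capacity:
            self.flush()

    def flush(self):
        # Page flush: emit dirty pages in offset order, so the spinning
        # disks get a mostly-sequential stream instead of random seeks.
        for offset in sorted(self.dirty):
            write_to_platter(offset, self.dirty[offset])
        self.dirty.clear()

cache = CoalescingCache(capacity=4)
for offset in (9001, 12, 512, 7):   # random-looking write pattern
    cache.write(offset, b"x")       # one flush, in order: 7, 12, 512, 9001
```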
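
Setting elevator=noop can be done on the kernel command line at boot or at runtime via sysfs. A minimal runtime sketch (needs root), assuming /dev/sd* device naming and a kernel that still offers the legacy noop scheduler (newer multi-queue kernels call the equivalent "none"):

```python
import glob, pathlib

# Hand every sd* disk's queue straight to the controller: the OS stops
# reordering requests, since the controller's cache layers do that job here.
for sched in glob.glob("/sys/block/sd[a-z]/queue/scheduler"):
    pathlib.Path(sched).write_text("noop")
```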
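
On the RAID question, the capacity arithmetic is what decides it. Reading "RAID6+1" as a 15-drive RAID6 set plus one hot spare (my assumption about the layout), per 16-drive node:

```python
DRIVES = 16

# RAID6 + 1 hot spare: 15 drives in the set, 2 consumed by parity.
raid6_usable = (DRIVES - 1) - 2     # 13 drives of capacity, survives 2 failures
# RAID10: every byte mirrored, so half the drives go to redundancy.
raid10_usable = DRIVES // 2         # 8 drives of capacity

print(raid6_usable, raid10_usable)  # 13 vs 8
```

Since the extra random IOPS that RAID10 buys are absorbed by the cache layers anyway, that's five drives of capacity per node spent on a benefit Gluster can't use.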