Distributed storage is something I’ve been very interested in for a while. It has gotten me into Fibre Channel SAN systems (although the cost really prohibits any further exploration there) and now I’m on to Ceph. The real driving force behind this is the need for network storage for OpenStack. I would like to implement persistent storage for VMs as well as allow live migration in the event of a node failure. Neither of those is really an option without some form of distributed storage, and Ceph seems to be an excellent candidate!

Admittedly, I don’t really have any good hardware for this, so I’m going to make do with what I have at the moment. As I mentioned in the last state of the lab post, I would like to convert my 4U Supermicro into a dedicated storage server, but that means I need a new hypervisor first, which I don’t have yet. So, just to get things started, I’m going to be using my R410 as a storage system.

This system is configured with two Xeon E5520s and 24GB of DDR3 RAM. It has an H700 RAID controller with 2x 1TB SAS drives, 1x 1TB SATA drive, and 1x 2TB SATA drive. The two SAS drives are configured in a redundant array that will run the base OS (I know it’s stupid overkill, but it’s what I have for spares at the moment), and the two SATA drives will be used for the Ceph pools.

My plan is to use the 2TB disk for volumes and the 1TB disk for VMs, but I may change all that later.
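Just so I have it written down somewhere, my rough understanding from the docs is that once the OSDs are in place, creating those two pools should look something like the sketch below. The names (volumes, vms) and the placement group count of 128 are just placeholders I picked while reading, not a tested configuration, and actually pinning each pool to a specific disk apparently involves custom CRUSH rules that I haven't dug into yet:

    # create a pool intended for Cinder volumes (name and PG count are placeholders)
    ceph osd pool create volumes 128
    # create a pool intended for VM disks
    ceph osd pool create vms 128
    # list pools to confirm they were created
    ceph osd lspools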

I’m not really familiar with Ceph at all, so before really digging into integrating it with OpenStack, I’m going to explore it on its own first. I’ll post more about my explorations once I get it all installed and running.

See you soon!