History time.

The server held a Hyper-V Windows 2018 installation with a single VM. The internal RAID controller presented one enormous volume that stored the virtual disks for that single VM, which was called Archive02.

The Archive02 VM was full, and I had no spare space on any other server or on tape, so there was no chance of reinstalling the machine from scratch. The total came to more than 90 TiB, so no cloud option would have been practical either.

The server is a Supermicro SSG-6048R, a 60-bay storage server, and for my purposes it had eleven free disk slots. I filled them up with Seagate Exos X14 drives, 14 TB each.

So, how to migrate, and make use of those eleven free slots?

Windows Storage Server has no ZFS support. But using Hyper-V, I created a FreeNAS server as a VM and presented each drive to it as a single-drive RAID-0 volume. The reason for this workaround is that the RAID controller did not support mixing RAID and JBOD.
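For reference, handing a raw disk over to a Hyper-V VM looks roughly like this from an elevated PowerShell prompt on the host; the VM name and disk number are made up for the sketch, and in my case each “disk” was one of the single-drive RAID-0 volumes the controller exposed:

# A disk must be offline on the host before Hyper-V
# will accept it as a pass-through disk.
Set-Disk -Number 4 -IsOffline $true
# Attach the physical disk to the FreeNAS VM's SCSI controller.
Add-VMHardDiskDrive -VMName "TANK" -ControllerType SCSI -DiskNumber 4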

RAID-0 as JBOD, a tiny little hack.

Yes, I did build a “mini LAB” to test it before getting too serious ;)

I had read on the iXsystems forum about a person who faked JBOD by having his RAID card hand out multiple RAID-0 volumes, one per drive (even though “cyberjock” nearly killed him for it). I replicated that setup and planned to swap the RAID card for a pure JBOD card later in the process.

A co-worker said that the actual RAID information sits at the very beginning of a disk, before any filesystem information, so in theory the blocks where ZFS stores its first data come after the blocks where the RAID information lives; hence I could move the drives from RAID to JBOD.
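One way to sanity-check that theory (my addition, not something the co-worker supplied): ZFS keeps four copies of its vdev label on every disk, two at the front and two at the back, and zdb will print them from the FreeNAS console:

# Dump the ZFS vdev labels of one disk (device name hypothetical).
# Four readable labels suggest the RAID metadata does not overlap
# anything ZFS cares about.
zdb -l /dev/da0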

FreeNAS Setup.

My goal with the TANK was never a speed server, but a server that could lose some drives and still stay operational. There is an old article to keep in mind when working with 14 TB drives, “Why RAID 5 stops working in 2009”. It is just as relevant to RAID-Z1.
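To put rough numbers on why (my own back-of-the-envelope, assuming the often-quoted consumer rating of one unrecoverable read error per 10^14 bits): resilvering an 11-wide RAID-Z1 after one failure means reading the ten surviving drives, about 10 × 14 TB = 140 TB ≈ 1.1 × 10^15 bits, so you would expect on the order of ten read errors during a single rebuild. Even at the 1-per-10^15-bits rating typical of enterprise drives, the expectation is still around one error per resilver. Hence the extra parity.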

Back to the triangle from Calomel — Open Source Research and Reference: our focus this time is squarely on integrity, with a small slice of capacity.

             capacity
                /\
               /  \
              /    \
             /      \
performance /________\ integrity

My budget allowed me to buy 35x Seagate Exos X14, and I had to consider that in the beginning I would only have 11 free slots.

Some math.

11x 14 TB drives is 154 TB raw. I needed at least 90 TB; only then could I remove the old hard drives and install more of the beautiful 14 TB drives.

Speed-wise, I have learned that one vdev only delivers the IOPS of a single drive. I won't be needing a lot of IOPS, as this is an archive, basically a backup of the production server, and only a few folks will have access to it. Since my focus is on integrity, I want at least three drives of fault tolerance per 15 drives.

— — — — — — — — — — — — — — — — — — — — — —

Single RAID-Z3, with all 11 drives (three-drive fault tolerance)

Zpool storage capacity: 153.931628 TB

Reservation for parity and padding: 41.981353 TB

Zpool usable storage capacity: 111.950275 TB

Slop space allocation: 3.498446 TB

ZFS usable storage capacity: 108.451829 TB

— — — — — — — — — — — — — — — — — — — — — —

Dual RAID-Z2, with ten drives (four-drive fault tolerance, two per vdev; double the IOPS)

Zpool storage capacity: 139.637977 TB

Reservation for parity and padding: 56.889546 TB

Zpool usable storage capacity: 82.748431 TB

Slop space allocation: 2.585888 TB

ZFS usable storage capacity: 80.162542 TB

Capacity calculator

— — — — — — — — — — — — — — — — — — — — — —

I had to use the single RAID-Z3 setup because of the 90 TB coming from Archive02. It was also the layout that gave the most space per drive, with only about a 30% loss.
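(Sanity-checking the calculator by hand: RAID-Z3 gives 3 of 11 drives to parity, and 153.93 TB × 8/11 ≈ 111.95 TB matches the usable pool figure; ZFS then holds back 1/32 of that as slop space, 111.95 / 32 ≈ 3.5 TB, which lands on the 108.45 TB final number.)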

So the end design I landed on was:

5x RAID-Z3 vdevs with 11 drives each (55 drives total)

3x 1 TB SSDs as L2ARC

2x hot spares, one for the old drive model and one for the new.

With this design, up to 15 drives in total could fail, at most three within the same vdev, before losing data. Of the 35x Seagate Exos X14 drives in my budget, I could use 33x as active storage, which left me one hot spare and one shelf spare. I also reused 22x of the older Toshiba drives, which I could phase out in the next, not-so-crazy period.
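Laid out by hand from the console, the pool would look roughly like the sketch below. The device names are placeholders, only two of the five raidz3 groups are written out, and FreeNAS actually builds this from the GUI using gptid labels rather than raw device names.

# Sketch of the pool layout; repeat the raidz3 group five times
# to cover all 55 data drives.
zpool create tank \
  raidz3 da0 da1 da2 da3 da4 da5 da6 da7 da8 da9 da10 \
  raidz3 da11 da12 da13 da14 da15 da16 da17 da18 da19 da20 da21 \
  cache ada1 ada2 ada3 \
  spare da55 da56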

Let’s focus back on the VMs from hell.

The server now had two VMs: Archive02, with its virtual disk stored on the big RAID volume, and TANK, with “fake” JBOD through RAID-0.

The design was ready, and all eleven drives were added into a single RAID-Z3 vdev. I created the storage volume and a single Samba share. On the Windows VM, I mounted the FreeNAS share and used Robocopy to dump the files from one VM to the other.

robocopy “G:\SOURCE” “\\TANK\DESTINATION” /copyall /mir /fft /r:1 /w:1 /mt:64 /zb /np /ndl /xjd
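For the curious: /mir mirrors the source tree (including deletions at the destination), /copyall carries attributes, timestamps, and NTFS security along, /fft tolerates the two-second timestamp granularity of a Samba target, /r:1 /w:1 keeps retries short instead of the default million retries, /mt:64 runs 64 copy threads, /zb retries access-denied files in backup mode, and /np /ndl /xjd quiet the log and skip directory junctions.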

Speed was OK; the job only ran from server to server. I got from 100 to 300 MB/s write, depending on small files vs. big files.

One week later

Ever have the feeling that what you are doing is incredibly stupid?

Well, that was the day I shut down the VM machine from hell, took out the 22 old drives, and swapped the RAID card for a JBOD card.

Booted up the machine, installed a fresh FreeNAS on the OS drives, and imported the TANK volume back.
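The import itself is the easy part; from the console it is a one-liner (pool name assumed to be “tank” here, and in FreeNAS you would normally do this through the GUI's Import Volume so the middleware knows about the pool):

# List pools visible on the attached disks, then import by name.
zpool import
zpool import tank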

That was the day, but … it worked.