
Introduction

In Windows Server 2016, Microsoft added a new type of storage called Storage Spaces Direct (S2D). S2D enables building highly available storage systems with locally attached disks, without the need for any external SAS fabric such as shared JBODs or enclosures. This is a significant step forward for Microsoft's software-defined storage (SDS) story in Windows Server 2016, and it reduces the cost even further.

The following two diagrams give you an overview of the Storage Spaces Direct stack in the Converged (disaggregated) and Hyper-Converged models.

Storage and compute in separate clusters [Image Source: Microsoft]

Storage and compute in the same cluster [Image Source: Microsoft]

In today's blog post, I will walk you through how to expand and resize an existing Storage Spaces Direct Cluster Shared Volume (CSV).

Expand S2D CSV Volume

In this example I am using a Hyper-Converged model with 3 nodes and 3-way mirror virtual disks.

As you can see below, I have four virtual disks in the S2D cluster, named "Collect", "vDisk02", "vDisk03", and "vDisk04", with capacities of 127 GB and 700 GB.

And each node has 2 X 960 GB SSDs for caching and 4 X 1 TB HDDs for capacity.

First things first: before making any changes, we need to check the health, operational status, and footprint on the storage pool of the existing virtual disks.

Let’s open Windows PowerShell and get the existing virtual disk information.

Get-VirtualDisk * | Sort FriendlyName | FT FriendlyName, OperationalStatus, HealthStatus, @{Label='Size(GB)';Expression={$_.Size/1GB}}, @{Label='FootPrintOnPool(GB)';Expression={$_.FootprintOnPool/1GB}} -AutoSize

As mentioned earlier, in this example I am using 3-way mirror as the resiliency setting, so each 700 GB virtual disk will occupy 700 GB x 3 = 2.1 TB of footprint on the pool.
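The footprint math above can be sanity-checked in PowerShell; note that 2,100 GB works out to about 2.05 TB in binary units, which rounds to the ~2.1 TB quoted above:

```powershell
# 3-way mirror keeps three copies of the data, so the pool footprint
# is the usable volume size multiplied by three.
$volumeSize = 700GB                  # usable capacity of one virtual disk
$copies     = 3                      # 3-way mirror
$footprint  = $volumeSize * $copies  # bytes consumed in the pool
"{0:N0} GB (~{1:N2} TB)" -f ($footprint/1GB), ($footprint/1TB)
# 2,100 GB (~2.05 TB), roughly the 2.1 TB quoted above
```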

Here is the Show-PrettyVolume output; the script was written by Cosmos Darwin, a Program Manager on the storage team at Microsoft.

Next, let’s check the remaining capacity in the Storage Pool by running the following command:

Get-StorageSubSystem *Cluster* | Get-StorageHealthReport

Here, 16.16 TB is the total physical pool capacity, and 4.56 TB is the physical capacity remaining in the pool, before accounting for resiliency.

Let’s see now the maximum resilient capacity that we can add to the virtual disk in order to expand the existing volume(s).

Get-StorageTierSupportedSize -FriendlyName Capacity -ResiliencySettingName Mirror | FT @{l="TierSizeMax(TB)";e={$_.TierSizeMax/1TB}}

The remaining mirror capacity is 1.45 TB.

Please note that, based on this example, Microsoft recommends leaving 2 x 1 TB drives' worth of capacity free in the pool, but it's just that: a recommendation. The reason is that if you consume that space and then experience a drive failure, Storage Spaces Direct will not be able to perform an immediate, "in-place" repair; it will only repair successfully after you have replaced the failed physical device. If instead you leave at least 2 TB of free space in the pool, Storage Spaces Direct can repair immediately, even before the physical disk is replaced.
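As a rough sketch of that check (the pool name pattern "S2D*" and the two-drive reserve size are assumptions taken from this example, not fixed values), you could compare the pool's free space against the recommended repair reserve like this:

```powershell
# Sketch: warn if free pool space falls below a two-drive repair reserve.
# Assumptions: the default S2D pool name pattern "S2D*" and 2 x 1 TB drives.
$reserve = 2 * 1TB
$pool    = Get-StoragePool -FriendlyName "S2D*"
$free    = $pool.Size - $pool.AllocatedSize
if ($free -lt $reserve) {
    Write-Warning ("Only {0:N2} TB free, below the {1:N2} TB reserve needed for in-place repair." -f ($free/1TB), ($reserve/1TB))
}
```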

We can confirm this by running the following command:

Get-StorageSubSystem *Cluster* | Debug-StorageSubSystem

Let’s now resize and expand the volume, by running the following command:

# Expand the size of all virtual disks to 1 TB
Get-VirtualDisk vDisk* | Get-StorageTier | ? ResiliencySettingName -eq Mirror | Resize-StorageTier -Size 1024GB
# This was a 324 GB increase

Here is another important point to remember: when you resize a Storage Spaces Direct CSV volume, you have to specify the new total size, not the amount you want to add. In this example, the existing virtual disk volume is 700 GB and I want to add 324 GB, so I specify 700 GB (existing) + 324 GB (new) = 1,024 GB total. The same concept applies if you are resizing a Multi-Resilient hybrid volume (Performance / Capacity).
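If you would rather think in terms of the increase, a small sketch (the variable names are mine) can derive the required total from each tier's current size:

```powershell
# Resize-StorageTier expects the new TOTAL size, not the increment,
# so add the desired delta to each mirror tier's current size.
$delta = 324GB
Get-VirtualDisk vDisk* | Get-StorageTier |
    Where-Object ResiliencySettingName -eq Mirror |
    ForEach-Object { $_ | Resize-StorageTier -Size ($_.Size + $delta) }
```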

Let’s check now the new size of each virtual disk by running the following command:

Get-VirtualDisk vDisk* | FT FriendlyName, @{Label='Size(GB)';Expression={$_.Size/1GB}}, @{Label='FootPrintOnPool(GB)';Expression={$_.FootprintOnPool/1GB}} -AutoSize

And here is the result shown in Failover Cluster Manager:

We are not done yet. Failover Cluster Manager shows under Disks that the Cluster Virtual Disk(s) are 1 TB in capacity, but the Cluster Shared Volume (CSVFS) is still at 700 GB, as shown in the next screenshot.

Once the virtual disk(s) are expanded, you will also have to expand the partition size for each one.

Please note that when you want to expand the partition, you need to do so on the owner node of that volume.
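You can check which node currently owns each CSV with the FailoverClusters module (a quick sketch; the cluster name "S2DCLU" is the one used in this example):

```powershell
# List each Cluster Shared Volume and the node that currently owns it.
Import-Module FailoverClusters
Get-ClusterSharedVolume -Cluster "S2DCLU" |
    Format-Table Name, OwnerNode, State -AutoSize
```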

To automate this process, I created the following script that you can run from your management machine to expand the partition for all volumes.

$Cluster = "S2DCLU"
$vDisks = Get-VirtualDisk vDisk* -CimSession $Cluster

foreach ($vDisk in $vDisks) {
    # Find the maximum supported size for the basic partition on this disk
    $vDiskMax = Get-VirtualDisk $vDisk.FriendlyName -CimSession $Cluster |
        Get-Disk | Get-Partition | ? Type -eq Basic | Get-PartitionSupportedSize

    # Grow the partition to that maximum
    Get-VirtualDisk $vDisk.FriendlyName -CimSession $Cluster |
        Get-Disk | Get-Partition | ? Type -eq Basic |
        Resize-Partition -Size $vDiskMax.SizeMax
}

And here is the final result in Failover Cluster Manager:

Conclusion

Microsoft has a great Storage Spaces Direct Overview which goes into more detail and is well worth a read.

Hopefully the above notes and screenshots illustrate how you can expand and resize a Storage Spaces Direct CSV volume when you have a need to do so.

Until next time… Enjoy your weekend!

Cheers,

[email protected]