One of the welcome features newly enabled in vSphere 6.5 is the return of VMFS storage space reclamation. On VMFS5 datastores this was a manual process, triggered after freeing space in a datastore by deleting or migrating a VM, or consolidating a snapshot. At the Guest OS level, space freed by deleting files on a thinly provisioned VMDK was left behind as dead or stranded space. ESXi 6.5 supports automatic space reclamation (SCSI UNMAP) originating from either a VMFS datastore or a Guest OS, and the mechanism reclaims unused space from thin-provisioned VM disks.

Without this automated feature, a delete operation leaves blocks of unused space on the datastore. VMFS uses the SCSI UNMAP command to tell the array which storage blocks contain deleted data, so that the array can deallocate those blocks.
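For comparison, on VMFS5 this reclamation had to be kicked off by hand from the host. A minimal sketch of that manual process, assuming a datastore labelled SSD-01 (the label and reclaim-unit size here are examples, not from my lab):

```shell
# Manually reclaim dead space on a VMFS5 datastore (ESXi 5.5 and later).
#   -l / --volume-label : label of the datastore to unmap
#   -n / --reclaim-unit : number of VMFS blocks unmapped per iteration (optional)
esxcli storage vmfs unmap -l SSD-01 -n 200
```

Running this periodically (and remembering to) was exactly the operational overhead that the VMFS6 automatic mechanism removes.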

On VMFS6 datastores, ESXi supports automatic asynchronous reclamation of free space. VMFS6 also supports automatic space reclamation requests that originate from the guest operating systems, and passes these requests through to the array. Many guest operating systems can send the UNMAP command without any additional configuration; guest operating systems that do not support automatic UNMAP might require user intervention.
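To illustrate the guest-side half of this, inside a Linux guest you can check whether the virtual disk advertises discard (UNMAP) support and trigger a trim manually. A hedged sketch, assuming a typical device name and mount point (and a thin VMDK on VMFS6, so the discard actually reaches the array):

```shell
# Non-zero DISC-GRAN / DISC-MAX values mean the guest can issue
# discards down the storage stack for this device.
lsblk --discard /dev/sda

# One-off trim of a mounted filesystem; -v reports how much was trimmed.
fstrim -v /
```

Windows guests expose the equivalent switch via `fsutil behavior query DisableDeleteNotify` (0 means delete notifications, i.e. UNMAP, are enabled).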

I was interested in seeing if this worked as advertised, so I went about formatting a new VMFS6 datastore with the default options via the Web Client as shown below:

Heading over to the host's command line, I checked the reclamation config using the new esxcli namespace:

[root@LAB-ESXI-01:~] esxcli storage vmfs reclaim config get -l=SSD-01
   Reclaim Granularity: 1048576 Bytes
   Reclaim Priority: low

Through the Web Client you can only set the Reclamation Priority to None or Low; through esxcli, however, you can also set it to Medium or High. As I've just found out, though, these esxcli-only settings don't actually do anything in this release.



[root@LAB-ESXI-01:~] esxcli storage vmfs reclaim config set -l=SSD-01 -p high
[root@LAB-ESXI-01:~] esxcli storage vmfs reclaim config get -l=SSD-01
   Reclaim Granularity: 1048576 Bytes
   Reclaim Priority: high

With the priority left at Low, the expectation is that any blocks no longer in use will be reclaimed within 12 hours of the process kicking off on the datastore. I kept track of a couple of VMs and the datastore sizes in general, and after a day or so there was a clear difference in the available storage.

PowerCLI C:\> Get-Datastore | ft *

FileSystemVersion DatacenterId            Datacenter ParentFolderId  ParentFolder DatastoreBrowserPath                                FreeSpaceMB CapacityMB Accessible Type StorageIOControlEnabled
----------------- ------------            ---------- --------------  ------------ --------------------                                ----------- ---------- ---------- ---- -----------------------
6.81              Datacenter-datacenter-2 LAB-DC-01   Folder-group-s5 datastore    vmstores:\lab-vc-01.sliema.lab@443\LAB-DC-01\SSD-01 253194      457728     True       VMFS False
6.81              Datacenter-datacenter-2 LAB-DC-01   Folder-group-s5 datastore    vmstores:\lab-vc-01.sliema.lab@443\LAB-DC-01\SSD-02 76005       457728     True       VMFS False

PowerCLI C:\> Get-Datastore | ft *

FileSystemVersion DatacenterId            Datacenter ParentFolderId  ParentFolder DatastoreBrowserPath                                FreeSpaceMB CapacityMB Accessible Type StorageIOControlEnabled
----------------- ------------            ---------- --------------  ------------ --------------------                                ----------- ---------- ---------- ---- -----------------------
6.81              Datacenter-datacenter-2 LAB-DC-01   Folder-group-s5 datastore    vmstores:\lab-vc-01.sliema.lab@443\LAB-DC-01\SSD-01 275073      457728     True       VMFS False
6.81              Datacenter-datacenter-2 LAB-DC-01   Folder-group-s5 datastore    vmstores:\lab-vc-01.sliema.lab@443\LAB-DC-01\SSD-02 90534       457728     True       VMFS False

You can see that I clawed back about 22GB and 14GB on the two datastores within the first 24 hours. My initial testing shows this feature is a valuable and welcome addition to the new vSphere 6.5 release. Service Providers that thin provision but charge based on allocated storage will benefit greatly, as it automates a mechanism that was complex at best in previous releases.
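Those figures fall straight out of the two Get-Datastore snapshots; a quick sanity check of the arithmetic (free-space values in MB, taken from the output above, deltas rounded down to whole GB):

```shell
# Free space before and after the ~24h reclamation window (MB).
ssd01_before=253194; ssd01_after=275073
ssd02_before=76005;  ssd02_after=90534

# Delta in whole GB (1 GB = 1024 MB, integer division).
echo "SSD-01 reclaimed: $(( (ssd01_after - ssd01_before) / 1024 )) GB"   # SSD-01 reclaimed: 21 GB
echo "SSD-02 reclaimed: $(( (ssd02_after - ssd02_before) / 1024 )) GB"   # SSD-02 reclaimed: 14 GB
```

So roughly 21.4GB and 14.2GB reclaimed, in line with the "about 22GB and 14GB" above.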

There is also a great section on UNMAP in the vSphere 6.5 Core Storage White Paper, which has just been released as well.

References:

http://pubs.vmware.com/vsphere-65/topic/com.vmware.ICbase/PDF/vsphere-esxi-vcenter-server-65-storage-guide.pdf

https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057513
