
There has been a lot of discussion about ReFS 3.1 since Veeam released version 9.5 with support for the block cloning API. With this integration between the two products, users can now design a repository that combines the speed of a non-deduplicated array with some of the space savings that usually belong to dedicated deduplication appliances. We have seen many discussions in our Veeam forums, and I also published two articles on this topic you may want to read: Windows 2016 and Storage Spaces as a Veeam backup repository and An example for a Veeam backup repository using Windows 2016.

Now that people are starting to use ReFS, another question has arisen: which cluster size should I use?

ReFS 3.1 in fact supports two different cluster sizes: 4KB and 64KB.

Too long, didn’t read: go for 64KB whenever you want to use ReFS as a Veeam repository.

If you want to know why we suggest this cluster size, and how to check and modify this parameter, keep reading.

Why 64KB?

NOTE: These notes come from the experience we have been collecting with our customers in the few months since Veeam Backup & Replication 9.5 and Microsoft Windows 2016 both became available. This information may change in the future, and it does not represent the official position of either company.

Microsoft has just posted a new blog article on TechNet, Cluster size recommendations for ReFS and NTFS, to answer this same question. While they suggest 4KB (it is in fact the default cluster size when a new volume is formatted), a couple of notes in the same article tell you that 64KB may be even better for backups. The use cases they list are not really comparable to a Veeam backup repository, while the phrase “64KB clusters are applicable when working with large, sequential IO” fits the Veeam use case much better: backups and restores are mostly sequential, and Veeam blocks are always bigger than 64KB.

But even more than relying on official information, my colleagues and I work in the field, side by side with our customers, and regardless of the official notes, our live experience tells us that 64KB is the way to go. Performance in particular is dramatically different when the same exact volume, on the same exact hardware, is formatted with a 64KB cluster size instead of 4KB. People have observed up to a 4x increase in the speed of merge operations such as forever incremental or synthetic fulls. Thanks to the block cloning API, merge operations are considerably faster than regular I/O operations with either cluster size, but the difference between the two is not something one can easily ignore.

There are also some posts on the Veeam forums, like this one, suggesting that 4KB clusters also have an impact on server resources, memory in this case. With many more blocks to be managed, I would not be surprised; after all, there are 16 times as many clusters to be mapped and tracked.
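To put that 16x factor in perspective, here is a small illustrative calculation (not an official Microsoft formula, just back-of-the-envelope arithmetic) of how many clusters the filesystem has to track on a hypothetical 10TB repository volume:

```python
# Illustrative only: cluster counts for a fully allocated volume.
TB = 1024 ** 4
KB = 1024

def cluster_count(volume_bytes: int, cluster_bytes: int) -> int:
    """Number of clusters needed to cover the whole volume."""
    return volume_bytes // cluster_bytes

volume = 10 * TB                          # a hypothetical 10TB repository
small = cluster_count(volume, 4 * KB)     # 4KB clusters
large = cluster_count(volume, 64 * KB)    # 64KB clusters

print(f"4KB clusters:  {small:,}")        # 2,684,354,560
print(f"64KB clusters: {large:,}")        # 167,772,160
print(f"ratio:         {small // large}x")  # 16x
```

Roughly 2.7 billion clusters versus 168 million: whatever per-cluster bookkeeping ReFS keeps in memory, the 4KB layout multiplies it sixteenfold.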

The only downside we have observed so far is additional space consumption compared to 4KB clusters, around 5–10%, and this again is expected. Each block created by Veeam starts at a fixed size, for example the default 1MB (1024KB if you prefer). But Veeam also has compression enabled by default, and while a safe assumption is that blocks will be compressed by 50% “on average”, the final size of each compressed block varies.

First of all, you can see even visually that 64KB clusters are way bigger than 4KB ones. Now, let’s suppose that a source block is compressed and its final size becomes 506KB. On a filesystem with 4KB clusters, this amounts to 126.5 clusters; but since one cluster cannot be shared among multiple blocks, in reality it consumes 127 clusters, and 0.5 cluster is wasted, as no other block can be written there. Half of a 4KB cluster means 2KB wasted.

Now, think about writing the same block onto a 64KB filesystem. 506KB amounts to 7.91 clusters, but again only whole clusters can be consumed, so the consumption is 8 clusters, or 512KB. The wasted space in this case is 6KB.

And, as you can imagine, this is already a favorable case. Some blocks may fill their last 64KB cluster with far less than the available space, so the wasted space can be even larger. On average, we have so far observed an increase in space consumption of around 10%.

So, which saving would you prefer? In my humble opinion, disk space these days is much cheaper than CPU and memory and, in addition, it can easily be added to a running server by adding more disks. It is not so easy or cheap to add more CPU or memory to manage volumes using 4KB clusters. Things may change in the future, but for now my suggestion is to stick with 64KB.

How?

Now that we have seen why we suggest 64KB clusters, how do you apply this? The bad news is that 4KB is the default cluster size, so if someone didn’t know about this good practice, chances are high that they formatted the volume with the default parameters, thus using 4KB clusters. More bad news: there is no easy way to migrate from one cluster size to the other, and the operation is quite destructive.

First, in order to check for the actual cluster size, this command is really quick and useful:

fsutil fsinfo refsinfo driveletter:\

and this is the expected output:

In our example, the D:\ drive has been formatted with the default 4KB cluster size, as you can read in the “Bytes Per Cluster” value, which equals 4096. Luckily, in my example this new Veeam repository had received only a few GB of backups, so it was easy to move all the folders and files to another location in order to format the volume again. Obviously, during the format operation you have to select the 64KB cluster option this time:

Once the format is complete, you can run the same fsutil command again to check that the cluster size is now correct:

Now the volume is ready to be put to even better use by Veeam Backup & Replication.