I was looking into vSAN File Services this week as I had some customers asking about requirements and constraints. I wanted to list some of the things to understand about vSAN File Services, as they are important when you are designing and configuring it. First of all, it is good to have at least a basic understanding of the implementation, as vSAN File Services is managed, upgraded, and updated as part of vSAN. It is not a separate entity: as an admin, you do not manage the appliances you see deployed. I created a quick demo about vSAN File Services which you can find here.

If you look at the diagram (borrowed from docs.vmware.com) above you can see that vSAN File Service leverages Agent/Appliance VMs and within each Agent VM a container, or “protocol stack”, is running. The protocol stack is what exposes the file system as an NFS file share. That has a few implications, and I want to make sure that people understand those before they start with vSAN File Services. Let’s list the requirements, constraints, and some of the things to know so they are obvious.

Targeted use case: Cloud Native Applications and file services for traditional apps

NFS v3 and NFS v4.1 are both supported

A minimum of 3 hosts within a cluster

A maximum of 64 hosts within a cluster

Not supported today on 2-node

Not supported today on a stretched cluster

Not supported in combination with vLCM (Lifecycle Manager)

It is not supported to mount the NFS share from your ESXi host

A maximum of 8 active FS containers/protocol stacks is provisioned

Each host will have an FS VM; you can have more FS VMs than containers!

FS VMs are provisioned by vSphere ESX Agent Manager; you will have one FS VM for each host, up to 8 hosts

FS VMs are tied to a specific host from a compute and storage perspective, and they align of course!

FS VMs are not integrated with vSAN Fault Domains

FS VMs are powered off and deleted when going into maintenance mode

FS VMs are provisioned and powered on when exiting maintenance mode

On a standard and distributed (v)Switch, the following settings are enabled on the port group automatically: Forged Transmits, Promiscuous Mode

vSAN automatically downloads the OVF for the appliance. If vCenter Server cannot connect to the internet, you can download it manually. The OVF is stored on the vCenter Server Appliance in /storage/updatemgr/vsan/fileService/, in case you ever want to delete it.

The FS VM has its own policy (FSVM_Profile_DO_NOT_MODIFY), which should not be modified! The appliance is not protected across hosts; it is RAID-0, as resiliency is handled by the container layer!
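To make the cluster-level constraints above concrete, here is a minimal sketch of the kind of precheck you could run against your cluster facts before enabling vSAN File Services. This is plain Python with made-up names, not a VMware API; it simply expresses the constraint logic as code:

```python
# Hypothetical precheck for the vSAN File Services constraints listed above.
# The function and parameter names are my own invention for illustration.

def file_services_precheck(host_count, is_two_node, is_stretched, uses_vlcm):
    """Return a list of blocking issues; an empty list means the cluster qualifies."""
    issues = []
    if host_count < 3:
        issues.append("A minimum of 3 hosts is required")
    if host_count > 64:
        issues.append("A maximum of 64 hosts is supported")
    if is_two_node:
        issues.append("2-node clusters are not supported today")
    if is_stretched:
        issues.append("Stretched clusters are not supported today")
    if uses_vlcm:
        issues.append("Not supported in combination with vLCM")
    return issues

# Example: a regular 6-host cluster passes, a 2-node cluster does not
print(file_services_precheck(6, False, False, False))  # []
print(file_services_precheck(2, True, False, False))   # two blocking issues
```

In a real environment you would of course feed this from your inventory tooling rather than hard-coded values.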



So what does this mean? Well, from a networking standpoint, I would highly recommend creating a dedicated port group for vSAN File Service! Why? Well, Forged Transmits and Promiscuous Mode (or MAC Learning) are enabled by default during configuration on the port group you selected for the vSAN File Service deployment. You may ask why this needs to be enabled: basically, because a MAC address and IP address are assigned to the container within the FS VM. This allows for resilience at the container layer, but it means that from a networking perspective the environment needs to be aware of it.

Another thing I would like to briefly discuss is the instantiation of protocol stack containers, as it does not take vSAN fault domains into account. That means that if you have a 64-host cluster with 8 fault domains, you could theoretically end up with all protocol stack containers in the same fault domain. So this is definitely something to take into consideration. Of course, our engineering and product management teams are aware, and they are aiming to solve this in a future release.
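To illustrate that fault domain point with a quick sketch: assume 64 hosts spread evenly over 8 fault domains, and a placement rule that simply picks the first 8 hosts without any fault domain awareness. The "first 8 hosts" rule here is a simplification I made up for the example, not the actual vSAN File Services placement algorithm, but it shows how all 8 protocol stack containers can land in a single fault domain:

```python
# Hypothetical illustration of fault-domain-unaware placement.
# The naive "first N hosts" rule is a simplification for this example.

def place_containers(hosts, count=8):
    """Naive placement: take the first `count` hosts, ignoring fault domains."""
    return hosts[:count]

# 64 hosts, 8 fault domains, 8 hosts per fault domain
hosts = [{"name": f"esx{i:02d}", "fault_domain": i // 8} for i in range(64)]

placed = place_containers(hosts)
fault_domains_used = {h["fault_domain"] for h in placed}
print(fault_domains_used)  # {0} -> all 8 containers in a single fault domain
```

A fault-domain-aware scheduler would instead spread the containers across domains, which is what you would want for availability.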

I hope the above details will help folks when deploying vSAN File Services in their environment. Remember, this is the first release, and some of the limitations and constructs will definitely change in the upcoming releases!








