I recently caught an interesting VMTN thread where a user wanted to move an existing VSAN Cluster from one vCenter Server to another with minimal impact to the ESXi hosts and running Virtual Machines. The great news is that this can be done without any impact to your ESXi hosts and, more importantly, without any impact to your workloads. I have personally performed this operation on several occasions without any problems, and the process is actually quite straightforward, so I thought I would walk you through it because it is literally just a couple of steps.

The main reason this is not a challenge is that VSAN has been architected not to rely on vCenter Server for its normal operations. It is true that vCenter Server is required for the configuration and management of the VSAN Cluster and VM Storage Policies, but once those configurations have been applied, vCenter Server is no longer in the picture from an operational point of view. This means that if you need to move your VSAN Cluster from a development vCenter Server to a production vCenter Server, or if you accidentally destroyed your original vCenter Server, the VSAN Cluster can easily be re-created on a new vCenter Server.

To demonstrate the process, I have a 3-Node VSAN Cluster with a running Virtual Machine on one vCenter Server (vcenter55-1), and I have built a new vCenter Server (vcenter55-3) to which I would like to move the existing VSAN Cluster.

UPDATE2 (11/02/17) - There was a question a couple of weeks back on whether the procedure outlined below could also apply to a vSAN Stretched Cluster. I did not see any technical reasons preventing this, and one of our GSS Engineers recently validated it with a customer and successfully moved a vSAN Stretched Cluster. I asked if he could share the modified instructions in case others were interested:

1. Copy all VDS settings to the new cluster
2. Enable vSAN on the new cluster (follow Step 2 below)
3. Disable the stretched cluster
4. Move each host
5. Move the witness
6. Re-enable the stretched cluster (follow Step 4 below)

Step 1 - Deploy a new vCenter Server and create a vSphere Cluster with VSAN Enabled.



Step 2 -

UPDATE1 (05/02/17) - Updated to include vSAN 6.6-specific instructions.

Pre-vSAN 6.6 - Disconnect one of the ESXi hosts from your existing VSAN Cluster and then add it to the VSAN Cluster in your new vCenter Server.

Note: Technically, you do not even have to disconnect the ESXi hosts from the old vCenter Server. You could simply add the ESXi hosts to the new vCenter Server; once you confirm that you wish to move the ESXi host, it will automatically be disconnected from the old vCenter Server as it is added. This saves you an extra step.
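If you want to script this, here is a minimal PowerCLI sketch, assuming a cluster named VSAN-Cluster on the new vCenter Server and placeholder host name and credentials:

# Add a host that is still connected to the old vCenter Server;
# -Force confirms the move, so the host is automatically disconnected from the old vCenter Server
Add-VMHost -Name 192.168.1.100 -Location (Get-Cluster -Name VSAN-Cluster) -User root -Password VMware1! -Force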

vSAN 6.6 - An additional configuration needs to be applied to all ESXi hosts PRIOR to disconnecting them from the original vCenter Server and adding them into the new vCenter Server. Below are a few examples of how to apply the ESXi Advanced Setting, which should be set to a value of 1:

Here is an example using ESXCLI (locally or remotely) on an individual ESXi host:

esxcli system settings advanced set -o /VSAN/IgnoreClusterMemberListUpdates -i 1

Here is an example of using PowerCLI to apply the setting across all ESXi hosts if the original vCenter Server is still available:

Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false
}

Here is an example of using PowerCLI to apply the setting directly to an ESXi host if the original vCenter Server is no longer available:

Get-VMHost -Name 192.168.1.100 | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 1 -Confirm:$false
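Whichever method you use, it is worth verifying that the setting took effect on every host before disconnecting anything. Here is a quick PowerCLI check (each host should report a Value of 1):

# List the current value of the setting on every connected host
Get-VMHost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Select-Object Entity, Value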



Once you have successfully added the ESXi host, you should see a warning within the VSAN Configuration page stating there is a "Misconfiguration detected", which is expected. What is happening is that this ESXi host has been configured in an existing VSAN Cluster, and the ESXi hosts it is supposed to be able to communicate with are not yet part of this VSAN Cluster. Once we add the remaining ESXi hosts, the VSAN Cluster will be happy and this error will go away.

Note: If you try to add all of the ESXi hosts from the existing VSAN Cluster to the new VSAN Cluster at once, you will see an error regarding a UUID mismatch. The trick is to add one host first; once that has been done, you can bulk add the remaining ESXi hosts without an issue. This is handy if you are trying to automate this process, as shown in the sketch below.
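Here is a minimal PowerCLI sketch of that pattern, assuming placeholder host names and credentials and a cluster named VSAN-Cluster on the new vCenter Server:

# The first host must be added on its own to avoid the UUID mismatch error
$vsanHosts = @("192.168.1.100", "192.168.1.101", "192.168.1.102")
$cluster = Get-Cluster -Name VSAN-Cluster
Add-VMHost -Name $vsanHosts[0] -Location $cluster -User root -Password VMware1! -Force

# The remaining hosts can then be bulk added
Foreach ($hostname in ($vsanHosts | Select-Object -Skip 1)) {
    Add-VMHost -Name $hostname -Location $cluster -User root -Password VMware1! -Force
}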

Step 3 - Add the remaining ESXi hosts to the VSAN Cluster in the new vCenter Server. Once all hosts have been added to the new VSAN Cluster, you will see the warning icons disappear, and your VSAN Cluster is now fully managed by the new vCenter Server. We can also confirm that there are no network partitions, as all of the original VSAN configurations have been retained on the ESXi hosts.
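If you would like to double-check the cluster membership from the command line, here is a minimal PowerCLI sketch using Get-EsxCli, which is simply the equivalent of running "esxcli vsan cluster get" on each host (the Sub-Cluster Member Count should match the number of hosts in the cluster):

# Query the VSAN cluster status as seen by each individual host
Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    (Get-EsxCli -VMHost $vmhost -V2).vsan.cluster.get.Invoke()
}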

UPDATE1 (05/02/17)

Step 4 - This last step is ONLY applicable to vSAN 6.6 hosts. Once all hosts have been successfully added to the new vCenter Server and you have verified that the cluster status is healthy and there are no network partitions, we need to change the ESXi Advanced Setting we set earlier from a value of 1 back to a value of 0.

Here is a PowerCLI snippet which, given a vSAN Cluster, will automatically go through all ESXi hosts and update the setting:

Foreach ($vmhost in (Get-Cluster -Name VSAN-Cluster | Get-VMHost)) {
    $vmhost | Get-AdvancedSetting -Name "VSAN.IgnoreClusterMemberListUpdates" | Set-AdvancedSetting -Value 0 -Confirm:$false
}



Disclaimer: As mentioned, there is no impact to the ESXi hosts (other than not being able to manage them while you disconnect and re-connect them to the new vCenter Server), and there is no impact to the running Virtual Machines; any VM Storage Policies that have been applied to the VMs will still be enforced by each of the ESXi hosts. However, one thing to be aware of is that the VM Storage Policies in your original vCenter Server will not be available in the new vCenter Server. You will need to re-create each of the VM Storage Policies and re-attach them to the existing Virtual Machines. This can of course be automated by using the vSphere API or by leveraging the new PowerCLI 5.8 R1 release, which includes VM Storage Policy cmdlets.

Here is an example of exporting a VM Storage Policy named "FTT=1" to a file called policy.xml on your desktop:

Export-SpbmStoragePolicy -StoragePolicy (Get-SpbmStoragePolicy -Name FTT=1) -FilePath C:\Users\Administrator\Desktop\policy.xml
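To bring that policy into the new vCenter Server and re-attach it, here is a minimal PowerCLI sketch, assuming the exported policy.xml from above and a placeholder VM named Test-VM:

# Import the exported policy into the new vCenter Server
$policy = Import-SpbmStoragePolicy -Name "FTT=1" -FilePath C:\Users\Administrator\Desktop\policy.xml

# Re-attach the policy to an existing VM and its hard disks
Get-VM -Name Test-VM | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy
Get-VM -Name Test-VM | Get-HardDisk | Get-SpbmEntityConfiguration | Set-SpbmEntityConfiguration -StoragePolicy $policy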

Currently, this is the only impact of moving a VSAN Cluster from one vCenter Server to another, and of course this assumes you have created VM Storage Policies aside from the default policies.

I received a couple of questions regarding the networking setup for my VSAN Cluster. In the above example I was using a VSS (Virtual Standard Switch). I did, however, retest this scenario completely on a VDS (Virtual Distributed Switch), and the results were the same. When all ESXi hosts have been added to the new vCenter Server, you will see a warning about the proxy host switch. The key to properly migrating the networks (VMkernel & VM Portgroup) is to add each ESXi host to the new VDS that you will need to create. If your original vCenter Server is still available, you can export and import the VDS configuration, as shown in the sketch below. If it is not available, then you will need to manually re-create the Distributed Portgroups before proceeding.
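Here is a minimal PowerCLI sketch of that export/import, assuming a placeholder VDS named VSAN-VDS, a placeholder Datacenter name, and a backup file that you copy between environments:

# On the original vCenter Server: export the VDS configuration (including its portgroups) to a backup file
Export-VDSwitch -VDSwitch (Get-VDSwitch -Name VSAN-VDS) -Destination C:\Users\Administrator\Desktop\vsan-vds.zip

# On the new vCenter Server: re-create the VDS and its portgroups from the backup file
New-VDSwitch -Name VSAN-VDS -Location (Get-Datacenter -Name Datacenter) -BackupPath C:\Users\Administrator\Desktop\vsan-vds.zip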

The first step is to go to the Networking view, right-click, and select "Add and Manage Hosts".



Go ahead and walk through the guided wizard, making sure you only add one host at a time, as I saw issues when trying to add multiple hosts at once. Once the ESXi host has been added to the new VDS and its uplinks, VMkernel interfaces, and VM Portgroups are all connected, you should see two VDS under the Networking view of the ESXi host under "Manage".



This can be seen clearly using the vSphere C# Client, as it allows you to view both on the same screen. Once you have confirmed that everything looks good, you can go ahead and remove the old VDS switch as shown in the screenshot above. At this point, your ESXi host's networking is now running on the new VDS. You will continue this same workflow for the remaining ESXi hosts until they have all been migrated over to the new VDS.
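If you would rather script this per-host workflow than repeat the wizard for each host, here is a minimal PowerCLI sketch, assuming a placeholder host, a free uplink vmnic1, a VSAN VMkernel interface vmk1, and a Distributed Portgroup named VSAN-DVPG (all of these names are placeholders for your own environment):

# Attach the host to the new VDS
$vmhost = Get-VMHost -Name 192.168.1.100
$vds = Get-VDSwitch -Name VSAN-VDS
Add-VDSwitchVMHost -VDSwitch $vds -VMHost $vmhost

# Migrate a physical uplink together with the VSAN VMkernel interface to the VDS
$vmnic = Get-VMHostNetworkAdapter -VMHost $vmhost -Physical -Name vmnic1
$vmk = Get-VMHostNetworkAdapter -VMHost $vmhost -VMKernel -Name vmk1
Add-VDSwitchPhysicalNetworkAdapter -DistributedSwitch $vds -VMHostPhysicalNic $vmnic -VMHostVirtualNic $vmk -VirtualNicPortgroup (Get-VDPortgroup -Name VSAN-DVPG) -Confirm:$false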