This is a bit of a long post, but there is a lot to cover. In a previous post, I walked through the deployment of Photon Platform v1.2, which included the Photon Installer, followed by the Photon Controller, Load-Balancer and Lightwave appliances. If you’ve read that post, you will know that Photon Platform v1.2 includes the OVAs for these components within the Photon Installer appliance, so no additional download steps are necessary. However, because vSAN is not included, it has to be downloaded separately from MyVMware. The other very important point is that Photon Platform is not currently supported with vSAN 6.6. Therefore you must ensure that VMware ESXi 6.5 Patch 201701001 (ESXi650-201701001), build number 4887370, is the highest version running on the ESXi hosts. One reason for this is that vSAN 6.6 has moved from multicast to unicast, which in turn relies on vCenter for cluster membership tracking. Of course, there is no vCenter in Photon Platform, so a way of handling vSAN cluster membership over unicast needs to be implemented before vSAN 6.6 can be supported.
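Before starting, it is worth confirming that each host is actually on the supported build. Here is a little helper of my own (not from the original deployment; the "Build: Releasebuild-NNNN" line format is an assumption about esxcli output) that parses the output of "esxcli system version get" and compares it against the required build:

```shell
# Required build for Photon Platform 1.2 support (ESXi650-201701001)
REQUIRED_BUILD=4887370

check_build() {
  # $1: output of "esxcli system version get" captured from a host
  b=$(printf '%s\n' "$1" | sed -n 's/.*Releasebuild-\([0-9][0-9]*\).*/\1/p')
  [ "$b" = "$REQUIRED_BUILD" ]
}

# Example (run from any box with SSH access to the host):
#   check_build "$(ssh root@10.27.51.6 esxcli system version get)" || echo "wrong build!"
```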

Now, I have already blogged about how to deploy vSAN with Photon Platform 1.1. However, some things have changed with Photon Platform 1.2. With that in mind, let’s go through the deployment process of vSAN with Photon Platform version 1.2. As before, I have 4 ESXi hosts available: 1 of these is dedicated to management, and the other 3 are cloud hosts for running container schedulers/frameworks. These 3 hosts will also participate in my vSAN cluster.

Step 1 – Deploy the Photon Installer

This has already been covered in a previous post. It is a simple OVA deployment. Place it on the management ESXi server; I used the ESXi host’s HTML5 client to deploy it.

Step 2 – Deploy the Lightwave Appliance

In the last blog on PP 1.2, I showed how there is a new deployment method called “photon-setup”. It takes as an argument a YAML file with various blocks of information for the Photon Controller, Load-Balancer and Lightwave. Refer back to the previous post to see a sample YAML config for Lightwave. Lightwave is essentially the Photon Platform authentication service, and since we need it to authenticate the vSAN Manager Appliance (more on this later), we can deploy just the Lightwave appliance for now. The following command picks up only the Lightwave part of the YAML file and deploys it.

root@photon-installer [ /opt/vmware/photon/controller/share/config ]# ../../bin/photon-setup lightwave install -config /opt/vmware/photon/controller/share/config/pc-config.yaml
Using configuration at /opt/vmware/photon/controller/share/config/pc-config.yaml
INFO: Parsing Lightwave Configuration
INFO: Parsing Credentials
INFO: Lightwave Credentials parsed successfully
INFO: Parsing Lightwave Controller Config
INFO: Parsing appliance
INFO: Parsing Credentials
INFO: Appliance Credentials parsed successfully
INFO: Parsing Network Config
INFO: Appliance network config parsed successfully
INFO: Appliance config parsed successfully
INFO: Lightwave Controller parsed successfully
INFO: Lightwave Controller config parsed successfully
INFO: Lightwave Section parsed successfully
INFO: Parsing Photon Controller Configuration
INFO: Parsing Photon Controller Image Store
INFO: Image Store parsed successfully
INFO: Managed hosts parsed successfully
INFO: Parsing Photon Controller Config
INFO: Parsing appliance
INFO: Parsing Credentials
INFO: Appliance Credentials parsed successfully
INFO: Parsing Network Config
INFO: Appliance network config parsed successfully
INFO: Photon Controllers parsed successfully
INFO: Photon section parsed successfully
INFO: Parsing Compute Configuration
INFO: Parsing Compute Config
INFO: Parsing Credentials
INFO: Parsing Credentials
INFO: Parsing Credentials
INFO: Parsing Credentials
INFO: Compute Config parsed successfully
INFO: Parsing vSAN Configuration
INFO: Parsing vSAN manager Config
INFO: Parsing appliance
INFO: Parsing Credentials
INFO: Appliance Credentials parsed successfully
INFO: Parsing Network Config
INFO: Appliance network config parsed successfully
INFO: vSAN manager config parsed successfully
INFO: Parsing LoadBalancer Configuration
INFO: Parsing LoadBalancer Config
INFO: Parsing appliance
INFO: Parsing Credentials
INFO: Appliance Credentials parsed successfully
INFO: Parsing Network Config
INFO: Appliance network config parsed successfully
INFO: LoadBalancer Config parsed successfully
INFO: NSX CNI config is not provided. NSX CNI is disabled
Installing Lightwave instance at 10.27.51.35
2017-05-25 09:05:14 INFO Info: Lightwave does not exist at the specified IP address. Deploying new Lightwave OVA
2017-05-25 09:05:14 INFO Start [Task: Lightwave Installation]
2017-05-25 09:05:14 INFO Info [Task: Lightwave Installation] : Deploying and powering on the Lightwave VM on ESXi host: 10.27.51.5
2017-05-25 09:05:14 INFO Info: Deploying and powering on the Lightwave VM on ESXi host: 10.27.51.5
2017-05-25 09:05:14 INFO Info [Task: Lightwave Installation] : Starting appliance deployment
2017-05-25 09:05:25 INFO Progress [Task: Lightwave Installation]: 20%
2017-05-25 09:05:27 INFO Progress [Task: Lightwave Installation]: 40%
2017-05-25 09:05:29 INFO Progress [Task: Lightwave Installation]: 60%
2017-05-25 09:05:40 INFO Progress [Task: Lightwave Installation]: 80%
2017-05-25 09:05:43 INFO Progress [Task: Lightwave Installation]: 0%
2017-05-25 09:05:44 INFO Stop [Task: Lightwave Installation]
2017-05-25 09:06:24 INFO Info: Lightwave already exists. Skipping deployment of lightwave.
COMPLETE: Install Process has completed Successfully.

We can then verify that it deployed successfully by pointing a browser at https://<ip-of-lightwave>. Add the Lightwave domain name (which you provided as part of the YAML file), provide the login credentials (also specified in the YAML), and verify that you can log in. If you can, we can move on to the next steps.
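If you prefer a scripted check (my addition, not from the original post), you can simply confirm that the Lightwave UI answers over HTTPS before moving on. The CURL variable is overridable so the function can be dry-run:

```shell
# CURL is overridable (e.g. CURL=fake_curl) for testing/dry runs
CURL="${CURL:-curl}"

lightwave_up() {
  # $1: Lightwave IP or FQDN; succeeds on any 2xx/3xx HTTP response
  # -k skips certificate validation, since the appliance uses a self-signed cert
  code=$($CURL -k -s -o /dev/null -w '%{http_code}' "https://$1/")
  case "$code" in 2*|3*) return 0 ;; *) return 1 ;; esac
}

# Example: lightwave_up 10.27.51.35 && echo "Lightwave UI reachable"
```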

Step 3 – Create a vSAN user and vSAN group in Lightwave

This is where you create the user that you will use to authenticate your RVC session when you create the vSAN cluster. In a previous post, I showed how to do this from the CLI. In this post, I will do it via the UI. Once logged into Lightwave, there are 3 steps:

1. Create a VSANAdmins group
2. Create a vsanadmin user
3. Add the vsanadmin user as a member of the VSANAdmins group
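For those who prefer the CLI route from the previous post, the same three steps can be sketched with dir-cli on the Lightwave appliance. This is my own sketch: the dir-cli subcommands are the standard directory-service ones, but verify the flags against your Lightwave build, and note that DIRCLI is overridable for dry runs.

```shell
# dir-cli lives here on the Lightwave appliance; override for testing
DIRCLI="${DIRCLI:-/opt/vmware/bin/dir-cli}"
LOGIN="administrator@rainpole.local"

create_vsan_principals() {
  # $1: password for the new vsanadmin user
  $DIRCLI ssogroup create --name VSANAdmins --login "$LOGIN" &&
  $DIRCLI user create --account vsanadmin --user-password "$1" \
      --first-name vsan --last-name admin --login "$LOGIN" &&
  $DIRCLI group modify --name VSANAdmins --add vsanadmin --login "$LOGIN"
}

# Example: create_vsan_principals 'VMware123!'
```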

This should be very intuitive to do, and when complete, you can view the user and group membership. It should look similar to the following:

vsanadmin user

vsanadmin user is a member of the VSANAdmins group

Step 4 – Deploy the vSAN Manager Appliance

Caveat: There appears to be the ability to include a vSAN stanza in the YAML file, and one is included in the sample YAML on the Photon Controller appliance. But photon-setup then looks for the OVA in “/var/opt/vmware/photon/controller/appliances/vsan.ova-dir/vsan.ovf.bak” on the Photon Installer, and there is none there. One could possibly move the vSAN OVA to this location, but I could find no documented guidance on how to do this. Therefore I reverted to a normal OVA deployment via the H5 client on my management host, which I also covered in the previous post. The only important part is the ‘Additional Settings’ step. I’ve included a sample here:

A few things to point out here. The DNS server should be the Lightwave server that you deployed in step 2. The Lightwave Domain is the name specified in the YAML file. The Administrator Group (VSANAdmins) is the group you created in Step 3 (bit of a gotcha here – I’ll discuss it in more detail in step 5). The Hostname (last field) actually refers to the name of the vSAN Management appliance that you are deploying, even though it appears to be grouped with the Lightwave information.

Once the appliance is deployed, open an SSH session to it and login as root.

Step 5 – Verify credentials by launching RVC



Now we come to the proof of the pudding: can we authenticate using the credentials above and build our vSAN cluster using RVC, the Ruby vSphere Console?

Warning: I found an issue with the credentials. It seems that the administrator group provided via the OVA properties in step 4 is not persisted correctly in the config file on the vSAN Manager. Fortunately, it is easy to address. Simply stop the vSAN service on the vSAN Management appliance, edit the config file to set the correct administratorgroup, then restart the service. Here are the steps:

root@vsan-mgmt-srvr [ ~ ]# grep administratorgroup /etc/vmware-vsan-health/config.conf
administratorgroup = rainpole.local\Administrators
root@vsan-mgmt-srvr [ ~ ]# systemctl stop vmware-vsan.service
root@vsan-mgmt-srvr [ ~ ]# vi /etc/vmware-vsan-health/config.conf
root@vsan-mgmt-srvr [ ~ ]# grep administratorgroup /etc/vmware-vsan-health/config.conf
administratorgroup = rainpole.local\VSANAdmins
root@vsan-mgmt-srvr [ ~ ]# systemctl start vmware-vsan.service
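The same fix can be scripted instead of editing the file in vi. This is my own sketch: fix_admin_group rewrites the administratorgroup line in config.conf, and you would run it between the systemctl stop/start of vmware-vsan.service shown above.

```shell
fix_admin_group() {
  # $1: path to the vSAN health config (e.g. /etc/vmware-vsan-health/config.conf)
  # Replaces whatever group is set with rainpole.local\VSANAdmins
  sed -i 's|^administratorgroup *=.*|administratorgroup = rainpole.local\\VSANAdmins|' "$1"
}

# On the appliance:
#   systemctl stop vmware-vsan.service
#   fix_admin_group /etc/vmware-vsan-health/config.conf
#   systemctl start vmware-vsan.service
```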

After making those changes, you can now see if your RVC session will authenticate using the “vsanadmin” user, a member of the VSANAdmins group created back in step 3.

root@vsan-mgmt-srvr [ ~ ]# rvc vsanadmin@rainpole.local@vsan-mgmt-srvr.rainpole.local:8006
Install the "ffi" gem for better tab completion.
password:
0 /
1 vsan-mgmt-srvr.rainpole.local/
>

Success! OK, now we are ready to create a vSAN cluster, but first we need to set up the network on each of the 3 ESXi hosts that will participate in it.

Step 6 – Setup the vSAN network on each ESXi host

The following are the commands used to create a vSAN portgroup, add a VMkernel interface, and tag it for vSAN traffic. These are run on the ESXi hosts and would have to be repeated on each host. Note that I used DHCP for the vSAN network. You might want to use a static IP. And of course, you could very easily script this with something like PowerCLI.

[root@esxi-dell-f:~] esxcli network vswitch standard list
[root@esxi-dell-f:~] esxcli network vswitch standard portgroup add -p vsan -v vSwitch0
[root@esxi-dell-f:~] esxcli network vswitch standard portgroup set --vlan-id 51 -p vsan
[root@esxi-dell-f:~] esxcli network ip interface add -p vsan -i vmk1
[root@esxi-dell-f:~] esxcli network ip interface ipv4 set -t dhcp -i vmk1
[root@esxi-dell-f:~] esxcli network ip interface tag add -t vSAN -i vmk1
[root@esxi-dell-f:~] esxcfg-vmknic -l
Interface  Port Group/DVPort/Opaque Network  IP Family  IP Address                 Netmask        Broadcast     MAC Address        MTU   TSO MSS  Enabled  Type               NetStack
vmk0       Management Network                IPv4       10.27.51.6                 255.255.255.0  10.27.51.255  24:6e:96:2f:52:75  1500  65535    true     STATIC             defaultTcpipStack
vmk0       Management Network                IPv6       fe80::266e:96ff:fe2f:5275  64                           24:6e:96:2f:52:75  1500  65535    true     STATIC, PREFERRED  defaultTcpipStack
vmk1       vsan                              IPv4       10.27.51.34                255.255.255.0  10.27.51.255  00:50:56:6e:b7:dc  1500  65535    true     DHCP               defaultTcpipStack
vmk1       vsan                              IPv6       fe80::250:56ff:fe6e:b7dc   64                           00:50:56:6e:b7:dc  1500  65535    true     STATIC, PREFERRED  defaultTcpipStack
[root@esxi-dell-f:~]
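As mentioned, this could be scripted (the post suggests PowerCLI; here is a plain SSH sketch of my own instead). setup_vsan_net runs the same esxcli sequence shown above against one host, and SSH is overridable for a dry run.

```shell
# SSH is overridable (e.g. SSH=echo) for a dry run
SSH="${SSH:-ssh}"

setup_vsan_net() {
  # $1: ESXi host IP; assumes vSwitch0, vmk1, VLAN 51 and DHCP as in the post
  $SSH "root@$1" "esxcli network vswitch standard portgroup add -p vsan -v vSwitch0 &&
    esxcli network vswitch standard portgroup set --vlan-id 51 -p vsan &&
    esxcli network ip interface add -p vsan -i vmk1 &&
    esxcli network ip interface ipv4 set -t dhcp -i vmk1 &&
    esxcli network ip interface tag add -t vSAN -i vmk1"
}

# Example, for the 3 cloud hosts:
#   for h in 10.27.51.6 10.27.51.7 10.27.51.8; do setup_vsan_net "$h"; done
```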

Step 7 – Create the vSAN cluster via RVC



OK – the vSAN network is configured on the 3 x ESXi hosts that are going to participate in my vSAN cluster. Return to the vSAN Management appliance and the RVC session. These are the commands that are run to create a cluster and add the hosts to it. I also set the cluster to automatically claim disks for vSAN.

root@vsan-mgmt-srvr [ /etc/vmware-vsan-health ]# rvc vsanadmin@rainpole.local@vsan-mgmt-srvr.rainpole.local:8006
Install the "ffi" gem for better tab completion.
password:
0 /
1 vsan-mgmt-srvr.rainpole.local/
> cd 1
/vsan-mgmt-srvr.rainpole.local> ls
0 Global (datacenter)
/vsan-mgmt-srvr.rainpole.local> cd 0
/vsan-mgmt-srvr.rainpole.local/Global> ls
0 vms [vmFolder-datacenter-1]/
1 datastores [datastoreFolder-datacenter-1]/
2 networks [networkFolder-datacenter-1]/
3 computers [hostFolder-datacenter-1]/
/vsan-mgmt-srvr.rainpole.local/Global> cd 3
/vsan-mgmt-srvr.rainpole.local/Global/computers> ls
/vsan-mgmt-srvr.rainpole.local/Global/computers> cluster.create pp-vsan
/vsan-mgmt-srvr.rainpole.local/Global/computers> ls
0 pp-vsan (cluster): cpu 0 GHz, memory 0 GB
/vsan-mgmt-srvr.rainpole.local/Global/computers> vsan.cluster_change_autoclaim 0 -e
: success
No host specified to query, stop current operation.
/vsan-mgmt-srvr.rainpole.local/Global/computers> cluster.add_host 0 10.27.51.6 10.27.51.7 10.27.51.8 -u root -p xxxx
: success
: success
: success
/vsan-mgmt-srvr.rainpole.local/Global/computers> ls
0 pp-vsan (cluster): cpu 0 GHz, memory 0 GB

Looks like it worked. All 3 x ESXi hosts have been successfully added to my cluster. Let’s now run a few additional RVC commands to make sure the vSAN cluster is formed and the vSAN health check is happy.

/vsan-mgmt-srvr.rainpole.local/Global/computers> vsan.cluster_info 0
2017-05-25 11:03:45 +0000: Fetching host info from 10.27.51.6 (may take a moment) ...
2017-05-25 11:03:45 +0000: Fetching host info from 10.27.51.7 (may take a moment) ...
2017-05-25 11:03:45 +0000: Fetching host info from 10.27.51.8 (may take a moment) ...
Host: 10.27.51.6
  Product: VMware ESXi 6.5.0 build-4887370
  vSAN enabled: yes
  Cluster info:
    Cluster role: master
    Cluster UUID: 07fdaefe-a579-4048-ba95-df1f7ed3ba2f
    Node UUID: 5926907e-8562-24c1-2766-246e962f5270
    Member UUIDs: ["5926907e-8562-24c1-2766-246e962f5270", "592693fd-f919-1aee-9ae4-246e962f4850", "59269241-1af4-bed2-5978-246e962c2408"] (3)
    Node evacuated: no
  Storage info:
    Auto claim: no
    Disk Mappings:
      SSD: Local ATA Disk (naa.55cd2e404c31f9ec) - 186 GB, v2
      MD: Local ATA Disk (naa.500a07510f86d6b3) - 745 GB, v2
      SSD: Local Pliant Disk (naa.5001e82002664b00) - 186 GB, v2
      MD: Local ATA Disk (naa.500a07510f86d686) - 745 GB, v2
  FaultDomainInfo:
    Not configured
  NetworkInfo:
    Adapter: vmk1 (10.27.51.53)
Host: 10.27.51.7
  Product: VMware ESXi 6.5.0 build-4887370
  vSAN enabled: yes
  Cluster info:
    Cluster role: backup
    Cluster UUID: 07fdaefe-a579-4048-ba95-df1f7ed3ba2f
    Node UUID: 592693fd-f919-1aee-9ae4-246e962f4850
    Member UUIDs: ["5926907e-8562-24c1-2766-246e962f5270", "592693fd-f919-1aee-9ae4-246e962f4850", "59269241-1af4-bed2-5978-246e962c2408"] (3)
    Node evacuated: no
  Storage info:
    Auto claim: no
    Disk Mappings:
      SSD: Local Pliant Disk (naa.5001e82002675164) - 186 GB, v2
      MD: Local ATA Disk (naa.500a07510f86d693) - 745 GB, v2
      SSD: Local ATA Disk (naa.55cd2e404c31ef84) - 186 GB, v2
      MD: Local ATA Disk (naa.500a07510f86d69d) - 745 GB, v2
  FaultDomainInfo:
    Not configured
  NetworkInfo:
    Adapter: vmk1 (10.27.51.54)
Host: 10.27.51.8
  Product: VMware ESXi 6.5.0 build-4887370
  vSAN enabled: yes
  Cluster info:
    Cluster role: agent
    Cluster UUID: 07fdaefe-a579-4048-ba95-df1f7ed3ba2f
    Node UUID: 59269241-1af4-bed2-5978-246e962c2408
    Member UUIDs: ["5926907e-8562-24c1-2766-246e962f5270", "592693fd-f919-1aee-9ae4-246e962f4850", "59269241-1af4-bed2-5978-246e962c2408"] (3)
    Node evacuated: no
  Storage info:
    Auto claim: no
    Disk Mappings:
      SSD: Local ATA Disk (naa.55cd2e404c31f898) - 186 GB, v2
      MD: Local ATA Disk (naa.500a07510f86d6bd) - 745 GB, v2
      SSD: Local Pliant Disk (naa.5001e8200264426c) - 186 GB, v2
      MD: Local ATA Disk (naa.500a07510f86d6bf) - 745 GB, v2
  FaultDomainInfo:
    Not configured
  NetworkInfo:
    Adapter: vmk1 (10.27.51.55)
No Fault Domains configured in this cluster
/vsan-mgmt-srvr.rainpole.local/Global/computers>

This looks good. The Member UUIDs show 3 members in our cluster. vSAN is enabled, and each host has claimed storage. It is also a good idea to run the health check from RVC, and look to make sure nothing is broken before continuing.

/vsan-mgmt-srvr.rainpole.local/Global/computers> vsan.health.health_summary 0
Overall health: yellow (Cluster health issue)
+----------------------------------------------------+---------+
| Health check                                       | Result  |
+----------------------------------------------------+---------+
| Cluster                                            | Warning |
| ESXi vSAN Health service installation              | Passed  |
| vSAN Health Service up-to-date                     | Passed  |
| Advanced vSAN configuration in sync                | Passed  |
| vSAN CLOMD liveness                                | Passed  |
| vSAN Disk Balance                                  | Passed  |
| Resync operations throttling                       | Passed  |
| vSAN cluster configuration consistency             | Warning |
| Time is synchronized across hosts and VC           | Passed  |
| vSphere cluster members match vSAN cluster members | Passed  |
+----------------------------------------------------+---------+
| Hardware compatibility                             | Warning |
| vSAN HCL DB up-to-date                             | Warning |
| vSAN HCL DB Auto Update                            | Passed  |
| SCSI controller is VMware certified                | Passed  |
| Controller is VMware certified for ESXi release    | Passed  |
| Controller driver is VMware certified              | Passed  |
+----------------------------------------------------+---------+
| Network                                            | Passed  |
| Hosts disconnected from VC                         | Passed  |
| Hosts with connectivity issues                     | Passed  |
| vSAN cluster partition                             | Passed  |
| All hosts have a vSAN vmknic configured            | Passed  |
| All hosts have matching subnets                    | Passed  |
| vSAN: Basic (unicast) connectivity check           | Passed  |
| vSAN: MTU check (ping with large packet size)      | Passed  |
| vMotion: Basic (unicast) connectivity check        | Passed  |
| vMotion: MTU check (ping with large packet size)   | Passed  |
| Network latency check                              | Passed  |
| Multicast assessment based on other checks         | Passed  |
| All hosts have matching multicast settings         | Passed  |
+----------------------------------------------------+---------+
| Physical disk                                      | Passed  |
| Overall disks health                               | Passed  |
| Metadata health                                    | Passed  |
| Disk capacity                                      | Passed  |
| Software state health                              | Passed  |
| Congestion                                         | Passed  |
| Component limit health                             | Passed  |
| Component metadata health                          | Passed  |
| Memory pools (heaps)                               | Passed  |
| Memory pools (slabs)                               | Passed  |
+----------------------------------------------------+---------+
| Data                                               | Passed  |
| vSAN object health                                 | Passed  |
+----------------------------------------------------+---------+
| Limits                                             | Passed  |
| Current cluster situation                          | Passed  |
| After 1 additional host failure                    | Passed  |
| Host component limit                               | Passed  |
+----------------------------------------------------+---------+
| Online health (Disabled)                           | skipped |
| Customer experience improvement program (CEIP)     | skipped |
+----------------------------------------------------+---------+

Details about any failed test below ...
Cluster - vSAN cluster configuration consistency: yellow
+------------+------+--------------------------------------------------------------+----------------+
| Host       | Disk | Issue                                                        | Recommendation |
+------------+------+--------------------------------------------------------------+----------------+
| 10.27.51.6 |      | Invalid request (Correct version of vSAN Health installed?). |                |
| 10.27.51.7 |      | Invalid request (Correct version of vSAN Health installed?). |                |
| 10.27.51.8 |      | Invalid request (Correct version of vSAN Health installed?). |                |
+------------+------+--------------------------------------------------------------+----------------+

Hardware compatibility - vSAN HCL DB up-to-date: yellow
+--------------------------------+---------------------+
| Entity                         | Time in UTC         |
+--------------------------------+---------------------+
| Current time                   | 2017-05-25 11:08:22 |
| Local HCL DB copy last updated | 2017-02-23 13:28:22 |
+--------------------------------+---------------------+
[[1.964084003, "initial connect"], [7.352502473, "cluster-health"], [0.011859728, "table-render"]]
/vsan-mgmt-srvr.rainpole.local/Global/computers>

OK, there is an issue with the HCL DB file (it is out of date and I should update it) and something about configuration consistency. I'm not sure what the latter one is at this point (still investigating), but overall the cluster seems to be OK. Great, I can now go ahead and deploy the remainder of the Photon Platform components (load balancer, photon controller, and agents for the ESXi hosts).
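To clear the HCL DB warning, the usual approach is to fetch the latest HCL database and then load it from RVC. Note that the download URL and the RVC command name below are from memory rather than from this deployment, so treat them as assumptions and check the vSAN HCL DB update documentation for your build. CURL is overridable for a dry run.

```shell
# CURL is overridable (e.g. CURL=echo) for a dry run
CURL="${CURL:-curl}"

fetch_hcl_db() {
  # Download the latest vSAN HCL database to /tmp/all.json
  $CURL -L -o /tmp/all.json https://partnerweb.vmware.com/service/vsan/all.json
}

# Then, from an RVC session on the vSAN Management appliance:
#   vsan.health.hcl_update_db -l /tmp/all.json 0
```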

Step 8: Deploy Photon Platform

First of all, I am going to make some modifications to my YAML file to ensure that the vSAN datastore is added. To do this, just make sure that the vSAN datastore is in the list of allowed-datastores, e.g. allowed-datastores: “isilion-nfs-01, vSANDatastore”. Don’t worry if you do not get the name exactly right – you can always modify the name of the datastore later on to match what is in the YAML, and Photon Platform will automatically detect it.
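For reference, this is the only line that changes in the YAML. The value is the one used in this deployment; the surrounding keys come from the sample file shipped with the installer, so I am showing just the fragment:

```yaml
# Fragment of pc-config.yaml: add the vSAN datastore to the existing
# allowed-datastores entry (surrounding structure omitted).
allowed-datastores: "isilion-nfs-01, vSANDatastore"
```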

Now we pop back onto the Photon Installer and rerun the photon-setup command seen earlier, but this time for the whole platform. Since the Lightwave appliance is already deployed, that part will be skipped. I have included the whole of the output for completeness.

root@photon-installer [ /opt/vmware/photon/controller/share/config ]# ../../bin/photon-setup platform install -config /opt/vmware/photon/controller/share/config/pc-config.yaml
Using configuration at /opt/vmware/photon/controller/share/config/pc-config.yaml
INFO: Parsing Lightwave Configuration
INFO: Parsing Credentials
INFO: Lightwave Credentials parsed successfully
INFO: Parsing Lightwave Controller Config
INFO: Parsing appliance
INFO: Parsing Credentials
INFO: Appliance Credentials parsed successfully
INFO: Parsing Network Config
INFO: Appliance network config parsed successfully
INFO: Appliance config parsed successfully
INFO: Lightwave Controller parsed successfully
INFO: Lightwave Controller config parsed successfully
INFO: Lightwave Section parsed successfully
INFO: Parsing Photon Controller Configuration
INFO: Parsing Photon Controller Image Store
INFO: Image Store parsed successfully
INFO: Managed hosts parsed successfully
INFO: Parsing Photon Controller Config
INFO: Parsing appliance
INFO: Parsing Credentials
INFO: Appliance Credentials parsed successfully
INFO: Parsing Network Config
INFO: Appliance network config parsed successfully
INFO: Photon Controllers parsed successfully
INFO: Photon section parsed successfully
INFO: Parsing Compute Configuration
INFO: Parsing Compute Config
INFO: Parsing Credentials
INFO: Parsing Credentials
INFO: Parsing Credentials
INFO: Parsing Credentials
INFO: Compute Config parsed successfully
INFO: Parsing LoadBalancer Configuration
INFO: Parsing LoadBalancer Config
INFO: Parsing appliance
INFO: Parsing Credentials
INFO: Appliance Credentials parsed successfully
INFO: Parsing Network Config
INFO: Appliance network config parsed successfully
INFO: LoadBalancer Config parsed successfully
INFO: NSX CNI config is not provided. NSX CNI is disabled
Validating configuration
Validating compute configuration
2017-05-25 09:48:05 INFO Executing the command esxcli system version get | grep Version | awk '{print $2 }' on hypervisor with ip '10.27.51.5'
2017-05-25 09:48:06 INFO Executing the command esxcli system version get | grep Build | awk '{print $2 }' on hypervisor with ip '10.27.51.5'
2017-05-25 09:48:08 INFO Executing the command esxcli system version get | grep Version | awk '{print $2 }' on hypervisor with ip '10.27.51.6'
2017-05-25 09:48:09 INFO Executing the command esxcli system version get | grep Build | awk '{print $2 }' on hypervisor with ip '10.27.51.6'
2017-05-25 09:48:10 INFO Executing the command esxcli system version get | grep Version | awk '{print $2 }' on hypervisor with ip '10.27.51.7'
2017-05-25 09:48:11 INFO Executing the command esxcli system version get | grep Build | awk '{print $2 }' on hypervisor with ip '10.27.51.7'
2017-05-25 09:48:12 INFO Executing the command esxcli system version get | grep Version | awk '{print $2 }' on hypervisor with ip '10.27.51.8'
2017-05-25 09:48:13 INFO Executing the command esxcli system version get | grep Build | awk '{print $2 }' on hypervisor with ip '10.27.51.8'
Validating identity configuration
Validating photon configuration
2017-05-25 09:48:15 INFO Installing Lightwave
2017-05-25 09:48:15 INFO Install Lightwave Controller at lightwave-1
2017-05-25 09:48:16 INFO Info: Lightwave already exists. Skipping deployment of lightwave.
2017-05-25 09:48:16 INFO COMPLETE: Install Lightwave Controller
2017-05-25 09:48:16 INFO Installing Photon Controller Cluster
2017-05-25 09:48:16 INFO Info: Installing the Photon Controller Cluster
2017-05-25 09:48:16 INFO Info: Photon Controller peer node at IP address [10.27.51.30]
2017-05-25 09:48:16 INFO Info: 1 Photon Controller was specified in the configuration
2017-05-25 09:48:16 INFO Start [Task: Photon Controller Installation]
2017-05-25 09:48:16 INFO Info [Task: Photon Controller Installation] : Deploying and powering on the Photon Controller VM on ESXi host: 10.27.51.5
2017-05-25 09:48:16 INFO Info: Deploying and powering on the Photon Controller VM on ESXi host: 10.27.51.5
2017-05-25 09:48:16 INFO Info [Task: Photon Controller Installation] : Starting appliance deployment
2017-05-25 09:48:24 INFO Progress [Task: Photon Controller Installation]: 20%
2017-05-25 09:48:27 INFO Progress [Task: Photon Controller Installation]: 40%
2017-05-25 09:48:30 INFO Progress [Task: Photon Controller Installation]: 60%
2017-05-25 09:48:33 INFO Progress [Task: Photon Controller Installation]: 80%
2017-05-25 09:48:36 INFO Progress [Task: Photon Controller Installation]: 0%
2017-05-25 09:48:36 INFO Stop [Task: Photon Controller Installation]
2017-05-25 09:48:36 INFO Info: Getting OIDC Tokens from Lightwave to make API Calls
2017-05-25 09:48:37 INFO Info: Waiting for Photon Controller to be ready
2017-05-25 09:49:03 INFO Info: Using Image Store - isilion-nfs-01, vSANDatastore
2017-05-25 09:49:04 INFO Info: Setting new security group(s): [rainpole.local\Administrators, rainpole.local\CloudAdministrators]
2017-05-25 09:49:05 INFO COMPLETE: Install Photon Controller Cluster
2017-05-25 09:49:05 INFO Installing Load Balancer
2017-05-25 09:49:05 INFO Start [Task: Load Balancer Installation]
2017-05-25 09:49:05 INFO Info [Task: Load Balancer Installation] : Deploying and powering on the HAProxy VM on ESXi host: 10.27.51.5
2017-05-25 09:49:05 INFO Info: Deploying and powering on the HAProxy VM on ESXi host: 10.27.51.5
2017-05-25 09:49:05 INFO Info [Task: Load Balancer Installation] : Starting appliance deployment
2017-05-25 09:49:15 INFO Progress [Task: Load Balancer Installation]: 20%
2017-05-25 09:49:17 INFO Progress [Task: Load Balancer Installation]: 40%
2017-05-25 09:49:18 INFO Progress [Task: Load Balancer Installation]: 60%
2017-05-25 09:49:20 INFO Progress [Task: Load Balancer Installation]: 80%
2017-05-25 09:49:22 INFO Progress [Task: Load Balancer Installation]: 0%
2017-05-25 09:49:22 INFO Stop [Task: Load Balancer Installation]
2017-05-25 09:49:22 INFO COMPLETE: Install Load Balancer
2017-05-25 09:49:22 INFO Preparing Managed Host esxi-2 to be managed by Photon Controller
2017-05-25 09:49:22 INFO Registering Managed Host esxi-2 with Photon Controller
The allowed datastore is {"ALLOWED_DATASTORES":"isilion-nfs-01, vSANDatastore"}
2017-05-25 09:49:29 INFO COMPLETE: Registration of Managed Host
2017-05-25 09:49:29 INFO Installing Photon Agent on Managed Host esxi-2
2017-05-25 09:49:29 INFO Start [Task: Hypervisor preparation]
2017-05-25 09:49:29 INFO Info: Found Lightwave VIB at /var/opt/vmware/photon/agent/vibs/VMware-lightwave-esx-1.0.0-5075989.vib
2017-05-25 09:49:29 INFO Info: Found Photon Agent VIB at /var/opt/vmware/photon/agent/vibs/photon-controller-agent-v1.1.1-319facd.vib
2017-05-25 09:49:29 INFO Info: Found Envoy VIB at /var/opt/vmware/photon/agent/vibs/vmware-envoy-latest.vib
2017-05-25 09:49:29 INFO Info [Task: Hypervisor preparation] : Establishing SCP session to host 10.27.51.6
2017-05-25 09:49:29 INFO Info [Task: Hypervisor preparation] : Skipping Syslog configuration on host 10.27.51.6
2017-05-25 09:49:29 INFO Info [Task: Hypervisor preparation] : Copying VIBs to host 10.27.51.6
2017-05-25 09:49:29 INFO Info: Copying file /var/opt/vmware/photon/agent/vibs/vmware-envoy-latest.vib to remote location /tmp/vmware-envoy-latest.vib
2017-05-25 09:49:30 INFO Info: Copying file /var/opt/vmware/photon/agent/vibs/photon-controller-agent-v1.1.1-319facd.vib to remote location /tmp/photon-controller-agent-v1.1.1-319facd.vib
2017-05-25 09:49:30 INFO Info: Copying file /var/opt/vmware/photon/agent/vibs/VMware-lightwave-esx-1.0.0-5075989.vib to remote location /tmp/VMware-lightwave-esx-1.0.0-5075989.vib
2017-05-25 09:49:30 INFO Info [Task: Hypervisor preparation] : Installing Photon Agent on host 10.27.51.6
2017-05-25 09:49:30 INFO Info: Leaving the domain in case the ESX host was already added
2017-05-25 09:49:30 INFO Executing the command /usr/lib/vmware/vmwafd/bin/domainjoin leave --force on hypervisor with ip '10.27.51.6'
2017-05-25 09:49:31 INFO Executing the command /etc/init.d/unconfigure-lightwave stop remove on hypervisor with ip '10.27.51.6'
2017-05-25 09:49:32 INFO Info: Unconfiguring Lightwave on the ESX host
2017-05-25 09:49:32 INFO Info: Uninstalling old Photon VIBS from remote system
2017-05-25 09:49:32 INFO Executing the command /usr/bin/esxcli software vib remove -f -n lightwave-esx -n photon-controller-agent -n envoy on hypervisor with ip '10.27.51.6'
2017-05-25 09:49:35 INFO Info: Installing Photon VIBS on remote system
2017-05-25 09:49:35 INFO Executing the command /usr/bin/esxcli software vib install -f -v /tmp/photon-controller-agent-v1.1.1-319facd.vib -v /tmp/vmware-envoy-latest.vib -v /tmp/VMware-lightwave-esx-1.0.0-5075989.vib on hypervisor with ip '10.27.51.6'
2017-05-25 09:50:37 INFO Info [Task: Hypervisor preparation] : Joining host 10.27.51.6 to Lightwave domain
2017-05-25 09:50:37 INFO Info: Attempting to join the ESX host to Lightwave
2017-05-25 09:50:37 INFO Executing the command /usr/lib/vmware/ic-deploy/bin/configure-lightwave.py 10.27.51.35 rainpole.local 'VxRail!23' 1 10.27.51.6 'VxRail!23' on hypervisor with ip '10.27.51.6'
2017-05-25 09:50:49 INFO Info: Restart Photon Controller Agent
2017-05-25 09:50:49 INFO Executing the command /etc/init.d/photon-controller-agent restart on hypervisor with ip '10.27.51.6'
2017-05-25 09:50:50 INFO Info [Task: Hypervisor preparation] : Removing VIBs from host 10.27.51.6
2017-05-25 09:50:50 INFO Info: Removing Photon VIBS from remote system
2017-05-25 09:50:50 INFO Executing the command /bin/rm -f /tmp/photon-controller-agent-v1.1.1-319facd.vib -v /tmp/vmware-envoy-latest.vib -v /tmp/VMware-lightwave-esx-1.0.0-5075989.vib on hypervisor with ip '10.27.51.6'
2017-05-25 09:50:51 INFO Stop [Task: Hypervisor preparation]
2017-05-25 09:50:51 INFO COMPLETE: Install Photon Agent
2017-05-25 09:50:51 INFO Provisioning the host to change its state to READY
2017-05-25 09:50:59 INFO COMPLETE: Provision Managed Host
2017-05-25 09:50:59 INFO Preparing Managed Host esxi-3 to be managed by Photon Controller
2017-05-25 09:50:59 INFO Registering Managed Host esxi-3 with Photon Controller
The allowed datastore is {"ALLOWED_DATASTORES":"isilion-nfs-01, vSANDatastore"}
2017-05-25 09:51:06 INFO COMPLETE: Registration of Managed Host
2017-05-25 09:51:06 INFO Installing Photon Agent on Managed Host esxi-3
2017-05-25 09:51:06 INFO Start [Task: Hypervisor preparation]
2017-05-25 09:51:06 INFO Info: Found Lightwave VIB at /var/opt/vmware/photon/agent/vibs/VMware-lightwave-esx-1.0.0-5075989.vib
2017-05-25 09:51:06 INFO Info: Found Photon Agent VIB at /var/opt/vmware/photon/agent/vibs/photon-controller-agent-v1.1.1-319facd.vib
2017-05-25 09:51:06 INFO Info: Found Envoy VIB at /var/opt/vmware/photon/agent/vibs/vmware-envoy-latest.vib
2017-05-25 09:51:06 INFO Info [Task: Hypervisor preparation] : Establishing SCP session to host 10.27.51.7
2017-05-25 09:51:06 INFO Info [Task: Hypervisor preparation] : Skipping Syslog configuration on host 10.27.51.7
2017-05-25 09:51:06 INFO Info [Task: Hypervisor preparation] : Copying VIBs to host 10.27.51.7
2017-05-25 09:51:06 INFO Info: Copying file /var/opt/vmware/photon/agent/vibs/vmware-envoy-latest.vib to remote location /tmp/vmware-envoy-latest.vib
2017-05-25 09:51:06 INFO Info: Copying file /var/opt/vmware/photon/agent/vibs/photon-controller-agent-v1.1.1-319facd.vib to remote location /tmp/photon-controller-agent-v1.1.1-319facd.vib
2017-05-25 09:51:06 INFO Info: Copying file /var/opt/vmware/photon/agent/vibs/VMware-lightwave-esx-1.0.0-5075989.vib to remote location /tmp/VMware-lightwave-esx-1.0.0-5075989.vib
2017-05-25 09:51:06 INFO Info [Task: Hypervisor preparation] : Installing Photon Agent on host 10.27.51.7
2017-05-25 09:51:06 INFO Info: Leaving the domain in case the ESX host was already added
2017-05-25 09:51:06 INFO Executing the command /usr/lib/vmware/vmwafd/bin/domainjoin leave --force on hypervisor with ip '10.27.51.7'
2017-05-25 09:51:08 INFO Executing the command /etc/init.d/unconfigure-lightwave stop remove on hypervisor with ip '10.27.51.7'
2017-05-25 09:51:09 INFO Info: Unconfiguring Lightwave on the ESX host
2017-05-25 09:51:09 INFO Info: Uninstalling old Photon VIBS from remote system
2017-05-25 09:51:09 INFO Executing the command /usr/bin/esxcli software vib remove -f -n lightwave-esx -n photon-controller-agent -n envoy on hypervisor with ip '10.27.51.7'
2017-05-25 09:51:12 INFO Info: Installing Photon VIBS on remote system
2017-05-25 09:51:12 INFO Executing the command /usr/bin/esxcli software vib install -f -v /tmp/photon-controller-agent-v1.1.1-319facd.vib -v /tmp/vmware-envoy-latest.vib -v /tmp/VMware-lightwave-esx-1.0.0-5075989.vib on hypervisor with ip '10.27.51.7'
2017-05-25 09:52:13 INFO Info [Task: Hypervisor preparation] : Joining host 10.27.51.7 to Lightwave domain
2017-05-25 09:52:13 INFO Info: Attempting to join the ESX host to Lightwave
2017-05-25 09:52:13 INFO Executing the command /usr/lib/vmware/ic-deploy/bin/configure-lightwave.py 10.27.51.35 rainpole.local 'VxRail!23' 1 10.27.51.7 'VxRail!23' on hypervisor with ip '10.27.51.7'
2017-05-25 09:52:23 INFO Info: Restart Photon Controller Agent
2017-05-25 09:52:23 INFO Executing the command /etc/init.d/photon-controller-agent restart on hypervisor with ip '10.27.51.7'
2017-05-25 09:52:24 INFO Info [Task: Hypervisor preparation] : Removing VIBs from host 10.27.51.7
2017-05-25 09:52:24 INFO Info: Removing Photon VIBS from remote system
2017-05-25 09:52:24 INFO Executing the command /bin/rm -f /tmp/photon-controller-agent-v1.1.1-319facd.vib -v /tmp/vmware-envoy-latest.vib -v /tmp/VMware-lightwave-esx-1.0.0-5075989.vib on hypervisor with ip '10.27.51.7'
2017-05-25 09:52:26 INFO Stop [Task: Hypervisor preparation]
2017-05-25 09:52:26 INFO COMPLETE: Install Photon Agent
2017-05-25 09:52:26 INFO Provisioning the host to change its state to READY
2017-05-25 09:52:34 INFO COMPLETE: Provision Managed Host
2017-05-25 09:52:34 INFO Preparing Managed Host esxi-4 to be managed by Photon Controller
2017-05-25 09:52:34 INFO Registering Managed Host esxi-4 with Photon Controller
The allowed datastore is {"ALLOWED_DATASTORES":"isilion-nfs-01, vSANDatastore"}
2017-05-25 09:52:36 INFO COMPLETE: Registration of Managed Host
2017-05-25 09:52:36 INFO Installing Photon Agent on Managed Host esxi-4
2017-05-25 09:52:36 INFO Start [Task: Hypervisor preparation]
2017-05-25 09:52:36 INFO Info: Found Lightwave VIB at /var/opt/vmware/photon/agent/vibs/VMware-lightwave-esx-1.0.0-5075989.vib
2017-05-25 09:52:36 INFO Info: Found Photon Agent VIB at /var/opt/vmware/photon/agent/vibs/photon-controller-agent-v1.1.1-319facd.vib
2017-05-25 09:52:36 INFO Info: Found Envoy VIB at /var/opt/vmware/photon/agent/vibs/vmware-envoy-latest.vib
2017-05-25 09:52:36 INFO Info [Task: Hypervisor preparation] : Establishing SCP session to host 10.27.51.8
2017-05-25 09:52:36 INFO Info [Task: Hypervisor preparation] : Skipping Syslog configuration on host 10.27.51.8
2017-05-25 09:52:36 INFO Info [Task: Hypervisor preparation] : Copying VIBs to host 10.27.51.8
2017-05-25 09:52:36 INFO Info: Copying file /var/opt/vmware/photon/agent/vibs/vmware-envoy-latest.vib to remote location /tmp/vmware-envoy-latest.vib
2017-05-25 09:52:37 INFO Info: Copying file /var/opt/vmware/photon/agent/vibs/photon-controller-agent-v1.1.1-319facd.vib to remote location /tmp/photon-controller-agent-v1.1.1-319facd.vib
2017-05-25 09:52:37 INFO Info: Copying file /var/opt/vmware/photon/agent/vibs/VMware-lightwave-esx-1.0.0-5075989.vib to remote location /tmp/VMware-lightwave-esx-1.0.0-5075989.vib
2017-05-25 09:52:37 INFO Info [Task: Hypervisor preparation] : Installing Photon Agent on host 10.27.51.8
2017-05-25 09:52:37 INFO Info: Leaving the domain in case the ESX host was already added
2017-05-25 09:52:37 INFO Executing the command /usr/lib/vmware/vmwafd/bin/domainjoin leave --force on hypervisor with ip '10.27.51.8'
2017-05-25 09:52:38 INFO Executing the command /etc/init.d/unconfigure-lightwave stop remove on hypervisor with ip '10.27.51.8'
2017-05-25 09:52:39 INFO Info: Unconfiguring Lightwave on the ESX host
2017-05-25 09:52:39 INFO Info: Uninstalling old Photon VIBS from remote system
2017-05-25 09:52:39 INFO Executing the command /usr/bin/esxcli software vib remove -f -n lightwave-esx -n photon-controller-agent -n envoy on hypervisor with ip '10.27.51.8'
2017-05-25 09:52:42 INFO Info: Installing Photon VIBS on remote system
2017-05-25 09:52:42 INFO Executing the command /usr/bin/esxcli software vib install -f -v /tmp/photon-controller-agent-v1.1.1-319facd.vib -v /tmp/vmware-envoy-latest.vib -v /tmp/VMware-lightwave-esx-1.0.0-5075989.vib on hypervisor with ip '10.27.51.8'
2017-05-25 09:53:42 INFO Info [Task: Hypervisor preparation] : Joining host 10.27.51.8 to Lightwave domain
2017-05-25 09:53:42 INFO Info: Attempting to join the ESX host to Lightwave
2017-05-25 09:53:42 INFO Executing the command /usr/lib/vmware/ic-deploy/bin/configure-lightwave.py 10.27.51.35 rainpole.local 'VxRail!23' 1 10.27.51.8 'VxRail!23' on hypervisor with ip '10.27.51.8'
2017-05-25 09:53:52 INFO Info: Restart Photon Controller Agent
2017-05-25 09:53:52 INFO Executing the command /etc/init.d/photon-controller-agent restart on hypervisor with ip '10.27.51.8'
2017-05-25 09:53:54 INFO Info [Task: Hypervisor preparation] : Removing VIBs from host 10.27.51.8 2017-05-25 09:53:54 INFO Info: Removing Photon VIBS from remote system 2017-05-25 09:53:54 INFO Executing the command /bin/rm -f /tmp/photon-controller-agent-v1.1.1-319facd.vib -v /tmp/vmware-envoy-latest.vib -v /tmp/VMware-lightwave-esx-1.0.0-5075989.vib on hypervisor with ip '10.27.51.8' 2017-05-25 09:53:55 INFO Stop [Task: Hypervisor preparation] 2017-05-25 09:53:55 INFO COMPLETE: Install Photon Agent 2017-05-25 09:53:55 INFO Provisioning the host to change its state to READY 2017-05-25 09:54:03 INFO COMPLETE: Provision Managed Host COMPLETE: Install Process has completed Successfully. root@photon-installer [ /opt/vmware/photon/controller/share/config ]#

Step 9 – Does Photon Platform see both datastores?



The deployment has completed successfully. The final step in all of this is to make sure that I can see the vSAN datastore (and the NFS datastore) from Photon Platform. First off, I can use the UI to determine this. Point a browser at https://<ip-address-of-load-balancer>:4343, log in with Lightwave administrator credentials and see if both datastores are present.

OK – this looks pretty promising. But can we verify that one of these is vSAN? Yes we can. Via the "tools" icon in the top right-hand corner of the Photon Platform UI, select the option to open the API browser. This gives us a Swagger interface to Photon Platform. One of the API calls allows you to "Get datastores". Here is the output from my setup:
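If you'd rather script this check than click through Swagger, the JSON returned by the "Get datastores" call can be filtered programmatically. The sketch below is a minimal example; the response shape (an "items" array with "name" and "type" fields, and a type of "VSAN" for vSAN datastores) is my assumption based on what the API browser shows, so verify the exact schema against your own Swagger output.

```python
import json

# Sample response body from the "Get datastores" API call, trimmed to the
# fields relevant here. The field names and type values are assumptions -
# check the Swagger schema on your own deployment.
sample_response = """
{
  "items": [
    {"name": "isilion-nfs-01", "type": "NFS"},
    {"name": "vSANDatastore", "type": "VSAN"}
  ]
}
"""

def vsan_datastores(body):
    """Return the names of datastores whose reported type is VSAN."""
    doc = json.loads(body)
    return [ds["name"] for ds in doc.get("items", []) if ds.get("type") == "VSAN"]

print(vsan_datastores(sample_response))
```

A check like this is handy as a post-deployment sanity test: if the list comes back empty, the vSAN datastore name in the YAML file probably doesn't match the actual datastore name (see the summary below).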

That looks good, doesn't it? I can see both the NFS datastore and my vSAN datastore. Cool! Now I'm ready to deploy a scheduler/orchestration framework. Kubernetes 1.6 is now supported, so I think I'll give that a go soon.

Summary

Things to look out for:

Make sure the ESXi version is supported. If you try to use a version that ships with vSAN 6.6, you'll have problems, because vSAN cannot handle cluster membership over unicast without vCenter at this point in time.

The issue with the vSAN manager OVA credentials highlighted in step 5 caught me out. Be aware of that one.

If you misspell the name of the vSAN datastore, it won't show up in Photon Platform. However, you can always rename the datastore to match what is in the YAML file, and Photon Platform will pick up the change once they match (within 15 minutes, I believe). This does work, as I had to do exactly that.

Although the vSAN RVC health check command reported some inconsistency in the output, it does not appear to have any effect on the overall functionality. We’re still looking into that one.



Thanks for reading this far. Hope you find it a useful reference.