VMware ESXi 6.0.0b Release Notes

Updated on: 07 July 2015

ESXi 6.0.0b | 07 JULY 2015 | ISO Build 2809209

Check for additions and updates to these release notes.

What's in the Release Notes

The release notes cover the following topics:

What's New

This release of ESXi 6.0.0b delivers a number of bug fixes that have been documented in the Resolved Issues section.

Earlier Releases of ESXi 6.0

Features and known issues of ESXi 6.0 are described in the release notes for each release. Release notes for earlier releases of ESXi 6.0 are:

For compatibility, installation and upgrades, product support notices, and features see the VMware vSphere 6.0 Release Notes.

Internationalization

VMware vSphere 6.0 is available in the following languages:

English

French

German

Japanese

Korean

Simplified Chinese

Traditional Chinese

Components of VMware vSphere 6.0, including vCenter Server, ESXi, the vSphere Web Client, and the vSphere Client do not accept non-ASCII input.

Patches Contained in this Release

This release contains all bulletins for ESXi that were released prior to the release date of this product. See the My VMware page for more information about the individual bulletins.

Patch Release ESXi600-201507001 contains the following individual bulletins:

Patch Release ESXi600-201507001 contains the following image profiles:

Resolved Issues

This section describes resolved issues in this release:

CIM and API Issues

sfcbd service might stop responding

The sfcbd service might stop responding and you might find the following error message in the syslog file:



spSendReq/spSendMsg failed to send on 7 (-1)

Error getting provider context from provider manager: 11



This issue occurs when there is contention for a semaphore between the CIM server and the providers.



This issue is resolved in this release.




ESXi host might fail to send CIM indications from sfcb to ServerView Operations Manager after reboot

An ESXi host might fail to send CIM indications from sfcb to ServerView Operations Manager after you reboot the host. An error similar to the following is written to the syslog file:



spGetMsg receiving from 72 20805-11 Resource temporarily unavailable

rcvMsg receiving from 72 20805-11 Resource temporarily unavailable

--- activate filter failed for indication subscription

filter=root/interop:cim_indicationfilter.creationclassname=br"CIM_IndicationFilter"

,name="FTSIndicationFilter",systemcreationclassname="CIM_ComputerSystem",

systemname="xx.xxx.xxx.xx", handler=root/interop:cim_indicationhandlercimxml.creationclassname="CIM_IndicationHandlerCIMXML"

,name="FTSIndicationListener:xx.xxx.xxx.xx", systemcreationclassname="CIM_ComputerSystem",systemname="xx.xxx.xxx.xx",

status: rc 7, msg

No supported indication classes in filter query or no provider found



This issue is resolved in this release.




Unable to monitor hardware status with vCenter Server

If the CIM client sends two requests of Delete Instance to the same CIM indication subscription, the sfcb-vmware_int service might stop responding due to memory contention. You might not be able to monitor the hardware status with the vCenter Server and ESXi.



This issue is resolved in this release.




After upgrading firmware, false alarms might appear in the Hardware Status tab

After upgrading firmware, false alarms appear in the Hardware Status tab of the vSphere Client even if the system has been idle for two to three days. Error messages similar to the following might be logged in the /var/log/syslog.log file:



sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x8 FAILED cc=0xffffffff

sfcb-vmware_raw[nnnnn]: IpmiIfcFruChassis: Reading FRU Chassis Info Area length for 0x0 FAILED

sfcb-vmware_raw[nnnnn]: IpmiIfcFruBoard: Reading FRU Board Info details for 0x0 FAILED cc=0xffffffff

sfcb-vmware_raw[nnnnn]: IpmiIfruInfoAreaLength: Reading FRU for 0x0 at 0x70 FAILED cc=0xffffffff

sfcb-vmware_raw[nnnnn]: IpmiIfcFruProduct: Reading FRU product Info Area length for 0x0 FAILED

sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: data length mismatch req=19,resp=3

sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0001,resp=0002

sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0002,resp=0003

sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0003,resp=0004

sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0004,resp=0005

sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0005,resp=0006

sfcb-vmware_raw[nnnnn]: IpmiIfcSelReadEntry: EntryId mismatch req=0006,resp=0007



This issue is resolved in this release.




Openwsman might not support createInstance()

The ESXi WSMAN agent (Openwsman) included in ESXi 5.0 Update 3 or ESXi Patch Release ESXi500-201406001, ESXi 5.1 Update 2 or ESXi Patch Release ESXi510-201407001, or ESXi 5.5 Update 2 might not support an array parameter to createInstance(). When you run the wsmand service to create a CIM instance with an array-type property value using createInstance() in Openwsman, messages similar to the following are displayed:



wsmand[6266]: working on property: DataSize

wsmand[6266]: prop value: 572

wsmand[6266]: xml2property([0xnnnn]DataSize:572)

wsmand[6266]: working on property: PData

wsmand[6266]: prop value: 7

wsmand[6266]: xml2property([0xnnnn]PData:7)

wsmand[6266]: *** xml2data: Array unsupported

wsmand[6266]: working on property: ReturnCode

wsmand[6266]: prop value: 0



This issue is resolved in this release.




Memory leak in CIM providers while sending CIM Indications

A Common Information Model (CIM) provider running on an ESXi host might experience a memory leak while sending CIM indications from the Small-Footprint CIM Broker (sfcb) service.



This issue is resolved in this release.




Unable to monitor hardware status on an ESXi host

An ESXi host might report an error in the Hardware Status tab due to the unresponsive hardware monitoring service (sfcbd). An error similar to the following is written to the syslog.log file:



sfcb-hhrc[5149608]: spGetMsg receiving from 65 5149608-11 Resource temporarily unavailable

sfcb-hhrc[5149608]: rcvMsg receiving from 65 5149608-11 Resource temporarily unavailable

sfcb-hhrc[5149608]: Timeout or other socket error

sfcb-LSIESG_SMIS13_HHR[6064161]: spGetMsg receiving from 51 6064161-11 Resource temporarily unavailable

sfcb-LSIESG_SMIS13_HHR[6064161]: rcvMsg receiving from 51 6064161-11 Resource temporarily unavailable

sfcb-LSIESG_SMIS13_HHR[6064161]: Timeout or other socket error

sfcb-kmoduleprovider[6064189]: spGetMsg receiving from 57 6064189-11 Resource temporarily unavailable

sfcb-kmoduleprovider[6064189]: rcvMsg receiving from 57 6064189-11 Resource temporarily unavailable

sfcb-kmoduleprovider[6064189]: Timeout or other socket error



This issue is resolved in this release.




The openwsmand service might stop responding when RAID controller properties are changed

The openwsmand service might stop responding when you change RAID controller properties using the ModifyInstance option. This happens when any of the following properties are changed:



Rebuild priority

Consistency check priority

Patrol read priority



This issue is resolved in this release.




CIM client might display an error due to multiple enumeration

When you execute multiple enumerate queries on the VMware Ethernet port class using the CBEnumInstances method, servers running ESXi 6.0 might report an error message similar to the following:



CIM error: enumInstances Class not found



This issue occurs when the management software fails to retrieve information provided by the VMware_EthernetPort() class. When the issue occurs, a query on memstats might display the following error message:



MemStatsTraverseGroups: VSI_GetInstanceListAlloc failure: Not found.



This issue is resolved in this release.




Miscellaneous Issues

Unable to end the sfcb process when the UserWorld is stuck in HeapMoreCore

When the UserWorld is stuck in HeapMoreCore with an infinite timeout due to an improper stop order, you are unable to end the sfcb process. An error message similar to the following is displayed:



failed to kill /sbin/sfcbd (8314712): No such process



This issue is resolved in this release.




Networking Issues

ESXi host might become unusable with no connectivity until reboot

When an ESXi host has three or more vmknics, if you reset network settings from the DCUI or apply a host profile where the vmknics are on a DVS, including the management vmknic, a Hostctl exception might occur. This might cause the host to become unusable with no connectivity until it is rebooted.



This issue is resolved in this release.




Throughput statistics for TX and RX might be very high causing unnecessary remapping of source ports

The values of TX and RX throughput statistics might be very high leading to unnecessary remapping of source ports to different VMNICs. This might be due to a miscalculation of statistics by the Load-Based Teaming algorithm.



This issue is resolved in this release.




ESXi host might lose network connectivity after enabling port mirroring sessions

An ESXi host or a virtual machine might lose network connectivity after you enable port mirroring sessions on the vSphere Distributed Switch.



This issue is resolved in this release.




Security Issues

NTP package updated

The NTP package is updated to address a stability issue.





Server Configuration Issues

Windows 8 and Windows Server 2012 virtual machine reboot on an ESXi host might fail

Attempts to reboot a Windows 8 or Windows Server 2012 virtual machine on an ESXi host might fail. For more information, see Knowledge Base article 2092807.



This issue is resolved in this release.




Attempts to expand a VMFS5 datastore beyond 16 TB on a storage device fail

An ESXi host might fail when you attempt to expand a VMFS5 datastore beyond 16 TB. An error message similar to the following is written to the vmkernel.log file:



cpu38:xxxxx)LVM: xxxx: [naa.600000e00d280000002800c000010000:1] Device expanded (actual size 61160331231 blocks, stored size 30580164575 blocks)

cpu38:xxxxx)LVM: xxxx: [naa.600000e00d280000002800c000010000:1] Device expanded (actual size 61160331231 blocks, stored size 30580164575 blocks)

cpu47:xxxxx)LVM: xxxxx: LVM device naa.600000e00d280000002800c000010000:1 successfully expanded (new size: 31314089590272)

cpu47:xxxxx)Vol3: xxx: Unable to register file system ds02 for APD timeout notifications: Already exists

cpu47:xxxxx)LVM: xxxx: Using all available space (15657303277568).

cpu7:xxxxx)LVM: xxxx: Error adding space (0) on device naa.600000e00d280000002800c000010000:1 to volume xxxxxxxx-xxxxxxxx- xxxx-xxxxxxxxxxxx: No space left on device

cpu7:xxxxx)LVM: xxxx: PE grafting failed for dev naa.600000e00d280000002800c000010000:1 (opened: t), vol xxxxxxxx- xxxxxxxx-xxxx-xxxxxxxxxxxx: Limit exceeded

cpu7:xxxxx)LVM: xxxx: Device scan failed for <naa.600000e00d280000002800c000010000:1>: Limit exceeded

cpu7:xxxxx)LVM: xxxx: LVMProbeDevice failed for device naa.600000e00d280000002800c000010000:1: Limit exceeded

cpu32:xxxxx)<3>ata1.00: bad CDB len=16, scsi_op=0x9e, max=12

cpu30:xxxxx)LVM: xxxx: PE grafting failed for dev naa.600000e00d280000002800c000010000:1 (opened: t), vol xxxxxxxx- xxxxxxxx-xxxx-xxxxxxxxxxxx: Limit exceeded

cpu30:xxxxx)LVM: xxxx: Device scan failed for <naa.600000e00d280000002800c000010000:1>: Limit exceeded



This issue is resolved in this release.




Overall ESXi utilization might decrease when you set the CPU limit of a single processor virtual machine

When you set the CPU limit of a single processor virtual machine, the overall ESXi utilization might decrease due to a defect in the ESXi scheduler. This happens when the ESXi scheduler makes incorrect CPU load-balancing estimations and considers virtual machines as running. For more details, see Knowledge Base article 2096897.



This issue is resolved in this release.




The iSCSI network port-binding might fail even when there is only one active uplink on a switch

The iSCSI network port-binding fails even when there is only one active uplink on a switch.



The issue is resolved in this release by counting only the active uplinks to decide whether the VMkernel interface is compliant.




iSCSI initiator name allowed when enabling software iSCSI via esxcli

This release provides the option to pass an iSCSI initiator name to the esxcli iscsi software set command.
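A minimal sketch of the new option is shown below. The flag names and the IQN value are assumptions for illustration; verify the exact syntax with esxcli iscsi software set --help on the host, and note that the adapter name (vmhba33 here) varies by host.

```shell
# Enable the software iSCSI adapter and supply a custom initiator name in one step.
# The initiator IQN below is a placeholder, not a value from this document.
esxcli iscsi software set --enabled=true --name=iqn.1998-01.com.vmware:example-host

# Confirm the initiator name that the adapter reports (adapter name varies by host).
esxcli iscsi adapter get --adapter=vmhba33
```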




Persistently mounted VMFS snapshot might not get mounted

Persistently mounted VMFS snapshot volumes might not get mounted after you reboot the ESXi host. Log messages similar to the following are written to the syslog file:



localcli: Storage Info: Unable to Mount VMFS volume with UUID nnnnnnnn-nnnnnnnn-nnnn-nnnnnnnnnnnn.

Sysinfo error on operation returned status : Bad parameter count. Please see the VMkernel log for detailed error information

localcli: StorageInfo: Unable to restore one or more conflict-resolved VMFS volumes



This issue is resolved in this release.




IOPS lower than the configured limit for read-write operations

When you limit the I/O operations per second (IOPS) for a disk from a virtual machine, you might observe IOPS lower than the configured limit if the size of the read-write (I/O) operation is greater than or equal to 32 KB. This is because the I/O scheduler considers 32 KB as one scheduling cost unit of an I/O operation. Any operation larger than 32 KB is counted as multiple operations, which results in I/O throttling.



This issue is resolved in this release by making the SchedCostUnit value configurable according to the application requirement.



To view the current value, run the following command:

esxcfg-advcfg -g /Disk/SchedCostUnit



To set a new value, run the following command:

esxcfg-advcfg -s 65536 /Disk/SchedCostUnit
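The throttling arithmetic described above can be sketched as follows. The variable names and figures are illustrative, not VMware internals; only the 32 KB default cost unit comes from the text.

```shell
# Sketch of SchedCostUnit accounting: an I/O larger than the cost unit
# is charged as multiple scheduling units, shrinking the effective IOPS.
cost_unit=32768     # default /Disk/SchedCostUnit: 32 KB per cost unit
iops_limit=1000     # example configured IOPS limit on the virtual disk
io_size=65536       # a 64 KB read-write operation

# Round up: number of cost units charged per operation.
cost=$(( (io_size + cost_unit - 1) / cost_unit ))
effective_iops=$(( iops_limit / cost ))
echo "64 KB I/O costs ${cost} units -> effective limit ${effective_iops} IOPS"
```

Raising SchedCostUnit to 65536, as in the command above, makes a 64 KB operation count as a single unit again, so the observed IOPS matches the configured limit.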




The vmkiscsid process might stop responding

The vmkiscsid process might stop responding when you run an iSCSI adapter rescan operation using IPv6.



This issue is resolved in this release.




ESXi host might not receive SNMP v3 traps with third-party management tool

An ESXi host might not receive SNMP v3 traps when you are using a third-party management tool to collect SNMP data. Entries similar to the following are written to the /var/snmp/syslog.log file:



snmpd: snmpd: snmp_main: rx packet size=151 from: 172.20.58.220:59313

snmpd: snmpd: SrParseV3SnmpMessage: authSnmpEngineBoots(0) same as 0, authSnmpEngineTime(2772) within 0 +- 150 not in time window

....



For further information, see Knowledge Base article 2108901.



This issue is resolved in this release.




Attempts to boot an ESXi 6.0 host from an iSCSI SAN might fail

Attempts to boot an ESXi 6.0 host from an iSCSI SAN might fail. This happens when the ESXi host is unable to detect the iSCSI Boot Firmware Table (iBFT), causing boot to fail. This issue might occur with any iSCSI adapter, including Emulex and QLogic.



This issue is resolved in this release.




The setPEContext VASA API call to a provider might fail

The setPEContext VASA API call to a provider might fail. An error message similar to the following might be reported in the vvold.log file:



VasaOp::ThrowFromSessionError [#47964]: ===> FINAL FAILURE setPEContext, error (INVALID_ARGUMENT / failed to invoke operation: setPEContext[com.emc.cmp.osls.api.base.InstanceOps.checkPropertyValue():269 C:ERROR_CLASS_SOFTWARE

F:ERROR_FAMILY_INVALID_PARAMETER X:ERROR_FLAG_LOGICAL Property inBandBindCapability is required and cannot be null.] / ) VP (VmaxVp) Container (VmaxVp) timeElapsed=19 msecs (#outstanding 0)

error vvold[FFDE4B70] [Originator@6876 sub=Default] VendorProviderMgr::SetPEContext: Could not SetPEContext to VP VmaxVp (#failed 1): failed to invoke operation



This issue is resolved in this release.




Random EMC targets might not recognize the initiator

Applying a host profile initially assigns a randomly generated iSCSI initiator name and then renames it to the user-defined name. This might cause some EMC targets to not recognize the initiator.



This issue is resolved in this release.




IBM BladeCenter HS23 might be unable to write the coredump file when the active coredump partition is configured

An IBM BladeCenter HS23 that boots from a USB device is unable to write the coredump file when the active coredump partition is configured on a USB device. A purple screen displays a message that the dump is initiated but not completed.



This issue is resolved in this release.




Latest PCI IDs added

The pci.ids file is refreshed to contain the latest PCI IDs.




Storage Issues

Slow NFS storage performance in virtual machines running on VSA provisioned NFS storage

Slow NFS storage performance is observed on virtual machines running on VSA provisioned NFS storage. Delayed acknowledgements from the ESXi host for the NFS Read responses might cause this performance issue.



This patch resolves this issue by disabling delayed acknowledgements for NFS connections.



This issue is resolved in this release.




Upgrade and Installation Issues

First boot of VMware ESXi 6.0 on a Dell PowerEdge VRTX might halt server due to a segmentation fault

The first boot of VMware ESXi 6.0 on a Dell PowerEdge VRTX halts the server after loading the vmw_satp_alua module due to a segmentation fault during controller discovery.



The lsu-lsi-lsi-mr3-plugin and lsu-lsi-megaraid-sas-plugin VIBs are updated to upgrade the Storelib from version 4.26 to 4.30 to resolve this issue.





Virtual Machine Management Issues

Virtual hardware versions prior to version 11 incorrectly claim support for Page Attribute Table

The virtual hardware versions prior to version 11 incorrectly claim support for Page Attribute Table (PAT) in CPUID[1].EDX[PAT].



This issue is resolved in this release by extending support for the IA32_PAT MSR to all versions of virtual hardware.



Note: This support is limited to recording the guest's PAT in the IA32_PAT MSR. The guest's PAT does not actually influence the memory types used by the virtual machine.




Deleting a VDI environment enabled desktop pool might delete VMDK files from a different desktop pool

When you delete a VDI environment enabled desktop pool, the VMDK files of virtual machines from a different desktop pool might get deleted. Multiple virtual machines from different desktop pools might be affected. This happens when, after the disk is deleted, the parent directory is also deleted due to an error that perceives the directory as empty even though it is not. The virtual machine might fail to power on with the following error:



[nnnnn info 'Default' opID=nnnnnnnn] [VpxLRO] -- ERROR task-19533 -- vm-1382 -- vim.ManagedEntity.destroy: vim.fault.FileNotFound:

--> Result:

--> (vim.fault.FileNotFound) {

--> dynamicType = ,

--> faultCause = (vmodl.MethodFault) null,

--> file = "[cntr-1] guest1-vm-4-vdm-user-disk-D-nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn.vmdk",

--> msg = "File [cntr-1] guest1-vm-4-vdm-user-disk-D-nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn.vmdk was not found",

--> }

--> Args



VMDK deletion occurs when a particular virtual machine's guest operating system and user data disk are spread across different datastores. This issue does not occur when all VM files reside in the same datastore.



This issue is resolved in this release.




Automatic option for a virtual machine startup or shutdown might not work

The Automatic option for virtual machine startup or shutdown might not work when the vmDelay variable value is set to more than 1800 seconds. This might occur in the following situations:



If the vmDelay variable is set to 2148 seconds or more, the automatic virtual machine startup or shutdown might not be delayed, and might cause the hostd service to fail.

If the vmDelay variable is set to more than 1800 seconds, then the vim-cmd command hostsvc/autostartmanager/autostart might not delay the auto startup or shutdown tasks on a virtual machine. This is because the command might timeout if the task is not completed within 30 minutes.

Note: Specify the blockingTimeoutSeconds value in the hostd configuration file, /etc/vmware/hostd/config.xml. If the sum of delays is larger than 1800 seconds, you must set blockingTimeoutSeconds to a value larger than 1800 seconds.



For example:

<vimcmd>

<soapStubAdapter>

<blockingTimeoutSeconds>7200</blockingTimeoutSeconds>

</soapStubAdapter>

</vimcmd>
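As a quick way to check whether the sum of configured delays exceeds 1800 seconds, the host's autostart settings can be listed with vim-cmd. The sub-command names below are assumptions based on typical ESXi hosts; verify them by running vim-cmd hostsvc/autostartmanager with no arguments.

```shell
# List the configured autostart sequence and per-VM start/stop delays.
vim-cmd hostsvc/autostartmanager/get_autostartseq

# Show the default autostart settings (start delay, stop delay, stop action).
vim-cmd hostsvc/autostartmanager/get_defaults
```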



This issue is resolved in this release.






Virtual SAN Issues

ESXi host in a Virtual SAN cluster with 40 or more nodes might display a purple diagnostic screen

An ESXi host that is part of a Virtual SAN cluster with 40 or more nodes might display a purple diagnostic screen due to a limit check when the nodes are added back into the membership list of a new master after master failover.



This issue is resolved in this release.




Reducing the proportionalCapacity policy does not affect the disk usage

Reducing the proportionalCapacity policy does not affect the disk usage. This is because modifications made to the policy parameters are not passed on to the components on which they are applied.



This issue is resolved in this release.




Provisioning a virtual machine using a storage policy with the Flash Read Cache Reservation attribute might fail

Attempts to provision a virtual machine using a storage policy with the Flash Read Cache Reservation attribute fails in a Virtual SAN All-flash cluster environment.



This issue is resolved in this release.




Lightweight Virtual SAN Observer capable of collecting statistics without requiring hostd introduced

The Virtual SAN Observer is unable to collect statistics when hostd is not reachable as the collection happens through hostd. This release introduces a lightweight Virtual SAN Observer capable of collecting statistics without requiring hostd.



This issue is resolved in this release.

VMware Tools Issues

VMware Tools might fail to automatically upgrade when the VM is powered on for the first time

When a virtual machine is deployed or cloned with guest customization and the VMware Tools Upgrade Policy is set to allow the VM to automatically upgrade VMware Tools at next power on, VMware Tools might fail to automatically upgrade when the VM is powered on for the first time.



This issue is resolved in this release.
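The upgrade policy mentioned above is set per virtual machine. As a hedged illustration (the option name and values are taken from VMware Tools documentation; verify against your release), the setting appears in the VM's .vmx configuration file as:

```
# In the virtual machine's .vmx file:
# "upgradeAtPowerCycle" upgrades VMware Tools automatically at the next
# power-on; "manual" leaves upgrades to the administrator.
tools.upgrade.policy = "upgradeAtPowerCycle"
```

The same policy can also be set through the vSphere Web Client under the VM's VMware Tools options.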

Attempts to open telnet using the start telnet://xx.xx.xx.xx command might fail

After installing VMware Tools on a Windows 8 or Windows Server 2012 guest operating system, attempts to open telnet using the start telnet://xx.xx.xx.xx command fail with the following error message:



Make sure the virtual machine's configuration allows the guest to open host applications



This issue is resolved in this release.

The vShield Endpoint drivers renamed as Guest Introspection drivers

The vShield Endpoint drivers are renamed as Guest Introspection drivers, and two of these drivers, the NSX File Introspection driver (vsepflt.sys) and the NSX Network Introspection driver (vnetflt.sys), can now be installed separately. This allows you to install the file driver without installing the network driver.



This issue is resolved in this release.
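As a sketch of such a selective install, assuming the standard VMware Tools silent-install syntax and the FileIntrospection MSI feature name (confirm both against the VMware Tools documentation for your release), the file driver can be installed without the network driver like this:

```
REM Hypothetical silent install adding only the NSX File Introspection
REM driver (vsepflt.sys); NetworkIntrospection (vnetflt.sys) is not added.
setup64.exe /S /v "/qn ADDLOCAL=FileIntrospection"
```

Omitting a feature name from ADDLOCAL simply leaves that driver uninstalled.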

Applications such as QuickTime might experience a slowdown in performance with Unidesk

When you use Unidesk in conjunction with VMware View or vSphere with vShield Endpoint enabled, applications such as QuickTime might experience a slowdown in performance. This is due to an interoperability issue that is triggered when the Unidesk volume serialization filter driver and the vShield driver are present on the stack. For each file opened by the application, even if it is just for reading the attributes, the vShield driver calls FltGetFileNameInformation, causing further processing to be performed on the files. As a result, the Unidesk driver opens directories, causing an overall application performance degradation.



This issue is resolved in this release.

IPv6 Router Advertisements do not function as expected when tagging 802.1q with VMXNET3 adapters on a Linux virtual machine

IPv6 Router Advertisements (RA) do not function as expected when tagging 802.1q with VMXNET3 adapters on a Linux virtual machine as the IPv6 RA address intended for the VLAN interface is delivered to the base interface.



This issue is resolved in this release.
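For context, a typical guest-side 802.1q configuration that exercises this path looks like the following (the interface name eth0 and VLAN ID 100 are assumptions; the commands require root and the iproute2 tools):

```
# Create an 802.1q VLAN subinterface on the VMXNET3 NIC and bring it up.
ip link add link eth0 name eth0.100 type vlan id 100
ip link set dev eth0.100 up
# With the fix, RA-derived addresses and routes appear on the VLAN
# interface eth0.100 rather than on the base interface eth0:
ip -6 addr show dev eth0.100
```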

Quiesced snapshot might fail during snapshot initialization

A quiesced snapshot might fail due to a race condition during snapshot initialization. An error message similar to the following is displayed on the Tasks and Events tab of the vCenter Server:



An error occurred while saving the snapshot



You might also see the following information in the guest event log:



System Event Log

Source: Microsoft-Windows-DistributedCOM

Event ID: 10010

Level: Error

Description:

The server {nnnnnnnn-nnnn-nnnn-nnnn-nnnnnnnnnnnn} did not register with DCOM within the required timeout.



This issue is resolved in this release.
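A quiesced snapshot of the kind described above can be triggered from the ESXi shell with vim-cmd (the VM ID 10 below is hypothetical; look up the real ID first). The last two arguments request no memory capture and guest quiescing:

```
# List VM IDs, then create a quiesced snapshot (memory=0, quiesce=1).
vim-cmd vmsvc/getallvms
vim-cmd vmsvc/snapshot.create 10 "pre-patch" "quiesced snapshot" 0 1
```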

Performing a quiesced snapshot on a virtual machine running Microsoft Windows 2008 or later might fail

Attempts to perform a quiesced snapshot on a virtual machine running Microsoft Windows 2008 or later might fail, and the VM might panic with a blue screen and an error message similar to the following:



A problem has been detected and Windows has been shut down to prevent damage to your computer. If this is the first time you've seen this Stop error screen restart your computer. If this screen appears again, follow these steps:



Disable or uninstall any anti-virus, disk defragmentation or backup utilities. Check your hard drive configuration, and check for any updated drivers. Run CHKDSK /F to check for hard drive corruption, and then restart your computer.



This issue is resolved in this release. For more information, see Knowledge Base article 2115997.

Known Issues

For existing issues that have not been resolved and documented in the Resolved Issues section, see the Known Issues section in the VMware vSphere 6.0 Release Notes.