As you can see, not much has actually changed besides the number of powered-on VMs and hosts a vCenter SSO domain can hold. But I thought it was worth a mention.

Another small update is that vSphere port groups are now secure out of the box, meaning the security settings are all disabled (set to Reject) by default. Previously this was not the case. If you ask me this should have been done years ago, but it's great that VMware fixed it in this release.

VM hardware version 17 brings two new virtual devices:

Watchdog timer A watchdog timer helps reset the VM if the guest OS is no longer responding. This is especially important for clustered database or filesystem applications.

Precision Time Protocol (PTP) With PTP you can achieve sub-millisecond clock accuracy. To use PTP, a service on the ESXi host has to be enabled and a Precision Clock device has to be added to the virtual machine.

Support for vSGX/Secure Enclaves on Intel CPUs. Intel Software Guard Extensions (SGX) allows applications to work with the hardware to create a secure enclave that cannot be viewed by the guest OS or the hypervisor. This is rather new, but it can be used by organizations that want to run pieces of code in an encrypted portion of memory.

External Platform Services Controllers Deprecated

As VMware previously announced, the external Platform Services Controller (PSC) is now really deprecated: you can no longer deploy a vCenter Server with an external PSC.

VMware vCenter Migration options

Starting with the release of vSphere 7, the installer can automatically execute the following two operations, which used to be manual processes in previous versions:

Converging an External PSC configuration to an Embedded PSC configuration.

Converting a Windows VMware vCenter server to the VMware vCenter Appliance.

This is great! We no longer need to use the (sometimes) complicated CLI on the vCenter Appliance for these jobs. Just load up the vCenter Server Appliance installer and converge/migrate!

vCenter Server Profiles

With the release of vSphere 7 VMware introduces a new feature called “vCenter Server Profiles”. With this new feature you can create a consistent configuration profile that can be used across multiple vCenter Servers. This configuration profile can be:

Exported in JSON format from, and imported into, a vCenter Server through REST API calls. Four operations are available: “List”, “Export”, “Validate” and “Import” configuration.

Maintained with version control between vCenter Servers.

Used for an easy revert to a known good vCenter Server configuration.

These vCenter Server Profiles can be consumed with DCLI, PowerCLI, Ansible, Puppet, Chef and other automation tools. You can propagate the profiles across a maximum of 100 vCenter Servers at this time.

There is also a validation mechanism in place to ensure that the configuration you would like to import is valid. If the validation returns an invalid status you will see the error that is causing it, so that it can be fixed. I can imagine this functionality being particularly useful for organizations that want to maintain a consistent configuration state across all of their vCenter Servers.
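To give a feel for how the four REST operations could be scripted, here is a minimal Python sketch that builds the request URLs for each profile action. Note that the base path and action parameters below are assumptions for illustration, not the official vSphere API paths — check the vSphere Automation API reference for the real endpoints.

```python
# Hedged sketch: building request URLs for the four vCenter Server Profile
# operations (list, export, validate, import). The endpoint paths below are
# hypothetical placeholders, not the documented vSphere API.

BASE = "https://{vc}/rest/appliance/infraprofile"  # assumed base path

def profile_url(vc_host: str, action: str) -> str:
    """Build the request URL for a profile action on a given vCenter Server."""
    actions = {
        "list": "/configs",                      # list available profiles
        "export": "/configs?action=export",      # export configuration as JSON
        "validate": "/configs?action=validate",  # validate before importing
        "import": "/configs?action=import",      # apply a profile
    }
    return BASE.format(vc=vc_host) + actions[action]
```

You could feed the exported JSON of a known-good vCenter Server through the validate and import URLs of your other vCenter Servers to keep them consistent.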

vCenter Server Profiles

Improved vCenter Server Certificate Management

A couple of releases ago VMware added the possibility to view and manage your vCenter Server certificates right from within the VMware vSphere UI. As we all know this looks something like this:

Certificate Management in vCenter Server 6.7 U3

If you, like me, only ever change the vCenter Server machine SSL certificate to put an internally or externally approved certificate on it, the solution certificates never get touched. This is something VMware recognized and changed. The vsphere-webclient, vpxd and vpxd-extensions certificates are no longer visible or manageable from the vSphere UI. That makes sense, since these are back-end services anyway. The UI has been simplified down to the bare essentials and now looks like this:

Certificate Management in vCenter Server 7.0

You can easily replace any certificate from the UI now, but also programmatically with APIs! A great addition to the vSphere UI if you ask me!

vCenter Server Multi-Homing support

With this new release multi-homed vCenter Servers are finally supported. As we all know, and as William Lam pointed out in his blog, this was never actually supported by VMware, even though you can easily add a new NIC through the VCSA VAMI nowadays. Well, I have good news for you! Starting with this release, it is officially supported.

The only remarks you should know about are that the second adapter (NIC1) is reserved for vCenter HA (VCHA) and that there is a limit of 4 NICs per vCenter Server. So everybody that used a second (or third/fourth) NIC in a previous version of vSphere for a dedicated backup network or an external access network can now breathe and relax, because it's supported!

vCenter Server 7.0 Multi-Homing support

vCenter Server Content Library

The vCenter Server Content Library also received some love in this update. You can now use advanced versioning on templates, check templates out and in, and revert templates to previous versions. Some UI elements have also changed to better reflect the versioning possibilities. You can now also edit the advanced configuration of a Content Library to increase transfer efficiency and change the auto-sync frequency.

Another big change within vCenter is the new vCenter Server Update Planner, a completely new feature built right into the vSphere Client. It helps organizations see two things without leaving the vSphere Client:

Pre-Update Checks With this check you can select your target vCenter Server version from a list of available updates and receive a Pre-Update Check report that you can use to plan the upgrade.

Interoperability Matrix Before upgrading a VMware vSphere platform it is extremely useful to check compatibility with the other vSphere products being used on the platform. We always had to go to the online Interoperability Matrix for this. Starting with vCenter Server 7.0 we don't have to anymore: there is now a built-in interoperability matrix right inside the vSphere Client that automatically detects installed vSphere products and shows their compatible versions, all with a link to the release notes.



vCenter Server Update Planner – Interoperability Matrix

This new feature will save VI-admins precious time by instantly displaying the versions that we can upgrade to within our own vSphere environments. The last great thing about this is that you can also do “What-If” upgrades, which means it will check what will happen, or what will have to be checked, once you decide to upgrade the environment. This is a great way to provide easy pre-upgrade test results to your colleagues.

vSphere Lifecycle Manager (vLCM)

vSphere Update Manager (VUM) received quite the change in this release: it is being replaced by vSphere Lifecycle Manager (vLCM). This new tool aims to give VI-admins a new way of upgrading their vSphere environments. You can now finally patch, update or upgrade ESXi servers at scale with RESTful APIs to automate lifecycle management, and use a desired-state image while you're at it.

These desired-state images are cluster-wide and are called cluster images. The thought behind these cluster images is that you only have to maintain one single image for the entire cluster. Previously, in VUM, you could potentially use more than one baseline/image inside a cluster, which in turn might give you inconsistencies that you don't need or want. These cluster images consist of the following parts:

ESXi Base Image The installation software required to install the ESXi hypervisor.

Vendor Add-Ons Vendor-specific driver Add-Ons.

Firmware and Drivers Add-Ons Host firmware and drivers.

Components Separate .VIB based features.
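To make the shape of such an image concrete, a desired-state cluster image exported as JSON might look roughly like the sketch below. All field names and version strings are illustrative placeholders, not the exact vLCM schema:

```json
{
  "base_image": { "version": "7.0.0" },
  "vendor_add_on": { "name": "HPE-Custom-AddOn", "version": "7.0.0" },
  "firmware_add_on": { "manager": "HPE OneView", "version": "2020.03" },
  "components": [
    { "name": "example-nic-driver", "version": "1.0.0" }
  ]
}
```

The point of the single document is exactly what the list above describes: one declarative definition per cluster, instead of a mix of baselines per host.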



Yes, you are not dreaming! You can now use vLCM to apply host firmware updates and ESXi patches in one maintenance window. At the moment this only works in conjunction with Dell OpenManage and HPE OneView, though. You don't need two maintenance windows, or different tools, to update both the firmware and the ESXi hypervisor: it all works from within vSphere. Another cool thing vLCM introduces is that it checks the VMware Compatibility Guide (VCG) and Hardware Compatibility List (HCL) using the new built-in recommendations engine during the remediation pre-check phase. This removes the risk of unsupported drivers or firmware in your environment. #lovethisfeature

vLCM also provides better insight during remediations by displaying a detailed log in the vSphere UI. This helps VI-admins better understand the progress vLCM has made. Because of this detailed status report, you can now choose to “Skip Remaining Hosts” on the same page if you feel the need. vLCM can also detect compliance drift and act on it by remediating hosts that have drifted from the desired-state cluster image, but to me that's not actually that different from the VUM compliance checks and remediations.

Another cool thing vLCM can do is that you can Export and Import the desired state Cluster Image to other collectively managed clusters, or to other vCenter Servers. You can export the images in the following three formats:

JSON Download the cluster image as a JSON file. This only contains metadata about the image, no software packages.



ISO You can download an installable ISO image based upon the Cluster Image.



ZIP (Offline Bundle) Download a ZIP offline bundle that contains all components and software packages. You can upload this into an Update Manager depot, or use it for ROBO sites where you don't want to transfer images from remote vCenter Server Appliances (VCSA) to local sites because of network throughput constraints.



vLCM export cluster image

As far as I know, vLCM is not enabled by default. You can keep using VUM on a cluster by leaving the cluster setting “Manage image setup and updates on all hosts collectively” disabled. If you wish to transition to the new collectively managed cluster image, you just have to enable this setting on existing clusters: edit the cluster and check the tickbox.

Enable vLCM on a cluster

The last thing I want to say on vLCM is that VMware also made sure it works with Auto Deploy. You can create an Auto Deploy rule that applies to all hosts or to a pattern, select the cluster image that you created earlier, and you're done! Auto Deploy automatically creates a new image profile based on the cluster image that fresh ESXi hosts can use. Don't forget to activate the rule though; you won't be able to use it if you don't.

vCenter Server Namespaces

When you use the vSphere with Kubernetes capabilities, you get a new grouping construct within vSphere. The need for this new construct, next to vApps and resource pools, comes from the fact that modern applications consist of more than just virtual machines. The new construct is based upon the Kubernetes namespace model. A namespace is a collection of resource objects: basically a supersized vApp/resource pool combination that can hold more than virtual machines, such as serverless functions, Kubernetes environments, containers, disks, etc. On this namespace you can set QoS services, limits, encryption, security, availability and access-control policies.

This new construct also simplifies the vSphere inventory a lot. Instead of having thousands of virtual machines in the vSphere inventory, you get a couple of namespaces that hold all of the services for a modern application. A great example of this simplification is that VI-admins can, for example, vMotion a namespace and with that single action potentially move hundreds of virtual machines in one click. What this looks like in the UI can be seen in the screenshot below:

vCenter Server Namespace construct


Future vMotion (vMotion 2.0)

vMotion received a large overhaul in this release! I actually talked about these improvements last November. As I said back then, VMware has been working on a complete overhaul of vMotion for a while now; it hasn't actually changed all that much since it was first released. So, it's about time some major issues were fixed! vMotion has been improved in the following three main areas:

vMotion memory page tracing performance enhancements During a vMotion, dirty memory pages (changed memory pages) need to be tracked so that a sync can occur during the final switchover to the other ESXi host. Tracing these dirty memory pages is done by installing traces on all vCPUs, which in turn causes a short hiccup/performance drop (microseconds) during the pre-copy vMotion phase. The enhanced vMotion doesn't install a trace on all vCPUs, but only on one vCPU, which means the other vCPUs are still free to run workloads.

The time a vCPU spends installing traces is significantly reduced.

One vCPU is in charge of the memory page tracing.

vMotion Old + vMotion 2.0

Compacted memory bitmap transfers A memory bitmap is a map of the pages that have changed (become dirty) during the vMotion process. This bitmap can get quite large when you are talking about “monster” virtual machines. The memory bitmap for a virtual machine with 1 GB of RAM is 32 KB, so back in the day this wasn't hard to transfer to another host. But for a virtual machine with around 24 TB of RAM, the memory bitmap is already 768 MB, which takes about 2 seconds to transfer. This also gives the virtual machine a long stun time, during which no operations can be executed. This has been enhanced by compacting the memory bitmap. The bitmap is mostly blank, because the bulk of the memory has already been copied over during the pre-copy phase and those pages no longer need to be tracked. When you compact the blanks out of the memory bitmap, its size gets reduced significantly. Because of this compaction, the bitmap of a virtual machine with 24 TB of memory (768 MB uncompacted) no longer takes 2 seconds to transfer, but only 175 milliseconds.
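The bitmap sizes quoted above follow directly from one bit of bookkeeping per 4 KiB memory page. A quick Python check of the arithmetic:

```python
# Worked example of the memory-bitmap sizes quoted above: vMotion tracks
# dirty pages in a bitmap with one bit per 4 KiB memory page.

PAGE_SIZE = 4096  # bytes per memory page

def bitmap_size_bytes(vm_memory_bytes: int) -> int:
    """Size of the dirty-page bitmap: one bit per page, eight bits per byte."""
    pages = vm_memory_bytes // PAGE_SIZE
    return pages // 8

GiB = 2**30
TiB = 2**40

assert bitmap_size_bytes(1 * GiB) == 32 * 1024     # 32 KiB for a 1 GiB VM
assert bitmap_size_bytes(24 * TiB) == 768 * 2**20  # 768 MiB for a 24 TiB VM
```

So the bitmap grows linearly with VM memory, which is exactly why compacting the blanks out of it matters for monster VMs.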



Compacted memory bitmap transfer

Fast Suspend Resume addition This is a technique that VMware uses for hot-adding devices and for Storage vMotion. It differs from vMotion because it doesn't have to move the active memory to another host. The technique creates a shadow virtual machine, adds the new resources or performs the Storage vMotion, quiesces the virtual machine, copies over the memory metadata and resumes the virtual machine. Transferring this memory metadata is currently done using one vCPU, so the stun time during one of the operations mentioned above is rather large when you are editing virtual machines with a large amount of RAM. VMware enhanced this by using all the vCPUs the virtual machine has, which significantly reduces the stun time.



And last but not least, there are also new additions to the selectable EVC modes in a cluster. You can now enable EVC for Intel Cascade Lake and AMD Zen 2 (EPYC Rome) CPUs.

These are welcome changes to the vMotion technique, since virtual machines are getting bigger and bigger (monster VMs) and applications are becoming more sensitive to latency and performance drops nowadays. A pre-copy stun, a performance decrease during vMotion or a post-copy stun can really leave an impact on the continuity of applications, but with these enhancements they should no longer have a large impact. If you want to read more on vMotion (the old way), just click here!

Distributed Resource Scheduler 2.0 (DRS 2.0)

DRS was released back in 2006. Since then virtual machines have grown and applications have changed, with cloud-native services and containers being added; DRS, however, did not change that much. There were a couple of enhancements and changes in vSphere 6.7, though. The current version of DRS is a cluster-centric service: it wants to ensure that the load on the cluster is balanced so that hosts don't experience contention if they don't have to. If DRS sees a cluster imbalance, it calculates whether vMotioning a virtual machine to another host will fix the imbalance. If it does, the vMotion is automatically launched.

The new version, DRS 2.0, is a complete revamp of the first version. DRS 2.0 is now a virtual machine (workload) centric service instead of a cluster-centric service. To explain the changes made to DRS, I will cover the three main categories:

DRS cost-benefit model: As I said above, DRS 2.0 focuses on a virtual machine “happiness” score instead of a cluster imbalance. This “VM happiness” score, a new metric introduced with this release, ranges from 0% (not happy) to 100% (happy). A lower bucket score does not directly mean the VM is not running properly; it's a number that displays the execution efficiency of a VM. VMs are placed into VM DRS Score “buckets”: 0-20%, 20-40%, 40-60%, 60-80% and 80-100%. The score is calculated from over a dozen metrics; the core metrics that contribute most are host CPU cache behavior, VM CPU ready time, swapped memory, migration cost and VM burst capacity. You can find the VM DRS Score right in the vSphere Client UI on a per-VM basis. DRS 2.0 checks the score and decides to vMotion a VM to another ESXi host if that improves it.



DRS 2.0 VM Happiness buckets
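The bucketing described above is simple enough to sketch. The real score is computed by DRS from over a dozen metrics; this only shows how a given score maps onto the five 20%-wide buckets:

```python
# Minimal sketch of the VM DRS Score buckets described above. Scoring itself
# is done by DRS from many metrics; we only map a score to its bucket label.

def drs_bucket(score: float) -> str:
    """Map a VM DRS Score (0-100) onto its 20%-wide bucket label."""
    if not 0 <= score <= 100:
        raise ValueError("score must be between 0 and 100")
    lower = min(int(score // 20) * 20, 80)  # a score of 100 falls in the top bucket
    return f"{lower}-{lower + 20}%"

assert drs_bucket(95) == "80-100%"
assert drs_bucket(7) == "0-20%"
```

Remember: a VM landing in a lower bucket signals lower execution efficiency, not necessarily a broken VM.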

Support for new resources and devices DRS 2.0 can now do proper distributed load scheduling based on network load. The old DRS version never really took network load as a metric to base load-balancing decisions on; it would prefer a CPU or memory metric over the network metric. DRS 2.0 is also hardware-aware: if you are using vGPUs on some virtual machines, DRS 2.0 will only vMotion those virtual machines to hosts that can provide the vGPUs, and there is initial-placement support for VMs with vGPUs and PCIe devices configured in passthrough. Virtual machines with a fluctuating stable/unstable workload profile also receive some love in the new DRS version: unstable/stable workloads are now part of the cost metric for DRS, which effectively means that virtual machines don't get pushed around all the time depending on their workload. DRS 2.0 will also check how long the new VM happiness score would stay stable; that is, it calculates the benefit of a move and how long that benefit would last. This ensures that unnecessary vMotions occur less often than before.



Faster and scalable Regarding scalability, DRS 2.0 changed a core mechanism it used for its cluster-wide standard-deviation model. Because DRS is now virtual machine centric it doesn't need to take a cluster-wide snapshot to calculate what to do, which means DRS now runs every minute instead of every 5 minutes. DRS also no longer uses active memory; it now uses granted memory. This changed because the world has changed and businesses don't really overcommit on their memory anymore. DRS scalable shares provide relative resource entitlement to ensure that VMs in a resource pool set to High shares really get resource prioritization over lower-share resource pools. This setting is not enabled by default, except on vSphere with Kubernetes. In the previous DRS version it could happen that VMs in a resource pool with shares set to Normal got the same resource entitlement as those in a High-share resource pool. This is now fixed.
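The scalable-shares idea can be illustrated with a small sketch: the pool's effective share value grows with the number of VMs it contains, so a High pool keeps its priority over a Normal pool no matter how the pools are populated. The per-level share values below are assumptions for illustration only:

```python
# Hedged sketch of scalable shares: effective pool shares scale with the
# number of VMs in the pool. The per-level values are illustrative, not
# the exact numbers vSphere uses.

SHARES_PER_VM = {"low": 500, "normal": 1000, "high": 2000}

def pool_shares(level: str, vm_count: int) -> int:
    """Effective pool shares = per-VM share value for the level x VM count."""
    return SHARES_PER_VM[level] * vm_count

# With static pool shares, a Normal pool packed with VMs could dilute a High
# pool's per-VM entitlement; scalable shares keep the 2:1 ratio intact:
assert pool_shares("high", 10) / pool_shares("normal", 10) == 2.0
```

This is the behavior the paragraph above describes: High-share pools now reliably get prioritization over lower-share pools.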



vSphere Identity Federation

Ever wondered when you would be able to use identity federation to access the vSphere Client? Well, you don't have to wait any longer: vCenter Server 7.0 has identity federation capabilities. You are now able to add enterprise identity providers (IdPs) to handle the authentication. This removes the need for cloud providers to provide customers with credentials and to manage their passwords.

Initially this will only work with Microsoft Active Directory Federation Services (ADFS); later on it will work with more providers. It works like you would expect: the vSphere Client redirects to the external IdP, where you enter your credentials, and once authenticated you are logged into the vSphere Client. A small overview can be seen below.
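The redirect step follows the standard OAuth2/OIDC authorization-code pattern. As a rough sketch of what the vSphere Client sends the browser to, here is how such an authorization URL is constructed — the ADFS endpoint path and client parameters are hypothetical placeholders, not values from the vSphere documentation:

```python
# Hedged sketch of the redirect-based login flow, modeled on a standard
# OAuth2/OIDC authorization request. Endpoint and parameters are illustrative.
from urllib.parse import urlencode

def build_authorize_url(idp_base: str, client_id: str, redirect_uri: str) -> str:
    """Construct the URL the client would redirect the browser to for login."""
    params = {
        "response_type": "code",  # authorization-code flow
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": "openid",
    }
    return f"{idp_base}/adfs/oauth2/authorize?{urlencode(params)}"
```

After the user authenticates at the IdP, the browser returns to the redirect URI with a code that is exchanged for tokens, and the session in the vSphere Client is established.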

vSphere Identity Federation Overview

vSphere Trust Authority (vTA)

The vSphere Trust Authority (vTA) creates a hardware root of trust using a separate ESXi host cluster. It is responsible for ensuring that ESXi hosts are running trusted software and for releasing encryption keys only to trusted ESXi hosts. The vTA is here to create a Trusted Infrastructure.

The vTA cluster runs the attestation services and can be configured with the principle of least privilege, ensuring that only a select number of VI-admins have access to this cluster.

The vTA cluster checks whether a workload ESXi host passes attestation before it gets encryption keys from the KMS server. Once a workload ESXi host passes attestation it is marked as trusted; if it doesn't, it is marked as untrusted. A valid attestation report from the Attestation Service can be made a requirement before an ESXi host receives any encryption keys from the Trusted Key Provider.

If you have a secured/trusted workload running in your environment, let's say an encrypted virtual machine, the vTA and its trusted/untrusted ESXi hosts ensure that the secured workload is only allowed to move to trusted ESXi hosts. You will not be able to move a secured workload from a trusted to an untrusted ESXi host, since the vTA will not provide that untrusted host the encryption keys it needs.
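The policy described above boils down to two simple rules, sketched here in Python. The data model is purely illustrative; the real decision is made by the attestation and key provider services:

```python
# Minimal sketch of the vTA rules described above: keys are released only to
# attested (trusted) hosts, so encrypted VMs can only move between trusted
# hosts. Illustrative only.

def can_release_keys(attestation_passed: bool) -> bool:
    """The Trusted Key Provider hands out keys only to attested hosts."""
    return attestation_passed

def can_migrate_encrypted_vm(source_trusted: bool, target_trusted: bool) -> bool:
    """An encrypted VM may only move between trusted hosts."""
    return source_trusted and target_trusted

assert not can_migrate_encrypted_vm(True, False)  # untrusted target gets no keys
```

In other words, the untrusted host simply never receives the key material needed to run the workload.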

A key difference from previous versions is that the vCenter Server no longer needs a connection with the KMS server, like it did in vCenter 6.5/6.7. This also means that the vCenter Server no longer distributes the encryption keys to the ESXi hosts; you connect the KMS server to the vTA environment directly. This can have some implications for your KMS licensing though, so you should check that before using the vTA services. TPM 2.0 is a requirement for vTA, but most recent hardware has TPM 2.0 implemented, so that shouldn't be an issue.

vTA Overview

This concludes the noteworthy changes to VMware Cloud Foundation 4.0 and VMware vCenter Server 7.0. Because there is a lot more to come, I am going to split this blog post into several pages. Please continue to the second page to find out more about vSphere 7.0 and its new enhancements across the complete product portfolio.

More on this release can be found in multiple blog posts by VMware HERE.