A brand new release of VMware Cloud Foundation 3.0 (VCF 3.0) has just been announced at VMworld 2018 in Las Vegas. This release brings a lot of new features but also some changes in the overall VCF 3.0 architecture. I’ve been following VMware Engineering’s internal development of VCF 3.0 very closely, and I love to see how this product matures.

What’s new in VCF 3.0

Architecture Changes

Bring Your Own Network – VCF 3.0 now supports customer choice of any network switch for the ToR and rack interconnect switches. This means a customer is no longer subject to the limitations of a VMware hardware compatibility list for the switching infrastructure in order to make VCF work in their environment. You can bring your own network vendor(s), your own network topologies, and devices from those vendors, and VMware Cloud Foundation will run on top of them. The switches are also configured by the customer’s network team.

Physical Hardware – ESXi hosts of different vendors and models are now allowed in the same rack, but hardware within each vSphere cluster should be identical, just as in a regular vSphere cluster. This means a dedicated VCF Hardware Compatibility List no longer applies to VCF 3.0; instead, nearly all vSAN Ready Node models can be used.

Multi-site support – VCF simplifies adoption of tested processes for a variety of day-2 tasks, including guidance for configuring vSAN Stretched Clusters in VCF environments.

Multi-cluster Workload Domains – customers can now create additional clusters within a single Workload Domain and expand it accordingly.

Cloud Foundation Builder VM – VCF no longer uses the VIA appliance but leverages a brand new Cloud Foundation Builder VM for the bring-up process. It is a Photon OS-based OVA that contains all components needed to deploy a full SDDC stack for the Management Domain.

Manual installation of ESXi – customers install the ESXi image using the vendor-specific ISO image. This ensures the correct drivers and VIBs are in place from the beginning.

Network Pools – VCF now uses pre-defined IP pools for the vSAN and vMotion VMkernel ports. These pools are used by SDDC Manager when configuring ESXi hosts.

Updated Bill-of-Materials – VCF now includes the vSphere 6.5 EP7/U2b release.
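To illustrate the network-pool idea, here is a minimal Python sketch of how an installer such as SDDC Manager might hand out vSAN and vMotion VMkernel addresses from pre-defined ranges. The pool names, subnets, and address ranges below are made up for illustration and do not reflect the actual SDDC Manager pool format.

```python
import ipaddress

# Hypothetical pre-defined network pools (names and ranges are illustrative only).
POOLS = {
    "vsan":    {"subnet": "172.16.11.0/24", "start": "172.16.11.10", "end": "172.16.11.50"},
    "vmotion": {"subnet": "172.16.12.0/24", "start": "172.16.12.10", "end": "172.16.12.50"},
}

def allocate(pool_name, used):
    """Return the next free IP from the named pool, tracking allocations in `used`."""
    pool = POOLS[pool_name]
    net = ipaddress.ip_network(pool["subnet"])
    ip = ipaddress.ip_address(pool["start"])
    end = ipaddress.ip_address(pool["end"])
    while ip <= end:
        if ip not in used and ip in net:
            used.add(ip)
            return str(ip)
        ip += 1
    raise RuntimeError(f"pool {pool_name!r} exhausted")

# Give each newly commissioned host one vSAN and one vMotion VMkernel address.
used = set()
for host in ["esxi-01", "esxi-02"]:
    print(host, allocate("vsan", used), allocate("vmotion", used))
```

The point of the pool is exactly this kind of determinism: hosts added to a Workload Domain get non-overlapping VMkernel addresses without anyone typing IPs by hand.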

User Interface Changes

SDDC Manager – a new Clarity-based UI with richer dashboards, streamlined workflows, and improved responsiveness and performance.

Hybridity Options