The chant "VVOLs are coming" has been ringing since August 2011 when VMware first publicly presented the concept in a technical session at VMworld. Finally, after nearly four years, VMware announced the general availability of Virtual Volumes (VVOLs) at VMware Partner Exchange 2015 in conjunction with vSphere 6 and vSphere APIs for Storage Awareness (VASA) 2.0. A number of storage array vendors simultaneously announced VVOL support in select products. So what are VMware VVOLs, how do they work and what benefits do they bring to users?

Simply stated, VVOLs enable the provisioning, monitoring and management of application storage at a virtual machine (VM) level of granularity in the storage arrays that support them. Before VMware came into the world of computing, applications enjoyed a 1:1 relationship with a LUN or volume that was carved out of a storage array. The performance, capacity and data services (compression, caching, thin provisioning, snapshots, cloning, replication, deduplication, encryption and so on) were defined precisely but statically for that LUN. The application running on the physical server had access to all the services available to the LUN.

When VMware abstracted the compute side with the hypervisor, one could run multiple applications in the form of VMs on a single physical server. Storage remained essentially the same as before. While VMware spoke VMs, storage continued to speak LUNs and volumes.

The resulting mismatch wreaked havoc for a decade. The only way to make storage play nicely with VMware was to have a single LUN support a fairly large number of VMs. If an application started performing poorly, there was no way to find the exact cause since the storage performance data was only available at the LUN level. The lack of VM-level visibility made it difficult, if not impossible, to isolate the issue and deal with it.

How do VMware VVOLs work?

VVOLs are designed to solve this fundamental problem in a holistic way by eliminating the architectural mismatch between storage and VMs. The technology enables precise, policy-based allocation of storage resources to a VM. These resources may include the type, amount and availability of storage, as well as data services such as deduplication, snapshots, replication and so on. These resources can also be modified on the fly as application requirements change. To fully grasp the ins and outs of VMware VVOLs, it is important to understand the concepts outlined below.

Storage containers and virtual data stores

A NAS or SAN storage array is initially split into a handful of storage containers, each representing a different set of capacities and capabilities (classes of service). Storage containers are a logical construct and are typically created by a storage administrator. They are presented to vSphere as virtual data stores, thus requiring no changes on the vSphere side. VVOLs live inside the storage containers.

VVOLs, sometimes called virtual disks, define a new virtual disk container that is independent of the underlying physical storage representation (LUN, file system or object). In other words, regardless of the type of physical storage attached (except for DAS), storage is presented to vSphere in the abstracted format of a VVOL. A VVOL is the smallest unit of storage resource allocation that can be assigned to a VM (see "Anatomy of a VM"). It is also the smallest unit of measurement for the management of a storage array. This means resources can be provisioned at the VM level, and all monitoring and management can be performed at the VM level.

Anatomy of a VM

A virtual machine (VM) today consists of a swap file, a config file and at least one VMDK file. Each snapshot of the VMDK produces another VMDK file. In the new world of VVOLs, each of these files is represented by a VVOL. As a result, each VM produces a minimum of three VVOLs, and possibly more, depending on the number of VMDKs in the VM. For a mission-critical VM that is snapshotted every 15 minutes and retains one week's worth of snapshots, the number of VVOLs quickly reaches 675, assuming only one VMDK per VM. One can see how quickly the number of VVOLs adds up. To get VM-level management without VVOLs, you would have to create 675 LUNs. Given the 256-LUN limit for each VMware host, and the fact that most existing storage arrays have an internal LUN limit, this is impossible. VVOLs are designed to get past these limits. More importantly, because they are created on demand with automated management, they enable the creation of Web-scale infrastructures.
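The snapshot arithmetic above can be checked with a quick calculation. This is purely illustrative; the constants come from the article's example of one VMDK snapshotted every 15 minutes with one week of retention:

```python
# Illustrative arithmetic: how many VVOLs a single VM can generate.
# Assumes one config VVOL, one swap VVOL, one data VVOL per VMDK, plus
# one VVOL per retained snapshot (the article's example numbers).

SNAPSHOTS_PER_HOUR = 4   # one snapshot every 15 minutes
RETENTION_DAYS = 7       # keep one week of snapshots
VMDKS_PER_VM = 1

base_vvols = 2 + VMDKS_PER_VM  # config + swap + one VVOL per data disk
snapshot_vvols = SNAPSHOTS_PER_HOUR * 24 * RETENTION_DAYS * VMDKS_PER_VM
total = base_vvols + snapshot_vvols

print(snapshot_vvols)  # 672 retained snapshot VVOLs
print(total)           # 675 VVOLs for this one VM
```

At 672 retained snapshots plus the config, swap and data VVOLs, the per-VM count reaches 675, and it grows proportionally with each additional VMDK.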

Storage policy-based management and policy-driven control plane

The policy-driven control plane acts as a bridge between applications and the storage infrastructure. It is responsible for mapping the VM to a storage container that is capable of meeting the policy. In this software-defined storage model, the VM administrator uses the storage policy-based management (SPBM) interface in vSphere to define a set of policies that can be applied individually to each VM. These policies define the type of resources to be delivered by the storage arrays. For instance, a platinum policy may use flash resources and the best data protection, capacity optimization and disaster recovery capabilities of the available storage arrays, whereas a gold policy may use lesser resources. Since all VVOLs and VMs are provisioned and managed automatically via policy using SPBM, the VMware infrastructure can scale to thousands or tens of thousands of VMs without a corresponding increase in management costs. Contrast this with how difficult it is to upgrade or downgrade a VM that is tied to a LUN. In addition to allocating the appropriate storage services to the VM, the control plane is also responsible for ongoing monitoring of these VMs to ensure each VM continues to get the resources assigned to it by its policy.
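The matching SPBM performs can be pictured as a simple capability-subset check: a policy names required capabilities, containers advertise what they offer, and placement picks a compliant container. The sketch below is a toy model, not the vSphere API; all policy, container and capability names are invented:

```python
# Toy model of storage policy-based management (SPBM) placement.
# Policies are sets of required capabilities; storage containers
# advertise capability sets; a VM lands in a compliant container.
# All names here are hypothetical illustrations.

POLICIES = {
    "platinum": {"flash", "replication", "dedupe", "snapshots"},
    "gold":     {"flash", "snapshots"},
}

CONTAINERS = {
    "container-a": {"flash", "replication", "dedupe", "snapshots"},
    "container-b": {"flash", "snapshots"},
    "container-c": {"hdd", "snapshots"},
}

def place_vm(policy_name):
    """Return the first container whose capabilities satisfy the policy."""
    required = POLICIES[policy_name]
    for name, caps in CONTAINERS.items():
        if required <= caps:  # every required capability is advertised
            return name
    raise ValueError(f"no container satisfies policy {policy_name!r}")

print(place_vm("platinum"))  # container-a
print(place_vm("gold"))      # container-a (first compliant match)
```

The point of the sketch is the direction of control: the administrator edits the policy, and placement (and later compliance monitoring) follows automatically, rather than someone re-carving LUNs per VM.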

Virtual data plane

The virtual data plane abstracts all the storage services available on an array so they can be delivered (or not) to individual VMs. Historically, a VM sitting inside a given LUN received whatever capabilities and services were available to that LUN. For instance, a VM that did not need to be replicated to another site was replicated anyway if the LUN was set up with that service. These abstracted services are made available to the control plane for consumption. The resources do not have to come solely from external storage arrays; they may come from Virtual SAN (VSAN), from vSphere itself or from third parties. The control plane decides which services are to be made available to a given VM, based on the policy associated with that VM. VVOLs are VMware's implementation of the virtual data plane for external storage arrays, whereas VSAN provides x86 hypervisor-converged storage.

Protocol Endpoints

The communication between an ESXi host and the storage arrays is handled by Protocol Endpoints (PEs). A PE is a transport mechanism that connects VMs to their VVOLs on demand. One PE can connect to a very large number of VVOLs and does not suffer from the configuration limit of LUNs (a VMware host can only connect to 256 LUNs). In a Network File System (NFS) storage array environment, the PE is discoverable as an NFS server mount point; in a block storage environment, it is presented as a special LUN. Each VMDK produces its own VVOL, and VVOLs are bound to a PE on demand rather than being individually presented to the host.
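The scaling difference between PEs and direct LUN presentation can be sketched as follows. The class and method names are invented for illustration; in the real system, bind and unbind requests are handled out of band through the VASA provider:

```python
# Toy illustration of why Protocol Endpoints sidestep the 256-LUN limit:
# the host sees a single PE, and VVOLs are bound through it on demand
# rather than each consuming a LUN slot. All names are hypothetical.

HOST_LUN_LIMIT = 256

class ProtocolEndpoint:
    def __init__(self):
        self.bound = {}  # vvol_id -> binding handle

    def bind(self, vvol_id):
        """Bind a VVOL through this PE (e.g. at VM power-on)."""
        self.bound[vvol_id] = f"binding-{vvol_id}"
        return self.bound[vvol_id]

pe = ProtocolEndpoint()  # consumes one LUN/mount point on the host
for vm in range(1000):   # far more VVOLs than the host LUN limit
    pe.bind(f"vm{vm}-data.vmdk")

print(len(pe.bound))                   # 1000 VVOLs reachable...
print(len(pe.bound) > HOST_LUN_LIMIT)  # ...through one endpoint: True
```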

VASA provider

The VASA provider is software, typically implemented in the storage array, that tells the ESXi host and vCenter what capacities and capabilities are available in the storage array. It is through the VASA provider that the storage communicates whether it has flash, different types of hard disk drives, caching, snapshots, compression, deduplication, replication, encryption, cloning and other capabilities. Topology information is also communicated this way: Is it a Fibre Channel array? If so, how many ports? Does it have multipathing? All this information is used in the creation of policies and virtual disks. If the storage array has built-in quality of service (QoS) support, VASA informs vSphere and the ESXi hosts of its availability. VASA 2.0 is required for VVOL support.
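Conceptually, the VASA provider's report feeds policy creation: only capabilities the array actually advertises can be promised by a policy. The structure and field names below are invented for illustration; the real VASA 2.0 interface is a web-service API implemented by the array vendor, not a Python dictionary:

```python
# A sketch of the kind of information a VASA provider reports to vSphere.
# Field names and structure are hypothetical; only the principle matters:
# policies can only draw on capabilities the array advertises.

ARRAY_REPORT = {
    "capabilities": ["flash", "snapshots", "dedupe", "replication", "qos"],
    "topology": {
        "transport": "fibre-channel",
        "ports": 8,
        "multipathing": True,
    },
}

def capabilities_for_policy(report):
    """Only advertised capabilities may appear in a storage policy."""
    return set(report["capabilities"])

allowed = capabilities_for_policy(ARRAY_REPORT)
print("replication" in allowed)  # True: replication may be used in policies
print("encryption" in allowed)   # False: cannot be promised by a policy
```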

Overall benefits of VVOLs

By now, it should be evident that VVOLs represent a major shift in the way storage is provisioned and managed in a VMware environment. The concept of a LUN does not disappear, but storage administrators no longer have to deal with LUNs directly. All external storage becomes abstracted, as do all storage services. Applications become associated with the right type of storage and only those services that are needed for that VM. All monitoring and management becomes VM-centric, and resources are not wasted as they are in the LUN world. Performance management is more precise, and issues can be pinpointed more easily. As an application's needs change over time, resources can be added or subtracted automatically and non-disruptively. Also, no changes are needed to applications, and no forklift upgrades are required to get into the world of VVOLs. Customers can continue to run existing applications as they switch to VVOLs; the two environments can coexist and be managed from a common vCenter console. However, VVOLs do require vSphere 6 and VASA 2.0, and the storage array also must support VASA 2.0.

Can existing storage arrays support VVOLs?

A commonly asked question is "Will my existing storage array support VMware VVOLs?" The short answer is no. But if the question is "Can the existing storage architecture be modified to support VVOLs?" the answer is yes. A non-trivial amount of engineering is required, especially if the architecture is 15 to 20 years old. This is why storage products that support VVOLs are just coming to market, and only a few models at a time. EMC is starting with VNXe and VMAX3, and will add models/products over time. Hewlett-Packard is starting with 3PAR models. I expect each storage array vendor to have a phased strategy of support, given the magnitude of the task. It is important to remember that simply supporting the provisioning of VVOLs is not enough; an array's data services may not all be supported for VVOLs at first. Until they are, one may be able to provision VVOLs, but only those data services that are supported will apply. If replication is not supported, for example, the policies cannot include this capability. Getting into VVOLs will not be a simple matter of a one-time upgrade. Users need to understand the full picture of which models and services are supported to decide when to upgrade their infrastructure and in what order.

What about vendors that have VM-centric products?

NexGen Storage, Nutanix Inc., Scale Computing, SimpliVity, Tintri and several other vendors have already implemented VM-centricity and have been shipping and supporting products for several years. Do they lose all their advantage now that VVOLs are out? Does VVOL level the playing field? The short answer is "No way." All their data services are VM-centric already, and it will take other vendors a minimum of one year, and more likely two, to get all their models and data services supported for VVOLs. Another factor to keep in mind is that many of the players listed above have implemented extremely strong QoS features. And, lest we forget, VVOLs do not give you automatic QoS for applications; the underlying storage array must implement it. If it does, then QoS can be surfaced via VASA to vSphere and attached to the policies. There is a general misunderstanding in the marketplace that VVOLs inherently deliver QoS. This is not true. Very few storage arrays today have sophisticated QoS functionality built in, especially compared to products from some of the vendors mentioned above, and adding it is non-trivial. There are exceptions, of course. This means vendors with high-quality QoS will continue to enjoy a competitive advantage for the foreseeable future.