Virtualization technology has been used in high-end servers for quite some time. The evolution of virtualization has brought with it the desire to reduce the software (S/W) overhead portion of virtualization, particularly for I/O devices. Virtualization enables a single physical I/O device to masquerade as multiple virtualized I/O devices, one for each virtual machine and each completely independent of the others. I/O virtualization is migrating from a predominantly software implementation to one that incorporates more virtualization functionality into the hardware of next-generation Enterprise Computing systems. Incorporating more functionality into hardware improves overall system performance, but requires a number of changes to the PCI Express® (or PCIe®) interface in order to support I/O virtualization.

What is Virtualization?

In the context of a computer system, the physical components of a computer include the processors, memory, I/O and storage systems, all working together to run the operating system and software applications. An example of a non-virtualized system is shown in Figure 1.

Figure 1: Non-Virtualized System

The virtualization of this system begins by dividing the physical components into a set of physical resources that operate independently with well-defined interfaces and functions. Next, a Virtualization Intermediary (VI) is introduced to the system that takes sole ownership of the underlying hardware. The Virtualization Intermediary abstracts the details of the physical resources, isolates them and then maps them into virtual resources while managing their allocation in the system. Since the Virtualization Intermediary is creating and managing the virtual resources, it can create multiple virtual resources for each of the physical resources while providing isolation between them.

In the context of virtualization, a System Image (SI) is the software component, consisting of the operating system and applications, that is assigned to run on a specific virtual resource. The system image only needs to know the details of the Virtual System to which it is assigned.

The virtualization of the system in Figure 1 is shown in Figure 2.

Figure 2: Virtualized System

Multiple system images can simultaneously exist in a virtualized system with each system image being managed and isolated from one another by the Virtualization Intermediary. The set of virtual resources required to run a system image is referred to as a Virtual System (VS).

Once a computer system has been virtualized, it gains much greater flexibility because a fixed physical system can be presented in a variety of different ways. Isolating the hardware, the operating system(s) and the applications from one another through virtualization yields a number of benefits, including resource sharing, resource isolation, simplified version management, legacy software migration, resource extension and resource hot-change.

I/O Virtualization

Now that the basics and benefits of virtualization have been discussed, what is the downside to using virtualization? Where is the catch? Well, the Virtualization Intermediary is the sole component responsible for implementing all the virtualization features in the system, and manages every system image while intercepting and processing each and every hardware access. Today’s Virtualization Intermediary is implemented entirely in software, so when scaling the system beyond a certain number of system images and virtualization entities, it is easy to see how the Virtualization Intermediary becomes the performance bottleneck of a virtualized system. To improve the performance of the virtualized system, virtualization capabilities must be migrated into hardware.

Figure 3: I/O Virtualization

Over the last several years, Intel has been adding a number of virtualization features to their processors and chipsets to improve performance in PCs and servers. However, that only solves part of the performance issue. The performance bottleneck in these systems is the I/O, so new functionality needs to be added into the I/O subsystem hardware to directly support virtualization and improve the overall system performance. The need for I/O subsystems to support virtualization has been the main motivation behind recent industry efforts to define and standardize virtualization functionality for I/O devices. The goals are to: (1) provide scalable and extensible views of I/O devices with reasonable hardware costs; and (2) limit or remove the intervention of the Virtualization Intermediary during system operation. Devices supporting I/O virtualization are called IOV devices, and introducing them into a virtualized system allows the system to have a thinner and lighter VI, which relies on a physical system with more complex and sophisticated IOV devices. Conceptually, this is shown in Figure 3.

What is the Relationship between PCI Express and IOV?

PCI Express is the major performance-oriented interface protocol for many modern compute and telecommunications platforms, so it is not surprising that it is being extended to incorporate virtualization. PCI Express devices provide support for I/O virtualization based on a collection of specifications that precisely define what these devices must do to resolve the performance limitations of the traditional software-oriented virtualization approach. At the core is the PCI Express specification, currently at version 2.1. The next major revision, PCI Express 3.0, will move the link speed from 5.0 GT/s to 8.0 GT/s. Added as part of the base specification, but as optional features to incorporate I/O virtualization, are the specifications for Alternative Routing-ID Interpretation (ARI), Function Level Reset (FLR) and Address Translation Services (ATS).

There are two types of I/O virtualization defined in relation to the PCI Express specifications: Multi Root and Single Root. The specification for Multi Root I/O Virtualization (MR-IOV) defines the features to extend the concepts to platforms in which multiple hosts want to share a common pool of resources. The Single Root I/O Virtualization (SR-IOV) specification defines the features to enable I/O virtualization in a system with a single PCIe Root Complex and is the focus for the remainder of this article.

SR-IOV

SR-IOV defines the ability to create lightweight functions that scale to large numbers at acceptable hardware cost. To understand SR-IOV a bit more, let’s review some definitions:

· The SR-IOV capability is a new extended capability structure that is defined in the configuration space.

· A Physical Function (or PF) is simply a traditional PCI Express function when discussed in the context of SR-IOV. The name Physical Function is a way of distinguishing it from a Virtual Function. A PF has all of the typical features of a PCIe function, including a full configuration space, a completely independent set of BARs, etc. It also has a supervisory role for the Virtual Functions it is associated with. A PF is identified simply by its function number, 0 through 7.

· A Virtual Function (or VF) is a new entity introduced in the SR-IOV specification. It is a low-cost function created for the purpose of supporting a high number of functions, but it is subject to certain limitations. Specifically, each VF is associated with a Physical Function and shares the resources of the PF with which it is associated. When referring to a Virtual Function, a PF-VF pair is required, i.e., VFm,n, where m is the PF number and n is the VF number.

· Single Root PCI Manager (SR-PCIM) is a software component and is embedded into the VI. This component takes care of the configuration and management of the PFs within the system.
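As an illustration of the VFm,n numbering above, each PF’s SR-IOV capability advertises a First VF Offset and a VF Stride from which the Routing ID of each of its VFs is derived (VF indices are 1-based). The C sketch below shows the arithmetic only; the function name is our own, and real software would read these fields from the SR-IOV extended capability structure in configuration space:

```c
#include <stdint.h>

/* Sketch: deriving a VF's 16-bit Routing ID (bus/device/function) from
 * its parent PF. first_vf_offset and vf_stride come from the PF's
 * SR-IOV extended capability; n is the 1-based VF index, so VFm,1 sits
 * at pf_rid + first_vf_offset, VFm,2 one stride further, and so on. */
static uint16_t vf_routing_id(uint16_t pf_rid,
                              uint16_t first_vf_offset,
                              uint16_t vf_stride,
                              uint16_t n)
{
    return (uint16_t)(pf_rid + first_vf_offset + (n - 1u) * vf_stride);
}
```

Because the offset and stride are arbitrary 16-bit values, a device’s VFs need not be contiguous with their PF, and may even land on other bus numbers captured by the PF.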

SR-IOV, in combination with Alternative Routing-ID Interpretation (ARI), provides the mechanism for exposing a high number of functions with the benefits we are looking for. Through the use of VFs it is possible to scale to a high number of functions with acceptable hardware costs.
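To make the ARI mechanism concrete, the sketch below contrasts the two interpretations of the 16-bit Routing ID: the traditional bus/device/function split versus the ARI split, in which the device field is absorbed so that a single device can expose up to 256 functions. This is an illustrative decode only, with helper names of our own choosing:

```c
#include <stdint.h>

/* Traditional Routing ID: bus[15:8] | device[7:3] | function[2:0]
 * -> at most 8 functions per device. */
static uint8_t classic_device(uint16_t rid)   { return (rid >> 3) & 0x1F; }
static uint8_t classic_function(uint16_t rid) { return rid & 0x07; }

/* ARI Routing ID: bus[15:8] | function[7:0]
 * -> the device field is reinterpreted as part of an 8-bit function
 *    number, allowing up to 256 functions on the device. */
static uint8_t ari_function(uint16_t rid) { return rid & 0xFF; }

static uint8_t bus_number(uint16_t rid) { return (uint8_t)(rid >> 8); }
```

The same 16 bits are on the wire in both cases; what changes is how components interpret the lower byte, which is why ARI support must be enabled consistently along the path to the device.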

Figure 4: PFs and VFs in a virtualized system

How is this important to virtualization and I/O virtualization? Figure 4 takes the previous virtualized system diagrams and expands the I/O subsystem to show the PFs and the VFs. The diagram highlights that PFs can have multiple VFs associated with them, and that IOV devices and non-IOV devices can coexist within the same system and hierarchy. With the ability to create lightweight VFs that are associated with a PF, the Virtualization Intermediary can directly assign a VF to a specific SI. While the system image appears to have full control over the physical resource, in essence it is just controlling a copy. This limits the intervention of the Virtualization Intermediary during main data movement operations, and the overall result is a higher-performing virtualized system.

Summary

Virtualization has been around for many years, primarily in software form. There are many usage models for virtualization which may benefit from improved I/O performance made possible by hardware acceleration. SR-IOV is an important component of that hardware acceleration for PCI Express I/O devices.

SR-IOV impacts a number of functional areas within a PCIe design and requires some PCI Express-based features that were previously optional. Because of the significant scope of changes that are required when adding SR-IOV, it is important to carefully manage the size and complexity of the implementation to fully realize the added benefits of SR-IOV while taking advantage of the cost savings made possible by the specification.

DesignWare IP Solution for PCI Express

The Synopsys DesignWare® IP for PCI Express solution provides the port logic necessary to implement and verify high-performance designs using the PCIe interconnect standard. The complete, integrated solution is silicon-proven and includes a comprehensive suite of configurable digital controllers, high-speed mixed-signal PHY, and verification IP, all of which are compliant with the PCIe 1.1, 2.1 and 3.0 specifications. The DesignWare PCI Express solutions support the Single Root I/O Virtualization (SR-IOV) specification from the PCI Special Interest Group (PCI-SIG). By providing a complete solution from a single IP vendor, Synopsys reduces integration risk by helping to ensure that all of the IP components function seamlessly together. Synopsys DesignWare IP for PCI Express provides designers with a high-performance IP solution with very low power consumption, area and latency.

As the market leader in PCI Express IP for the last four years (Gartner 2010), Synopsys continually delivers next-generation, innovative PCIe IP solutions to the market. With a strong focus on delivering high quality, the DesignWare IP for PCI Express has undergone extensive third-party interoperability testing, with products shipping in volume production. Using strict quality measures and backed by an expert technical support team, Synopsys enables designers to accelerate time-to-market and reduce integration risk for next-generation, PCIe-enabled desktop, mobile, consumer and communications systems-on-chip.
