In the first article in our two-part series on virtualization, we asked the IT pros in the Ars forums to share their tips and best practices for first-time virtualization deployments. The results were extremely useful, and we got plenty of positive reader feedback on the effort. So for this second and final installment, we asked those same users about the future of virtualization, and about what they see as the next phase in this fast-moving trend's evolution. Their answers provide a glimpse at where the folks in the IT trenches expect virtualization to go, and at what they hope to see happen in the near- to medium-term.

I've broken the discussion down into a few main themes, some of which will be familiar to Ars readers. We've been tracking some of these trends for a while in our coverage, and the forum discussion indicates that they're still worth watching.

The VM will move down the stack

Right off the bat, user K0DE set the tone for the discussion with his opening post, which was a response to my initial question about the future of the hypervisor:

"I'd guess it's going to be a piece of firmware/hardware. As long as the hooks into the hypervisor are easy for an OS to connect to I can't see why it wouldn't be done in hardware."

A large part of the ensuing discussion revolved around variants on this theme, with many users emphasizing that the VM will shift down the software/hardware stack: from being OS-hosted via a client application to being hosted in a hypervisor that lives in embedded firmware on the motherboard and is loaded at boot time.

Surprisingly, despite the widespread agreement that a flash-based hypervisor is the way to go, none of the responses tried to make a detailed case for this move. That may be because the benefits seem obvious, and indeed I've written previously about the numerous advantages of this approach, but it's worth recapping why it's a good idea.

With a flash-based hypervisor (see Intel's Rapid Boot Toolkit for an example of this) you can do stateless network booting. I've seen Windows booted in this fashion over a network, and it ran quite comfortably; I can also imagine that a stripped-down Linux or BSD image would run really well this way. This kind of stateless net booting has a few benefits, not the least of which is live provisioning: you decide at boot time whether you want a node to be stateless or stateful, and you can easily switch between the two options.

Perhaps the main advantage of stateless netbooting is the power saved per node, since a node doesn't need a hard drive to run the OS. Server nodes can thus go entirely without a local hard drive, or they can leave an installed drive unused by simply booting from the network.
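To make the stateless/stateful switch concrete, here's a minimal sketch (in Python) of how a provisioning tool might implement it with per-node pxelinux configs. The per-MAC config lookup under pxelinux.cfg is standard PXE practice, but the image paths, NFS export, and MAC address here are invented for illustration:

    # Hypothetical sketch: flip a node between stateless (network-booted)
    # and stateful (local-disk) operation by rewriting its PXE config.
    STATELESS_LINES = [
        "default stateless",
        "label stateless",
        "    kernel images/stripped-linux/vmlinuz",
        "    append initrd=images/stripped-linux/initrd.img root=/dev/nfs nfsroot=10.0.0.2:/exports/node-root ip=dhcp ro",
    ]

    STATEFUL_LINES = [
        "default local",
        "label local",
        "    localboot 0",
    ]

    def write_pxe_config(mac, stateless, tftp_root="/var/lib/tftpboot"):
        """Write the per-node pxelinux config, keyed by the node's MAC."""
        # pxelinux looks for a config file named after the MAC address,
        # e.g. pxelinux.cfg/01-aa-bb-cc-dd-ee-ff
        name = "01-" + mac.lower().replace(":", "-")
        path = f"{tftp_root}/pxelinux.cfg/{name}"
        lines = STATELESS_LINES if stateless else STATEFUL_LINES
        with open(path, "w") as f:
            f.write("\n".join(lines) + "\n")
        return path

    # Re-provisioning a node is then just a config rewrite plus a reboot:
    write_pxe_config("aa:bb:cc:dd:ee:ff", stateless=True)

The point of the sketch is the last line: switching a node's role becomes a one-line change on the server rather than a trip to the machine.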

Of course, OS-hosted virtualization approaches won't go away, even if most datacenters do end up using a flash-based hypervisor. So the best answer to the question of the eventual location of the VM in the software stack is probably "everywhere, depending on the usage scenario," including directly beneath individual applications (see the next section for more on the last point).

The VM will move up the stack

The forum discussion started with the VM taking up residence at the bottom of the stack, but other users suggested that the VM is headed in the opposite direction, as well.

So-called application virtualization, in which an individual application is deployed (often over the network) to a client machine by wrapping it in a VM, is something I've seen promoted by a number of vendors who have software solutions in this space, as well as by Intel. The theory here is that app virtualization, especially in its application streaming form, gives you a number of licensing, security, and policy advantages over running an app directly. Here are a few of the bonuses:

A virtualized app leaves no trace in the OS, so there are no registry, DLL, or other issues to deal with.

With app streaming you can serve up the version of an app that you have a license for, so it solves the license compliance problems that still plague the enterprise (a rough sketch of this version-pinned lookup appears after this list).

Application streaming lets you update the app centrally and serve the updated version to clients; this is in contrast to the traditional method of trying to push updates out to every client in the enterprise.
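To make the license angle concrete, here's a minimal, purely illustrative Python sketch of the kind of lookup a streaming server might do before serving an app bundle. No actual product works exactly this way as far as I know; the license table, bundle paths, and department names are all invented:

    # Illustrative only: a toy license-aware lookup for an app streaming
    # server. All data below is hypothetical.
    LICENSES = {
        ("accounting", "officesuite"): "11.2",
        ("engineering", "officesuite"): "12.0",
    }

    BUNDLES = {
        ("officesuite", "11.2"): "/srv/stream/officesuite-11.2.bundle",
        ("officesuite", "12.0"): "/srv/stream/officesuite-12.0.bundle",
    }

    def resolve_stream(department, app):
        """Return the bundle for the version this department is licensed for."""
        version = LICENSES.get((department, app))
        if version is None:
            raise PermissionError(f"{department} holds no license for {app}")
        return BUNDLES[(app, version)]

    # A client in accounting always gets 11.2, even after 12.0 ships:
    print(resolve_stream("accounting", "officesuite"))

Because the version decision lives on the server, compliance stops depending on what happens to be installed on any given desktop.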

The drawback to app streaming, which I once pointed out to a Symantec rep who didn't want to hear it, is that for some categories of applications, you may break a power user's workflow by forcing an update on them. When this happens, the most productive users (the ones with custom scripts, toolbars, and tweaks that tend to be version-dependent) are the ones who suffer, while the least productive users (those who stick with the out-of-the-box configuration) are spared any interruption. The degree to which this is actually a big deal depends greatly, of course, on the specific organization and its user base.

The embedded hypervisor will be guest-OS agnostic

Quite a few users insisted that the hypervisors that end up embedded on the motherboard will have to support multiple guest OS types. Many of our forum folks work in heterogeneous OS environments, and they have to be able to support all of the users on their network, regardless of OS.

Of course, as one contributor pointed out, if you're an all-Windows shop, then there doesn't seem to be any reason not to use Microsoft's hypervisor, and likewise with Red Hat's in an all-Linux environment. But if you're supporting a heterogeneous OS environment, then you have no choice but to use a hypervisor that fully supports any guest OS you may have to bring up on your network.

The pendulum swings back from client to server

One of the larger themes to come up in multiple virtualization threads is the idea that virtualization somehow represents a cyclical shift back to the server from the client. In other words, if you tell the story of the past four decades of computer evolution in terms of an ongoing tug-of-war between a centralized server (a mainframe, a file/app server, a cluster, "the cloud") and a constellation of clients, then virtualization as an enabling technology represents a shift of power and control back in the server direction.

This theme came up in the latest conversation in the context of user comments about the growing importance of enterprise-level management tools, which some users envision as a centralized point of management and monitoring that gives admins visibility across every layer of the stack from a single tool.

So what's interesting about this latest turn of the wheel is that the "mainframe" in this cycle is really a large cluster of more-or-less commodity machines, one that will eventually present itself to the admin as a single entity and to the client as either a single server or a flexible resource pool ("the cloud").
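To illustrate what "a single entity" might mean in practice, here's a toy Python sketch of the placement decision at the heart of such a resource pool: the management layer, not the admin, decides which physical host runs a new VM. The policy (most free memory wins) and the capacity numbers are invented; real schedulers weigh CPU, storage, and network as well:

    # Hypothetical sketch: a pool of commodity hosts behind one scheduler.
    from dataclasses import dataclass

    @dataclass
    class Host:
        name: str
        total_mem_gb: int
        used_mem_gb: int = 0

        def free_mem_gb(self):
            return self.total_mem_gb - self.used_mem_gb

    def place_vm(pool, vm_mem_gb):
        """Place a VM on the host with the most free memory (toy policy)."""
        candidates = [h for h in pool if h.free_mem_gb() >= vm_mem_gb]
        if not candidates:
            raise RuntimeError("pool exhausted: no host can fit this VM")
        best = max(candidates, key=lambda h: h.free_mem_gb())
        best.used_mem_gb += vm_mem_gb
        return best

    # The admin addresses the pool, not the individual boxes:
    pool = [Host("node01", 32), Host("node02", 64), Host("node03", 32)]
    print(place_vm(pool, 16).name)  # lands on node02, the least-loaded host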

The future of the OS

One worthwhile point that came up in our forum conversation was that any discussion of the future of virtualization is really about the future of the OS. Abstractly speaking, virtualization is just the latest development in the operating system's ongoing adaptation to different forms of multitenancy, with protected memory and multiuser support being two earlier points along the same trendline.

By cleaving the OS into separate hardware-facing and application-facing components, virtualization provides even more robust forms of multitenancy than were previously possible. Perhaps ironically, the drivers still live on the app-facing (guest OS) side, which means that when Marc Andreessen famously called Windows "a bag of drivers" back in the Netscape era, he wasn't wrong; he was just really early.

This article is part of our temporary focus on virtualization here at Ars. For more on the topic, check out our virtualization page, sponsored by Microsoft.