It's been just under a year since Microsoft released Windows Server 2012. Touted as an operating system built for the cloud, Server 2012 promised some significant improvements to storage, networking, and virtualization services. It tried to strike a balance between the complex and varied needs of its data center customers and the simplicity smaller organizations needed to keep down costs.

Now Microsoft has unveiled the preview of Windows Server 2012 R2. It's not just a "service pack" of bug fixes for last September's release—this is a full update with a raft of improvements aimed at further knocking down the walls between on-premises servers and private and public clouds. Some of those changes fine-tune the balance between simplicity of management and enterprise power that Microsoft was going for. They continue to enhance the server platform's suitability both as a component of a cloud-computing environment and as an on-ramp to cloud services for small and mid-sized organizations.

Microsoft is also previewing updates to its system management platform, System Center 2012 R2, and to Windows Server 2012 Essentials, the "easy deploy" successor to Microsoft's Windows Small Business Server. Among other things, System Center 2012 R2 and Windows Server 2012 R2 together improve Microsoft's support for Linux virtual machines within a Microsoft-managed environment. And the new version of Server Essentials has bigger ambitions than just the server under your desk—it's been beefed up to appeal to mid-sized businesses and optimized further for deployment in the cloud. Now service providers can offer hosted Windows domains to their customers and give them simple-to-use, remotely accessible administrative tools.

All of these pieces fit into what Microsoft has called the "Cloud OS," an over-arching architecture that will connect on-site servers at small and medium businesses and servers in corporate data centers with cloud-based services. It blurs the boundaries between what's yours, what's your service provider's, and what runs in Microsoft's software-as-a-service and cloud infrastructure services. Accordingly, most of the changes to R2's internals focus on enhancements to storage, networking, and virtualization. But there are a few visible changes that will appeal to organizations that aren't necessarily looking to scale out a cloud on their own.

We've been testing Windows Server 2012 R2 Preview for the past few weeks in tandem with Microsoft's expanding cloud service portfolio and a collection of desktop and mobile clients. (We took a brief look at the Windows Desktop Experience in R2 just as the preview was released.) For this first look at what's coming in 2012 R2, we'll focus on some of the features that have the broadest appeal and will have the most direct impact on users.

Hyper-V, the next generation

The Hyper-V hypervisor is at the heart of Microsoft's push for relevance in the "cloud"—whether in a hosting company's rack space, a private corporate data center, or a server under your desk. There were some major improvements to Hyper-V in the last release of the platform. Microsoft also offered up a free standalone version, Hyper-V Server 2012, released alongside Windows Server 2012's launch as a sort of loss leader to draw attention to the platform and away from VMware. But despite the really great licensing deal and the general improvements in Hyper-V, there were still a few gaps in functionality that left it out of contention for many virtualization applications.

There are some significant changes in R2 that help narrow (but perhaps not quite close) those gaps. Replication between Hyper-V servers has been beefed up with faster, more frequent replication and expanded disaster recovery options. And you can now set storage quality-of-service levels for specific VMs to guarantee them specific levels of disk I/O throughput—giving servers that support databases priority over Web servers for disk I/O, for example.
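Both features are exposed through Hyper-V's PowerShell module. Here's a minimal sketch assuming a hypothetical VM named "SQL01" and a replica server named "hv-replica"—the IOPS and frequency values are illustrative, not recommendations:

```powershell
# Storage QoS on SQL01's first SCSI disk: reserve 100 IOPS,
# cap the VM at 1,000 IOPS
Set-VMHardDiskDrive -VMName "SQL01" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MinimumIOPS 100 -MaximumIOPS 1000

# Enable replication to a replica server; R2 adds a choice of
# replication frequency (30, 300, or 900 seconds)
Enable-VMReplication -VMName "SQL01" -ReplicaServerName "hv-replica" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos `
    -ReplicationFrequencySec 30
```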

You don't need to be running clustered virtual machines with big databases to get benefits from some of the new features in Hyper-V, however. Some of the longest-sought improvements coming in R2's Hyper-V are in its support for Linux virtual machines. Linux has run on Hyper-V since 2009, but it's been something of a second-class virtual citizen in Hyper-V land. Yes, it ran. But even in Server 2012, which made big strides with Hyper-V, there wasn't support for remote replication of Linux VMs for disaster recovery.

R2 adds that critical feature, plus a few others that were restricted to Windows VMs in previous releases. These include things like dynamic memory and dynamic resizing of the virtual drives associated with Linux VMs. Previously memory and disk resources for a Linux server were pretty much stuck at whatever you dialed in at configuration. In R2's Hyper-V, you can reclaim un-partitioned space from a virtual SCSI drive or grow any VHD or VHDX virtual SCSI drive dynamically without shutting down the virtual machine. And in R2, enterprise backup tools that are built to work with Hyper-V will be able to directly back up Linux VMs just as they've been able to do with Windows VMs in the past.
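From the host, both operations are a one-liner in PowerShell. A sketch, assuming a hypothetical running Linux VM named "web01" with a VHDX disk attached to a SCSI controller (online resize requires the VHDX format on SCSI):

```powershell
# Grow a running VM's virtual SCSI disk without shutting it down
Resize-VHD -Path "D:\VMs\web01\data.vhdx" -SizeBytes 200GB

# Enable dynamic memory for the VM (sizes here are illustrative)
Set-VMMemory -VMName "web01" -DynamicMemoryEnabled $true `
    -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB
```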

Another major update to Hyper-V is for new Windows platforms only. VMs based on Windows Server 2012 and the 64-bit Windows 8 client operating system can now be configured as "Generation 2" virtual machines, a new class of VM that dumps Microsoft's legacy virtual hardware for a more modern architecture. It sounds great, but the benefits to most Hyper-V users will be marginal.

The current generation of production Hyper-V VMs—what Microsoft has now dubbed "Generation 1"—are all based on a software architecture that emulates what is essentially a late-1990s PC. Specifically, they pretend to use an Intel 440BX chipset (designed for the Pentium II) with a mix of PCI and ISA expansion, and they boot with a BIOS and an elderly IDE controller. The 440BX was the king of chipsets back in its day, but its day was 15 years ago.

Microsoft did this because just about every operating system will run on a 440BX-based system. Copying that architecture ensured that Hyper-V's virtual hardware was compatible with real operating systems. But the hardware had some constraints. For example, it supported virtual SCSI controllers but couldn't boot from them; only the emulated Intel IDE controller was bootable. Generation 1 hardware also included two kinds of network adapters: a virtual one that required Hyper-V-specific drivers and a hardware-mimicking "legacy" controller that used drivers for the DEC 21140 10/100Mbps card. The virtual Ethernet device performed better, but only the legacy device supported network booting with PXE. And since 64-bit versions of Windows don't include a driver for the legacy hardware, they can't boot over the network using PXE.

Microsoft hasn't dropped support for such virtual machines—that's what Linux VMs are based on. But Hyper-V's new Generation 2 machines are based on a new emulated system architecture that is essentially legacy-free: they don't include legacy buses like ISA, they don't mimic old Intel IDE controllers, and they don't use BIOS to boot (it's UEFI instead).

They also enable a few things not possible in Generation 1 machines: specifically, they can boot from their (virtual) SCSI controller (in fact, they must—Generation 2 machines don't support IDE/ATA controllers at all), and they can use PXE booting on their (virtual) Ethernet card.
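Creating one is a matter of passing the new -Generation parameter to New-VM. A sketch with hypothetical names and sizes—remember that only 64-bit Windows 8 and Windows Server 2012 guests are supported:

```powershell
# Create a Generation 2 VM; it boots from virtual SCSI via UEFI,
# with no IDE controllers or other legacy devices
New-VM -Name "gen2-test" -Generation 2 -MemoryStartupBytes 2GB `
    -NewVHDPath "D:\VMs\gen2-test\os.vhdx" -NewVHDSizeBytes 60GB

# Secure Boot is on by default in the Gen 2 UEFI firmware;
# it can be toggled per-VM with Set-VMFirmware
Set-VMFirmware -VMName "gen2-test" -EnableSecureBoot On
```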

What difference does this make in practice? Microsoft says that Generation 2 machines will boot faster and can install their operating systems faster, though there's apparently little change in normal operational performance. The UEFI firmware also supports (and defaults to) Secure Boot, so it protects virtual machines against certain kinds of boot-level malware.

The biggest advantage of Generation 2 is probably going to be in flexible cloud deployments, where the ability to quickly spin up new VMs by booting them from the network with PXE is useful. Microsoft's own Azure cloud infrastructure uses PXE booting, for example.

The support for all-SCSI systems might also simplify storage management somewhat. Generation 1 VMs needed at least one IDE virtual hard disk to boot from, in addition to any large SCSI virtual disks for storage. That created a need for at least one additional virtual disk per VM. Unfortunately, that's still the case for all Linux VMs and VMs based on older versions of Windows, as well as 32-bit Windows 8 VMs.

Listing image by Sean Gallagher