Can Operating Systems tell if they're running in a Virtual Machine? Saturday, October 28, 2006

Or, do androids know they're dreaming of electric sheep...

There was some recent news on Windows Vista EULA restrictions relating to Virtual Machines. Vista Home Editions aren't allowed to be run inside a Virtual Machine, and Vista Ultimate in a VM will restrict access to applications which use DRM. We're still waiting for clarification from Microsoft, but it seems like the popular interpretations are basically right.

This raises the question - is this merely a EULA restriction, or is it going to be technically enforced? Can it even be enforced? Can an operating system tell that it's running in a Virtual Machine?

That's really two questions:

Can Operating Systems currently detect if they're running in a VM?
Will Operating Systems always be able to detect if they're running in a VM?

Well, I only know what I read. Let me know if you disagree...

Can Operating Systems currently detect if they're running in a VM?

Yes, they can. Right now they do it through a couple of techniques - direct hardware fingerprinting and inferred hardware fingerprinting.

Direct hardware fingerprinting is pretty straightforward. Virtual Machines have predictable hardware profiles, so you can just query for "virtual hardware" that's only present in VMs and can't easily be changed. The Virtual PC Guy describes this approach here:

The easiest way to detect that you are inside of a virtual machine is by using 'hardware fingerprinting' - where you look for hardware that is always present inside of a given virtual machine. In the case of Microsoft virtual machines - a clear indicator is if the motherboard is made by Microsoft... [WMI Script to check the motherboard vendor]

If the motherboard is made by "Microsoft Corporation" then you are inside of one of our virtual machines.

The inferred hardware fingerprinting approach is a bit more dodgy. It works by making machine-level calls to the virtualized CPU that reveal whether the CPU is real or virtual. Some of these use instructions that the VMMs don't currently handle faithfully. Others make calls that will only succeed on specific virtual hardware, usually because of special interfaces the VMs implement to allow communication with the host OS and optimize use of host OS resources (e.g. the Virtual Machine Additions for Virtual PC / Virtual Server, or VMware Tools). This kind of stuff is pretty slick, but it makes "undocumented system calls" look boring.

Here are some examples of inferred hardware fingerprinting:

A program on CodeProject that can detect if it is running in either VPC or VMware.

More information on detecting a VMware host through the special "backdoor" I/O port VMware implements for guest-to-host communication

The Red Pill approach, which exploits the fact that the interrupt descriptor table register (IDTR) is relocated by VMMs. It reads the IDTR with the unprivileged SIDT instruction and checks the base address it gets back; if the base has been moved out of the range a host OS normally uses, the code is probably executing in a VM.

Of course, this approach is subject to the whims of each VMM release, and it may vary from host OS to host OS.

These two approaches remind me of the two ways to target CSS to different browsers - ask them nicely, or beat it out of them.

Will Operating Systems always be able to detect if they're running in a VM?

Of course, that's not a question I can answer with certainty until I can get my hands on a flux capacitor and 1.21 gigawatts. That won't keep me from speculating, though...

Let's step back a second and think about whether or not we want Operating Systems to know if they're running in a virtual environment. In the context of the recent Vista EULA flap, we might want to say no - the EULA restriction is stupid, and it's a good thing that they can't enforce it.

But let's talk about The Blue Pill. It's a theoretical malware application of VM technology in which a rootkit swallows the running operating system and installs itself as a hypervisor (a thin virtual machine monitor running beneath the OS on hardware virtualization support). Once it's done that, it can do whatever it wants without the operating system knowing it's been compromised:

The idea behind Blue Pill is simple: your operating system swallows the Blue Pill and it awakes inside the Matrix controlled by the ultra thin Blue Pill hypervisor. This all happens on-the-fly (i.e. without restarting the system) and there is no performance penalty and all the devices, like graphics card, are fully accessible to the operating system, which is now executing inside virtual machine. This is all possible thanks to the latest virtualization technology from AMD called SVM/Pacifica. [via invisiblethings.blogspot.com]

It's mesmerizing and scary at the same time, kind of like BooBah. There's some doubt as to whether it's code or just talk at this point:

However, there is great doubt throughout computer security circles as to whether blue pill is real or a mere stunt, since details and a working sample of the source code have not been made available, contravening the industry wide standard of full disclosure. [via Wikipedia]

Regardless, the concept has been validated. Microsoft Research and a group from the University of Michigan proposed SubVirt (pdf), a VMM rootkit, in May 2006. Their paper is a fascinating, schizophrenic game of cat and mouse: well, you could detect this by blah, but then we could zhoop, and even if you flurped we could just breeble. The SubVirt rootkit doesn't take advantage of hardware virtualization support and requires a reboot, but on the other hand it seems to be more mature than Blue Pill.

We built VMBRs (Virtual Machine Based Rootkits) based on two available virtual-machine monitors, including one for which source code was unavailable. On today’s x86 systems, VMBRs are capable of running a target OS with few visual differences or performance effects that would alert the user to the presence of a VMBR. In fact, one of the authors accidentally used a machine which had been infected by our proof-of concept VMBR without realizing that he was using a compromised system! [Subvirt paper pdf]

The point remains, though - we probably want our operating systems to know if they're running on virtual machines. And it sounds like they should always be able to find out. Anthony Liguori, an IBM software engineer who has worked on the Xen hypervisor for two years, says:

Hardware virtualization requires a technique known as "trap and emulate". The idea is that the hardware traps certain instructions and the VMM emulates those instructions in such a way as to make the software believe it is not running in a virtual machine. Software emulation implies that these instructions take much longer to complete when executed under a VMM than on normal hardware. This fact is what can be used to detect the presence of a VMM. [via virtualization.info]

You may have noticed that I jumped from talking about software VMM's (VMWare, VirtualPC) to both software and hardware VM rootkits. From what I've read, it looks like this is going to be a cat and mouse game, but the VM rootkits will always need to deal with the timing issues that Anthony mentioned. The SubVirt authors discussed this, too:

A VMBR adds CPU overhead to trap and emulate privileged instructions, as well as to run any malicious services. These timing differences can be noticed by software running in the virtual machine by comparing the running time of benchmarks against wall-clock time. A VMBR can make the detector’s task more difficult by slowing down the time returned by the system clock, but the detector can overcome this by using a clock that can be read without interference from the VMBR (e.g., the user’s wristwatch). [Subvirt paper pdf]

Well, I hope we can do better than wristwatch checks. I'd hope that an OS could check the time of day once an hour and notice a 1% drag due to VM hosting (about 36 seconds per hour), or at least pick it up over the course of a full day (roughly 14 minutes). Not great, but at least it'd be detectable.

There's one more secret weapon against bad VMMs. It's probably the best defense, but you probably aren't going to like it. I'm talking about the TPM, the Trusted Platform Module. Microsoft's Next-Generation Secure Computing Base (called Palladium back when Vista was Longhorn) built its Digital Rights Management (DRM) technology on top of the TPM. Trusted computing works by using a hardware crypto chip to verify the hardware and software loaded by the hypervisor (which runs above the hardware virtualization layer, which runs above the good old CPUs... sheesh, this is getting complicated...).

It's as if an OS running on a Trusted Computing platform were using HTTPS (SSL) to talk to hardware and trusted software like DRM components, but with much stronger crypto. That's a good thing from the point of view of safeguarding against rootkits. It's bad news if you want to use software that works by virtualizing hardware (such as virtual soundcards which record streaming music, like TotalRecorder, or virtual DVD drives which let you mount ISO images, like Daemon Tools or Alcohol 120%).

It's also bad news if you want full access to DRM protected content, since DRM processing protected by a TPM is quite a bit more robust than the flimsy DRM stuff they're using today. DRM'd media running on a Trusted platform could be sent from disk to soundcard with the same kind of anti-tampering assurance you'd expect when you connect to your bank's website across the big, bad internet. Hmm. Well, we've got a little while to think this through, since Trusted Computing support has mostly been removed from Vista and won't ship until future versions of Windows.