Blame Tom Cruise.

Ever since Minority Report hit theaters in 2002, we've dreamed of controlling interfaces with our hands, just like Cruise's crime-fighting character did in Steven Spielberg's film.

Our reality, though, tends to be more disappointing. We still don't have holographic interfaces (unless we wear a headset like Microsoft's HoloLens), and the gesture controls we do have usually require you to keep your hands inside some sort of virtual box or to wear a special device (e.g., the Myo armband). Gesture control needs to be simple and obvious. It gets better when you combine, say, a Leap Motion controller with VR, but then you're mostly cut off from the outside world.

That's soon going to change for Microsoft Windows 10 PC users, though. On the night before Microsoft's Build developers conference in Seattle, the company showed me how developers can enable gesture control for Windows with the addition of a plug-and-play Gesture API.

The programming interface for Microsoft's Gesture API. Image: Microsoft

Called Project Prague, the Gesture API uses Microsoft's Kinect motion sensor. I watched as a developer used his hands to manipulate images in PowerPoint, interact with emoji stickers in live video, and play a game. He didn't appear to worry much about precision or where his hands were. Kinect has a pretty wide field of view (an advantage), though most people who own one don't have it connected to their PC (a disadvantage, though perhaps one mitigated by future hardware).
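At its core, the demo boils down to matching the stream of hand poses the sensor reports against a set of registered gestures. Here's a minimal sketch of that idea in Python; to be clear, Microsoft hasn't published this code, and the class and pose names (OPEN, PINCH, FIST) are hypothetical stand-ins for whatever the real Gesture API exposes:

```python
# Illustrative model only, not the Project Prague SDK.

class Gesture:
    """A gesture defined as an ordered sequence of hand poses."""

    def __init__(self, name, pose_sequence):
        self.name = name
        self.pose_sequence = pose_sequence

    def matches(self, observed_poses):
        """True if the gesture's poses occur, in order, in the observed stream."""
        stream = iter(observed_poses)
        # `pose in stream` advances the iterator, so each pose must be
        # found after the previous one -- a subsequence match.
        return all(pose in stream for pose in self.pose_sequence)


# A hypothetical "grab" gesture: an open hand followed by a fist.
grab = Gesture("grab", ["OPEN", "FIST"])

# A stream of poses reported by the sensor, one per frame.
frames = ["OPEN", "OPEN", "PINCH", "FIST", "OPEN"]
print(grab.matches(frames))            # True: OPEN then FIST appear in order
print(grab.matches(["FIST", "OPEN"]))  # False: poses in the wrong order
```

The appeal of a plug-and-play API is that a developer only declares the pose sequence; detecting the poses themselves is the platform's job.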

Still in the early stages of development, the Gesture API is a product of Microsoft's new Cognitive Services Lab, which was announced at Build on Wednesday. The software package will enable developers to integrate a wide array of Microsoft cognitive APIs and services into their apps.