Microsoft may be taking an official wait-and-see approach before following companies like Oculus and Sony down the virtual reality headset path. That isn't stopping the company's research arm from exploring ways to use Kinect and projector technology to create holodeck-style augmented reality experiences in the living room, though. Microsoft Research has prepared a number of demos and papers along these lines for the Association for Computing Machinery's User Interface Software and Technology Symposium, showing off just how far those efforts have come and how they could lead to interesting new forms of gaming in the future.

The first project, RoomAlive, promises to "transform any room into an immersive augmented virtual gaming experience," as the researchers put it. The system uses six paired projector/Kinect units, mounted to the ceiling so they have somewhat overlapping fields of view. These units can auto-calibrate themselves with a series of projected light patterns, transforming their individual Kinect depth maps into a unified 3D point-cloud model of the room.
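The core of that calibration step is expressing every unit's depth points in a single room coordinate frame. As a minimal sketch (the function names and the 4x4 extrinsic matrices here are illustrative assumptions, not RoomAlive's actual API), fusing the per-Kinect clouds amounts to applying each unit's calibrated rigid-body transform and concatenating the results:

```python
import numpy as np

def to_world(points_local, extrinsic):
    """Transform an (N, 3) array of camera-space points into room
    coordinates using a 4x4 rigid-body extrinsic matrix."""
    homogeneous = np.hstack([points_local, np.ones((len(points_local), 1))])
    return (homogeneous @ extrinsic.T)[:, :3]

def merge_point_clouds(clouds, extrinsics):
    """Fuse per-Kinect depth points into one room-wide point cloud."""
    return np.vstack([to_world(c, e) for c, e in zip(clouds, extrinsics)])

# Two toy "Kinect" clouds: identity transform for unit A,
# a 1 m translation along x for unit B.
cloud_a = np.array([[0.0, 0.0, 2.0]])
cloud_b = np.array([[0.0, 0.0, 2.0]])
ident = np.eye(4)
shift = np.eye(4)
shift[0, 3] = 1.0

merged = merge_point_clouds([cloud_a, cloud_b], [ident, shift])
# merged[1] is unit B's point expressed in room coordinates: [1.0, 0.0, 2.0]
```

In the real system, those extrinsics are what the projected light patterns recover automatically; the fusion itself is just this change of coordinates.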

From there, RoomAlive converts the point data into a series of vertical and horizontal surfaces representing the walls and furniture, then imports that geometry into a 3D environment in the Unity game engine. Using that virtual representation of the room, the system figures out how to project a unified image onto those walls and surfaces, warping the projection so it appears correct on each one. The effect is akin to transforming the entire room into a computer screen or monitor, complete with player tracking through the array of Kinect cameras.
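The warping step boils down to standard camera geometry run in reverse: once the system knows where a wall point sits in 3D, a pinhole model of the projector tells it which pixel will land there. A minimal sketch, assuming toy intrinsics `K` and a projector pose `(R, t)` (all illustrative values, not RoomAlive's calibration data):

```python
import numpy as np

def project_to_pixel(world_point, K, R, t):
    """Map a 3D room point onto projector pixel coordinates using a
    pinhole model: p ~ K (R X + t)."""
    cam = R @ world_point + t          # world -> projector camera frame
    u, v, w = K @ cam                  # perspective projection
    return np.array([u / w, v / w])    # homogeneous divide

# Toy calibration: projector at the origin looking down +z,
# focal length 800 px, principal point at (640, 360).
K = np.array([[800.0,   0.0, 640.0],
              [  0.0, 800.0, 360.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
t = np.zeros(3)

pixel = project_to_pixel(np.array([0.0, 0.0, 2.0]), K, R, t)
# A point straight ahead lands on the principal point: [640, 360]
```

Rendering the Unity scene through a virtual camera with these same parameters is what makes the image line up with the physical surfaces.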

In addition to some non-interactive demos, MSR showed off a few gaming concepts that use the system. In one "whack-a-mole" game, users, tracked by Kinect, can touch or shoot at critters that appear on the wall. In another, a gun-toting character controlled with a handheld controller runs across the wall, down onto a table, and then onto the floor while being chased by robots. The final demo puts virtual spike traps on the wall for users to dodge and bathes the room in red if they are hit.

Dual perspectives and finger detection

In a similar ACM demo, called Mano-a-Mano, a team of two MSR researchers uses a trio of projector/Kinect combos to create an augmented reality effect that provides correct three-dimensional perspectives for two different users. Each projector displays virtual objects against the walls, floors, and fixtures in a room in such a way that they appear to float in the middle of the room. The apparent perspective and size of those virtual objects change as Kinect tracks each user's position and head angle, giving the illusion of real depth.

That's a decent faux 3D solution for a single user, but how can such a system account for two people looking at a virtual object from different angles? That's where the multiple-projector setup comes in, giving each user their own view of the virtual scene. By "assuming that each user is unaware of graphics projected on the wall behind them or their own bodies," as the researchers explain, the system can show two different perspectives of the same scene that look correct to each user. In the demo, the system is used for a simple game of catch and for a "combat style game" where a user can summon fireballs in their hand and fling them at the user on the other side of the room.
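Geometrically, making an object appear to float mid-room for a given viewer means drawing it on the wall where the line of sight from that viewer's eye through the object's virtual position hits the wall. A minimal sketch of that ray-plane intersection (the scene coordinates here are made up for illustration, not taken from the Mano-a-Mano paper):

```python
import numpy as np

def wall_draw_point(eye, obj, plane_point, plane_normal):
    """Intersect the ray from the viewer's eye through the virtual
    object with the wall plane; drawing there makes the object appear
    to float at `obj` for that viewer."""
    direction = obj - eye
    s = ((plane_point - eye) @ plane_normal) / (direction @ plane_normal)
    return eye + s * direction

wall_point = np.array([0.0, 0.0, 4.0])   # wall is the plane z = 4
wall_normal = np.array([0.0, 0.0, 1.0])
floating = np.array([0.0, 1.0, 2.0])     # virtual object mid-room

viewer_a = np.array([0.0, 1.0, 0.0])
viewer_b = np.array([2.0, 1.0, 0.0])

# The same floating object maps to a different wall position per viewer.
spot_a = wall_draw_point(viewer_a, floating, wall_point, wall_normal)
spot_b = wall_draw_point(viewer_b, floating, wall_point, wall_normal)
# spot_a = [0, 1, 4], spot_b = [-2, 1, 4]
```

Because those two draw positions differ, each viewer must receive their own rendering, which is exactly why the effect only holds if each user ignores the imagery projected for the other.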

The last of MSR's ACM demos that might be of interest to gamers and game makers is Handpose, a system that adds a degree of detail and articulation to Kinect-based hand and finger tracking. With a new tracking algorithm, researchers appear to be able to distinguish individual fingers and hand gestures with much more detail than was previously possible with a standard Kinect v2 sensor.

Users are shown throwing complex finger positions at many different angles while the tracking system quickly and accurately tracks those positions in a 3D model of the hand. This tracking is "robust to tracking failure," works up to "several meters" away from the sensor, and works regardless of where the camera is positioned, even if the tracking camera is moving, the researchers say. In a video demo, users are shown using the system to easily grasp and move virtual objects simply by moving their fingers together and apart.

Coming to stores... never?

These kinds of augmented reality experiments aren't exactly new for Microsoft and Microsoft Research. MSR's latest demos build on IllumiRoom, an impressive demonstration from last year that showed projectors being used to extend game action past the bounds of a TV screen. And let's not forget that Microsoft's 2012 "Project Fortaleza" leak and subsequent patents both point to an interest in heads-up augmented reality displays.

Of course, no actual products, or even real hints of consumer products, have come from those revelations as of yet. Microsoft Research's efforts are only loosely connected to the consumer-focused divisions of the company, and these proof-of-concept demonstrations shouldn't be seen as indications that the Xbox division will be heading in this direction any time soon. Even if it did, the technology would have to get a lot smaller and cheaper before the average consumer was willing to mount three to six projector/camera combos on the ceiling.

Still, it's nice to see at least one Microsoft division pushing the bounds of gaming past the flat screens and controls we're used to. If we do one day get to a world where projector-and-tracker-based gaming is a feasible consumer reality, it'll be this kind of basic research that provided the seed.