With the aid of Microsoft’s much-loved Kinect sensor, engineers at Purdue University have created a system that can turn any surface — flat or otherwise — into a multi-user, multi-finger touchscreen.

The setup is disgustingly simple: You point the Kinect at some kind of surface, plug it into a computer, and then proceed to poke and prod that object as if it were a multi-finger touchscreen. You can also throw a projector into the mix, turning the object into a multitouch display — a lot like Microsoft’s original Surface tabletop display (now renamed PixelSense, since Microsoft decided to reuse the Surface trademark for its upcoming tablets).

The magic, of course, is performed in software. The first time you use the system, which the engineers call “Extended Multitouch,” the Kinect sensor analyzes the surface it’s pointing at. If you want to use a table as a touchscreen, for example, the Kinect looks at the table and works out exactly how far away every point on its surface is — a depth map, in essence. Then, when you interact with the touchscreen, the Kinect works out how far away your fingers are; if they’re at the same distance as the table, the software knows you’re making contact. The software can do this for any number of fingers, or for other implements such as pens and styluses.
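The depth-comparison test described above is simple enough to sketch in a few lines. This is not the researchers’ actual code — just a minimal illustration of the idea, assuming two depth frames (in millimetres) from a Kinect-style sensor and an invented 10 mm touch tolerance:

```python
import numpy as np

TOUCH_TOLERANCE_MM = 10  # assumed threshold; the paper's value may differ

def build_surface_map(empty_frame: np.ndarray) -> np.ndarray:
    """Calibration step: store the per-pixel distance to the bare
    surface (the depth map, in essence)."""
    return empty_frame.astype(float)

def touch_mask(live_frame: np.ndarray, surface_map: np.ndarray) -> np.ndarray:
    """A pixel counts as 'touching' when something sits just above the
    surface: closer to the camera than the surface, but within tolerance."""
    height_above = surface_map - live_frame  # positive where fingers intrude
    return (height_above > 0) & (height_above < TOUCH_TOLERANCE_MM)

# Toy 1-D example: a flat table 1000 mm from the camera, a fingertip
# pressing at 995 mm, and a hand hovering well above it at 940 mm.
surface = build_surface_map(np.full(5, 1000))
live = np.array([1000, 1000, 995, 940, 1000])
print(touch_mask(live, surface))  # only the fingertip pixel reads as a touch
```

In a real system the same mask would be computed per frame and the connected blobs of “touching” pixels tracked over time to form touch points.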

Furthermore, because the “touch sensor” sits above the display rather than inside it, Extended Multitouch can also work out handedness (whether you’re using your left or right hand) and which specific fingers or thumbs you’re using, and by looking at your wrist and palm it can understand gestures — say, one hand “throwing” an object to another hand. Extended Multitouch can use the space above the screen as well: to activate a context menu, for example, you might lift a hand or finger off the display.
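Using the space above the screen follows naturally from the same per-pixel depth test: a hover is just a second height band above the touch band. A hypothetical sketch (the 10 mm and 80 mm thresholds are assumptions for illustration, not values from the paper):

```python
import numpy as np

TOUCH_MAX_MM = 10   # assumed: within 10 mm of the surface counts as touch
HOVER_MAX_MM = 80   # assumed: up to 80 mm above the surface counts as hover

def classify_pixels(live_frame: np.ndarray, surface_map: np.ndarray) -> np.ndarray:
    """Label each depth pixel 'touch', 'hover', or 'none', based on its
    height above the calibrated surface depth map."""
    height = surface_map.astype(float) - live_frame  # mm above the surface
    states = np.full(live_frame.shape, "none", dtype=object)
    states[(height > 0) & (height <= TOUCH_MAX_MM)] = "touch"
    states[(height > TOUCH_MAX_MM) & (height <= HOVER_MAX_MM)] = "hover"
    return states

# One fingertip pressing the surface (995 mm) and one hovering above it (950 mm)
surface = np.full(4, 1000)
live = np.array([1000, 995, 950, 1000])
print(classify_pixels(live, surface))  # touch at index 1, hover at index 2
```

A UI layer could then map “hover” regions to things like context-menu activation, as described above.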

To test Extended Multitouch, the engineers wrote a few simple programs, including a sketching program that let users draw with a pen and multiple fingers simultaneously. Overall accuracy is good, but not as reliable as displays with built-in touch sensors (such as your smartphone’s). As far as handedness and gesture recognition go, accuracy is around 90% — though that should improve with higher sensor resolution and continued software tweaks.

Built-in touch sensors are expensive and require a local computer controller; depth-sensing cameras, by contrast, would make it quite easy to blanket a house in touch surfaces. If you buried a grid of Kinects in your ceiling, you could turn most of your environment into a touch interface. In the words of Niklas Elmqvist, one of Extended Multitouch’s developers, “Imagine having giant iPads everywhere, on any wall in your house or office, every kitchen counter, without using expensive technology.”

The Purdue University researchers have applied for patents covering Extended Multitouch, and unfortunately there’s no indication that they intend to open-source the software. I would be very surprised if Microsoft isn’t working on similar applications for Kinect 2.0, though, which must surely be on its way.

Now read: Disney Touché turns everyday objects into multi-touch, gesture-recognizing interfaces

Research paper: Extended Multitouch: Recovering Touch Posture, Handedness, and User Identity using a Depth Camera