Multitouch screens are so versatile and easy to use that it is natural to ask why they should be limited to smartphones and tablets. Researchers have been working for several years to extend multitouch to arbitrary surfaces, and a project called OmniTouch, from Microsoft Research and a PhD student at Carnegie Mellon University's Human-Computer Interaction Institute, may bring that goal closer to reality.

OmniTouch turns body parts and nearby surfaces into touch interfaces. Users can read and reply to an e-mail by touching their hands or a nearby wall, or even use multiple applications at once on multiple surfaces. The results from a user study “suggest our prototype system approaches the accuracy of conventional, physical touch screens, but on arbitrary, ad hoc surfaces,” the researchers say in a video.

The project is led by Carnegie Mellon student and former Microsoft Research intern Chris Harrison and Microsoft researchers Hrvoje Benko and Andrew Wilson. “We wanted to capitalize on the tremendous surface area the real world provides,” Benko says in a Microsoft Research article. “The surface area of one hand alone exceeds that of typical smart phones. Tables are an order of magnitude larger than a tablet computer.”

OmniTouch is reminiscent of the SixthSense system developed at the MIT Media Lab, which had students projecting a gestural interface onto the world around them with the help of a device containing a projector, mirror and camera worn around their necks, as well as sensors placed upon their fingers. OmniTouch, however, requires only a device to be worn on one’s shoulder, with nothing special on the hands or arms. A research paper on OmniTouch notes the influence of SixthSense and other similar projects, but says these systems did not create true touch interactions because they “could not differentiate between clicked and hovering fingers.” The limitation was due partly to an “inability to track surfaces in the environment, which also made it impossible to have the projected interface change and follow the surface as it moved.”

The proof-of-concept OmniTouch system consists of a depth-sensing camera and a laser-based pico-projector. It is tethered to a desktop computer in its prototype stage, so it is not yet truly portable.

Using technology principles similar to those of Microsoft’s Kinect, OmniTouch starts by generating a depth map of the scene, then isolates fingers and candidate touch surfaces such as a hand, forearm, notepad, table or wall. While the researchers say the system generates few false positives, it is sensitive to the angle at which fingers appear in front of the camera. Some sophisticated computation is performed to differentiate fingers touching a surface from fingers merely hovering above it.

“In this case, we’re detecting proximity at a very fine level,” Benko says. “The system decides the finger is touching the surface if it’s close enough to constitute making contact. This was fairly tricky, and we used a depth map to determine proximity. In practice, a finger is seen as ‘clicked’ when its hover distance drops to one centimeter or less above a surface, and we even manage to maintain the clicked state for dragging operations.” In user trials involving 12 participants, 96.5 percent of 6,048 clicks were correctly registered by the system. For the remaining 3.5 percent, the system either did not perceive a click or incorrectly detected more than one.
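The click logic described above can be sketched as follows. This is a minimal illustration, not OmniTouch's actual code: it assumes we already have a depth map and a detected fingertip location, estimates the surface depth from a patch of pixels sampled near (but not under) the finger, and applies the one-centimeter threshold with a looser release threshold so that dragging does not drop the clicked state. All function names and the specific release distance are hypothetical.

```python
import numpy as np

CLICK_MM = 10.0    # finger counts as "clicked" at <= 1 cm hover distance
RELEASE_MM = 15.0  # hypothetical hysteresis: stay clicked until the finger lifts past this

def hover_distance(depth_map, tip, surface_patch):
    """Estimate how far a fingertip hovers above the surface beneath it.

    depth_map: 2D array of distances from the camera, in millimeters.
    tip: (row, col) index of the detected fingertip.
    surface_patch: pixels sampled around (not under) the finger, used to
                   estimate the surface's depth at that point.
    """
    surface_depth = np.median(surface_patch)  # robust estimate of the surface
    tip_depth = depth_map[tip]                # fingertip distance from camera
    return surface_depth - tip_depth          # gap between finger and surface

def update_click_state(was_clicked, hover_mm):
    """Apply the 1 cm click threshold, with hysteresis to keep drags alive."""
    if was_clicked:
        return hover_mm <= RELEASE_MM
    return hover_mm <= CLICK_MM
```

The hysteresis (a release threshold looser than the click threshold) is one plausible way to realize the "maintain the clicked state for dragging" behavior Benko describes, since a finger sliding along a surface naturally jitters in depth.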

Potential applications include projecting a full keyboard onto a table, zooming in and out of a map projected onto a notepad, or turning a paper document into an interactive surface for the purpose of adding annotations.

“It is now conceivable that anything one can do on today’s mobile devices, they could do in the palm of their hand,” Microsoft says on the project website. Although the shoulder contraption is bulky, Microsoft says there are no significant barriers to miniaturization and the entire system could eventually be the size of a matchbox and worn as a pendant or watch. But there’s no indication of concrete plans to turn this into a commercial product.

Beyond tracking a person’s fingers and environment, the system must also project interfaces similar to what we might expect on a smartphone. OmniTouch uses several methods, including creating a “lock point” to provide an interface that stays on the surface—such as your hand—as it moves. OmniTouch can automatically generate interfaces on easily distinguishable objects, like a notepad, table and wall, but the researchers say more sophisticated depth-driven object recognition will be needed to make the system capable of recognizing literally any surface. They have introduced one way to sidestep this complication: letting the user define the surface by “clicking” to create a generically sized area, or by clicking and dragging to create one of a custom size.
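The user-defined-surface interaction reduces to a small piece of geometry. The sketch below, with hypothetical function names and an assumed default interface size, shows the two cases: a lone click centers a generically sized rectangle on the touch point, while a click-and-drag spans a custom rectangle between the press and release points.

```python
# Assumed default interface size, in projector pixels (hypothetical values).
DEFAULT_W, DEFAULT_H = 200, 120

def define_surface(press, release=None):
    """Return an (x, y, w, h) interface rectangle on the touched surface.

    press: (x, y) where the finger clicked down.
    release: (x, y) where it lifted, or None for a simple click.
    """
    if release is None or release == press:
        # Simple click: center a generically sized area on the touch point.
        x, y = press
        return (x - DEFAULT_W // 2, y - DEFAULT_H // 2, DEFAULT_W, DEFAULT_H)
    # Click-and-drag: span the rectangle from press to release.
    (x0, y0), (x1, y1) = press, release
    return (min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0))
```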

Once the surface is defined, projecting the interface is relatively straightforward “since our projector is precisely calibrated to the depth camera coordinate system.” The prototype projects a 2D interface, but the team says “our approach easily lends itself to experimenting with 3D interfaces that take into account the true geometry of the projected surface.”
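Calibrating a projector to a depth camera's coordinate system is commonly modeled the same way as a second camera: a rigid transform (rotation and translation) maps 3D points from the camera's frame into the projector's frame, and a pinhole intrinsic matrix then maps them to projector pixels. The sketch below illustrates that standard model; it is an assumption about the general technique, not OmniTouch's published calibration procedure.

```python
import numpy as np

def project_point(point_cam, R, t, K_proj):
    """Map a 3D point in the depth camera's frame to a projector pixel.

    point_cam: (x, y, z) in camera coordinates, e.g. in millimeters.
    R, t: rotation matrix and translation vector from camera to projector
          coordinates, obtained from calibration.
    K_proj: 3x3 projector intrinsic matrix (pinhole model).
    """
    p = R @ np.asarray(point_cam) + t  # transform into projector coordinates
    uvw = K_proj @ p                   # pinhole projection to homogeneous pixels
    return uvw[:2] / uvw[2]            # divide out depth -> pixel (u, v)
```

With such a mapping, any 3D point the depth camera sees on a tracked surface can be lit by the corresponding projector pixel, which is what makes drawing a stable 2D interface on a moving hand feasible.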

OmniTouch is one of two similar Microsoft projects being unveiled this week at the Association for Computing Machinery’s Symposium on User Interface Software and Technology in Santa Barbara, Calif. The second project, called PocketTouch, lets users interact with their smartphones without removing them from their pockets.

Instead of taking your phone out of your pocket to dismiss a call or quickly respond to a text, PocketTouch lets you manipulate the phone simply by touching the outside of your pocket. For example, a user could trace letters on the outside of a pants pocket in order to write a text message. Researchers tested the prototype on 25 types of fabrics to ensure responsiveness. While the prototype is unwieldy, with extra hardware and a cable attached to the phone, it could theoretically be modified to work with a smartphone’s capacitive touchscreen.