From Apple's iPhone to Microsoft's Surface, multi-touch user interfaces are becoming all the rage. The only trouble is that our fingers are both a tool and a hindrance: whatever we're interacting with on the display is momentarily blocked by our fingers and hands.

Microsoft, Mitsubishi, and the University of Toronto are working on a solution, and it's called LucidTouch. Ars spoke with Daniel Wigdor, one of the researchers at the Mitsubishi Electric Research Labs (MERL), to get the inside scoop on what LucidTouch is, what problems the team has run into along the way, and what's coming in the future.

LucidTouch is currently the name for a prototype device that lets users interact with an 800x480 pixel screen without having to touch the front surface. Instead, users can manipulate objects by touching a sensor pad on the back of the device; this lets them resize images and text, as well as navigate any graphical user interface, without their hands getting in the way of the display. To achieve this, LucidTouch displays a semi-transparent version of your hands on its screen so that you know where your fingers are but can still see through them. If you want, you can also touch the front surface, a resistive touch screen, for some tasks while using the back surface for others.
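
To get a feel for what the device is doing, here's a minimal sketch of that pseudo-transparency compositing step in Python, assuming the rear-camera frame has already been segmented into a per-pixel hand mask. The function name, array layout, and opacity value are our own illustration, not MERL's code:

```python
import numpy as np

def composite_ghost_hands(ui_frame: np.ndarray,
                          hand_frame: np.ndarray,
                          hand_mask: np.ndarray,
                          alpha: float = 0.35) -> np.ndarray:
    """Blend a ghosted image of the hands over the rendered UI.

    ui_frame:   (H, W, 3) uint8 rendered interface
    hand_frame: (H, W, 3) uint8 rear-camera frame, assumed already mirrored
                so fingers line up with their on-screen contact points
    hand_mask:  (H, W) bool, True where a hand was detected
    alpha:      opacity of the ghosted hands (0 = invisible, 1 = opaque)
    """
    ui = ui_frame.astype(np.float32)
    hands = hand_frame.astype(np.float32)
    m = hand_mask[..., None]  # broadcast the mask over the RGB channels
    out = np.where(m, (1.0 - alpha) * ui + alpha * hands, ui)
    return out.astype(np.uint8)
```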

LucidTouch in action: the 2.0 prototype

LucidTouch is currently limited by sensing technology, however. In order to display hands and fingers in a semi-translucent fashion, Wigdor told Ars that the device currently needs a "boom cam" attached to the back on a small arm. The camera sends its data over USB to a desktop computer, which does all of the processing, so the prototype is tethered to a PC, limiting its portability. As a solution, Wigdor said, "We have a few ideas about how the sensing could be done with technologies embedded in the back of the LCD... such as with a capacitive array or with an array of LEDs to both illuminate and sense."
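
Whatever sensor ultimately ends up on the back, rear input has to be remapped before it can drive the front display: the back surface reports positions from its own point of view, so the x axis needs to be mirrored. Here's a hypothetical sketch, with the pad resolution invented for illustration (only the 800x480 display figure comes from the article):

```python
# Display resolution from the article; the rear pad resolution is assumed.
PAD_W, PAD_H = 1024, 1024
SCREEN_W, SCREEN_H = 800, 480

def rear_touch_to_screen(pad_x: int, pad_y: int) -> tuple[int, int]:
    """Map a rear-pad coordinate into front-screen pixel coordinates.

    The pad faces away from the user, so its x axis runs opposite to the
    screen's; y needs no flip.
    """
    sx = (PAD_W - 1 - pad_x) * SCREEN_W // PAD_W  # mirror the x axis
    sy = pad_y * SCREEN_H // PAD_H                # scale y directly
    return sx, sy

# A finger at the pad's left edge (as the pad sees it) lands at the
# screen's right edge:
print(rear_touch_to_screen(0, 0))  # -> (799, 0)
```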

MERL and Microsoft are both already well versed in multi-touch technology. In fact, MERL has a technology similar to Microsoft's Surface called "DiamondTouch." Instead of using cameras below the surface, however, DiamondTouch identifies users capacitively: each user sits in a different chair wired to a receiver, and touching the display couples a signal through that user's body to his or her receiver. One hundred units are already in use by universities and research organizations around the world, according to MERL. While DiamondTouch looks almost exactly like Surface, it doesn't appear to offer the same ability to interact with consumer electronics like cameras or PDAs, and it seems better suited to shared multi-touch applications for maps, photos, and blueprints.
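
As a rough, assumption-laden illustration of the principle (not MERL's implementation), identifying who touched what could look something like this: each antenna in the table carries a distinct signal, and the set of antennas a given chair's receiver "hears" reveals both who is touching and roughly where.

```python
def locate_touches(receiver_levels: dict[str, dict[int, float]],
                   threshold: float = 0.5) -> dict[str, tuple[int, ...]]:
    """Return, per user, the antenna indices whose coupled signal exceeds
    a detection threshold.

    receiver_levels: user name -> {antenna index: measured signal level}
    All names, shapes, and the threshold are invented for illustration.
    """
    touches = {}
    for user, levels in receiver_levels.items():
        active = tuple(i for i, v in sorted(levels.items()) if v > threshold)
        if active:
            touches[user] = active
    return touches

# Alice's chair receiver picks up antennas 3 and 4 strongly, so the system
# knows Alice is touching near those antennas; Bob isn't touching at all.
print(locate_touches({"alice": {3: 0.9, 4: 0.8, 7: 0.1},
                      "bob": {2: 0.05}}))
```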

Since LucidTouch is still constrained by current technology, requiring a camera boom and a PC, it's not clear when it will hit the mainstream market. If and when it does, it will likely appear in larger tablet PCs and be most useful to those who rely heavily on screen real estate, such as engineers and architects. We also think it would be a killer feature on small web-browsing devices, such as future versions of the Nokia N800 or the iPhone/iPod Touch. That's a long way off, however.

On Friday, version 2.0 of the LucidTouch prototype was completed. The new prototype is "much more usable than [the] first prototype," with a thinner, 1cm-thick LCD screen. There will be a live demonstration of LucidTouch 2.0, as well as a discussion of a research paper (LucidTouch: A See-Through Mobile Device), at the 2007 ACM Symposium on User Interface Software and Technology (UIST) in Newport, RI, from October 7-10. We'll be there to check it out.