Every once in a while I see a demo video that’s too good to be true. That’s what I first thought when I saw a video of Leap’s new motion-tracking peripheral for personal computers. As a result, I’ve delayed writing about it until I could see it in action for myself. After spending time with co-founders David Holz and Mike Buckwald, and getting a hands-on demo, I’m a believer in their technology and in the Leap as a revolutionary peripheral.

The USB device and its software really do full-on, real-time, sub-millimeter-resolution 3D tracking of fingers, hands, pencils, and just about anything else that fits within a 2-foot range of where you place it — all without draining the computer’s CPU.

The Leap’s production version will also be smaller than most mice, as you can see from the picture of David holding one in his hand (the prototype units are a little larger). The Leap connects to a PC or Mac using USB 2.0, and can seamlessly replace either a mouse or a multi-touch touchpad without additional software. That’s only the beginning, though. The Leap allows for complex interactions with applications, including sophisticated 3D gestures such as molding and extruding.

To tap into the wide range of possible applications, Leap plans to send out over 10,000 early units to developers, starting later this month — along with an SDK that allows programming in Java, C#, C++, and Python for starters. The full consumer release is scheduled to follow around the end of the year or early 2013, with pre-orders taken now for $70 each.

Structured light: Is the Leap a mini-Kinect?

Early coverage of the Leap has been quick to compare it to Microsoft’s revolutionary Kinect peripheral, although the Leap is much smaller and less expensive. Kinect uses a “structured light” field of infrared beams to measure depth. Its camera observes how a carefully designed array of spots from those beams falls on the surfaces in front of it; from the way the pattern is deformed, it can determine how far away objects are.
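The geometry behind structured light is simple triangulation: a spot that lands on a nearer surface appears shifted (a larger “disparity”) in the camera image than one landing farther away. A minimal sketch of that relationship, using made-up numbers rather than the Kinect’s actual calibration values:

```python
def depth_from_disparity(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Triangulated depth: the farther a surface is, the less a projected
    spot shifts (disparity) between the emitter's and camera's viewpoints."""
    return focal_px * baseline_m / disparity_px

# Hypothetical values: 580 px focal length, 7.5 cm emitter-camera baseline.
# A spot shifted by 29 px implies a surface about 1.5 m away.
print(depth_from_disparity(580, 0.075, 29))  # 1.5
```

Note the inverse relationship: disparity shrinks with distance, which is one reason structured-light accuracy degrades for faraway objects.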

Since Leap has confirmed that its device uses infrared LEDs and small cameras, like the Kinect, it is tempting to speculate that the Leap uses similar technology. However, the Kinect is accurate only to about a centimeter, while the Leap claims accuracy of 0.01mm, so Leap would have had to do a major leapfrog over the technology in the Kinect.

One possibility is that Leap is using a more sophisticated form of structured light, known as phase-shift projection. With this system, several patterns of light are emitted, each shifted slightly out of phase from the previous one. Phase-shift systems, using at least three emitters and three cameras, can achieve greater precision than single-emitter systems. However, they still haven’t achieved the kind of accuracy that Leap claims — at least at that low a cost — so it is hard to imagine that Leap is meeting its goals using any well-known form of structured light technology.
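To make the phase-shift idea concrete: in the standard three-step scheme, each pixel is sampled under three sinusoidal patterns offset by 120 degrees, and the pattern’s phase at that pixel (which encodes depth) can be recovered exactly from the three intensity readings. A simplified sketch, with simulated intensities rather than real sensor data:

```python
import math

def recovered_phase(i1: float, i2: float, i3: float) -> float:
    """Recover the projected pattern's phase at a pixel from three intensity
    samples taken 120 degrees apart; the phase maps to surface depth."""
    return math.atan2(math.sqrt(3) * (i1 - i3), 2 * i2 - i1 - i3)

# Simulate a pixel where the true phase is 0.7 radians
# (A = ambient brightness, B = pattern contrast):
A, B, phi = 0.5, 0.4, 0.7
shift = 2 * math.pi / 3
i1 = A + B * math.cos(phi - shift)
i2 = A + B * math.cos(phi)
i3 = A + B * math.cos(phi + shift)
print(round(recovered_phase(i1, i2, i3), 3))  # 0.7
```

Because phase varies continuously across the scene (unlike discrete spots), phase-shift systems can resolve depth more finely — which is why they are a plausible, if unconfirmed, candidate for what Leap is doing.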

Leap: A breakthrough in time-of-flight technology?

It is more likely that Leap uses an amazingly compact version of time-of-flight (TOF) technology to map the depth of users’ hands, fingers, and hand-held tools. Time-of-flight cameras rely on knowing the precise amount of time light takes to travel to, and return from, a subject to determine how far away it is. The technique is decades old, but early models were very large and expensive. Only recently has the technology been miniaturized to the point where it can fit in the palm of your hand.
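The core time-of-flight calculation is straightforward (the engineering challenge is timing light at picosecond scales, not the math): distance is the speed of light times half the measured round-trip time. An illustrative sketch — Leap has not disclosed its actual method:

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def tof_distance_m(round_trip_seconds: float) -> float:
    """Distance to a surface, given the round-trip travel time of a light
    pulse (halved because the light travels out and back)."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# Light reflecting off a surface 30 cm away returns in roughly 2 nanoseconds:
t = 2 * 0.30 / SPEED_OF_LIGHT
print(round(tof_distance_m(t), 2))  # 0.3
```

The numbers show why TOF precision is hard: resolving 0.01mm of depth means resolving differences in arrival time on the order of tens of femtoseconds.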

Current miniature TOF products achieve their low cost and small size by relying on one frequency of emitted light and one camera to measure distance. As a result, they do not have very high spatial resolution. For example, competitor Mesa’s new Swiss Ranger 4000 miniaturized TOF unit is accurate only to around 10mm.

Leap appears to have found a way to combine data from multiple cameras into a super-accurate model of the “bubble” of space in front of its device.

Next page: How does Leap obtain such staggering (0.01mm) accuracy?