Researchers at MIT Media Lab have developed a $500 "nano-camera" that can operate at the speed of light. According to the researchers, potential applications of the 3D camera include collision avoidance, gesture recognition, medical imaging, motion tracking and interactive gaming.

The team that developed the inexpensive "nano-camera" comprises Ramesh Raskar, Achuta Kadambi, Refael Whyte, Ayush Bhandari, and Christopher Barsi at MIT, and Adrian Dorrington and Lee Streeter from the University of Waikato in New Zealand.

The nano-camera uses the "Time of Flight" method to measure scenes, a method also used by Microsoft for its new Kinect sensor that ships with the Xbox One. In the Time of Flight approach, an object's location is calculated from how long transmitted light takes to reflect off a surface and return to the sensor. However, unlike conventional Time of Flight cameras, the new camera produces accurate measurements even in fog or rain, and can also correctly locate translucent objects.
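The distance calculation behind Time of Flight is simple round-trip arithmetic: distance is half the round-trip time multiplied by the speed of light. The sketch below uses illustrative values and is not the camera's actual firmware:

```python
# Time of Flight: the reflecting surface sits at half the round-trip
# distance travelled at the speed of light.
SPEED_OF_LIGHT = 299_792_458  # metres per second

def tof_distance(round_trip_seconds: float) -> float:
    """Distance to the reflecting surface, in metres."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A reflection arriving 10 nanoseconds after emission corresponds
# to a surface roughly 1.5 metres away.
print(tof_distance(10e-9))
```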

Conventional Time of Flight cameras struggle to accurately measure the distance light has travelled in a changing environment, through semi-transparent surfaces, or to an object in motion, because multiple reflections are created that smear the original reflected signal before it reaches the sensor, resulting in inaccurate data. To avoid this problem, the team used an encoding technique borrowed from the telecommunications industry.

Raskar, associate professor of media arts and sciences, and leader of the Camera Culture group at the Media Lab, explains the new method: "We use a new method that allows us to encode information in time. So when the data comes back, we can do calculations that are very common in the telecommunications world, to estimate different distances from the single signal."

Kadambi adds: "By solving the multipath problem, essentially just by changing the code, we are able to unmix the light paths and therefore visualize light moving across the scene."
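The idea of encoding information in time and then unmixing the returns can be illustrated with a toy example; the following sketch is a generic coded-illumination demonstration, not the researchers' actual code or algorithm. A known pseudo-random code is emitted, the sensor receives a superposition of delayed copies from two light paths (the multipath problem), and correlating the return against the emitted code recovers each path's delay:

```python
import random

random.seed(42)
N = 256
# Emitted binary code: a pseudo-random +/-1 sequence "encodes information in time".
code = [random.choice([-1, 1]) for _ in range(N)]

def delayed(signal, delay, gain):
    # Circularly shifted, attenuated copy of the emitted code.
    return [gain * signal[(i - delay) % N] for i in range(N)]

# Received light: two overlapping reflections, e.g. a translucent
# surface plus the wall behind it.
paths = [(5, 1.0), (12, 0.7)]  # (delay in samples, reflectivity)
received = [sum(vals) for vals in zip(*(delayed(code, d, g) for d, g in paths))]

# Correlate the return against the code at every candidate delay;
# strong peaks mark the delay of each light path.
corr = [sum(received[i] * code[(i - lag) % N] for i in range(N)) for lag in range(N)]
recovered = sorted(sorted(range(N), key=lambda lag: corr[lag], reverse=True)[:2])
print(recovered)  # the two unmixed path delays
```

A pseudo-random code works here because it correlates strongly only with an aligned copy of itself, so each reflection produces a distinct correlation peak even when the raw returns overlap.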

Raskar's group unveiled a trillion-frame-per-second "femto-camera" in 2011, which cost roughly $500,000 to build. That technique scans a scene with femtosecond (one quadrillionth of a second) pulses of light and uses extremely expensive laboratory-grade optical equipment to capture each image.

The team's "nano-camera" instead measures the scene with a continuous-wave signal oscillating at nanosecond (one thousand-millionth of a second) periods. According to the researchers, this means the nano-camera can reach a "time resolution" (the size of the interval between images) within one order of magnitude of femtophotography, while costing just $500. They call the technique nanophotography, and claim it delivers results similar to femtophotography, at marginally lower quality and a fraction of the cost.
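A camera's time resolution bounds the depth detail it can resolve, since light travels a fixed distance per unit time. The numbers below are generic order-of-magnitude arithmetic, not the team's published specifications:

```python
SPEED_OF_LIGHT = 299_792_458  # metres per second

def depth_resolution(time_resolution_seconds: float) -> float:
    """Smallest distinguishable depth step for a given timing granularity."""
    return SPEED_OF_LIGHT * time_resolution_seconds / 2

print(depth_resolution(1e-9))   # nanosecond timing  -> roughly 15 cm
print(depth_resolution(1e-15))  # femtosecond timing -> roughly 150 nm
```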

For more details, refer to the MIT Media Lab page.