For robots to operate in the physical world, they need a decent pair of eyes. Usually, this job falls to LIDAR — a technology that bounces light off nearby surfaces to build a 3D map of its surroundings. LIDAR works on the same basic principle as radar, but because it uses light rather than radio waves, it's far more precise: mounted on a plane, it can pick out individual leaves on a tree; fitted to a self-driving car, it can track the movements of cyclists and pedestrians.
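The core measurement is simple: the sensor times how long a light pulse takes to bounce off a surface and return, then converts that round trip into a distance. A minimal sketch of that calculation (illustrative only — real sensors also handle beam steering, noise, and multiple returns):

```python
C = 299_792_458  # speed of light in meters per second

def distance_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the target: the pulse travels out and back,
    so the one-way distance is half the round trip."""
    return C * round_trip_seconds / 2

# A return after roughly 66.7 nanoseconds means a target about 10 m away.
print(distance_from_round_trip(66.7e-9))
```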

However, LIDAR systems are also bulky and expensive. High-end models cost tens of thousands of dollars, and even the smallest new systems are the size of a hockey puck. Here's what a LIDAR sensor normally looks like, mounted on top of one of Google's self-driving cars.

The sensor is big because it's mechanical — it has moving parts. The section on top spins constantly so the lasers can build up a 360-degree map. This also limits the refresh rate of the LIDAR image: the sensor only knows where something is at the moment it bounces light off it. It can infer an object's position between scans from its past location, direction of travel, and speed, but that interval is still a blind spot of sorts.
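The inference step above can be sketched as simple dead reckoning: between scans, a tracker can only predict where a moving object is from its last measured position and velocity. (The numbers here are hypothetical, and real systems typically use filters such as Kalman filters rather than a bare linear extrapolation.)

```python
SCAN_RATE_HZ = 10            # assume a spinning LIDAR completing ~10 scans per second
SCAN_PERIOD = 1 / SCAN_RATE_HZ

def predict_position(last_pos_m, velocity_mps, elapsed_s):
    """Linear prediction of an object's position between scans:
    last known position plus velocity times elapsed time."""
    return tuple(p + v * elapsed_s for p, v in zip(last_pos_m, velocity_mps))

# A cyclist last seen at (5 m, 0 m) riding at 6 m/s along x:
# halfway through the next scan period, we can only estimate their position.
print(predict_position((5.0, 0.0), (6.0, 0.0), SCAN_PERIOD / 2))  # roughly (5.3, 0.0)
```

Until the next scan actually measures the cyclist again, this estimate is all the system has — which is the blind spot the spinning design creates.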

Researchers from MIT and DARPA might have a solution, though — a new version of LIDAR that shrinks the light-bouncing apparatus onto a chip smaller than a grain of rice. Writing for IEEE Spectrum, researchers Christopher Poulton and Michael Watts claim that their prototype sensors "promise to be orders of magnitude smaller, lighter, and cheaper than LIDAR systems available on the market today." They could be the eyes of future robots.

"Orders of magnitude smaller, lighter, and cheaper"

The key to this new technology is a discipline known as silicon photonics, in which engineers build miniature circuits that guide and steer light at a microscopic scale. Poulton and Watts compare it to microchip fabrication, which shrank electronic systems built from copper wire and discrete components onto ever-smaller transistors. (For a more detailed explanation of silicon photonics, be sure to read Poulton and Watts' full post.)

The end result is a LIDAR sensor that fits on a 0.5 millimeter by 6 millimeter chip, and which could cost as little as $10 to manufacture at scale. These chips have a number of limitations, including a relatively narrow field of view and a short range (currently, they can only "see" objects up to two meters away), but those involved say these properties could be improved relatively quickly. "We hope to achieve a 10-meter range within a year," write Poulton and Watts. "We believe that commercial LIDAR-on-a-chip solutions will be available in a few years."

The end result could be cheap sensors that do for machine vision what mobile processors have done for machine computation. Even if mechanical LIDAR systems retain an edge in long-range accuracy, solid-state LIDAR chips could give depth-sensing capabilities to a whole range of products. Poulton and Watts suggest placing their chips on a robot's fingers so it can "see what it is grasping," but the technology could also find its way into the next generation of AI-powered toys, 3D-mapping smartphones, and autonomous drones.