Depth sensors are used in smartphones for features like face unlock. These systems work by projecting thousands of laser dots to map the contours of a face, a process that demands a fast processor and a large battery to handle the heavy computation. The researchers were looking for a way to measure depth in small devices with limited battery life, such as smartwatches or microrobots.

To find a more efficient way to measure depth, they turned to spiders for inspiration. Humans judge depth by comparing the slightly different images captured by each eye, but jumping spiders achieve highly accurate depth perception despite their tiny brains. Each of their eyes contains layered retinas that capture images with different degrees of blur: an object appears blurry on one layer and crisp on another, and that difference allows an efficient calculation of depth.

To replicate the spiders' abilities in a sensor, the scientists used a new type of flat lens called a metalens, which can produce two images with different degrees of blur simultaneously. "Instead of using layered retina to capture multiple simultaneous images, as jumping spiders do, the metalens splits the light and forms two differently-defocused images side-by-side on a photosensor," Zhujun Shi, a Ph.D. candidate at Harvard and co-first author of the paper, explained.

The final piece of the puzzle is a highly efficient algorithm that compares the two images produced by the metalens and builds a depth map from the difference in blur. Taken together, the metalens and algorithm form a new type of depth camera that could be used in technologies from lightweight VR headsets to wearables to microrobots.
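To make the idea concrete, here is a minimal Python sketch of the general depth-from-defocus principle the article describes: comparing the local sharpness of two differently defocused views of the same scene yields a per-pixel relative-depth cue. The function names, the Gaussian-blur simulation of the two views, and the choice of focus measure (local Laplacian energy) are illustrative assumptions, not the researchers' actual algorithm; mapping the sharpness ratio to metric depth would additionally require lens calibration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, laplace, uniform_filter

def local_sharpness(img, window=9):
    """Mean squared Laplacian response in a local window --
    a standard focus measure (higher means crisper)."""
    lap = laplace(img.astype(float))
    return uniform_filter(lap ** 2, size=window)

def depth_proxy(img_a, img_b, window=9, eps=1e-9):
    """Per-pixel ratio of the two sharpness maps.

    Values above 1 suggest the scene point is closer to image A's
    focal plane; below 1, closer to image B's. Converting this
    ratio to actual distance needs calibration data (omitted here).
    """
    sharp_a = local_sharpness(img_a, window)
    sharp_b = local_sharpness(img_b, window)
    return sharp_a / (sharp_b + eps)

# Demo: one textured scene viewed through two focus settings,
# simulated here as Gaussian blurs of different widths -- a stand-in
# for the two defocused images a metalens would form side by side.
rng = np.random.default_rng(0)
scene = rng.random((64, 64))
img_near_focus = gaussian_filter(scene, sigma=0.5)  # mild defocus
img_far_focus = gaussian_filter(scene, sigma=2.5)   # strong defocus

ratio = depth_proxy(img_near_focus, img_far_focus)
print(ratio.mean() > 1.0)  # True: the mildly defocused view is crisper
```

The appeal of this approach for tiny devices is that only two images and a few cheap per-pixel filtering operations are needed, rather than projecting and processing thousands of laser dots.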

The research is published in the journal Proceedings of the National Academy of Sciences.