Seeing around corners is difficult, but not impossible. When light scatters off an object that is hidden from view, it carries information about the object that can be reconstructed computationally. This imaging method, known as non-line-of-sight imaging, typically requires expensive, specialized equipment. But in a paper in Nature, Saunders et al.1 report an approach that needs only a single photograph captured using a standard digital camera. The technique can reconstruct the position of an opaque object, as well as the scene behind the object, when both the object and the scene are out of direct sight.


There are two types of reflection: specular and diffuse. In specular reflection, incident light is deflected by a specific angle, whereas in diffuse reflection, it is scattered in many directions. In a conventional periscope — such as those once widely used by submerged submarines to scan the sea surface — specular reflection at the surface of a mirror is used to deflect the path of light onto areas outside the observer’s line of sight.

Advances in computer science over the past few years have enabled optical-imaging systems to use information collected from diffusely reflecting surfaces to look around corners and to view scenes that are out of direct sight. In this case, the optical path is not simply deflected by mirrors; instead, diffuse reflection scrambles the imaging information. The information must therefore be reconstructed computationally from a series of measurements, in a similar way to that used in the X-ray imaging method known as computed tomography.
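The computed-tomography analogy can be made concrete. In both cases, each measurement is a known weighted sum of unknown scene values, so reconstruction amounts to solving a linear inverse problem. The following is a minimal, purely illustrative sketch (the matrix and numbers are invented, not taken from any of the papers discussed):

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown 'hidden scene': the brightness of five patches.
x_true = np.array([0.0, 1.0, 0.5, 0.0, 2.0])

# Forward model: each of 20 measurements (e.g. camera pixels)
# records a known weighted sum of the hidden patches.
A = rng.uniform(0.0, 1.0, size=(20, 5))
y = A @ x_true  # simulated, noise-free measurements

# Reconstruction: invert the linear system by least squares,
# much as computed tomography inverts its projection data.
x_est, *_ = np.linalg.lstsq(A, y, rcond=None)
```

With noise-free data and more measurements than unknowns, the least-squares solution recovers the hidden values exactly; real systems must also contend with noise and model error.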

Pioneering experiments2,3 demonstrated that the surface geometries of objects hidden from direct view could be reconstructed. In these experiments, a diffusely reflecting ‘relay’ surface is irradiated with ultrashort laser pulses to indirectly illuminate a target that is behind an obscuring structure. Light reflected by the target returns to the relay surface and is detected by a specialized optical sensor.

For certain types of imaging, known as transient and time-resolved imaging, such a sensor can measure the arrival times of photons with extremely high precision. This timing information, together with the angles at which the photons hit the sensor and details about the relay surface, can be used to deduce the locations of reflecting surfaces by computational means. For example, the principle of non-line-of-sight transient imaging has been used to track objects in real time4 and to reconstruct object shapes and textures5.
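The timing principle is simple geometry: a photon’s round-trip delay between the relay surface and a hidden reflecting surface fixes the distance to that surface. A toy calculation (the delay value is illustrative):

```python
# Speed of light in metres per second.
C = 299_792_458.0

def distance_from_round_trip(delay_s: float) -> float:
    """Distance to a reflecting surface, given the round-trip
    time of a photon (relay surface -> target -> relay surface)."""
    return C * delay_s / 2.0

# A photon arriving 10 nanoseconds after the laser pulse implies
# a target about 1.5 metres from the relay surface.
d = distance_from_round_trip(10e-9)  # ~1.499 m
```

Transient-imaging sensors resolve such delays to picoseconds, which is why they can localize hidden surfaces to within millimetres.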

A more challenging task is non-line-of-sight imaging using an ordinary camera. In this case, photon arrival times are not recorded and therefore cannot be used to estimate a target’s spatial properties. As a result, much more computation is needed. An ordinary camera equipped with a continuous light source has been used to track the position and rotation of an object hidden from direct view6. However, the shape and size of the object had to be known in advance.

Another source of imaging information can be found in areas containing shadows or penumbrae — regions of shadow in which only part of the light source is obscured. In non-line-of-sight imaging, the obscuring structure can block certain optical paths and cast a shadow on the relay surface (Fig. 1). Therefore, information about the hidden scene is represented not only by the photons that arrive at the detector, but also by the photons that are blocked.

Figure 1 | Non-line-of-sight imaging. Saunders et al.1 report a technique for imaging objects that are outside the direct field of view of a camera. In their approach, some of the light that is emitted from a hidden target is blocked by an obscuring structure of unknown position. The blocked light produces a shadow on a ‘relay’ surface, whereas the rest of the light illuminates this surface. Finally, a camera takes a photograph of the surface and feeds this information through a computer algorithm (not shown) that can reconstruct an image of the hidden target and give an estimate of the position of the obscuring structure.

The idea of using shadows in non-line-of-sight imaging was first demonstrated by turning an ordinary camera into a corner camera7, in which shadows cast by the edge of a doorway or the corner of a wall were analysed. Small variations in intensity and colour in penumbrae were detected and used to observe the movement of people hidden around a corner. However, owing to the large size of the obscuring structures, only some of the spatial information could be reconstructed. The idea was later used in a more general approach to reconstruct a hidden scene from intensity variations in the penumbrae cast by relatively small structures, such as the leaves of a plant8. Nevertheless, a detailed calibration of the scene was needed to determine the photons’ direction of propagation.

Saunders and colleagues present an approach in which light emanating from a hidden target is partially blocked by an obscuring structure of unknown position, producing a pattern of illumination and shadow on a relay surface (Fig. 1). A standard digital camera takes a photograph of this pattern and feeds the information into a computer algorithm that substantially improves on previous methods for analysing shadowed regions.
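Why the occluder helps can be seen in a one-dimensional toy model (entirely hypothetical, and not the authors’ algorithm). Without an obstruction, every point on the relay surface receives light from the whole scene, so the measurements are too similar to invert; an opaque blocker lets each point see a different subset of the scene, restoring the lost information:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 40  # hidden-scene patches = relay-surface pixels (toy sizes)
x_true = rng.uniform(0.0, 1.0, size=n)  # hidden-scene brightness

# No occluder: every relay pixel sees the entire scene. The
# light-transport matrix is all ones and has rank 1, so the
# scene cannot be recovered from the photograph.
A_open = np.ones((n, n))
rank_open = np.linalg.matrix_rank(A_open)  # 1

# With an occluder, each relay pixel sees a different subset of
# the scene (modelled here crudely as random binary visibility),
# which makes the transport matrix invertible.
A_occluded = (rng.random((n, n)) > 0.5).astype(float)
y = A_occluded @ x_true  # the photographed penumbra pattern

# Least-squares reconstruction, standing in for the authors'
# far more sophisticated algorithm.
x_est, *_ = np.linalg.lstsq(A_occluded, y, rcond=None)
```

The contrast between the rank-1 open geometry and the invertible occluded one is the essence of the idea: the blocker, paradoxically, is what makes the photograph informative.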

The algorithm can estimate the position of the obscuring structure and produce an image of the hidden target. Furthermore, from a single photograph, brightness and colour variations in the target can be reconstructed with unprecedented resolution. By analysing a series of photographs, any motion of the target can be observed and displayed on a monitor.

The authors’ approach can extend the perception range of ordinary cameras, and therefore enhance the equipment’s sensing capabilities. Future improvements to the technique might enable the shape of the obscuring structure to be determined and allow a 3D reconstruction of the hidden scene. Because we can see objects only in our direct field of view, such non-line-of-sight imaging could revolutionize how we think about our perception of the environment.

Saunders and colleagues’ work could lead to improvements in microscopy and in medical-imaging devices such as endoscopes. Moreover, their approach might find applications in the monitoring of hazardous or inaccessible areas such as chemical or nuclear plants, and in the industrial inspection of, for example, turbines and enclosed areas. Finally, the technique could be used by vehicles to avoid collisions, and by firefighters and first responders to look into burning or collapsed structures. The results of this work will therefore have a large impact on the development of imaging devices that have extended perception ranges.