John Hunt, Duke ECE

With a new imaging technique, Duke University researchers may have brought us one step closer to affordable self-driving cars, cheaper and more accurate airport scanners, and improved medical imaging.

Taking an ordinary picture is simple: A lens focuses light onto a detector array, which contains millions of individual silicon detectors, one for each pixel in the final photograph. But when you venture beyond the visible spectrum and begin imaging in regions with longer wavelengths, such as microwaves or millimeter waves, things get more difficult. You need larger, more sensitive, and more expensive detectors, capable of recording the phase of light as well as its intensity. These images are useful because such light can pass through materials that light in the optical range cannot penetrate. But since providing a sensor for every point you need to capture would cost orders of magnitude more than it does in a simple camera, the only feasible way to get them is to take a single detector, or a line of them, and move it from point to point across the plane you wish to capture. The process is slow and requires heavy hardware.

An example you've probably seen is a millimeter-wave airport scanner; the swiveling bar contains a line of detectors and takes a millimeter-wave image as you stand still. Such detectors are also used in self-driving cars, as they can cut through fog and dust in the air to sense approaching obstacles, but their weight and cost hold back the possibility of a mass-marketed, unmanned automobile.

The new imaging system, explained in a study published in Science today, replaces all that material, previously either an expensive wall of detectors or a cumbersome machine that swivels a few detectors across the visual field, with a single slab. John Hunt, a graduate student at Duke and corresponding author, says the design covers the entire area you'd have scanned using an aperture fed by a single detector. "It has a large aperture, which is good for resolution, but it's thin, has no moving parts, and it's made of relatively cheap materials," he says. With the flat slab facing a scene, the researchers illuminate the room with radiation. Back-scattered radiation from objects in the scene floods the slab, with certain frequencies making it through to be recorded by a vector network analyzer that plots the location of the obstacles.

It works because of the metamaterial Hunt and his colleagues have developed. Light enters the aperture and passes into a thin sheet of metamaterial set against the other side of it. The material is a wave-guiding structure, like a fiber-optic cable. Light gets stuck inside it, but at certain frequencies some parts of the sheet become transparent and allow light to leak out. Those escaping beams travel back into the aperture, which collects them. Since the metamaterial behaves differently for each frequency of light, it creates a different set of beams that go in different directions for each frequency. By sweeping across a narrow range of frequencies in the microwave band, the researchers are able to collect a complete image.
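Conceptually, each frequency's leaked beam pattern samples the scene in a different way, so one sweep across the band gathers many distinct measurements at a single detector. The sketch below is a toy model of that idea, not the authors' code; the random matrix standing in for the aperture's frequency-dependent beam patterns, and all sizes and values, are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_pixels = 64   # points along a 1-D strip of the scene
n_freqs = 32    # frequencies swept across the microwave band

# Hypothetical stand-in for the aperture: each row models the beam
# pattern leaked at one frequency (the real patterns come from the
# metamaterial's design, not from random numbers).
H = rng.standard_normal((n_freqs, n_pixels))

# A sparse scene: a few reflective objects in an otherwise empty room.
scene = np.zeros(n_pixels)
scene[[10, 40, 55]] = [1.0, 0.6, 0.8]

# One frequency sweep yields one reading per frequency at the detector.
measurements = H @ scene
```

The key point is that the hardware does no scanning: varying the frequency, rather than moving a detector, is what changes which parts of the scene are being probed.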

The device can work with only one detector by using compressive imaging, which collects the minimum amount of data needed to reconstruct a visible image. The reconstruction of that data into an image could produce any number of pictures, so the team does some creative math to add parameters for expected results. When taking a picture of the night sky, for example, you would filter possible results for images with the fewest bright spots, as you'd expect the photo to be mostly black.
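That "creative math" is, in essence, sparse reconstruction: among the many scenes consistent with too few measurements, prefer the one with the fewest bright spots. Here is a minimal sketch using a standard routine of that kind (iterative soft-thresholding); the random measurement matrix and every parameter value are assumptions for illustration, not the team's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: only 32 measurements of a 64-pixel scene that
# contains just three bright spots.
H = rng.standard_normal((32, 64))
scene = np.zeros(64)
scene[[10, 40, 55]] = [1.0, 0.6, 0.8]
y = H @ scene

def ista(H, y, lam=0.05, n_iter=3000):
    """Iterative soft-thresholding: a least-squares fit combined with
    a penalty that favors images with the fewest bright spots."""
    x = np.zeros(H.shape[1])
    step = 1.0 / np.linalg.norm(H, 2) ** 2   # safe gradient step size
    for _ in range(n_iter):
        x = x + step * H.T @ (y - H @ x)                           # fit the data
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)   # enforce sparsity
    return x

x_hat = ista(H, y)
# The brightest reconstructed pixels land where the objects were placed.
```

Note that the system of equations is underdetermined (32 measurements, 64 unknowns); it is only the added expectation of sparsity that singles out the right image.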

Compressive imaging is a fairly new concept, but one day these algorithms could be used to make the outlandish super-resolution powers seen on crime shows (the ones that turn a pixelated security-cam shot into a glossy headshot of the perp) a reality. "It's kind of magical," Hunt says, "because you end up with more data in the final picture than you actually collected, but it's not just a good approximation. It's the true image."

In their experiments, Hunt and his team produced accurate recreations of the placement of objects in a room they flooded with microwaves. But because their current aperture is one-dimensional, consisting of a strip of metamaterial lying flat, the images were cross sections of a horizontal plane through the scene, showing range only. The next step is a two-dimensional aperture, which will produce three-dimensional images. With more complex, realistic scenes being imaged, Hunt believes the technology will be ideal for self-driving cars. But he's confident the technology will also find other industries to thrive in.

"The cost is so much lower than current systems that people will figure out plenty of ways to use it," he says. Even the most advanced aircraft still use mechanically gimbaled dishes for radars, which require large volumes inside the vehicle to house their turning mechanisms. Eventually, he hopes the new system will be a cheap, unobtrusive addition to both manned and unmanned vehicles.
