The optics of the camera obscura have faithfully served photographers for ages. The recipe has been simple: a lens, an aperture, a dark box and something to record the light.

But the camera as we know it is changing. A revolution in digital imaging research could produce cameras that surpass the camera obscura in almost every technical way: resolution, size and energy efficiency. It’s called computational photography, and it stems from the idea that if you can capture visual data instead of a true image, the picture can be reconstructed with software.

With cameras capturing light differently, a lens isn’t necessarily needed anymore. Instead, visual data can be gathered by playing tricks with light, like forcing it through a microscopic grating or refracting it through a glass sphere. Only years ago, this technology was confined to the lab. Now it has made its way into consumer smartphones.


Shree K. Nayar is an evangelist for computational photography. His mantra: optical coding, computational decoding. Outside the lab, that means what you see isn’t necessarily what you get.

“It’s a departure from the old school, when we were thinking about capturing the image you’re going to show, and making sure that that image is of the highest quality,” Mr. Nayar said. “Here we are saying that the image that you record is not even seen.”

Mr. Nayar leads the Computer Vision Laboratory at Columbia University, where he has built high dynamic range sensors that are found in smartphones, compact gigapixel cameras and sensors that power themselves as they take photos. For more than 10 years, he has also published papers with Sony’s imaging research team detailing HDR sensors.

Sony has taken a keen interest in HDR sensor technology. HDR sensors of various designs are now being installed in major smartphones, including Sony’s Xperia line, according to a company spokesman.


“That’s your first example of a widespread use of computational photography in some ways,” Mr. Nayar said, referring to the adoption of HDR sensors in smartphones. “The user doesn’t even know it’s happening.”

In traditional sensors, every pixel is created equal. If too much light hits one part of the sensor for too long, the image is overexposed and that detail is gone. But with Mr. Nayar’s HDR sensor, some pixels are less sensitive, some more so, and some are weighted normally. If a few pixels are overexposed, information from the surrounding pixels can recapture that detail. Sony’s HDR approach for its smartphones is different: the pixels are layered directly on top of the sensor’s circuitry.
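The recovery step can be pictured in a few lines of code. Here is a minimal sketch, assuming a simple checkerboard of full- and quarter-sensitivity pixels and a plain neighbor average for the fill-in; the layout, weights and interpolation in the real sensor are more sophisticated.

```python
import numpy as np

# Hypothetical 2x2 gain pattern: each pixel scales the scene's radiance
# by its own sensitivity, then clips at the sensor's ceiling.
GAINS = np.array([[1.0, 0.25],
                  [0.25, 1.0]])  # assumed layout; the real pattern differs
SATURATION = 1.0                 # normalized full-well capacity

def capture(radiance):
    """Simulate a spatially varying exposure capture of a radiance map."""
    h, w = radiance.shape
    gains = np.tile(GAINS, (h // 2, w // 2))
    return np.clip(radiance * gains, 0.0, SATURATION), gains

def reconstruct(raw, gains):
    """Undo each pixel's gain, then fill saturated pixels from neighbors."""
    est = raw / gains
    saturated = raw >= SATURATION
    # Mask out saturated pixels so only valid neighbors enter the average.
    padded = np.pad(np.where(saturated, np.nan, est), 1, constant_values=np.nan)
    for y, x in zip(*np.where(saturated)):
        est[y, x] = np.nanmean(padded[y:y + 3, x:x + 3])
    return est

scene = np.random.uniform(0.0, 4.0, (8, 8))  # radiance beyond one pixel's range
raw, gains = capture(scene)
hdr = reconstruct(raw, gains)
```

The trick is that a quarter-sensitivity pixel keeps reporting useful values long after its full-sensitivity neighbors have maxed out, so software can borrow from it to restore the lost detail.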

A computational approach has also allowed the Columbia lab to experiment with extremely high-resolution imaging, namely a gigapixel camera.

The light passes through a glass sphere instead of a traditional lens and is refracted onto a cup-shaped array of angled sensors. Because the camera knows exactly how light will pass through the sphere, it decodes and stitches the data from each sensor into a complete image.
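The stitching half of that pipeline is simple to picture. Here is a hedged sketch that assumes a 4-by-4 grid of sensors with known, calibrated offsets, and skips the per-tile optical decoding entirely; the tile sizes, overlap and geometry are invented for illustration.

```python
import numpy as np

# Toy geometry: each small sensor behind the sphere sees a known,
# slightly overlapping tile of the full field of view. In practice the
# offsets come from calibrating how the sphere bends light.
TILE, STEP, GRID = 64, 56, 4  # tile size, stride (8-pixel overlap), 4x4 sensors

def stitch(tiles):
    """Blend decoded sensor tiles into one mosaic at their known offsets."""
    size = STEP * (GRID - 1) + TILE
    mosaic = np.zeros((size, size))
    weight = np.zeros((size, size))
    for i in range(GRID):
        for j in range(GRID):
            y, x = i * STEP, j * STEP
            mosaic[y:y + TILE, x:x + TILE] += tiles[i][j]
            weight[y:y + TILE, x:x + TILE] += 1.0
    return mosaic / weight  # average where neighboring tiles overlap

tiles = [[np.random.rand(TILE, TILE) for _ in range(GRID)] for _ in range(GRID)]
image = stitch(tiles)
```

Because the mapping from sphere to sensors is fixed and known in advance, the stitching never has to search for matching features the way panorama software does.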

“The hope is that if this architecture is adopted, you’ll see smaller and smaller cameras producing images with orders of magnitude higher resolution,” Mr. Nayar said.

That might mean in the future you could have a gigapixel camera in your smartphone, or SLR could stand for “spherical lens reflex.”

Another invention out of Columbia is a concept camera that powers itself with the very light it captures, because its pixels are built from the same basic electronic components used in solar panels.

“So we said why not redesign the pixels in the camera to do both? It can measure light and convert light to electricity,” Mr. Nayar said.


Right now the Columbia prototype can sustain itself indefinitely while taking one photo per second in a well-lit indoor room. The scientists call it the Eternal Camera. As the technology develops, it could make smartphone cameras less reliant on their batteries, or keep remote stand-alone cameras running for long periods.
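The arithmetic behind that frame rate is back-of-the-envelope. In this sketch the power and energy figures are invented, chosen only to land near the prototype’s one frame per second; the Columbia team’s real numbers differ.

```python
# Self-powered pixels must harvest at least as much energy per frame as
# the capture and readout consume; the ratio sets the sustainable rate.
HARVEST_POWER_UW = 50.0     # assumed power harvested in a well-lit room (microwatts)
ENERGY_PER_FRAME_UJ = 50.0  # assumed energy to capture one frame (microjoules)

def sustainable_fps(harvest_uw=HARVEST_POWER_UW, frame_uj=ENERGY_PER_FRAME_UJ):
    """Frames per second the harvested power can pay for indefinitely."""
    return harvest_uw / frame_uj  # (uJ/s) / (uJ/frame) = frames per second

print(f"{sustainable_fps():.2f} frames per second")  # 1.00 with these assumptions
```

Brighter rooms raise the harvest side of the ledger, which is why light level, not storage, sets the camera’s pace.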

Shirking the camera obscura also allows researchers to lose the lens altogether.

A group at Rambus Labs is developing lensless image sensors. Instead of the traditional glass, a microscopic grating is placed over the sensor. Light spreads out as it passes through the grating, creating complex patterns on the sensor. The grating casts those patterns deliberately, so that the image can be instantly reconstructed in software. Why all the hassle? The payoff is an entire camera less than a millimeter thick.

But what we think of as “the picture” would be unrecognizable to the human eye.

“It doesn’t look like an image whatsoever, we call it a blob,” David G. Stork, a fellow at Rambus Labs, said. “It just looks like a mess, but it contains the proper information.”
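One way to see why a blob is enough: if the grating’s effect can be modeled as a known linear operator, software can simply invert it. In this toy sketch a random matrix stands in for the grating’s real, calibrated response, and the scene is a small flattened vector rather than a full sensor readout.

```python
import numpy as np

# Toy stand-in for lensless capture: the grating maps the scene x to the
# recorded "blob" b through a known linear operator A. Here A is random;
# a real operator would be measured from the grating's diffraction pattern.
rng = np.random.default_rng(0)
N = 400                      # a flattened 20x20 toy scene
A = rng.normal(size=(N, N))  # assumed calibrated sensing matrix
scene = rng.random(N)

blob = A @ scene  # what the sensor records: a mess, but an invertible one

# Tikhonov-regularized inverse: stable even if A is noisy or ill-conditioned.
lam = 1e-3
scene_hat = np.linalg.solve(A.T @ A + lam * np.eye(N), A.T @ blob)

print(np.max(np.abs(scene - scene_hat)))  # tiny: the scene comes back
```

Nothing on the sensor ever has to look like a picture; as long as the operator is known and well behaved, the picture is recoverable after the fact.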

Rambus’s lensless sensors are low-resolution at 400×400 pixels, and Mr. Stork intends for them to stay that way. He sees them taking photos that no humans would ever see: reading QR codes to launch websites, or recognizing faces to unlock mobile phones.

To Mr. Stork, we’re just at the beginning of seeing what computational photography can do.

“What we capture on the sensor doesn’t have to look like an image,” he said. “It’s broadening our notion of what an image is.”
