The days of saying "cheese" are numbered. Soon, you may be saying "whoa" as you look into the multiple lenses of the unique Light L16. It's like having a staring contest with two spiders: 16 cameras, arranged in a seemingly haphazard fashion, peer back at you. Their tiny mirrors shimmer behind a thin sheet of glass. It's the physical embodiment of a Google Deep Dream image.

When you see this camera, you will immediately want to pick it up and shoot pictures with it. What will they look like, the images shot with this bizarre multi-camera? I can't say, unfortunately. I wasn't able to try it. The Light executives I met with didn't have a fully operational version of the camera with them during our meeting. They're still working on the software, the editing tools, and the custom circuitry needed to put this camera in the hands of the masses.

But if the L16 performs as advertised—and Light's own sample gallery of images taken with the L16 is very impressive—a revolutionary new type of professional-level camera is on the horizon. It takes really high-resolution images. It lets you adjust the depth of field in your pictures after you take a shot. It excels in low light despite having nothing bigger than a smartphone sensor and nothing more expensive than a plastic lens embedded in it. And despite its flat, phone-like face, with no protruding lens barrel, it has an optical zoom range of 35mm to 150mm.

The Eye Phone

The front of the L16 is flat, with the multiple lenses embedded behind a transparent glass pane. Behind the scenes, each optical porthole feeds light into a separate 13-megapixel sensor—and the camera can capture images at three discrete focal lengths. Five sensors tucked behind five 35mm lenses point straight out. From there, things get crazy: The other 11 cameras, using a lens arrangement the Light team calls "folded optics," are like periscopes. Their sensors, tucked behind five 70mm lenses and six 150mm lenses, sit perpendicular to the front of the camera. Little robotic mirrors, visible through the front of the L16 as you mug for a shot, deflect photons at right angles toward the sensors.

And because those long-zoom lens barrels are positioned sideways, there's ample room to pack more lens elements in each barrel. According to Light co-founder and CEO Dave Grannan, that extra room makes a huge difference when it comes to the camera's optical oomph.

"These big heavy lenses in your conventional camera may have 15 glass elements," Grannan says. "Sometimes a flourite element or some exotic material in there. Traditionally they're spherical, but one of the elements might have a slight asphere, which is a very expensive process for a glass lens… But the molded-plastic technology used in our cellphones has been perfected. They've got these lenses so good, they're diffraction limited. When you make them aspherical on each surface, so there's five of these and they're on both sides, that's 10 aspherical surfaces. The incremental cost to make these aspherical is zero—it's a plastic mold, you punch them out. In glass, it's very expensive. Just one of them might be an asphere. Five of these in a barrel for your Android or iPhone at volume costs a buck."

But why does a camera need 16 sensors and 16 multi-element lenses and 11 mirrors just to take a photograph? Well, it doesn't. But Light has its sights set on the holy grail with the L16: a slim and compact camera that performs like a bulky DSLR without the need for expensive interchangeable lenses or even a large sensor. And if it's a success, this camera's internal infrastructure could change the way our smartphones take pictures very soon.

Today's smartphone cameras are already excellent, but there are a few key areas in which they struggle. They have small sensors, so they're not the best performers in low light, nor do they produce the pleasantly shallow depth of field—the "bokeh"—of a DSLR or larger-sensored camera. They also don't have optical-zoom lenses; you've got to move your feet to zoom or digitally zoom the image, which negatively impacts detail.

Some smartphone cameras have taken aim at these problems with creative solutions, and those results have been impressive. The Nokia Lumia 1020 famously crammed 41 megapixels into its sensor, giving its users a lot more room to crop and digitally zoom without destroying image detail. Panasonic's Lumix CM1 packed a larger sensor into a smartphone body, giving it image capabilities that match not just a dedicated compact camera, but a great one.

But packing more megapixels into a small sensor doesn't necessarily mean better image quality, and putting a large sensor into a phone doesn't give it optical zoom. With its multi-camera, multi-lens approach and some complex computational imaging, Light says it can squeeze big-sensor image quality, very high resolution, and big-lens zoom out of its flat-fronted, futuristic brick.

The Power of 16 Smartphone Sensors

According to Light co-founder and CTO Dr. Rajiv Laroia, all the sensors in the Light camera work in tandem to mimic the benefits of a very large sensor. And the sensors Light is using are relatively cheap—about three bucks a pop, according to Grannan—so it's the number of them and the computational magic that unites them all that really make the camera special.

Laroia explains that the main drawback with even the very best smartphone cameras is that they have small sensors and are very unpredictable at the pixel level. Cramming a tiny sensor full of pixels shrinks each pixel, which reduces the number of photons each one can collect, which in turn affects the accuracy of the overall image. Laroia likens each pixel to a bucket that collects photons.

"The pixels in these sensors, modern cellphone sensors, have a bucket size of about 5,000 photons," Laroia explains. "When you have a pixel, you're not sensing an on/off signal. You're sensing shades, you want to construct the image as you see it. So let's say you want to put 1,000 levels to sense the shades from a bucket that has a 5,000-photon capacity. That means each level responds to five photons. That's not a lot of light energy. It's very little. In fact, quantum mechanics claims that at that level, that much light energy, what you record isn't statistical. It's a probabilistic thing."

That's why small sensors record a lot of grain and noise, Laroia says, especially in low-light settings. And while DSLRs take more-detailed pictures in the dark due to their larger sensors, Laroia says the Light L16 uses a distributed array of smaller sensors to spread out the work and do the same thing.

"We can mimic a much bigger pixel than we actually have by using multiple sensors," Laroia says. "The effective sensor area becomes a lot larger. The effective bucket size becomes 50,000 photons instead of 5,000 photons. And noise goes down significantly. The dynamic range increases significantly. Even if you don't do anything fancy, the dynamic range increases by tenfold in what we do here."

How Many Sensors Does It Take to Take a Photo?

Well, not 16.

When you take a picture with the L16, you're not taking a photo with all 16 sensors. Depending on your focal length, up to 10 of its cameras capture shots that are stitched together to create a single high-resolution image.

At 35mm, its widest-angle field of view, all five 35mm cameras fire. The images captured with each of those sensors slightly overlap, which helps the camera stitch together a seamless image. It's also the secret sauce to lining up the shots for an additional five-camera capture.

"70mm is about one quarter of the field of view of that 35mm shot, and 150mm is one quarter of the 70mm," Grannan explains. "Those five 35mm cameras all fire at the same time, and they're slightly overlapping. That's key because we can use geometry, we can use parallax to determine where everything is in relation to each other."

The mirrors deflecting photons toward the 70mm and 150mm lenses aren't static. They're on actuators, and they tilt slightly to adjust for each photo. In this case, the mirrors seated in front of the 70mm lenses quickly adjust to capture an image with more detail—52 megapixels of detail with a file size between 30 and 50MB, to be exact.

"We use those 70mm's, normally they're pointed at the center of the image, and we move one so it takes this upper left quadrant," Grannan says. "It's a 13-megapixel picture of one quarter of the scene. Another one goes to the upper right quadrant. Two others go to the lower right and left quadrants. And one stays in the middle, because we want our image quality the best there… So then, you end up with all these 13-megapixel sensors taking one 52-megapixel image."

The same process occurs with the 70mm and 150mm modules when the camera takes a 70mm photo. At 150mm, it's just the 150mm modules firing. At 52 megapixels, each image also has a ton of resolution to spare for cropping digitally beyond that—with ample effective sensor area to retain a lot of detail and dynamic range.

But there are limitations in the setup. Between the set focal lengths, you lose a bit of resolution in the image.

"Let's say we're taking a 50mm, which is between 35 and 70," Grannan says. "We go ahead and fire the five 35s, and we crop. We're going to lose a little bit of this light energy on the edges. Not ideal, but it's a tradeoff we make. But with the 70s, we move it so that we are just capturing within that 50mm frame, so all the light energy stays within the view of 70mm… With this, it's not 52 megapixels, it's about 40 megapixels, which is still a lot in the world of DSLRs."

And because the cameras are capturing so much data, at different aperture values and focal lengths, the post-shot retouching options are groundbreaking. According to Laroia and Grannan, only the exposure time needs to be set at the moment you take the picture. Things like depth of field, ISO, and dynamic range can be adjusted after the fact, and they've only started to delve into the possibilities.

"We have a team of 11 PhDs in computational optics," Laroia says. "They know what they're doing. And it's an ongoing effort, it'll go on for the next 20 years."

"The great thing is the depth map," Grannan says. "We have great 3D information, and some developer could write the indoor mapping app that leverages that information. Another one is facial recognition, because we get such precise information, or a 3D-printing app."

But Will It Sell?

Here's the thing with the L16: It's still pretty thick—closer in size to a stack of Lenovo Phabs than to a single slim smartphone. Serious photographers may also take offense at its non-removable battery, 128GB of fixed internal storage, and touchscreen-based manual controls. And then there's the price: $1,300 if you preorder it between now and the end of the month, and $1,700 thereafter. While Light plans to ship the L16 to its earliest preorders next summer, anyone who preordered the camera after mid-October will need to wait till the fall.

But Laroia and Grannan say the response to the camera has been "overwhelming," and that slimmer versions are in the works. The company has already struck a deal with Foxconn, whereby the Taiwanese manufacturing giant will produce this first run of L16 cameras in exchange for a licensing agreement to use the technology in smartphones. Grannan says you may even see a smartphone with Light's imaging technology in it by the end of next year.

"We didn't make the first product too aggressive," says Laroia. "The internal components can get slimmer. There's only so much risk you want to put in the first product."

Grannan says that for a smartphone, the design could be modified to eliminate the 150mm lenses, which take up the most internal space. In their place, Light could offer lenses wider than the 35mm optics—possibly an 18mm focal length. Those modules would face forward and take up less room.

"The DSLR quality, with that great low-light performance and shallow depth of field, the zoom up to 70mm, we can fit that in something the size of an iPhone 6 Plus," Grannan says. "It's a bigger cellphone, but it's definitely cellphone-size."