Shallow depth-of-field: a software story

The other thing a two-lens system permits is shallow depth-of-field. This is what gives images that “pro look”: a beautifully blurred background and foreground while the subject stays in focus.

This effect is usually hard to achieve because it requires a wide aperture, a relatively long focal length and a reasonably short distance to the subject. Basically, cameras come with different sensor sizes: from an iPhone’s 4.8 x 3.6mm to a full-frame DSLR’s 36 x 24mm. The larger the sensor, the better the image quality (the difference is particularly noticeable when it’s dark) and the longer the focal length for a given angle of view. Hence, because shallow depth-of-field requires a long focal length, it used to be reserved for the large sensors of DSLRs and mirrorless cameras. If you want the full explanation and to deprive yourself of sleep for the next 24 hours, you should read this:
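In the meantime, here’s a rough way to put numbers on the sensor-size argument. This is a minimal sketch using the standard thin-lens depth-of-field approximation, with the usual “sensor diagonal / 1500” rule of thumb for the circle of confusion; the focal lengths and f/2.8 aperture are approximations of a 56mm-equivalent lens on each format, not exact specs:

```python
import math

def depth_of_field_mm(focal_mm, f_number, subject_dist_mm, coc_mm):
    """Approximate total depth of field using the thin-lens formulas."""
    # Hyperfocal distance: beyond this, everything is acceptably sharp.
    H = focal_mm ** 2 / (f_number * coc_mm) + focal_mm
    near = subject_dist_mm * (H - focal_mm) / (H + subject_dist_mm - 2 * focal_mm)
    far = subject_dist_mm * (H - focal_mm) / (H - subject_dist_mm)
    return far - near if far > 0 else float("inf")

def coc_mm(sensor_w_mm, sensor_h_mm):
    # Common rule of thumb: circle of confusion = sensor diagonal / 1500.
    return math.hypot(sensor_w_mm, sensor_h_mm) / 1500

# Same framing (~56mm-equivalent), same f/2.8 aperture, subject at 2 m.
full_frame = depth_of_field_mm(56.0, 2.8, 2000, coc_mm(36.0, 24.0))
iphone = depth_of_field_mm(6.6, 2.8, 2000, coc_mm(4.8, 3.6))

print(f"Full-frame 56mm f/2.8:    ~{full_frame / 10:.0f} cm in focus")
print(f"iPhone-class 6.6mm f/2.8: ~{iphone / 10:.0f} cm in focus")
```

At the same framing, aperture and subject distance, the small sensor keeps more than ten times as much of the scene in focus, which is exactly why it can’t blur the background optically.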

But that was until now, because Apple says it can do the same with a small, iPhone-class sensor. To be fair, they’re not the first to say so: HTC said so when it presented the One M8, and Huawei said so when it introduced the P9. All these solutions rely on the same principle: because the two lenses are separated by a few millimetres, they act essentially like a pair of human eyes and can “see” in 3D (if you want the full detail, Wikipedia can help once again).
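The geometry behind this is the classic pinhole stereo relation: a point’s depth equals the focal length times the lens separation (the “baseline”) divided by how far the point shifts between the two images (the “disparity”). A minimal sketch, where the 10mm baseline and the pixel focal length are made-up illustrative values rather than Apple’s actual calibration:

```python
def depth_from_disparity(focal_px, baseline_mm, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d.

    focal_px    : focal length expressed in pixels
    baseline_mm : distance between the two lenses
    disparity_px: horizontal shift of the same point between the two images
    """
    if disparity_px <= 0:
        return float("inf")  # no measurable shift -> effectively at infinity
    return focal_px * baseline_mm / disparity_px

# Closer objects shift more between the two views, so they come out nearer.
for d in (40, 10, 2):
    print(f"disparity {d:>2} px -> depth ~{depth_from_disparity(2800, 10, d):>6.0f} mm")
```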

Because the phone can “see” in 3D, it can produce what Apple pompously calls a “depth map”: a representation of the image split into different levels of depth. The software can therefore keep one part of the image in focus while knowing how much blur to apply everywhere else (the farther a point sits from the focal plane, the blurrier it gets).
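To make that concrete, here’s a toy version of the idea (not Apple’s actual pipeline): quantise the depth map into a handful of layers and blur each layer in proportion to its distance from the focal plane:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fake_shallow_dof(image, depth, focus_depth, strength=3.0, layers=8):
    """Blur each depth layer in proportion to its distance from focus_depth.

    image : (H, W) float array (grayscale, for simplicity)
    depth : (H, W) float array, on the same scale as focus_depth
    """
    result = np.zeros_like(image)
    edges = np.linspace(depth.min(), depth.max(), layers + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (depth >= lo) & (depth <= hi)
        mid = (lo + hi) / 2
        sigma = strength * abs(mid - focus_depth)  # farther from focus -> blurrier
        blurred = gaussian_filter(image, sigma) if sigma > 0 else image
        result[mask] = blurred[mask]
    return result

# Tiny synthetic example: a noisy "scene" whose depth increases left to right.
img = np.random.rand(64, 64)
depth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
out = fake_shallow_dof(img, depth, focus_depth=0.2)
```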

The quality of the background blur is called bokeh (pronounced “boka” by Sir Phil Schiller himself, so I assume that’s how we say it). Anyway, the depth map should play a big role in this bokeh thing. While traditional software-generated shallow depth-of-field effects produce a rather flat, artificial blur (because they lack these different levels of depth), the dual-lens-plus-software approach should yield much more natural results.

But even the most recent iteration of this technology, namely the Huawei P9’s, doesn’t look great. Blurred areas seem artificial. See the example below: the white highlights “pop out” and lack any detail, there is no gradual progression in the background blur, the white spots are simply computer-blurred (you can’t make out the individual dots any more), and the list could go on.

Huawei P9 on the left and Leica Q on the right. Image courtesy of DigitalRev.
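The point about the white spots is easy to demonstrate. A real lens spreads an out-of-focus highlight into a disc shaped like its aperture, with a crisp edge, whereas naive software blur is typically Gaussian and smears the highlight into a faint haze. A quick illustration, again a sketch rather than anything either phone actually does:

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

def disc_kernel(radius):
    """Circular averaging kernel, mimicking a round lens aperture."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    k = (x**2 + y**2 <= radius**2).astype(float)
    return k / k.sum()

# A dark frame with one bright point, like a distant street light.
scene = np.zeros((41, 41))
scene[20, 20] = 1.0

optical = convolve(scene, disc_kernel(8))    # hard-edged bright disc
software = gaussian_filter(scene, sigma=4)   # soft blob that fades away

# The disc stays constant then drops to zero; the Gaussian just fades out.
print("disc blur profile:    ", np.round(optical[20, 10:31:2], 4))
print("gaussian blur profile:", np.round(software[20, 10:31:2], 4))
```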

Of course, to the untrained eye it doesn’t matter, but any photographer can tell you it makes a difference and, in the end, the human eye can sense that something is wrong. Can Apple change that? Well, we only have Apple’s samples to judge by, but they certainly look great. Nevertheless, it’s hard to reach a conclusion, since they have obviously been taken in the best conditions you can possibly imagine (what, you don’t always carry a big studio flash around?).

Apple’s bokeh sample

Here, the white spots render much better, and it’s possible that Apple has built a much better engine to harness all the information contained in the “depth map”. What’s more, we’re given to understand that Apple will use machine learning to make the bokeh more pleasing. Hence, it’s entirely possible the software will adapt to whatever is in the background to render it as pleasingly as possible. If that’s the case, the iPhone 7 Plus might be the first smartphone that can achieve a truly beautiful bokeh.