
Field dispatch, Berlin, December 2019: I normally don’t write things on the road, both because I prefer to see where I’m going and because I find observations on anything need some sitting time; think of it as a curation of thoughts. But I’ve been slapped upside the head a little bit on this trip. Firstly, it isn’t a photographic one – it’s a spend-time-with-the-family one; even so, I’ve been paring down gear more and more of late, to the point that a Nikon Z7 and two lenses is about the most I’ll carry. In this case, the 24-70/4 S and the 85/1.8 S. Both are excellent, but I find myself hardly using the camera at all, and when I do, the 24-70 is left feeling lonely. Why? Well, I picked up the iPhone 11 Pro shortly before I left.

Regular readers will know that I am not easily impressed, especially given how much hardware I’ve had the privilege of using over the years, and the number of times said hardware has done unimpressive things in the field (or thrown unexpected results, gotten in the way, or worse, failed entirely). As good as phone cameras have become in the last few years, the smartphone has always been a communications device – and later, a means to manage business when not at one’s computer. It is increasingly a portable field computer more than a means to talk to somebody. My previous phone, the iPhone XS Max, had an impressive and genuinely useful primary (26mm-e) camera, and a useful-in-good-light tele (56mm-e) camera. Both stabilised, both small-sensor, with minimal computational photography.

The iPhone 11 Pro, on the other hand, has a rather sloppy-looking design and placement of the camera modules (especially for Apple), with three of them now in a triangular pattern*: a 13mm-e non-stabilised very-small-sensor thing with fixed focus; a faster-lensed 26mm-e thing with the largest sensor, stabilisation and the full computational feature set (stitching, stacking/oversampling for noise reduction and detail retention, etc.); and a 56mm-e with a slightly smaller sensor but otherwise the same feature set as the 26mm-e module. So far, nothing new or special; multi-camera and multi-shot computational abilities have been seen before in other phones.

*I suspect this might have something to do with the array required for spatial calculations/depth of field effects, to maximise the information collected by each camera module.

What is different – and surprising – is the implementation. Usually, there’s an abrupt switch or jump in quality between computational (read: stacked) and non-computational (single-shot) modes. You usually have to manually deploy the computational modes (“night mode”), and there are limitations in the way they must be used. The iPhone 11 Pro is the first phone – nay, camera – that’s done this effectively seamlessly: when light levels fall below what the camera deems acceptable for a single-shot result, it starts stacking. You literally just compose and press the button; it’s possible to adjust focus point or exposure, but most of the time the camera gets both of those spot on. I suspect this is less a consequence of on-sensor PDAF (which it has) than of subject recognition algorithms. Furthermore, with long exposures – an effective 10s is the highest I’ve seen, but most of the time 3s is determined sufficient – all of this is done handheld, with no apparent shake but very nice motion blur of subjects that should be moving (e.g. cars or people transiting a scene). All in all, it feels as liberating as moving from the APSC CCD D200 to the FF CMOS D3 back in the day.

I believe it works as follows: the camera shoots a lot of frames during that “total exposure time”, then aggregates them both for noise reduction and to mask and separate out apparent motion that shouldn’t be there (i.e. camera shake). There is almost certainly also data taken from the phone’s accelerometers to determine what in the scene is intentional motion blur, and what is caused by phone motion. Doing this requires advanced pattern recognition of a) blur, b) subjects and c) sensor noise, plus a huge amount of computational power: there’s only the slightest appreciable lag between releasing the shutter/completing the exposure and seeing a processed result. It may well be doing the calculations in real time.
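My speculation above can be sketched in a few lines of numpy. To be clear, this is a toy model, not Apple’s pipeline – real alignment is sub-pixel and content-weighted – but it shows the core idea: undo the per-frame camera shake (here, known integer shifts standing in for what the accelerometers would report), then average, so static detail survives while random noise falls roughly as the square root of the frame count:

```python
import numpy as np

def stack_frames(frames, measured_shifts):
    """Undo each frame's camera-shake displacement, then average.
    Static detail is preserved; independent noise averages away."""
    aligned = [np.roll(f, (-dy, -dx), axis=(0, 1))
               for f, (dy, dx) in zip(frames, measured_shifts)]
    return np.mean(aligned, axis=0)

rng = np.random.default_rng(0)
scene = rng.uniform(100.0, 150.0, size=(64, 64))       # static "true" scene
shifts = [tuple(s) for s in rng.integers(-2, 3, size=(16, 2))]

# Each short sub-exposure: the scene displaced by hand shake, plus sensor noise.
frames = [np.roll(scene, s, axis=(0, 1)) + rng.normal(0.0, 10.0, scene.shape)
          for s in shifts]

stacked = stack_frames(frames, shifts)
single_frame_noise = np.std(frames[0] - np.roll(scene, shifts[0], axis=(0, 1)))
stacked_noise = np.std(stacked - scene)
# stacked_noise should be roughly single_frame_noise / sqrt(16)
```

With 16 frames, residual noise drops to around a quarter of a single frame’s – exactly the “larger sensor” behaviour described above.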

The iPhone 11 Pro seems to be doing most of this for daylight images, too: it isn’t clear how many images the camera is actually shooting to make a single one, given that HDR output in daylight requires multiple exposures as well. The upshot is that it’s very difficult to clip any highlights, and exposure adjustment isn’t required anywhere near as often as for previous cameraphones. Editing post-capture in the new editor (which has controls a lot more like a conventional photo editor and, finally, separation of the temperature and tint axes of white balance) shows a pliancy and nonlinearity to the files that feels like they came from a much larger sensor. I suspect they’d even print pretty well up to reasonable sizes; perhaps not as large as a FF 12MP DSLR, but no worse than M4/3. And you can always get significantly more resolution by doing a pano sweep.
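A toy illustration of why multiple exposures make highlights so hard to clip (again my own sketch, not Apple’s actual algorithm): wherever the normal exposure has clipped to white, substitute appropriately rescaled detail from a frame shot one stop under:

```python
import numpy as np

def merge_two_exposures(short_exp, long_exp, clip_level=0.95):
    """Toy two-frame HDR merge: use the long (normal) exposure everywhere
    except where it clipped; there, take highlight detail from the short
    exposure, scaled up one stop (x2) to match brightness."""
    merged = long_exp.astype(float).copy()
    clipped = long_exp >= clip_level
    merged[clipped] = short_exp[clipped] * 2.0
    return merged

true_radiance = np.linspace(0.0, 2.0, 100)     # scene exceeds sensor range
long_exp = np.clip(true_radiance, 0.0, 1.0)    # normal exposure clips at 1.0
short_exp = true_radiance / 2.0                # one stop under: never clips
merged = merge_two_exposures(short_exp, long_exp)
# merged recovers the gradient that long_exp flattened to pure white
```

The merged result carries tonal information above the single frame’s saturation point, which is precisely the pliancy in the highlights the files exhibit.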

Pixel-level results aren’t the mush you’d expect, either: there’s decent bite even at very low light levels, though they’re not going to match a large sensor. The real question is: how large does ‘large’ have to be to be better?

My guess is that, depending on which camera module (the three have differently-sized sensors), we’re talking between 2/3″-1″ for the ultrawide, 1″-M4/3 for the tele, and M4/3 to APSC for the wide. I realise this is a controversial opinion, but bear with me. The largest sensor is a 1/2.55″ type, which is about 7x5mm, or 35mm². APSC is 24x16mm, or 384mm². You’d only have to stack 11 images (best case) to provide equivalent light collection; and remember, there are some advantages to a perfectly matched lens system, too – which I’m sure the iPhone comes quite close to. I’m also pretty sure that during long exposures, the iPhone must be taking more than 11 images – there’s no way each frame in a 3s stack is that long, given the limitations of stabilisers and hand holding. Read out the sensor fast enough, throw enough computing power and coding cleverness at subject recognition and frame combination, and it’s easy to see how we’ve effectively got a larger sensor in basically every situation. It’s this, and the seamlessness of the integration, that’s really impressive.
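The back-of-envelope arithmetic is easy to verify, using the sensor dimensions quoted above; the 240fps readout rate in the second calculation is purely a hypothetical figure for scale:

```python
# Best-case equivalence: total light gathered scales with sensor area x frames.
# Dimensions as quoted above: 1/2.55" type ~7x5mm; APSC = 24x16mm.
phone_area_mm2 = 7 * 5        # 35 mm^2
apsc_area_mm2 = 24 * 16       # 384 mm^2

frames_for_apsc = apsc_area_mm2 / phone_area_mm2
print(round(frames_for_apsc))   # ~11 frames to match APSC light collection

# A 3s "exposure" assembled from a hypothetical 240fps readout would have
# far more raw material than that:
print(3 * 240)                  # 720 sub-frames available
```

Even with generous losses to alignment and rejection of blurred frames, the margin over the 11-frame break-even point is enormous.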

It’s impressive to the point that I find myself using either the phone or the Z7, and not really anything in between – because the shooting envelope of anything else is worse. A conventional camera is a very linear device: if light levels fall, you either need more sensor area, more aperture or more exposure; it can’t read out at 240fps (or higher?) and then bring the computational power to chunk through all of that data. Remember, the new iPhone processors have been benchmarked at almost laptop grade; this is in line with my experience of forcing LR Mobile to work with the Z7’s raw files, and the relative speeds compared to my 2018 MacBook Pro. (It’ll also handle a 100MP Hasselblad file just fine.) If the phone is doing nothing else but parsing a 12MP JPEG or HEIF file, there’s definitely a lot of power to spare.

But, as with everything – it isn’t all perfect. There are still some annoying limitations, a few of which are to do with software, some with hardware. The first of them, for me, is still the lack of a two-stage shutter button – as fast as AF is, and as nice to use as tap-to-focus is, the finger dance required to engage AE/AF lock (if you’re waiting for a subject) is annoying. Just hitting the shutter and trusting the camera is not as annoying as it used to be, as metering is better at not getting thrown off by small, bright subjects (probably pattern recognition again) and AF is fast enough to be effectively instantaneous. The expanded viewing area outside the current camera module’s frame (visible in wide and tele modes) is really distracting, and confusing if any of your other cameras have an information overlay you’ve learned to see through; here, what you see beyond the frame lines won’t be captured.

The ultrawide camera lacks stabilisation and AF, which means focus is optimised for one (middle-ish) distance and hyperfocal use; it isn’t as sharp as the other two cameras, nor can you do long exposures at decent quality (which is one of the more interesting uses of an ultrawide). The implementation of ‘zoom’ handoff between the cameras also remains poor: this is one of the few situations in which you can see quality noticeably drop. I suspect it’s because an intermediate zoom level crops from the next-longest lens rather than trying to combine information from the wider and longer ones. I wish the tele was an 85mm-e, or there was perhaps an additional 120mm-e lens, but there are undoubtedly significant physical constraints there, and it’d need a sensor at least of the quality of the 56mm-e module to be useful. Lastly – and this criticism can be levelled at any new or high technology – it’s really expensive as a camera; but not so much if you consider you also get a phone and computer thrown in. Even then, remember the old adage about technology: small, cheap, good – choose any two.

What I find really interesting, though: there’s no way phone camera technology is going to get any worse. Not only is the compact dead, but things higher up the food chain also have their days numbered. I’d go so far as to argue that the shooting envelope of the iPhone 11 Pro is greater than that of the XF10 or GR3, even if peak IQ isn’t as high under ideal conditions. Remember: interesting stuff doesn’t tend to happen under ideal conditions most of the time…and we’re back again to the best camera being the one you always have with you. We now have not just a truly pocketable visual scrapbook, but one which is transparent enough that we don’t have to use imagination or make excuses for it. And that, I think, deserves our support. MT

Images shot with an iPhone 11 Pro, mostly out of camera JPG with watermarking in PS. Any adjustments made were using the built in editor.

__________________

Prints from this series are available on request here


Images and content copyright Ming Thein | mingthein.com 2012 onwards unless otherwise stated. All rights reserved