by Ming Thein



Kuala Lumpur skyline after rain. An example image for which there is no perfect output medium at present: web sizes we don’t even need to talk about; full resolution screens lack the tonal resolution to render the clouds transparently; print comes closest, but is ultimately a reflective medium and so lacks the dynamic range to represent the difference between the foreground trees in deep shadow and the light in the buildings.

Let’s take stock of where we are at the moment in terms of viewing options for images: there are basically still only two, digital and print. On the digital side, displays have been steadily increasing in resolution and information density – and to some extent size, too. We have 4K monitors in some laptops at 14″ and under, 8K in some televisions (with an enormous jump to 50″+), and the majority of devices sitting somewhere in the 2-4MP range between 12″ and 30″. There are also mobile devices with HD, QHD or even 4K (the recently announced Sony Z5) resolutions in sub-6″ screens: an absurdly huge range of pixel densities, everything from about 100PPI to 800+PPI. Clearly, preparing content for all of this is not going to be easy; viewing distance doesn’t necessarily have anything to do with perceived information density (say, pixels per degree of observed field of view), either. You can hold your mobile at a distance such that it subtends the same angle as a 27″ 5K iMac, but the iMac may still have double or more the information density – just count the pixels along the long axis. Or the converse might be true. As image makers, how do we manage this?
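The phone-versus-iMac comparison can be made concrete with some simple trigonometry; a rough sketch, where the specific densities and viewing distances are my own illustrative assumptions rather than measured figures:

```python
import math

def pixels_per_degree(ppi: float, distance_in: float) -> float:
    """Pixels falling within one degree of the visual field for a
    display of the given density viewed at the given distance (inches)."""
    # Linear size of screen subtending one degree at this distance:
    inches_per_degree = 2 * distance_in * math.tan(math.radians(0.5))
    return ppi * inches_per_degree

# Illustrative numbers: a QHD 5.5" phone (~534PPI) held at 10",
# versus a 27" 5K iMac (~218PPI) viewed at 24".
phone = pixels_per_degree(534, 10)
imac = pixels_per_degree(218, 24)
print(f"phone: {phone:.0f} px/deg, iMac: {imac:.0f} px/deg")
```

The angular densities can come out surprisingly close even when the absolute pixel counts differ wildly – which is exactly why matching subtended angle alone tells you nothing about how much information is actually being delivered.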

Before I attempt to answer that question, we need to consider the second display medium: print. This encompasses everything from my fine art Ultraprints and contact prints to billboards; they sit everywhere along the effective information density spectrum, from 5PPI to 720PPI+. Each subset of the medium has a digital analog: if 4″ drugstore prints are average smartphones and regular prints are normal computer displays, then Ultraprints are 4K on a smartphone (and actually look quite similar in terms of information density), and billboards are giant electronic signboards. Once again, we can perform the same experiment as with the phone and the 5K iMac: view two images so that they occupy the same angle of view, and the amount of information contained can be wildly different. A print in an average-quality photo book has nowhere near the same amount of information as an Ultraprint at the same size – 144PPI or less in four-colour offset with a relatively wide pixel mask, compared to 720PPI with eleven colours, sub-2-picolitre droplets and continuous dithering, is clearly going to look different. Yet ultimately they may well represent the same image.
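To put those two print densities on the same scale, a trivial bit of arithmetic (using the 144 and 720PPI figures above, and ignoring the separate contributions of ink count and dithering):

```python
coarse_ppi, fine_ppi = 144, 720   # average photo book vs Ultraprint
linear = fine_ppi / coarse_ppi    # ratio of pixels per inch
areal = linear ** 2               # ratio of pixels per square inch
print(f"{linear:.0f}x linear, {areal:.0f}x areal information density")
```

A 5x linear advantage is a 25x advantage in pixels per unit area – before the extra colours and finer dithering are even counted.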

On top of this, we need to take into account the way content is consumed: something created and presented with deliberation has to stand up to far higher scrutiny than casual in-your-face advertising. A print is going to be pored over for far longer than a billboard; you’re probably not going to notice anything amiss in the latter unless there are jarring typos or layout errors. And here we come to our first hypothesis, which should apply to all output images regardless of medium:

1. The more deliberate the intent and presentation, the higher the information density must be.

Information density takes two forms: spatial frequency/resolution, and tonality/colour. The former is what gives you the ability to resolve hair in a monochrome image; subtle variations in the latter are what create the impression of transparency in an image. Shifting tonality or colour in a plausible but unusual way creates emotion by evoking experience. When you have amounts of both that exceed the limits of our eye/brain combination to process, the result is indistinguishable from reality.

The problem here, of course, is that photographs are always going to be two dimensional; even three dimensional presentations, stereoscopic experiments and the like will always be approximations, simply because your vantage point is fixed. You can’t step to the left or move your head and peer around the tree in front of you. That said, this is really the foundation of the second hypothesis:

2. The limit is human vision: there is no point in presenting more information than we can process.

Up to this point, monitors and prints have been clearly lacking, simply because we can perceive their constituent subelements with little effort: go close to a monitor and the illusion of continuity breaks down into a regular rectangular pixel mask; go close to most prints and you see dots. Apple was the first company to actively market a challenge to these limits with its ‘retina’ displays; the premise was that a 326PPI display with a tight pixel mask (the dark separation grid between adjacent pixels) would, at typical viewing distances for a phone, exceed the limits of human vision and produce an effectively continuous image. Yes and no; look close and hard and the elements are still there. The problem is firstly that our eyes do not sample in a regular, linear way as cameras do. A person with perfect eyesight actually resolves closer to 1000PPI at minimum focusing distance; the ability to identify and distinguish individual elements is closer to 600-700PPI in practice (each element needs a contrasting element next to it to separate it from its neighbours). To produce a truly seamless, continuous viewing experience, you need to either exceed this barrier by a reasonable margin with a regular grid display, or use one that isn’t regular.
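As a sanity check on those figures: the classic ~1 arcminute acuity of 20/20 vision converts to a pixel density at a given distance as follows (the 4″ near-focus distance here is an assumption for illustration; the actual near point varies from person to person):

```python
import math

ARCMIN = math.radians(1 / 60)  # ~1 arcminute: standard 20/20 acuity

def acuity_ppi(distance_in: float) -> float:
    """Pixel density at which one pixel subtends one arcminute
    at the given viewing distance (inches)."""
    return 1 / (distance_in * math.tan(ARCMIN))

# At an assumed ~4" near focusing distance, a single pixel
# vanishes at roughly this density:
print(f"{acuity_ppi(4):.0f} PPI")
```

This lands in the mid-800s – the same ballpark as the figures above; better-than-20/20 eyesight and a closer near point push it toward 1000PPI, while the need for a contrasting neighbour (a line pair) to actually separate detail pulls the practical figure back down.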

For a very long time, capture technology has exceeded display technology; until very recently, by a large margin. We had 20+MP cameras and only 2MP displays; that means throwing away or downsampling 90% of the information captured. Of course, the Bayer interpolation process means it isn’t quite that bad – a truer approximation is probably something like 13-15MP of real information captured – but there’s still a clear disconnect. Now it’s 20-50MP for the majority of capture, and 4-14MP for output: a much better ratio. This downsampling for display has two consequences: firstly, oversampling produces a cleaner image; secondly, we don’t quite get all of the information. On top of that, if you look too closely, there are still gaps: each screen pixel’s RGB elements only light fully when that area of the image is white; dark colours appear somewhat murky because tones are not truly continuous – there’s a perceptual gap created by the dimly lit pixels. Clearly, there’s still benefit to seeing an image presented in a different, and hopefully more complete, way: print.
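The capture-to-display ratios above are worth making explicit; a trivial sketch (the 14MP ‘effective Bayer’ and 36MP capture figures are illustrative picks from the ranges mentioned, not specific cameras):

```python
def retained_fraction(capture_mp: float, display_mp: float) -> float:
    """Fraction of captured pixels that survive downsampling to display."""
    return display_mp / capture_mp

# Then: 20MP capture squeezed onto a 2MP display
print(f"{retained_fraction(20, 2):.0%} kept")
# Bayer demosaicing lowers effective capture, say ~14MP of real detail:
print(f"{retained_fraction(14, 2):.0%} kept")
# Now: a 36MP capture onto a 14MP (5K-class) display
print(f"{retained_fraction(36, 14):.0%} kept")
```

Going from keeping a tenth of the information to keeping over a third is precisely the improved ratio described above – the oversampling margin shrinks, but so does the amount discarded.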

Printing does not display information with a regular array of RGB elements; instead, each bit of physical location, luminance and colour information (a ‘pixel’) is translated by the printer’s software and hardware into a series of ink dots; many ink dots are used to represent one pixel. The more inks the printer has, the wider the gamut of colours it can represent – but the tradeoff is that it has to lay down more ink to represent some of those colours, resulting in spread (dot gain). Not only is the spread of an individual ink-dense dot a concern; the accuracy with which these ink dots are overlaid matters, too: you may land up with a lot more spread than you think, simply because the printer is not laying ink precisely*.

*There are any number of causes for this: imprecise positioning, clogged heads spraying ink at an angle, voltage spikes affecting the piezoelectric mechanisms that activate ink spraying, old or partially dried ink, changes in humidity and temperature, paper bleed characteristics or fibre types – remember, we are talking about very, very small tolerances here. 720PPI in an Ultraprint means 1/720″, or about 35 micrometres, between adjacent pixels, with 11 ink colours overlaid on that spot, and droplet sizes of under two picolitres – that’s 0.000000000002 litres. Fluids don’t even behave as you’d expect at those scales.
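The scales in that footnote are easy to verify – a quick back-of-envelope check, pure unit conversion with no assumptions beyond treating the droplet as a sphere:

```python
import math

# Dot pitch at 720PPI, in micrometres (1 inch = 25.4 mm):
pitch_um = 25.4 / 720 * 1000
print(f"{pitch_um:.1f} um between adjacent pixels")

# A 2-picolitre droplet in litres, and its equivalent
# spherical diameter in micrometres:
vol_l = 2e-12            # 2 pL = 2e-12 L
vol_m3 = vol_l * 1e-3    # 1 L = 1e-3 m^3
d_um = 2 * (3 * vol_m3 / (4 * math.pi)) ** (1 / 3) * 1e6
print(f"droplet diameter ~{d_um:.1f} um")
```

A ~16 micrometre droplet landing on a ~35 micrometre grid: there is very little margin for any of the positioning errors listed above before dots start bleeding into their neighbours.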

The upshot of all of this is an irregular and hopefully near-continuous matrix of ink dots, in which information from one pixel location blends seamlessly into the next as a consequence of the dithering/printing process. Information is now perceptually continuous; there are no pixel mask boundaries or hard-defined elements as with a monitor. And so far, it’s still possible to represent far more information in a print – giclée Ultraprint or optical contact print – than digitally. This represents reality far more accurately: the world is continuous, not made up of infinitesimally small blocks of Lego.

But it may well be only a matter of time before displays catch up. Prototype monitors with 700+PPI have already been shown, and smaller panels are now available (that is probably the most widely accessible approximation of an Ultraprint I can suggest – look at a critically sharp image that has been perfectly sized, processed and sharpened on a 4K phone screen, and imagine that as a print). Now imagine one of those at, say, 30″. The bottleneck is no longer production of the output hardware: it’s the signal to drive it. We’re looking at the 100MP range, and a file of this size is enormous; the video processing capabilities aren’t quite there yet, not to mention capture. But at the rate computing power has been advancing, it’s probably not that far off. And visually, the diminishing returns get ever steeper. At 4″ viewing distance, I can clearly make out the pixel grid of my 27″ Thunderbolt Display (109PPI); an Ultraprint still looks continuous. At my usual 2ft working distance from the monitor, I can tell some elements are resolution-limited, like text, but images still look acceptable. A 5K iMac adds a layer of transparency, but brings other challenges, as we’ll soon see.
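To see why the signal becomes the bottleneck, consider what a 30″ panel demands at Ultraprint-class densities; a minimal sketch, assuming a 16:9 aspect ratio (the densities chosen are illustrative):

```python
import math

def panel_megapixels(diag_in: float, ppi: float, aspect=(16, 9)) -> float:
    """Megapixel count of a panel with the given diagonal (inches),
    pixel density, and aspect ratio."""
    w, h = aspect
    d = math.hypot(w, h)
    width_in = diag_in * w / d
    height_in = diag_in * h / d
    return (width_in * ppi) * (height_in * ppi) / 1e6

for ppi in (500, 600, 700):
    print(f'30" at {ppi}PPI -> {panel_megapixels(30, ppi):.0f}MP')
```

Even at the conservative end of the range, the numbers land around 100MP and climb steeply from there – every frame of which has to be generated, stored and pushed down a cable.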

We are now in a transition period for output: even though output information density is increasing to more closely match input/capture density, adoption isn’t going to happen overnight. I see this most acutely in my day job: I’ve got to make content that holds up for clients and readers on high resolution retina+ displays, and at the same time still ‘works’ visually for those without them. For the moment, I can’t retouch on retina because I simply cannot see the dots that my clients with regular monitors can; postprocessing is also challenging because it’s difficult to determine how much sharpening is enough. Too much, and you land up with coarse haloes and a gritty micro-texture on regular devices; too little, and everything looks soft – even though the increased information density looks great on retina devices. It means retouching on non-retina, then proofing on an iPad just to check that everything holds up at higher densities.

Here’s where I gaze into the crystal ball: when we had fewer pixels, they all had to count. When we reach the point where we have more than we can see, they no longer have to be perfect, just plausible. More resolution brings additional possibilities: full translation of ideas that require transparency; enough plausibility to make your audience suspend disbelief in scenes that cannot be real or that appear surreal; even larger sizes. Images that don’t currently ‘work’ at small web sizes because of insufficient information density (e.g. subtle texture in waves, or water, or leaves) will start to. Until that point, the medium will have a huge influence on the impact and translation of an image: we have to shoot with the end output in mind, or risk a weak image. Unless a specific assignment requires otherwise, I create images that I know will work as prints; most will also work at web sizes, but not as well. I think we will eventually be liberated from this. At that point resolution probably won’t matter, other than for scaling of physical size – and even then to a much lesser degree, because of viewing distances.

If it sounds like I’m predicting the death of printing, nothing could be further from the truth. Digital viewing always has an implied transience that never gives an image weight or encourages further contemplation; this is partially due to the ease of creation and partially due to our psychological conditioning in the way we consume content. Perhaps those ‘forum experts’ who are happy to disparage a print they have never seen, but gush over the 5K iMac, might eventually come around – or perhaps not, since that is the way of the internet. For most people, the easier it is to visualise the purpose of the Ultraprint and how it appears in person, the easier it is to appreciate – especially the difference in impact between seeing an arbitrary subsampling of the information (i.e. a reduced-resolution monitor graphic) and all of it (at least to the limits of capture). My biggest challenge till now has been to explain that viewing experience other than with prints, in person – which is both good and bad. Regardless of medium, I think there will always be a place for this kind of immersive experience, whether it is delivered via a 100MP 30″ LCD of the future or today’s Ultraprint. I just hope our lenses can cope! MT

__________________

Be inspired to take your photography further: Masterclass Chicago (27 Sep-2 Oct) and Masterclass Tokyo (9-14 Nov) now open for booking!

__________________

Visit the Teaching Store to up your photographic game – including workshop and Photoshop Workflow videos and the customized Email School of Photography; or go mobile with the Photography Compendium for iPad. You can also get your gear from B&H and Amazon. Prices are the same as normal, however a small portion of your purchase value is referred back to me. Thanks!

Don’t forget to like us on Facebook and join the reader Flickr group!

Images and content copyright Ming Thein | mingthein.com 2012 onwards. All rights reserved