Perks & perils of computational photography

We’ve been slowly entering an exciting new era of “computational” photography, where software continues to overcome previous limitations of hardware. The iPhone XS is capable of running a trillion operations per photo, and while that’s extremely powerful, I’m also keenly aware that we know less and less about what’s actually going on when we capture an image.

A key part of the creative process and achieving one’s artistic vision is troubleshooting. In order to troubleshoot, one must understand what is actually happening and what is causing the problem.

With a traditional SLR camera, if my image was too bright, too dark, too soft, etc., I knew exactly what to tweak to get closer to my vision. Today, with cameras relying heavily on software, sometimes things happen that I just don’t understand. Perhaps the tones in the sky don’t look quite right, or a vertical pano isn’t as sharp as I’d like. The difference is, I don’t know WHY it doesn’t look the way I want it to, which means I don’t know what to tweak to fix it.

Of course, the upsides of computational photography far outweigh the downsides, and the software almost always helps me capture exactly what I want. Still, I’m curious how this conversation will develop over the next few years, and how Apple will explore new ways to facilitate artistic expression.
