Which is better: unblinking scientific testing, or judging the perceived quality of the image? How should we actually evaluate cameras?

You’d think that the quality and usefulness of devices made from electronics and software would be completely deterministic. In other words, you should be able to tell what a given product is like from a detailed description of it. A bit like how, if I describe something as an equilateral triangle, few would disagree that it has three corners with the same angle (60 degrees) at each one.

But as we all know, nothing could be further from the truth with cameras.

One example is the ARRI Alexa, a camera with about as high a reputation as you can get in Hollywood. It has a lower-resolution sensor (in terms of pixel count) than many of its competitors, and yet I have yet to meet anyone who disagrees that it makes lovely pictures.

So the question is, how do you assess a new camera? In reality, this is harder to do than it might seem.

Two approaches, at opposite extremes

At the extremes, there are two approaches. One is to hook the camera up to measuring equipment and determine exactly how it performs, scientifically. You can measure the signal-to-noise ratio (an essential factor determining the dynamic range); you can count the pixels; you can measure the sharpness of the image, and you can test the colour gamut. This type of testing is essential if, for example, a big broadcaster wants to set a policy for buying cameras (or for the quality of submissions from external production companies and freelancers). But it raises the question of whether, say, the compression bitrate or the number of pixels on the sensor are the right criteria for judging an image.
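To see why the signal-to-noise ratio bears on dynamic range, consider the common rough approximation that a sensor's dynamic range in stops is the base-2 logarithm of the ratio between the largest signal it can record (its full-well capacity) and its noise floor. The figures below are made-up illustrative values, not measurements of any real camera:

```python
import math

def dynamic_range_stops(full_well_electrons: float, noise_floor_electrons: float) -> float:
    """Rough dynamic range in stops: each stop is a doubling of light,
    so we take log2 of the ratio between the brightest recordable
    signal and the noise floor."""
    return math.log2(full_well_electrons / noise_floor_electrons)

# Hypothetical example: 30,000-electron full well, 2-electron noise floor
print(round(dynamic_range_stops(30000, 2), 1))  # → 13.9 (roughly a 14-stop sensor)
```

This is only a first-order sketch; real dynamic-range measurements account for things like shot noise and the criteria used to define a "usable" shadow signal, which is partly why published figures vary between testing methods.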

At the other end of the scale, you can give the camera to an experienced DOP, tell them to go away with it and shoot some footage, and get viewers to judge the results (although hopefully not via Vimeo or YouTube, both excellent services but ones whose compression will instantly invalidate any qualitative appraisal).

Which of these two approaches is best?

Neither, exclusively, of course.

But I have to say that I lean towards the latter. Not only are there plenty of people who don’t completely understand specifications (including professionals who make wonderful films), but specs don’t tell the whole story. And they might miss the point completely. That’s not to say that measurements are useless, just that they need to be viewed in the context of the perceived quality of the camera.

So in our reviews we will continue to emphasise the way a camera looks rather than what the specifications might suggest. This has several benefits.

Operating in the real world

First, it means that we’re operating in the real world, where what matters is how much viewers like the pictures. Of course it’s important to know how much bit-depth there is and other factors that might influence the choice of camera, or prove problematic for certain post production workflows if a camera falls short of expectations. But ultimately it’s whether or not the pictures are likeable that matters.
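The bit-depth point is simple arithmetic worth spelling out: each extra bit doubles the number of tonal levels available per channel, which is why lower bit-depths can become problematic once footage is pushed around in grading. A quick sketch:

```python
# Number of distinct tonal levels per channel at common recording bit-depths.
# Each additional bit doubles the count: levels = 2 ** bits.
for bits in (8, 10, 12):
    print(f"{bits}-bit: {2 ** bits} levels per channel")
# → 8-bit: 256, 10-bit: 1024, 12-bit: 4096
```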

Second, the way an image looks is often almost completely intangible. How can you describe the look of a Cooke lens in technical specifications? I don’t believe you can. It might be that there are just so many variables and disparate layers inside a modern camera that it makes an individual technical specification almost meaningless, assuming the camera is at least basically competent.

Lastly, movie making is an art. In the same way that you can’t measure the beauty of an oil painting with a ruler, sometimes you just have to accept that something pleases you, without needing a technical reason why.

What do you think? Let us know in the comments.