THE SIX POSTULATES OF EQUIVALENCE

PERSPECTIVE

Perspective is how objects appear in relation to other objects, and the effect it can have on an image is dramatically demonstrated with these examples. For a given scene and framing, perspective is a function only of the position the photo was taken from. A good way to think of perspective is to consider two objects, one 10 ft from the camera, the other 30 ft from the camera. If both objects are in the frame, with the subject being the closer object, and we shoot at 50mm from 10 ft away, then the farther object is three times as far away as the subject. If, however, we step back another 10 ft and use 100mm so that the subject is framed the same, then, if the farther object is even still in the frame, the subject will be 20 ft away and the other object 40 ft away -- only twice as far. Conversely, if we get twice as close and use 25mm for the same framing of the subject, the subject is now 5 ft away and the other object 25 ft away -- five times as far. Not only does the subject-camera distance change the perspective by changing the relative distances of objects within the frame, it also changes, in a similar fashion, how widely separated they are in the frame. In fact, when we shoot from farther back, we will often find that much of what was in the frame at the closer position is now outside the frame (the tree photos here are an excellent example of this). Inasmuch as the scene as a whole matters, rather than simply the subject itself, perspective can be one of the most striking elements of a photograph.

FRAMING

For a given perspective, the framing can be thought of as the whole of the captured scene, and is synonymous with the FOV (field of view), which is a combination of the horizontal and vertical AOV (angle of view). Unless otherwise specified, the term "AOV" refers to the diagonal AOV.
The distinction between AOV and FOV need not be made when systems share the same aspect ratio, but the greater the difference in aspect ratios, the more important the distinction between the terms. In addition, it is important to note that the focal length (and f-ratio) marked on a lens is for infinity focus (magnification, m, equal to zero). As the magnification increases (subject-camera distance decreases), both the effective focal length and the effective f-ratio increase in the same proportion, which is an especially important point for macro, and near-macro, photography, and is discussed further down. We can compute the horizontal, vertical, and diagonal AOVs for infinity focus with the following formula:

AOV = 2 · tan⁻¹ [ s / (2 · FL) ]

where AOV = angle of view (degrees)

s = sensor dimension (mm)

FL = focal length (mm)

For example, the diagonal, horizontal, and vertical AOV for infinity focus (m=0) on 35mm FF at 50mm is:

Diagonal AOV for 50mm on 35mm FF = 2 · tan⁻¹ [43.3mm / (2 · 50mm)] ~ 47°

Horizontal AOV for 50mm on 35mm FF = 2 · tan⁻¹ [36mm / (2 · 50mm)] ~ 40°

Vertical AOV for 50mm on 35mm FF = 2 · tan⁻¹ [24mm / (2 · 50mm)] ~ 27°

Solving the AOV formula for focal length, we have:

FL = s / [ 2 · tan (AOV / 2) ]

Let's now compute the focal length for 35mm FF, 1.5x, 1.6x, and 4/3 for a diagonal AOV of 47° at infinity (m=0):

FL for FF = 43.3mm / [ 2 · tan (47° / 2) ] ~ 50mm

FL for 1.5x = 28.4mm / [ 2 · tan (47 ° / 2) ] ~ 33mm

FL for 1.6x = 26.7mm / [ 2 · tan (47 ° / 2) ] ~ 31mm

FL for 4/3 = 21.6mm / [ 2 · tan (47° / 2) ] ~ 25mm

Note that these focal lengths are all proportional to the sensor ratio:

50mm / 1.5 ~ 33mm

50mm / 1.6 ~ 31mm

50mm / 2 ~ 25mm

Now we'll repeat for a horizontal AOV of 40° at infinity (m=0):

FL for FF = 36mm / [ 2 · tan (40° / 2) ] ~ 50mm

FL for 1.5x = 23.7mm / [ 2 · tan (40 ° / 2) ] ~ 33mm

FL for 1.6x = 22.2mm / [ 2 · tan (40 ° / 2) ] ~ 31mm

FL for 4/3 = 17.3mm / [ 2 · tan (40° / 2) ] ~ 24mm

Once again, we see these are proportional to the sensor ratio:

50mm / 1.5 ~ 33mm

50mm / 1.6 ~ 31mm

50mm / 2.08 ~ 24mm

And for a vertical AOV of 27° at infinity (m=0):

FL for FF = 24mm / [ 2 · tan (27° / 2) ] ~ 50mm

FL for 1.5x = 15.7mm / [ 2 · tan (27 ° / 2) ] ~ 33mm

FL for 1.6x = 14.8mm / [ 2 · tan (27 ° / 2) ] ~ 31mm

FL for 4/3 = 13mm / [ 2 · tan (27° / 2) ] ~ 27mm

And, again, these focal lengths are proportional to the sensor ratio:

50mm / 1.5 ~ 33mm

50mm / 1.6 ~ 31mm

50mm / 1.85 ~ 27mm

The effective focal length (EFL) of the lens for a subject at a distance d (mm) from the aperture is given by:

EFL = FL · (1 + m / p)

where m = image magnification (ratio of the height of the image on the sensor to the height of the actual object)

p = pupil magnification (the ratio of the diameter of the exit pupil to the diameter of the entrance pupil)

Symmetric lenses have equal entrance pupil and exit pupil diameters, so p=1 for a symmetric lens and we can disregard it. Normal lenses tend to be closer to symmetric designs. As a general rule, longer lenses tend to have larger entrance pupils than exit pupils, thus p<1 (progressively smaller the longer the lens), and wider lenses (especially retrofocal designs) are the opposite, with p>1. The following table demonstrates the effect of focus distance on the EFL of a symmetric 50mm lens (p=1):

Magnification          EFL for a symmetric 50mm lens
1 : ∞    (m = 0)       50mm
1 : 20   (m = 0.05)    52.5mm
1 : 10   (m = 0.1)     55mm
1 : 5    (m = 0.2)     60mm
1 : 3    (m = 0.33)    67mm
1 : 2    (m = 0.5)     75mm
1 : 1    (m = 1)       100mm
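The AOV, focal length, and EFL formulas above are easy to script; here is a minimal Python sketch (the function names are my own):

```python
import math

def aov_deg(sensor_dim_mm, fl_mm):
    """Angle of view (degrees) at infinity focus: AOV = 2 · atan(s / (2 · FL))."""
    return 2 * math.degrees(math.atan(sensor_dim_mm / (2 * fl_mm)))

def focal_length_mm(sensor_dim_mm, aov):
    """Focal length for a desired AOV (degrees): FL = s / (2 · tan(AOV / 2))."""
    return sensor_dim_mm / (2 * math.tan(math.radians(aov) / 2))

def efl_mm(fl_mm, m, p=1.0):
    """Effective focal length: EFL = FL · (1 + m / p); p = 1 for a symmetric lens."""
    return fl_mm * (1 + m / p)

# Diagonal AOV of 50mm on 35mm FF (43.3mm diagonal) -- about 47 degrees:
print(round(aov_deg(43.3, 50)))            # 47
# Focal length on 4/3 (21.6mm diagonal) for that same 47-degree AOV:
print(round(focal_length_mm(21.6, 47)))    # 25
# A symmetric 50mm lens at 1:1 (m = 1) behaves as a 100mm lens:
print(efl_mm(50, 1))                       # 100.0
```

Plugging in the other sensor dimensions from the worked examples reproduces the rest of the table.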

A useful relationship between focal length, sensor size, subject-to-aperture distance, and the height or width of the focal plane in the photo is:

EFL / d = s / (s + h)

where all variables below are given in mm (1 m = 1000mm, 1 ft = 304.8mm):

EFL = effective focal length

s = sensor dimension (the sensor's short dimension for landscape orientation, its long dimension for portrait orientation -- given in the tables just a bit further down)

d = distance to subject

h = height of frame

For low magnifications, the formula reduces to: FL / d ≈ s / h

For example, let's say we have a landscape-oriented photo of a model who is 5' 8" (1727mm) tall and takes up 2/3 of the frame from bottom to top, and we wish to know the model's distance from the camera if the photo was taken on FF with an 85mm lens. The calculation is as follows: 85 / d = 24 / (1727 / ⅔) → d = 9175mm ≈ 30 ft.

Listed below are tables of common ERs (equivalence ratios -- crop factors) in relation to 35mm FF for images using the same AOV (see here for a more complete list). When given in stops, the ER is rounded to the nearest 1/3 stop. The reason that 35mm FF (24mm x 36mm) is chosen as the standard is its popularity in the days of film and the fact that more lenses are made for this particular format, which many of the smaller-sensor DSLRs also use; however, we can use any format as a reference. Due to different aspect ratios, when cropping to the dimensions of the more square sensor, we use the ratio of the shorter sensor dimensions to compute the ER, and when cropping to the dimensions of the more elongated sensor, we use the ratio of the longer sensor dimensions. In the case of 3:2 being cropped to 4:3, or vice versa, this results in less than a 1/3 stop difference.

One side effect of cropping 3:2 images to 4:3 is that it greatly mitigates any softness that might show in the extreme corners. However, this comes at the expense of removing 1/9 of the pixels from the image. But as 3:2 systems generally have more pixels than 4:3 systems of the same generation, this can be done without any detail penalty when comparing systems. Realistically, however, the extreme corners make up so little of the image, and are so close between systems at the same DOF anyway, that it is only a consideration for the most hardcore of "pixel-peepers".
Please see this image as an example of what would be called a "huge" difference in the corners of different systems at the same DOF. I simply see it as a non-issue, especially considering that the differences elsewhere in the frame matter more by far, but others see it as a serious disadvantage. In any event, framing slightly wider and cropping to 4:3 will basically eliminate even that extreme case.
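The subject-distance calculation in the model example above can be checked numerically; a minimal sketch (the function name is my own):

```python
FT_PER_MM = 1 / 304.8  # 1 ft = 304.8mm

def subject_distance_mm(fl_mm, sensor_dim_mm, frame_height_mm):
    """Low-magnification form FL / d ≈ s / h, solved for the subject distance d."""
    return fl_mm * frame_height_mm / sensor_dim_mm

# A 5' 8" (1727mm) model filling 2/3 of the short (24mm) dimension of a
# landscape FF frame, shot at 85mm:
frame_height = 1727 / (2 / 3)            # 2590.5mm of scene height in the frame
d = subject_distance_mm(85, 24, frame_height)
print(round(d))                 # 9175 (mm)
print(round(d * FT_PER_MM))     # 30 (ft)
```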

Compacts / Cell Phones:

Sensor Size       Dimensions (mm)   Diagonal (mm)   Area (mm²)   ER      ER (stops)
1/3.2" (iPhone)   3.42 x 4.54       5.68            15.5         7.62x   5.86 → 6
1/2.7"            4.04 x 5.37       6.72            21.7         6.44x   5.38 → 5 1/3
1/2.5"            4.29 x 5.76       7.18            24.7         6.02x   5.18 → 5 1/3
1/2.33"           4.60 x 6.13       7.66            28.2         5.65x   5
1/1.8"            5.32 x 7.72       8.93            41.0         4.84x   4.56 → 4 1/2
1/1.7"            5.7 x 7.6         9.5             43.3         4.55x   4.38 → 4 1/3
2/3"              6.6 x 8.8         11.0            58.1         3.93x   3.95 → 4
1" (Sony RX100)   8.8 x 13.2        15.9            116          2.73x   2.89 → 3

DSLRs / mirrorless:

Sensor Size                                      Dimensions (mm)   Diagonal (mm)   Area (mm²)   ER      ER (stops)
CX (Nikon 1)                                     8.8 x 13.2        15.9            116          2.73x   2.89 → 3
4/3 (Olympus, Panasonic)                         13.0 x 17.3       21.6            225          2.00x   2
APS-C (Sigma)                                    13.8 x 20.7       24.9            286          1.74x   1.60 → 1 2/3
APS-C (Canon)                                    14.9 x 22.3       26.8            332          1.61x   1.38 → 1 1/3
APS-C (Sony, Nikon, K-M, Pentax, Fuji)           15.7 x 23.7       28.4            372          1.52x   1.21 → 1 1/3
APS-H (Canon 1D series)                          19.1 x 28.7       34.5            548          1.26x   0.66 → 2/3
35mm FF (Canon 1Ds series, 5D; Nikon D3, D700)   24 x 36           43.3            864          1.00x   0
Leica S2                                         30 x 45           54.1            1350         0.80x   -0.64 → -2/3
Pentax 645                                       33 x 44           55              1452         0.79x   -0.69 → -2/3
MF (Mamiya ZD)                                   36 x 48           60              1728         0.72x   -0.94 → -1

Rather than relate to an arbitrary standard such as 35mm FF, we can compute the ER between any two systems using the lengths of their respective sensors, or, more simply, either divide the ERs of the respective systems, or subtract their sensor ratios when using stops, using the values in the tables above. For example, the ER between a Canon 40D and an Olympus E3 can be computed (for the same AOV) as 2.00 / 1.61 ~ 1.24 (2/3 of a stop to the nearest 1/3 stop, or, more simply: 2 stops - 1 1/3 stops = 2/3 of a stop). Thus, 25mm f/2 ISO 100 on 4/3 would have the same AOV, DOF, and shutter speed as 31mm f/2.5 ISO 160 on 1.6x, since 25mm x 1.24 ~ 31mm, f/2 x 1.24 ~ f/2.5, and ISO 100 x 1.24² ~ ISO 160 (or, alternatively, f/2 + 2/3 stop = f/2.5 and ISO 100 + 2/3 stop = ISO 160).
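The cross-format ER arithmetic above can be sketched in a few lines of Python (the function names and the small diagonal table are my own, taken from the values above):

```python
import math

# Sensor diagonals (mm), from the tables above
DIAG = {"FF": 43.3, "APS-C Canon": 26.8, "4/3": 21.6}

def er(fmt_a, fmt_b):
    """Equivalence ratio of format a relative to format b (ratio of diagonals)."""
    return DIAG[fmt_a] / DIAG[fmt_b]

def er_stops(fmt_a, fmt_b):
    """The same ratio expressed in stops: 2 · log2(ER)."""
    return 2 * math.log2(er(fmt_a, fmt_b))

# Canon 40D (1.6x) vs Olympus E3 (4/3), as in the example above:
r = er("APS-C Canon", "4/3")
print(round(r, 2))                               # 1.24
print(round(er_stops("APS-C Canon", "4/3"), 2))  # 0.62 (~2/3 stop)
print(round(25 * r))                             # 31  -- 25mm on 4/3 ~ 31mm on 1.6x
print(round(2.0 * r, 1))                         # 2.5 -- f/2 on 4/3 ~ f/2.5 on 1.6x
```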

DOF / Diffraction / Total Amount of Light on the Sensor

The DOF, diffraction, and total amount of light projected on the sensor are all intimately related to the aperture diameter. This section will begin by discussing DOF, followed by a discussion of diffraction. The total amount of light projected on the sensor is treated in a different section, Exposure, Lightness, and Total Light.

The DOF (depth of field) is the distance between the near and far points from the focal plane that appear to be in critical focus, and is a central player in the amount of detail rendered in an image. It is also important not to confuse DOF with background blur (which is discussed further down). Photos with:

the same perspective (subject-camera distance)

the same framing

the same aperture diameter

the same display size

the same viewing distance

viewed with the same visual acuity

will have the same DOF (and diffraction). Alternatively, photos with:

the same perspective (subject-camera distance)

equivalent focal lengths

equivalent relative apertures

using the equivalent CoC

will also have the same DOF (and diffraction). Note that neither the number of pixels nor the size of the pixels figures into the CoC at all, except inasmuch as the size at which we display a photo depends on the size and/or number of pixels that make up the photo, such as when viewing 100% crops on a computer monitor. The mathematics demonstrating the equivalencies is worked out a bit further down -- do try to contain your excitement! ; )

Moving right along, only an infinitesimally thin portion of the image is actually in focus (the focal plane), but as our eyes and brain cannot see with infinite precision, the focal plane is perceived to have some depth. As we enlarge the image, we can more clearly see that less and less of the image is in focus, and this is how the DOF changes with enlargement. Of course, no lens is perfect, so the focal plane is not a plane at all, but rather a surface. In some instances, the curvature of the focal plane (field curvature) can be extreme enough that what appears to be edge softness is actually a flat surface falling outside the focal "plane". In addition, the focus falloff is gradual -- the closer elements in the scene are to the focal surface, the sharper they will appear. The DOF is the depth from an ideal focal plane within which we consider elements of the scene to be "sharp enough".

The number of pixels, or the sharpness of the lens, on the other hand, has nothing to do with DOF. These are independent factors in the sharpness of the photo -- a low-resolution image displayed with large dimensions does not necessarily have low DOF -- the blur is a result of the lower resolution. The difference between the blur due to limited DOF and the blur due to other factors (soft lens, low pixel count, camera shake, diffraction, etc.) is that these other sources of blur affect the entire photo equally, whereas the blur associated with shallow DOF will be greater for the portions of the scene further from the focal plane.
Blur due to motion, of course, will selectively affect objects that have the greatest relative motion in the frame (that is, a slow-moving object close to the camera may have greater blur than a fast-moving object far from the camera), and blur due to field curvature will increase as we move away from the focal point, which in many cases may mimic a more shallow DOF.

Most, if not all, online DOF calculators (as well as DOF tables) are based on "standard viewing conditions" of an 8x10 inch photo (or any photo displayed with a 12.8 inch -- 325mm -- diagonal) viewed from a distance of 10 inches with 20-20 vision. Change any of those parameters (and please note that pixel size is not one of the parameters), and you'll change the DOF (although, for example, if you double both the display dimensions and the viewing distance, the two effects will cancel each other out). These parameters are accounted for with the CoC (circle of confusion) in the DOF formula(s). Let's compute the CoC for the "standard viewing conditions" with FF, APS-C, and mFT (4/3):

Viewing distance = (10 in) · (2.54 cm / in) = 25 cm

Final image resolution for 20-20 vision with a viewing distance of 10 in (25 cm) = 5 lp / mm

Enlargement: 325 mm / 43.3 mm = 7.5 for FF, 325 mm / 28.4 mm = 11.4 for 1.5x, 325 mm / 26.8 mm = 12.1 for 1.6x, and 325 mm / 21.6 mm = 15 for mFT (4/3)

Plugging into the CoC formula, CoC (mm) = viewing distance (cm) / desired final-image resolution (lp/mm) for a 25 cm viewing distance / enlargement / 25, we get:

FF: CoC = (25 cm) / (5 lp / mm) / 7.5 / 25 = 0.027 mm

1.5x: CoC = (25 cm) / (5 lp / mm) / 11.4 / 25 = 0.018 mm

1.6x: CoC = (25 cm) / (5 lp / mm) / 12.1 / 25 = 0.017 mm

mFT: CoC = (25 cm) / (5 lp / mm) / 15 / 25 = 0.013 mm

Let's compute one more example for the CoC, using a 20x30 inch photo viewed from 2 ft away with 20-20 vision, taken with a FF camera (24mm x 36mm sensor):

Viewing distance = (2 ft) · (12 in / ft) · (2.54 cm / in) = 61 cm

Final image resolution for 20-20 vision = 5 lp / mm

Enlargement = (30 in · 25.4 mm / in) / 36 mm = 21.2

Plugging into the CoC formula, we get CoC = (61 cm) / (5 lp / mm) / 21.2 / 25 = 0.023 mm, which is what we would expect, since viewing a 20x30 inch photo at 2 ft is equivalent to viewing an 8.3x12.5 inch photo at 10 inches (very close to "standard viewing conditions").

More simply, however, there is the Zeiss formula for calculating the CoC: the sensor diagonal divided by 1730. The examples worked above correspond to dividing the sensor diagonal by 1600:

FF: CoC = 43.3mm / 1600 = 0.027 mm (Zeiss: 43.3mm / 1730 = 0.025 mm)

1.5x: CoC = 28.4mm / 1600 = 0.018 mm (Zeiss: 28.4mm / 1730 = 0.016 mm)

1.6x: CoC = 26.8mm / 1600 = 0.017 mm (Zeiss: 26.8mm / 1730 = 0.015 mm)

mFT: CoC = 21.6mm / 1600 = 0.013 mm (Zeiss: 21.6mm / 1730 = 0.012 mm)

In any case, what this demonstrates is that the CoC is proportional to the sensor diagonal for a given display size, viewing distance, and visual acuity, and is independent of the pixel count. A popular online DOF calculator, DOFMaster, uses sensor diagonal / 1400 for the CoC. This online calculator allows you to select the CoC; however, for comparative purposes across formats, the CoC will scale by the equivalence ratio (crop factor).

On the other hand, the DOF formulas do not include how closely we scrutinize a photo. In other words, two photos might have the same DOF per the mathematical formulas, but if we scrutinize one photo more closely than another (perhaps it is more interesting, for example), then the DOFs may appear different: scrutinizing one image more critically than another has the same effect as viewing that image with a higher visual acuity than the other. However, for two photos of the same scene displayed at the same size and viewed from the same distance that have the same computed DOF, whatever the subjective impression of the DOF is for one photo, it will be the same for the other (although, as discussed above, it's easy to confuse "blurry" with "less DOF").

As the DOF deepens, more of the image is rendered sharply, both because more of the image is within the DOF, and because the aberrations of the lens lessen as the aperture gets smaller -- up to a point. Depending on the sensor pixel size and the display size of an image, the effects of diffraction softening will begin to degrade the sharpness of the image more than the deeper DOF and lesser aberrations increase it. However, the point at which diffraction softening outweighs a deeper DOF and lesser aberrations depends tremendously upon the scene and the lens sharpness.
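Returning to the CoC for a moment, the calculations above are easy to script; a minimal Python sketch (the function name is my own):

```python
def coc_mm(viewing_distance_cm, enlargement, final_res_lp_mm=5):
    """CoC (mm) = viewing distance (cm) / final-image resolution (lp/mm)
    for a 25 cm viewing distance / enlargement / 25."""
    return viewing_distance_cm / final_res_lp_mm / enlargement / 25

# "Standard viewing conditions": 12.8 in (325mm) diagonal viewed from 25 cm,
# using the enlargement factors computed above:
for name, enlargement in [("FF", 7.5), ("1.6x", 12.1), ("mFT", 15)]:
    print(name, round(coc_mm(25, enlargement), 3))  # 0.027, 0.017, 0.013

# The 20x30 inch print viewed from 61 cm:
print(round(coc_mm(61, 21.2), 3))  # 0.023

# Zeiss formula for comparison:
print(round(43.3 / 1730, 3))  # 0.025
```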
It is common to read about "diffraction-limited apertures", but these are based on a "perfect" lens and images where the whole of the scene lies within the DOF. In other words, it is quite common to achieve a sharper and more detailed image past the "diffraction-limited" aperture because the deeper DOF includes more of the scene.

At the opposite end of the DOF spectrum, shallow DOFs serve to isolate the subject from the background. However, while a more shallow DOF does lead to greater background blur, it is not the only, or, in many instances, even the major player in the quantity of background blur, much in the same way that many confuse bokeh (the quality of the out-of-focus areas of an image) with the quantity of the blur. For example, if the subject is 10 ft from the camera, 50mm f/2 will have the same framing and DOF on the same format as 100mm f/2 for a subject 20 ft away. That is, the same distance from the focal plane will be considered to be in critical focus. But the nature of the background blur will be very different -- the longer focal length will magnify the background blur.

In fact, we can be more specific. The amount of background blur (assuming the background is well outside the DOF) is proportional to the ratio of the aperture diameters. For example, while the DOF for 50mm f/2 and 100mm f/2 will be the same for the same framing (in most circumstances), the background blur will be double for 100mm f/2, since the aperture diameter is twice as large for 100mm f/2 as for 50mm f/2 (100mm / 2 = 50mm, 50mm / 2 = 25mm). A good tutorial on this can be found here, and here is an excellent blur calculator/demonstrator.
We can now make the following generalizations about the DOF of images on different formats for non-macro situations (when the subject distance is "large" compared to the focal length), keeping in mind that aperture diameter = focal length / f-ratio, and assuming that all images are viewed from the same distance with the same visual acuity:

For the same perspective, framing, relative aperture, and display size, larger-sensor systems will yield a more shallow DOF than smaller-sensor systems in proportion to the ratio of the sensor sizes.



For the same perspective, framing, aperture diameter, and display size, all systems have the same DOF.



If both formats use the same focal length and relative aperture (and thus also the same aperture diameter), but the larger-sensor system gets closer so that the subject occupies the same area of the frame, and the photos are displayed at the same dimensions, then the larger-sensor system will have a more shallow DOF in proportion to the ratio of the sensor sizes.



For the same perspective and focal length, larger-sensor systems will have a wider framing. If the same relative aperture is used, then both systems will also have the same aperture diameter. As a result, if the photo from the larger-sensor system is displayed at a larger size in proportion to the ratio of the sensor sizes, or the photo from the larger-sensor system is cropped to the same framing as the image from the smaller-sensor system and displayed at the same size, then the two photos will have the same DOF.

Let's give examples for each scenario using mFT (4/3), 1.6x, and FF (forgive me for leaving out 1.5x, as it is so close to 1.6x as to be all but redundant for the purpose of examples, as I am repeating the process several times). As noted earlier, the condition of "same display size" only requires the same diagonal length, rather than the same length and width. This distinction is unnecessary when the systems have the same aspect ratio, but can sometimes be a factor when the aspect ratios are not the same (for example, if we display a photo with a 15 inch diagonal, then a 4:3 photo would be 9 x 12 inches and a 3:2 photo would be 8.3 x 12.5 inches). In all cases, we assume the same viewing distance and visual acuity:

Let's say we are taking a photo of a subject 10 ft away, and use 40mm f/2.8 on mFT (4/3), 50mm f/2.8 on 1.6x, and 80mm f/2.8 on FF. All will have the same perspective, since the subject-camera distance is the same, and all will have the same AOV, since 40mm x 2 = 50mm x 1.6 = 80mm. Since all are using f/2.8, then if we display the photos at the same size, FF will have the least DOF, 1.6x will have 1.6x more DOF than FF, and mFT (4/3) will have twice the DOF of FF (1.25x more DOF than 1.6x).



Again, let's say we are taking a photo of a subject 10 ft away, but this time use 40mm f/4 on mFT (4/3), 50mm f/5 on 1.6x, and 80mm f/8 on FF. Once again, all will have the same perspective since the subject-camera distances are the same, and all will have the same AOV since 40mm x 2 = 50mm x 1.6 = 80mm. The aperture diameters will also be the same since 40mm / 4 = 50mm / 5 = 80mm / 8 = 10mm. In this case, all photos will have the same DOF when displayed at the same dimensions.



This time, let's shoot the subject from 20 ft at 40mm f/4 on mFT (4/3), 16 ft at 40mm f/4 on 1.6x, and 10 ft at 40mm f/4 on FF. While the perspectives are different (since the subject-camera distances are not the same), the framings of the subject are the same since 20 ft / 2 = 16 ft / 1.6 = 10 ft, but FF will have the most shallow DOF, 1.6x will have a DOF 1.6x deeper, and mFT (4/3) will have double the DOF of FF.



We now shoot the same subject from 10 ft away with all formats, but this time use the same focal length and the same f-ratio as well (for example, 50mm f/2.8). If we display the mFT (4/3) photo with a 12 inch diagonal, the 1.6x photo with a 15 inch diagonal, and the FF photo with a 24 inch diagonal, and view the images from the same distance, then all will have the same DOF. Note how the diagonals correspond to the focal multipliers of the respective systems: 12 in x 2 = 15 in x 1.6 = 24 in, which means that if we cropped the photos to the same framing, they would all be the same dimensions.

Let's now demonstrate the DOF equivalence mathematically. As stated earlier, the DOF is the distance from the focal plane within which objects are considered to be critically sharp. However, the DOF is not always an even split about the focal plane. When the subject distance (d) is "large" compared to the focal length of the lens (non-macro distances), the far limit of critical focus (df), near limit of critical focus (dn), and DOF can be computed as:

df ~ [H · d] / [H - d]

dn ~ [H · d] / [H + d]

DOF = df - dn ~ [2 · H · d²] / [H² - d²]

where d is the distance to the subject and H is the hyperfocal distance. We can now compute the DOF behind the subject and the DOF in front of the subject:

DOF behind = df - d = d² / [H - d]

DOF in front = d - dn = d² / [H + d]

Note that the smaller the subject-camera distance (d) becomes in comparison to the hyperfocal distance (H), the more evenly the DOF is split in front of and behind the subject, since (H - d) and (H + d) are nearly equal for values of d that are small compared to H. In other words, the common wisdom that 1/3 of the DOF is in front of the subject and 2/3 of the DOF is behind the subject is not always true. This "rule" is valid only when the subject-camera distance, d, is equal to 1/3 the hyperfocal distance, H. As the subject distance moves away from that particular value, the 1/3 - 2/3 split becomes a progressively less accurate description of the DOF in front of and behind the subject.

It is also interesting to note that as the subject distance approaches the hyperfocal distance, the far distance of critical focus approaches infinity, and the near distance of critical focus approaches half the hyperfocal distance, thus giving infinite DOF beyond half the hyperfocal distance.

Another interesting scenario to consider is that when the subject-camera distance, d, is small compared to the hyperfocal distance, H, then, for the same format, the DOF will be essentially the same for the same framing and f-ratio. For example, 50mm at 10 ft has the same framing as 100mm at 20 ft on 35mm FF. If we shoot the scene at f/2 in each case, we will get the same DOF, since the hyperfocal distance is 137 ft for a CoC of 0.03mm (the value used in most DOF calculators for 35mm FF, which corresponds to an 8x10 inch print viewed from a distance of 10 inches), which is much larger than the subject distance of 10 ft. However, were we instead to compare 24mm f/2 at 30 ft to 48mm f/2 at 60 ft (same framing), we would get a different DOF, since the hyperfocal distance works out to 30 ft (for a CoC of 0.03mm), which is comparable to, rather than much larger than, the subject-camera distance.
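The DOF formulas above can be sketched in a few lines of Python (function names are my own); note how scaling the focal length, f-ratio, and CoC by the equivalence ratio leaves the hyperfocal distance, and hence the DOF, unchanged:

```python
def hyperfocal_mm(fl_mm, f_ratio, coc_mm):
    """Hyperfocal distance: H ~ FL² / (f · c)."""
    return fl_mm ** 2 / (f_ratio * coc_mm)

def dof_mm(fl_mm, f_ratio, coc_mm, d_mm):
    """Near limit, far limit, and total DOF for subject distance d (non-macro)."""
    H = hyperfocal_mm(fl_mm, f_ratio, coc_mm)
    d_near = H * d_mm / (H + d_mm)
    d_far = H * d_mm / (H - d_mm) if d_mm < H else float("inf")
    return d_near, d_far, d_far - d_near

MM_PER_FT = 304.8

# 50mm f/2 at 10 ft vs 100mm f/2 at 20 ft on FF (CoC = 0.03mm):
# same framing, and nearly the same DOF, since d << H in both cases.
print(round(dof_mm(50, 2, 0.03, 10 * MM_PER_FT)[2]))   # 448 (mm)
print(round(dof_mm(100, 2, 0.03, 20 * MM_PER_FT)[2]))  # 447 (mm)

# Scaling FL, f-ratio, and CoC by the equivalence ratio (here R = 2) leaves H unchanged:
print(round(hyperfocal_mm(80, 8, 0.027)))    # 29630 -- FF
print(round(hyperfocal_mm(40, 4, 0.0135)))   # 29630 -- mFT equivalent
```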
In any case, we can see that the DOF is a function only of the hyperfocal distance (H) and the subject distance (d). The roles of the focal length (FL), f-ratio (f), and CoC (c) are contained in the hyperfocal distance:

H ~ FL² / (f · c)

If we scale the focal length, f-ratio, and CoC by the equivalence ratio (R), the hyperfocal distance remains the same:

H' ~ (FL·R)² / [(f·R) · (c·R)] = [FL² · R²] / [(f · c) · R²] = FL² / (f · c) = H

Consequently, the DOF is invariant for the same perspective, framing, and aperture diameter. By expressing H in terms of the aperture diameter (a), angle of view (AOV), and the proportion of the sensor diagonal that the CoC covers (p), we get a format-independent expression for the hyperfocal distance, and consequently the DOF:

H ~ a / [2·p·tan (AOV/2)]

Thus, for non-macro situations, the DOF for the same perspective, framing, and output size is also the same. A consequence of a larger sensor is that a longer focal length is required for the same perspective and framing, as well as a larger f-ratio to obtain the same aperture diameter. For example, let's consider images taken of the same scene from the same position with the same framing:

A7R2 at 80mm, f/8 (aperture diameter = 80mm / 8 = 10mm)

D500 at 53mm, f/5 (aperture diameter = 53mm / 5 ~ 10mm)

80D at 50mm, f/5 (aperture diameter = 50mm / 5 = 10mm)

EM1.2 at 40mm, f/4 (aperture diameter = 40mm / 4 = 10mm)

Since the perspective, framing, and aperture diameters are all the same, then for the same display size and viewing distance, the DOFs will also be the same. As an aside, if the shutter speeds are also the same (which will require a higher ISO for the higher f-ratios to maintain the same lightness), then the images will be made with the same total amount of light as well, which will result in the same relative noise if the sensors have the same efficiency.

Another reason that DOF is so important, even if DOF, per se, is not an issue for the photographer, is that it is also intimately connected with sharpness, diffraction softening, and vignetting. The reason that DOF affects sharpness is twofold. First, as shown above, the DOF is directly related to the aperture, and the larger the aperture diameter, the greater the aberrations and, in some instances, the greater the field curvature. Second, a more shallow DOF means that less of the scene will be within the DOF, and, by definition, elements of the scene outside the DOF will not be sharp. This second point is especially important since, as noted earlier, DOF calculators usually base their calculations on a CoC for an 8x10 print viewed from 10 inches away. Since so many now evaluate the sharpness of a lens on the basis of 100% crops on a computer monitor, the DOF seen at 100% on the computer screen is significantly more narrow than the DOF computed by the calculators.

In addition to DOF and sharpness, the aperture is also intimately connected to diffraction. Diffraction softening is the result of the wave nature of light representing point sources as disks (known as Airy Disks), and is most definitely not, as is misunderstood by many, an effect of light "bouncing off" the aperture blades.
The diameter of the Airy Disk is a function of both the f-ratio and the wavelength of light: d ~ 2.44·λ·f, where d is the diameter of the Airy Disk, λ is the wavelength of the light, and f is the relative aperture. Larger relative apertures (deeper DOFs) result in larger disks, as do longer wavelengths of light (towards the red end of the visible spectrum), so not all colors will suffer from diffraction softening equally. The wavelengths of light in the visible spectrum differ by approximately a factor of two, which means, for example, that red light will suffer around twice the diffraction softening of blue light.

Diffraction softening is unavoidable at any aperture, and worsens as the lens is stopped down. However, other factors mask the effects of the increasing diffraction softening: the increasing DOF and the lessening lens aberrations. As the DOF increases, more and more of the photo is rendered "in focus", making the photo appear sharper. In addition, as the aperture narrows, the aberrations of the lens lessen, since more of the lens is masked by the aperture blades. For wide apertures, the increasing DOF and lessening lens aberrations far outweigh the effects of diffraction softening. At small apertures, the reverse is true. In the interim (often, but not always, around a two-stop interval), the two effects roughly cancel each other out, and the balance point for the edges typically lags behind the balance point for the center by around a stop (the edges usually suffer greater aberrations than the center). In fact, it is not uncommon for diffraction softening to be dominant right from wide open for lenses slower than f/5.6 equivalent on FF, and thus these lenses are sharpest wide open (for the portions of the scene within the DOF, of course).

The optimum DOF is often more a matter of artistic intent than resolved detail. Clearly, more shallow DOFs have less of the scene within critical focus, but this is by design.
What is not by design is that, at very wide apertures, lens aberrations reduce the detail even for the portions of the scene within the DOF, so even if the photographer prefers the more shallow DOF, they may choose to stop down simply to render more detail where detail is important. Likewise, while a photographer may stop down with the intent of getting as much of the scene as possible within the DOF so as to have a more detailed photo overall, portions of the scene that were within the DOF at wider apertures will become softer due to the effects of diffraction. Thus, the photographer must balance the increase in detail gained by bringing more of the scene within the DOF against the detail lost for portions of the scene that were within the DOF at wider apertures. In addition, deeper DOFs require smaller apertures, which means either longer shutter speeds (increasing the risk/amount of motion blur and/or camera shake) or greater noise, since less light will fall on the sensor at more narrow apertures for a given shutter speed.

A common myth is that smaller pixels suffer more from diffraction than larger pixels. On the contrary, for a given sensor size and lens, smaller pixels always result in more detail. That said, as we stop down and the DOF deepens, we reach a point where we begin to lose detail due to diffraction softening. As a consequence, photos made with more pixels will begin to lose their detail advantage earlier and more quickly than images made with fewer pixels, but they will always retain more detail. Eventually, the additional detail afforded by the extra pixels becomes trivial (most certainly by f/32 on FF). See here for an excellent example of the effect of pixel size on diffraction softening. In terms of cross-format comparisons, all systems suffer the same from diffraction softening at the same DOF.
This does not mean that all systems resolve the same detail at the same DOF, as diffraction softening is but one of many sources of blur (lens aberrations, motion blur, large pixels, etc.). However, the more we stop down (the deeper the DOF), the more diffraction becomes the dominant source of blur. By the time we reach the equivalent of f/32 on FF (f/22 on APS-C, f/16 on mFT and 4/3), the differences in resolution between systems, regardless of the lens or pixel count, are trivial. For example, consider the Canon 100 / 2.8L IS macro on a 5D2 (21 MP FF) vs the Olympus 14-42 / 3.5-5.6 kit lens on an L10 (10 MP 4/3). Note that the macro lens on FF resolves significantly more (to put it mildly) at the lenses' respective optimal apertures, due to the macro lens being sharper, the FF DSLR having significantly more pixels, and the enlargement factor being half as much for FF vs 4/3. However, as we stop down past the peak aperture, all those advantages are asymptotically eaten away by diffraction, and by the time we get to f/32 on FF and f/16 on 4/3, the systems resolve almost the same. For the same color and f-ratio, the Airy Disk will have the same diameter, but will span a smaller portion of a larger sensor than of a smaller sensor, thus resulting in less diffraction softening in the final photo. On the other hand, for the same color and DOF, the Airy Disk spans the same proportion of all sensors, and thus the effect of diffraction softening is the same for all systems at the same DOF. Let's work an example using green light (λ = 530 nm = 0.00053mm). The diameter of the Airy Disk at f/8 is 2.44 · 0.00053mm · 8 = 0.0103mm, and the diameter of the Airy Disk at f/4 is half as much -- 0.0052mm. For FF, the diameter of the Airy Disk represents 0.0103mm / 43.3mm = 0.024% of the sensor diagonal at f/8 and 0.0052mm / 43.3mm = 0.012% of the diagonal at f/4. For mFT (4/3), the diameter of the Airy Disk represents 0.0103mm / 21.6mm = 0.048% at f/8 and 0.0052mm / 21.6mm = 0.024% at f/4. 
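The worked example can be checked with a short sketch (function name illustrative) that expresses the Airy Disk diameter as a fraction of the sensor diagonal:

```python
def airy_fraction_of_diagonal(wavelength_mm, f_ratio, diagonal_mm):
    # Airy Disk diameter (2.44 * wavelength * f) over the sensor diagonal.
    return 2.44 * wavelength_mm * f_ratio / diagonal_mm

GREEN = 0.00053       # 530 nm expressed in mm
FF, MFT = 43.3, 21.6  # sensor diagonals in mm

# Same f-ratio: the disk spans twice the proportion of the mFT diagonal.
ff_f8 = airy_fraction_of_diagonal(GREEN, 8, FF)    # ~0.024%
mft_f8 = airy_fraction_of_diagonal(GREEN, 8, MFT)  # ~0.048%

# Same DOF (f/8 on FF vs f/4 on mFT): the proportions are equal.
mft_f4 = airy_fraction_of_diagonal(GREEN, 4, MFT)  # ~0.024%
```

The equality of the same-DOF case follows directly from the f-ratios and diagonals scaling by the same factor of two.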
Thus, at the same f-ratio, we can see that the diameter of the Airy Disk represents half the proportion of a FF sensor as mFT (4/3), but at the same DOF, the diameter of the Airy Disk represents the same proportion of the sensor. In other words, all systems will suffer the same amount of diffraction softening at the same DOF and display dimensions. However, the system that began with more resolution will always retain more resolution, but that resolution advantage will asymptotically vanish as the DOF deepens. In absolute terms, the earliest we will notice the effects of diffraction softening is when the diameter of the Airy Disk exceeds that of a pixel (two pixels for a Bayer CFA), but, depending on how large the photo is displayed, we may not notice until the diameter of the Airy Disk is much larger. Typically, the effects of diffraction softening do not even begin to become apparent until f/11 on FF (f/7.1 on APS-C and f/5.6 on mFT -- 4/3), and start to become strong by f/22 on FF (f/14 on APS-C and f/11 on mFT -- 4/3). By f/32 on FF (f/22 on APS-C, f/16 on mFT -- 4/3) the effects of diffraction softening are so strong that there is little difference in resolution between systems, regardless of the lens, sensor size, or pixel count. We can now summarize the effects of diffraction softening as follows:

Diffraction is always present. As the lens is stopped down, optical aberrations lessen and diffraction softening increases.

All else equal, more pixels will always resolve more detail, regardless of other sources of blur, including diffraction.

The "diffraction limited aperture" is the relative aperture where the effects of diffraction softening overcome the lessening lens aberrations, and will vary from lens to lens as well as where in the frame we are looking (e.g. center vs edges, where the edges typically, but not always, lag around a stop behind the center).

The pixel count has a very minor effect on the diffraction limited aperture. For example, if the diffraction limited aperture on a 12 MP sensor is f/5.6, it may be at f/4 on a 36 MP sensor, all else equal (but the 36 MP sensor will certainly resolve more at f/5.6 than the 12 MP sensor).

All systems suffer the same diffraction softening at the same DOF, but do not necessarily resolve the same detail at the same DOF, as diffraction softening is merely one of many forms of blur (e.g. lens aberrations, motion blur, large pixels, etc.).

As the DOF deepens, all systems asymptotically lose detail, and by f/32 on FF (f/22 on APS-C, f/16 on mFT -- 4/3), the differences in resolution between systems are trivial, regardless of the lens, sensor size, or pixel count.

It is worth noting that some lens tests show much greater discrepancies in the effects of diffraction softening than we would expect. Per the lens tests at www.slrgear.com, we can see huge disparities between f/16 and f/22 even with high end lenses like the Zuiko 50 / 2 macro (7 blades) and Zuiko 150 / 2 (9 blades), which are far greater than can be accounted for by the minor differences in the aperture shapes. In fact, the Canon 100 / 2.8 macro and the Sigma 105 / 2.8 macro both have 8 blades, but show the same huge differences in sharpness from f/22 to f/32 on 1.6x as the Zuikos. The most likely explanation is that, at the minimum aperture, not all lenses are equally accurate. For example, consider a 50mm lens with a constant "aperture bias" of -0.5mm; that is, the lens always sets the aperture 0.5mm smaller than it should (whether as a result of sloppy quality control or sloppy design). At f/4, the aperture diameter should be 50mm / 4 = 12.5mm. However, a bias of -0.5mm would make the aperture diameter 12mm instead, resulting in a true f-ratio of 50mm / 12mm = f/4.17 -- 1/9 of a stop off -- which is insignificant. At f/8, the aperture diameter should be 50mm / 8 = 6.25mm. Again, a bias of -0.5mm would make the aperture diameter 5.75mm, resulting in a true f-ratio of 50mm / 5.75mm = f/8.7 -- 1/4 of a stop off -- bordering on significant, but still small enough to go unnoticed by most people. At f/22, however, the error becomes much more of an issue. The aperture diameter should be 50mm / 22 = 2.27mm. 
This time, the -0.5mm bias would make the aperture diameter 1.77mm, for a true f-ratio of 50mm / 1.77mm = f/28 -- 2/3 of a stop different -- very noticeable, and resulting in a considerable difference in diffraction softening at such small apertures. Furthermore, the "aperture bias" need not be constant, and could vary depending on the selected f-ratio, producing even greater differences at small apertures. Of course, this hypothesis for the discrepancies in the effects of diffraction softening in the SLR Gear tests would need to be verified by comparing the exposures at different f-ratios. The effects of vignetting can confound the issue at wide apertures, but, as demonstrated above, small errors in the aperture diameters are insignificant at wider apertures anyway; thus, we would test at small apertures, such as f/22 and smaller, where the discrepancies due to aperture bias are most noticeable. Unfortunately, SLR Gear does not host (or even still have) these images, so the conjecture remains unverified. It is also possible that an "aperture bias" was an issue with the particular lenses they tested, but not endemic to all (or most) copies of those lenses. In addition, while it is well-known that the shape of the aperture plays a role in how the bokeh is rendered, it is unlikely that it plays any role in the degree of diffraction softening so long as the area of the aperture is the same. Regardless, the effects of diffraction softening are not particularly significant until very small apertures. To get a deeper DOF than the lens can achieve by stopping down, we can either use a shorter lens with a TC (teleconverter), or frame wider and crop to the desired framing. The effect of a TC is to multiply both the focal length and the relative aperture by the same factor. For example, using a 50mm macro at f/22 with a 2x TC, we would effectively be at 100mm f/45. 
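The aperture-bias arithmetic above is easy to script; this is a sketch of the hypothesis only (the -0.5mm bias is the hypothetical value from the example, and the helper names are invented for illustration):

```python
import math

def true_f_ratio(focal_length_mm, marked_f_ratio, bias_mm=-0.5):
    # The actual f-ratio when the aperture diameter is off by bias_mm.
    diameter_mm = focal_length_mm / marked_f_ratio + bias_mm
    return focal_length_mm / diameter_mm

def stops_off(marked, true):
    # Error in stops: light gathered scales with the square of the diameter.
    return 2 * math.log2(true / marked)

for marked in (4, 8, 22):
    true = true_f_ratio(50, marked)
    print(f"f/{marked}: true f-ratio f/{true:.2f}, {stops_off(marked, true):.2f} stops off")
```

Running it reproduces the example: roughly f/4.17 (1/9 stop), f/8.7 (1/4 stop), and f/28 (2/3 stop).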
While more convenient than using a TC, the downside to framing wider and cropping is that it costs us pixels. However, since the lenses for all systems can stop down past the diffraction limited resolution of the sensor, much of the detail lost by cropping would have been lost to diffraction softening regardless. For example, an image at 100mm f/32 will have the same DOF and nearly the same detail as an image at 50mm f/16 taken from the same distance and then cropped to the same framing, despite the crop having 1/4 the number of pixels on the subject. This is because the f/32 image has already lost almost the same amount of detail to diffraction softening, although it will still retain slightly more detail, since oversampling with a greater number of diffraction-limited pixels still renders slightly more detail than a smaller number of larger pixels. Of course, it would be nice if we didn't have to stop down to increase sharpness for the portions of the image within the DOF, especially as this helps us avoid the effects of diffraction softening. For example, let's say we are taking a photo of a landscape where the entire scene is within the DOF, even at f/2.8. Thus, there would be no reason to shoot at a different f-ratio on different systems to maintain the same DOF. However, the aberrations for larger apertures are more problematic than the aberrations for smaller apertures, and, once again, we realize that the larger sensor system will require a higher f-ratio to maintain the same aperture diameter. Thus, even though the DOF may not be an issue per se, the aberrations, as well as vignetting, most certainly can be. Of course, one might ask why we simply don't choose the settings on each system that produce the "best" results for each. Well, of course that is how we would use the systems. The section on partial equivalence talks more about this. 
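The cross-format scaling used throughout this section can be sketched as follows (the function name is illustrative): multiply the focal length and f-ratio by the ratio of crop factors, and the ISO by its square, to hold AOV, DOF, lightness, and shutter speed constant:

```python
def equivalent_settings(focal_length_mm, f_ratio, iso, crop_from, crop_to):
    """Translate settings between formats at the same AOV, DOF, lightness,
    and shutter speed. Crop factors are relative to 35mm FF (FF = 1.0)."""
    scale = crop_from / crop_to
    return focal_length_mm * scale, f_ratio * scale, iso * scale ** 2

# mFT / 4/3 (2.0x) at 25mm f/8 ISO 800 -> FF at 50mm f/16 ISO 3200.
fl, fr, iso = equivalent_settings(25, 8, 800, crop_from=2.0, crop_to=1.0)
```

In practice the results are then rounded to the nearest marked 1/3 stop, which is why some table entries below are approximate.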
Putting it all together in terms of AOV, DOF, and shutter speed, let's look at some examples of equivalent settings from common cameras (using the same AOV) with all f-ratios and ISOs rounded to the nearest 1/3 stop, which show how the available DOFs on different formats differ:

Camera        Focal Multiplier   Focal Length (mm)   f-ratio   Shutter Speed   ISO
Canon S3      6.02x              8.3                 f/2.8     1/400           100
Canon G7      4.84x              10.3                f/3.2     1/400           125
Canon Pro1    3.93x              12.7                f/4       1/400           160
Olympus E3    2.00x              25                  f/8       1/400           800
Sigma SD14    1.74x              29                  f/9       1/400           1000
Canon 40D     1.62x              31                  f/10      1/400           1250
Nikon D300    1.52x              33                  f/11      1/400           1250
Canon 1DIII   1.26x              40                  f/13      1/400           1600
Canon 5D      1.00x              50                  f/16      1/400           3200
Leica S2      0.80x              62.5                f/20      1/400           5000
Mamiya ZD     0.72x              67                  f/21      1/400           6400

EXPOSURE TIME



The exposure time (shutter speed) is, obviously, the length of time the shutter remains open to achieve the desired exposure. The reason Equivalent photos have the same shutter speed is that the amount of motion blur will be the same for a given shutter speed. However, there are many times when we would not compare formats with the same shutter speed, since there is enough light to stop down to achieve the desired DOF and still have a fast enough shutter that motion blur is a non-issue. Under these circumstances, the larger sensor system can deliver both more detail (subject to lens sharpness and pixel count) and a cleaner image, since the slower shutter speed results in more light falling on the sensor for a given DOF. For example, let's say we are shooting a landscape. The following settings would be likely candidates for a particular scene:

A7R2 at 24mm, f/11, 1/100, ISO 100

D500 at 16mm, f/7.1, 1/250, ISO 100

80D at 15mm, f/7.1, 1/250, ISO 100

EM1.2 at 12mm, f/5.6, 1/400, ISO 100

While landscapes are a common scenario, and such a comparison is of practical value to most photographers, we must take care to note that this partially equivalent scenario is only valid when the shutter speeds are sufficiently high to avoid motion blur and, if a tripod is not being used, camera shake. If, instead, we were engaged in street photography near dusk, we would need to compare with fully equivalent settings, since a sufficient shutter speed would be crucial to stopping motion blur for the required DOF:

A7R2 at 24mm, f/11, 1/100, ISO 400

D500 at 16mm, f/7.1, 1/100, ISO 160

80D at 15mm, f/7.1, 1/100, ISO 160

EM1.2 at 12mm, f/5.6, 1/100, ISO 100

Alternatively, if one system has IS (image stabilization) and the other does not, and motion blur is not an issue, then the IS system will be able to use a slower shutter speed than the non-IS system can when a tripod is not used. In this case, the system with IS will have the noise advantage for a given DOF, since more light will fall on the sensor. So if we are using anything other than base ISO, we cannot discount the importance of shutter speed in comparing systems, since the only time we would not be at base ISO is when shutter speed is a factor. Under these circumstances, the only way for the larger formats to achieve less relative noise than the smaller formats is by using a more shallow DOF, rather than raising the ISO, to maintain the necessary shutter speed.

LIGHTNESS



The lightness of the photo is how bright the photo appears, and is usually adjusted by the ISO setting of the camera. Let's say we have a perfect sensor that is a photon counter. That is, each photon that falls on the pixel is recorded, so that if 100 photons fell on a pixel, the image file would record a value of 100 at base ISO. Then at ISO 400, the image file would record a value of 400; at ISO 1600, a value of 1600; etc., where "brighter" values would be displayed on a computer monitor or printed on a printer with greater "lightness". See here for a much more in depth discussion.
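The idealized photon-counter sensor described above amounts to a simple linear scaling (a sketch; the function name is illustrative):

```python
def recorded_value(photons, iso, base_iso=100):
    # The idealized "photon counter": the recorded value scales linearly
    # with the ISO multiplier over base ISO.
    return photons * (iso / base_iso)

# 100 photons -> 100 at ISO 100, 400 at ISO 400, 1600 at ISO 1600.
values = [recorded_value(100, iso) for iso in (100, 400, 1600)]
```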

DISPLAY DIMENSIONS

The display dimensions are the physical size of the viewed image, whether a print or an image on a computer monitor. People, including reviewers, tend to compare IQ at the pixel level, rather than the image level, which leads to incorrect conclusions about the image unless the images are made from the same number of pixels. If two images are made from different numbers of pixels and we are to compare them at the pixel level, then we need to properly resample the images to a common number of pixels. We can increase the IQ of an image by increasing either the native pixel count or the quality of the individual pixels. Thus, if we compare two images with unequal pixel counts at the pixel level (often referred to as a "100% comparison"), we are disregarding the increase in IQ that comes from the additional pixels, which is discussed in more detail in the Megapixels: Quality vs Quantity section of the essay. For example, let's say we wish to compare the Canon 1DsIII (21 MP) and the Nikon D3 (12 MP). Comparing images from the two systems at the pixel level is the same as comparing 16x24 inch prints from the 1DsIII to 12x18 inch prints from the D3, which is hardly a fair comparison. The best way to compare images is to compare in the manner in which they will be displayed. For example, if you are going to print the images, then print them and compare. Of course, this is impractical unless we already have access to both systems. And, even if the reviewer provides us with the files to print ourselves, that is a bit of a pain, and certainly not a basis for an objective conclusion that we can share with others, as not all will be using the same printer. So, what to do? The easiest solution is to resample both images to a common dimension that is at least as large as the larger image and then compare at the pixel level. 
The reason to compare at a dimension at least as large as the larger image is that downsampling the larger image will cause it to lose detail, which, I presume, is one of the qualities of IQ being measured in the comparison. In addition, if we are comparing relative noise, it only makes sense to do so at the same level of detail, so we would apply NR (noise reduction) to the more detailed image to match the level of detail of the less detailed image. Of course, care need be taken in the resampling process, since a poor resampling method can lead to incorrect conclusions about the comparative IQ between systems. This is especially true when comparing relative noise: we cannot simply downsample the larger file to the dimensions of the smaller file -- we first need to apply NR (or a specific form of blur) and then downsample. In any event, it is better to upsample the smaller image than to downsample the larger image. Again, using the example of the 1DsIII vs D3 comparison, we could resample both images to 54 MP (300 PPI for a 20x30 inch print) and then compare at the pixel level. Of course, there's nothing magical about 54 MP, but we would like to incorporate some kind of "future-proofing" for comparisons with future cameras, and we need some value larger than 21 MP, so 300 PPI for a 20x30 inch print sounds like a good "standard", as very few would print larger than this, no matter what pixel counts the future holds or what format they shoot. Those who do print larger would, of course, want to compare at the larger output size. Another option would be for a reviewer to print the images at a variety of sizes (e.g. 4x6, 8x12, 12x18, 16x24, and 20x30 inches) on a top-of-the-line printer, scan the prints, and then compare the scans from the same size prints. 'Tis a pain, but probably the most fair way to compare, although I honestly don't know if it would produce different results than resampling the two images to the "appropriate" PPI for each print size. 
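The common-dimension arithmetic above can be sketched as follows (names illustrative); it computes the 54 MP target and the linear upsampling factor each image needs to reach it:

```python
import math

TARGET_PX = (300 * 20, 300 * 30)  # 20x30 inches at 300 PPI = 6000 x 9000
TARGET_MP = TARGET_PX[0] * TARGET_PX[1] / 1e6  # 54.0 MP

def upsample_factor(native_mp, target_mp=TARGET_MP):
    # Linear (per-side) scale factor needed to reach the common dimension,
    # since pixel count scales with the square of the linear dimensions.
    return math.sqrt(target_mp / native_mp)

f_1dsiii = upsample_factor(21.0)  # Canon 1DsIII: ~1.6x per side
f_d3 = upsample_factor(12.1)      # Nikon D3: ~2.1x per side
```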
And, of course, we cannot discount the effects of viewing images on non-calibrated monitors (I've seen more than one comparison where someone claimed the highlights of the image were blown, with several others chiming in that they needed to calibrate their monitor). Thus, comparing images that have different pixel counts at the pixel level is a very poor way to compare the IQ between systems. However, the closer the pixel counts are, the better such a comparison will approximate the actual differences. For example, it's reasonable to say that a comparison between the 12.1 MP Nikon D700, 12.1 MP Nikon D3, 12.3 MP Nikon D300, and the 12.7 MP Canon 5D would easily be "close enough" without resampling. But when comparing the 10.1 MP Canon 40D, 10.1 MP 1DIII, or the 10.1 MP Olympus E3 to the aforementioned cameras at the pixel level, we are beginning to stretch a bit (12% difference in linear pixel count), and we are certainly stretching when comparing the 1DsIII to any of the above cameras at the pixel level at native image sizes (32% difference in linear pixel count between the 1DsIII and the D3, for example). So, while no comparison is without its potential problems, the easiest mistake to correct is to carefully resample images to a common dimension, as well as applying NR as necessary when comparing relative noise, before comparing at the pixel level.
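The "linear pixel count" differences quoted above are simply the square roots of the megapixel ratios, since pixel count scales with area; a sketch (function name illustrative):

```python
import math

def linear_difference_pct(mp_a, mp_b):
    # Percent difference in linear pixel count (pixels per side) between
    # two sensors, given their megapixel counts.
    return (math.sqrt(mp_a / mp_b) - 1) * 100

d3_vs_1dsiii = linear_difference_pct(21.0, 12.1)  # ~32% (1DsIII vs D3)
e3_vs_5d = linear_difference_pct(12.7, 10.1)      # ~12% (5D vs E3)
```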