
Overall Summary of Optical Performance & Score

I wanted to give a quick summary of the results related to optical performance for those who prefer to just read the “executive summary.” For my more detailed readers, don’t worry … I go through each test in exhaustive detail as well.

There are a few major aspects to optical performance for a long-range rifle scope, and I’ve tried to weight the major elements appropriately for the overall score. The majority of the score is obviously optical clarity, which refers to the image quality you see through the scope. But just because a scope is crystal clear doesn’t mean it is ideal for long-range shooting. The apparent field of view on these scopes varied significantly, so a very narrow or very wide field of view could impact the overall performance as well. Finally, part of the score is based on zoom ratio, which describes the magnification range of a scope. A 6-24x scope has a zoom ratio of 4 (24 ÷ 6), while a 4-28x scope has a zoom ratio of 7 (28 ÷ 4). All other things being equal, a larger zoom ratio just makes the scope more flexible. I also measured the actual maximum magnification of each scope, and the zoom ratio is based on that measured data (not simply what the manufacturer advertises).

Weights:
Optical Clarity (resolution & contrast): 70%
Field of View (measured at 18x for direct comparability): 15%
Zoom Ratio (range of magnification, based on the measured max zoom): 15%
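If you want to recreate the math, here is a minimal sketch of how those weights combine into an overall score. The function names and example numbers are mine for illustration, not the actual test data:

```python
# Minimal sketch of the overall optical performance score.
# Weights come from the table above; the component scores passed in
# are hypothetical examples, not the actual test data.

WEIGHTS = {"optical_clarity": 0.70, "field_of_view": 0.15, "zoom_ratio": 0.15}

def zoom_ratio(measured_min_mag: float, measured_max_mag: float) -> float:
    """Zoom ratio from the measured magnification range, e.g. 24 / 6 = 4."""
    return measured_max_mag / measured_min_mag

def overall_score(clarity: float, fov: float, zoom: float) -> float:
    """Weighted average; each component already normalized to a 0-100 scale."""
    return (WEIGHTS["optical_clarity"] * clarity
            + WEIGHTS["field_of_view"] * fov
            + WEIGHTS["zoom_ratio"] * zoom)

# Example: a scope scoring 90 on clarity, 70 on FOV, and 55 on zoom ratio
print(overall_score(90, 70, 55))  # 81.75
```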

You can see there were 3 rifle scopes that stood out among the crowd when it came to optical performance:

These scopes were stunning in terms of overall image quality. The Zeiss Victory Diavari 6-24×56 took the prize for pure optical clarity (also commonly referred to as image quality), but it also had one of the smallest zoom ratios of the entire group. The Schmidt and Bender PMII 5-25×56 was right behind the Zeiss in terms of image quality, and it also had one of the widest fields of view among these rifle scopes. But although its zoom ratio was slightly higher than the Zeiss’s, it still had a small magnification range relative to many of the newer designs represented here. The Hensoldt ZF 3.5-26×56 was released just a couple of months ago, and it represents the latest scope design techniques. It had an enormous zoom ratio, running from 3.5x on the low end to 26x on the high end. The image quality of the Hensoldt scope was also top notch.

Note: If you don’t agree with how the scores are weighted, don’t freak out. I provide the detailed results for each area, and you can ignore these overall scores if you’d like or even calculate your own based on different weights. They’re just intended to give a high-level overview of all the findings in this area, and weighted as the typical long-range shooter would likely rank them.

Optical Clarity (Resolution & Contrast)

When I talk about optical clarity, you can think of it as image quality. This is obviously an important feature of a rifle scope, but it’s also extremely complex, highly technical, and often the topic of heated debate. Many point to things like coatings, HD or ED glass, or where the glass is made as indicators of image quality. Here is how optics expert ILya Koshkin sees it:

How do you sell good image quality? Every magazine ad for every scope company for a riflescope talks about how well you can see. You pretty much have to tout something: patented coating recipe, extra low-dispersion glass, “high definition” glass, etc. None of these things by themselves are of any importance and (by my estimate) nearly 100% of what you see in a typical advertisement is, at best, misleading and at worst, pure BS. However, all these tricks are necessary for attracting enough attention to a particular product to at least get you to consider it. – ILya Koshkin (from Rifle Scope Fundamentals)

While coatings and glass specs certainly play into image quality, they aren’t what we really care about … at least not directly. So instead of comparing all those aspects, I focused my tests on the end result we actually care about: How much detail can you see through the scope? How well does the scope transmit contrast?

I spent a ton of time trying to come up with a data-driven approach to quantify optical clarity, because this was the only area I tested that couldn’t be measured directly. By that, I mean this is a subjective matter, and it requires a person to sit behind the scope and make judgments about what they can and can’t see. So I was particularly cautious in how I conducted these tests, in order to mitigate bias or other outside factors that could skew the results.

Many brilliant people gave me advice on these tests, including professionals whose full-time job is testing optics. What I ended up with was an original, straightforward approach. I feel it’s as objective as practically possible for an independent tester (without buying $100,000+ in specialized equipment).

The Double-Blind Tests

To start, I conducted double-blind tests to evaluate optical clarity. That means neither the scope testers nor the conductor of the tests could tell which scope was which. I carefully wrapped each scope to disguise the brand, size, shape, color, turret design, and anything else I could reasonably obscure. To be honest, there were a few times I thought I knew which scope a tester was handling, and I was wrong 3 times out of 3. The scopes were very well disguised! These blind tests were intended to prevent brand bias. I was afraid that if someone saw a brand name they thought should perform well, they might subconsciously try harder to fulfill that expectation, or if they saw a brand with a weaker reputation, they might not put in as much effort. By hiding the brand, we helped level the playing field.

Each scope had a unique letter taped to it, assigned at random. I recorded the results for each scope by its letter ID, and only matched the letters to the actual scopes underneath after all the testing was complete.
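For illustration, here is roughly what that random, sealed assignment looks like as a quick script. This is just a sketch of the protocol (I did it physically with tape and wrapping, not software), and the scope list is truncated:

```python
import random

# Truncated list for illustration; the real test included many more scopes.
scopes = ["Zeiss Victory Diavari 6-24x56",
          "Schmidt and Bender PMII 5-25x56",
          "Hensoldt ZF 3.5-26x56"]

# Assign a random letter ID to each scope. The key stays sealed
# until all testing is complete.
letters = random.sample("ABCDEFGHJKLMNPQRSTUVWXYZ", k=len(scopes))
blind_key = dict(zip(letters, scopes))

# Results are recorded against the letter ID only.
results = {letter: [] for letter in letters}

# Each tester also sees the scopes in a freshly shuffled order.
order_for_tester = random.sample(letters, k=len(letters))
```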

I still allowed testers access to the parallax knob (a.k.a. side-focus adjustment), and they were encouraged to use it to adjust each scope to their eyes. They could adjust it by reaching under the v-block, without having to visually see the knob. A few scopes had unique designs for target focus (like a focus ring on the objective bell, or around the elevation turret), and I found a way to allow them to make those adjustments as well.

Testers & Sample Size

I specifically chose a set of six testers, most of whom were “disinterested parties” (i.e. they weren’t in the market for one of these scopes, and may have never seen a tactical reticle or even owned a long-range rifle). I obviously wanted “disinterested parties” to help mitigate bias. Testers ranged in age from 30 to 80 years old.

The six-person sample size should also help normalize the results, meaning they couldn’t be heavily skewed by one or two testers. My statistician friend tells me that ideally I’d have a sample size of 30 people, but it’s hard to find 30 “disinterested parties” willing to give up hours to help me with a boring (almost clinical) test … especially for free. So a BIG thanks to the six friends who did help!

I also wanted a larger sample size (i.e. six people instead of just one or two), because everyone’s eyes are slightly different. Some people’s eyes may be more sensitive to resolution (ability to see fine detail) or contrast (ability to differentiate between light/dark and colors), which can impact the apparent clarity they experience behind a scope. Here is an excerpt from an excellent article by ILya Koshkin that explains these competing design characteristics:

I have heard people say that resolution and contrast go hand in hand. That is not, strictly speaking, correct. They are in a perpetual match of “tug of war.” It is impossible to optimize both of them to be as high as possible. If resolution is fully optimized, contrast suffers, and vice versa. In an image with high resolution, but low contrast, there may be a lot of fine detail, but you might have a hard time distinguishing between them, since they do not stand out much. Conversely, if an image has high contrast, but low resolution, all the large details will be very distinct (a common term is to say that they “pop” out at you), but small details will simply not be present. While ideally you would want to have an image with both high contrast and high resolution, that is not easy to achieve. For every optical system, the designer has to compromise between resolution and contrast in order to achieve a well-balanced image. – ILya Koshkin

Ultimately, the average performance over a six-person sample should give you a good idea of whether a scope will be more or less likely to be clear and sharp for someone’s eyes. Yes, each person’s eyes are different, but we didn’t see substantial variations between the testers.

About The Test Setup

I first set all the scopes to 18x magnification, which isn’t as easy as it sounds. A few people had mentioned that you can’t trust the marked indexes on a scope, so I wanted to ensure I really was at exactly 18x on each scope. That would allow an apples-to-apples comparison of optical clarity, instead of comparing the detail you could see through a scope set at 30x with one set at 18x. I essentially found a reliable way to ensure I had each scope set at exactly 18x, and while it was very time consuming … it was also straightforward. I explain it in detail in the How To Measure the Apparent Magnification of a Scope post.
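The full procedure is in that post, but as a point of reference, one classic first-order optics relation ties true magnification to the exit pupil: magnification = entrance pupil diameter ÷ exit pupil diameter. The sketch below is a generic illustration of that relation, not necessarily the exact method from the linked post, and it assumes the full objective aperture is acting as the entrance pupil at that power setting:

```python
def actual_magnification(objective_mm: float, measured_exit_pupil_mm: float) -> float:
    """True magnification from the measured exit pupil, assuming the full
    objective aperture acts as the entrance pupil at this power setting."""
    return objective_mm / measured_exit_pupil_mm

# Example: a 56 mm objective with a measured 3.11 mm exit pupil
print(round(actual_magnification(56.0, 3.11), 1))  # ~18.0
```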

I also set the erectors on each scope to be in the middle of their travel for the optics tests, meaning they were centered within the scope body. This should help minimize any optical skew. I accomplished this by simply putting the scope in a V-block and aiming it at a target. I’d then rotate the scope 360 degrees (without moving the V-block), adjusting the elevation and windage knobs toward the target as needed. I knew the erector was centered when the scope stayed on target through the full rotation, whether the elevation turret was pointing up, left, down, or right.

At first, I tried to set up the tests outdoors, but noticed the ambient light had a big impact on what you could see. I might be able to see something easily near sunset, but I couldn’t even get close to seeing the same level of detail midday. My pupils would dilate, and therefore change the amount of light that made it through my eye. My goal was to make all the tests completely repeatable, so I decided to move the optics tests indoors. Luckily, I have access to a 100+ yard long hallway at my church. This allowed me to completely control lighting, and eliminate any possibility for mirage. It was a great optics test environment.

According to the British Standards Institution specifications, when using test charts to determine visual acuity “the luminance of the presentation shall be uniform” (from BS 4274-1:2003). This is why I used a continuous lighting kit, with a photography umbrella to bounce light onto the charts. This prevents harsh, uneven direct light. I turned off the other lights in the area so that there was a single source of controlled, even light.

I essentially set up 9 different charts exactly 100 yards away, and each chart had a couple things on it. The first was a custom Snellen eye exam chart, which is the same type of chart you read when you go to the eye doctor. I asked each tester to read the smallest letters they could possibly make out, and I graded their accuracy. Each of the 9 charts had a cryptographically random, non-repeating string of characters that didn’t match any string on any other chart. This ensured testers couldn’t memorize patterns, and were really forced to read each letter through each scope. No chance of cheating … either they could read it or they couldn’t. I followed the British Standards Institution specification for Snellen charts and only used the letters C, D, E, F, H, K, N, P, R, U, V, and Z, based upon the equal legibility of those letters (from BS 4274-1:2003).
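To give you an idea of how strings like that can be generated, here is a minimal sketch using that BS 4274-1 letter set. This is my illustration, not the actual tool used to build the charts:

```python
import secrets

# Equal-legibility letter set from BS 4274-1:2003
SNELLEN_LETTERS = list("CDEFHKNPRUVZ")

def random_line(n_letters: int) -> str:
    """Cryptographically random line with no repeated letters (max 12)."""
    pool = SNELLEN_LETTERS.copy()
    line = []
    for _ in range(n_letters):
        letter = secrets.choice(pool)  # CSPRNG-backed choice
        pool.remove(letter)            # no repeats within a line
        line.append(letter)
    return " ".join(line)

print(random_line(6))  # e.g. "K D Z P H C"
```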

For scoring, testers were awarded more points for smaller letters. Testers only received points for letters they read correctly. If they read a line correctly, they were also awarded points for all of the lines above it (i.e. the lines with larger letters than the one they read).

The diagram below shows the height of each line, along with the max score awarded for reading the line 100% correctly (6 for 6). The score was calculated based on the relative size of the letters.

So if a tester was able to read line 5 with 100% accuracy, but couldn’t make out line 6, they’d be awarded the full 83 points for line 5 plus the sum of the points for all lines above it. So the total points awarded would be 83+67+50+33+5, which equals 238. If another tester was able to accurately read 3 of the 6 letters on line 6, he’d be awarded 50% of the max score for that line. The max points for line 6 is 100, so he’d be awarded 50 points, and that would be added to the 238 for lines 1 through 5, for a total score of 288.
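Here is that scoring logic as a short sketch, using the per-line maxima implied by the worked example above (hypothetical code for illustration, not the actual spreadsheet I used):

```python
# Per-line max scores implied by the worked example (lines 1 through 6).
LINE_MAX = [5, 33, 50, 67, 83, 100]

def snellen_score(last_perfect_line: int, partial_correct: int = 0,
                  partial_total: int = 6) -> float:
    """last_perfect_line: deepest line read 100% correctly (1-based, 0 if none).
    partial_correct / partial_total: performance on the next line down."""
    score = sum(LINE_MAX[:last_perfect_line])  # that line plus all lines above
    if last_perfect_line < len(LINE_MAX) and partial_total:
        # Partial credit: fraction of letters read correctly on the next line
        score += LINE_MAX[last_perfect_line] * partial_correct / partial_total
    return score

print(snellen_score(5))        # 238.0 -> lines 1-5 read perfectly
print(snellen_score(5, 3, 6))  # 288.0 -> plus 3 of 6 letters on line 6
```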

The next two elements on the chart are similar to a 1951 US Air Force Resolution Test Chart. The basic idea for this came from the pioneering work FinnAccuracy developed to evaluate optical equipment. However, I made a few modifications to their test based on feedback I received from several optics testers during my peer-review phase. The first column had black lines on a white background, which I refer to as the high contrast set. To the right of it was another column containing dark gray lines on a medium gray background, which I refer to as the low contrast set. Each tester found the smallest set of lines where they could still differentiate between the lines, meaning they weren’t washed out and simply appearing like a gray box. Leupold shared a great illustration that shows what I’m talking about (below). I showed each tester this illustration to help them understand what they were looking for. I recorded the number each tester identified for the set of high contrast lines they could still resolve, as well as the number for the low contrast set of lines.
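For background on this style of chart: on a standard 1951 USAF target, each group/element pair maps to a resolution in line pairs per millimeter via a simple formula. My modified charts don’t necessarily follow that exact numbering, so treat the sketch below as reference material rather than my scoring method:

```python
def usaf_lp_per_mm(group: int, element: int) -> float:
    """Standard 1951 USAF target: resolution in line pairs per millimeter."""
    return 2 ** (group + (element - 1) / 6)

print(round(usaf_lp_per_mm(0, 1), 2))  # 1.0  (largest element of group 0)
print(round(usaf_lp_per_mm(2, 3), 2))  # 5.04 (finer pattern, higher lp/mm)
```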

I also completely randomized the order of the scopes presented to each tester, so testers weren’t always looking through the same scopes first. I asked testers to take a break after they’d been looking through scopes for a few minutes. Your eyes can start to fatigue after about 15 minutes (depending on age and eyes), so we tried to prevent anyone straining to look through scopes beyond that.

Wow, that’s a lot of information! But I know there are some detail guys out there that would question all this stuff the minute I published it, so I just want to try to be completely transparent about where all this data came from.

The Optical Clarity Results

Finally, ready to see the data? Here are the results of the eye exam chart, averaged over all the testers.

The Zeiss Victory FL Diavari 6-24×56 was the clear winner for the Snellen eye charts (pardon the pun). The Schmidt and Bender PMII 5-25×56 was not far behind, and the Hensoldt ZF 3.5-26×56 performed outstanding as well. One surprise was how many Nightforce scopes ended up in the top half of this test. In fact, there were 2 within the top 5 … including the Nightforce NXS 5.5-22×50, which had been riding on my magnum hunting/target rifle for almost 2 years in rough conditions.

Here are the results from the USAF line charts. The scopes are ordered by the score for the low contrast set.

The top performers with the USAF Line Charts were similar to the Snellen eye exam charts, especially when you just look at the results for the high contrast set. I fully expected to see this type of correlation, which provides some confirmation for the validity of these tests.

But there was a slightly different mix when the chart is ordered by the low contrast performance, which is what you see above. For example, testers were able to really pick up contrast with the Vortex Razor HD 5-20×50 scope, although it didn’t perform as well in the resolution test. There were a few other scopes showing this pattern. These scope designs may lean more towards contrast than resolution.

On the other hand, there were some scopes, like the US Optics ER25 5-25×58, that showed the opposite behavior: performing well on the other tests and on the high contrast set of lines, but not performing well on the low contrast lines.

Contrast is especially important in low light conditions, so hunters or long-range shooters who frequently find themselves in low light may want to pay special attention to these results. Although I didn’t specifically test in low light conditions, the results of the low contrast test should be indicative of the performance you can reasonably expect in that scenario.

Combined Optical Clarity Score

I know this is a lot to take in. Since these tests were all focused on resolution and contrast, I combined them into a single score for “optical clarity.” This essentially represents the overall image quality of the scope. I thought the Snellen tests were the most reliable, since they had a natural check built in that ensured a tester really could see what they said they could. The Snellen chart is focused primarily on resolution, so I hoped to infer contrast more from the USAF line charts. That is why I weighted the score from the low contrast USAF line chart a little higher. Here are the weights I used:

Elements of the Optical Clarity Score:
Snellen Eye Chart: 60%
High Contrast USAF Line Chart: 10%
Low Contrast USAF Line Chart: 30%

To calculate the combined score, the results from each test were normalized to a 100 point scale. Then I combined those according to the weights above. Here are the combined results, which reflect the average overall image quality the six testers found for each scope.
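Here is a minimal sketch of that normalize-then-weight calculation. The normalization shown (scaling the best raw score to 100) is one reasonable choice, and the numbers are hypothetical:

```python
# Sketch of the normalize-then-weight calculation described above.
CLARITY_WEIGHTS = {"snellen": 0.60, "usaf_high": 0.10, "usaf_low": 0.30}

def normalize(raw: dict) -> dict:
    """Rescale each scope's raw test score so the best scope gets 100."""
    top = max(raw.values())
    return {scope: 100 * value / top for scope, value in raw.items()}

def clarity_score(snellen: float, usaf_high: float, usaf_low: float) -> float:
    """Combine the three normalized test scores using the weights above."""
    return (CLARITY_WEIGHTS["snellen"] * snellen
            + CLARITY_WEIGHTS["usaf_high"] * usaf_high
            + CLARITY_WEIGHTS["usaf_low"] * usaf_low)

# Hypothetical normalized scores for one scope
print(clarity_score(snellen=100, usaf_high=90, usaf_low=95))  # 97.5
```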

You can see the Zeiss Victory FL Diavari 6-24×56 ended up on top. I remember the moment when I first took the Zeiss scope out of the box and looked through it. While it’s hard to know how good a scope is just glancing through it, I could immediately see it was going to be a competitor. And it turned out to be on top out of this long list of capable scopes. But the Schmidt and Bender PMII 5-25×56 wasn’t far behind it. After all the results were added up, those two scopes stood out in terms of optical clarity.

One thing to keep in mind is that scopes with a smaller objective lens diameter should theoretically have lower resolution than scopes with larger objectives (all other things being equal). According to Nikon, “Given the same magnification, the larger the objective diameter, the greater the light-collecting power. This results in higher resolution and a brighter image.” So the scopes with less than a 56mm objective that finished well deserve a tip of the hat, including the Nightforce NXS 5.5-22×50, Valdada IOR 3.5-18×50, Valdada IOR RECON 4-28×50, Bushnell Elite Tactical 3.5-21×50, and Vortex Razor HD 5-20×50. And tactical scopes that have a much smaller objective like the March 3-24×42 FFP and the Leupold Mark 6 3-18×44 may simply have had too much of a handicap to overcome when compared to these larger scopes.
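To put rough numbers on the effect Nikon describes, the standard Rayleigh diffraction criterion (my addition here, not something from Nikon or my testing) says the smallest resolvable angle is about 1.22 λ/D:

```python
import math

def rayleigh_limit_arcsec(objective_mm: float, wavelength_nm: float = 550.0) -> float:
    """Diffraction-limited angular resolution (Rayleigh criterion) in arcseconds,
    at a mid-visible wavelength of 550 nm by default."""
    theta_rad = 1.22 * (wavelength_nm * 1e-9) / (objective_mm * 1e-3)
    return math.degrees(theta_rad) * 3600  # radians -> arcseconds

for d in (42, 50, 56):
    print(f"{d} mm -> {rayleigh_limit_arcsec(d):.2f} arcsec")
# 42 mm -> 3.30, 50 mm -> 2.77, 56 mm -> 2.47
```

So purely from diffraction, a 42 mm objective’s resolution limit is about a third coarser than a 56 mm objective’s, before glass quality even enters the picture.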

I was surprised to see the new Schmidt and Bender 3-27×56 High Power scope finish so low. I had actually cast my vote for that scope to end up on top overall, but it had disappointing performance here. They may still have a few kinks to work out in this new design, or maybe I just got a bad unit. I actually tried to contact S&B a couple of times during this test, with no response. This was one of the brand new scopes that the guys at EuroOptic.com let me borrow for the tests, and since it has a $7,000 price tag … I didn’t feel comfortable asking them for another one to double-check. The optical system of the 3-27 is a more complex design that likely has more lenses than the 5-25, which could contribute to its performance.

I was also surprised by how many Nightforce scopes were represented in the top few spots. 50% of the top 6 scopes were Nightforce. Personally, when I think of Nightforce, I don’t think best-in-class optical clarity … I think durability or maybe even repeatability, but not image quality. Don’t get me wrong, I feel like Nightforce makes a great scope … I personally own one and paid retail for it. However, this is exactly why I’m so drawn to an objective, data-driven approach. It is hard for us to look through scopes, even side-by-side, and rank their clarity … and when it involves this many, it’s impossible. The human brain just isn’t made for that, and our short-term memory simply can’t hold all the information we need for a valid comparison. Here is an excerpt published by the Vanderbilt Vision Research Center on this exact topic:

At any instant, our visual system allows us to perceive a rich and detailed visual world. Yet our internal, explicit representation of this visual world is extremely sparse: we can only hold in mind a minute fraction of the visual scene. These mental representations are stored in visual short-term memory (VSTM). Even though VSTM is essential for the execution of a wide array of perceptual and cognitive functions, and is supported by an extensive network of brain regions, its storage capacity is severely limited.

My point in bringing that up is that maybe we trust our brains too much when making visual comparisons. Even with simple information (not even a full visual scene), studies have shown that we can only store around 5-7 things in our short-term memory at one time. But the testing procedures I used here help us overcome the limits of visual short-term memory by capturing what testers saw in the moment, and storing that outside of their heads for later comparison. That’s at least why I personally trust these results more than my personal experience. I also recognize that all humans are biased … including me. So an objective, double-blind test can sometimes reveal hard truths that our biases might otherwise keep us from seeing.


Other Posts in this Series

This is just one of a whole series of posts related to this high-end tactical scope field test. Here are links to the others:
