One of the areas where reviews sometimes fall short is in providing perspective on how component performance evolves over time. While the yearly cadence of new hardware launches provides periodic opportunities to revisit how older components stack up against newer ones, the focus of any given launch is on the CPU or GPU being launched, not on previous cards. Title-specific coverage, meanwhile, is typically measured and written in the days immediately following a launch, often as part of a game or product review. Stack these two trends together, and it can be harder than it ought to be for gamers to figure out how performance has changed over time and to tease out which GPUs are holding their value better than others.

With this in mind, we’ve completed an analysis of the data sets we gathered in our recent review of AMD’s Radeon RX 5700 and Radeon RX 5700 XT, as well as the Gigabyte Aorus RTX 2080 Xtreme 8GB. We updated all of our GPU data sets in late June / early July of 2019, which makes this a good time to look back at how Pascal, Turing, and GCN performance have each evolved over the past nine months.

We’re specifically watching for two trends. First, gamers have raised ongoing concerns about the impact of Meltdown and Spectre patches on game performance. Second, there is a perception in certain circles that Nvidia GPUs lose performance more quickly than their AMD counterparts, either because of intrinsic characteristics of Nvidia GPU design or because the company deliberately handicaps older cards to make newer GPUs look better by comparison.

If I’m being honest, I’ve never believed the more sinister version of this argument. Nvidia and AMD pursued somewhat different optimization strategies in the pre-DX12 era, and Nvidia can reasonably be assumed to focus its optimization efforts on newer GPUs rather than older cards. This is not unique to Nvidia, however. Now that AMD has RDNA in-market, it may face similar decisions about how to prioritize optimization work across its own architectures. There is a difference between saying that Nvidia may focus more on optimizing for newer cards and saying that Nvidia deliberately handicaps older GPUs. In any event, the goal here is to measure how performance has evolved over time in the same suite of titles. We’ll see where the results take us.

Test Setup

All of our tests were run on an Asus Prime Z370-A motherboard with an Intel Core i7-8086K and 32GB of DDR4-3200. The September 2018 Nvidia GPUs were tested using the 411.63 Turing launch driver, while the June retest used the 430.86 Nvidia driver. The AMD Vega 64 and Radeon VII used the Adrenalin 19.5.2 driver. A Samsung 1TB 970 EVO was used for storage. The September 2018 tests were run under Windows 10 1803, while the June 2019 tests were run under Windows 10 1903. All Meltdown, Spectre, and related patches were left in their default states.

While the comparison dates are set at September 2018 and June 2019, this is obviously a bit of a fudge in the Radeon VII’s case, since that card didn’t launch until February 2019. For the Radeon VII, we’re comparing its June performance against its launch performance.

Two games showed performance declines across both Radeon and GeForce hardware: Ashes of the Singularity: Escalation and Warhammer II. Both games declined in every API we tested, though AotS: Escalation lost more performance. We theorize these declines are the result of Spectre and related mitigations. No other games lost performance, and the declines in these two titles were not large enough to change the overall trend across our suite of games.
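To put a decline like that in concrete terms, here’s a minimal sketch of how a before/after delta is computed. The frame rate figures below are placeholders for illustration, not our measured results:

```python
def percent_change(old_fps: float, new_fps: float) -> float:
    """Relative change from the original run to the retest, in percent."""
    return (new_fps - old_fps) / old_fps * 100.0

# Hypothetical averages for an affected title (illustrative only).
september_avg = 72.0  # fps, September 2018 run
june_avg = 68.5       # fps, June 2019 retest

print(f"Delta: {percent_change(september_avg, june_avg):+.1f}%")  # Delta: -4.9%
```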

We measured performance in Ashes of the Singularity: Escalation, Deus Ex: Mankind Divided, Hitman, Metro Last Light Redux, Middle Earth: Shadow of War, Rise of the Tomb Raider, Warhammer II, Shadow of the Tomb Raider, Assassin’s Creed: Origins, and Far Cry 5. The performance figures given for each GPU at each resolution reflect the geometric mean of our results. We used a geometric mean instead of an arithmetic mean because minimum frame rates can vary widely by game; Hitman, for example, regularly returns minimum frame rates between 4 and 12fps for all GPUs. An arithmetic mean would let high-frame-rate titles swamp those low results, while the geometric mean weights each title’s contribution proportionally.
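As a quick illustration of why the choice of mean matters, here’s a minimal sketch using Python’s standard library. The per-game minimum frame rates are invented for illustration, not our actual data:

```python
from statistics import geometric_mean, mean

# Hypothetical minimum frame rates (fps) for one GPU across four titles.
min_fps = {
    "Hitman": 8,                   # very low floor, as noted above
    "Metro Last Light Redux": 48,
    "Warhammer II": 55,
    "Far Cry 5": 63,
}

# The arithmetic mean lets the high-fps titles swamp Hitman's 8fps floor,
# while the geometric mean weights each title's result proportionally.
print(f"arithmetic mean: {mean(min_fps.values()):.1f} fps")           # 43.5 fps
print(f"geometric mean:  {geometric_mean(min_fps.values()):.1f} fps") # ~34.0 fps
```

Note that statistics.geometric_mean requires Python 3.8 or later.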

Our September coverage used a standard GeForce RTX 2080, while the June 2019 data is based on the Gigabyte Aorus RTX 2080 Xtreme, which runs at slightly higher clocks. This may have added 1-2 percent to the RTX 2080’s June results, but the difference is too small to affect our conclusions.

Results

The slideshow below contains our results, graphed by resolution and by minimum versus average frame rates.

Conclusion

The minimum frame rate improvements at 1080p and 1440p are quite solid for Vega 64, the RTX 2080, and the GTX 1080 Ti. The improvements in average frame rates are smaller across the board, but this isn’t necessarily surprising. Vega 64 and Radeon VII are both based on GCN, which has been AMD’s lead architecture for a number of years, long enough to be well-optimized at this point. Nvidia’s RTX 2080 picks up the most consistent gains across all resolutions, probably thanks to the Aorus card’s minor clock bump, or to the fact that Turing is the newest architecture and had the most performance still on the table. Even the GTX 1080 picks up a few frames at 1080p.

There is no evidence that Nvidia has taken any action to harm Pascal performance or to make its older, 2016 GPUs look less attractive than more recent cards. There are no Pascal-specific performance regressions or frame rate issues; the declines in Ashes and Warhammer II affect every GPU, which is why we think they are CPU-related rather than GPU-related. Vega 64’s performance may have come up the most, but Vega 64 has also been on the market for less time than the Pascal family. Even with these improvements, it only matches or slightly underperforms the GTX 1080 at every resolution, in both minimum and average frame rates. The relative standing of these AMD and Nvidia cards has shifted only modestly.

These findings are good news no matter which GPU you own. Vega and Pascal cards continue to deliver the performance we’d expect, while Turing’s slightly larger improvements are in line with what we’d expect from a newer architecture. How a GPU ages will obviously vary by architecture, and since this review didn’t include older cards from the Maxwell or Kepler era, we can’t speak to those GPUs. But as far as Pascal is concerned, Nvidia’s last-generation architecture appears to be aging quite well, and AMD has improved Vega’s performance, too.
