Earlier this year, a fellow editor and I did some pie-in-the-sky thinking about Nvidia’s plans for its next-generation GPUs. We wondered how the company would continue the impressive generation-to-generation performance improvements it had been delivering since Maxwell. We guessed that the AI-accelerating smarts in the Volta architecture might be one way the green team would set apart its next-generation products, but past that, we had nothing.

Turns out the company did us one or two better. With the Turing architecture's improved tensor cores and unique RT cores, Nvidia is shipping a pair of intriguing new technologies in its next-generation chips while also bolstering traditional shader performance with parallel execution paths for floating-point and integer workloads. On top of that, the company introduced a whole new way of programming geometry-related shaders, called mesh shaders, that promises to break the draw-call bottleneck at the CPU for geometry-heavy scenes. There's a lot going on in Turing, to put it mildly. Those interested should consult Nvidia's white paper for more detail.

A logical representation of the TU102 GPU. Source: Nvidia

My speculation about the Turing architecture several weeks back turned out to be more correct than not, at least, even with the wildly incomplete info we had on hand. The GeForce RTX 2080 Ti that we’re testing this morning and the Quadro RTX 8000 that debuted at SIGGRAPH both use versions of one big honkin’ GPU called TU102. At a high level, this 754 mm² chip—754 mm²!—hosts six graphics processing clusters (GPCs) in Nvidia parlance, each with 12 Turing streaming multiprocessors (SMs) inside. The RTX 2080 Ti has four of its SMs disabled for a total of 4352 shader ALUs (or “CUDA cores,” if you like), of a potential 4608.

The full TU102 chip has 96 ROPs, but as a slightly cut-down part, the RTX 2080 Ti has 88 of those chiclets enabled. In turn, the highest-end Turing GeForce so far boasts a 352-bit bus to 11 GB of memory. TU102 gets to play with cutting-edge, 14-Gbps GDDR6 RAM, though, up from the 11 Gbps per-pin transfer rates of GDDR5X on the GTX 1080 Ti. That works out to 616 GB/s of raw memory bandwidth. Nvidia also claims to have improved the delta-color-compression routines it’s been employing since Fermi to eke out more effective bandwidth from the RTX 2080 Ti’s bus. Between GDDR6’s higher per-pin clocks and the improved color-compression smarts of Turing itself, Nvidia claims 50% more effective bandwidth from TU102 compared to the GP102 chip in the GTX 1080 Ti.
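The raw-bandwidth figure falls out of simple arithmetic: the bus width in bytes times the per-pin data rate. A quick sketch to check the numbers (the function name is ours, for illustration):

```python
def peak_bandwidth_gb_s(bus_width_bits, data_rate_gbps):
    """Raw memory bandwidth in GB/s: bus width in bytes times per-pin rate."""
    return bus_width_bits / 8 * data_rate_gbps

print(peak_bandwidth_gb_s(352, 14))  # RTX 2080 Ti, 14-Gbps GDDR6: 616.0 GB/s
print(peak_bandwidth_gb_s(352, 11))  # GTX 1080 Ti, 11-Gbps GDDR5X: 484.0 GB/s
```

Note that this is the raw figure only; the effective-bandwidth gains Nvidia claims from improved color compression come on top of it.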

Despite its monstrous and monstrously complex die, the RTX 2080 Ti Founders Edition actually comes with a slightly higher boost clock spec than the smaller GP102 die before it, at 1635 MHz, versus 1582 MHz for the GTX 1080 Ti. Nvidia calls that a factory overclock—if you believe overclocks are something that come with a warranty, at least. In practice, the GPU Boost algorithm of Nvidia graphics cards will likely push Turing chips to similar real-world clock speeds, given adequate cooling. We'll need to test that for ourselves soon.

Aside from the big and future-looking changes in Turing chips themselves, Nvidia’s new pricing strategy for the RTX 2070, RTX 2080, and RTX 2080 Ti is going to make for some tricky generation-on-generation comparisons. The $600 RTX 2070 is $150 more expensive than the $450 GTX 1070 Founders Edition. The $800 RTX 2080 Founders Edition sells for $100 more than the GTX 1080 Founders Edition did at launch—and as much as $300 more than that card’s final suggested-price drop to $500. In turn, the RTX 2080 Ti Founders Edition commands a whopping $500 more than the GTX 1080 Ti’s $700 sticker, at $1200.

In the past, then, the RTX 2070 might have been called an RTX 2080, the RTX 2080 a 2080 Ti, and the RTX 2080 Ti some kind of Titan. The reality of Turing naming and pricing seems meant to allow Nvidia to claim massive generation-to-generation performance increases versus Pascal cards by drawing parallels between model names and eliding those higher sticker prices.

Dollar-for-dollar, however, keep in mind that the RTX 2080's $700 partner-card suggested price and the Founders Edition's $800 price tag make the $699-and-up GeForce GTX 1080 Ti a better point of comparison for Turing's middle child. The GeForce RTX 2080 Ti Founders Edition matches the Titan Xp almost dollar-for-dollar. We don't have a Titan Xp or Titan V handy to test our RTX 2080 Ti against our back-of-the-napkin math for those cards, but our theoretical measures of peak graphics performance put the RTX 2080 a lot closer to the GTX 1080 Ti than not. On a price-to-performance basis, then, the improvements in Turing for traditional rasterization workloads could be more modest than Nvidia's claims suggest.

On top of the naming confusion, the two suggested-price tiers for Turing cards—a cheaper one for partner cards and a more expensive one for Nvidia's Founders Editions—seem guaranteed to cause double-takes. At least in the early days of Turing, I don't expect Nvidia's board partners to leave a single dollar on the table with those separate, lower prices when Founders Edition cards are commanding more money for what is essentially the same product. In the real world, the Founders Edition suggested price is the de facto suggested price, and retailer listings are already bearing that fact out.

Our testing methods

If you’re new to The Tech Report, we don’t benchmark games like most other sites on the web. Instead of throwing out a simple FPS average—a number that tells us only the broadest strokes of what it’s like to play a game on a particular graphics card—we go much deeper. We capture the amount of time it takes the graphics card to render each and every frame of animation before slicing and dicing those numbers with our own custom-built tools. We call this method Inside the Second, and we think it’s the industry standard for quantifying graphics performance. Accept no substitutes.

What’s more, we don’t rely on canned in-game benchmarks—routines that may not be representative of performance in actual gameplay—to gather our test data. Instead of clicking a button and getting a potentially misleading result from those pre-baked benches, we go through the laborious work of seeking out interesting test scenarios that one might actually encounter in a game. Thanks to our use of manual data-collection tools, we can go pretty much anywhere and test pretty much anything we want in a given title.

Most of the frame-time data you'll see on the following pages were captured with OCAT, a software utility that uses data from the Event Tracing for Windows API to tell us when critical events happen in the graphics pipeline. We perform each test run at least three times and take the median of those runs where applicable to arrive at a final result. Where OCAT didn't suit our needs, we relied on the PresentMon utility.
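As a rough sketch of how a capture reduces to the summary numbers in our charts, consider the snippet below. The frame-time log and helper names here are ours, made up for illustration; real captures hold thousands of frames from a one-minute run.

```python
def percentile(frame_times_ms, pct):
    """Frame time (ms) at the given percentile, using the nearest-rank method."""
    ordered = sorted(frame_times_ms)
    rank = max(1, round(pct / 100 * len(ordered)))
    return ordered[rank - 1]

frame_times = [12.1, 13.4, 11.8, 35.2, 12.6, 12.9, 14.0, 12.2]  # hypothetical, in ms
avg_fps = 1000 * len(frame_times) / sum(frame_times)  # the traditional FPS average
p99 = percentile(frame_times, 99)                     # 99th-percentile frame time
print(f"{avg_fps:.1f} FPS average, {p99:.1f}-ms 99th-percentile frame time")
```

The point of the 99th-percentile figure is visible even in this toy data: one 35.2-ms hitch barely dents the FPS average, but it dominates the percentile metric—which is exactly why we lead with the latter.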

As ever, we did our best to deliver clean benchmark numbers. Our test system was configured like so:

Processor: Intel Core i7-8086K
Motherboard: Gigabyte Z370 Aorus Gaming 7
Chipset: Intel Z370
Memory size: 16 GB (2x 8 GB)
Memory type: G.Skill Flare X DDR4-3200
Memory timings: 14-14-14-34 2T
Storage: Samsung 960 Pro 512 GB NVMe SSD (OS), Corsair Force LE 960 GB SATA SSD (games)
Power supply: Corsair RM850x
OS: Windows 10 Pro with April 2018 Update

Thanks to Corsair, G.Skill, and Gigabyte for helping to outfit our test rigs with some of the finest hardware available. AMD, Nvidia, and EVGA supplied the graphics cards for testing, as well. Behold our fine Gigabyte Z370 Aorus Gaming 7 motherboard before it got buried beneath a pile of graphics cards and a CPU cooler:

Unless otherwise specified, image quality settings for the graphics cards were left at the control panel defaults. Vertical refresh sync (vsync) was disabled for all tests. We tested each graphics card at a resolution of 4K (3840×2160) and 60 Hz, unless otherwise noted. Where in-game options supported it, we used HDR modes, adjusted to taste for brightness. Our HDR display is an LG OLED55B7A television.

The tests and methods we employ are generally publicly available and reproducible. If you have questions about our methods, hit our forums to talk with us about them.

Shadow of the Tomb Raider

The final chapter in Lara Croft’s most recent outing is one of Nvidia’s headliners for the GeForce RTX launch. It’ll be getting support for RTX ray-traced shadows in a future patch. For now, we’re testing at 4K with HDR enabled and most every non-GameWorks setting maxed.





The RTX 2080 Ti blasts out of the gate in Shadow of the Tomb Raider. Its impressive average frame rates are tempered by some concerning patches of frame-time spikes, an issue experienced to a lesser degree by the RTX 2080, as well. We retested the game several times in our location of choice and couldn’t make that weirdness go away, so perhaps some software polish is needed one way or another. Still, the performance potential demonstrated by the GeForce RTX cards is quite impressive. Remember that we’re gaming at 4K, in HDR, with almost all the eye candy turned up in a cutting-edge title.





These “time spent beyond X” graphs are meant to show “badness,” those instances where animation may be less than fluid—or at least less than perfect. The formulas behind these graphs add up the amount of time our graphics card spends beyond certain frame-time thresholds, each with an important implication for gaming smoothness. Recall that our graphics-card tests all consist of one-minute test runs and that 1000 ms equals one second to fully appreciate this data.

The 50-ms threshold is the most notable one, since it corresponds to a 20-FPS average. We figure if you're not rendering any faster than 20 FPS, even for a moment, then the user is likely to perceive a slowdown. 33.3 ms corresponds to 30 FPS, or a 30-Hz refresh rate. Go lower than that with vsync on, and you're into the bad voodoo of quantization slowdowns. 16.7 ms corresponds to 60 FPS, that golden mark that we'd like to achieve (or surpass) for each and every frame.
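Computing these tallies from a frame-time log is a simple accumulation. Here's a minimal sketch, assuming excess-over-threshold accounting; the function name and sample data are ours for illustration, not drawn from any released tool:

```python
# FPS equivalent of a frame-time threshold is 1000 / threshold:
# 50 ms -> 20 FPS, 33.3 ms -> 30 FPS, 16.7 ms -> 60 FPS,
# 8.3 ms -> 120 FPS, 6.94 ms -> 144 FPS.
def time_spent_beyond(frame_times_ms, threshold_ms):
    """Total time (ms) spent past the threshold: for each frame that takes
    longer than the threshold, accumulate only the excess over it."""
    return sum(t - threshold_ms for t in frame_times_ms if t > threshold_ms)

run = [14.2, 15.0, 13.8, 40.0, 16.0, 55.0]  # hypothetical frame times, in ms
print(time_spent_beyond(run, 33.3))  # only the 40-ms and 55-ms frames count
```

Counting only the excess over the threshold, rather than each offending frame's whole duration, keeps a single borderline frame from swamping the tally.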

To best demonstrate the performance of powerful graphics cards like these, it's useful to look at our strictest graphs. 8.3 ms corresponds to 120 FPS, the lower end of what we'd consider a high-refresh-rate monitor. We've recently begun including an even more demanding 6.94-ms mark that corresponds to the 144-Hz maximum rate typical of today's high-refresh-rate gaming displays.

Despite its fuzziness in our frame-time plots, the RTX 2080 Ti doesn’t chalk up more than a handful of milliseconds past the 33.3-ms threshold. It also spends just under three seconds of our one-minute test run on tough frames that would drop frame rates below 60 FPS. Even the GTX 1080 Ti can’t hope to keep up with this seriously impressive performance. The RTX 2080 is the only thing that comes close.

Project Cars 2





Once again, the RTX 2080 Ti leaps ahead of the pack, even at 4K and with Project Cars 2's settings maxed. Its 99th-percentile frame time suggests gamers won't see frame rates below 60 FPS much, if any, of the time. Let's see how that plays out in our time-spent-beyond-X graphs.





Not a single millisecond shows up in the RTX 2080 Ti’s bucket at our 16.7-ms threshold. This is the holy grail of 4K gaming performance: high average frame rates with worst-case performance that never dips below 60 FPS. The RTX 2080 isn’t far behind by this measure, though. To really show what the RTX 2080 Ti can do, it’s worth flipping over to our 11.1-ms graph. There, the Ti spends just over four seconds on tough frames that drop rates below 90 FPS. The RTX 2080 spends 10 seconds longer on those tough scenes.

Hellblade: Senua’s Sacrifice





Hellblade relies on Unreal Engine 4 to depict its Norse-inspired environs in great detail, and playing it at 4K really brings out the work its developers put in. As we’ve come to expect so far, the RTX 2080 Ti opens a wide lead over the rest of the pack, even if it can’t deliver the 16.7-ms 99th-percentile frame time we’d want for near-perfect smoothness.





Despite that slightly bumpy 99th-percentile frame time, the 2080 Ti spends just under two seconds of our one-minute test run on tough frames that drop the instantaneous frame rate below 60 FPS. The RTX 2080 spends a whopping 12 seconds on similarly difficult work, and the numbers only snowball from there.

Gears of War 4





Gears of War 4's DirectX 12 flavor of Unreal Engine 4 takes well to the RTX 2080 Ti. Once again, we get impressively high average frame rates and a sterling 99th-percentile frame time.





To drive home just how well the RTX 2080 Ti plays Gears of War 4, the top-end Turing card so far spends just 23 ms of our one-minute test run on frames that spoil its 60-FPS-or-better performance. It’s hard to ask for more.

Far Cry 5





Despite running Far Cry 5 at 4K with HDR and maxed settings, the RTX 2080 Ti turns in another familiar performance. Its 99th-percentile frame time isn’t quite perfect, but as usual, we can turn to our time-spent-beyond-X metrics to see just how short it fell.





Not too short at all, as it happens. The RTX 2080 Ti spends just 216 ms of our test run on frames that take longer than 16.7 ms to render. That’s seriously impressive performance.

Assassin’s Creed Origins





Assassin’s Creed Origins is one of the most punishing titles of recent memory, and even the RTX 2080 Ti can’t push it much past 60 FPS at 4K with HDR, on average. The card’s 99th-percentile frame time is well-controlled, but it’s well over the 16.7 ms we’re looking for.





As usual, though, the RTX 2080 Ti spends just a blip of our test run on tough frames that take longer than 33.3 ms to render. Even compared to the RTX 2080, the 2080 Ti is in a league of its own.

Deus Ex: Mankind Divided





Deus Ex: Mankind Divided might be a little more aged than some of the games we’re looking at today, but that doesn’t mean it isn’t still a major challenge for any graphics card at 4K and max settings. The RTX 2080 Ti delivers a commendably high average frame rate, as usual, but it can’t keep 99th-percentile frame times in check with the same aplomb.





Despite that high 99th-percentile frame time, our time-spent-beyond-33.3-ms threshold suggests those frames make up only a small portion of the whole in our test run. With just about three seconds spent working on frames that take longer than 16.7 ms to render, the RTX 2080 Ti continues its impressive streak of smooth 4K gaming, too.

Watch Dogs 2





Like Deus Ex, Watch Dogs 2 is an absolute hog of a game if you start dialing up its settings. Add a 4K target resolution to the pile, and the game crushes most graphics cards to dust. Only the GeForce GTX 1080 Ti, RTX 2080, and RTX 2080 Ti even produce playable frame rates, on average, and their 99th-percentile frame times testify to the fact that there’s no putting a leash on this canine.





For all that, the RTX 2080 Ti does a commendable job of bringing Watch Dogs 2 to heel. It only spends a handful of milliseconds past our 33.3-ms threshold, and it only puts up about five seconds of our one-minute run on the 16.7-ms chalkboard. Gamers after a smoother experience might want to dial back a couple of the eye-candy settings in this title, but even with our demanding setup, the RTX 2080 Ti provides a smooth enough and enjoyable enough time.

Wolfenstein II





So, uh, that's really something. Wolfenstein II uncorks an as-yet-unseen reserve of performance from our Turing cards. Both are so fast in this game, in fact, that I had to double-check and make sure that I was still testing at 4K. Interestingly enough, Nvidia uses Wolfenstein II as a demonstration for another Turing feature that we haven't gotten deep into yet—variable-rate shading—that could enhance performance even further, if you can believe it from these numbers.





The RTX 2080 Ti puts no time at all on the board at our 16.7-ms threshold, and it only spends a little under three seconds in total on tough frames that take longer than 8.3 ms to render. That's some seriously impressive performance, and if Nvidia is to be believed, this game could still run faster on Turing.

DLSS performance with Epic’s Infiltrator demo









DLSS performance with the Final Fantasy XV benchmark









Conclusions

The GeForce RTX 2080 Ti provides that most satisfying of feelings when you fire it up in tandem with a 4K HDR display: a constant, low-level electricity that feels like the hairs on the back of your neck are about to stand up. That’s thanks to its eyebrow-raising performance in most of the titles that we could throw at it, all with levels of eye candy that make other graphics cards wither.





With the RTX 2080 Ti Founders Edition in our test rig, I found myself spontaneously savoring the smoothness and fluidity of Far Cry 5‘s Montanan waterfalls, marveling at the wavering points of candlelight in the opening scenes of Shadow of the Tomb Raider, and squinting at the fire of the desert sun in Assassin’s Creed Origins just to feel more of that pleasant tingle. It’s the kind of feeling that makes it easy to forget that you dropped $1200 or more on a graphics card.

That tantalizing feeling comes even before we consider the potential of Deep Learning Super-Sampling, or DLSS, an AI model powered by Turing’s tensor cores. We saw enormous gains in performance at 4K in the two canned demos we were able to test with DLSS versus full-fat 4K rendering with temporal anti-aliasing, and even my hyper-critical graphics-reviewer eye couldn’t pick out any significant degradation in image quality from the switch to DLSS. Black magic, that, but it really seems to work.

Assuming its performance carries through to real-world gaming, DLSS is great news for folks who have so far had to compromise on image quality to get high-refresh-rate 4K experiences. We’ll want to reserve final judgment until fully-playable titles with DLSS support hit the market, but I’m optimistic the feature will do a lot to make 4K monitors useful to the enthusiast rather than a curiosity for the pixel-addicted.

Ray-traced effects are the other half of what might make Turing revolutionary, but we’re going to have to save testing them for a later date. We got to try the “Reflections” Star Wars demo that Nvidia used to introduce its RTX technology on our own Turing cards, and man, does it ever look cool to see Captain Phasma’s armor reflect every light in a First Order hallway in real time and with convincing detail. The problem is that said demo is meant to run at a non-interactive 24 FPS with the help of DLSS, and there’s no telling how ray-traced effects will perform in interactive gaming where demands on responsiveness and frame rates are much higher.

This is normally where we’d talk about the competition, but if you hunger for more performance from your ultra-high-end gaming PC than what we’ve enjoyed over the past couple of years, where else are you going to get it? Uncompetitive markets have never been a good thing for PC builders, but the graphics-card space seems poised to become one—and by no fault of Nvidia’s, to be clear. Until Intel or AMD have something to show that can challenge Turing, the high-end PC gaming crown is the green team’s to lose.

The flip side of that bittersweet situation is that TSMC and Nvidia have extended an incredibly consistent and enviably successful streak of execution that began with Maxwell, continued through Pascal, and seems poised to keep going with Turing. I have to admire the sheer ballsiness of the green team for pushing die sizes to the limit and continuing to advance performance even as it faces perhaps the least competition in high-end graphics that it ever has.

So, should you buy an RTX 2080 Ti? Even if we put a -tan sticker on the end of that Ti, this card’s sticker price is going to give all but the most well-heeled and pixel-crazy pause. Titan cards have always been about having the very best, price tags be damned, and Nvidia’s elevation of the Ti moniker to its Titan cards’ former price point doesn’t change that fact.

If you can’t tolerate anything but the highest-performance gameplay at 4K with most every setting cranked, the 2080 Ti is your card. Its potential second wind from DLSS feels almost like showing off, and that’s a switch that owners should be able to flip with more and more games in the near future. Even without DLSS, the RTX 2080 Ti is the fastest single-GPU card we’ve ever tested by a wide margin. If you want the best in this brave new world of graphics, this is it. Just be ready to pony up.