33 million pixels at close to 60 frames per second: that's what two of NVIDIA's new Turing-based GeForce RTX 2080 Ti cards in NVLink can do.

Introduction

If there's something I've always had a passion within a passion for when it comes to technology, it's pushing graphics cards to their limits. I want to see not just what new cards can do for me today and tomorrow, but what they mean for the future of gaming. 4K 120Hz was once thought to be fantasy land: something unachievable for many years, and then that future came to us much quicker than anticipated with NVIDIA's reveal of the Turing GPU architecture.


NVIDIA's introduction of the GeForce RTX 2080 Ti was a massive milestone. It opened the door to beyond-4K-60FPS gaming, something even the next-gen consoles won't get close to with their beefed-up internals provided by AMD in the form of Navi inside the PlayStation 5 and Xbox Scarlett.

The next-gen consoles will hopefully hit a solid 4K 60FPS, but I'm sure it won't be at Ultra settings with anti-aliasing and a 60FPS minimum, let alone offer an option for 120Hz now that NVIDIA's new Big Format Gaming Display (BFGD) TVs will deliver 4K 120Hz gaming at a huge 65 inches.

That's PC-only territory, and will be for the foreseeable future. Hell, consoles would have to go discrete GPU just to get close to the last generation of graphics, let alone the future. NVIDIA has thrown the benchmark bar into the clouds, and now gamers, consumers, and game developers will decide where the chips fall and how high the bar was raised.

Enter NVIDIA

PC gaming is starting to get really expensive, so there needs to be something else that warrants spending upwards of $1199 on a new graphics card.

This is where 4K 120FPS gaming comes into play, with NVIDIA's new 27-inch 4K 144Hz HDR G-Sync monitors arriving through partnerships with ASUS and Acer, as well as the upcoming BFGDs offering 65-inch 4K 120Hz HDR G-Sync. NVIDIA needs to absolutely own the upper echelon of PC gaming while looking into the future: multi-monitor 4K 144Hz setups, 8K 60Hz monitors, and then multiple 8K60 panels.

This is all worlds above anything AMD has on the market with Radeon and its flagship Radeon RX Vega 64 graphics card, something that competes with the GeForce GTX 1080. Remember there's the GTX 1080 11Gbps, GTX 1080 Ti, TITAN Xp, RTX 2080 and RTX 2080 Ti all above AMD's bleeding edge.

But what about 8K? :)

8K Resolution Explained

Beyond 4K 120Hz and 144Hz gaming is 8K, with 4x the pixels of a 4K display. The native resolution of 7680 x 4320 might not seem like much on paper, but while the jump from 1080p to 4K is 4x the pixels, the jump from 1080p all the way to 8K is 16x. Yes... 8K renders 16x the pixels of 1080p. 4K is a big leap from 1080p, but the leap to 8K is astounding.

As you can see from the image above, 8K really blows away everything else, even 4K. It makes 1080p look like nothing, and everything under 1080p becomes literally nothing compared to the sheer amount of pixels that 7680x4320 provides. We're looking at 33 million pixels, compared to just 2 million from 1080p and 8.2 million from 4K.

480p - 720x480 - 345,600 pixels

720p - 1280x720 - 921,600 pixels

1080p - 1920x1080 - 2,073,600 pixels

1440p - 2560x1440 - 3,686,400 pixels

4K - 3840x2160 - 8,294,400 pixels

8K - 7680x4320 - 33,177,600 pixels
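The pixel counts and multipliers above can be sanity-checked with a quick script (a minimal sketch using the standard resolutions listed above):

```python
# Pixel counts for common display resolutions, and how they scale.
RESOLUTIONS = {
    "480p": (720, 480),
    "720p": (1280, 720),
    "1080p": (1920, 1080),
    "1440p": (2560, 1440),
    "4K": (3840, 2160),
    "8K": (7680, 4320),
}

def pixels(name):
    """Total pixels rendered per frame at a given resolution."""
    w, h = RESOLUTIONS[name]
    return w * h

for name in RESOLUTIONS:
    print(f"{name}: {pixels(name):,} pixels")

# 8K renders 16x the pixels of 1080p, and 4x the pixels of 4K.
print(pixels("8K") // pixels("1080p"))  # 16
print(pixels("8K") // pixels("4K"))     # 4
```

At 60FPS that works out to roughly two billion pixels rendered every second at 8K, which is why only the very top of the GPU stack is even in the conversation.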

Now, trying to run games at 33 million pixels at 60 frames per second... that's something we haven't even been close to with graphics cards until NVIDIA's GeForce RTX 2080 Ti. Not the RTX 2080, but the beefed-up RTX 2080 Ti. It's the only card close to delivering Ultra graphics on new games at 8K and close to 60FPS. We have benchmarks from a selection of games, with some really amazing results.

Turing: NVLink Multi-GPU Tech

NVLink = Huge Multi-GPU Upgrade

Before the previous-gen Pascal architecture, NVIDIA GPUs used a single Multiple Input/Output (MIO) interface as the SLI Bridge technology, which allowed the second/third/fourth GPU to transfer its final rendered frame output to the primary GPU that was physically connected to the bridge.

Pascal changed this with a faster dual-MIO interface that arrived as HB-SLI, increasing the bandwidth between the GPUs. This paved the way for higher resolution output, and multiple high-res monitors to be used in NVIDIA Surround. NVLink takes all of this to the next level.

NVIDIA has deployed its new multi-GPU technology, NVLink, into the new GeForce RTX 20 series. NVLink is not something new for the company: it has used the high-speed interconnect on its Quadro RTX series cards, and it was made specifically for the Volta GV100 GPU.

NVIDIA's new Turing-based TU102 and TU104 GPUs now use NVLink for all SLI GPU-to-GPU data transfers, with the higher-end TU102 featuring two NVLink x8 links while TU104 has a single NVLink x8 link. Each individual link provides up to 50GB/sec of bidirectional bandwidth (25GB/sec in each direction), so the two links in TU102 push 100GB/sec in total (50GB/sec each way).
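The bandwidth math is simple enough to express directly (a sketch of the figures above, not NVIDIA's spec sheet):

```python
# NVLink bandwidth on Turing, per the figures above.
PER_LINK_BIDIRECTIONAL_GBPS = 50  # 25 GB/s in each direction

def total_nvlink_bandwidth(num_links):
    """Aggregate bidirectional NVLink bandwidth in GB/s for a GPU."""
    return num_links * PER_LINK_BIDIRECTIONAL_GBPS

print(total_nvlink_bandwidth(2))  # TU102 (RTX 2080 Ti): 100 GB/s
print(total_nvlink_bandwidth(1))  # TU104 (RTX 2080): 50 GB/s
```

For comparison, that single TU104 link alone already outpaces the old HB-SLI bridge by a wide margin, which is what makes high-refresh 4K and 8K Surround setups feasible over the bridge.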

Turing GPUs only support 2-way SLI over NVLink; 3-way and 4-way SLI configurations are not supported.

Up Close: NVLink Bridge

NVLink Is The New SLI

NVIDIA sells the NVLink bridge in two sizes: 3-slot and 4-slot GPU spacing. All motherboards are different, and so is the spacing between custom RTX series graphics cards. I'm using the 3-slot NVLink bridge between my RTX 2080 Ti graphics cards. At the time of writing I'm benchmarking RTX 2080s in NVLink and will have those results in a separate article.

You can buy the GeForce RTX NVLink Bridge on NVIDIA's store for $79, while my NVLink bridge costs $119 AUD here in Australia. NVIDIA currently limits customers to buying just 1 bridge.

The older SLI-HB bridge was much shorter, with a lower pin count of 52 pins while the new NVLink bridge has 124 pins. It's quite wide, that's for sure.

Test System Specs

Our New GPU Test Rig

Welcome to the latest revision of our GPU test bed, with our system being upgraded from the Intel Core i7-7700K to the Core i7-8700K. The CPU is cooled by the Corsair H115i PRO cooler, with the 8700K overclocked to 5GHz. We've stayed with GIGABYTE for our motherboard with their awesome Z370 AORUS Gaming 7.

We approached our friends at HyperX for a kit of their kick ass HyperX Predator DDR4-2933MHz RAM (HX429C15PB3AK4/32), with 2 x 8GB sticks for a total of 16GB of DDR4-2933. The RAM stood out through every minute of our testing with its beautiful RGB lighting, giving the system a slick look while we benchmarked our lives away, and the Z370 AORUS Gaming 7 motherboard joins in with its own array of RGB lighting.

Detailed Tech Specs

CPU : Intel Core i7-8700K @ 5GHz

Cooler : Corsair Hydro Series H115i PRO

MB : Z370 AORUS Gaming 7

RAM : 16GB (2x8GB) HyperX Predator DDR4-2933

SSD : 1TB OCZ RD400 NVMe M.2

SSD : 512GB OCZ RD400 NVMe M.2

PSU : InWin 1065W PSU

Chassis : In Win X-Frame

OS: Windows 10 Pro x64

Additional Images

Benchmarks - 4K

4K Benchmarks

Rise of the Tomb Raider is one of the best looking games on the market, a truly gorgeous game - and a wonder to benchmark. The team at Crystal Dynamics made a very scalable PC game that works really well for testing graphics cards. We've got DX11 and DX12 results in one here, showing the slight strengths of running DX12 mode.

Middle-earth: Shadow of War is a sequel to the popular Shadow of Mordor, which was powered by the Lithtech engine. When cranked up to maximum detail, it will chew through your GPU and its VRAM like it's nothing.


Rainbow Six: Siege has been a strong entry into the franchise, popular for its realistic feel and great graphics. Stable as a rock for benchmarking, right up to 3440x1440 and 4K.

Far Cry 5 was developed by Ubisoft and is powered by the Dunia Engine, an engine that has been modified over the years for Far Cry. The Dunia Engine itself started as a modified version of CRYENGINE, and it scales incredibly well on all sorts of hardware.

F1 2018 is the latest iteration of the super-popular franchise that Codemasters has worked on for close to a decade now, with F1 2009 kicking it all off. The game is powered by the EGO game engine, which is a modified version of the Neon game engine that was developed by Codemasters and Sony Computer Entertainment for Colin McRae: Dirt, released in 2007. The revamped EGO engine was developed to give Codemasters the ability to use more detailed damage and physics in the game world, as well as to render larger-scale environments.

Shadow of the Tomb Raider is one of the latest games to join our graphics card benchmark lineup, with the game built using the Foundation engine as a base, the same engine used in Rise of the Tomb Raider. Eidos Montreal's R&D department made lots of changes to the engine during the development of Shadow of the Tomb Raider to make it one of the best-looking games out right now.

Benchmarks - 8K

8K Benchmarks

Rise of the Tomb Raider is one of the best looking games on the market, a truly gorgeous game - and a wonder to benchmark. The team at Crystal Dynamics made a very scalable PC game that works really well for testing graphics cards. We've got DX11 and DX12 results in one here, showing the slight strengths of running DX12 mode.

Middle-earth: Shadow of War is a sequel to the popular Shadow of Mordor, which was powered by the Lithtech engine. When cranked up to maximum detail, it will chew through your GPU and its VRAM like it's nothing.


Rainbow Six: Siege has been a strong entry into the franchise, popular for its realistic feel and great graphics. Stable as a rock for benchmarking, right up to 3440x1440 and 4K.

Far Cry 5 was developed by Ubisoft and is powered by the Dunia Engine, an engine that has been modified over the years for Far Cry. The Dunia Engine itself started as a modified version of CRYENGINE, and it scales incredibly well on all sorts of hardware.

F1 2018 is the latest iteration of the super-popular franchise that Codemasters has worked on for close to a decade now, with F1 2009 kicking it all off. The game is powered by the EGO game engine, which is a modified version of the Neon game engine that was developed by Codemasters and Sony Computer Entertainment for Colin McRae: Dirt, released in 2007. The revamped EGO engine was developed to give Codemasters the ability to use more detailed damage and physics in the game world, as well as to render larger-scale environments.

Shadow of the Tomb Raider is one of the latest games to join our graphics card benchmark lineup, with the game built using the Foundation engine as a base, the same engine used in Rise of the Tomb Raider. Eidos Montreal's R&D department made lots of changes to the engine during the development of Shadow of the Tomb Raider to make it one of the best-looking games out right now.

Power Consumption

You'd think that two graphics cards running at 4K 120FPS+ in our gaming tests would be chewing down some serious power, but Turing is a very power-efficient GPU architecture and the new GeForce RTX 2080 Ti packs a huge performance punch without needing that much power.

It definitely chews through more power than the GeForce GTX 1080 Ti and TITAN Xp, but not that much more in multi-GPU mode. The two GeForce RTX 2080 Ti cards in NVLink pushed whole-system power consumption to between 700-770W during our testing, sitting around 730-740W most of the time.

Compare this to the GTX 1080 Ti in SLI, which draws somewhere around 550-600W, or TITAN Xp SLI at 600-660W; they're all still low compared to AMD.

Two of AMD's Radeon RX Vega 64 graphics cards in CrossFire consume between 730-830W of power, and they get a hell of a lot hotter as well as much louder. They aren't ideal in multi-GPU at all, whereas the RTX 2080 Ti is not only fast, but power efficient. The RTX 2080 Ti blows away Vega 64 while using less power, and RTX 2080 Ti NVLink continues that trend against Vega 64 CrossFire.

8K 60FPS Gaming Is Now A Reality

Just like I said in my article titled 'GeForce RTX 2080 Ti in NVLink: 4K 120FPS Gaming Is Now Here' - these new Turing-based GeForce RTX 2080 Ti graphics cards in NVLink can handle an 8K 50FPS average... but would you buy them for that? No, of course not. I'm not here telling you to; I'm purely coming at this from a technological standpoint.

The question I'm asking with this article is whether they can push 8K 60FPS, and the answer is... for the first time ever: yes they can. NVIDIA's new GeForce RTX 2080 Ti cards in NVLink can indeed handle 8K 60FPS gaming for the most part. They can't run all games maxed out, but with a few settings tweaked I was able to hit a 60FPS average without a problem, something no other card in any combination can hit, period.

As for the performance upgrade of the GeForce RTX 2080 Ti NVLink over GeForce GTX 1080 Ti SLI, we're looking at:

Rise of the Tomb Raider - 43% faster than GTX 1080 Ti SLI

Shadow of War - 45% faster than GTX 1080 Ti SLI

Rainbow Six Siege - 212% (85% faster than single RTX 2080 Ti)

Far Cry 5 - 23% faster than GTX 1080 Ti SLI

F1 2018 - 45% faster than GTX 1080 Ti SLI (62% faster than single RTX 2080 Ti)

Shadow of the Tomb Raider - 48% faster than GTX 1080 Ti SLI (100% faster than single 2080 Ti)
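Scaling figures like these come straight from average FPS: the uplift is (new FPS / old FPS - 1) x 100. A minimal sketch of that calculation (the FPS values below are made-up placeholders, not our benchmark numbers):

```python
def uplift_percent(new_fps, old_fps):
    """Percentage by which new_fps improves on old_fps."""
    return (new_fps / old_fps - 1) * 100

# Hypothetical example: 60FPS average vs a 40FPS average baseline.
print(round(uplift_percent(60, 40)))  # 50 (% faster)
```

The same formula explains why a "100% faster than single 2080 Ti" result means near-perfect NVLink scaling: the second card is contributing essentially a full card's worth of frames.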

Final Thoughts

NVIDIA is the performance king of graphics cards, period.

You might notice that AMD Radeon graphics cards are completely absent from this testing... and that's because I couldn't even get the Dell UP3218K monitor detected on a Radeon GPU. I tried:

AMD Radeon R9 Fury X

AMD Radeon R9 Nano

Radeon RX 480

Radeon RX 570

Radeon RX 580

AMD Radeon RX Vega 56

AMD Radeon RX Vega 64

AMD Radeon RX Vega 64 LCE

None of these cards would work with the Dell UP3218K on the latest Radeon drivers (or the last few driver releases), but the 8K monitor would be detected and would work with the default drivers that ship with Windows 10. Alternatively, if I powered the machine off (without uninstalling the Radeon drivers), took the Radeon card out, installed any GeForce card and turned it back on, the 8K monitor would detect.

Weird AF. I reached out to AMD to let them know what's going on, but the Radeon team is so low on staff since they've all left... I don't have many people to go to right now, with technical marketing kind of dead. I'll update this article with a '2.0' piece when/if 8K support is fixed on Radeon. Until then, sorry Team Red... 7680x4320 is just too hardcore for Radeon right now.

Spending $2400 on the two GeForce RTX 2080 Ti graphics cards and another $3700 on the Dell UP3218K is pretty damn crazy, as you're looking at a minimum of $10,000 on the complete PC at that point. Not many people are going to do that, but those who are considering an 8K display can now look at real-world 7680x4320 results before they buy.

The other purpose of doing this is because I love to live on the edge. 8K isn't normal, it's beyond the realms of normal. 8K gaming might not even become a reality, but the entire point of buying this display was so I could push every graphics card to its limit. NVIDIA's new Turing GPU architecture, the crazy amounts of horsepower and GDDR6 bandwidth at its disposal, really gets to stretch its legs at 8K.

We can really see the difference between the older Pascal architecture and the new Turing architecture at 8K, with all of that additional bandwidth and horsepower being utilized much more at 8K versus 1080p and 1440p. 4K is still a point of domination for Turing as well, but these results blew even my mind. I didn't expect to get close to 8K 60FPS gaming in 2018, and yet... here we are.