AMD is leading us into a new, exciting era of graphics technology - where ultra-fast memory is connected directly to the core, enabling higher performance, enhanced power efficiency and a new wave of small form-factor graphics cards. The Radeon R9 Fury X is the first GPU to arrive boasting this cutting-edge tech, with AMD telling us that it is the fastest single-chip GPU on the market, a title currently held by Nvidia's mammoth Titan X 12GB. Well, the reality is that the Fury X is a fascinating first-gen product with plenty of positives, but in terms of raw performance, both Nvidia's Titan X and its cut-down GTX 980 Ti are generally faster and more versatile for the high-end enthusiast market.

As always, performance is king, so AMD's inability to be comprehensively competitive with Nvidia's GM200 across the length and breadth of our benchmarks is a little disappointing - but certainly in terms of the physical package, it's great to see that the poor reference cooling design of the 200 series is now a thing of the past. The Fury X is built from quality materials that look good and even feel good, and the dinky, compact nature of the 7.5-inch board is quite remarkable - it's a marvel of integration. The work AMD carried out on the Radeon R9 295X2's reference water cooler is carried over and refined on Fury X, which also has its own closed-loop set-up that is significantly quieter than Nvidia's reference coolers, though it is accompanied by a continuous, consistent, high-pitched tone - presumably emanating from the pump. It was a little bothersome on the test bench, but will hopefully be less of an issue when the card is installed deep within a decent case.

Radeon R9 Fury X specs

The Fiji processor in the R9 Fury X is based on AMD's third-generation GCN architecture, previously found in the R9 285/380 and codenamed Tonga. Doubling up on stream processors brings the shader count to a gargantuan 4096, up from the 2816 found in the R9 290X/390X. This core is then combined with ultra-fast, ultra-wide HBM RAM.

Stream Processors: 4096
Texture Units: 256
ROPs: 64
Max Clock: 1050MHz
Memory: 4GB HBM
Memory Clock: 500MHz
Bandwidth: 512GB/s
Process: 28nm
Transistor Count: 8.9bn
Max TFLOPs: 8.6
Die Size: 596mm²
TDP: 275W

AMD announced a $650 price-point for Fury X in the USA, bringing it into line with the GTX 980 Ti. In the UK, Fury X starts at £509 - which is a good deal cheaper than current 980 Ti prices here.
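The headline spec-sheet numbers hang together arithmetically. A quick Python sketch - assuming HBM1's 500MHz clock with double-data-rate signalling across a 4096-bit bus, and the standard convention of two FLOPs (one fused multiply-add) per stream processor per clock - reproduces the bandwidth and TFLOPs figures:

```python
# Sanity-check the Fury X spec-sheet maths using standard GPU conventions.

def hbm_bandwidth_gbs(bus_width_bits, clock_mhz, transfers_per_clock=2):
    """Peak bandwidth in GB/s: bus width x effective data rate / 8 bits per byte."""
    return bus_width_bits * clock_mhz * 1e6 * transfers_per_clock / 8 / 1e9

def peak_tflops(stream_processors, clock_mhz, flops_per_clock=2):
    """Single-precision peak: each SP performs one fused multiply-add (2 FLOPs) per clock."""
    return stream_processors * clock_mhz * 1e6 * flops_per_clock / 1e12

print(hbm_bandwidth_gbs(4096, 500))  # 512.0 GB/s
print(peak_tflops(4096, 1050))       # 8.6016 TFLOPs, quoted as 8.6
```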

Comparisons with decent aftermarket coolers are interesting, though - the fan mounted on the radiator isn't silent when it's really being pushed, and the overall package isn't that much quieter than the MSI cooler we saw recently on the R9 390X. The difference comes down to the fact that the radiator is mounted on the case, where it can push heat directly out of the chassis - something the fancy third-party coolers rely on case airflow to achieve.

The aesthetics are finished off with a red LED Radeon logo, along with a series of lights designed to give some rudimentary measurement of GPU load. Other features include a dual-BIOS switch (there are two BIOSes, one of which you can re-write), while power is supplied via two eight-pin inputs fed from your PSU. Display outputs consist of three DisplayPorts, along with HDMI 1.4a video. The end of the DVI port is nigh, it seems.



This is usually the part of the review where we get some idea of a new GPU's capabilities by running it through our Crysis 3 gameplay test, where we attempt to run the game at a display's native resolution and as close to its refresh rate as possible: 60Hz. Which brings up an interesting point - Fury X is seemingly targeted at 4K gamers, but the reality is that the latest top-tier GPUs are much more suited to 1440p gameplay with all the visual trimmings. And this presents a slight issue: as you'll see later in this review, Fury X really is at its best - and at its most competitive - at UHD. Our solution? To carry out the Crysis test at both 1440p and 4K on both Fury X and GTX 980 Ti - and luckily, our brand new 4K, 60Hz DisplayPort 1.2 capture solution came online just in time for the occasion.

It is worth bearing in mind that 4K is a 4x increase in pixel count over 1080p, and a 2.25x boost over 1440p. Driving that sort of resolution on top-end quality settings is a fool's errand: something has to give, so at UHD we drop down from Crysis 3's very high quality preset to high. Suffice to say that this reduces GPU overhead massively. As is often the case when ultra settings are dropped down just one notch, there's only a limited impact on image quality, and it's mostly unseen in the thick of the action. To add some spice to the proceedings, the GTX 980 Ti joins the mix, operating at exactly the same settings.
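The pixel-count arithmetic is easy to verify from the raw resolutions:

```python
# Pixel counts behind the resolution claims: 4K is 4x 1080p and 2.25x 1440p.
RESOLUTIONS = {"1080p": (1920, 1080), "1440p": (2560, 1440), "4K": (3840, 2160)}

def pixel_ratio(higher, lower):
    """How many times more pixels the first resolution pushes than the second."""
    wh, hh = RESOLUTIONS[higher]
    wl, hl = RESOLUTIONS[lower]
    return (wh * hh) / (wl * hl)

print(pixel_ratio("4K", "1080p"))  # 4.0
print(pixel_ratio("4K", "1440p"))  # 2.25
```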

The end result? Well, 60fps can't be sustained at either of our quality setting/resolution combos but it's clear that it's the GTX 980 Ti that gets closer to the target. We'd need to drop down to medium to sustain something closer to 60fps at 4K, which strongly suggests to us that even the latest 'uber' GPUs don't have the plumbing to drive UHD displays with gameplay fast enough to match the typical 60Hz refresh.

Fury X and GTX 980 Ti compared in Crysis 3 as we aim - with only limited success - to drive 60fps at 1440p on very high settings, and 4K at the high preset.

| Crysis 3 V-Sync Gameplay | R9 Fury X 1440p | GTX 980 Ti 1440p | R9 Fury X 4K | GTX 980 Ti 4K |
| --- | --- | --- | --- | --- |
| Lowest Frame-Rate | 40.0fps | 44.0fps | 28.0fps | 30.0fps |
| Dropped Frames (from 18650 total) | 1141 (6.12%) | 624 (3.35%) | 5320 (28.53%) | 3626 (19.44%) |
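The dropped-frame percentages come straight from the raw counts against the 18,650-frame capture:

```python
# Dropped-frame rates from the v-sync capture (18,650 frames total per run).
TOTAL_FRAMES = 18650
dropped = {
    "Fury X 1440p": 1141,
    "980 Ti 1440p": 624,
    "Fury X 4K": 5320,
    "980 Ti 4K": 3626,
}
for run, count in dropped.items():
    print(f"{run}: {count / TOTAL_FRAMES:.2%}")  # e.g. Fury X 1440p: 6.12%
```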


But this is just one game, one experience. To get an idea of what Fury X is capable of across more modern titles, it's time to break out the benchmarks, based on the Fury X paired with a Core i7 4790K system overclocked to 4.6GHz, matched with 16GB of 1600MHz DDR3 and running from a Crucial SSD. We usually start our tests at 1080p and scale up from there, but starting at 4K is perhaps the more logical approach - and it's certainly where AMD is most competitive. Similar to the Crysis 4K gameplay test, we knock down all games by one 'notch' from maximum, in an attempt to remove the worst of the diminishing returns found on ultra-level settings.

Out of the nine games, AMD musters four wins over the 980 Ti. Three of them - Ryse, Far Cry 4 and Shadow of Mordor - aren't really much of a surprise. These titles always seem to run faster on AMD's hardware (the result, we suspect, of console GCN optimisations feeding through to PC GCN hardware). Titles like The Witcher 3 and Call of Duty only show a small advantage to Nvidia, but Battlefield 4 sees the 980 Ti storm ahead, dominating by over 19 per cent - the biggest margin of the lot.

Is 4GB of VRAM enough?

There's a simple rule of thumb developers have been telling us for years now - when it comes to making a GPU purchase, the more VRAM you have onboard and the faster it is, the better. This puts AMD in a difficult position with the R9 Fury X. HBM is expensive, and while it's ultra-fast, current designs are limited to 4GB of RAM. Meanwhile, Nvidia ships the GTX 980 Ti with 6GB of GDDR5, while AMD's own R9 390 and 390X have a colossal 8GB of onboard memory. The question is whether 4GB is enough, or if current and future titles need more.

Our benchmarks typically omit multi-sampling anti-aliasing (traditionally a large drain on memory), so across 1080p, 1440p and 4K, we don't tend to see anything above 4GB of RAM making much in the way of difference right now. In future, that might be entirely different, of course. However, one of our tests does indeed push memory to the limits - Assassin's Creed Unity, running at 4K on very high settings with FXAA. And here's where we see some interesting data. Click on the images in this sidebar, and note the latency spikes on the red and orange lines (the rectangular drops on the right graph) representing the Fury X and the R9 290X - the only 4GB cards in the comparison. Then note the lack of such spikes on the R9 390X and GTX 980 Ti, both of which have much more than 4GB of RAM.

If there's a smoking gun here, it's that we run the 290X and 390X on the same driver - effectively the same hardware, with the only differences coming from clock-speed and VRAM allocation. If more VRAM wasn't helpful here, we should expect the R9 390X to stutter just like the Fury X and the 290X, but it doesn't. This suggests that VRAM - or the lack of it - is the culprit for the latency spikes we do see, even though the Fury X has delta compression technology that the 290X lacks, which should help to make its 4GB go further.
Right now, we feel that 4GB isn't a deal-breaker for the Fury X - most games fit within that allocation relatively comfortably as the benchmarks attest, though we do shy away from Shadow of Mordor's ultra textures (as the developer does not recommend them for sub-6GB cards). However, based on our discussions with developers, we feel that VRAM utilisation is only moving in one direction - upwards - and that's a consequence of the lavish amount of unified memory available on the current generation of consoles dictating the trend of development. As one well-placed developer told us recently: "The harder we push the hardware and the higher quality and the higher res the assets, the more memory we'll need and the faster we'll want it to be. Our games currently in development are hitting memory limits on the consoles left, right and centre now - so memory optimisation is on my list pretty much constantly."
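The latency spikes we describe in the sidebar graphs can be flagged programmatically from raw frame-time data. This is a minimal sketch only - the 50ms threshold and the sample numbers are illustrative assumptions, not our actual capture pipeline:

```python
# Illustrative frame-time spike detection; threshold and data are hypothetical.

def find_spikes(frame_times_ms, threshold_ms=50.0):
    """Return (index, frame time) pairs for frames slower than the threshold."""
    return [(i, t) for i, t in enumerate(frame_times_ms) if t > threshold_ms]

# Hypothetical capture: mostly ~33ms frames with one VRAM-related stall.
sample = [33.3, 34.1, 33.2, 112.5, 33.8, 33.4]
print(find_spikes(sample))  # [(3, 112.5)]
```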

However, once overclocking is factored in, the GTX 980 Ti is back in the game across the board, increasing its lead or getting very, very close in the titles that favour AMD hardware. We could add 200MHz to the core clock and 400MHz to the RAM on the Nvidia card while remaining stable in our most stringent overclocking stress tests. Fury X has no memory overclocking (AMD tells us that it would be pointless, even if you could do it), and while you should be able to up the core by 90-100MHz in many titles, the card consistently failed our stress tests at those speeds - we were finally stable at 1113MHz (a six per cent boost via AMD Overdrive), just 63MHz over the base clock.
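For reference, the Fury X's stable overclock works out like this as a percentage of the stock clock:

```python
# Fury X overclocking headroom: 1113MHz stable against a 1050MHz stock clock.

def oc_headroom_pct(stock_mhz, stable_mhz):
    """Stable overclock expressed as a percentage gain over stock."""
    return (stable_mhz - stock_mhz) / stock_mhz * 100

print(round(oc_headroom_pct(1050, 1113), 1))  # 6.0 - the six per cent quoted
```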

At 3840x2160 - 4K resolution - it's a very close battle between Fury X and GTX 980 Ti. This is where the AMD card is at its most competitive, beating the 980 Ti in four of the nine games we test.

| 3840x2160 (4K) | R9 390X | GTX 980 | Titan X | GTX 980 Ti | GTX 980 Ti OC | R9 Fury X | R9 Fury X OC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| The Witcher 3, High, HairWorks Off, Custom AA | 29.1 | 27.7 | 37.5 | 36.9 | 40.7 | 36.2 | 37.6 |
| Battlefield 4, High, Post-AA | 44.5 | 46.8 | 61.3 | 61.0 | 69.6 | 51.0 | 52.5 |
| Crysis 3, High, SMAA | 40.2 | 39.0 | 52.4 | 52.5 | 59.7 | 49.2 | 51.1 |
| Assassin's Creed Unity, Very High, FXAA | 22.7 | 21.8 | 27.4 | 26.5 | 29.0 | 25.3 | 26.7 |
| Far Cry 4, Very High, SMAA | 44.4 | 36.1 | 46.7 | 47.1 | 50.9 | 50.5 | 50.5 |
| COD Advanced Warfare, Console Settings, FXAA | 76.4 | 72.0 | 90.8 | 86.9 | 96.9 | 85.3 | 88.0 |
| Ryse: Son of Rome, Normal, SMAA | 37.8 | 31.5 | 42.2 | 41.7 | 45.6 | 44.0 | 45.7 |
| Shadow of Mordor, High, High Textures, FXAA | 50.1 | 42.4 | 54.8 | 54.8 | 59.7 | 55.5 | 57.1 |
| Tomb Raider, Ultra, FXAA | 51.4 | 47.1 | 64.6 | 61.3 | 66.0 | 63.9 | 66.8 |
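To put the stock 4K standings into relative terms, a short script - with figures lifted from the table above for a few representative titles - computes the percentage gap between the two flagships:

```python
# Stock 4K results for selected titles: game -> (GTX 980 Ti fps, R9 Fury X fps).
results_4k = {
    "The Witcher 3": (36.9, 36.2),
    "Battlefield 4": (61.0, 51.0),
    "Far Cry 4": (47.1, 50.5),
    "Shadow of Mordor": (54.8, 55.5),
}

def delta_pct(nvidia_fps, amd_fps):
    """Percentage lead of the 980 Ti over the Fury X (negative = Fury X faster)."""
    return (nvidia_fps - amd_fps) / amd_fps * 100

for game, (nv, amd) in results_4k.items():
    print(f"{game}: 980 Ti {delta_pct(nv, amd):+.1f}% vs Fury X")
# Battlefield 4 comes out at +19.6% - the "over 19 per cent" lead quoted above.
```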

Based on our initial Crysis 3 test, there's a pretty strong argument that 2560x1440 resolution - not 4K - is perhaps the natural home of this new breed of high-end GPU. If we can run Crytek's scalable title on max settings at something approaching a locked 60fps, that's a very strong statement - we can move up to the ultra presets we couldn't achieve at 4K, and in titles where gameplay can't match the 60Hz refresh of the display, we can strategically tweak settings to increase frame-rates without losing that much in the way of image quality.

What immediately becomes obvious is that the competitiveness of the Fury X begins to slip. It beats the 980 Ti in only two of the titles we test here: there's a useful 4.6 per cent boost in Far Cry 4, but Ryse is only 0.3 per cent faster - margin of error stuff. Shadow of Mordor, Assassin's Creed Unity and Crysis 3 show fairly close results between the two cards, with the GTX 980 Ti four to six per cent clear of the AMD challenger. However, The Witcher 3, Battlefield 4 and Advanced Warfare are all at least 17 per cent faster on Nvidia's competing hardware.

But what's fascinating here is the extent to which Nvidia's overclocking yields dividends. With our stable speed increases in place on both products, the gains on the GTX 980 Ti are eye-opening. On the AMD side, gains are minimal and in the case of Assassin's Creed Unity, there's a slight drop, albeit one in the margin of error. As you'll see when the focus shifts to 1080p, the trend seems to be that Fury X is competitive at 4K, but loses its edge the lower the rendering resolution.

If you're looking to get an excellent 60fps experience from this new wave of graphics cards, we think that a 2560x1440 monitor is the best option right now, preferably one with FreeSync or G-Sync. However, at this resolution, Fury X is not as competitive as its Nvidia counterpart.

| 2560x1440 (1440p) | R9 390X | GTX 980 | Titan X | GTX 980 Ti | GTX 980 Ti OC | R9 Fury X | R9 Fury X OC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| The Witcher 3, Ultra, HairWorks Off, Custom AA | 43.5 | 47.5 | 63.3 | 61.7 | 70.9 | 52.4 | 54.3 |
| Battlefield 4, Ultra, 4x MSAA | 54.5 | 57.0 | 76.1 | 75.0 | 86.7 | 62.2 | 64.6 |
| Crysis 3, Very High, SMAA | 52.3 | 50.0 | 68.0 | 66.2 | 75.7 | 63.4 | 65.4 |
| Assassin's Creed Unity, Ultra High, FXAA | 38.4 | 39.7 | 49.6 | 48.3 | 54.4 | 45.8 | 45.4 |
| Far Cry 4, Ultra, SMAA | 69.0 | 61.3 | 77.0 | 75.4 | 86.9 | 78.9 | 78.9 |
| COD Advanced Warfare, Extra, FSMAA | 94.7 | 98.2 | 123.2 | 121.3 | 139.4 | 103.0 | 105.6 |
| Ryse: Son of Rome, High, SMAA | 62.2 | 54.1 | 72.8 | 71.2 | 83.3 | 71.4 | 73.0 |
| Shadow of Mordor, Ultra, High Textures, FXAA | 74.4 | 66.0 | 87.2 | 87.2 | 101.8 | 82.5 | 86.4 |
| Tomb Raider, Ultimate, FXAA | 75.6 | 76.7 | 101.9 | 99.2 | 117.2 | 91.6 | 95.7 |

Let's be clear - Fury X and GTX 980 Ti aren't really designed for 1080p gaming, and it's where we saw the most disappointing returns over lower level cards in both our Titan X and 980 Ti reviews. However, although results were not as high as we hoped, there were still some clear and obvious gains with Titan X and GTX 980 Ti, and there are some practical applications for throwing a ton of GPU power at a full HD display - 120Hz gaming and stereoscopy, for example.

While there is some scalability in Fury X, it's safe to say that the R9 390X is the better bet if you're after an AMD card designed for 1080p gameplay. The GTX 980 Ti is faster than Fury X in every game we tested, and remarkably the non-Ti 980 - a much cheaper card - is competitive on The Witcher 3, Battlefield 4, Assassin's Creed Unity and Call of Duty. The 980 Ti is 23 to 33 per cent faster in The Witcher 3, Battlefield 4, Far Cry 4 and Advanced Warfare. Remarkably, Far Cry 4 is actually a touch slower on the Fury X than it is at 1440p - margin of error stuff perhaps, but this may suggest that the GPU hardware is not the bottleneck. On several titles, R9 390X gets very, very close too - in Call of Duty, for example. These strange results are actually one of the reasons why this review is a little late. We had to re-assess and re-bench the data. We did it several times, because it just didn't look right, but the same results kept coming back.

We can only speculate as to the reasons why, but data from the excellent hardware.info - who went the extra mile and benched at both medium and ultra settings at full HD - confirms what we see here. Whether it's down to AMD's well-known issues with its DX11 API overhead, or whether the Fiji hardware design simply works better with higher resolutions, the R9 Fury X simply doesn't favour gameplay at 1080p.
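One simple way to quantify the bottleneck: compare how much extra frame-rate each card extracts from the drop to 1080p. Using the Far Cry 4 figures from our 1440p and 1080p tables, the 980 Ti scales as you'd expect, while the Fury X actually goes backwards:

```python
# Resolution scaling factor: how much faster a card runs at 1080p than at 1440p.
# A healthy GPU-bound result should be well above 1.0.

def scaling_factor(fps_1080p, fps_1440p):
    return fps_1080p / fps_1440p

print(round(scaling_factor(101.2, 75.4), 2))  # GTX 980 Ti in Far Cry 4: 1.34
print(round(scaling_factor(75.7, 78.9), 2))   # R9 Fury X in Far Cry 4: 0.96 - slower at the lower res
```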

It's safe to say that Fury X's 1080p performance is disappointing. You can make the case that the card isn't made for 1080p gaming, but regardless, we shouldn't see the GTX 980 Ti command this much of an advantage, and it doesn't explain how the R9 390X and GTX 980 can challenge it in several titles.

| 1920x1080 (1080p) | R9 390X | GTX 980 | Titan X | GTX 980 Ti | GTX 980 Ti OC | R9 Fury X | R9 Fury X OC |
| --- | --- | --- | --- | --- | --- | --- | --- |
| The Witcher 3, Ultra, HairWorks Off, Custom AA | 57.4 | 65.8 | 84.4 | 82.6 | 92.1 | 67.1 | 70.2 |
| Battlefield 4, Ultra, 4x MSAA | 78.3 | 86.5 | 112.4 | 109.9 | 125.4 | 86.9 | 89.9 |
| Crysis 3, Very High, SMAA | 80.1 | 81.5 | 105.2 | 104.0 | 115.5 | 94.3 | 96.8 |
| Assassin's Creed Unity, Ultra High, FXAA | 56.0 | 62.4 | 74.7 | 74.4 | 84.3 | 62.8 | 65.0 |
| Far Cry 4, Ultra, SMAA | 82.4 | 87.4 | 101.4 | 101.2 | 103.0 | 75.7 | 78.5 |
| COD Advanced Warfare, Extra, FSMAA | 112.3 | 128.0 | 159.9 | 156.8 | 173.1 | 115.1 | 115.5 |
| Ryse: Son of Rome, High, SMAA | 81.8 | 75.8 | 99.2 | 97.8 | 109.5 | 85.1 | 86.0 |
| Shadow of Mordor, Ultra, High Textures, FXAA | 101.9 | 91.7 | 119.0 | 118.5 | 135.4 | 110.2 | 113.0 |
| Tomb Raider, Ultimate, FXAA | 107.1 | 118.2 | 150.1 | 150.3 | 168.2 | 127.4 | 132.3 |

Scalability at lower resolutions is clearly a concern, but from an overall hardware perspective, there's much to like about the Fiji chip. First of all, after the significant power consumption of the 390X, we were concerned that an even bigger chip based on the GCN architecture would be even more of an energy hog. The good news is that the Fury X delivers a substantial performance boost over the 390X while drawing considerably less power. This may explain how the water-cooling set-up is able to keep the GPU at very low temperatures. In a hot office at 27 degrees Celsius, with the Fury X running through extended overclock stress tests and PowerTune pushed to its 150 per cent maximum, we never saw temperatures exceed 64 degrees. In most use-case scenarios, it runs ten degrees lower.

And in more good news, we also found that Fury X is much, much closer to the GTX 980 Ti in terms of power consumption - a great achievement bearing in mind how much praise Nvidia's hardware has received for its efficiency. After the 80-100W gulf we saw between R9 290X and GTX 980, the gap between Fiji and GM200 is just 32W at stock speeds in our tests. With both graphics cards overclocked as far as we could push them, there's just 6W between them - though the GTX 980 Ti is pushing out more frames.

The conclusion we draw from this is positive - our big concern going into Fury X testing was that the water-cooler was there to manage excessive heat, just as it was on the R9 295X2. A hot chip would also have caused problems for the upcoming air-cooled Fury, coming along some time next month. Based on the kind of heat dissipation we saw on the MSI Radeon R9 390X we reviewed, it's our contention that even a fully enabled Fiji processor at the same clocks as Fury X could be cooled by air - though we may need to forego the smaller form factor to accommodate a larger heat sink and fan. By extension, the power efficiency on display in Fury X also suggests that the small form factor 175W Fury Nano should have no thermal problems whatsoever and could still pack quite a kick.

In our overclocking stress-testing, we found that this scene from Crysis 3 incurred a larger power draw than anything we've previously seen, so we tested Fury X and GTX 980 Ti here on our benchmarking system, though we did remove the overclock from our Core i7 4790K to lessen any spikes in consumption from the CPU.

| | GTX 980 Ti | R9 Fury X | GTX 980 Ti OC | R9 Fury X OC |
| --- | --- | --- | --- | --- |
| Peak System Power Draw | 375W | 407W | 421W | 427W |