AMD has reportedly codenamed the fourth generation of its Graphics CoreNext (GCN) GPU architecture "Polaris." It makes its debut later this year in the company's "Arctic Islands" GPUs, built on Samsung's 14 nm FinFET node. According to AMD, Polaris will provide a "historic leap in performance/Watt" for Radeon GPUs. Chips based on Polaris will feature improvements not just to the compute units, but generational improvements to practically every other component, including a new front-end, new display controllers, and a new memory controller supporting HBM2.

AMD debuted its first-generation GCN architecture with the Radeon HD 7000 series, notably the "Tahiti" silicon. The second generation, GCN 2.0 (reported in the press as GCN 1.1), debuted with the R9 290 series, notably the "Hawaii" silicon. The third generation, GCN 3.0 (reported in the press as GCN 1.2), debuted with the R9 285, notably the "Tonga" silicon, making "Polaris" the fourth generation. GCN 4.0 will form the core micro-architecture of the "Arctic Islands" family of GPUs, which make their debut in mid-2016.
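For reference, performance-per-Watt is simply sustained throughput divided by board power, so a "leap" claim combines both a performance gain and a power reduction. A minimal sketch of how such a generational comparison works; all frame rates and board powers below are invented for illustration, not measured Radeon figures:

```python
# Hypothetical perf/Watt comparison between two GPU generations.
# The fps and wattage numbers are made up purely to illustrate the math.

def perf_per_watt(fps: float, watts: float) -> float:
    """Throughput per unit of board power (higher is better)."""
    return fps / watts

old_gpu = perf_per_watt(fps=60.0, watts=250.0)   # 0.24 fps/W
new_gpu = perf_per_watt(fps=72.0, watts=120.0)   # 0.60 fps/W

# A modest fps gain plus a large power cut compounds into a big ratio.
print(f"generational perf/Watt gain: {new_gpu / old_gpu:.2f}x")  # 2.50x
```

Note how most of the gain in this toy example comes from the power reduction, which is exactly what a process-node shrink plus architectural tuning is expected to deliver.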

24 Comments on 4th Generation Graphics CoreNext Architecture Codenamed "Polaris"

#1 btarunr

Editor & Senior Moderator

Just to demystify the codenames:

Polaris: Codename for the 4th generation GCN

Arctic Islands: Family of GPUs based on Polaris

Greenland: High-end GPU within Arctic Islands Posted on Jan 3rd 2016, 22:51 Reply

#2 FordGT90Concept

"I go fast!1!11!1!"



On the surface that doesn't look much different from GCN 1.2:

[image] source

I hope it supports D3D12_1. It's pretty sad that Skylake's GPU has the most D3D12_1 support to date. :cry: Posted on Jan 3rd 2016, 23:58 Reply

#3 bubbleawsome

Thanks for the clarification btarunr! Posted on Jan 4th 2016, 0:56 Reply

#4 HumanSmoke

FordGT90Concept On the surface that doesn't look much different from GCN 1.2 ... I hope it supports D3D12_1. It's pretty sad that Skylake's GPU has the most D3D12_1 support to date. :cry:

That source picture isn't quite correct, even though it came from the original launch press deck. The eight ACEs of the previous generation (Hawaii) have been fine-tuned to four ACEs + two hardware schedulers. Some further architectural optimizations are listed on the left-hand side of this slide from the architectural presentation at Hot Chips. Posted on Jan 4th 2016, 2:34 Reply

#5 FordGT90Concept

"I go fast!1!11!1!"

Looks the same except 4 ACE cores instead of 8. Funny how that slide has spelling correction squiggles. :roll:

Graphics Command Processor --updated-> Command Processor

L2 Cache --updated-> L2 Cache

The only significant changes I see: It appears to go from 8 memory controllers to 1, though the slide could just be leaving out duplicates (seems likely). Obviously it was updated from HBM1 to HBM2. I assume the darker red between Global Data Share and L2 Cache is the "Shader Engine." If the Multimedia Accelerators were removed and replaced with these "Multimedia Cores" inside of each "Shader Engine," that's a major change.

Edit: Looking closer at this, it doesn't make sense for there to be many Video Coding Engines (VCE), Unified Video Decoders (UVD), or TrueAudio Digital Sound Processors (DSP). If that block was intentionally placed under the Shader Engine, it is something new. The slide doesn't indicate whether there is one "Shader Engine" or many; if there's only one, that could be a monumental change as well (again, probably just omitting duplicates). Is the Display Engine just a wrapper for Eyefinity and Crossfire support, or something else? It looks more like GCN 1.3 than GCN 2.0. I am intrigued by these "Multimedia Cores" and "Display Engine" though.

I suspect the "Display Engine" will include support for DisplayPort 1.3, but it remains to be seen if AMD will bother with HDMI 2.0. Posted on Jan 4th 2016, 2:59 Reply

#6 HumanSmoke

FordGT90Concept Looks the same except 4 ACE cores instead of 8. It is my understanding that the hardware schedulers allow for more flexibility in workload compared to just an ACE implementation. Seems to be evolutionary for both gaming and HSA workloads.

FordGT90Concept Funny how that slide has spelling correction squiggles. :roll: Maybe AMD's PR people were in a hurry and didn't bother to disable spell/grammar checking, or learn how to add to the dictionary. The slide might have been a late addition; it seems to be the only one affected in the presentation deck (PDF). Posted on Jan 4th 2016, 3:25 Reply

#7 FordGT90Concept

"I go fast!1!11!1!" Oh, wait, "HWS?" Those were added to the second slide.



It looks like they rushed the slide to remove the four extra ACEs and add in the two HWS. Posted on Jan 4th 2016, 3:29 Reply

#8 bubbleawsome

I'm guessing we won't really know until release, but are people thinking this is actually a new architecture or is it still very much GCN 1.0+? Also (I'm not a huge architecture guy) is Maxwell truly new or is it just advanced Kepler? IIRC Kepler was a total redesign from Fermi, and GCN was new from whatever AMD had before, so are all our current GPUs just basic evolutions of 4 year old cards? Where do you draw the line? Posted on Jan 4th 2016, 3:50 Reply

#9 HumanSmoke

bubbleawsome I'm guessing we won't really know until release, but are people thinking this is actually a new architecture or is it still very much GCN 1.0+? Also (I'm not a huge architecture guy) is Maxwell truly new or is it just advanced Kepler? IIRC Kepler was a total redesign from Fermi, and GCN was new from whatever AMD had before, so are all our current GPUs just basic evolutions of 4 year old cards? Where do you draw the line?

As a general rule, major architectural advances only really occur when the software evolves: programmable shaders for the DX9 era, unified shaders for DX10/GPGPU, a deeper graphics pipeline (additions of hull/tessellation/geometry and post-process compute shading) for DX11. DirectX 12 just piggybacks on the DX11 graphics pipeline, so it isn't unreasonable to think that Arctic Islands/Pascal will be more incremental improvements rather than any radical leap in architecture. Further down the line you'd probably see more of a leap as GPGPU workloads evolve, but I suspect that gaming and GPGPU/co-processor parts will just become two separate lines. "Always on" compute tends to add a power tax to gaming-oriented GPUs, while the full graphics pipeline is wasted space for many co-processor workloads: die space that might be better served by increasing cache and a greater number of more simplified ALUs. Posted on Jan 4th 2016, 4:40 Reply

#10 FordGT90Concept

"I go fast!1!11!1!" Yeah, case in point, I wish they'd move the media processing to the CPU as an extension of x86 so GPUs don't need that hardware. The reason that likely won't happen is media licensing: for it to be added to x86, the encoders/decoders would likely have to be public domain.



Polaris definitely looks like GCN 1.3 to me. Posted on Jan 4th 2016, 4:55 Reply

#11 Chaitanya

As with any AMD claims, I won't believe until I read independent reviews. Posted on Jan 4th 2016, 5:01 Reply

#12 xvi

FordGT90Concept I hope it supports D3D12_1. It's pretty sad that Skylake's GPU has the most D3D12_1 support to date. :cry: Intel seems to be pretty good at hopping on to new standards pretty quickly. I'd expect AMD to follow shortly and nVidia will claim it's unnecessary until the bitter end. Posted on Jan 4th 2016, 5:30 Reply

#13 Ferrum Master

xvi Intel seems to be pretty good at hopping on to new standards pretty quickly. I'd expect AMD to follow shortly and nVidia will claim it's unnecessary until the bitter end. Yes, but mostly Intel does it via software emulation, not bare metal. Posted on Jan 4th 2016, 8:08 Reply

#14 bug

...historic leap in performance/Watt for Radeon GPUs... Probably means they'll do what Nvidia did with Maxwell a couple of years ago. Normally I'd say "let's just wait and see," but since I'm really fond of Linux support, I haven't waited for AMD in a long while. Sadly. Posted on Jan 4th 2016, 8:10 Reply

#15 Solidstate89

Interesting that it says Radeon Technologies Group instead of AMD in that last slide. Posted on Jan 4th 2016, 8:20 Reply

#16 FordGT90Concept

"I go fast!1!11!1!" RTG is the organization inside of AMD that is responsible for developing Radeon products. Posted on Jan 4th 2016, 8:24 Reply

#17 64K

Yeah, AMD still owns Radeon Group but they spun it off as an individual company. I think they did this to make it easier to sell if they had to. Maybe things won't get that bad though. Posted on Jan 4th 2016, 8:37 Reply

#18 RejZoR

Now this sounds jolly interesting. I hope it's not just a bunch of rehashed words but an actual giant leap compared to the Fury X core. I might be jumping ship to the "red" camp again... Posted on Jan 4th 2016, 10:04 Reply

#19 ShurikN

If I understood this correctly, the 14nm GPU will arrive before the CPU? Posted on Jan 4th 2016, 10:19 Reply

#20 Slizzo

64K Yeah, AMD still owns Radeon Group but they spun it off as an individual company. I think they did this to make it easier to sell if they had to. Maybe things won't get that bad though. To my understanding it's not spun off into its own company; rather, they finally brought all graphics technologies under one department within AMD. Posted on Jan 4th 2016, 10:31 Reply

#21 lilhasselhoffer

It's hard not to read this and facepalm.



AMD is claiming that the performance per watt is going to be a historic leap. A claim that is hard to dispute, given that the new process will offer dang near 4x the components in the same surface area. This is AMD claiming that their fab finally upgrading after 3 generations is a good thing.
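As a rough sanity check on that "near 4x" figure: a full shrink from a 28 nm-class node to a 14 nm-class node halves the linear feature size, which in the idealized case quadruples the number of transistors per unit area. A minimal sketch of that scaling math; note that real-world density gains are smaller, since marketing node names don't shrink every feature uniformly:

```python
# Idealized process-node density scaling. Treat the result as an upper
# bound: actual 28 nm -> 14 nm density improvements fall short of ideal.

def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Transistors per unit area scale with the inverse square of feature size."""
    return (old_nm / new_nm) ** 2

print(ideal_density_gain(28, 14))  # 4.0 -- the "dang near 4x" in the comment
```

The inverse-square relationship is why a single skipped node generation (28 nm straight to 14 nm FinFET) looks like such a large jump on paper.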



As for the rest of the claims, aren't they largely focused on software? The same Crimson software that AMD just released.





Despite the PR stupidity, I'm looking forward to Arctic Islands and Pascal. Both of them should finally give me a reason to upgrade from my 7970. Heck, even that's a low bar. Posted on Jan 4th 2016, 10:39 Reply

#22 bug

lilhasselhoffer ... This is AMD claiming that their fab finally upgrading after 3 generations is a good thing... Actually, that's not even their fab. This is going to be built by Samsung, not GF. Posted on Jan 4th 2016, 11:39 Reply

#23 lilhasselhoffer

bug Actually, that's not even their fab. This is going to be built by Samsung, not GF. Allow me to be clear: AMD owns no fabrication facilities, so "their fab" only refers to whoever is making their chips at the time. I don't even know why this is a point of contention, as GlobalFoundries hasn't been AMD's exclusive fab for years. Posted on Jan 4th 2016, 12:05 Reply