AMD's Radeon RX 5600 series could see the company take on the top end of NVIDIA's GeForce 16 series, such as the GTX 1660 Super and the GTX 1660 Ti. A report from earlier this month pegged a December 2019 product announcement for the RX 5600 series, with availability in the weeks following. Regulatory filings by AMD AIB (add-in board) partners with the Eurasian Economic Commission (EEC) shed more light on the product differentiation within the RX 5600 series: the RX 5600 and RX 5600 XT feature 6 GB and 8 GB sub-variants.

The regulatory filing by ASUS references products across its ROG Strix, TUF Gaming, and Dual lines of graphics cards. As mentioned in the older report, we expect AMD to carve the RX 5600 series out of the larger "Navi 10" silicon by disabling more RDNA compute units than on the RX 5700, and narrowing the GDDR6 memory bus to 192-bit for the 6 GB variants. AMD has an opportunity to harvest "Navi 10" chips down to stream processor counts such as 1,792 (28 CUs) or 2,048 (32 CUs). It also has the opportunity to use cost-effective 12 Gbps GDDR6 memory chips.
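As a quick sanity check of what a narrowed bus would mean, peak GDDR6 bandwidth is just the per-pin data rate times the bus width (a back-of-the-envelope sketch; the 12 Gbps / 192-bit figures are the rumored specs discussed above):

```python
def gddr6_bandwidth_gbps(data_rate_gbps: float, bus_width_bits: int) -> float:
    """Peak memory bandwidth in GB/s: data rate (Gbps per pin) x bus width / 8 bits-per-byte."""
    return data_rate_gbps * bus_width_bits / 8

# Rumored RX 5600 configuration: 12 Gbps chips on a 192-bit bus.
print(gddr6_bandwidth_gbps(12, 192))   # 288.0 GB/s
# For comparison, the RX 5700's 14 Gbps chips on a 256-bit bus:
print(gddr6_bandwidth_gbps(14, 256))   # 448.0 GB/s
```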

27 Comments on AMD Radeon RX 5600 Series SKUs Feature 6GB and 8GB Variants


#1 dj-electric

I was wondering when we will see 12 Gb GDDR6 chips in use; they could enable some very interesting memory configurations, like a 192-bit card with 9 GB of video memory.

At least later today we can get a glimpse into 16Gb size GDDR6 chips.



;) Posted on Dec 12th 2019, 9:03
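The 9 GB configuration dj-electric describes works out from simple arithmetic: each GDDR6 chip has a 32-bit interface, so a 192-bit bus takes six chips, and six 12 Gbit chips give 9 GB (a quick sketch, not an announced SKU):

```python
def vram_capacity_gb(bus_width_bits: int, chip_density_gbit: int) -> float:
    """Total VRAM in GB, assuming one 32-bit GDDR6 channel per chip."""
    chips = bus_width_bits // 32          # number of memory chips on the bus
    return chips * chip_density_gbit / 8  # Gbit -> GB

print(vram_capacity_gb(192, 8))    # 6.0 GB with common 8 Gbit chips
print(vram_capacity_gb(192, 12))   # 9.0 GB with the 12 Gbit chips mentioned above
print(vram_capacity_gb(256, 8))    # 8.0 GB, as on the RX 5700
```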

#2 Hyderz

Would these require one 6-pin or one 8-pin? Posted on Dec 12th 2019, 9:10

#3 btarunr

Editor & Senior Moderator dj-electric At least later today we can get a glimpse into 16Gb size GDDR6 chips.



;) I thought those debuted with the 24GB TITAN RTX? Posted on Dec 12th 2019, 9:10

#4 KV2DERP

So it's a cut-down 5700. Posted on Dec 12th 2019, 9:11

#5 dj-electric

btarunr I thought those debuted with the 24GB TITAN RTX? Oh yeah; oh well, at least now it comes in a decently priced product. Posted on Dec 12th 2019, 9:13

#6 AlienIsGOD

Vanguard Beta Tester KV2DERP So it's a cut down 5700 Isn't just about any gfx card that's not the top end a cut-down version, though? Posted on Dec 12th 2019, 9:28

#7 Turmania

If they can get GTX 1660 Super performance while consuming around 125 W, priced about the same, then it can be a great product. Posted on Dec 12th 2019, 10:56

#8 Vya Domus

AlienIsGOD Isn't just about any gfx card that's not the top end a cut down version though Technically not; cut-down usually means the same chip with portions of it fused off. That being said, I really doubt this would be a cut-down Navi 10. Posted on Dec 12th 2019, 11:11

#9 my_name_is_earl

I keep waiting for AMD to release the Kraken, but they keep releasing shrimp. Posted on Dec 12th 2019, 11:15

#10 Imsochobo

Turmania If they can get 1660 super performance with Consuming around 125W and pricing around the same. Then it can be a great product. The power doesn't matter that much... cost does. Posted on Dec 12th 2019, 11:18

#11 _Flare

We don't know if the memory interface is now tied to the ROP count.

If so, a 6 GB variant will also need to go down to 48 ROPs, which is probably only possible when one whole shader array gets shut down, including the prim unit etc.

If that chip is a cut-down Navi 10, you get 3 × 5 WGPs (dual compute units), leading to 1,920 cores maxed out.

We will see if that's true next year. Posted on Dec 12th 2019, 11:59

#12 EarthDog

Turmania If they can get 1660 super performance with Consuming around 125W and pricing around the same. Then it can be a great product. They can't. The 5500 XT already has a 130 W TBP according to leaks. AlienIsGOD Isn't just about any gfx card that's not the top end a cut down version though That is one way to look at it. More accurately, to me, it's when a card uses the same GPU as others but with parts disabled; for example, the same Navi 10 GPU with fewer CUs enabled. I don't call Navi 14 XTX cut down because it isn't Navi 10 but a different SKU entirely. Posted on Dec 12th 2019, 12:32

#13 bonehead123

I really don't see many people buying these cut-down cards unless the price is more affordable than other already-available cards with similar specs, speed, and features, etc.

Let's wait for reviews & an official launch though :p Posted on Dec 12th 2019, 12:34

#14 Chloe Price

bonehead123 I really don't see many people buying these cut down cards unless the price is more affordable than other cards which are already available with similar specs, speed, features, ect....

Lets wait for reviews & an official launch though :p Why not? Through generations of GPUs, people have been buying cut-down models.

e: And since it's AMD, it wouldn't be a surprise if the first 6 GB models are physically 8 GB, just artificially limited via BIOS. Posted on Dec 12th 2019, 13:36

#15 _Flare

Regarding efficiency, AMD is fighting more against Pascal than against Turing.

They were braindead-addicted to the mining hype and did nearly nothing to make any big efficiency gains.

So you fight in 2019/2020 against the efficiency of Nvidia's chips of 2016; that's a pity if you ask me. Posted on Dec 12th 2019, 14:08

#16 medi01

_Flare They where braindead addicted to the mining-hype Yeah, and that's how their sales and stock price crashed! Oh, wait... Posted on Dec 12th 2019, 22:24

#17 Vya Domus

_Flare They where braindead addicted to the mining-hype and did nearly nothing to make any big efficiency gains. I think the only braindead thing here is your assertion. Not only did the mining boom end well before Navi was finalized, but this architecture will eventually end up in the next-generation consoles, where power efficiency is critical; you've got to be really out of touch to think Sony or MS would accept anything but excellent power efficiency. AMD has always had hardware capable of great efficiency under specific use cases, contrary to the popular belief most ill-informed gamers hold. The world isn't just desktop cards.

You've got to realize at some point that there is a difference between an architecture that is inefficient at its core and a GPU that is inefficient. Fermi was a great example of an architecture that was very inefficient in all its forms. Vega (the architecture), for instance, wasn't inefficient, shown by the fact that it goes into APUs that sip power, while Vega 56 and 64 (the GPUs) were indeed very inefficient GPUs. Posted on Dec 12th 2019, 23:21

#18 Sithaer

I hope this card delivers and is a realistic, if not better, alternative to the 1660 Ti/Super in both pricing and performance. One can hope :)

I'm looking for an upgrade from my RX 570 in that price range, but so far there's no alternative, and I would rather not buy used this time since I want to keep the new card for years. Posted on Dec 12th 2019, 23:33

#19 Keviny Oliveira

Much disappointed with the RX 5500 XT, and also no hype for an RX 5600. My hype is only for RDNA 2.0, Ampere, and Intel Xe; this generation of video cards is terrible, the worst generation in price per performance. Posted on Dec 13th 2019, 0:31

#20 Fluffmeister

_Flare Regarding efficiency, AMD is fighting more against Pascal then versus Turing.

They where braindead addicted to the mining-hype and did nearly nothing to make any big efficiency gains.

So you fight in 2019/2020 versus the efficiency of Nvidias chips of 2016, thats a pitty if you asked me. Yeah, not great times, especially seeing as they have already played the 7nm card. Posted on Dec 13th 2019, 0:53

#21 gamefoo21

I think it's interesting that people don't seem to realize that AMD split the Radeon tree.



The Fury X, which is the basis of Polaris, was aimed at games and not maths. Navi is currently based on this approach, which makes it meh at mining. It's also the Nvidia approach for basically everything but Volta, really. NV gaming cards get strapped with brutal FP32-to-FP64 dividers.

The R9 290X, which gave rise to Vega, was meant more for professional stuff and had great maths. Great for professional work, meh for games currently, unless you actually use the maths for RTRT or something. It's great for mining and maths.

The R9 290X smacks a Fury X at FP64 and mining. The Fury X desperately needed 8 GB of VRAM, though. Posted on Dec 16th 2019, 23:15

#22 medi01

Fluffmeister especially seeing as they have already played the 7nm card. 7nm DUV only.

7nm EUV brings more, at least according to the declared spec. Posted on Dec 17th 2019, 5:00

#23 EarthDog

gamefoo21 I think it's interesting that people don't seem to realize that AMD split the Radeon tree. I think we get it, but most simply do not care. It was a curious move to bring so much compute when it is marketed as a gaming card. I don't mind a clearer separation of church and state. medi01 7nm DUV only.

7nm EUV brings more, at least, according to declared spec. Hopefully then it will be where the competition is at 12nm. Not that it is far behind now, but it is still behind even with the process-node advantage, details be damned. Posted on Dec 17th 2019, 13:26

#24 olymind1

medi01 7nm DUV only.

7nm EUV brings more, at least, according to declared spec. And let's just hope they don't charge more for it. Posted on Dec 21st 2019, 8:53

#25 gamefoo21

EarthDog I think we get it, but most simply do not care. It was a curious move to bring so much compute when it is marketed as a gaming card. I don't mind a more clear separation of church and state. Yeah, AMD shouldn't have marketed the V2 so heavily on gaming. Its saving grace, in my opinion, is that the minimum frame rates are noticeably better, but most people only care about what it hits at max, and compared to a 5700 XT it's not great.



I would love to get my hands on a dual 64-CU V20 card that uses the fancy interconnect to make the two GPUs act as one. Sadly, that's huge money.



I mean, even the Fury X2 / Radeon Pro Duo still commands a hefty price tag used, and it's CrossFire.



Interestingly, I think it's a bit telling that the Xbox One/S and PS4 variants use basically a Polaris-based GPU and either can't do 4K or seriously struggle with it and need 'optimizations', while the Xbox One X uses a GPU based on the R290/Vega line and, as Sony whined, 'brute forces 4K'.



I think having piles of compute power may actually be more future-proof. Look at Crytek and their RTRT software demo: they used a Vega 56 and got decent results. The V20 core has significantly more computational power, even in consumer dress.



FP64 perf (Shaders/TMUs/ROPs/CUs):

V10 - Vega 64: 0.786 TFLOPS (4096/256/64/64); fastest FP64: 0.854 TFLOPS (water-cooled V64)

V20 - V2/VII: 3.360 TFLOPS (3840/240/64/60); fastest FP64: 7.373 TFLOPS (Instinct MI60)



The V2 gets stuck with less compute hardware and a doubled divider at 1:4, versus the pro cards getting 1:2 for FP64. It seems V10's FP64 divider couldn't go past 1:16.



For comparison's sake, the fastest Navi GPU and the fastest 2080 Ti:

5700 XT PowerColor Liquid Devil: 0.662 TFLOPS

2080 Ti Zotac AMP Extreme: 0.494 TFLOPS



So, in summary, the V20 stuff is meant to crunch numbers very quickly. Too bad AMD locked the BIOS, so you can't try to flash-unlock the V2 cores into fully functional ones like in the past.



I will say, though, the consumer air cooler for the V2 is probably the best stock BBA Radeon air cooler ever, and the 50th AE one just looks sexy to me. Posted on Dec 21st 2019, 19:17
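The 0.786 and 3.360 TFLOPS FP64 figures quoted above can be reproduced from shader count, boost clock, and the FP64:FP32 rate (a rough sketch; the 1536 MHz and 1750 MHz boost clocks are the public spec-sheet values, not from this thread):

```python
def peak_fp64_tflops(shaders: int, boost_ghz: float, fp64_ratio: float) -> float:
    """Peak FP64 throughput: shaders x 2 ops per clock (FMA) x clock, scaled by the FP64:FP32 rate."""
    fp32_tflops = shaders * 2 * boost_ghz / 1000
    return fp32_tflops * fp64_ratio

# Vega 64 (V10): 4096 shaders at ~1536 MHz boost, 1:16 FP64 rate
print(round(peak_fp64_tflops(4096, 1.536, 1/16), 3))   # 0.786
# Radeon VII (V20): 3840 shaders at ~1750 MHz boost, 1:4 FP64 rate
print(round(peak_fp64_tflops(3840, 1.750, 1/4), 3))    # 3.36
```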