Rumors about AMD’s upcoming 7nm Ryzen 3 family have been floating around in various forms for quite some time. Enthusiasts are eager to see what changes and improvements the company introduces with its shift to 7nm. AMD has disclosed some of its 7nm advances as they concern its second-generation Epyc processor, codenamed Rome, which currently isn’t expected to ship for significant revenue until after Q2 (exactly when is still unknown). The 7nm Ryzen changes are still unknown, beyond the obvious expectation that Epyc and Ryzen will share a common CPU design.

A new set of rumors from AdoredTV suggests a CPU lineup from AMD that’s simply too starry-eyed to take seriously. The initial claim is that AMD will use the same chiplet and I/O strategy for Ryzen that it’s deploying for 7nm Epyc. AMD used the same die design for both Epyc and Ryzen in its first generation of products, so this could be what the company chose to do again, but it seems unlikely.

The entire point of building a cluster of chiplets around a single unified I/O block with Epyc is that AMD could centralize all of the DDR4 controllers, I/O controllers, and PCIe lanes in a single unified die, with up to 64 CPU cores in chiplets hooked up around its edges. AMD chose to keep the same ratio of eight cores per physical die, however, meaning that the base ‘unit’ of processing is still an eight-core chip. It’s not at all clear that it makes sense to have a separate I/O die built on 14nm and a single chiplet on 7nm as opposed to just building a single die to start with. AMD needed to change Epyc’s memory configuration to reduce RAM latency and improve performance; the 2990WX has scaling issues in many applications precisely because its configuration is lopsided, with some dies connected to memory controllers and other dies not. Connecting each chip on a top-end Epyc CPU to its own dedicated memory controller is something AMD would’ve evaluated but obviously decided not to do, possibly because it would leave lower core-count parts with fewer memory channels. When you have a lot of chiplets to hook to a single controller, this split 7nm/14nm approach makes sense.

But given that AMD’s 14nm die is built at GlobalFoundries while its 7nm chiplets are built at TSMC, there’s a cost to splitting the work into two separate sections. With Epyc, the cost is obviously worth the benefit. It’s not clear Ryzen would benefit in anything like the same way, especially since AMD would be putting two die in every package at eight cores and below, as opposed to one, and three die in every Ryzen above eight cores, as opposed to Threadripper’s two live die and two dummies.

The real problem with these claims, however, lies in the number of cores and clocks AdoredTV believes AMD will ship. The table below is courtesy of Overclock3D.net. Normally I don’t spend time on debunking bad rumors, but these are egregiously bad.

First, it’s highly unlikely that AMD would kill off its entire product family below the six-core + SMT space or that it would stop using SMT as a feature differentiation in its products. SMT is useful to both AMD and Intel for the same reason — it allows both companies to offer a significant performance uplift as an incentive to customers to buy higher-performing parts, while costing virtually nothing in terms of die size or OS support, since all modern operating systems robustly support the feature. AMD already offers SMT on most of its Ryzen CPUs, but leaving it off the lowest-end models is an up-sell technique.

Second, the closest CPU to the claimed “Ryzen 3 3300X” is the current Ryzen 5 2600X (3.6GHz base, 4.2GHz boost), at $240. The chances that AMD slashes its equivalent CPU pricing by 54 percent on the basis of 7nm improvements are nil. AMD has spent the year emphasizing to investors that its margins should continue to improve over time. The absolute worst way to make that happen is to take a chainsaw to your own product pricing. If you wanted to see a meteoric leap in AMD’s price/performance ratio, the company already delivered it back in 2017.

Third, while it’s possible that AMD will choose to bump up its on-die GPU to 15 or 20 CUs (960 or 1,280 cores), it’s not a particularly likely shift unless AMD is simultaneously going to start shipping APUs with HBM attached (something the company has shown no inclination towards doing, at least not yet). AMD’s on-die GPUs are heavily memory bandwidth limited. The more GPU cores you have, the more memory bandwidth you need to fill them. AMD has focused on improving its GPU efficiency far more than its core count. From 2011 to 2017, AMD improved its top-end on-die GPU core count from 400 (A8-3850) to 704 (Ryzen 5 2400G). The chances that AMD goes up to 1,280 on-die cores seem low, given the bandwidth limitations of DDR4. We’d expect such a move to happen with the introduction of DDR5, which isn’t expected until AMD’s next socket shift in ~2021.
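To put rough numbers on the bandwidth squeeze, here’s a back-of-envelope sketch. The core counts are the ones discussed above; the officially supported dual-channel memory speeds for each part are my assumptions, and peak bandwidth is simply channels × 8 bytes × transfer rate.

```python
# Rough sanity check: shared memory bandwidth per GPU core for AMD APUs.
# Core counts are from the article; the memory speeds are the officially
# supported dual-channel rates for each part (my assumption, not a leak).

def dual_channel_bw_gbs(mt_per_s):
    """Peak dual-channel bandwidth in GB/s: 2 channels x 8 bytes x MT/s.

    Works for both DDR3 and DDR4, since both use 64-bit channels.
    """
    return 2 * 8 * mt_per_s / 1000

configs = [
    ("A8-3850 (2011)", 400, 1866),           # DDR3-1866 assumed
    ("Ryzen 5 2400G", 704, 2933),            # DDR4-2933 assumed
    ("Rumored 1,280-core APU", 1280, 3200),  # DDR4-3200 assumed
]

for name, cores, speed in configs:
    bw = dual_channel_bw_gbs(speed)
    print(f"{name}: {bw:.1f} GB/s shared, {bw / cores * 1000:.0f} MB/s per GPU core")
```

Even with generous speed assumptions, per-core bandwidth already fell between the A8-3850 and the 2400G; a 1,280-core part on dual-channel DDR4 would cut it by roughly another 40 percent.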

Fourth, the clock speed and core count targets at the top of the stack reflect exceedingly wishful thinking, not reality. An eight-core chip with 1,280 on-die GPU cores and a 4GHz clock in a 95W TDP at $199? That’s the equivalent of a GPU slightly larger than an RX 560 (in core count, at least) slapped down alongside a Ryzen 7 2700 (3.2GHz base, 4.1GHz turbo, $269). The cheapest RX 560 on Newegg is $104. Again, there’s simply no way AMD is going to sell an APU for roughly 53 percent of what the equivalent discrete parts cost today. This chart reads as though someone heard that 7nm offers a theoretical 50 percent increase in density and assumed that 100 percent of that improvement would (or could) be translated into price. Costs are also higher at 7nm, and the entire reason AMD didn’t shrink Epyc’s I/O in the conventional manner is because of the limited benefits of doing so. Even if AMD uses a single die for its 7nm Ryzen parts, not every part of the chip scales equally.
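The pricing arithmetic, using only the figures quoted above:

```python
# Back-of-envelope check of the pricing claim, using the prices quoted
# in the article: Ryzen 7 2700 at $269, cheapest RX 560 at $104,
# versus the rumored $199 APU.
cpu_price = 269   # Ryzen 7 2700
gpu_price = 104   # cheapest RX 560 on Newegg
rumored = 199     # claimed 8-core APU with 1,280 GPU cores

combined = cpu_price + gpu_price
fraction = rumored / combined
print(f"Combined price of equivalent parts: ${combined}")
print(f"Rumored APU price as a fraction of that: {fraction:.0%}")
```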

The upper CPUs all receive unrealistic price cuts and significant clock jumps, without the commensurate increase in TDP that would be required. CPU power curves bend upwards sharply as you exceed the targeted sweet spot for the silicon. It makes no sense that moving from a 3.2GHz base clock to 3.5GHz on six-core chips would require an additional 15W of TDP, while moving from a 3.9GHz to a 4.3GHz base clock on 16 cores would require just an additional 10W. And even if you chalk that issue up to the fact that TDP doesn’t represent power consumption, there’s still a major problem here: AMD is not going to slash the price of a 16-core chip from $899 to $449. Again, that’s exactly the wrong move if you’re trying to improve your margins relative to the competition (Intel’s margins are 60 percent and above; AMD’s have been in the upper 30s and low 40s).
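A toy model illustrates why those TDP claims don’t hang together. Dynamic power scales roughly with frequency times voltage squared, and voltage has to rise with frequency above the sweet spot, so power grows far faster than clock; the cubic exponent below is a common rule of thumb, not AMD data.

```python
# Toy model: above the efficiency sweet spot, assume P ~ f^3
# (frequency times voltage squared, with voltage rising roughly
# linearly with frequency). Rule-of-thumb assumption, not AMD data.

def relative_power(f_new, f_old, exponent=3.0):
    """Power multiplier implied by a clock change, assuming P ~ f^exponent."""
    return (f_new / f_old) ** exponent

# Six-core example from the rumor: 3.2 -> 3.5 GHz base clock
print(f"6-core,  3.2 -> 3.5 GHz: {relative_power(3.5, 3.2):.2f}x power")
# Sixteen-core example: 3.9 -> 4.3 GHz base, on more than twice the cores
print(f"16-core, 3.9 -> 4.3 GHz: {relative_power(4.3, 3.9):.2f}x power")
```

Under this model the 16-core clock jump is the relatively more expensive of the two, and it applies to well over twice as many cores, yet the rumor assigns it the smaller TDP increase.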

Finally, it’s not clear that a 16-core Ryzen actually makes much sense with just two memory channels to work with. At the very least, I’d expect slightly worse scaling compared with a quad-channel platform, and AMD obviously decided to use a quad-channel design for Threadripper, even when it could have specified a lower-cost dual-channel variant. While it’s true that most desktop workloads aren’t particularly memory bandwidth bound, there’s still going to come a point when you don’t have enough bandwidth per core to avoid negative scaling impacts.
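For a rough sense of the per-core squeeze, assume the rumored 16-core part keeps AM4’s two channels at Ryzen 2000’s officially supported DDR4-2933 (both are my assumptions):

```python
# Peak dual-channel DDR4 bandwidth split evenly across cores.
# DDR4-2933 and a dual-channel AM4 platform are assumptions here.
channels = 2
bytes_per_transfer = 8    # 64-bit channel
speed_mts = 2933          # DDR4-2933

total_gbs = channels * bytes_per_transfer * speed_mts / 1000  # ~46.9 GB/s
for cores in (8, 16):
    print(f"{cores} cores: {total_gbs / cores:.1f} GB/s per core")
```

Doubling the core count on the same platform halves the worst-case bandwidth per core, which is exactly where negative scaling starts to bite.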

Collectively, these rumors make no sense. They predict unprecedented price cuts, a complete abandonment of AMD’s established CPU feature distribution, large clock jumps without commensurate TDP increases, twice the core count on the same platform even when this makes no sense, and TDPs that must completely depart from the way AMD has reported and measured TDP with the first two generations of Ryzen. And not incidentally, they predict that most of these products will be launched at CES. We haven’t heard a whisper of anything of the sort from AMD.

I don’t believe these rumors. I don’t think you should, either.

Update 12/6/2018, 6:15pm: I’d like to condense some of what I’ve said in comment responses to unify them and make them easier to read. Ranked by order of believability:

1). The two claims I find equally least likely are that AMD will slash prices by ~53 percent, more-or-less across its entire product family, and that it will abandon quad-cores and quad-cores + SMT and make a 6/12 CPU its baseline $100 processor. This would cut AMD’s margins in direct contravention of its guidance to Wall Street and Wall Street’s repeated questions about AMD’s likelihood of improving said margins throughout 2019. You can read the relevant transcript here. It would leave the company with a huge amount of overpriced inventory and no ability to move it, except to sell all of its 12nm and 14nm parts at fire-sale prices.

2). Specific clocks and TDPs. The problem here is not boost frequencies, but base frequencies. The predictions are extremely optimistic as far as achievable clocks on high core count CPUs, to the point of seeming unlikely based on prevailing trends in silicon scaling at high frequency. When you factor in the claimed TDPs, the gains are even less plausible. I’ll acknowledge that we don’t know what 7nm clocks or TDPs will look like, so I rank these points a little lower than the first two.

3). The GPU question. The jump to 960 cores makes more sense than the jump to 1,280. An 82 percent increase in GPU cores from the current 704 would not yield anything remotely like an equivalent increase in performance given the realities of memory bandwidth constraints, and AMD’s entire push these last few years has been about maximizing efficiency. The company could be planning to build more cores and run them at lower frequencies to optimize performance/watt (more cores at lower frequency is typically better for power efficiency than fewer cores at high frequency), but the shift to 1,280 is still a very large jump to predict for a 7nm chip as compared with a 14nm one, and it’s a jump that doesn’t yield a clear immediate benefit as far as what we know about AMD’s ability to scale GPU performance in a memory bandwidth-constrained environment from 2011 to 2018. HBM integration would solve this problem, but no one, including AdoredTV, is arguing that AMD will add HBM to APUs next year. So while AMD could absolutely build either a 1,280-core or a 960-core APU, I find the first less likely than the second. I think it’s also entirely possible that AMD holds the number of GPU cores relatively constant but focuses on improving memory bandwidth usage, perhaps with a bump to officially supported DDR4 clock speeds to provide more bandwidth.
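The percentage jumps implied by the two rumored core counts, relative to the 2400G’s 704 cores:

```python
# Percentage increases implied by the two rumored GPU core counts,
# relative to the Ryzen 5 2400G's 704 cores (figures from the article).
current = 704
for rumored in (960, 1280):
    print(f"{current} -> {rumored}: +{(rumored / current - 1):.0%}")
```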

4). Finally, I think it’s at least possible that AMD evaluates a unified 7nm Ryzen design, but as I say in this story, “it’s not clear.” This is intended to signal exactly what it says. I think you can argue this one either way, since using an I/O die for Ryzen would allow AMD to experiment with hooking up other kinds of chiplets as well. GPUs could theoretically be plugged in this way, and isolating the CPU and GPU components might allow for performance improvements depending on the scaling of AMD’s upcoming 7nm Infinity Fabric. The chances that AMD recycles bad (partially defective) Epyc I/O die are very small, however. Epyc’s I/O die contains a huge amount of silicon Ryzen doesn’t need, and server volumes are much smaller than desktop or mobile volumes. It makes far more sense to design a new I/O die for Ryzen and use a chiplet strategy for both than it ever would to attempt to recycle Epyc’s I/O die for a part that requires 20 PCIe lanes (not 128) and two DDR4 memory controllers instead of eight. If AMD recycles bad Epyc I/O die, it would likely be for Threadripper, not Ryzen.
