TheLostSwede That's not how it works, especially as the chipset also hosts things like SATA, USB 3.0, etc. that use the PCIe bandwidth to/from the CPU.

Now that bandwidth is just going to be even more restricted. Then again, it hasn't stopped Intel, who have an x4 PCIe 3.0 interconnect between the CPU and the chipset as well; they just gave it a fancy name (DMI) to make it sound like something special.
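For a sense of scale, here's a rough back-of-the-envelope sketch of what an x4 Gen3 uplink can actually carry (using PCIe 3.0's textbook 8 GT/s per lane and 128b/130b encoding; these are spec figures, not measurements):

    # Rough throughput of an x4 PCIe 3.0 uplink (DMI 3.0 is electrically equivalent)
    GT_PER_LANE = 8.0        # gigatransfers/s per PCIe 3.0 lane
    ENCODING = 128 / 130     # 128b/130b line-encoding overhead
    LANES = 4

    gbits = GT_PER_LANE * ENCODING * LANES   # ~31.5 Gbit/s
    gbytes = gbits / 8                       # ~3.94 GB/s per direction
    print(f"x{LANES} Gen3 uplink: ~{gbytes:.2f} GB/s per direction")

So everything hanging off the chipset shares roughly 3.94 GB/s each way, before protocol overhead.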



When it comes to these chipsets, a PCIe switch would be used to "make" more PCIe lanes, but the bandwidth between the chipset and the CPU is still limited, for both AMD and Intel.

It doesn't appear to limit performance in most cases though, as otherwise Intel boards with NVMe drives connected via the chipset wouldn't perform as well as they do.

newtekie1 Look up how PCI-E switching works. Intel's chipset provides 24 PCI-E 3.0 lanes over the same 4-lane link to the processor. Other manufacturers push it even further: Broadcom has a chip that will give 60 PCI-E 3.0 lanes from a single 4-lane uplink.
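To put those switching claims in numbers, a quick illustrative sketch of the oversubscription ratio (lane counts taken from the posts above; the helper function is hypothetical, not any vendor's tool):

    def oversubscription(downstream_lanes, uplink_lanes=4):
        # A PCIe switch fans out lanes, but all of them share the uplink's bandwidth
        return downstream_lanes / uplink_lanes

    print(oversubscription(24))  # Intel chipset: 24 lanes on a x4 uplink -> 6:1
    print(oversubscription(60))  # Broadcom switch: 60 lanes on a x4 uplink -> 15:1

A 6:1 or 15:1 ratio is fine as long as the downstream devices don't all burst at once, which is exactly the bet switching makes.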

Uh what? What is more restricted? 16/4/4 is what we have now. That's the PCI-E slots, the M.2 slot, and the chipset. Using a switch is a neat way to split lanes, but you don't get any more bandwidth. If you hook up a bunch of NVMe drives to the Intel chipset and hammer them, you'll be bottlenecked pretty quickly. AMD took the simple approach of saying you'll need more than 4 lanes for peripherals, so you only get Gen 2 speeds, but you're guaranteed the full speed at all times. It's two different approaches, and there's a trade-off either way. Considering the lanes we see in higher-end AMD chips, there should be untapped potential for mainstream chips.

Yes, but are you getting Threadripper levels of bandwidth from your Broadcom chip that splits that x4 link, or will that triple-SLI setup get choked? Or will your 4-NVMe-drive RAID array be able to run full tilt on your DMI link?

Don't get me wrong. We need more lanes, and sometimes (or even most of the time) a switch is a great solution, but it would be disingenuous to call it 24 lanes of PCI-E 3.0, because if you're going to use all those lanes simultaneously, you have a bandwidth problem; it relies on peripherals not needing full bandwidth all the time. It gives you a whole lot of freedom, but it isn't perfect, and I don't find AMD's approach to be bad either. It's more straightforward in a WYSIWYG kind of way. I'm sure the choice to go that route was a combination of cost and outsourcing chipset design, but it works.
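To make the RAID example concrete, a minimal sketch, assuming a typical ~3.5 GB/s sequential read per Gen3 x4 NVMe drive (that per-drive figure is my assumption, not from this thread):

    UPLINK_GBS = 3.94   # x4 PCIe 3.0 / DMI 3.0 uplink, per direction (see earlier math)
    DRIVE_GBS = 3.5     # assumed sequential read of one Gen3 x4 NVMe drive
    drives = 4

    demand = DRIVE_GBS * drives   # 14 GB/s aggregate demand vs a ~3.94 GB/s pipe
    print(f"Demand {demand} GB/s vs uplink {UPLINK_GBS} GB/s -> "
          f"each drive throttled to ~{UPLINK_GBS / drives:.2f} GB/s at full tilt")

Under those assumptions, four chipset-attached drives hammered simultaneously each see roughly a quarter of the uplink, which is the bandwidth problem described above.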