AMD researchers are planning for a future without circuit boards, a future where ‘chiplets’ are packed together on large swathes of silicon interposers instead of dumb, slow printed circuit boards. Your days are numbered, motherboards, because AMD has just removed one of the main barriers to making this future a reality: data traffic jams.

The problem with circuit boards, and chip packages as a whole, is that no matter how quickly data moves around inside a chip, everything slows down when it has to travel between chips. Packages and boards are both pretty poor conductors of heat, and their relatively long, coarse connections add unwanted latency.


As chip packages and circuit boards cannot conduct heat away particularly well, the power a chip dissipates has to be severely limited, and that imposes a ceiling on clock frequencies. You also need to spend extra energy moving data between chips. In short, the humble printed circuit board (PCB) is just holding us back.

Instead, it has been proposed that you could essentially use a silicon circuit board. This would take the form of an interposer sitting beneath the different integrated circuits (ICs) arrayed on top of it – CPUs, GPUs, and memory, for example. These ICs wouldn’t be housed in the sort of packages they are today; the ‘chiplets’ would be bare silicon dies squeezed up against each other, connected by masses of tiny, dense interconnects running through the networked interposer.

I, perhaps mistakenly, previously thought a chiplet was just a small sausage commonly wrapped in bacon at Christmas. But it seems they’re also bare silicon chips stuck into a smart interposer.

This approach would also allow the individual chiplets to be run at extremely high frequencies and have a huge number of them jammed into the same slice of silicon. That would give us serious exascale APUs on a level not seen before.

“It allows the industry to take a variety of system components and integrate them more compactly and more efficiently together,” explained AMD’s award-winning engineer, Gabriel Loh.

Which would be great, except that with all that data flooding the silicon interposer’s connections, traffic jams can form – and in the worst case they can seize up the whole system and crash it.

“A deadlock can happen,” explains Loh, “basically where you have a circle or a cycle of different messages all trying to compete for same sorts of resources causing everyone to wait for everyone else.”
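The cycle of mutual waiting Loh describes can be sketched in a few lines of Python. This is a toy model, not AMD’s actual hardware logic: each chiplet holds one network resource (a link or buffer) while waiting for another chiplet to release its own, and a loop in that waits-for relationship means everyone waits on everyone else forever.

```python
def has_deadlock(waits_for):
    """Detect a cycle in a waits-for graph given as {node: node_it_waits_on}.

    A cycle means a ring of chiplets each holding a resource while
    waiting for the next one's resource -- the deadlock Loh describes.
    """
    for start in waits_for:
        seen = set()
        node = start
        # Follow the chain of waiting; if we revisit a node, it's a cycle.
        while node in waits_for:
            if node in seen:
                return True
            seen.add(node)
            node = waits_for[node]
    return False

# Chiplet A waits on B's link, B on C's, C on A's: a deadlock cycle.
print(has_deadlock({"A": "B", "B": "C", "C": "A"}))  # True
# A simple chain has a free endpoint, so traffic eventually drains.
print(has_deadlock({"A": "B", "B": "C"}))            # False
```

The chiplet names and the single-resource-per-node simplification are illustrative only; real interconnect deadlock involves buffers and virtual channels, but the underlying cycle condition is the same.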

Loh recently won an award for his contributions to the advancement of die-stacked architectures, which makes him someone worth listening to. He says it’s possible to design chiplets for one specific interposer layout, but that takes away some of the advantages of the chiplet approach, namely making different systems quicker and cheaper to design from the same ingredients. That mix-and-match approach goes out of the window the instant chiplets have to be more rigidly designed.

But, at the International Symposium on Computer Architecture at the start of June, AMD’s engineers presented a potential solution: a set of simple, easy-to-follow rules for a chiplet’s makeup.

They suggest that so long as you can effectively govern exactly where data is allowed into and out of the chip, and can restrict what direction that data takes when it does enter, you can negate any worries of a deadlock. That means different teams of engineers can work on individual chiplets without having to worry too much about where they’ll be used or how other chiplets placed on an interposer will try to access the network.
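AMD hasn’t published its exact rule set in consumer-friendly form, but a classic example of this idea – restricting the directions traffic may take so no cycle of waiting can form – is textbook dimension-order (“XY”) routing on a mesh network. Packets finish all of their horizontal hops before making any vertical hop, which forbids the turns needed to close a deadlock loop. A minimal sketch, offered as an analogy rather than AMD’s method:

```python
def xy_route(src, dst):
    """Hop-by-hop path between two (x, y) nodes on a 2D mesh.

    Dimension-order routing: travel fully in X first, then in Y.
    Because no packet ever turns from Y back into X, the turn
    cycles that cause routing deadlock can never form.
    """
    x, y = src
    dx, dy = dst
    path = [(x, y)]
    while x != dx:                     # X dimension first...
        x += 1 if dx > x else -1
        path.append((x, y))
    while y != dy:                     # ...then Y, never back to X
        y += 1 if dy > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 1)))
# [(0, 0), (1, 0), (2, 0), (2, 1)]
```

The appeal for chiplet designers is the same as in the article: each router only needs to follow the local rule, so teams can build chiplets independently and still be confident the assembled network can’t deadlock.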

There is one big part of the silicon circuit board problem that AMD hasn’t been able to solve – cost. Silicon is super-expensive right now, and only going to get pricier with demand far outstripping supply and wafer manufacturers seeing no need to change things around so long as they’re getting all the monies.

DigiTimes has reported that silicon prices are likely to continue to spiral upwards through 2025 as the major suppliers see no reason to expand supply. And, while everything needs silicon, demand is just going to keep increasing.

That’s going to make it very hard to pitch the chiplet design as a viable alternative to the current printed circuit board setup, especially when it’s being touted as something that could make things both faster and cheaper.

And then we’ve still got the issue for gaming that while multiple CPU chiplets can happily co-exist in a system and essentially be treated as one processor, GPUs still don’t work like that. It’s the reason AMD’s Navi GPU isn’t going to be a multi-chip module as we once hoped, and it’s the reason that CrossFire and SLI are all but dead for gamers.