Over the past few years, we’ve spent a significant amount of time discussing the increasing difficulty of semiconductor manufacturing and how these problems have impacted the design of modern products. Intel’s two major launches this year, Ivy Bridge-E and Haswell, both failed to budge compute performance more than 5-8% at the top of the market, but that doesn’t mean PC performance is stuck on autopilot.

Is the PC enthusiast market dead, a casualty of the push into mobile? Not necessarily. Here, we explore three different approaches Intel or other semiconductor manufacturers could take to address some of the major problems facing continued performance scaling today. Each option was chosen for near-term applicability, meaning these are ideas that really could be adapted into shipping products within the next five years — and dramatically improve performance in the process.

Improved CPU cooling

It’s common knowledge that slapping a better heatsink, water cooler, or phase-change unit on a CPU can yield better overclocking results — but there’s more to the CPU cooling issue than simply bolting on a better heatsink. One of the biggest barriers to higher CPU clock speeds is hot spots. Normally, when we talk about CPU power consumption or temperature, we refer to one overall temperature — typically an average of the core temps as reported by software. Even this, however, fails to capture the scope of the problem. The image below is an Intel-provided infrared picture of Clover Trail’s temperatures in active mode. Keep in mind, the entire SoC is between 80-90 square millimeters. The red dots below are a fraction of that size — pinpricks, really.

Remember that the CPU cooler is a uniform slab of aluminum or copper sitting on top of the core, with a thin layer of thermal compound (or Vegemite) between itself and the CPU. The hot spots receive the same cooling potential as the barely active GPU or I/O blocks. From an engineering perspective, that’s tremendously inefficient. Worse, as process nodes shrink, the total CPU area shrinks with them, so the hot spots squeeze ever more power into an ever smaller area. So how do you fix that?
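A rough back-of-envelope comparison shows why hot spots are so punishing. Every number below is an illustrative assumption, not measured Clover Trail data; only the 80-90 square millimeter die size comes from the article.

```python
# Back-of-envelope comparison of average vs. hot-spot power density.
# All figures are illustrative assumptions, not measured silicon data.

soc_area_mm2 = 85.0      # ~80-90 mm^2 SoC, per the article
soc_power_w = 2.0        # assumed total active-mode SoC power
hotspot_area_mm2 = 1.0   # assumed "pinprick" hot-spot area
hotspot_power_w = 0.8    # assumed power dissipated inside the hot spot

avg_density = soc_power_w / soc_area_mm2
hot_density = hotspot_power_w / hotspot_area_mm2

print(f"average power density:  {avg_density:.3f} W/mm^2")
print(f"hot-spot power density: {hot_density:.3f} W/mm^2")
print(f"ratio: {hot_density / avg_density:.0f}x")
```

Even with these conservative guesses, the hot spot runs at more than thirty times the die's average power density, yet it sees exactly the same cooling as the idle blocks around it.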

There are a few proposed methods. First, you can boost the efficiency of the thermal interface material (TIM). Intel has caught flak in recent years for using thermal paste, rather than solder, on its microprocessors, but even solder isn’t a perfect solution. Extensive research from IBM shows that up to 50% of the thermal resistance in a CPU cooling structure can be caused by the TIM layers. Optimizing TIM distribution with hierarchical nested channels (shown below) can significantly improve final performance.

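A simple way to see why the TIM matters so much is to model the cooling stack as thermal resistances in series. The resistance values below are illustrative assumptions, chosen so the two TIM layers account for roughly half of the total, in line with IBM's finding; they are not measured figures for any real cooler.

```python
# Series thermal-resistance model of a CPU cooling stack:
#   T_junction = T_ambient + P * (R_die + R_TIM1 + R_IHS + R_TIM2 + R_heatsink)
# All resistance values (K/W) are illustrative assumptions.

ambient_c = 25.0
power_w = 80.0

resistances = {
    "die":      0.05,
    "TIM1":     0.12,  # die-to-heatspreader interface
    "IHS":      0.03,  # integrated heat spreader
    "TIM2":     0.10,  # heatspreader-to-heatsink interface
    "heatsink": 0.15,
}

total_r = sum(resistances.values())
tim_r = resistances["TIM1"] + resistances["TIM2"]
t_junction = ambient_c + power_w * total_r

print(f"junction temperature: {t_junction:.1f} C")
print(f"TIM share of total thermal resistance: {tim_r / total_r:.0%}")
```

Because the resistances simply add, shaving even a few hundredths of a kelvin-per-watt off the TIM layers translates directly into a cooler junction, or equivalently, headroom for more power at the same temperature.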
Another option is to improve lateral heat transfer within the CPU itself. One of the reasons hot spots develop within the CPU core is that heat moves mostly upwards towards the TIM, not sideways across the die. If we can improve a chip’s ability to move heat outwards, away from critical areas, we can clock the hot areas higher and distribute the heat across a grid rather than concentrating it at a handful of tiny points. Earlier this summer, scientists announced one potential material that could allow this kind of migration: cubic boron arsenide, which is predicted to conduct heat nearly as well as diamond. Integrating it into processors might allow for significantly higher clock speeds.
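A toy one-dimensional model makes the effect of lateral spreading concrete. Each cell of a die loses heat vertically to the cooler and conducts laterally to its neighbors, with a single hot-spot cell injecting power; raising the lateral conductance pulls the peak temperature down. Every coefficient here is an illustrative assumption, not real silicon or boron-arsenide material data.

```python
# Toy 1-D steady-state model of lateral heat spreading across a die.
# All conductances and powers are illustrative assumptions.

def peak_temp(lateral_g, cells=51, vertical_g=1.0, q_hot=50.0, iters=5000):
    """Peak temperature rise above ambient, via Jacobi relaxation."""
    t = [0.0] * cells
    hot = cells // 2           # single hot-spot cell in the middle
    for _ in range(iters):
        new = t[:]
        for i in range(cells):
            left = t[i - 1] if i > 0 else t[i]      # insulated edges
            right = t[i + 1] if i < cells - 1 else t[i]
            q = q_hot if i == hot else 0.0
            # heat balance: injected power = vertical loss + net lateral flow
            new[i] = (q + lateral_g * (left + right)) / (vertical_g + 2 * lateral_g)
        t = new
    return max(t)

low_k = peak_temp(lateral_g=1.0)    # poor lateral spreading
high_k = peak_temp(lateral_g=10.0)  # a 10x better lateral conductor

print(f"peak rise, low lateral conductance:  {low_k:.1f}")
print(f"peak rise, high lateral conductance: {high_k:.1f}")
```

With ten times the lateral conductance, the same injected power produces roughly a third of the peak temperature rise, which is exactly the headroom a better in-die spreader would buy for higher clocks.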

Other approaches, like computational sprinting, could be combined with new phase-change materials like wax to dramatically increase thermal dissipation for short bursts. While this won’t improve sustained compute performance, it would speed latency-sensitive workloads like web page loads and other brief, intensive computations.
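A simple lumped thermal model shows why a phase-change material helps sprinting: the chip can exceed its sustained power budget until it exhausts its thermal "budget," and a melting wax adds a large block of latent heat to that budget. The function name and all numbers below are illustrative assumptions, not figures from the sprinting research.

```python
# Lumped-capacitance model of computational sprinting.
# All values are illustrative assumptions.

def sprint_seconds(p_sprint_w, p_sustain_w, heat_capacity_j_per_k,
                   delta_t_k, latent_heat_j=0.0):
    """Seconds of sprinting before the thermal budget is exhausted."""
    excess_w = p_sprint_w - p_sustain_w        # power beyond what cooling removes
    budget_j = heat_capacity_j_per_k * delta_t_k + latent_heat_j
    return budget_j / excess_w

# 16 W sprint on a chip that can sustain 2 W, with 10 J/K of thermal mass
# and 20 K of temperature headroom before throttling
bare = sprint_seconds(16.0, 2.0, 10.0, 20.0)
# same chip with a wax layer assumed to absorb an extra 2000 J while melting
waxed = sprint_seconds(16.0, 2.0, 10.0, 20.0, latent_heat_j=2000.0)

print(f"sprint without PCM: {bare:.1f} s")
print(f"sprint with wax:    {waxed:.1f} s")
```

Under these assumptions the wax stretches a sprint of about fourteen seconds to well over two minutes: plenty for a page load or a burst of computation, even though average dissipation is unchanged.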

Next page: New semiconductor materials and technologies