In a few weeks, Intel will release Ivy Bridge, the first mass-produced 22nm chips and, more importantly, the first to use 3D “tri-gate” FinFET transistors. These CPUs will be incredibly fast and use very little power, but ultimately they are just another last-gasp effort to squeeze a little more life out of a material and process that will soon hit a wall. Computing is still predominantly single-threaded; throwing more transistors and more cores at a problem will only take you so far.

Fortunately, there’s another maturing technology that should give the silicon industry a much-needed new lease on life: Chip stacking, or, to give it its formal name, 3D wafer-level chip packaging. Chip stacking is exactly what it sounds like: You take a completed computer chip (DRAM, say), and then place it on top of another chip (a CPU). As a result, two chips that used to be centimeters apart on a circuit board are now less than a millimeter apart. This reduces power consumption (transmitting data over copper wires is a messy business), and also improves bandwidth by a huge amount.
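The power argument comes down to basic physics: every bit sent over a wire has to charge that wire’s capacitance, and the standard dynamic switching-energy formula is E = C·V². A centimeters-long PCB trace has orders of magnitude more capacitance than a sub-millimeter stacked link, so shortening the wire directly cuts energy per bit. The sketch below illustrates the idea; the capacitance and voltage values are rough, illustrative assumptions, not measured figures.

```python
# Rough, illustrative comparison of energy per bit for an off-chip link
# vs. a stacked (very short) link, using E = C * V^2 per bit toggled.
# The capacitance and voltage numbers are assumptions for illustration only.

def energy_per_bit_pj(capacitance_pf, voltage_v):
    """Dynamic switching energy in picojoules: E = C * V^2."""
    return capacitance_pf * voltage_v ** 2

# Assumed: a few-centimeter PCB trace has on the order of picofarads of
# capacitance; a sub-millimeter stacked connection is far smaller.
pcb_trace = energy_per_bit_pj(capacitance_pf=3.0, voltage_v=1.5)
stacked   = energy_per_bit_pj(capacitance_pf=0.05, voltage_v=1.0)

print(f"PCB trace:    {pcb_trace:.2f} pJ/bit")
print(f"Stacked link: {stacked:.2f} pJ/bit")
print(f"Ratio:        roughly {pcb_trace / stacked:.0f}x")
```

The exact ratio depends entirely on the assumed values, but the quadratic dependence on voltage and linear dependence on capacitance explain why moving DRAM from centimeters away to less than a millimeter away pays off so handsomely.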

Obviously, though, you can’t just take a DRAM chip and whack it on top of a CPU. The chips need to be designed with chip stacking in mind, and it takes specialized machinery to actually line the dies up and attach them. To this end, Applied Materials — the company that makes all of the machines used by Intel, TSMC, Samsung, GloFo, and every other semiconductor manufacturer — and A*STAR’s Institute of Microelectronics (IME) have announced the opening of a bleeding-edge 3D chip packaging lab in Singapore. Built with a combined investment of over $100 million, the Centre of Excellence in Advanced Packaging features a 14,000 square foot cleanroom containing a complete 300-millimeter production line and 3D packaging tools that are unique to A*STAR. The Centre isn’t a commercial fab, however: It’s actually designed as a facility for other companies, such as TSMC or Samsung, to come and experiment with 3D packaging. As far as Applied Materials is concerned, of course, this is an excellent way to demonstrate and sell its machines.

There are three main ways of stacking chips, all of which will be available at the new research center. The most basic technique (Bump + RDL) involves stacking two chips together, and then connecting them both to a flip chip at the bottom of the stack; the chips are physically close, which is a good step forward, but they can’t communicate directly with each other. This technique is already used in some SoCs to place DRAM on top of the CPU. The second technique, which is also the most complex, is called through-silicon via (TSV, pictured right). With TSV, vertical copper channels are built into each die, so that when the dies are placed on top of each other the TSVs connect the chips together. This is the technique that IBM and 3M will use to stack hundreds of memory dies together to make super-density DRAM. So far, TSV has only really been used in CMOS camera image sensors, but adoption will increase over the next few years as the technology matures.

The third technique, which isn’t technically stacking but still counts as “advanced packaging,” uses a silicon interposer (pictured above, below the stacked chips). An interposer is effectively a piece of silicon that acts like a “mini motherboard,” connecting two or more chips together (if you remember breadboards from your days as a budding electronics engineer, it’s the same kind of thing, but on a much smaller scale). The advantage of this technique is that you reap the benefits of shorter wiring (higher bandwidth, lower power consumption), but the constituent chips don’t have to be changed at all. Interposers are expected to be used in upcoming multi-GPU Nvidia and AMD graphics cards.

In theory, there’s almost no limit to how many dies can be stacked in this way. Applied Materials, Micron, and Samsung have been mooting the idea of an eight-layer DIMM, but in an interview, Applied Materials told us more layers should be possible. The only real restriction is heat generation and dissipation, which will limit the number of CPUs that you can have in a stack, but there’s no reason that an entire SoC — CPU, DRAM, NAND flash, radios, power management IC, and GPU — couldn’t be built into a single through-silicon via chip. According to Applied Materials, this would allow for packages that are around 35% smaller, consume 50% less power, and perform significantly faster — desirable traits when it comes to smartphones and tablets. Moving forward, TSV is likely to dominate any space that puts a premium on power efficiency, such as mobile devices and servers.
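To put Applied Materials’ figures in concrete terms, here’s the arithmetic applied to a hypothetical baseline mobile SoC. Only the 35% and 50% factors come from Applied Materials; the baseline area and power numbers are invented for illustration.

```python
# Apply the quoted 3D-packaging gains (~35% smaller package, ~50% less power)
# to a hypothetical baseline mobile SoC. Baseline values are made up for
# illustration; only the percentage improvements come from the article.

baseline_area_mm2 = 100.0   # hypothetical planar package footprint
baseline_power_w = 2.0      # hypothetical average power draw

stacked_area_mm2 = baseline_area_mm2 * (1 - 0.35)  # 35% smaller
stacked_power_w = baseline_power_w * (1 - 0.50)    # 50% less power

print(f"Area:  {baseline_area_mm2:.0f} mm^2 -> {stacked_area_mm2:.0f} mm^2")
print(f"Power: {baseline_power_w:.1f} W -> {stacked_power_w:.1f} W")
```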

Finally, chip stacking obviously works in synergy with Intel’s 3D FinFETs — though curiously there is no sign of TSV on Intel’s roadmap, while TSMC is all over it. Perhaps the most important thing to remember is that new production and packaging processes take a long time to roll out: It has taken Intel 10 years to iron out the mass production of FinFETs, and likewise, chip stacking has been touted as the next great thing for almost as long. Applied Materials and IME’s new 3D packaging lab is definitely a step in the right direction, but don’t expect your next desktop CPU to have DRAM stacked on top of it; we’re still at least a couple of years out.