Cerebras Systems, maker of the world's largest single processor, a chip that weighs in with a whopping 1.2 trillion transistors and 400,000 AI cores, announced today that it has entered into a partnership with the Department of Energy (DOE), long the leader in the supercomputing space, to apply its new wafer-scale chips to basic and applied science and medicine with super-scale AI.

(Image credit: Tom's Hardware / GPU for scale)

The Cerebras Wafer Scale Engine (WSE) sidesteps the reticle limit of modern chip manufacturing, which caps the size of a single monolithic processor die, to create a wafer-sized processor. The company accomplishes this feat by stitching together the dies on the wafer, allowing them to work as one large cohesive unit.

(Image credit: Tom's Hardware)

That creates a massive processor measuring 46,225 square millimeters, the largest in the world, packing 1.2 trillion transistors fabbed on TSMC's 16nm process. That's 56.7 times the size of the world's largest GPU (815mm² with 21.1 billion transistors). The massive chip also comes packing a whopping 400,000 AI-processing cores paired with 18GB of on-chip memory, which pushes out up to 9 PBps, yes, petabytes per second, of memory bandwidth. We recently had the chance to see the massive chip up close at the Hot Chips conference, and as you can see, it is larger than our laptop's footprint.
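As a quick sanity check, the size comparison quoted above follows directly from the two die areas:

```python
# Die areas from the article, in square millimeters.
wse_area_mm2 = 46_225   # Cerebras Wafer Scale Engine
gpu_area_mm2 = 815      # world's largest GPU die

ratio = wse_area_mm2 / gpu_area_mm2
print(f"{ratio:.1f}x")  # 56.7x
```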


The Cerebras WSEs will find a home at the Argonne and Lawrence Livermore National Laboratories, where they will be used in conjunction with existing supercomputers to speed up AI-specific workloads.

The DOE's buy-in on the project is incredibly important for Cerebras, as it signifies that the chips are ready for actual use in production systems. Also, as we've seen time and again, trends in the supercomputing space often filter down to more mainstream usages, meaning further development could find Cerebras' WSE in more typical server implementations in the future.

(Image credit: Cerebras Systems)

The DOE also has a history of investing heavily in the critical software ecosystem needed for mass adoption, as we've seen with its investment in AMD's ROCm software suite for the exascale-class Frontier supercomputer, the work the agency is doing with Intel's oneAPI for the Aurora supercomputer, and the partnership with Cray for El Capitan.

(Image credit: Cerebras Systems)

AI models are exploding in size, doubling every five months. That doesn't currently appear to be a problem for the WSE's 18GB of SRAM, but because on-chip memory capacity is fixed at fabrication, larger models could soon outstrip a single chip's native capacity. Cerebras tells us it can simply use multiple chips in tandem to tackle larger workloads: unlike paired GPUs, which mirror the model across units when run data parallel (think SLI), the WSE runs in model parallel mode, meaning a pair can utilize twice the memory capacity, thus scaling linearly. The company also says that scaling will continue with each additional wafer-size chip employed for AI workloads.
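The distinction can be sketched in a few lines. This is an illustrative model of the two scaling regimes, not Cerebras's actual software stack; the function name and the 18GB figure (taken from the article) are the only inputs:

```python
# Illustrative sketch: usable model memory under data-parallel vs.
# model-parallel execution across multiple chips.

PER_CHIP_SRAM_GB = 18  # on-chip memory per WSE, per the article

def usable_memory_gb(num_chips: int, mode: str) -> int:
    """Memory capacity available to hold a single model's weights."""
    if mode == "data_parallel":
        # Each chip holds a full mirrored copy of the model (GPU/SLI
        # style), so capacity does not grow with chip count.
        return PER_CHIP_SRAM_GB
    if mode == "model_parallel":
        # The model is partitioned across chips, so capacity scales
        # linearly with the number of chips.
        return PER_CHIP_SRAM_GB * num_chips
    raise ValueError(f"unknown mode: {mode}")

print(usable_memory_gb(2, "data_parallel"))   # 18
print(usable_memory_gb(2, "model_parallel"))  # 36
```

In other words, two mirrored GPUs still see only one chip's worth of model memory, while two model-parallel WSEs see a combined 36GB.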


We're told that today's announcement just covers the basics of the partnership, but that more details, specifically with regard to co-development, will be shared at the Supercomputing tradeshow in November.