TORONTO — High Bandwidth Memory (HBM), like many other memory technologies, is being adopted for emerging use cases that didn’t exist at its inception, thanks to specific characteristics such as performance, capacity, and power consumption. But it won’t be long before there’s pressure to improve on those characteristics as adoption in newer scenarios takes off.

The Jedec Solid State Technology Association’s most recent update to the JESD235 HBM DRAM standard focuses on meeting the needs of applications in which peak bandwidth, bandwidth per watt, and capacity per area are critical metrics. Such applications include high-performance graphics, network and client applications, and high-performance computing.

The JESD235 standard builds on the first HBM standard, released in October 2013 with input from GPU and CPU developers, with the goal of staying ahead of the system-bandwidth growth curve supported by traditional discrete packaged memory.


In a telephone interview with EE Times, Barry Wagner, HBM task group chair for Jedec, said that the update reflects the decision to add some density range to the HBM2 class of products before moving on to the HBM3 generation of devices.

“This update was really focused on extending the support and the design from an 8 Gb-per-layer definition to a 16 Gb-per-layer,” Wagner said.

An HBM DRAM has a distributed interface tightly coupled to the host compute die and divided into independent channels. The channels are completely independent of one another and not necessarily synchronous; each is independently clocked. This wide-interface architecture enables high-speed, low-power operation, with each channel interface maintaining a 128-bit data bus operating at double data rate (DDR).

JESD235B includes a legacy mode to support HBM1 and a new pseudo-channel mode in HBM2.

JESD235B adds a new footprint option to accommodate the 16 Gb-per-layer and 12-high configurations for higher-density components and extends the per-pin bandwidth to 2.4 Gbps. Performance-wise, the HBM standard update supports speeds up to 307 GB/s and densities up to 24 GB per device by leveraging wide I/O and TSV technologies. Bandwidth is delivered across a 1,024-bit-wide device interface that is divided into eight independent channels on each DRAM stack.
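The peak-bandwidth figure follows directly from the interface width and the per-pin data rate. A quick back-of-the-envelope check (a sketch; the variable names are illustrative, not from the spec):

```python
# Back-of-the-envelope check of the JESD235B peak-bandwidth figure.
interface_width_bits = 1024   # total device interface width per stack
channels = 8                  # independent channels per DRAM stack
per_pin_rate_gbps = 2.4       # extended per-pin data rate, Gb/s

# Peak bandwidth = width (bits) x per-pin rate (Gb/s) / 8 bits per byte
peak_gb_per_s = interface_width_bits * per_pin_rate_gbps / 8
print(peak_gb_per_s)          # 307.2, matching the ~307 GB/s cited above

# Each of the eight channels carries a 128-bit slice of the interface
bits_per_channel = interface_width_bits // channels
print(bits_per_channel)       # 128
```

The 307 GB/s in the text is simply this 307.2 GB/s figure rounded down.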

The standard can support 2-high, 4-high, 8-high, and 12-high TSV stacks of DRAM at full bandwidth to allow systems flexibility on capacity requirements from 1 GB to 24 GB per stack.
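The per-stack capacities follow the same kind of arithmetic: gigabits per layer times stack height, divided by eight to convert to gigabytes. A minimal sketch (the helper function is illustrative, not part of the standard):

```python
# Capacity of an HBM stack: Gb per DRAM layer x stack height -> GB.
def stack_capacity_gb(gbit_per_layer, stack_height):
    """Total stack capacity in gigabytes (8 bits per byte)."""
    return gbit_per_layer * stack_height / 8

print(stack_capacity_gb(16, 12))  # 24.0 — the new 16 Gb, 12-high maximum
print(stack_capacity_gb(8, 8))    # 8.0 — an 8 Gb-per-layer, 8-high stack
```

This shows how the 16 Gb-per-layer definition combined with 12-high stacking yields the 24 GB-per-stack ceiling mentioned above.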
