SAN JOSE, Calif. — Broadcom’s latest communications processor rides a 2.5D chip stack with HBM2 memory. Jericho2 uses the boost in memory bandwidth to leapfrog the performance of OEM ASICs in high-end switches and routers.

The chip brings to networking the packaging technology pioneered by AMD, Nvidia, and Xilinx in high-end graphics processors and FPGAs. With its StrataDNX Jericho2, Broadcom also takes a small step toward open programming environments, providing select customers with C++ tools for the chip.

The 16-nm processor, announced Tuesday (March 6), packs a whopping 208 50-Gbits/s PAM4 SerDes to deliver 10 Tbits/s of aggregate throughput, supporting up to 36 400-Gbits/s Ethernet links. It leads a wave of high-end networking devices aiming to enable 400-Gbits/s links in telecom core networks and large data centers.
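The headline numbers reduce to simple lane arithmetic: 208 SerDes at 50 Gbits/s each comes to roughly 10 Tbits/s in aggregate, and a 400-Gbits/s Ethernet port is typically built from eight 50-Gbits/s PAM4 electrical lanes. A quick back-of-the-envelope sketch (the eight-lanes-per-port figure is standard 400GbE practice, not stated in the article):

```python
# Back-of-the-envelope check of Jericho2's headline throughput.
SERDES_COUNT = 208       # per the article
LANE_GBPS = 50           # 50-Gbit/s PAM4 lanes
LANES_PER_400GBE = 8     # a 400GbE port is commonly 8 x 50G PAM4 lanes

aggregate_gbps = SERDES_COUNT * LANE_GBPS
print(aggregate_gbps / 1000, "Tbit/s aggregate")  # 10.4 Tbit/s, the ~10-Tbit/s headline

# Each 400GbE port consumes eight of those lanes.
print(LANES_PER_400GBE * LANE_GBPS, "Gbit/s per port")  # 400 Gbit/s
```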

The HBM2 stack provides eight times the memory bandwidth of the external DRAM used in Broadcom’s previous 28-nm chip. It leapfrogs the performance of Nokia’s FP4, the current king of in-house networking ASICs.
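For a sense of scale, a single HBM2 stack moves data over a 1,024-bit interface, and at the JEDEC-rated 2.0 Gbits/s per pin that works out to 256 GB/s per stack. These figures come from the HBM2 standard, not from the article:

```python
# Peak bandwidth of one HBM2 stack, from interface width and per-pin rate.
BUS_WIDTH_BITS = 1024    # HBM2 stack interface width (JEDEC JESD235)
PIN_RATE_GBPS = 2.0      # Gbit/s per pin at HBM2's top speed grade

bandwidth_gbytes = BUS_WIDTH_BITS * PIN_RATE_GBPS / 8  # bits -> bytes
print(bandwidth_gbytes, "GB/s per stack")  # 256.0 GB/s
```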

Jericho2 is “a big deal as updates go,” said Bob Wheeler, analyst with the Linley Group. “It’s a major new generation … [that uses HBM and 2.5D] to remove the memory bottleneck.”

While Jericho2 is Broadcom’s first merchant chip to ride a 2.5D stack, the company has helped design similar products as machine-learning ASICs for unnamed customers, said Oozie Parizer, a marketing manager for Jericho2. Indeed, Intel’s Nervana uses a 2.5D stack, as does an AI training processor expected from startup Graphcore.

A silicon substrate (grey) connects Jericho2 (blue) with an HBM2 stack (yellow). Images: Broadcom.

Although 2.5D chip stacks remain costly, Broadcom expects the chip to power OEM systems by the end of the year at a relatively low $1,000 per 400-GbE port.

Parizer called on memory vendors to lower the prices of their HBM stacks “to make this more of a commodity market because this is the future in networking and high-end processing. Our advances in processors have been on the order of 5x in two years, but they have not been matched by advances in DRAM.”

The chip is now sampling to customers, with lab devices running HBM2 modules at target speed. Broadcom expects it to be in production in nine to 12 months.