Since our initial coverage of the TSUBAME3.0 supercomputer yesterday, more details have come to light on this innovative project. Of particular interest is a new board design for NVLink-equipped Pascal P100 GPUs that will create another entrant to the space currently occupied by Nvidia’s DGX-1 system, IBM’s “Minsky” platform and the Supermicro SuperServer (1028GQ-TXR).

The press photo shared by Tokyo Tech revealed TSUBAME3.0 to be an HPE-branded SGI ICE supercomputer. The choice is not surprising considering that SGI has long held a strong presence in Japan. SGI Japan, the primary contractor here, has collaborated with Tokyo Tech on a brand-new board design that we’ve been told is destined for the HPE product line.

The board is the first of its kind to combine Nvidia GPUs (four), NVLink processor interconnect technology, Intel processors (two) and the Intel Omni-Path Architecture (OPA) fabric. Four SXM2 P100s are configured into a hybrid mesh cube, making full use of the NVLink (1.0) interconnect to provide high bandwidth between the GPUs. As you can see in the figure on the right, each half of the quad connects to its own PLX PCIe switch, which links to an Intel Xeon CPU. The PCIe switches also enable direct one-to-one connections between the GPUs and an Omni-Path link. A slide from a presentation shared by Tokyo Tech depicts how this hooks into the fabric.
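The per-node layout described above can be sketched as a tiny data structure, just to make the wiring concrete; every device name here is ours, purely for illustration, not Tokyo Tech's:

```python
# Illustrative model of the described node layout (all names hypothetical):
# two PLX PCIe switches, each serving two SXM2 P100s, one Xeon, and one
# Omni-Path link; the four GPUs are separately meshed together over NVLink.
pcie = {
    "plx0": {"gpus": ["gpu0", "gpu1"], "cpu": "cpu0", "opa": "opa0"},
    "plx1": {"gpus": ["gpu2", "gpu3"], "cpu": "cpu1", "opa": "opa1"},
}

def gpu_to_opa(gpu):
    """Return the Omni-Path port a GPU reaches through its own PCIe switch."""
    for sw in pcie.values():
        if gpu in sw["gpus"]:
            return sw["opa"]
    raise KeyError(gpu)

# Every GPU has a direct switch-local path to the fabric, which is the
# one-to-one GPU-to-Omni-Path property the article describes.
assert gpu_to_opa("gpu0") == "opa0"
assert gpu_to_opa("gpu3") == "opa1"
```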

TSUBAME3.0 will comprise 540 such nodes, for a total of 2,160 SXM2 P100s and 1,080 Xeon E5-2680 v4 (14-core) CPUs.

At the rack level, 36 server blades house a total of 144 Pascals and 72 Xeons. The components are water cooled with a warm inlet water temperature of 32 degrees Celsius, yielding a PUE of 1.033. “That’s lower than any other supercomputer I know,” commented Tokyo Tech Professor Satoshi Matsuoka, who is leading the design. (Here’s a diagram of the entire cooling system.)
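The node, blade, and rack counts are mutually consistent, assuming one node per blade (our reading, given 540 nodes across racks of 36 blades):

```python
# Back-of-the-envelope consistency check of the published counts.
nodes = 540
gpus_per_node, cpus_per_node = 4, 2
blades_per_rack = 36  # assuming one node per blade

assert nodes * gpus_per_node == 2160           # SXM2 P100s system-wide
assert nodes * cpus_per_node == 1080           # Xeon E5-2680 v4s system-wide
assert blades_per_rack * gpus_per_node == 144  # Pascals per rack
assert blades_per_rack * cpus_per_node == 72   # Xeons per rack
assert nodes // blades_per_rack == 15          # compute racks
```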

Each node also has 2TB of NVMe SSD for I/O acceleration, totaling more than 1 petabyte for the entire system. It can be used locally, or aggregated on the fly with BeeGFS as an ad hoc “burst buffer” filesystem, Matsuoka told us.
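The “more than 1 petabyte” figure follows directly from the node count:

```python
# Aggregate burst-buffer capacity: 540 nodes x 2 TB of local NVMe each.
nodes, tb_per_node = 540, 2
total_tb = nodes * tb_per_node
assert total_tb == 1080   # i.e. just over 1 PB system-wide
```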

The second-tier storage is composed of DDN’s EXAScaler technology, which uses controller integration to achieve a 15.9PB Lustre parallel file system in three racks.

With 15 SGI ICE XA compute racks and two network racks (plus the three storage racks), TSUBAME3.0 delivers 12.2 petaflops of spec’d computational power within 20 racks, excluding the in-row chillers. This makes TSUBAME3.0 the smallest >10-petaflops machine in the world, said Matsuoka, who offered for comparison the K computer (10.5 Linpack petaflops, 11.3 peak), which extends to 1,000 racks, a 66X delta.
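The 66X figure evidently compares compute-rack counts; a quick check (rack counts are from the article, the comparison method is our assumption):

```python
# Rack-count comparison with the K computer, as quoted by Matsuoka.
# Comparing compute racks only: ~1,000 for K vs 15 for TSUBAME3.0.
k_racks, t3_compute_racks = 1000, 15
assert k_racks // t3_compute_racks == 66   # the quoted ~66X delta
```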

Like TSUBAME2.0/2.5, the new system continues the emphasis on smart partitioning. “The TSUBAME3.0 node is ‘fat’ but we want flexible partitioning,” said Matsuoka. “We will be using container technology as a default, being able to partition the nodes arbitrarily into pieces for flexible scheduling and achieving very high utilization. A job that uses only CPUs or just one GPU won’t waste the remaining resources on the node.”

As we noted in our earlier coverage, total rated system performance is 12.15 double-precision petaflops, 24.3 single-precision petaflops and 47.2 half-precision petaflops, aka “AI-Petaflops.”
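These figures can be roughly reconstructed from Nvidia’s published P100 (SXM2) peak rates of 5.3/10.6/21.2 teraflops at double/single/half precision; the GPUs alone land within a few percent of the rated totals, with the Xeons presumably supplying the remainder:

```python
# Rough reconstruction of the rated figures from per-device peak specs.
gpus = 2160
gpu_dp = gpus * 5.3 / 1000    # petaflops, double precision
gpu_sp = gpus * 10.6 / 1000   # single precision
gpu_hp = gpus * 21.2 / 1000   # half precision ("AI-Petaflops")

# GPU contribution alone comes within ~6% of the rated system totals.
assert abs(gpu_dp - 12.15) / 12.15 < 0.07   # ~11.45 PF from GPUs
assert abs(gpu_sp - 24.3) / 24.3 < 0.07     # ~22.9 PF from GPUs
assert abs(gpu_hp - 47.2) / 47.2 < 0.07     # ~45.8 PF from GPUs
```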

“Since we will keep TSUBAME2.5 and KFC alive, the combined ‘AI-capable’ performances of the three machines will reach 65.8 petaflops, making it the biggest capacity infrastructure for ML/AI in Japan, or 6 times faster than the K-computer,” said Matsuoka.

At yesterday’s press event in Japan, Professor Matsuoka also revealed that Tokyo Tech and the National Institute of Advanced Industrial Science and Technology (AIST) will open their joint “Open Innovation Laboratory” (OIL) next Monday, Feb. 20. Prof. Matsuoka will lead the organization, and TSUBAME3.0 will be partially used for these joint efforts. The main resource of OIL will be an upcoming massive AI supercomputer, named “ABCI,” announced in late November 2016. So in some respects, TSUBAME3.0, with an operational target of summer 2017, will be a prototype machine for ABCI, which has a targeted installation of Q1 2018.

“Overall, I believe TSUBAME3.0 to be way above class compared to any supercomputers that exist, including the [other] GPU-based ones,” Professor Matsuoka told HPCwire. “There are not really any technical compromises, and thus the efficiency of the machine by every metric will be extremely good.”