Tokyo Institute of Technology today announced plans to create Japan’s fastest AI supercomputer, built on NVIDIA’s accelerated computing platform.

The new system, known as TSUBAME3.0, is expected to deliver more than twice the performance of its predecessor, TSUBAME2.5. It will use Pascal-based Tesla P100 GPUs, which are nearly three times as efficient as their predecessors, to reach an expected 12.2 petaflops of double-precision performance. That would rank it among the world’s 10 fastest systems, according to the latest TOP500 list, released in November.

TSUBAME3.0 is expected to excel at AI computation, delivering more than 47 PFLOPS of AI horsepower. When operated concurrently with TSUBAME2.5, it is expected to deliver 64.3 PFLOPS, making it Japan’s highest-performing AI supercomputer.
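The combined figure is a straight sum of the two systems' AI (reduced-precision) peaks. As a back-of-the-envelope check, here is a minimal sketch; the per-system values are assumptions, since the article states only "more than 47 PFLOPS" for TSUBAME3.0 and 64.3 PFLOPS combined:

```python
# Sanity check on the combined AI throughput of the two systems.
# Per-system figures below are assumptions, not from the article:
# ~47.2 PFLOPS for TSUBAME3.0 and ~17.1 PFLOPS for TSUBAME2.5.
tsubame3_ai_pflops = 47.2   # assumed TSUBAME3.0 AI peak
tsubame25_ai_pflops = 17.1  # assumed TSUBAME2.5 AI peak

combined = tsubame3_ai_pflops + tsubame25_ai_pflops
print(f"{combined:.1f} PFLOPS")  # consistent with the quoted 64.3 PFLOPS
```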


Once up and running this summer, TSUBAME3.0 is expected to be used for education and high-technology research at Tokyo Tech, and will also be accessible to outside researchers in the private sector. It will additionally serve as an information infrastructure center for leading Japanese universities.

Tokyo Tech’s Satoshi Matsuoka, a professor of computer science who is building the system, said, “NVIDIA’s broad AI ecosystem, including thousands of deep learning and inference applications, will enable Tokyo Tech to begin training TSUBAME3.0 immediately to help us more quickly solve some of the world’s once unsolvable problems.”

“Artificial intelligence is rapidly becoming a key application for supercomputing,” said Ian Buck, vice president and general manager of Accelerated Computing at NVIDIA. “NVIDIA’s GPU computing platform merges AI with HPC, accelerating computation so that scientists and researchers can drive life-changing advances in such fields as healthcare, energy and transportation.”