It didn’t take long for Nvidia’s monstrous Tesla P100 GPU to make its mark in an ongoing race to build the world’s fastest computers.

Just a day after Nvidia’s CEO said he was “friggin’ excited” to introduce the Tesla P100, the company announced that its fastest GPU ever would be used to upgrade a supercomputer called Piz Daint. Roughly 4,500 of the GPUs will be installed in the machine at the Swiss National Supercomputing Centre (CSCS) in Lugano, Switzerland.

Piz Daint already has a peak performance of 7.8 petaflops, making it the seventh-fastest computer in the world. The fastest in the world is the Tianhe-2 in China, which has a peak performance of 54.9 petaflops, according to the Top500 list released in November.

Two of the world’s ten fastest computers use GPUs as co-processors to speed up simulations and scientific applications: Titan, at the U.S. Oak Ridge National Laboratory, and Piz Daint. The latter is used to analyze data from the Large Hadron Collider at CERN.

Nvidia has already built a supercomputer-in-a-box around the Tesla P100. The DGX-1 can deliver 170 teraflops of half-precision performance, or about 2 petaflops when several are installed in a rack. It packs eight Tesla P100 GPUs, two Xeon CPUs, 7TB of solid-state storage and dual 10-Gigabit Ethernet ports.

The GPU will also appear in volume servers from IBM, Hewlett Packard Enterprise, Dell and Cray by the first quarter of next year. Nvidia CEO Jen-Hsun Huang said companies building mega data centers for the cloud will be using servers with Tesla P100s by the end of this year.

The Tesla P100 is one of the largest chips ever made and may be one of the fastest. It has 15.3 billion transistors and packs several new GPU technologies that could give Piz Daint a serious boost in horsepower.

The P100 is based on a new architecture called Pascal, which adds new instructions to speed up scientific computing and deep learning. Developers tap that horsepower through CUDA, Nvidia’s own parallel programming framework for writing applications that harness the computing boost Pascal delivers.
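To give a flavor of what CUDA programming looks like, here is a minimal, generic vector-addition sketch. It is an illustrative example, not Nvidia sample code; it assumes the CUDA toolkit is installed and compiles with nvcc. The unified-memory calls it uses (`cudaMallocManaged`) are a standard part of the CUDA runtime that Pascal-class GPUs support in hardware.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each GPU thread adds one pair of elements. Thousands of these
// threads run in parallel across the GPU's cores.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;            // one million elements
    size_t bytes = n * sizeof(float);

    // Unified memory keeps the sketch short: the same pointers are
    // valid on both the CPU and the GPU.
    float *a, *b, *c;
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();          // wait for the GPU to finish

    printf("c[0] = %.1f\n", c[0]);

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

The same source compiles for any CUDA-capable GPU; the scheduler simply spreads the thread blocks across however many cores the chip provides, which is why code like this scales from a laptop GPU up to a P100.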

The GPU has a peak half-precision (FP16) floating-point performance of about 21.2 teraflops, almost twice as fast as Nvidia’s Tesla M60, a top-line GPU based on the older Maxwell architecture.

The Tesla P100 has 16GB of HBM2 (High-Bandwidth Memory 2), a new type of fast memory making its way to GPUs. The memory chips are stacked on top of each other in a 3D format instead of being placed next to each other. The new format makes HBM2 a faster and denser form of memory.

The new GPU is also the first with Nvidia’s homegrown NVLink interface, which can transfer data five times faster than PCI-Express 3.0.

The Piz Daint computer upgrade is expected to be deployed by the end of this year.

An earlier version of this story mischaracterized the Piz Daint project. The supercomputer already exists and is being upgraded. It also misstated Piz Daint’s peak performance, which is 7.8 petaflops, and misstated the number of Nvidia DGX-1 computers required to reach 2 petaflops.