In results announced in conjunction with the ISC High Performance 2019 conference, the University of Cambridge’s Cumulus supercomputer claimed the top spot in the latest I/O-500 rankings.

When the University of Cambridge Research Computing Service rolled out its latest supercomputer, the organization’s director, Dr. Paul Calleja, talked about having cracked the HPC storage problem with a unique Data Accelerator.

Dr. Calleja’s observation was validated recently at the ISC High Performance 2019 conference, when the Virtual Institute for I/O released its latest I/O-500 list. That list ranked the university’s Cumulus supercomputer as the world’s top HPC system in terms of storage performance. The system posted an astounding score of 620.69 on the I/O-500 benchmarks, surpassing the second-place finisher by some 290 points.

Similar to the TOP500 list of the world’s most powerful commercially available computer systems, the I/O-500 is a young but widely recognized ranking of HPC storage system performance. It encompasses a suite of benchmarks that enables apples-to-apples comparisons of storage performance across HPC systems.
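For context on how that 620.69 figure is computed: roughly speaking, the I/O-500 total score is the geometric mean of a bandwidth score (in GiB/s, drawn from IOR-based read/write phases) and a metadata score (in kIOPS, drawn from mdtest-based create/stat/delete phases), each of which is itself a geometric mean of its sub-benchmark results. The Python sketch below reproduces just that scoring arithmetic with made-up numbers; it is an illustration of the math, not the benchmark itself, and the phase lists are simplified.

```python
from math import prod

def geometric_mean(values):
    """Geometric mean, the combining rule used by the I/O-500 scoring."""
    return prod(values) ** (1.0 / len(values))

# Hypothetical sub-benchmark results for illustration only (not Cumulus's
# actual numbers): bandwidth phases in GiB/s, metadata phases in kIOPS.
bandwidth_results = [180.0, 120.0, 200.0, 150.0]    # e.g., IOR easy/hard write/read
metadata_results = [2500.0, 1800.0, 3100.0, 2200.0]  # e.g., mdtest create/stat/delete

bw_score = geometric_mean(bandwidth_results)   # GiB/s
md_score = geometric_mean(metadata_results)    # kIOPS
total = geometric_mean([bw_score, md_score])   # the headline I/O-500 score

print(f"bandwidth score: {bw_score:.2f} GiB/s")
print(f"metadata score:  {md_score:.2f} kIOPS")
print(f"total score:     {total:.2f}")
```

Because the final score multiplies bandwidth and metadata performance together, a system cannot climb the list on raw throughput alone; it has to be fast at small metadata operations, too.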

The Data Accelerator

The Data Accelerator embedded in the Cumulus cluster incorporates technologies from Dell EMC, Intel and the University of Cambridge, along with an innovative orchestrator built by the University of Cambridge and StackHPC. The Cambridge Research Computing Service leverages the Data Accelerator and the Distributed Namespace (DNE) feature of the Lustre file system to optimize the Cumulus cluster for top I/O performance.
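Lustre’s DNE feature lets a directory’s namespace, and therefore its metadata traffic, be striped across multiple metadata targets (MDTs) instead of being funneled through a single metadata server. As a rough illustration only, the Python sketch below shells out to the standard Lustre `lfs` client tool to create and inspect such a striped directory. The mount point and stripe count are hypothetical, and none of this reflects the Data Accelerator’s actual orchestration code.

```python
import subprocess

# Minimal sketch of creating a DNE "striped" directory on a Lustre file
# system, so that metadata operations for its contents are spread across
# several metadata targets (MDTs). Path and count are hypothetical.
LUSTRE_DIR = "/lustre/scratch/striped_project"  # hypothetical mount point
MDT_COUNT = 4                                   # hypothetical stripe count

# 'lfs setdirstripe -c N DIR' creates DIR with its namespace striped over
# N MDTs (requires a DNE-enabled Lustre and suitable privileges).
subprocess.run(
    ["lfs", "setdirstripe", "-c", str(MDT_COUNT), LUSTRE_DIR],
    check=True,
)

# 'lfs getdirstripe DIR' reports which MDTs back the directory.
subprocess.run(["lfs", "getdirstripe", LUSTRE_DIR], check=True)
```

Spreading the namespace this way is what lets metadata-heavy workloads, such as millions of small-file creates and stats, scale with the number of metadata servers rather than bottleneck on one.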

The system optimization work has led to a huge leap forward in storage performance, according to Dr. Calleja. The accelerator delivers more than 500 GB/s of read bandwidth, which the Research Computing Service says makes it the UK’s fastest HPC I/O platform. The result is a single heterogeneous x86/GPU platform that gives researchers lightning-fast throughput via the UK’s most advanced supercomputing cloud.

“With DNE, the IOPS performance of this solution is amazing,” Dr. Calleja says in a Dell EMC case study. “The guys had to work around many Lustre bugs and adjust many Lustre parameters just to get it to run, but now we have stable, repeatable and very high-performance runs with no errors and deterministic behavior, so I think we have cracked the HPC storage problem.”[1]

As for that HPC storage problem: for years, the research community has wrestled with persistent storage I/O challenges. While data-processing power raced forward, storage I/O capabilities often lagged behind, creating a bottleneck that slows time to insight. That gap has made I/O performance a top technical concern for anyone running data-intensive workflows.
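A quick back-of-the-envelope calculation shows why the bottleneck bites so hard. In the sketch below, which uses entirely made-up numbers, speeding up only the compute phase of a workflow leaves total runtime increasingly dominated by the unchanged I/O phase, an Amdahl’s-law effect.

```python
# Illustrative only: if compute gets faster but storage I/O does not,
# the I/O phase comes to dominate total time to insight.
compute_hours = 8.0   # hypothetical compute time per job
io_hours = 2.0        # hypothetical storage I/O time per job

for compute_speedup in (1, 2, 4, 8, 16):
    total = compute_hours / compute_speedup + io_hours
    print(f"{compute_speedup:>2}x faster compute -> {total:5.2f} h total "
          f"({io_hours / total:.0%} of it spent in I/O)")
```

With a 16x compute speedup, the hypothetical job still takes 2.5 hours, 80 percent of it waiting on storage, which is exactly the imbalance the Data Accelerator is designed to attack.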

Now the bright minds at the University of Cambridge have cracked that storage problem, and for their efforts they have landed at the top of the latest I/O-500 list.

To learn more

For a closer look at the University of Cambridge’s HPC cluster, read the Dell EMC case study “UK Science Cloud.” And for a closer, more technical look at the Data Accelerator, visit the Research Computing Service’s Data Accelerator site.

[1] Dell EMC case study, “UK Science Cloud,” November 2018.