Microsoft will add Cray supercomputers to its Azure cloud computing service to serve customers with high performance computing (HPC) workloads.

Cloud computing systems like Azure can be used to build large cluster-like machines for high performance distributed workloads. Combined with FPGAs and GPUs, these clusters can sometimes compete with traditional supercomputers.

But sometimes, a workload really does need the high performance, low-latency interconnects and storage that are the hallmark of "real" supercomputers. That's why Microsoft is adding Cray XC and Cray CS supercomputer clusters, along with ClusterStor storage, to its Azure lineup. The machines are intended for tasks such as analytics, climate modeling, engineering simulations, and scientific and medical research. The companies envisage customers combining Cray HPC with Azure workloads to get better performance and greater scaling than either company could provide alone.

The Cray machines use a mix of Intel Xeon processors, Nvidia Tesla P100 GPUs, Xeon Phi coprocessors, and FPGAs, with a number of different interconnects, including InfiniBand (also used in Azure) and Cray's own Aries interconnect. These allow processors within each supercomputer to communicate with one another with more bandwidth and lower latency than if they were using common-or-garden Ethernet. There are many ways of connecting the processors to one another, and likewise the machines within a cluster; Cray offers a number of configurations, with the optimal system topology depending on the needs of the workload.
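To see why interconnect latency matters as much as bandwidth for tightly coupled HPC jobs, consider a simple back-of-the-envelope model: the time to move one message is a fixed per-message latency plus the time to push its bits across the link. The latency and bandwidth figures below are rough illustrative assumptions, not vendor specifications:

```python
def transfer_time_us(msg_bytes, latency_us, bandwidth_gbps):
    """Time to move one message (microseconds): fixed latency plus
    serialization time. bandwidth_gbps * 1e3 converts Gbit/s to bits/us."""
    return latency_us + (msg_bytes * 8) / (bandwidth_gbps * 1e3)

# A halo-exchange-style pattern: many small messages per simulation step.
msg_bytes = 8 * 1024            # 8 KiB per neighbor message
messages_per_step = 1_000_000   # across the whole job

# Assumed ballpark numbers for comparison purposes only.
links = [
    ("commodity Ethernet (~50 us, 10 Gbps)", 50.0, 10.0),
    ("HPC interconnect (~1 us, 100 Gbps)", 1.0, 100.0),
]

for name, latency_us, bw_gbps in links:
    per_msg = transfer_time_us(msg_bytes, latency_us, bw_gbps)
    total_s = per_msg * messages_per_step / 1e6
    print(f"{name}: {per_msg:.2f} us/msg, "
          f"{total_s:.1f} s of communication per step")
```

With small messages the fixed latency dominates the total, which is why a low-latency fabric like Aries or InfiniBand can be worth an order of magnitude even when raw bandwidth differences are smaller.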

Unlike most Azure compute resources, which are typically shared between customers, the Cray supercomputers will be dedicated resources. This suggests that Microsoft won't be offering a way of timesharing or temporarily borrowing a supercomputer. Rather, it's a way for existing supercomputer users to colocate their systems with Azure, getting the lowest-latency, highest-bandwidth connection to Azure's computing capabilities rather than hosting those systems on premises.