The University of Sydney today cut the ribbon on its Artemis supercomputer, built in partnership with hardware vendor Dell.

Artemis was designed for complex data analysis in areas such as molecular biology, economics, mechanical engineering, and oceanography. It is available for researchers to use free of charge.

Professor Edward Holmes from the Charles Perkins Centre told iTnews Artemis was necessary for the university to enter the world of big data science.

"To make an impact in this area it is crucial to have sufficient computing power, so the launch of Artemis is both important and timely," Holmes said.

The system has been in use since April on advanced projects across diverse fields, Holmes said, and is currently being applied to studies into urgent public health issues.

"Artemis is already helping University of Sydney researchers perform cutting-edge research, such as analysing the spread of Ebola virus through West Africa," Holmes said.

The system's standard compute nodes are based on Dell's PowerEdge R630 server with dual 12-core Intel Xeon E5-2680 v3 (Haswell generation) processors. Each standard node also includes 128 gigabytes of DDR4 RAM and two 1 terabyte SAS disks in a RAID-1 configuration for storage.

Rack housing the Artemis HPC nodes. Source: University of Sydney

This gives the Artemis HPC cluster a total of 1344 standard compute cores.
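The quoted figures imply the cluster's standard node count, which the university does not state directly. A back-of-the-envelope check, assuming every standard node carries the dual 12-core configuration described above:

```python
# Infer the standard compute node count from the published specs.
# The node count itself is an inference, not a figure from the university.
cores_per_cpu = 12       # Intel Xeon E5-2680 v3
cpus_per_node = 2        # dual-socket PowerEdge R630
total_cores = 1344       # figure quoted for the cluster

cores_per_node = cores_per_cpu * cpus_per_node   # 24 cores per node
node_count = total_cores // cores_per_node
print(node_count)  # → 56 standard compute nodes
```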

Artemis also features two high-memory compute nodes, based on the same Dell PowerEdge R630 chassis as the standard nodes, but with 256GB of memory and four 1TB disks in a RAID-10 configuration.

Researchers also have access to GPU-accelerated computing for analytics and similar workloads that can exploit the highly parallel architecture of graphics processors.

The five GPU compute nodes are built with Dell PowerEdge R730 servers, each fitted with two NVIDIA Tesla K40 cards, for a total of ten GPUs.

Artemis also features a management server for handling cluster workflows, along with one control and two login nodes.

Rear view of the Artemis HPC rack.

Communication with the cluster, and internally between the nodes, is via Mellanox FDR 56 gigabit/s non-blocking InfiniBand links, as well as 1Gbps and 10Gbps Ethernet connections.

Artemis' name was chosen from 60 suggestions submitted to the university's research computing steering group.

The HPC cluster is fully funded by the University of Sydney, which declined to divulge the cost.