From ScientificComputing

This page describes the hardware of the different generations (I-VI) of Euler. For information on how to use the cluster, please check the tutorials page.

Introduction

Euler stands for Erweiterbarer, Umweltfreundlicher, Leistungsfähiger ETH-Rechner ("expandable, environmentally friendly, high-performance ETH computer"). It is an evolution of the Brutus concept. Euler also incorporates new ideas from the Academic Compute Cloud project in 2012–2013 as well as the Calculus prototype in 2013.

Euler II (left) and Euler I (right)

Euler has been expanded regularly since its inception in 2013. The first phase, Euler I, was purchased at the end of 2013 and was in operation from 2014 to 2018. The second phase, Euler II, was purchased at the end of 2014 and was in operation from 2015 to 2020. Euler III was purchased at the end of 2016 and has been in operation since the beginning of 2017. Euler IV was purchased at the end of 2017 and has been in operation since the beginning of 2018. Euler V, which replaced Euler I, was purchased in the fall of 2018 and has been in operation since the end of 2018. Euler VI was purchased at the end of 2019 and has been in operation since the beginning of 2020. Euler VII is expected to be installed in November 2020.

Specifications

Euler I

Euler I (2014-2018) contained 448 compute nodes — Hewlett-Packard BL460c Gen8 —, each equipped with:

Two 12-core Intel Xeon E5-2697v2 processors (2.7 GHz nominal, 3.0–3.5 GHz peak)

Between 64 and 256 GB of DDR3 memory clocked at 1866 MHz (64 × 256 GB; 32 × 128 GB; 352 × 64 GB)

All compute nodes of Euler I were decommissioned in August 2018 to make room for Euler V.
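As a quick sanity check, the three memory configurations quoted in parentheses above should account for all 448 nodes. A small illustrative calculation (the per-node figures are taken from the spec above):

```python
# Verify the Euler I memory distribution and derive aggregate figures.
mem_configs = {256: 64, 128: 32, 64: 352}  # memory per node (GB) -> number of nodes

total_nodes = sum(mem_configs.values())
total_cores = total_nodes * 2 * 12  # two 12-core Xeon E5-2697v2 per node
total_mem_tib = sum(gb * n for gb, n in mem_configs.items()) / 1024

print(total_nodes)     # 448 nodes, matching the figure above
print(total_cores)     # 10752 cores in total
print(total_mem_tib)   # 42.0 TiB of aggregate memory
```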

Euler II

Euler II (2015-2020) contained 768 compute nodes of a newer generation — BL460c Gen9 —, each equipped with:

Two 12-core Intel Xeon E5-2680v3 processors (2.5-3.3 GHz)

Between 64 and 512 GB of DDR4 memory clocked at 2133 MHz (32 × 512 GB; 32 × 256 GB; 32 × 128 GB; 672 × 64 GB)

All these compute nodes were decommissioned in July 2020 to make room for new nodes.

Euler II still contains 4 very large memory nodes — Hewlett-Packard DL580 Gen9 —, each equipped with:

Four 16-core Intel Xeon E7-8867v3 processors (2.5 GHz)

3072 GB of DDR4 memory clocked at 2133 MHz

Euler III

Euler III contains 1215 compute nodes — Hewlett-Packard m710x —, each equipped with:

A quad-core Intel Xeon E3-1585Lv5 processor (3.0-3.7 GHz)

32 GB of DDR4 memory clocked at 2133 MHz

A 256 GB NVMe flash drive

All these nodes are connected to the rest of the cluster via 10G/40G Ethernet.

Euler IV

Euler IV contains 288 high-performance nodes — Hewlett-Packard XL230k Gen10 —, each equipped with:

Two 18-core Intel Xeon Gold 6150 processors (2.7-3.7 GHz)

192 GB of DDR4 memory clocked at 2666 MHz

All these nodes are connected together via a new 100 Gb/s InfiniBand EDR network.
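From the figures above one can estimate the theoretical peak of the Euler IV partition. The calculation below assumes 32 double-precision floating-point operations per core per clock (two AVX-512 FMA units, a property of the Xeon Gold 6150 that is not stated on this page) and the nominal 2.7 GHz clock; real AVX-512 clocks and sustained performance are lower, so this is an upper bound, not a benchmark result:

```python
# Back-of-the-envelope theoretical peak for Euler IV (illustrative only).
nodes = 288
cores_per_node = 2 * 18       # two 18-core Xeon Gold 6150 per node
clock_hz = 2.7e9              # nominal clock from the spec above
flops_per_cycle = 32          # assumed: 2 AVX-512 FMA units x 8 doubles x 2 ops

peak_tflops = nodes * cores_per_node * clock_hz * flops_per_cycle / 1e12
print(f"~{peak_tflops:.0f} TFLOPS theoretical peak")  # ~896 TFLOPS
```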

Euler V

Euler V contains 352 compute nodes — Hewlett-Packard BL460c Gen10 —, each equipped with:

Two 12-core Intel Xeon Gold 5118 processors (2.3 GHz nominal, 3.2 GHz peak)

96 GB of DDR4 memory clocked at 2400 MHz

Euler VI

Euler VI contains 216 compute nodes from Swiss company Dalco AG, each equipped with:

Two 64-core AMD EPYC 7742 processors (2.25 GHz nominal, 3.4 GHz peak)

512 GB of DDR4 memory clocked at 3200 MHz

All these nodes are connected together via a new 100 Gb/s InfiniBand HDR network.

Storage

Euler contains two types of storage system:

An enterprise-class NAS system (NetApp FAS 9000 & AFF A300) for long-term storage, such as home directories, applications, virtual machines, project data, etc.

A high-performance Lustre parallel file system (DDN ES14KX) for short- and medium-term storage, such as scratch and work file systems

Home directories and other critical data are backed up daily; all other data (except scratch) are backed up at least once per week for disaster recovery.

Networks

Euler contains multiple networks:

A common 10 Gb/s Ethernet network for data transfer between the storage systems and the cluster's compute and login nodes

Three separate 56 Gb/s InfiniBand FDR networks for data transfer between the compute nodes themselves (e.g. MPI)

A 100 Gb/s InfiniBand EDR network for data transfer within Euler IV (MPI) and between Euler IV and the new Lustre high-performance storage system

A 200/100 Gb/s InfiniBand HDR network for data transfer within Euler VI, with 100 Gb/s from compute nodes to switches and multiple 200 Gb/s links between switches
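The link speeds above are quoted in bits per second. A quick conversion shows what they mean for moving data, e.g. draining the full 512 GB of one Euler VI node over a 100 Gb/s link (a back-of-the-envelope sketch that ignores protocol overhead):

```python
# Convert network link speeds from Gb/s to GB/s and estimate a transfer time.
def gbps_to_gb_per_s(gbps):
    """Gigabits per second -> gigabytes per second (8 bits per byte)."""
    return gbps / 8

for gbps in (10, 56, 100, 200):  # the link speeds listed above
    print(f"{gbps} Gb/s = {gbps_to_gb_per_s(gbps)} GB/s")

# Time to move one Euler VI node's 512 GB of memory over a 100 Gb/s link:
seconds = 512 / gbps_to_gb_per_s(100)
print(f"512 GB over 100 Gb/s: ~{seconds:.0f} s")  # ~41 s, overhead ignored
```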

Service description

The official service description and the current price list are available on the IT service catalogue.