Pleiades Supercomputer

Pleiades, one of the world's most powerful supercomputers, represents NASA's state-of-the-art technology for meeting the agency's supercomputing requirements, enabling scientists and engineers to conduct modeling and simulation for NASA projects. This distributed-memory SGI/HPE ICE cluster is connected with InfiniBand in a dual-plane hypercube topology.

The system contains the following types of Intel Xeon processors: E5-2680v4 (Broadwell), E5-2680v3 (Haswell), E5-2680v2 (Ivy Bridge), and E5-2670 (Sandy Bridge). The system is named after the Pleiades, an astronomical open star cluster.

System Architecture

Manufacturer: SGI/HPE

158 racks (11,207 nodes)

7.09 Pflop/s peak cluster

5.95 Pflop/s LINPACK rating (#32 on November 2019 TOP500 list)

175 Tflop/s HPCG rating (#17 on November 2019 HPCG list)

Total CPU cores: 241,324

Total memory: 927 TB

3 racks (83 nodes total) enhanced with NVIDIA graphics processing units (GPUs)

614,400 total CUDA cores

0.646 Pflop/s total GPU peak



Pleiades Node Detail

Broadwell Nodes
Number of Nodes: 2,016
Processors per Node: 2 fourteen-core processors
Node Type: Intel Xeon E5-2680v4 processors
Processor Speed: 2.4 GHz
Cache: 35 MB for 14 cores
Memory Type: DDR4 FB-DIMMs
Memory Size: 4.6 GB per core, 128 GB per node
Host Channel Adapter: InfiniBand FDR host channel adapter and switches

Haswell Nodes
Number of Nodes: 2,052
Processors per Node: 2 twelve-core processors
Node Type: Intel Xeon E5-2680v3 processors
Processor Speed: 2.5 GHz
Cache: 30 MB for 12 cores
Memory Type: DDR4 FB-DIMMs
Memory Size: 5.3 GB per core, 128 GB per node
Host Channel Adapter: InfiniBand FDR host channel adapter and switches

Ivy Bridge Nodes
Number of Nodes: 5,256
Processors per Node: 2 ten-core processors
Node Type: Intel Xeon E5-2680v2 processors
Processor Speed: 2.8 GHz
Cache: 25 MB for 10 cores
Memory Type: DDR3 FB-DIMMs
Memory Size: 3.2 GB per core, 64 GB per node (plus 3 bigmem nodes with 128 GB per node)
Host Channel Adapter: InfiniBand FDR host channel adapter and switches

Sandy Bridge Nodes
Number of Nodes: 1,800
Processors per Node: 2 eight-core processors
Node Type: Intel Xeon E5-2670 processors
Processor Speed: 2.6 GHz
Cache: 20 MB for 8 cores
Memory Type: DDR3 FB-DIMMs
Memory Size: 2 GB per core, 32 GB per node
Host Channel Adapter: InfiniBand FDR host channel adapter and switches
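
The 7.09 Pflop/s peak figure listed under System Architecture can be reproduced from the per-node specifications above. The sketch below is an illustrative back-of-the-envelope check, assuming 16 double-precision floating-point operations per cycle per core for the AVX2/FMA parts (Broadwell, Haswell) and 8 for the AVX parts (Ivy Bridge, Sandy Bridge); it covers only the CPU racks, not the GPU-enhanced nodes.

```python
# Illustrative check of the 7.09 Pflop/s CPU peak (not an official calculation).
# Peak per node = cores per node x clock (GHz) x assumed DP FLOPs per cycle per core.
node_types = {
    # name: (nodes, cores per node, clock in GHz, assumed DP FLOPs/cycle/core)
    "Broadwell":    (2016, 28, 2.4, 16),
    "Haswell":      (2052, 24, 2.5, 16),
    "Ivy Bridge":   (5256, 20, 2.8, 8),
    "Sandy Bridge": (1800, 16, 2.6, 8),
}

total_gflops = 0.0
for name, (nodes, cores, ghz, flops_per_cycle) in node_types.items():
    per_node_gflops = cores * ghz * flops_per_cycle
    total_gflops += nodes * per_node_gflops
    print(f"{name:12s}: {nodes * per_node_gflops / 1e6:.3f} Pflop/s")

print(f"CPU total   : {total_gflops / 1e6:.2f} Pflop/s")  # ~7.09 Pflop/s
```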

GPU-Enhanced Nodes

Sandy Bridge + GPU Nodes
Number of Nodes: 64
Processors per Node: Two 8-core host processors and one GPU coprocessor (2,880 CUDA cores)
Node Type: Intel Xeon E5-2670 (host); NVIDIA Tesla K40 (GPU)
Processor Speed: 2.6 GHz (host); 745 MHz (GPU)
Cache: 20 MB for 8 cores (host)
Memory Type: DDR3 FB-DIMMs (host); GDDR5 (GPU)
Memory Size: 64 GB per node (host); 12 GB per GPU card
Host Channel Adapter: InfiniBand FDR host channel adapter and switches (host)

Skylake + GPU Nodes
Number of Nodes: 19
Processors per Node: Two 18-core host processors; four GPU coprocessors (for 17 nodes) or eight GPU coprocessors (for 2 nodes)
Node Type: Intel Xeon Gold 6154 (host); NVIDIA Tesla V100-SXM2-32GB (GPU)
Processor Speed: 3.0 GHz (host); 877 MHz (GPU)
Cache: 24.75 MB shared non-inclusive by 18 cores (host)
Memory Type: DDR4 FB-DIMMs (host); HBM2 (GPU)
Memory Size: 384 GB per node (host); 32 GB per GPU card
Host Channel Adapter: InfiniBand EDR host channel adapter and switches (host)
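
The 614,400 total CUDA cores quoted for the GPU-enhanced racks follow from the card counts above. The sketch below assumes the published per-card figures of 2,880 CUDA cores for a Tesla K40 and 5,120 for a Tesla V100, which are not stated in this document.

```python
# Illustrative check of the 614,400 total CUDA cores in the GPU-enhanced racks.
k40_cards = 64                 # one K40 in each Sandy Bridge + GPU node
v100_cards = 17 * 4 + 2 * 8    # Skylake + GPU nodes: 17 with four V100s, 2 with eight

total_cuda_cores = k40_cards * 2880 + v100_cards * 5120
print(total_cuda_cores)        # 614400
```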

Subsystems

8 Front-End Nodes
Processors per Node: 2 eight-core processors
Processor Type: Xeon E5-2670 (Sandy Bridge) processors
Processor Speed: 2.6 GHz
Memory: 64 GB per node
Connection: 10 Gigabit and 1 Gigabit Ethernet

PBS Server pbspl1
Processors per Node: 2 six-core processors
Processor Type: Xeon X5670 (Westmere) processors
Processor Speed: 2.93 GHz
Memory: 72 GB per node

PBS Server pbspl3
Processors per Node: 2 quad-core processors
Processor Type: Xeon X5355 (Clovertown) processors
Processor Speed: 2.66 GHz
Memory: 16 GB per node

Interconnects

Internode: InfiniBand, with all nodes connected in a partial hypercube topology (illustrated in the sketch after this list)

Two independent InfiniBand fabrics

InfiniBand DDR, QDR and FDR

Gigabit Ethernet management network
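
For readers unfamiliar with the topology, the sketch below illustrates how a hypercube interconnect links nodes: in a d-dimensional hypercube, two nodes are neighbors when their binary IDs differ in exactly one bit. This is a generic illustration under that assumption, not a description of the actual Pleiades cabling or routing.

```python
# Minimal hypercube-topology sketch (illustration only, not the Pleiades wiring).
def hypercube_neighbors(node_id: int, dimensions: int) -> list[int]:
    """IDs directly linked to node_id in a d-dimensional hypercube."""
    return [node_id ^ (1 << bit) for bit in range(dimensions)]

# An 11-dimensional hypercube has 2**11 = 2,048 vertices and 11 links per vertex;
# a "partial" hypercube populates only some of those vertices.
print(hypercube_neighbors(0, 11))  # [1, 2, 4, 8, ..., 1024]
```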

Operating Environment