Resources – last update: January 2022 –
Two in-house computers provide CERFACS with an aggregate peak capacity of about 1.3 Pflop/s for our main simulation requirements. These internal resources are supplemented by those of our partners (Météo-France and CCRT). To give additional support to our research activities (theses and ANR projects), the resources allocated through GENCI's calls at the three national centers (CINES, IDRIS and TGCC) significantly extend our academic resources. These resources are further complemented by our participation in international calls (e.g. the PRACE and INCITE programs).
CERFACS’ Internal resources
Kraken Cluster (1 peak Pflop/s)
Scalar Partition (723 peak Tflop/s; the sketch after this list shows how such peak figures are derived):
185 compute nodes, each with two Intel Xeon Gold 6140 processors (18-core Skylake at 2.3 GHz) and 96 GB of DDR4 memory.
40 compute nodes, each with two Intel Xeon Platinum 8368 processors (38-core Ice Lake at 2.4 GHz) and 256 GB of DDR4 memory.
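As an indication of how these peak figures are obtained, the short sketch below recomputes the scalar partition's 723 Tflop/s from the node counts above. The 32 flops-per-cycle value is an assumption (two AVX-512 FMA units per core on both Skylake and Ice Lake) and is not stated on this page.

    # Back-of-the-envelope check of the scalar partition peak (assumed values flagged below)
    def peak_tflops(nodes, sockets, cores_per_socket, clock_ghz, flops_per_cycle):
        # nodes x sockets x cores x GHz gives Gcycle/s; x flops/cycle gives Gflop/s; /1000 -> Tflop/s
        return nodes * sockets * cores_per_socket * clock_ghz * flops_per_cycle / 1000.0

    # flops_per_cycle = 32 is an assumption: 2 AVX-512 FMA units x 8 doubles x 2 flops per FMA
    skylake = peak_tflops(185, 2, 18, 2.3, 32)   # ~490 Tflop/s
    icelake = peak_tflops(40, 2, 38, 2.4, 32)    # ~233 Tflop/s
    print(round(skylake), round(icelake), round(skylake + icelake))  # ~723 Tflop/s total

With an assumed 16 flops per cycle for Haswell (AVX2 with FMA), the same formula also reproduces the 276 Tflop/s quoted further down for the Nemo compute partition (288 x 2 x 12 x 2.5 x 16 ≈ 276 Tflop/s).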
Accelerated partition (255 peak Tflop/s):
8 compute nodes, each with two Intel Ice Lake processors (16 cores at 2.9 GHz), 256 GB of memory and 4 Nvidia A30 GPUs (24 GB each),
2 compute nodes, each with two AMD Rome processors (64 cores at 2 GHz), 512 GB of memory and 1 Nvidia A100 GPU (40 GB),
1 node accelerated with 4 Nvidia V100 GPUs (32 GB each) interconnected with NVLink,
2 nodes, each accelerated with one Nvidia V100 GPU (16 GB),
1 node accelerated with 1 Nvidia T4 GPU (optimized for inference).
Pre/Post processing Partition (16 peak Tflop/s):
Visualisation support: 6 nodes with 288 GB of memory, each with an Nvidia Tesla M60 card. The NICE environment provides remote display to internal and external users.
Large-memory support: one node with 768 GB of memory used for large mesh generation, plus one node with 1.5 TB of memory dedicated to climate modeling.
Interactive Partition (11 peak Tflop/s):
1 bi-socket Skylake node with 1.5 TB of memory for UMR CECI interactive studies,
1 bi-socket Skylake node with 768 GB of memory for CFD interactive studies,
2 bi-socket Skylake nodes with 96 GB of memory for AVBP non-regression tests.
All nodes of the pre/post-processing partition are bi-socket nodes with Intel Xeon Gold 6140 processors.
Internal network, storage and software environment: The interconnection network is a non-blocking Omni-Path network. An internal GPFS file system offers users a 1 PB scratch capacity. The software environment includes the Intel development compilers, libraries and tools; the TotalView and DDT debuggers; and the SLURM job manager. Integrated by Lenovo and Serviware, this cluster has been in production since May 2018.
Nemo Cluster (300 peak Tflop/s)
Compute Partition (276 peak Tflop/s): The Nemo cluster includes 288 compute nodes, each with two Intel Xeon E5-2680 v3 processors (12-core Haswell at 2.5 GHz) and 64 GB of DDR4 memory.
Pre/Post-processing partition (13 peak Tflop/s): 12 post-processing nodes with 256 GB of memory and an Nvidia accelerator each, plus one node with 512 GB of memory used for large mesh generation. All these nodes are bi-socket Intel Xeon E5-2680 nodes.
Knights Landing Partition (11 peak Tflop/s): A four-node partition of Intel Knights Landing processors (64 cores at 1.3 GHz) allows researchers to port and optimize their codes in this environment.
Internal network, storage and software environment: The interconnection network is a non-blocking FDR InfiniBand network. An internal GPFS file system offers users a 1 PB scratch capacity. The software environment includes the Intel development compilers, libraries and tools; the TotalView and DDT debuggers; and the SLURM job manager. Integrated by Lenovo and Serviware, this cluster was inaugurated on September 30th, 2015.
Scylla Cluster (Big Data Post-Processing)
Inaugurated in February 2019, the Scylla cluster is dedicated to the management and post-processing of big data files. It mainly hosts CMIP5 and CMIP6 (Coupled Model Intercomparison Project Phases 5 and 6) data computed by CERFACS researchers in the framework of IPCC activities.
This cluster is also shared with other CERFACS research teams needing large storage capacities close to post-processing nodes.
Storage capacity: 1.4 PB of user space, provided by a DSS solution (based on the IBM Spectrum Scale offering). Two nodes are dedicated to metadata management on SSD disks, and two nodes are dedicated to data management, with data stored on 166 disks of 12 TB each.
Pre/Post-processing partition:
5 bi-socket Intel Xeon Gold 6126 nodes (14 cores at 2.6 GHz) with 384 GB of memory and an Nvidia P4000 GPU each,
1 bi-socket Intel Xeon Gold 6126 node with 768 GB of memory,
1 single-socket AMD Milan node (16 cores at 3 GHz) with an AMD MI100 accelerator.
Central NAS Server
A central Spectrum Scale server with a capacity of 3.1 PB is accessible from all clusters. It provides a secondary archiving service used by the internal and external servers hosting numerical simulation results. This solution is built on a Lenovo DSS-G230 appliance.
CERFACS' access to external computers
Météo-France and the CEA's CCRT extend our simulation capacity through access to their supercomputers in the framework of partnerships.
- Météo-France research supercomputer (Belenos): 2,304 bi-socket AMD Rome nodes (64 cores at 2.2 GHz) – 10.5 Pflop/s. From 2018 to 2021, a special allocation of 86 Mh was granted by Météo-France to CERFACS researchers in the framework of joint IPCC simulations.
- CCRT supercomputer (Topaze): 864 bi-socket AMD Milan nodes (64 cores at 2.45 GHz), plus 48 bi-socket AMD Milan nodes (64 cores at 2.45 GHz) each accelerated with 4 Nvidia A100 GPUs.
Through numerous collaborations and the support of GENCI, PRACE and INCITE, CERFACS accesses multiple external computers. GENCI allows our doctoral students to access the national computing centers:
- Adastra, HPE Cray EX (74 Pflop/s) at CINES – https://www.cines.fr/calcul/adastra/
- Jean Zay, HPE SGI 8600 (13.9 Pflop/s) at IDRIS – http://www.idris.fr/jean-zay/
- Joliot-Curie, Atos supercomputer (22 Pflop/s) at the CEA/TGCC – http://www-hpc.cea.fr/fr/complexe/tgcc-JoliotCurie.htm
PRACE allocates resources to support our most challenging simulations:
- Atos Joliot-Curie (22 Pflop/s – AMD partition ranked 38th in the November 2020 Top500) at TGCC
- Atos JUWELS (71 Pflop/s – ranked 7th in the November 2020 Top500) at Jülich
- HPE Apollo Hawk (25 Pflop/s – ranked 16th in the November 2020 Top500) at HLRS
- Lenovo SuperMUC-NG (27 Pflop/s – ranked 15th in the November 2020 Top500) at LRZ
- IBM Marconi (29 Pflop/s – ranked 11th in the November 2020 Top500) at CINECA
- Lenovo MareNostrum 4 (10 Pflop/s – ranked 42nd in the November 2020 Top500) at BSC
- Cray XC50 Piz Daint (27 Pflop/s – ranked 12th in the November 2020 Top500) at CSCS