
CERFACS’ computing resources

Resources (last update: April 2021)

Two in-house computers provide CERFACS with an aggregate peak capacity of about 0.9 Pflop/s for our main simulation requirements. These internal resources are supplemented by those of our partners (Météo-France and CCRT). To give additional support to our research activities (theses and ANR projects), the resources allocated through GENCI’s calls at the three national centers (Cines, Idris and TGCC) significantly extend our academic resources. They are complemented by our participation in international calls (e.g. the PRACE and INCITE programs).

CERFACS’ Internal resources

Kraken Cluster (608 peak Tflop/s)

Scalar partition (490 peak Tflop/s): The Kraken cluster includes 185 compute nodes, each with two Intel Xeon Gold 6140 processors (18-core Skylake at 2.3 GHz) and 96 GB of DDR4 memory.
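
As a sanity check on the quoted figure, the peak of the scalar partition can be recomputed from the node specification. The short sketch below assumes 32 double-precision flops per core per cycle (two AVX-512 FMA units on Skylake), an assumption not stated on this page, and recovers roughly 490 Tflop/s.

```python
# Back-of-the-envelope peak for the Kraken scalar partition.
# Assumption: 32 double-precision flops per core per cycle (two AVX-512 FMA units).
nodes = 185
sockets_per_node = 2
cores_per_socket = 18
clock_hz = 2.3e9
dp_flops_per_cycle = 32

peak = nodes * sockets_per_node * cores_per_socket * clock_hz * dp_flops_per_cycle
print(f"Kraken scalar partition peak: {peak / 1e12:.0f} Tflop/s")  # ~490 Tflop/s
```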

Accelerated partition (89 peak Tflop/s):

2 compute nodes, each with two AMD Rome processors (64 cores at 2 GHz), 512 GB of memory and one Nvidia A100 (40 GB),

1 node accelerated with four Nvidia V100 (32 GB) GPUs interconnected with NVLink,

2 nodes, each accelerated with one Nvidia V100 (16 GB),

1 node accelerated with one Nvidia T4 (optimized for inference).

Pre/post-processing partition (16 peak Tflop/s):

Visualisation support: 6 nodes, each with 288 GB of memory and an Nvidia Tesla M60 card. The NICE environment provides remote display to internal and external users.

Big-memory support: one node with 768 GB of memory used for large mesh generation, plus one node with 1.5 TB of memory dedicated to climate modeling.

Interactive partition (11 peak Tflop/s):

1 dual-socket Skylake node with 1.5 TB of memory for UMR CECI interactive studies,

1 dual-socket Skylake node with 768 GB of memory for CFD interactive studies,

2 dual-socket Skylake nodes with 96 GB of memory for AVBP non-regression tests.

All nodes of the pre/post-processing partition are dual-socket nodes with Intel Xeon Gold 6140 processors.

Internal network, storage and software environment: The interconnection network is a non-blocking Omni-Path network. An internal GPFS file system offers users a 0.5 PB scratch capacity. The software environment includes the Intel compilers, libraries and development tools, the TotalView and DDT debuggers, and the SLURM job manager. Integrated by Lenovo and Serviware, this cluster has been in production since May 2018.
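
For illustration, jobs on the cluster are submitted through SLURM; the sketch below builds a minimal batch script from Python and hands it to sbatch. The partition name and the executable are hypothetical placeholders, not documented Kraken settings.

```python
# Minimal SLURM submission sketch; the partition name and the executable are
# hypothetical placeholders, not actual Kraken settings.
import subprocess

batch_script = """#!/bin/bash
#SBATCH --job-name=demo
#SBATCH --partition=scalar
#SBATCH --nodes=4
#SBATCH --ntasks-per-node=36
#SBATCH --time=01:00:00
# 36 tasks per node = 2 sockets x 18 cores on a Kraken node
srun ./my_solver
"""

with open("job.sh", "w") as fh:
    fh.write(batch_script)

# Hand the script to the SLURM scheduler.
subprocess.run(["sbatch", "job.sh"], check=True)
```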

Nemo Cluster (300 peak Tflop/s)


Compute partition (276 peak Tflop/s): The Nemo cluster includes 288 compute nodes, each with two Intel Xeon E5-2680 processors (12-core Haswell at 2.5 GHz) and 64 GB of DDR4 memory.
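
The same back-of-the-envelope formula as for Kraken applies here, with Haswell cores assumed to deliver 16 double-precision flops per cycle (two 256-bit FMA units), which recovers roughly 276 Tflop/s:

```python
# Peak of the Nemo compute partition, assuming 16 DP flops per Haswell core per cycle.
print(288 * 2 * 12 * 2.5e9 * 16 / 1e12, "Tflop/s")  # ~276 Tflop/s
```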

Pre/post-processing partition (13 peak Tflop/s): 12 post-processing nodes with 256 GB of memory and an Nvidia accelerator, plus one node with 512 GB of memory used for large mesh generation. All of these are dual-socket Intel Xeon E5-2680 nodes.

Knights Landing partition (11 peak Tflop/s): A four-node partition of Intel Knights Landing processors (64 cores at 1.3 GHz) allows researchers to port and optimize their codes for this architecture.

Internal network, storage and software environment: The interconnection network is a non-blocking FDR InfiniBand network. An internal GPFS file system offers users a 1 PB scratch capacity. The software environment includes the Intel compilers, libraries and development tools, the TotalView and DDT debuggers, and the SLURM job manager. Integrated by Lenovo and Serviware, this cluster was inaugurated on September 30th, 2015.

Scylla Cluster (Big Data Post-Processing)

Inaugurated in February 2019, the Scylla cluster is dedicated to the management and post-processing of big data files. It mainly hosts CMIP5 and CMIP6 (Coupled Model Intercomparison Project Phases 5 and 6) data computed by CERFACS researchers in the frame of IPCC activities.

The cluster is also shared with other CERFACS research teams that need large storage capacities close to the post-processing nodes.

Storage capacity: 1.4 PB of user space on a DSS solution (based on the IBM Spectrum Scale offering), with two nodes dedicated to metadata management on SSD disks and two nodes dedicated to data management, the data being stored on 166 disks of 12 TB each.

Pre/post-processing partition:

5 dual-socket Intel Xeon Gold 6126 nodes (14 cores at 2.6 GHz) with 384 GB of memory,

1 dual-socket Intel Xeon Gold 6126 node with 768 GB of memory.

Each of these nodes is equipped with an Nvidia P4000 accelerator.

Central NAS Server

A central NFS server with a capacity of 1.2 PB is accessible from all clusters and workstations. It provides a secondary archiving service used by internal and external servers hosting the results of numerical simulations. This solution is built on two Lenovo GPFS servers attached to a DDN SFA7700 storage system.

CERFACS’ External computer access

Météo-France and the CEA’s CCRT extend our simulation capacity by giving access to their supercomputers in the frame of partnerships.

  • Météo-France research supercomputer (Belenos): 2,304 dual-socket nodes with 64-core AMD Rome processors at 2.2 GHz (10.5 Pflop/s). From 2018 to 2021, a special allocation of 86 Mh was granted by Météo-France to CERFACS researchers in the frame of joint IPCC simulations.
  • CCRT supercomputer (Cobalt): 1,422 dual-socket nodes with 14-core Intel Xeon Broadwell processors at 2.4 GHz, plus 252 dual-socket nodes with 20-core Intel Xeon Skylake processors at 2.4 GHz.

Through numerous collaborations and the support of GENCI, PRACE and INCITE, CERFACS accesses multiple external computers. GENCI gives our doctoral students access to the three national centers (Cines, Idris and TGCC).

PRACE allocates resources to support our most challenging simulations:

  • Atos Joliot-Curie (AMD partition, 22 Pflop/s, ranked 38th in the November 2020 Top500) at TGCC
  • Atos JUWELS (71 Pflop/s, ranked 7th in the November 2020 Top500) at Jülich
  • HPE Apollo Hawk (25 Pflop/s, ranked 16th in the November 2020 Top500) at HLRS
  • Lenovo SuperMUC-NG (27 Pflop/s, ranked 15th in the November 2020 Top500) at LRZ
  • IBM Marconi (29 Pflop/s, ranked 11th in the November 2020 Top500) at Cineca
  • Lenovo MareNostrum 4 (10 Pflop/s, ranked 42nd in the November 2020 Top500) at BSC
  • Cray XC50 Piz Daint (27 Pflop/s, ranked 12th in the November 2020 Top500) at CSCS

NEWS

NextSim General Assembly and TC meeting

CERFACS | 16 September 2021

The General Assembly and TC Meeting took place on 15-16 September 2021. CERFACS is involved in the NextSim project, whose primary objective is to increase the capabilities of Computational Fluid Dynamics tools on extreme-scale parallel computing platforms for aeronautical design. The project has received funding from the European High-Performance Computing Joint Undertaking (JU) under grant agreement N° 956104. The JU receives support from the European Union’s Horizon 2020 research and innovation programme and from Spain, France and Germany. The project has also received funding from the Agence Nationale de la Recherche (ANR) under grant agreement N° ANR-20-EHPC-0002-02.


Sophie Valcke from Cerfacs co-authored a new book on atmosphere-ocean modelling

CERFACS | 18 August 2021

The new book “Atmosphere-Ocean Modelling - Coupling and Couplers” by Prof. Carlos R. Mechoso, Prof. Soon-Il An and Dr Sophie Valcke has just been published by World Scientific. Abstract: Coupled atmosphere-ocean models are at the core of numerical climate models. There is an extraordinarily broad class of coupled atmosphere-ocean models, ranging from sets of equations that can be solved analytically to highly detailed representations of Nature requiring the most advanced computers for execution. The models are applied to subjects including the conceptual understanding of Earth’s climate, predictions that support human activities in a variable climate, and projections aimed at preparing society for climate change. The present book fills a void in the current literature by presenting a basic and yet rigorous treatment of how the models of the atmosphere and the ocean are put together into a coupled system. The text is divided into chapters organized according to the complexity of the components that are coupled. Two full chapters are dedicated to current efforts on the development of generalist couplers and coupling methodologies around the world.
