
CERFACS’ computing resources

Resources – last update: July 2024

Two in-house computers give CERFACS an aggregate peak capacity of about 1.8 Pflop/s for our main simulation needs. These internal resources are complemented by those of our partners (Météo-France and CCRT). To further support our research activities (theses and ANR projects), the resources allocated through GENCI's calls at the three national centres (CINES, IDRIS and TGCC) significantly extend our academic resources. These are also complemented by our participation in international calls (e.g. the EuroHPC and INCITE programmes).

CERFACS’ Internal resources

Calypso Cluster (1 Pflop/s peak)

Scalar partition (0.8 Pflop/s peak):

60 compute nodes, each with two AMD Genoa processors (96 cores at 2.3 GHz) and 384 GB of memory.

Accelerated partition (331 Tflop/s peak):

4 Grace Hopper nodes, each with one Nvidia Grace ARM processor (72 cores) and 480 GB of memory + 1 Nvidia H100/96 GB HBM3,

1 compute node with two AMD Genoa processors (16 cores at 3 GHz) and 384 GB of memory + 2 AMD MI210/64 GB HBM2e,

Pre/post-processing partition (14 Tflop/s peak):

Visualisation support: 4 nodes with 384 GB of memory and an Nvidia RTX5000 card. The NICE environment provides remote display to internal and external users.

Big-memory support: 1 node with 1.5 TB of memory used for large mesh generation.

Interactive partition (30 Tflop/s peak):

1 dual-socket AMD Genoa node with 1.5 TB of memory for UMR CECI interactive studies,

2 dual-socket AMD Genoa front-end nodes with 384 GB of memory.

Internal network, storage and software environment: the interconnection network is a non-blocking InfiniBand HDR 200 Gb/s network. An internal IBM ESS GPFS file system gives users a 1.4 PB scratch directory. The software environment includes the Intel development compilers, libraries and tools; the TotalView and DDT debuggers; the SLURM job manager; and the SMC Bull administration tools.

Integrated by Bull SAS, this cluster has been in production since July 2024.
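As a rough illustration of how work is typically submitted on such a SLURM-managed cluster, the sketch below builds and submits a batch script for one Calypso scalar node. The partition name, module names and executable are assumptions for the example, not the actual site configuration.

```python
#!/usr/bin/env python3
"""Minimal sketch: submit a CPU job to Calypso through SLURM.

The partition name ("scalar"), the module names and the executable are
hypothetical; the real labels come from the local SLURM configuration.
"""
import subprocess

# Batch script requesting one full scalar node
# (2 x 96-core AMD Genoa, 384 GB of memory).
job_script = """#!/bin/bash
#SBATCH --job-name=demo_cpu
#SBATCH --partition=scalar        # assumed partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=192     # 2 x 96 AMD Genoa cores per node
#SBATCH --time=01:00:00
#SBATCH --output=%x_%j.out

# Intel toolchain from the software environment (module names assumed).
module load intel intel-mpi

srun ./my_solver input.nml
"""

# sbatch accepts the script on stdin and prints the job ID on success.
result = subprocess.run(["sbatch"], input=job_script, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())
```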

Kraken Cluster (1 Pflop/s peak)

Scalar partition (723 Tflop/s peak):

185 compute nodes, each with two Intel Xeon Gold 6140 processors (18-core Skylake at 2.3 GHz) and 96 GB of DDR4 memory.

40 compute nodes, each with two Intel Xeon Platinum 8368 processors (38-core Ice Lake at 2.4 GHz) and 256 GB of DDR4 memory.

Accelerated partition (255 Tflop/s peak):

8 compute nodes, each with two Intel Ice Lake processors (16 cores at 2.9 GHz) and 256 GB of memory + 4 Nvidia A30/24 GB,

2 compute nodes, each with two AMD Rome processors (64 cores at 2 GHz) and 512 GB of memory + 1 Nvidia A100/40 GB,

1 accelerated node with 4 Nvidia V100/32 GB interconnected with NVLink,

2 accelerated nodes with one Nvidia V100/16 GB each,

1 accelerated node with 1 Nvidia T4 (optimized for inference).

Pre/post-processing partition (16 Tflop/s peak):

Visualisation support: 6 nodes with 288 GB of memory and an Nvidia Tesla M60 card. The NICE environment provides remote display to internal and external users.

Big-memory support: one node with 768 GB of memory used for large mesh generation + one node with 1.5 TB of memory dedicated to climate modelling.

Interactive partition (11 Tflop/s peak):

1 dual-socket Skylake node with 1.5 TB of memory for UMR CECI interactive studies,

1 dual-socket Skylake node with 768 GB of memory for CFD interactive studies,

2 dual-socket Skylake nodes with 96 GB of memory for AVBP non-regression tests.

All nodes of the pre/post-processing partition are dual-socket nodes with Intel Xeon Gold 6140 processors.

Internal network, storage and software environment: the interconnection network is a non-blocking Omni-Path network. An internal GPFS file system gives users a 1 PB scratch directory. The software environment includes the Intel development compilers, libraries and tools; the TotalView and DDT debuggers; and the SLURM job manager. Integrated by Lenovo and Serviware, this cluster has been in production since May 2018.
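A GPU job on Kraken's accelerated partition is requested in the same way, with an additional GRES directive for the accelerators. The sketch below assumes one of the A30 nodes; the partition and GRES labels are assumptions, defined in practice by the local SLURM configuration.

```python
#!/usr/bin/env python3
"""Minimal sketch: request one Nvidia A30 on Kraken's accelerated
partition through SLURM.  Partition and GRES labels are assumed."""
import subprocess

job_script = """#!/bin/bash
#SBATCH --job-name=demo_gpu
#SBATCH --partition=gpu           # assumed partition name
#SBATCH --nodes=1
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8
#SBATCH --gres=gpu:a30:1          # assumed GRES label; these nodes carry 4 A30s
#SBATCH --time=00:30:00
#SBATCH --output=%x_%j.out

# Sanity check that a GPU is visible from the allocated node.
nvidia-smi
"""

result = subprocess.run(["sbatch"], input=job_script, text=True,
                        capture_output=True, check=True)
print(result.stdout.strip())
```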

Scylla Cluster (Big Data Post-Processing)

Inaugurated in February 2019, the Scylla cluster is dedicated to big-data file management and post-processing. It mainly hosts the CMIP5 and CMIP6 (Coupled Model Intercomparison Project, Phases 5 and 6) data computed by Cerfacs researchers in the frame of IPCC activities.

The cluster is also shared with other Cerfacs research teams that need large storage capacities close to post-processing nodes.

Storage capacity: 1.4 PB of user space on a DSS solution (based on IBM Spectrum Scale). Two nodes are dedicated to metadata management on SSD disks and two nodes to data management, the data being stored on 166 disks of 12 TB each.

Pre/post-processing partition:

5 dual-socket Intel Gold 6126 nodes (14 cores @ 2.6 GHz) with 384 GB of memory + Nvidia P4000,

1 dual-socket Intel Gold 6126 node with 768 GB of memory,

1 single-socket AMD Milan node (16 cores @ 3 GHz) + AMD MI100.

Central NAS Server

A central Spectrum Scale server with a capacity of 4 PB is accessible from all clusters. It provides a secondary archiving service used by the internal and external servers hosting numerical-simulation results. This solution is built on a Lenovo DSS-G230 appliance.

CERFACS’ access to external computers

Météo-France and the CEA's CCRT extend our simulation capacity through access to their supercomputers in the frame of partnerships.

  • Météo-France research supercomputer (Belenos): 2,304 dual-socket AMD Rome nodes (64 cores @ 2.2 GHz) – 10.5 Pflop/s. From 2018 to 2021, a special allocation of 86 Mh was granted by Météo-France to Cerfacs researchers in the frame of joint IPCC simulations.
  • CCRT supercomputer (Topaze): 864 dual-socket AMD Milan nodes (64 cores @ 2.45 GHz) + 48 dual-socket AMD Milan nodes (64 cores @ 2.45 GHz) accelerated with 4 Nvidia A100 GPUs each.

Through numerous collaborations and the support of GENCI, PRACE and INCITE, CERFACS accesses multiple external computers. GENCI gives our doctoral students access to the three national centres (CINES, IDRIS and TGCC).

EuroHPC allocates resources to support our most challenging simulations:

  • LUMI – Cray EX (539 Pflop/s peak) – Finland
  • LEONARDO – BULL Sequana XH2000 (316 Pflop/s peak) – Italy
  • MARENOSTRUM 5 – BULL Sequana XH3000 + Lenovo ThinkSystem (296 Pflop/s peak) – Spain
  • MELUXINA – BULL Sequana XH2000 (18 Pflop/s peak) – Luxembourg
  • KAROLINA – HPE Apollo 2000 / 6500 (13 Pflop/s peak) – Czech Republic
  • DISCOVERER – BULL Sequana XH2000 (6 Pflop/s peak) – Bulgaria
  • VEGA – BULL Sequana XH2000 (10 Pflop/s peak) – Slovenia
  • DEUCALION – Fujitsu and Bull Sequana (5 Pflop/s peak) – Portugal
  • Soon: JUPITER – BULL Sequana XH3000 (1 Exaflop/s peak) – Germany
