
CERFACS’ computing resources

Resources

Two computers provide CERFACS with an aggregate peak capacity of about 600 Tflop/s for our main simulation needs. These internal resources are complemented by those of our partners (Météo-France and the CCRT). To provide additional support to our research activities (PhD theses and ANR projects), the resources allocated through GENCI's calls at the three national centers (CINES, IDRIS and TGCC) significantly extend our academic resources. These are further complemented by our participation in international calls (e.g. the PRACE and INCITE programs).

CERFACS Internal resources

Nemo Cluster


Nemo Cluster / Peak performance: 276 Tflop/s

The Nemo cluster includes 288 compute nodes, each with two Intel E5-2680 processors (12-core Haswell at 2.5 GHz) and 64 GB of DDR4 memory.
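As a sanity check, the quoted peak of 276 Tflop/s is consistent with this node specification, assuming the 16 double-precision floating-point operations per cycle per core delivered by Haswell's AVX2 FMA units (an assumption not stated on this page); the short sketch below reproduces the figure.

```python
# Rough peak-performance estimate for the Nemo compute partition.
# Assumption (not stated above): each Haswell core retires 16 double-precision
# FLOPs per cycle (two 256-bit FMA units, i.e. 2 x 4 doubles x 2 operations).
nodes = 288
cores_per_node = 2 * 12      # two 12-core Intel E5-2680 processors per node
clock_hz = 2.5e9             # 2.5 GHz
flops_per_cycle = 16         # AVX2 FMA, double precision (assumed)

peak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"{peak / 1e12:.0f} Tflop/s")  # -> 276 Tflop/s
```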

The 288 nodes of this compute partition are complemented by 12 post-processing nodes with 256 GB of memory and one node with 512 GB of memory used for large mesh generation.

A four-node partition of Intel Knights Landing processors allows researchers to port and optimize their codes on this architecture.

The interconnection network is a non-blocking FDR InfiniBand fabric. An internal GPFS file system provides users with 1 PB of scratch capacity. The software environment includes the Intel development compilers, libraries and tools, the TotalView and DDT debuggers, and the SLURM job manager. Integrated by Lenovo and Serviware, this cluster was inaugurated on September 30th, 2015.

Kraken Cluster

The Kraken cluster includes 121 compute nodes, each with two Intel Xeon Gold 6140 processors (18-core Skylake at 2.3 GHz) and 96 GB of DDR4 memory.
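No peak figure is quoted for Kraken on this page, but assuming the 32 double-precision floating-point operations per cycle per core delivered by Skylake's two AVX-512 FMA units at the nominal 2.3 GHz clock (assumptions not stated here), the node specification above puts the compute partition at roughly 320 Tflop/s, which is consistent with the aggregate of about 600 Tflop/s quoted at the top of this page. The sketch below shows the arithmetic.

```python
# Rough peak-performance estimate for the Kraken compute partition (CPU nodes only).
# Assumptions (not stated above): each Skylake core retires 32 double-precision
# FLOPs per cycle (two 512-bit FMA units), and the nominal 2.3 GHz clock is used,
# ignoring the lower AVX-512 base frequency and the accelerated nodes.
nodes = 121
cores_per_node = 2 * 18      # two 18-core Intel Xeon Gold 6140 processors per node
clock_hz = 2.3e9             # 2.3 GHz nominal
flops_per_cycle = 32         # AVX-512 FMA, double precision (assumed)

peak = nodes * cores_per_node * clock_hz * flops_per_cycle
print(f"~{peak / 1e12:.0f} Tflop/s")  # -> ~321 Tflop/s
```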

The 121 nodes of this compute partition are complemented by 5 post-processing nodes with 384 GB of memory and an Nvidia Tesla M60 each, one node with 768 GB of memory used for large mesh generation, one node with 1.5 TB of memory dedicated to climate modeling, and 2 nodes each accelerated with one Nvidia Volta V100 for deep learning and artificial intelligence.

The interconnection network is a non-blocking Omni-Path fabric. An internal GPFS file system provides users with 0.5 PB of scratch capacity. The software environment includes the Intel development compilers, libraries and tools, the TotalView and DDT debuggers, and the SLURM job manager. Integrated by Lenovo and Serviware, this cluster has been in production since May 2018.
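Since both clusters are driven by the SLURM job manager, a job targeting one of the V100 nodes would typically be submitted through sbatch. The Python sketch below is a minimal illustration only, assuming a generic SLURM setup; the partition name and the wrapped command are hypothetical placeholders, not taken from this page.

```python
# Minimal sketch: submitting a single-GPU SLURM job from Python via sbatch.
# The partition name "gpu" and the wrapped command are hypothetical placeholders;
# actual partition and GRES names depend on the local SLURM configuration.
import subprocess

cmd = [
    "sbatch",
    "--job-name=dl-train",
    "--partition=gpu",         # hypothetical partition hosting the V100 nodes
    "--nodes=1",
    "--gres=gpu:1",            # request one GPU on the allocated node
    "--time=02:00:00",
    "--wrap=python train.py",  # placeholder command run inside the allocation
]

result = subprocess.run(cmd, capture_output=True, text=True, check=True)
print(result.stdout.strip())   # e.g. "Submitted batch job 123456"
```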

 Central NAS Server

A central NFS server with a capacity of 1.2 PB is accessible from all clusters and workstations. It provides a secondary archiving service for the internal and external servers hosting numerical simulation results. This service is built on two Lenovo GPFS servers attached to a DDN SFA7700 storage system.

Support for technology watch activities

Technology partners support our technology watch activities through the provision of resources:

  • two Intel Xeon Phi platforms (KNC and KNL) made available by Intel Corporation as part of the Intel Parallel Computing Centers program.

These resources allow us to port our solvers to these new technologies with the support of our technology partners.

External computers

Météo-France and the CEA's CCRT extend our simulation capacity by giving us access to their supercomputers. Through numerous collaborations and the support of GENCI, PRACE and INCITE, CERFACS accesses multiple external computers. GENCI allows our doctoral students to access the national computing centers:

  • Atos Occigen (3.5 Pflop/s, ranked 77th in the Top 500 global ranking of October 2018) at CINES
  • IBM iDataPlex (233 Tflop/s) and BG/Q (1.3 Pflop/s, ranked 368th in the Top 500 global ranking of October 2018) at IDRIS
  • Atos Irene (8.2 Pflop/s, ranked 40th in the Top 500 global ranking of October 2018) at the CEA

PRACE allocates resources to support our frontier simulations:

  • Atos Irene (8.2 Pflop/s, ranked 40th in the Top 500 global ranking of October 2018) at the CEA

 

 

 
