
The path we try to follow this year at COOP

COOP focuses on cross-cutting activities aimed at improving, optimizing, and refactoring scientific codes at a sustainable pace.

First of all, if you are interested in Artificial Intelligence (AI) activities, and particularly in machine learning for hybrid physical models, head to the HELIOS website. The High Performance Computing (HPC) and Computer Science & Engineering (CSE) activities are divided into four axes:

Exascale Computing

This year, we have a special focus on the basics of Exascale Computing, thanks to the Excellerat Phase II Center of Excellence. Regarding performance, we are producing blog articles for people who work in HPC but are not hardware experts. Our analogy for understanding supercomputers can help PhD students, researchers, and team managers from physical modeling grasp the link between common performance problems and supercomputer architecture.

We also investigate users’ computing patterns and behaviors. The Seed project is a long-term background activity to enable user-directed monitoring of our simulation resources. Indeed, the analysis of the workload from past years at Cerfacs exhibits good practices from several users that are worth sharing more widely.

Finally, we work on large mesh generation through mesh adaptation. Indeed, the multiple costs of generating a huge new grid and testing simulations on it (human, resources, money) drastically reduce the number of trial-and-error iterations usually performed when selecting the correct new grid. Co-funding from two CoEs (COEC, Excellerat) and our Shareholder (CEFORA) allows us to develop an Automated Static Mesh Refinement workflow, able to gradually improve the grid from low resolution (< 1 million cells) to high resolution (> 1 billion cells), fed by intermediate CFD simulation results.
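The refinement cycle can be sketched in a few lines. This is a minimal illustration of the idea only; the function names (`run_cfd`, `estimate_metric`) are hypothetical placeholders, not the actual workflow API:

```python
# Sketch of an Automated Static Mesh Refinement (ASMR) cycle.
# run_cfd and estimate_metric are hypothetical placeholders for the
# intermediate CFD run and the refinement-metric computation.

def asmr_workflow(mesh_size, target_size, growth=4):
    """Grow a mesh from coarse to fine, guided by intermediate CFD runs."""
    sizes = [mesh_size]
    while mesh_size < target_size:
        # 1. Run a CFD simulation on the current grid (placeholder).
        # solution = run_cfd(mesh)
        # 2. Build a refinement metric from the solution (placeholder).
        # metric = estimate_metric(solution)
        # 3. Adapt the mesh, multiplying the cell count.
        mesh_size *= growth
        sizes.append(mesh_size)
    return sizes

# From 1 million to beyond 1 billion cells in a few adaptation steps.
steps = asmr_workflow(1_000_000, 1_000_000_000)
print(len(steps) - 1, "adaptation steps:", steps)
```

The point of the gradual approach is that each intermediate grid is cheap enough to simulate, so the refinement metric is always informed by actual flow results rather than guesswork.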

Sustainable Programming

Thanks to the COEC and Excellerat Phase II Centers of Excellence, we continue last year’s effort on Codemetrics, elaborating tools dedicated to mitigating technical debt.

The tool Anubis is an open-source Git analyzer which extracts a team’s history through its repositories.
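The underlying principle is simple to illustrate. The snippet below is not the Anubis API, just a minimal sketch of the idea: parse `git log` output and count commits per author.

```python
# Minimal illustration of mining a team's history from a Git repository.
# NOT the Anubis API: just the underlying idea of parsing `git log`.
import subprocess
from collections import Counter

def count_authors(log_lines):
    """Count commits per author from `git log --pretty=format:%an` lines."""
    return Counter(line for line in log_lines if line)

def commits_per_author(repo_path="."):
    """Run git log on a repository and return a Counter of authors."""
    out = subprocess.run(
        ["git", "-C", repo_path, "log", "--pretty=format:%an"],
        capture_output=True, text=True, check=True,
    ).stdout
    return count_authors(out.splitlines())
```

From such raw counts, a tool can derive who maintains which part of a codebase, and how knowledge is distributed across a team.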

The tool Marauder’s map is an open-source source-code mapper which can generate global callgraphs, import graphs, and various treemap visualizations of complexity, linting, or coverage. We are introducing the potential uses of such tools in our blog post about software mapping.


There are many COOP-CSE concepts that we need to write down for training purposes. Here follows the list of topics that we want to cover this year. If one of the topics is of particular interest to you, please call us!

  • An analogy to understand supercomputers
  • Studying the geography of a software (a primer)
  • A primer about running on supercomputers, and best practices for submitting a job
  • Studying the history of a software (a primer)
  • The dangers of over-modularity, because avoiding both duplication and tight coupling is a slippery path
  • About Technology Readiness Levels (TRL) in scientific software
  • What is a stateful code?
  • Overfitted code.
  • What to know before learning to code
  • Anticipate the stress on a simulation software
  • The various aspects of sustainable programming
  • Human behavior concepts at play around scientific software making

Bringing scientific tools to Shareholders, “LongJing” (March) and “Matcha” (September)

“LongJing” (March) and “Matcha” (September) are our bi-annual releases of software to the Safran group. This is about the industrialization of our research tools - mainly combustion and CFD - towards a shareholder.

There are two primary goals this year. First, we must provide a reliable computing environment for the users. We develop Singularity containers for production situations on HPC clusters, complemented by Python virtual environments for expert usage involving case-specific developments.

Second, we develop automated workflows to scale up the production of simulations. For this we use lemmings, our open-source lightweight workflow manager for HPC clusters. The two basic workflows are:

  • The usual recursive “run until you reach the desired time”.
  • The Automated Static Mesh Refinement (ASMR, pun intended), which gradually improves the mesh definition to create a smart, CFD-informed grid.
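The "run until you reach the desired time" pattern amounts to chaining restart jobs. The sketch below illustrates that loop only; the names are placeholders, not the lemmings API:

```python
# Sketch of the "run until you reach the desired time" chaining pattern
# that a workflow manager automates on HPC clusters. Each job advances
# the simulated time, then the exit condition decides whether to chain
# another job. submit_job is a placeholder, not the lemmings API.

def run_until(target_time, dt_per_job, start=0.0):
    """Chain restart jobs until the simulated time reaches the target."""
    time, jobs = start, 0
    while time < target_time:
        # submit_job(restart_from=time)  # placeholder cluster submission
        time += dt_per_job               # physical time covered by one job
        jobs += 1
    return time, jobs

final_time, n_jobs = run_until(target_time=1.0, dt_per_job=0.3)
print(n_jobs, "jobs to reach t =", final_time)
```

Automating this exit-condition check is what removes the human from the resubmission loop, which matters when a run needs dozens of chained jobs.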

Why industrialization in a Research Lab?

While we do care a lot about our shareholders’ satisfaction, there are two non-profit advantages for Cerfacs. First of all, the industrialization activity of COOP is a research catalyst for research teams. The cycle is the following:

  1. Upgrade a scientific tool out of a research project into an industry-proof engineering design tool.

  2. Support the engineers in their day-to-day use of the tools.

  3. Collect real-life experience and new situations.

  4. Build a new research project to tackle the new situation.

We clarified the AVBP versioning in use in a dedicated post:

“Road and Scouts semantic versioning strategy”.


Antoine Dauptain is a research scientist focused on computer science and engineering topics for HPC.
