
Parallel Algorithms Team

The Parallel Algorithms Project conducts dedicated research on the solution of problems in applied mathematics, proposing advanced numerical algorithms for massively parallel computing platforms. It focuses in particular on problems that are out of reach of current standard numerical methods because of, for example, their large scale or nonlinearity, the stochastic nature of the data, or the practical constraint of obtaining reliable numerical results within a limited computing time. This research is mostly carried out in collaboration with other teams at CERFACS and with the shareholders of CERFACS, as outlined in this report.

This research roadmap is ambitious, and the major research topics have evolved over the past years. The current focus is on the design of algorithms for the solution of sparse linear systems arising from the discretization of partial differential equations, and on the analysis of algorithms in numerical optimization in connection with several applications, including data assimilation. These topics are often interconnected, as in large-scale inverse problems (so-called big-data inverse problems) or the solution of nonlinear systems, which requires approximate solutions of linearized systems; a sketch of such an inexact Newton iteration is given below. These developments build on long-standing expertise in numerical analysis that exploits the structure of the problem in scientific computing, especially in qualitative computing.
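As a concrete illustration of the last point, the following minimal Python sketch shows an inexact Newton iteration in which each linearized system is solved only approximately, by a handful of GMRES iterations. The test problem and all tolerances are hypothetical choices for illustration, not a CERFACS application.

```python
# A minimal sketch, not CERFACS code: an inexact Newton iteration where each
# linearized system J(x) s = -F(x) is solved only approximately, here by a
# handful of GMRES iterations.  The test problem F is a hypothetical choice.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def F(x):
    # Hypothetical nonlinear system: (I + periodic Laplacian) x + x^3 - 1 = 0.
    return 3.0 * x - np.roll(x, 1) - np.roll(x, -1) + x**3 - 1.0

def J_matvec(x, v):
    # Action of the Jacobian of F at x on a vector v.
    return 3.0 * v - np.roll(v, 1) - np.roll(v, -1) + 3.0 * x**2 * v

def inexact_newton(x0, tol=1e-10, max_outer=50):
    x = x0.copy()
    for _ in range(max_outer):
        r = F(x)
        if np.linalg.norm(r) < tol:
            break
        J = LinearOperator((x.size, x.size), matvec=lambda v: J_matvec(x, v))
        # Approximate Newton step: at most 10 GMRES iterations, no exact solve.
        s, _ = gmres(J, -r, restart=10, maxiter=1)
        x = x + s
    return x

x = inexact_newton(np.zeros(100))
print(np.linalg.norm(F(x)))   # residual norm of the computed approximate root
```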

A strong emphasis is placed on mathematical aspects: efficient parallel algorithms are proposed together with their mathematical analysis, and key properties such as the convergence of iterative methods, scalability, and convergence to local or global minima are investigated theoretically.

Solution methods for sparse linear systems are considered in a broad sense, covering both sparse direct methods and projection-based iterative methods; a small example of both families is sketched below. These methods can also be combined into hybrid algebraic methods close to domain decomposition or multiscale methods. In addition to graph theory, these activities rely on strong expertise in linear algebra software development and on up-to-date knowledge of parallel computing platforms.
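As a rough illustration of the two families (the test matrix is an arbitrary small 2D Poisson problem, not the project's software), the following sketch solves the same sparse system with a direct factorization and with the conjugate gradient method, a projection-based Krylov solver, using SciPy.

```python
# Illustrative sketch only: a sparse direct method and a projection-based
# iterative method applied to a small 2D Poisson matrix (hypothetical test).
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 50                                               # grid points per direction
I = sp.identity(n)
T = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))
A = (sp.kron(I, T) + sp.kron(T, I)).tocsc()          # 2D discrete Laplacian (SPD)
b = np.ones(A.shape[0])

# Sparse direct method: factorize once, then solve.
lu = spla.splu(A)
x_direct = lu.solve(b)

# Projection-based iterative method: conjugate gradient on the SPD system.
x_cg, info = spla.cg(A, b)

# Residual norms of both computed solutions.
print(np.linalg.norm(A @ x_direct - b), np.linalg.norm(A @ x_cg - b))
```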

Optimization problems arise in several applications at CERFACS, most often with the goal of improving the performance of a given system. The Parallel Algorithms Project focuses mainly on differentiable optimization and derivative-free optimization; the sketch below contrasts the two settings. The main research topics concern convergence to local or global minima and the practical efficiency of the algorithms.
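A minimal sketch of the two settings is given below, on the standard Rosenbrock test function rather than a real CERFACS application: a gradient-based method (BFGS with an analytic gradient) versus a derivative-free method (Nelder-Mead), both via SciPy.

```python
# Illustrative sketch: differentiable vs derivative-free optimization on a
# standard test function (a hypothetical stand-in for a real application).
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

x0 = np.array([-1.2, 1.0])

# Differentiable optimization: BFGS uses the analytic gradient rosen_der.
res_grad = minimize(rosen, x0, jac=rosen_der, method="BFGS")

# Derivative-free optimization: Nelder-Mead uses function values only.
res_dfo = minimize(rosen, x0, method="Nelder-Mead")

print(res_grad.x, res_grad.nfev)   # minimizer and number of function evaluations
print(res_dfo.x, res_dfo.nfev)
```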

The Parallel Algorithms Project is also deeply involved in the design and analysis of algorithms for data assimilation. Algorithms related to differentiable or derivative-free optimization are considered together with filtering techniques. All these algorithms must be adapted and improved before tackling potential applications in seismic imaging, oceanography, atmospheric chemistry, or meteorology. The Project has notably developed specific expertise in correlation error modelling based on the iterative solution of an implicitly formulated diffusion equation, an idea illustrated schematically below.
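The underlying idea of such diffusion-based correlation operators is that each application of the operator amounts to a sequence of implicit diffusion steps, each of which is a sparse linear solve carried out iteratively. The one-dimensional sketch below is only a cartoon of this idea under arbitrary choices (grid, coefficients, boundary treatment, and normalization are simplified or omitted); it is not the project's actual formulation.

```python
# Highly simplified 1D sketch of a diffusion-based correlation operator:
# each application solves an implicit diffusion step (I + nu*L) z = r
# iteratively, here with conjugate gradient.  Illustration only; boundary
# treatment and normalization are deliberately omitted.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n, nsteps, nu = 200, 10, 4.0                 # grid size, implicit steps, diffusion coefficient
L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n))   # 1D discrete Laplacian
A = (sp.identity(n) + nu * L).tocsr()        # implicit diffusion operator (SPD)

def apply_correlation(r):
    # Apply (I + nu*L)^(-nsteps) to r, one iterative linear solve per step.
    z = r.copy()
    for _ in range(nsteps):
        z, _ = spla.cg(A, z)
    return z

impulse = np.zeros(n)
impulse[n // 2] = 1.0
c = apply_correlation(impulse)               # smooth, bell-shaped correlation-like column
print(c[n // 2 - 3 : n // 2 + 4])
```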

Finally, the Parallel Algorithms Project takes an active part in the Training programme at CERFACS and regularly organizes seminars, workshops, and international conferences in numerical optimization, numerical linear algebra, and data assimilation.

 

CALENDAR

- Monday 29 April 2024, Training: Code coupling using CWIPI
- Monday 13 May 2024, Training: Implementation and use of Lattice Boltzmann Method
- Tuesday 14 May 2024, Training: Advanced Lattice Boltzmann Methods
