
From 25 November 2019 to 26 November 2019

Parallel programming models MPI, OpenMP


Announced
Deadline for registration: 15 days before the starting date of each training session
Duration: 2 days (14 hours)

Pre-registration

Abstract

This course teaches how to parallelize applications in order to reduce compute time or to solve larger problems, using the MPI and OpenMP programming models. It combines formal lectures with practical programming sessions. All examples in the course are written in Fortran; the exercises can be done in Fortran or C.
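To give a flavour of the programming style covered, here is a minimal MPI "hello world" in Fortran. It is an illustrative sketch only, not part of the course material; it assumes an MPI installation that provides the mpi Fortran module and an mpif90 compiler wrapper.

  ! Minimal MPI example: every process prints its rank.
  ! Compile and run (assumed tool names): mpif90 hello.f90 -o hello && mpirun -np 4 ./hello
  program hello
    use mpi
    implicit none
    integer :: ierr, rank, nprocs

    call MPI_Init(ierr)                              ! start the MPI runtime
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)   ! id of this process
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr) ! total number of processes
    print *, 'Hello from rank', rank, 'of', nprocs
    call MPI_Finalize(ierr)                          ! shut down cleanly
  end program hello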

Objective of the training

To learn the fundamental concepts of the MPI and OpenMP parallel programming models.

Learning outcomes

Learn to parallelize applications in order to reduce compute time or solve larger problems using MPI/OpenMP programming models.

On completion of this course students should be able to:

  • parallelize a simple C/Fortran program (about 50 lines) with the MPI library and/or OpenMP directives,
  • understand and use OpenMP directives (worksharing, synchronization; a worksharing sketch follows this list),
  • understand and use MPI functions (point-to-point communications, collectives, communicators, topologies).
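As an illustration of the worksharing outcome above, the sketch below parallelizes a simple array sum with an OpenMP parallel do directive and a reduction clause. The program, its problem size and its variable names are hypothetical, not taken from the course exercises.

  ! Illustrative OpenMP worksharing: parallel sum of an array.
  ! Compile with OpenMP enabled, e.g. (assumed): gfortran -fopenmp sum.f90
  program omp_sum
    use omp_lib
    implicit none
    integer, parameter :: n = 1000000
    integer :: i
    real(8) :: total
    real(8), allocatable :: x(:)

    allocate(x(n))
    x = 1.0d0
    total = 0.0d0

    ! Loop iterations are shared among the threads; the reduction clause
    ! gives each thread a private partial sum, combined at the end.
    !$omp parallel do reduction(+:total)
    do i = 1, n
       total = total + x(i)
    end do
    !$omp end parallel do

    print *, 'sum =', total, '(max threads =', omp_get_max_threads(), ')'
  end program omp_sum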

Target participants

Engineers, physicists, computer scientists and numerical analysts who wish to learn the fundamental concepts of the MPI and OpenMP parallel programming models.

Prerequisites

In order to follow this course, you need to:

  • know how to use basic Linux commands,
  • master one of these two programming languages: Fortran or C (it is not possible to do the exercises in Java or Python).

To verify that the prerequisites are satisfied, one of the following questionnaires must be completed (Fortran or C). You need at least 75% correct answers to be authorized to follow this training session. If you do not succeed, your registration will not be validated. You have only two attempts to complete it.

Questionnaire Fortran: https://goo.gl/forms/IqDvVXfOYYqR0NMr1

Questionnaire C: https://goo.gl/forms/WwR3wvQVz2dYy6AX2

Scientific contact: Isabelle d’Ast

Fee

  • Trainees/PhDs/PostDocs: 140 €
  • CERFACS shareholders/CNRS/INRIA: 400 €
  • Public: 800 €

Program

(Every day from 9h to 17h30)

Day 1

  • 9h : Welcome coffee
  • 9h15 – 10h00 : Introduction to parallel computing and the MPI – OpenMP parallel programming models
    OpenMP fundamentals – Shared memory
  • 10h – 10h45 : Exercises
  • 10h45 – 11h : Break
  • 11h – 11h45 : Worksharing
  • 11h45 – 12h30 : Exercises
  • 12h30 – 14h : Lunch
  • 14h – 14h45 : Synchronization – Pitfalls
  • 14h45 – 15h30 : Exercises
  • 15h30 – 15h45 : Break
  • 15h45 – 16h30 : Introduction to the MPI parallel programming model – Point-to-point communications (see the sketch after this list)
  • 16h30 – 17h30 : Exercises
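To illustrate the point-to-point communications introduced at the end of Day 1, a minimal blocking send/receive between two ranks could look as follows. This is an assumed example, not the course exercise; it must be run with at least two MPI processes.

  ! Illustrative blocking point-to-point exchange: rank 0 sends one value to rank 1.
  ! Run with at least two processes, e.g. (assumed): mpirun -np 2 ./p2p
  program p2p
    use mpi
    implicit none
    integer :: ierr, rank
    integer :: status(MPI_STATUS_SIZE)
    real(8) :: val

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)

    if (rank == 0) then
       val = 3.14d0
       ! blocking send: count 1, tag 0, destination rank 1
       call MPI_Send(val, 1, MPI_DOUBLE_PRECISION, 1, 0, MPI_COMM_WORLD, ierr)
    else if (rank == 1) then
       ! blocking receive: matches the source and tag above
       call MPI_Recv(val, 1, MPI_DOUBLE_PRECISION, 0, 0, MPI_COMM_WORLD, status, ierr)
       print *, 'rank 1 received', val
    end if

    call MPI_Finalize(ierr)
  end program p2p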

Day 2

  • 9h – 10h00 : Point-to-point communications – Collective communications
  • 10h – 10h45 : Exercises
  • 10h45 – 11h : Break
  • 11h – 11h45 : Collective communications (see the sketch after this list)
  • 11h45 – 12h30 : Exercises
  • 12h30 – 14h : Lunch
  • 14h – 14h45 : Derived datatypes
  • 14h45 – 15h30 : Exercises
  • 15h30 – 15h45 : Break
  • 15h45 – 16h30 : Communicators – Topologies
  • 16h30 – 17h15 : Exercises
  • 17h15 – 17h30 : Conclusions
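As a taste of the Day 2 material on collective communications, the sketch below sums one value per rank onto rank 0 with MPI_Reduce. It is an illustrative example under the same assumptions as the sketches above, not part of the course hand-outs.

  ! Illustrative collective communication: sum one value per rank onto rank 0.
  program coll
    use mpi
    implicit none
    integer :: ierr, rank, nprocs
    real(8) :: mine, total

    call MPI_Init(ierr)
    call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
    call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

    mine = real(rank + 1, 8)   ! each rank contributes the value rank+1

    ! MPI_Reduce combines the contributions of all ranks with MPI_SUM;
    ! only the root (rank 0) receives the result.
    call MPI_Reduce(mine, total, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0, &
                    MPI_COMM_WORLD, ierr)

    if (rank == 0) print *, 'sum over', nprocs, 'ranks =', total
    call MPI_Finalize(ierr)
  end program coll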

Final examination

A final examination will be conducted during the training session.

 
