2020WP/HPC-07_mocavero_mpi3 – NEMO



Name and subject of the action


The PI is responsible for closely following the progress of the action, and in particular for contacting the NEMO project manager if the delay on the preview (or review) is longer than the expected two weeks.

  1. Summary
  2. Preview
  3. Tests
  4. Review

Summary

Action MPI3 neighbourhood collective communications instead of point-to-point communications
PI(S) Silvia Mocavero and Italo Epicoco
Digest MPI-3 provides new neighbourhood collective operations that allow a halo exchange to be performed with a single MPI communication call.
Dependencies If any
Branch source:/NEMO/branches/{YEAR}/dev_r{REV}_{ACTION_NAME}
Previewer(s) Mirek Andrejczuk
Reviewer(s) Mirek Andrejczuk
Ticket #XXXX

Description

This is the continuation of the work started in 2019 (HPC-12_Mocavero_mpi3).

MPI-3 provides new neighbourhood collective operations (e.g. MPI_Neighbor_allgather and MPI_Neighbor_alltoall) that allow a halo exchange to be performed with a single MPI communication call.
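
For illustration only, the pattern can be sketched with the MPI C bindings (NEMO itself is Fortran, and the 2D decomposition, buffer sizes and variable names below are placeholders, not NEMO's lbc_lnk machinery): a Cartesian communicator is created and a single MPI_Neighbor_alltoall call exchanges one value with each of the four cardinal neighbours.

{{{
#!c
#include <mpi.h>

/* Minimal sketch: exchange one halo value with each of the 4 cardinal
 * neighbours using a single neighbourhood collective. The periodic 2D
 * decomposition and buffer sizes are illustrative. */
int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int dims[2] = {0, 0}, periods[2] = {1, 1}, nprocs;
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);
    MPI_Dims_create(nprocs, 2, dims);          /* build a 2D process grid */

    MPI_Comm cart;
    MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart);

    /* On a Cartesian communicator the neighbour order is fixed:
     * (-x, +x, -y, +y), i.e. the 4 neighbours of a 5-point stencil. */
    double sendhalo[4] = {0.0, 1.0, 2.0, 3.0};
    double recvhalo[4];

    /* One call replaces the usual isend/irecv/waitall sequence. */
    MPI_Neighbor_alltoall(sendhalo, 1, MPI_DOUBLE,
                          recvhalo, 1, MPI_DOUBLE, cart);

    MPI_Finalize();
    return 0;
}
}}}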

These collective communications were integrated and tested in the NEMO code during 2019 in order to compare their performance with the traditional point-to-point halo exchange currently implemented in NEMO. The first version of the implementation uses a Cartesian topology, so it supports neither the 9-point stencil nor land domain exclusion, and the north fold is handled as usual. The new collective communications have been tested on a representative kernel implementing the FCT advection scheme.

Preliminary tests show an improvement of 18-32% on the GYRE_PISCES configuration (with nn_GYRE=200), depending on the number of allocated cores. The output accuracy is preserved.

During 2020 we intend to integrate a graph topology so that the routines using a 9-point stencil, the land domain exclusion and the north fold exchanges are also supported through MPI3 neighbourhood collective communications.

Implementation

Step 1: alignment of the dev_r11470_HPC_12_mpi3 branch with the new trunk

Step 2: integration of a graph topology to allow each process to exchange its halo with diagonal processes (when a 9-point stencil is needed) or with non-neighbouring processes (when land domain exclusion is activated or the north fold has to be handled)
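
A minimal sketch of what such a graph communicator could look like, using MPI_Dist_graph_create_adjacent from MPI-3 (C bindings; the neighbour list passed in is a hypothetical placeholder, whereas in NEMO it would be derived from the domain decomposition, the land mask and the north-fold pairing):

{{{
#!c
#include <mpi.h>

/* Sketch of Step 2: build a distributed-graph communicator whose edges
 * are exactly the halo partners of this rank, so that diagonal
 * neighbours (9-point stencil), non-neighbouring north-fold partners
 * and decompositions with excluded land-only domains can all be
 * expressed. */
void build_graph_comm(MPI_Comm comm_world,
                      int nneigh, const int *neighbours,
                      MPI_Comm *comm_graph)
{
    /* The same ranks are used as sources and destinations because a
     * halo exchange is symmetric: every partner both sends and receives. */
    MPI_Dist_graph_create_adjacent(comm_world,
                                   nneigh, neighbours, MPI_UNWEIGHTED,
                                   nneigh, neighbours, MPI_UNWEIGHTED,
                                   MPI_INFO_NULL, 1 /* reorder */,
                                   comm_graph);
}
}}}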

Step 3: replacement of point-to-point communications with collective ones within the NEMO code
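
As an illustration of Step 3, once a neighbourhood (graph) communicator exists, the per-neighbour isend/irecv/waitall sequence can be expressed as a single MPI_Neighbor_alltoallv call; the routine and argument names below are illustrative, not the actual NEMO exchange routines:

{{{
#!c
#include <mpi.h>

/* Sketch of Step 3: the per-neighbour point-to-point loop of the classic
 * halo exchange collapses into one neighbourhood collective. The counts
 * and displacements describe how much halo data goes to (and comes from)
 * each neighbour, in the order defined by the graph communicator. */
void halo_exchange(MPI_Comm comm_graph,
                   const double *sendbuf, const int *sendcounts, const int *sdispls,
                   double *recvbuf, const int *recvcounts, const int *rdispls)
{
    MPI_Neighbor_alltoallv(sendbuf, sendcounts, sdispls, MPI_DOUBLE,
                           recvbuf, recvcounts, rdispls, MPI_DOUBLE,
                           comm_graph);
}
}}}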

Documentation updates


...

Preview


...

Tests


...

Review


...