New URL for NEMO forge!   http://forge.nemo-ocean.eu

Since the NEMO 4.2 release in March 2022, code development has moved to a self-hosted GitLab.
This forge is now archived and remains online for reference.

Opened 6 years ago

Last modified 4 years ago

#2011 closed Task

HPC-04(2018WP)_Mocavero_mpi3 — at Version 5

Reported by: mocavero
Owned by: francesca
Priority: high
Milestone: 2019 WP
Component: OCE
Version: trunk
Severity: minor
Keywords:
Cc:

Description (last modified by francesca)

Context

MPI-3 provides new neighbourhood collective operations (i.e. MPI_Neighbor_allgather and MPI_Neighbor_alltoall) that allow the halo exchange to be performed with a single MPI communication call when a 5-point stencil is used.
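The topology behind this can be sketched without MPI at all. The pure-Python function below (illustrative only, not NEMO code) computes the four 5-point-stencil neighbours of a rank in a 2-D Cartesian process grid, as MPI_Cart_shift would; with MPI-3, a single MPI_Neighbor_alltoall over such a topology replaces the four point-to-point send/receive pairs.

```python
# Conceptual sketch (pure Python, no MPI): neighbours of a rank in a
# px-by-py Cartesian process grid with row-major rank ordering.
# An MPI-3 neighbourhood collective exchanges halos with all four
# neighbours in one call instead of four send/recv pairs.

def cart_neighbours(rank, px, py, periodic=False):
    """Return [west, east, south, north] ranks; None where the domain
    is not periodic and the neighbour falls outside the grid."""
    ix, iy = rank % px, rank // px

    def wrap(i, n):
        return i % n if periodic else (i if 0 <= i < n else None)

    def to_rank(jx, jy):
        return None if jx is None or jy is None else jy * px + jx

    return [to_rank(wrap(ix - 1, px), iy),   # west
            to_rank(wrap(ix + 1, px), iy),   # east
            to_rank(ix, wrap(iy - 1, py)),   # south
            to_rank(ix, wrap(iy + 1, py))]   # north
```

For example, the centre rank of a 3x3 grid has all four neighbours, while a corner rank of a non-periodic grid has only two.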

Collective communications will be tested in the NEMO code in order to compare their performance against the traditional point-to-point halo exchange currently implemented in NEMO.

The replacement of point-to-point communications with the new collective ones will be designed and implemented while preserving the accuracy of the results.

Implementation plan

The work, started in 2018, proceeds in the following steps:

Step 1: extraction of a mini-app to be used as a test case. The advection kernel was chosen as the test case and a mini-app was implemented. The parallel application performs the MUSCL advection scheme, and both the subdomain size and the number of parallel processes can be set by the user (done)
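To illustrate the kind of kernel such a mini-app exercises, here is a minimal 1-D MUSCL advection step in Python (constant velocity, periodic domain, minmod slope limiter). This is a hedged sketch of the scheme's structure, not NEMO's actual MUSCL implementation.

```python
# Minimal 1-D MUSCL step for linear advection (u > 0, periodic domain):
# piecewise-linear reconstruction with a minmod limiter, upwind fluxes.
# Illustrative only; names and layout are not taken from NEMO.

def minmod(a, b):
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_step(q, u, dx, dt):
    n = len(q)
    c = u * dt / dx                       # Courant number
    # limited slopes per cell
    s = [minmod(q[i] - q[i - 1], q[(i + 1) % n] - q[i]) for i in range(n)]
    # upwind interface value at i+1/2, reconstructed from cell i (u > 0)
    ql = [q[i] + 0.5 * (1.0 - c) * s[i] for i in range(n)]
    flux = [u * v for v in ql]
    # conservative update; flux[i-1] wraps periodically for i = 0
    return [q[i] - dt / dx * (flux[i] - flux[i - 1]) for i in range(n)]
```

Because the update is in flux form on a periodic domain, the scheme conserves the total of q, and a constant field is advected unchanged.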

Step 2: integration of the new MPI-3 collective communications in the mini-app and performance comparison with the standard MPI-2 point-to-point communications. The proof of concept will be evaluated by varying the subdomain size. Performance analysis will be executed on systems available at CMCC; tests on other systems (available at Consortium partners' sites) are also welcome (ongoing)
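The comparison described in step 2 amounts to timing two exchange variants across a range of subdomain sizes. A minimal harness of that shape might look like the following Python sketch, where the two callables stand in for the MPI-2 point-to-point and MPI-3 collective versions; names and structure are illustrative assumptions, not the mini-app's code.

```python
# Sketch of a step-2 style comparison: time each exchange variant for
# each subdomain size. The variant callables are stand-ins for the
# MPI-2 point-to-point and MPI-3 neighbourhood-collective exchanges.
import time

def compare(variants, sizes, reps=3):
    """variants: {name: callable(halo)}; returns {(name, size): seconds}."""
    results = {}
    for name, fn in variants.items():
        for n in sizes:
            halo = list(range(n))          # dummy halo buffer of size n
            t0 = time.perf_counter()
            for _ in range(reps):
                fn(halo)
            results[(name, n)] = time.perf_counter() - t0
    return results
```

In a real MPI run the timing would wrap the actual communication calls and be reduced across ranks; the loop structure over variants and sizes is the point here.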

Step 3: the collective communications will be integrated in the NEMO code. The use of collective communications will be optional, with the choice between point-to-point and collective communications left to the user (through a dedicated namelist parameter), also depending on the architecture where the code will run. The initialisation of the Cartesian topology will be integrated in the mppini module, while the new version of lbc_lnk (which performs a single MPI-3 collective call) will be added to the lib_mpp module. No changes are required in the NEMO routines where lbc_lnk is called.
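The dispatch idea of step 3, a single user-facing switch selecting the exchange backend inside lbc_lnk so that calling routines stay untouched, can be sketched as follows. Python is used for illustration, and `ln_nnbr` is a hypothetical flag name standing in for the dedicated namelist parameter; neither is taken from the actual branch.

```python
# Hedged sketch of the proposed selection logic: one flag (mirroring a
# namelist parameter, hypothetical name ln_nnbr) picks the exchange
# backend inside the lbc_lnk entry point, so callers are unchanged.

def lbc_lnk(field, ln_nnbr, p2p_exchange, nbr_exchange):
    """Dispatch the halo exchange to the chosen backend."""
    return nbr_exchange(field) if ln_nnbr else p2p_exchange(field)
```

This keeps the public interface stable: switching between MPI-2 point-to-point and MPI-3 collective exchanges is a run-time configuration choice, not a source change in the callers.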

The proposed changes do not impact NEMO usability; the reference manual will not be changed.

Commit History (3)

Changeset | Author | Date
11955 | mocavero | 2019-11-22T18:44:17+01:00

Bug fix for MPI3 neighbourhood collectives halo exchange. See ticket #2011

11940 | mocavero | 2019-11-20T22:48:28+01:00

Add MPI3 neighbourhood collectives halo exchange in LBC and call it in tracer advection FCT scheme #2011

11496 | mocavero | 2019-09-04T10:36:21+02:00

Create HPC-12 branch - ticket #2011

Change History (5)

comment:1 Changed 6 years ago by mocavero

  • Owner set to mocavero
  • Status changed from new to assigned

comment:2 Changed 6 years ago by francesca

  • Description modified (diff)
  • Milestone changed from 2018 WP to 2019 WP
  • Owner changed from mocavero to francesca
  • wp_comment set to Some tests on a significant kernel have been executed (using a mini-app approach). The work will continue in 2019 and the action is postponed.

comment:3 Changed 6 years ago by nicolasmartin

  • Summary changed from HPC-04_Mocavero_mpi3 to HPC-04(2018WP)_Mocavero_mpi3

comment:4 Changed 5 years ago by nemo

  • Priority changed from low to high

comment:5 Changed 5 years ago by francesca

  • Description modified (diff)