New URL for NEMO forge!   http://forge.nemo-ocean.eu

Since March 2022, along with the NEMO 4.2 release, code development has moved to a self-hosted GitLab.
This forge is now archived and remains online for historical reference.
Ticket Diff – NEMO

Changes between Version 5 and Version 11 of Ticket #2011


Timestamp: 2019-11-28T22:00:48+01:00
Author: epico
  • Ticket #2011

    • Property Owner changed from francesca to mocavero
    • Property Wp_comment changed
      from: "Some tests on a significative kernel have been executed (using a mini-app approach). The work will be continued in 2019 and the action is postponed."
      to: "Some tests on a significative kernel have been executed (using a mini-app approach). The integration of the neighbourhood collective communications in the trunk is ongoing. The branch will be ready for the merge party at the end of 2019."
  • Ticket #2011 – Description

    v5 → v11

    == Implementation plan

    - The work, started in 2018, is described in the following:
    + The work is described in the following:

    - Step 1: extraction of a mini-app to be used as a test case. The advection kernel has been considered as the test case and a mini-app has been implemented. The parallel application performs the MUSCL advection scheme, and the dimension of the subdomain as well as the number of parallel processes can be set by the user (done)
    + Step 1: extraction of a mini-app to be used as a test case. The advection kernel has been considered as the test case and a mini-app has been implemented. The parallel application performs the MUSCL advection scheme, and the dimension of the subdomain as well as the number of parallel processes can be set by the user

    - Step 2: integration of the new MPI-3 collective communications in the mini-app and performance comparison with the standard MPI-2 point-to-point communications. The evaluation of the proof of concept will be performed by changing the subdomain size. Performance analysis will be executed on systems available at CMCC. However, tests on other systems (available at Consortium partner sites) are welcome (ongoing)
    + Step 2: integration of the new MPI-3 neighbourhood collective communications in the mini-app and performance comparison with the standard MPI-2 point-to-point communications. The evaluation of the proof of concept has been performed by changing the subdomain size. Performance analysis has been executed on systems available at CMCC.

    - Step 3: the collective communications will be integrated in the NEMO code. The use of collective communications will be optional, and the choice between point-to-point and collective communications will be left to the user (through a dedicated namelist parameter), also depending on the architecture where the code runs. The initialisation of the cartesian topology will be integrated in the mppini module, while the new version of lbc_lnk (performing a single MPI-3 collective call) will be added in the lib_mpp module. No changes are required in the NEMO routines where lbc_lnk is called.
    + Step 3: the neighbourhood collective communications have been integrated in the NEMO code. The first version of the implementation uses a cartesian topology, so it supports neither the 9-point stencil nor land-domain exclusion, and the north fold is handled as usual. The use of the new collective communications has been tested on a representative kernel implementing the FCT advection scheme.

    - The proposed changes do not impact NEMO usability. The reference manual will not be changed.
    + The modified files are:
    +
    + - OCE/LBC/lib_mpp.F90, where the new communicator is created, taking into account the different ordering of MPI processes between the cartesian communicator and NEMO
    + - OCE/LBC/mppini.F90, where the ranks of the processes are reordered and the call to the communicator-creation routine has been added
    + - OCE/LBC/lbclnk.F90, where generic.h90 files to introduce the MPI-3 neighbourhood collectives are created
    + - OCE/TRA/traadv_fct.F90, as an example of a routine where the MPI-3 neighbourhood collectives can be used
    +
    + Two files have been added:
    +
    + - OCE/LBC/lbc_lnk_nc_generic.h90, to handle multi-field exchange in the MPI-3 case
    + - OCE/LBC/mpp_nc_generic.h90, where the halo exchange is implemented.
    +
    + The branch is ready to be merged during the 2019 Merge Party. The proposed changes do not impact NEMO usability. The reference manual will not be changed since the code modifications are transparent to users.
    +
    + Step 4 (action will be continued in 2020): integration of a graph topology to support the routines that use a 9-point stencil, land-domain exclusion and the north-fold exchanges through MPI-3 neighbourhood collective communications