New URL for NEMO forge!

Since March 2022, along with the NEMO 4.2 release, code development has moved to a self-hosted GitLab.
This forge is now archived and remains online for reference.

Opened 10 years ago

Closed 6 years ago

#1123 closed Defect (fixed)

CICE-NEMO MPI communication for 'CICE integrated in NEMO' case

Reported by: dupontf Owned by: nemo
Priority: low Milestone:
Component: OCE Version: v3.6
Severity: Keywords:
Cc: frrh


Hi all,

We found that when several components share the same resource and each takes its own separate slice, the fact that CICE assumes MPI_COMM_WORLD as its default communicator can hang the whole system. This includes NEMO-CICE: our own experience is with SAM2, the Mercator-Ocean data assimilation system, but it applies to any coupling system with distributed resources.

However, in this particular scenario CICE is a sub-component of NEMO, just like LIM2, i.e. a simple call from sbcmod. It should therefore share the same MPI communicator.

Our solution is to pass mpi_comm_opa to the CICE sea-ice sub-component of NEMO as an argument through the NEMO-CICE interface (cascading down to the CICE code that deals with the MPI communicator). Anything along these lines integrated in the next release would be greatly appreciated!
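A minimal sketch of what such an interface change might look like (the routine name, the OPTIONAL-argument style, and the call site below are illustrative assumptions, not the actual CICE or NEMO code):

```fortran
! NEMO side (e.g. in the sbcice_cice interface): hand NEMO's
! communicator to CICE instead of letting CICE default to MPI_COMM_WORLD.
CALL init_communicate( mpi_comm_opa )

! CICE side (cice/mpi/ice_communicate.F90): accept an optional communicator.
SUBROUTINE init_communicate( in_comm )
   INTEGER, INTENT(in), OPTIONAL :: in_comm
   IF ( PRESENT(in_comm) ) THEN
      ice_comm = in_comm          ! reuse the caller's communicator
   ELSE
      ice_comm = MPI_COMM_WORLD   ! stand-alone CICE run
   END IF
END SUBROUTINE init_communicate
```

Making the argument OPTIONAL would keep stand-alone CICE builds working unchanged while letting NEMO-driven runs share a single communicator.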





Change History (3)

comment:1 Changed 10 years ago by charris

We've also found this to be an issue when trying to run CICE at the same time as XIOS in attached mode. Andrew Coward has resolved this by adding a 'USE lib_mpp, ONLY : mpi_comm_opa' in the CICE code in cice/mpi/ice_communicate.F90 and then replacing

#if (defined key_oasis3 || defined key_oasis4)
    ice_comm = localComm       ! communicator from NEMO/OASISn
#else
    ice_comm = MPI_COMM_WORLD  ! Global communicator
#endif

with

#if (defined key_oasis3 || defined key_oasis4)
    ice_comm = localComm       ! communicator from NEMO/OASISn
#elif defined key_iomput
    ice_comm = mpi_comm_opa    ! communicator from NEMO/XIOS
#else
    ice_comm = MPI_COMM_WORLD  ! Global communicator
#endif

Do we think this kind of solution would work in all situations in which we are currently having problems? Could we simplify the above logic by using the same communicator from NEMO in all situations (making use of the CICE_IN_NEMO key where necessary)?

I guess part of my question is whether we can avoid the need for any changes in NEMO at all, in which case it isn't a NEMO development issue as such.
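One possible simplification along those lines, assuming a CICE_IN_NEMO preprocessor key is defined whenever CICE is built inside NEMO (a sketch only, not tested code):

```fortran
#if defined CICE_IN_NEMO
    ice_comm = mpi_comm_opa    ! always reuse NEMO's communicator
#elif (defined key_oasis3 || defined key_oasis4)
    ice_comm = localComm       ! communicator from OASIS
#else
    ice_comm = MPI_COMM_WORLD  ! stand-alone CICE run
#endif
```

This would make the OASIS and attached-XIOS cases behave identically whenever CICE runs inside NEMO, with no changes needed on the NEMO side.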

comment:2 Changed 10 years ago by frrh

  • Cc frrh added

comment:3 Changed 6 years ago by clevy

  • Resolution set to fixed
  • Status changed from new to closed