#977 (MPI_GROUP_MAX exceeded when running with LIM3) – NEMO

Opened 12 years ago

Closed 12 years ago

Last modified 8 years ago

#977 closed Bug (fixed)

MPI_GROUP_MAX exceeded when running with LIM3

Reported by: acc
Owned by: acc
Priority: low
Milestone:
Component: OCE
Version: v3.4
Severity:
Keywords: CPP LBC MPI WG
Cc:

Description

When running an ORCA2_LIM3 reference configuration on an SGI Altix ICE system, the job can fail with an error reporting that the maximum permitted number of MPI groups has been exceeded. Raising the MPI_GROUP_MAX environment variable only delays the failure. The problem can be traced to the mpp_ini_ice routine, which creates MPI groups on every call but never frees them. This may not be a problem on all systems, but the groups should be freed once they have been used to create the new communicator (which is itself eventually freed in limthd). The required changes will be made in lib_mpp.F90 on the trunk.
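
The general MPI pattern behind the fix is sketched below. This is an illustrative example only, not the actual NEMO code; the routine and variable names (ice_comm_create, kranks, etc.) are hypothetical:

    SUBROUTINE ice_comm_create( knum, kranks, kcomm_ice )
       ! Illustrative sketch: build a communicator for the ice ranks and
       ! release the temporary group handles so that repeated calls do not
       ! accumulate groups until MPI_GROUP_MAX is exceeded.
       USE mpi
       INTEGER, INTENT(in)  :: knum            ! number of ranks in the ice group
       INTEGER, INTENT(in)  :: kranks(knum)    ! global ranks taking part
       INTEGER, INTENT(out) :: kcomm_ice       ! resulting communicator
       INTEGER :: igrp_world, igrp_ice, ierr

       CALL MPI_COMM_GROUP ( MPI_COMM_WORLD, igrp_world, ierr )            ! group of all ranks
       CALL MPI_GROUP_INCL ( igrp_world, knum, kranks, igrp_ice, ierr )    ! subset for the ice step
       CALL MPI_COMM_CREATE( MPI_COMM_WORLD, igrp_ice, kcomm_ice, ierr )   ! new communicator

       ! Free the group handles once the communicator exists; the
       ! communicator itself is freed later with MPI_COMM_FREE.
       CALL MPI_GROUP_FREE( igrp_ice,   ierr )
       CALL MPI_GROUP_FREE( igrp_world, ierr )
    END SUBROUTINE ice_comm_create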

Commit History (1)

Changeset: 3420
Author: acc
Time: 2012-06-21T15:50:17+02:00
ChangeLog: Bugfix #977. Minor changes to lib_mpp.F90 to free mpi_group structures after use in mpp_ini_ice

Change History (4)

comment:1 Changed 12 years ago by acc

  • Resolution set to fixed
  • Status changed from new to closed

Fix committed to the trunk at revision 3420

comment:2 Changed 8 years ago by nicolasmartin

  • Keywords LBC added; lib_mpp removed

comment:3 Changed 8 years ago by nicolasmartin

  • Keywords MPI WG added; MPI_GROUP removed

comment:4 Changed 8 years ago by nicolasmartin

  • Keywords CPP added; key_mpp_mpi removed