#2495 (trunk/OCE won't compile without MPI (1-proc "nemo.exe")) – NEMO

Opened 4 years ago

Closed 4 years ago

Last modified 4 years ago

#2495 closed Bug (fixed)

trunk/OCE won't compile without MPI (1-proc "nemo.exe")

Reported by: laurent
Owned by: systeam
Priority: high
Milestone: Unscheduled
Component: LBC
Version: trunk
Severity: major
Keywords: LBC, MPI, MPP, compilation
Cc:

Description

Context

Compilation of the code (OCE) without MPI (i.e. without the CPP key "key_mpp_mpi") fails when attempting to compile "lbc_lnk.F90".

Analysis

Some MPI-related calls remain in two included headers after the pre-processing stage: mpp_lbc_north_icb_generic.h90 and mpp_nfd_generic.h90. Both header files call "MPI_ALLGATHER" without any "#if defined key_mpp_mpi" guard...

Fix

(Note: I'm in unknown territory here; this fix worked for my simple test case "STATION_ASF".)

In mpp_nfd_generic.h90, replace:

         CALL MPI_ALLGATHER( znorthloc  , ibuffsize, MPI_TYPE,                &
            &                znorthgloio, ibuffsize, MPI_TYPE, ncomm_north, ierr )

with:

#if defined key_mpp_mpi
         CALL MPI_ALLGATHER( znorthloc  , ibuffsize, MPI_TYPE,                &
            &                znorthgloio, ibuffsize, MPI_TYPE, ncomm_north, ierr )
#endif

In mpp_lbc_north_icb_generic.h90, replace:      

CALL MPI_ALLGATHER( znorthloc_e(1,1-kextj)    , itaille, MPI_TYPE,    &
         &                znorthgloio_e(1,1-kextj,1), itaille, MPI_TYPE,    &
         &                ncomm_north, ierr )

with:

#if defined key_mpp_mpi
      CALL MPI_ALLGATHER( znorthloc_e(1,1-kextj)    , itaille, MPI_TYPE,    &
         &                znorthgloio_e(1,1-kextj,1), itaille, MPI_TYPE,    &
         &                ncomm_north, ierr )
#endif

Commit History (1)

Changeset: 13438
Author: smasson
Time: 2020-08-26T11:57:31+02:00
ChangeLog:

trunk: bugfix to compile and run the code without key_mpp_mpi, see #2495

Change History (5)

comment:1 Changed 4 years ago by hadcv

I can confirm the same issue in GYRE.

At r13383, mpp_lbc_north_icb_generic.h90 has already been resolved by wrapping the contents of ROUTINE_LNK in an #if defined key_mpp_mpi block. I have done the same for ROUTINE_NFD in mpp_nfd_generic.h90 and this seems to work ok.
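
For illustration only, here is a minimal sketch of what such a guard around a routine body looks like (the routine name, argument and comments are placeholders of mine, not the actual ROUTINE_NFD generic interface):

   SUBROUTINE nfd_exchange_sketch( pfld )
      REAL, INTENT(inout) :: pfld(:,:)   ! field to fold across the northern boundary
#if defined key_mpp_mpi
      ! MPI build: gather the northern rows from all tasks (MPI_ALLGATHER),
      ! apply the north fold, then redistribute the result
#else
      ! non-MPI build: the single process already holds the whole northern band,
      ! so there is nothing to gather and the MPI section is compiled out
#endif
   END SUBROUTINE nfd_exchange_sketch

(Compiled as a .F90 file so that the preprocessor handles the key_mpp_mpi test.)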

However, there remain some issues with the mono processor configuration in mppini.F90:

  • prtctl fails to compile because mpp_basesplit, mpp_is_ocean, mpp_getnum and readbot_strip are undefined.

I resolved this by moving these functions outside of the #if defined key_mpp_mpi block (see the sketch after this list).

  • Nis0, Njs0, Nie0 and Nje0 are incorrectly calculated because nn_hls is not defined at the time init_doloop is called (it is defined afterwards), resulting in out-of-bounds array accesses.

I resolved this by moving nn_hls = 1 to before the init_doloop call. However, the nammpp namelist should be read here to get the nn_hls value.

  • jpiglo and jpjglo are set equal to Ni0glo and Nj0glo; this causes XIOS to crash due to an inconsistency between the sizes of the global domain and the data arrays passed to it.

I resolved this by setting them to Ni0glo + 2 * nn_hls and Nj0glo + 2 * nn_hls, as in the MPP case. A sketch illustrating these three points follows below.
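
To make the three points above more concrete, here is a small, self-contained sketch. It only mimics the NEMO variable names; basesplit_like and init_doloop_like are placeholders of mine for the real mppini.F90 routines, the domain sizes are made up, and nn_hls = 1 is an assumed halo width:

   MODULE mppini_sketch
      IMPLICIT NONE
      INTEGER :: nn_hls = -1          ! halo width, undefined until the caller sets it
      INTEGER :: jpi    =  0          ! local domain size, i-direction
      INTEGER :: Nis0   =  0, Nie0 = 0
   CONTAINS
      SUBROUTINE basesplit_like()
         ! stand-in for the decomposition helpers (mpp_basesplit & co.) that
         ! prtctl needs in both builds: they live outside any key_mpp_mpi block
         PRINT *, 'decomposition helper: always compiled'
      END SUBROUTINE basesplit_like
      SUBROUTINE init_doloop_like()
         ! stand-in for init_doloop: derives inner-domain loop bounds from nn_hls,
         ! so nn_hls must already be defined when this is called
         Nis0 = 1   + nn_hls
         Nie0 = jpi - nn_hls
      END SUBROUTINE init_doloop_like
   END MODULE mppini_sketch

   PROGRAM mono_init_sketch
      USE mppini_sketch
      IMPLICIT NONE
      INTEGER :: Ni0glo, Nj0glo, jpiglo, jpjglo
      Ni0glo = 30 ; Nj0glo = 20       ! inner global domain sizes (example values)
      nn_hls = 1                      ! set the halo width BEFORE deriving loop bounds
      jpi    = Ni0glo + 2*nn_hls      ! mono-processor run: local domain = global domain
      CALL init_doloop_like()         ! now works with a defined nn_hls
      jpiglo = Ni0glo + 2*nn_hls      ! global sizes passed to XIOS must include
      jpjglo = Nj0glo + 2*nn_hls      ! the halos, exactly as in the MPP case
      CALL basesplit_like()
      PRINT *, 'Nis0 =', Nis0, ', Nie0 =', Nie0, ', jpiglo =', jpiglo, ', jpjglo =', jpjglo
   END PROGRAM mono_init_sketch

With these example values the sketch prints Nis0 = 2, Nie0 = 31, jpiglo = 32 and jpjglo = 22; the point is only the ordering (nn_hls set before init_doloop) and the "+ 2 * nn_hls" sizing, not the actual numbers.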

comment:2 Changed 4 years ago by smasson

I agree with the fix proposed.
Some remarks:

  • Reading nammpp in a non-MPI run is quite a strange idea... but maybe we should also rename nammpp to nammpi. Using values larger than 1 for nn_hls when there are no communications to be done is just a waste of CPU.
  • XIOS requires MPI, so I don't see the point of using XIOS with NEMO compiled without MPI.
  • Please don't confuse non-MPI with single core. A single-core simulation can be done with the code compiled with MPI (key_mpp_mpi) and executed on a single MPI task.
Last edited 4 years ago by smasson

comment:3 Changed 4 years ago by smasson

In 13438:

trunk: bugfix to compile and run the code without key_mpp_mpi, see #2495

comment:4 Changed 4 years ago by smasson

fixed in [13438]

I still don't understand why we spend time making the code heavier and less readable to maintain the possibility of compiling the code without MPI.
Who can claim to be doing ocean modelling without the possibility of using an MPI library in 2020? Who knows of a machine on which an MPI library cannot be installed?

comment:5 Changed 4 years ago by smasson

  • Resolution set to fixed
  • Status changed from new to closed