#2495 closed Bug (fixed)
trunk/OCE won't compile without MPI (1-proc "nemo.exe")
| Reported by: | laurent | Owned by: | systeam |
|---|---|---|---|
| Priority: | high | Milestone: | Unscheduled |
| Component: | LBC | Version: | trunk |
| Severity: | major | Keywords: | LBC, MPI, MPP, compilation |
| Cc: | | | |
Description
Context
Compilation of the code (OCE) without MPI (i.e. without the CPP key "key_mpp_mpi") fails when building "lbc_lnk.F90".
Analysis
Some MPI-related calls are still present in 2 included headers after the pre-processing stage: mpp_lbc_north_icb_generic.h90 and mpp_nfd_generic.h90. These 2 header files both call "MPI_ALLGATHER" without any "#if defined key_mpp_mpi" guard...
Fix
(Note: I am in unknown territory here; this fix worked for my simple test case "STATION_ASF".)
In mpp_nfd_generic.h90, replace:
```fortran
CALL MPI_ALLGATHER( znorthloc  , ibuffsize, MPI_TYPE,                &
   &                znorthgloio, ibuffsize, MPI_TYPE, ncomm_north, ierr )
```
with:
```fortran
#if defined key_mpp_mpi
CALL MPI_ALLGATHER( znorthloc  , ibuffsize, MPI_TYPE,                &
   &                znorthgloio, ibuffsize, MPI_TYPE, ncomm_north, ierr )
#endif
```
In mpp_lbc_north_icb_generic.h90, replace:
```fortran
CALL MPI_ALLGATHER( znorthloc_e(1,1-kextj)    , itaille, MPI_TYPE,   &
   &                znorthgloio_e(1,1-kextj,1), itaille, MPI_TYPE,   &
   &                ncomm_north, ierr )
```
with:
```fortran
#if defined key_mpp_mpi
CALL MPI_ALLGATHER( znorthloc_e(1,1-kextj)    , itaille, MPI_TYPE,   &
   &                znorthgloio_e(1,1-kextj,1), itaille, MPI_TYPE,   &
   &                ncomm_north, ierr )
#endif
```
Commit History (1)
| Changeset | Author | Time | ChangeLog |
|---|---|---|---|
| 13438 | smasson | 2020-08-26T11:57:31+02:00 | trunk: bugfix to compile and run the code without key_mpp_mpi, see #2495 |
Change History (5)
comment:1 Changed 4 years ago by hadcv
I can confirm the same issue in GYRE.
At r13383, mpp_lbc_north_icb_generic.h90 has already been resolved by wrapping the contents of ROUTINE_LNK in an #if defined key_mpp_mpi block. I have done the same for ROUTINE_NFD in mpp_nfd_generic.h90, and this seems to work ok.
However, there remain some issues with the mono-processor configuration in mppini.F90:
comment:2 Changed 4 years ago by smasson
I agree with the proposed fix.
Some remarks:
- Reading nammpp in a non-MPI run is quite a strange idea... but maybe we should also rename nammpp to nammpi. Using values larger than 1 for nn_hls when there are no communications to be done is just a waste of CPU.
- XIOS requires MPI, so I don't see the point of using XIOS with NEMO compiled without MPI.
- Please don't confuse non-MPI with single-core. A single-core simulation can be done with the code compiled with MPI (key_mpp_mpi) and executed on a single MPI task.
comment:3 Changed 4 years ago by smasson
In 13438:
comment:4 Changed 4 years ago by smasson
fixed in [13438]
I still don't understand why we spend time making the code heavier and less readable to maintain the possibility of compiling the code without MPI.
Who can claim to do ocean modelling without the possibility of using an MPI library in 2020? Who knows a machine on which we cannot install an MPI library?
comment:5 Changed 4 years ago by smasson
- Resolution set to fixed
- Status changed from new to closed