2011WP/2011Stream2/DynamicMemory – NEMO

Discussion of coding approach/style for dynamic memory

As a basis for the discussion, here's how I've currently coded NEMO (v3.2) to use dynamic memory. I've used '...' to indicate places where I've omitted chunks of code for clarity/brevity.

The highest-level changes are in OPA_SRC/opa.F90, in the routine opa_init():

#if defined key_mpp_dyndist
      ! Decide on size of grid now that we have our communicator size
      ! If we're not using dynamic memory then opa_partition does nothing.

#if   defined key_mpp_mpi   ||   defined key_mpp_shmem
      CALL opa_partition()
#else
      jpni = 1
      jpnj = 1
      jpnij = jpni*jpnj
#endif
      ! Calculate domain dimensions given calculated jpi and jpj
      jpi = ( jpiglo-2*jpreci + (jpni-1) ) / jpni + 2*jpreci !: first  dim.
      jpj = ( jpjglo-2*jprecj + (jpnj-1) ) / jpnj + 2*jprecj !: second dim.
      jpim1 = jpi-1                                          !: inner domain indices
      jpjm1 = jpj-1                                          !:   "           "
      jpkm1 = jpk-1                                          !:   "           "
      jpij  = jpi*jpj                                        !:  jpi x jpj

      ! Now we know the dimensions of the grid, allocate arrays
      CALL opa_alloc()
#endif
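
For example (purely illustrative numbers, not taken from any particular configuration): with jpiglo = 182, jpreci = 1 and jpni = 4, the formula gives jpi = (182 - 2 + 3)/4 + 2 = 45 + 2 = 47 using integer division, i.e. four subdomains whose 45-point inner regions tile the 180 interior points of the global first dimension, each padded by a halo of width jpreci on either side.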

The inclusion of the new code is currently controlled by the cpp key 'key_mpp_dyndist', but it would greatly improve the cleanliness of the code if we made a complete break from the static-memory version and could thus drop this cpp key altogether. My understanding of the conclusion of the meeting in Southampton was that this is what we're going to do.

This addition to opa_init() calls two further new routines, opa_partition() and opa_alloc():

   SUBROUTINE opa_partition
     USE par_oce
     IMPLICIT none
     INTEGER, PARAMETER :: nfactmax = 20
     INTEGER :: nfact ! The no. of factors returned
     INTEGER :: ierr  ! Error flag
     INTEGER :: i
     INTEGER :: idiff, mindiff, imin ! For choosing pair of factors that are
                                     ! closest in value
     INTEGER, DIMENSION(nfactmax) :: ifact ! Array of factors
     ierr = 0

#if ! defined key_mpp_dyndist
     RETURN ! If we aren't using dynamic memory then jpnj and jpni are set
            ! at compile time
#else

     CALL factorise(ifact, nfactmax, nfact, mppsize, ierr)

     IF(nfact <= 1)THEN
        WRITE (numout, *) 'WARNING: factorisation of number of PEs failed'
        WRITE (numout, *) '       : using grid of ',mppsize,' x 1'
        jpnj = 1
        jpni = mppsize
     ELSE
        ! Search through factors for the pair that are closest in value
        mindiff = 1000000
        imin    = 1
        DO i=1,nfact-1,2
           idiff = ABS(ifact(i) - ifact(i+1))
           IF(idiff < mindiff)THEN
              mindiff = idiff
              imin = i
           END IF
        END DO
        jpnj = ifact(imin)
        jpni = ifact(imin + 1)
     ENDIF
     jpnij = jpni*jpnj

     WRITE(*,*) 'ARPDBG: jpni = ',jpni,'jpnj = ',jpnj,'jpnij = ',jpnij

#endif

   END SUBROUTINE opa_partition

where factorise() returns the factor pairs of its argument (here mppsize, the number of MPI processes): consecutive entries of ifact form pairs whose product is mppsize, and the loop above picks the pair closest in value for the jpni x jpnj decomposition.
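
For reference, here is a minimal sketch of what factorise() could look like. The argument order and the storage of factors as consecutive pairs are assumptions inferred from the call site above; the actual routine may differ:

   SUBROUTINE factorise( kfax, kmaxfax, knfax, kn, kerr )
      !! Sketch only: return the factor pairs of kn in kfax, stored as
      !! consecutive entries so that kfax(i)*kfax(i+1) == kn.
      IMPLICIT none
      INTEGER, INTENT(in)  :: kmaxfax         ! size of kfax
      INTEGER, INTENT(out) :: kfax(kmaxfax)   ! factors found
      INTEGER, INTENT(out) :: knfax           ! number of factors returned
      INTEGER, INTENT(in)  :: kn              ! number to factorise (mppsize)
      INTEGER, INTENT(out) :: kerr            ! error flag
      INTEGER :: il
      kerr  = 0
      knfax = 0
      DO il = 1, INT( SQRT( REAL(kn) ) )
         IF( MOD( kn, il ) == 0 ) THEN
            IF( knfax+2 > kmaxfax ) THEN
               kerr = 1                       ! not enough space in kfax
               RETURN
            END IF
            kfax(knfax+1) = il
            kfax(knfax+2) = kn / il
            knfax = knfax + 2
         END IF
      END DO
   END SUBROUTINE factorise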

opa_alloc() oversees the allocation of all of the module-level arrays:

   SUBROUTINE opa_alloc
#if defined key_mpp_dyndist

#if defined key_lim2
     USE limwri_2,   ONLY: lim_wri_2_malloc
     USE limdmp_2,   ONLY: lim_dmp_2_malloc
     USE limhdf_2,   ONLY: lim_hdf_2_malloc
     USE ice_2,      ONLY: ice_2_malloc
     USE limsbc_2,   ONLY: lim_sbc_2_malloc
     USE thd_ice_2,  ONLY: thd_ice_2_malloc
     USE limdia_2,   ONLY: lim_dia_2_malloc
     USE dom_ice_2,  ONLY: dom_ice_2_malloc
#endif
     USE ldfslp,     ONLY: ldf_slp_malloc
     USE ldftra_oce, ONLY: ldftra_oce_malloc
     USE ldfdyn_oce, ONLY: ldfdyn_oce_malloc
#if defined key_vvl
     USE domvvl,     ONLY: dom_vvl_malloc
#endif
     USE dom_oce,    ONLY: dom_oce_malloc
...
...
     IMPLICIT none
     INTEGER :: ierr   ! Cumulative status from the individual ..._malloc() calls

     ierr = 0

#if defined key_lim2
     ierr = ierr + lim_wri_2_malloc()
     ierr = ierr + lim_dmp_2_malloc()
     ierr = ierr + lim_hdf_2_malloc()
     ierr = ierr + ice_2_malloc()
     ierr = ierr + lim_sbc_2_malloc()
     ierr = ierr + thd_ice_2_malloc()
     ierr = ierr + lim_dia_2_malloc()
     ierr = ierr + dom_ice_2_malloc()
#endif
     ierr = ierr + ldf_slp_malloc()
     ierr = ierr + ldftra_oce_malloc()
     ierr = ierr + ldfdyn_oce_malloc()
#if defined key_vvl
     ierr = ierr + dom_vvl_malloc()
#endif
     ierr = ierr + dom_oce_malloc()
...
...
! Should do an MPI_SUM on ierr and then everyone can MPI_FINALIZE() if the
! memory allocation has failed on any one PE
     IF(ierr > 0)THEN
        WRITE(numout,*) 
        WRITE(numout,*) 'ERROR: Allocation of memory failed in opa_alloc'
        STOP 'ERROR: Allocation of memory failed in opa_alloc'
     END IF
#else
     RETURN ! Not using dynamic memory therefore do nothing
#endif

   END SUBROUTINE opa_alloc
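
The comment in opa_alloc() about doing an MPI_SUM on ierr could be implemented along the following lines. This is a sketch only: the routine name opa_alloc_check is an assumption, and in practice NEMO's own communicator would be used rather than MPI_COMM_WORLD:

   SUBROUTINE opa_alloc_check( kierr )
      !! Sketch (names illustrative): sum the local allocation status across
      !! all PEs so that every process can shut down cleanly if any one of
      !! them failed to allocate its arrays.
      USE mpi
      IMPLICIT none
      INTEGER, INTENT(in) :: kierr            ! locally accumulated Stat= values
      INTEGER :: ierr_global, icode
      CALL MPI_Allreduce( kierr, ierr_global, 1, MPI_INTEGER, MPI_SUM, &
                          MPI_COMM_WORLD, icode )
      IF( ierr_global > 0 ) THEN
         WRITE(*,*) 'ERROR: memory allocation failed on at least one PE'
         CALL MPI_Finalize( icode )
         STOP
      END IF
   END SUBROUTINE opa_alloc_check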

Each of the modules must then be edited so that the declarations of arrays whose dimensions depend upon jpi or jpj are changed, e.g. for OPA_SRC/DOM/dom_oce.F90:

MODULE dom_oce
   !!======================================================================
   !!                       ***  MODULE dom_oce  ***
   !!       
   !! ** Purpose :   Define in memory all the ocean space domain variables
   !!======================================================================
   !! History :  1.0  ! 2005-10  (A. Beckmann, G. Madec)  reactivate s-coordinate 
   !!----------------------------------------------------------------------
   !! NEMO/OPA 3.2 , LOCEAN-IPSL (2009) 
   !! $Id: dom_oce.F90 1601 2009-08-11 10:09:19Z ctlod $ 
   !! Software governed by the CeCILL licence (modipsl/doc/NEMO_CeCILL.txt)
   !!----------------------------------------------------------------------
   USE par_oce      ! ocean parameters

   IMPLICIT NONE
...
...
#if defined key_mpp_dyndist
   REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: & !: 
#else
   REAL(wp), PUBLIC,              DIMENSION(jpi,jpj,jpk) :: & !:
#endif
      tmask, umask,    &  !: land/ocean mask at T-, U-, V- and F-points
      vmask, fmask        !:

   REAL(wp), PUBLIC, DIMENSION(jpiglo) ::   tpol, fpol          !: north fold mask (nperio= 3 or 4)

#if defined key_noslip_accurate
   INTEGER, PUBLIC,                    DIMENSION(4,jpk) ::   npcoa   !: ???
#if defined key_mpp_dyndist
   INTEGER, PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) ::   nicoa, njcoa        !: ???
#else
   INTEGER, PUBLIC,        DIMENSION(2*(jpi+jpj),4,jpk) ::   nicoa, njcoa        !: ???
#endif
                                         
#endif

   !!----------------------------------------------------------------------
   !! agrif domain
   !!----------------------------------------------------------------------
#if defined key_agrif
   LOGICAL, PUBLIC, PARAMETER ::   lk_agrif = .TRUE.    !: agrif flag
#else
   LOGICAL, PUBLIC, PARAMETER ::   lk_agrif = .FALSE.   !: agrif flag
#endif


#if defined key_mpp_dyndist
   PUBLIC dom_oce_malloc
#endif

#if defined key_mpp_dyndist

CONTAINS

  FUNCTION dom_oce_malloc()
    USE par_oce, Only: jpi, jpj, jpk, jpnij
    IMPLICIT none
    INTEGER :: dom_oce_malloc
    INTEGER :: ierr, ierr_tmp
    
    ierr = 0
...
...
    ALLOCATE(tmask(jpi,jpj,jpk), umask(jpi,jpj,jpk),    & 
             vmask(jpi,jpj,jpk), fmask(jpi,jpj,jpk), Stat=ierr_tmp)
    ierr = ierr + ierr_tmp

#if defined key_noslip_accurate
    ALLOCATE(nicoa(2*(jpi+jpj),4,jpk), njcoa(2*(jpi+jpj),4,jpk), &
             Stat=ierr_tmp)
    ierr = ierr + ierr_tmp
#endif

    IF( ierr /= 0 )THEN
       ! Report which module's ..._malloc() routine failed
       WRITE(*,*) 'ERROR: allocation failed in dom_oce_malloc()'
    END IF

    ! Error status passed back up
    dom_oce_malloc = ierr

  END FUNCTION dom_oce_malloc
#endif

Finally (I think), there's the issue of work-space or local arrays. These are arrays local to subroutines, declared with their dimensions in terms of jpi and jpj. Since jpi and jpj are no longer parameters, these arrays become 'automatic' -- that is, they are allocated on the stack when the program enters the subroutine and freed again when it leaves. Running out of stack memory causes the program to abort abruptly, so it is more controlled to ALLOCATE big workspace arrays on the heap. (Although the amount of available stack can be set to 'unlimited' with the shell's ulimit command, the limit this actually gives is OS/set-up dependent.) I suggest therefore that we edit the code so that any large (3D or higher?) workspace arrays are made explicitly ALLOCATABLE and allocated the first time the subroutine is called. If we declare them with the SAVE attribute then the allocated memory remains available to us in subsequent calls, e.g.:

   SUBROUTINE dia_ptr( kt )
      !!----------------------------------------------------------------------
      !!                  ***  ROUTINE dia_ptr  ***
      !!----------------------------------------------------------------------
      IMPLICIT none
      INTEGER, INTENT(in) ::   kt   ! ocean time step index
      !!
      INTEGER  ::   jk, jj, ji   ! dummy loop
      REAL(wp) ::   zsverdrup    ! conversion from m3/s to Sverdrup
      REAL(wp) ::   zpwatt       ! conversion from W    to PW
      REAL(wp) ::   zggram       ! conversion from g    to Pg
#if defined key_mpp_dyndist
      REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: vt, vs   ! 3D workspace
#else
      REAL(wp),              DIMENSION(jpi,jpj,jpk) :: vt, vs   ! 3D workspace
#endif
      !!----------------------------------------------------------------------

      IF( kt == nit000 .OR. MOD( kt, nf_ptr ) == 0 )   THEN

#if defined key_mpp_dyndist
         IF(.not. ALLOCATED(vt))THEN
            ALLOCATE(vt(jpi,jpj,jpk), vs(jpi,jpj,jpk), Stat=ji)
            IF(ji .ne. 0)THEN
               ! Need an interface to MPI_Abort() here really
               STOP 'ERROR: Allocation of vt/vs failed in dia_ptr'
            END IF
         END IF
#endif
...
...

If an allocate() is done in this way then handling any failure is difficult since there's no guarantee that the allocate() call will have failed for all MPI processes. Therefore, I think the best bet is to print out an error message and then do an MPI_Abort on MPI_COMM_WORLD.
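
A minimal sketch of such a handler (the routine name and its interface are illustrative assumptions, not existing NEMO code):

   SUBROUTINE workspace_alloc_abort( kerr, cd_routine )
      !! Sketch only: report a local allocation failure and abort every
      !! process, since other PEs may have allocated successfully and would
      !! otherwise hang waiting for this one.
      USE mpi
      IMPLICIT none
      INTEGER,          INTENT(in) :: kerr        ! Stat= value from ALLOCATE
      CHARACTER(len=*), INTENT(in) :: cd_routine  ! name of the calling routine
      INTEGER :: icode
      IF( kerr /= 0 ) THEN
         WRITE(*,*) 'ERROR: allocation of workspace failed in ', cd_routine
         CALL MPI_Abort( MPI_COMM_WORLD, kerr, icode )
      END IF
   END SUBROUTINE workspace_alloc_abort

A call such as CALL workspace_alloc_abort( ji, 'dia_ptr' ) could then replace the STOP in dia_ptr() above.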
