
= Discussion of coding approach/style for dynamic memory =

Last edited [[Timestamp]]

[[PageOutline]]

== S2-1 : Andrew P. : opening of the discussion ==

As a basis for the discussion, here's how I've currently coded NEMO (v.3.2) to use dynamic memory. I've used !''...!'' to indicate places where I've missed out chunks of code for clarity/brevity.
     
{{{
...
#endif
}}}

The inclusion of the new code is currently controlled by the cpp key 'key_mpp_dyndist', but it would greatly improve the cleanliness of the code if we made a complete break from the static-memory version and could thus drop this cpp key altogether. My understanding of the conclusion of the meeting in Southampton was that this is what we're going to do.

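For orientation, here is a minimal sketch of the kind of guarded declaration that the key currently implies (the array name is hypothetical, purely for illustration):

{{{
#if defined key_mpp_dyndist
   REAL(wp), ALLOCATABLE, DIMENSION(:,:) ::   zwork   ! dynamic: allocated once jpi/jpj are known
#else
   REAL(wp), DIMENSION(jpi,jpj)          ::   zwork   ! static: jpi/jpj are PARAMETERs from par_oce
#endif
}}}
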
This addition to opa_init() calls two further new routines, opa_partition() and opa_alloc():

{{{
   SUBROUTINE opa_partition
      ...
   END SUBROUTINE opa_partition
}}}

''opa_alloc()'' oversees the allocation of all of the module-level arrays:

{{{
   SUBROUTINE opa_alloc
      ...
   END SUBROUTINE opa_alloc
}}}

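Since the body of opa_alloc is elided above, here is a hedged sketch of the kind of statement it would contain (the dom_oce arrays named here are just examples, and ierr is a local status variable):

{{{
      ! illustrative fragment only, not the actual opa_alloc body
      ALLOCATE( glamt(jpi,jpj) , gphit(jpi,jpj) ,            &
         &      e1t  (jpi,jpj) , e2t  (jpi,jpj) , STAT=ierr )
      IF( ierr /= 0 )   CALL ctl_stop( 'opa_alloc: unable to allocate dom_oce arrays' )
}}}
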
Each of the modules must then be edited and the declarations of arrays whose dimensions depend upon jpi or jpj changed, e.g. for OPA_SRC/DOM/dom_oce.F90:

{{{
MODULE dom_oce
   ...
#endif
}}}
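
As a hedged illustration of the kind of edit meant here (the field name is hypothetical), a static declaration such as the commented-out line becomes an allocatable one:

{{{
!  REAL(wp), DIMENSION(jpi,jpj)          ::   zfield   ! static: sized by PARAMETERs
   REAL(wp), ALLOCATABLE, DIMENSION(:,:) ::   zfield   ! dynamic: allocated in opa_alloc
}}}
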
Finally (I think), there's the issue of work-space or local arrays. These are arrays local to subroutines that are declared with their dimensions in terms of jpi and jpj. Since jpi and jpj are no longer parameters, these arrays are now 'automatic' -- that is, they are allocated on the stack when the program enters the subroutine and freed again when it leaves. Running out of stack memory will cause the program to abort abruptly, and therefore it is more controlled to use explicit allocation for big workspace arrays. (Although the amount of available stack can be set to 'unlimited' with the shell's '''''ulimit''''' command, the limit this actually results in is OS/set-up dependent.) I suggest therefore that we edit the code so that any large (3d or higher?) workspace arrays are made explicitly allocatable and then allocated the first time the subroutine is called. If we declare them with the SAVE attribute then the allocated memory will remain available to us in future calls, ''e.g.'':

{{{
   SUBROUTINE dia_ptr( kt )
      ...
   END SUBROUTINE dia_ptr
}}}

If an allocate() is done in this way then handling any failure is difficult, since there's no guarantee that the allocate() call will have failed for all MPI processes. Therefore, I think the best bet is to print out an error message and then do an MPI_Abort on MPI_COMM_WORLD.

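Putting the two suggestions together, here is a hedged generic sketch of a SAVE'd allocatable workspace allocated on the first call, aborting the whole job if the allocation fails (the routine and array names are hypothetical, not the actual dia_ptr code):

{{{
   SUBROUTINE some_diag( kt )
      USE par_oce          ! wp, jpi, jpj, jpk
      USE mpi              ! MPI_COMM_WORLD, MPI_Abort (assumed available in the mpp case)
      INTEGER, INTENT(in) ::   kt                 ! ocean time-step index
      !                                           ! large workspace: allocatable + SAVE,
      REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:) ::   zwrk   ! allocated once on the first call
      INTEGER ::   ierr, ierr_mpi
      !
      IF( .NOT. ALLOCATED( zwrk ) ) THEN
         ALLOCATE( zwrk(jpi,jpj,jpk), STAT=ierr )
         IF( ierr /= 0 ) THEN                     ! failure may not occur on every process,
            WRITE(*,*) 'some_diag: unable to allocate workspace array'
            CALL MPI_Abort( MPI_COMM_WORLD, 1, ierr_mpi )   ! so bring the whole job down
         ENDIF
      ENDIF
      !
      ! ... use zwrk as local workspace for this and all later calls ...
      !
   END SUBROUTINE some_diag
}}}
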
----

== S2-2 : Richard H. comments ==

I agree with the sentiment about dropping 'key_mpp_dyndist' in favour of only supporting the dynamic memory code (on the basis that proliferation of cpp keys makes maintenance and development difficult in the long term, and implies the need to test model developments using equivalent configurations under both static and dynamic memory).

Re the local array work space: agree that we can't rely on ulimit. With regard to allocating and saving large workspace arrays, would it be viable to allocate space for these in some generic sense at the start of the run rather than locally within each subroutine or code area? That might give us the opportunity of recycling the same space rather than allocating space specifically for each subroutine or code area. It might, however, imply the need to pass the generic array around through argument lists, e.g.:

{{{
    ...
    CALL SOME_ROUTINE_Z(GEN_WORK_ARRAY1(1,1,1,1), GEN_WORK_ARRAY2(1,1,1,2), etc)
}}}

Then in the routines we have:

{{{
    SUBROUTINE SOME_ROUTINE_X(local_work_array_X1, local_work_array_X2, etc)
    ...
    END SUBROUTINE
}}}
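
To make this concrete, here is a hedged sketch of the pattern (the module, routine and variable names are hypothetical, and array sections are used rather than the element-passing shown above):

{{{
MODULE wrk_generic                                    ! illustration only
   USE par_oce                                        ! wp, jpi, jpj, jpk
   IMPLICIT NONE
   REAL(wp), ALLOCATABLE, DIMENSION(:,:,:,:) ::   gen_work_array1   ! allocated once at the start of the run
CONTAINS
   SUBROUTINE gen_work_alloc
      ALLOCATE( gen_work_array1(jpi,jpj,jpk,4) )      ! e.g. 4 recyclable 3D slabs
   END SUBROUTINE gen_work_alloc
   !
   SUBROUTINE some_routine_x( pwrk1, pwrk2 )          ! workspace received through the argument list
      REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) ::   pwrk1, pwrk2
      pwrk1(:,:,:) = 0._wp                            ! contents are scratch, recycled between callers
      pwrk2(:,:,:) = 0._wp
   END SUBROUTINE some_routine_x
END MODULE wrk_generic
}}}

A caller would then do, e.g., CALL some_routine_x( gen_work_array1(:,:,:,1), gen_work_array1(:,:,:,2) ), so the same storage is reused by different routines rather than each one allocating its own.
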
Anyway, just a thought.

Aborting using MPI_COMM_WORLD is particularly pertinent to coupled (OASIS-based) models (otherwise things just tend to dangle).

----

== S2-3 : Gurvan M. comments ==

 * Definitely, we have to make a complete break from the static-memory version. The key_mpp_dyndist key should disappear. We all agreed on that at the developer committee.

 * A namelist (namcfg, where cfg stands for ConFiGuration) will provide the domain size (jpiglo, jpjglo, jpk), the total number of processors and the decomposition in the i and j directions (jpnij, jpni, jpnj), as well as the configuration name and resolution (cp_cfg, jp_cfg), etc... In fact, all the information currently given in par_oce_...h90. (An illustrative sketch of such a namelist is given at the end of this section.)

 * Obviously the full dynamic allocation will result in the suppression of almost all CPP keys. A priori, the only exceptions are the keys related to the CPP "substitutes" (vertical coordinate, 2D and 3D eddy coefficients...). Please, for the first implementation, do not suppress the keys; that should be done as a second step, since it will require significant changes to the namelists and to the OPA documentation... In other words, this will be a version 4.0, not simply a 3.4...

 * Note that since v3.3 there is a step_oce.F90 module corresponding to almost all the modules used in step.F90. A simple USE step_oce in opa_alloc will simplify the routine.

 * Note also that, for modularity reasons, the sea-ice and biogeochemical tracers should not be allocated by the opa_alloc routine. Instead, a lim_alloc routine can be created in LIM_SRC_3 (lim_alloc_2 in LIM_SRC_2), called by sbc_init (itself called in opa_init) if sea-ice is activated, and a trc_alloc (or even pisces_alloc, lobster_alloc, etc. for TOP_SRC...) called by trc_init.

 * issue of work-space or local arrays:

  In my opinion, we can simply return to what was done in earlier versions of OPA (v1.0 to v6.0 !!): declare and allocate once and for all 4 3D work arrays and 4 2D work arrays, then use them as workspace in the subroutines. I say 4, as that was sufficient in those releases. Currently a few more may be required, and with the Griffies operator and the merge of the TRA and TRC routines some 4D local arrays have appeared.

  We can check in the code the maximum number of 4D, 3D and 2D arrays required, in order to decide the exact number. It should not be that large.

  Note that such a technique is already used in some modules. For example in zdftke, I use the fact that the after fields (ua, va, ta, sa) are only used in the momentum and tracer parts, so that during the computation of the physics they can be considered as workspace.

  So what I suggest is a new module, wrk_nemo (_nemo since it will probably be used in OPA, LIM, CICE, TOP...):

{{{
MODULE wrk_nemo
   !!======================================================================
   !!                       ***  MODULE  wrk_nemo  ***
   !! NEMO work space:  define and allocate work space arrays used in all components of NEMO
   !!=====================================================================
   !! History :  4.0  !  2011-01  (xxxx)  Original code
   !!----------------------------------------------------------------------

   !!----------------------------------------------------------------------
   !!   wrk_malloc    : allocate, once and for all, the NEMO work space arrays
   !!----------------------------------------------------------------------
   USE par_oce        ! ocean parameters
   USE in_out_manager ! I/O manager

   IMPLICIT NONE
   PRIVATE

   PUBLIC   wrk_malloc   ! routine called in opa module (opa_init routine)

   REAL(wp), ALLOCATABLE, DIMENSION(:,:)    , PUBLIC ::   wrk_2d_1, wrk_2d_2, ...   !: 2D workspace
   REAL(wp), ALLOCATABLE, DIMENSION(:,:,:)  , PUBLIC ::   wrk_3d_1, wrk_3d_2, ...   !: 3D workspace
   REAL(wp), ALLOCATABLE, DIMENSION(:,:,:,:), PUBLIC ::   wrk_4d_1, wrk_4d_2, ...   !: 4D workspace

   !! * Substitutions
#  include "domzgr_substitute.h90"
#  include "vectopt_loop_substitute.h90"
   !!----------------------------------------------------------------------
   !! NEMO/OPA 4.0 , NEMO Consortium (2010)
   !! $Id$
   !! Software governed by the CeCILL licence     (NEMOGCM/NEMO_CeCILL.txt)
   !!----------------------------------------------------------------------
CONTAINS

   SUBROUTINE wrk_malloc
      !!----------------------------------------------------------------------
      !!                   ***  ROUTINE wrk_malloc  ***
      !!
      !! ** Purpose :   Define in memory, once and for all, the NEMO 2D, 3D and 4D work space arrays
      !!----------------------------------------------------------------------
      INTEGER :: ierror   ! local integer
      !!----------------------------------------------------------------------
      !
      ALLOCATE( wrk_2d_1(jpi,jpj)          , wrk_2d_2(jpi,jpj)          , ...     &
         &      wrk_3d_1(jpi,jpj,jpk)      , wrk_3d_2(jpi,jpj,jpk)      , ...     &
         &      wrk_4d_1(jpi,jpj,jpk,jpts) , wrk_4d_2(jpi,jpj,jpk,jpts) , ...     , STAT=ierror )
      !
      IF( ierror /= 0 )   CALL ctl_stop( 'wrk_malloc: unable to allocate work arrays' )
      !
   END SUBROUTINE wrk_malloc

   !!======================================================================
END MODULE wrk_nemo
}}}

Then, your example of the dia_ptr routine becomes:

{{{
   SUBROUTINE dia_ptr( kt )
      !!----------------------------------------------------------------------
      !!                  ***  ROUTINE dia_ptr  ***
      ...
      !!----------------------------------------------------------------------
      USE wrk_nemo,   vt  =>   wrk_3d_1   ! use wrk_3d_1 as workspace
      USE wrk_nemo,   vs  =>   wrk_3d_2   ! use wrk_3d_2 as workspace
      !
      INTEGER, INTENT(in) ::   kt   ! ocean time step index
      !
      INTEGER  ::   jk, jj, ji   ! dummy loop indices
      REAL(wp) ::   zsverdrup    ! conversion from m3/s to Sverdrup
      REAL(wp) ::   zpwatt       ! conversion from W    to PW
      REAL(wp) ::   zggram       ! conversion from g    to Pg
      !!----------------------------------------------------------------------
      !
      IF( kt == nit000 .OR. MOD( kt, nf_ptr ) == 0 )   THEN

      ...
      ...
}}}

  Note that in this example I had already introduced a 'USE oce, vt => ua' ... since dia_ptr is a diagnostic, so that the after arrays are available as work space.

  The main '''DANGER''' of this approach is that the developer must carefully check that the work space he wants to use is not already being used to store some information. In particular, in the case of a subroutine, sub1, calling another one, sub2: if sub2 needs a work space it is safer to pass that work space as an argument of sub2.

  I think we can impose a rule that, for all subroutines called at a lower level than step or opa_init, the required work space has to be passed as an argument (a sketch is given below).

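  A hedged sketch of that rule (the sub1/sub2 names come from the paragraph above; the dummy-argument name is hypothetical):

{{{
   SUBROUTINE sub1
      USE wrk_nemo, ONLY:   wrk_3d_1               ! workspace grabbed at this (higher) level only
      CALL sub2( wrk_3d_1 )                        ! ... and handed down explicitly
   END SUBROUTINE sub1
   !
   SUBROUTINE sub2( pwrk )
      USE par_oce, ONLY:   wp, jpi, jpj, jpk
      REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) ::   pwrk   ! caller-provided work space
      pwrk(:,:,:) = 0._wp                          ! sub2 never uses wrk_3d_* directly
   END SUBROUTINE sub2
}}}
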
  I don't see any other drawbacks to this technique... I also think that if we carefully check and rewrite some parts of the code, then hopefully only 2D and 3D work space arrays will be necessary.

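As referenced in the namcfg bullet above, an illustrative sketch of such a namelist (the group and variable names come from that bullet; the values are purely examples, not a recommended configuration):

{{{
&namcfg        !   run-time configuration (replaces the information in par_oce_...h90)
   cp_cfg  =  'orca'   !  configuration name
   jp_cfg  =   2       !  configuration resolution
   jpiglo  =   182     !  global domain size in the i direction
   jpjglo  =   149     !  global domain size in the j direction
   jpk     =   31      !  number of vertical levels
   jpni    =   4       !  number of processor domains in the i direction
   jpnj    =   2       !  number of processor domains in the j direction
   jpnij   =   8       !  total number of processor domains (jpni x jpnj)
/
}}}
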
----


== S2-x : XXX comments ==


----