2011WP/2011Stream2/DynamicMemory (diff) – NEMO

Changes between Version 4 and Version 5 of 2011WP/2011Stream2/DynamicMemory


Timestamp:
2010-11-17T16:28:31+01:00 (13 years ago)
Author:
frrh

  • 2011WP/2011Stream2/DynamicMemory

}}}
If an allocate() is done in this way then handling any failure is difficult since there's no guarantee that the allocate() call will have failed for all MPI processes. Therefore, I think the best bet is to print out an error message and then do an MPI_Abort on MPI_COMM_WORLD.
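For illustration only, a minimal sketch of the kind of check-and-abort described above (the routine and variable names are invented for this sketch, not existing NEMO code):

{{{
    SUBROUTINE alloc_abort_sketch( jpi, jpj, jpk )   ! hypothetical routine, illustration only
       USE mpi
       IMPLICIT NONE
       INTEGER, INTENT(in) :: jpi, jpj, jpk          ! domain sizes passed in to keep the sketch self-contained
       REAL, ALLOCATABLE   :: zwork(:,:,:)
       INTEGER             :: istat, ierr

       ALLOCATE( zwork(jpi,jpj,jpk), STAT=istat )
       IF( istat /= 0 ) THEN
          ! The allocation may have failed on this process only, so report it
          ! and bring the whole job down rather than letting other processes hang.
          WRITE(*,*) 'ERROR: allocation of zwork failed on this process'
          CALL MPI_ABORT( MPI_COMM_WORLD, 1, ierr )
       ENDIF
    END SUBROUTINE alloc_abort_sketch
}}}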


----

>>Richard H. start

I agree with the sentiment about dropping 'key_mpp_dyndist' in favour of only supporting the dynamic memory code, on the basis that proliferation of cpp keys makes maintenance and development difficult in the long term and implies the need to test model developments in equivalent configurations under both static and dynamic memory.

Re the local array work space: agree that we can't rely on ulimit. With regard to allocating and saving large workspace arrays, would it be viable to allocate space for these in some generic sense at the start of the run, rather than locally within each subroutine or code area? That might give us the opportunity of recycling the same space rather than allocating space specifically for each subroutine or code area. It might, however, imply the need to pass the generic arrays around through argument lists, e.g.:

{{{

    Declare/allocate GEN_WORK_ARRAY1(jpi,jpj,jpk), GEN_WORK_ARRAY2(jpi,jpj,jpk), etc. somewhere during model initialisation

    bla bla bla
    "   "    "
    CALL SOME_ROUTINE_X(GEN_WORK_ARRAY1(1,1,1), GEN_WORK_ARRAY2(1,1,1), etc)
    "   "    "
    CALL SOME_ROUTINE_Y(GEN_WORK_ARRAY1(1,1,1), GEN_WORK_ARRAY2(1,1,1), etc)
    "   "    "
    CALL SOME_ROUTINE_Z(GEN_WORK_ARRAY1(1,1,1), GEN_WORK_ARRAY2(1,1,1), etc)
}}}

Then in the routines we have:
{{{
    SUBROUTINE SOME_ROUTINE_X(local_work_array_X1, local_work_array_X2, etc)
    bla bla bla
    "   "   "
    END SUBROUTINE

    SUBROUTINE SOME_ROUTINE_Y(local_work_array_Y1, local_work_array_Y2, etc)
    bla bla bla
    "   "   "
    END SUBROUTINE

    SUBROUTINE SOME_ROUTINE_Z(local_work_array_Z1, local_work_array_Z2, etc)
    bla bla bla
    "   "   "
    END SUBROUTINE
}}}
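Purely as an illustrative sketch of the above (module and routine names are invented here, not actual NEMO code), the workspace could be allocated once at initialisation and then received by each routine as assumed-shape dummy arguments:

{{{
    MODULE generic_workspace_sketch                  ! hypothetical module, illustration only
       IMPLICIT NONE
       REAL, ALLOCATABLE :: gen_work_array1(:,:,:), gen_work_array2(:,:,:)
    CONTAINS
       SUBROUTINE wrk_init( jpi, jpj, jpk )          ! called once during model initialisation
          INTEGER, INTENT(in) :: jpi, jpj, jpk
          ALLOCATE( gen_work_array1(jpi,jpj,jpk), gen_work_array2(jpi,jpj,jpk) )
       END SUBROUTINE wrk_init

       SUBROUTINE some_routine_x( pwrk1, pwrk2 )     ! workspace arrives via the argument list
          REAL, DIMENSION(:,:,:), INTENT(inout) :: pwrk1, pwrk2
          pwrk1(:,:,:) = 0.0                         ! used purely as scratch space;
          pwrk2(:,:,:) = 0.0                         ! contents are not preserved between calls
       END SUBROUTINE some_routine_x
    END MODULE generic_workspace_sketch
}}}

A call such as CALL some_routine_x( gen_work_array1, gen_work_array2 ) then recycles the same storage for every routine that needs scratch space, at the cost of threading the arrays through each argument list.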
Anyway, just a thought.

Aborting using MPI_COMM_WORLD is particularly pertinent to coupled (OASIS-based) models (otherwise things just tend to dangle).

>>Richard H. end

----