[[PageOutline]]

Last edited [[Timestamp]]

'''Author''' : rblod (rachid) [[BR]]
'''Ticket''' : #848 [[BR]]
'''Branch''' : [https://forge.ipsl.jussieu.fr/nemo/browser/branches/2011/dev_r2802_LOCEAN10_agrif_lim dev_r2802_LOCEAN10_agrif_lim]

----
=== Description ===

 * Target: use AGRIF with LIM_2 (EVP and VP rheology). The implementation for LIM3 should be similar to the LIM_2 EVP one but is not done on this branch, since LIM3 reliability is still an issue for me.
 * Existing work:
   * the pioneer was of course F. Dupont some years ago, with one-way nesting, LIM2 VP and nn_fsbc=1
   * work at LOCEAN with LIM2 VP, two-way nesting, nn_fsbc /= 1 (R. Benshila, C. Herbaud, D. Iovino)
   * LOCEAN/LPO, LIM2 EVP, two-way nesting, nn_fsbc /= 1 (R. Benshila, C. Talandier)
 * Preliminary issues (actually several):
   * the main one is the ice-model call frequency, which may not be equal to 1; this prevents AGRIF from handling the temporal interpolation correctly
   * defining the state variables to be interpolated for the sea-ice model: obvious for the sea-ice velocities, much less so for the ice properties
   * dealing with the B grid for VP
   * dealing with an iterative solver for the VP rheology
   * dealing with the sub-time stepping of the EVP rheology: something has to be put at the boundaries at each sub-time step
 * Solving the above issues:
   * except for EVP, no temporal interpolation is performed for the sea-ice boundary conditions. Between two calls to the mother ice model we put the ''next'' value computed by the mother grid at the boundary, even if the child nest calls the ice model several times in between. To do so, we systematically use the optional argument "calledweight=1." in the calls to agrif_bc.
   * we therefore interpolate ''u_ice'' and ''v_ice''. For the sea-ice properties we choose the fields which are advected (limtrp_2), namely ''hsnm'', ''hicm'', ''frld'', ''tbif'' and ''qstoif''. The advection scheme being a high-order one, the correct solution seems to be to interpolate the content of these variables together with their moments (42 variables); this is implemented in agrif_sadv_lim2. That said, the difference is not striking when interpolating only the properties at the end of limtrp_2, after advection and diffusion (done in agrif_adv_lim2). In all of this we assume that having correct values after the horizontal processes is sufficient, since the following processes (thermodynamics) are purely vertical and 1D.
   * using the B grid is only a matter of defining the u_ice and v_ice locations correctly in agrif_user
   * in limrhg_2 we interpolate once and for all the values to be put at the boundaries during the matrix inversion (agrif_dyn_lim2_load) and we re-impose them at each iteration (agrif_dyn_lim2_copy); a minimal sketch of this load/copy pattern is given at the end of this description
   * in limrhg we tried something more clever: we store the ''before'' and ''next'' parent values (work usually done by AGRIF when nn_fsbc=1) and we perform a temporal interpolation during the time-splitting (a sketch is also given at the end of this description). Storing "by hand" the old and new values and performing a temporal interpolation could actually also be done for the ice properties.

[[Image(agri_lim_timestep.png)]]

 * Update phase: the same variables are used to update the mother grid. Currently we need nn_fsbc = nbcline_update for this to work properly; otherwise I suspect some inconsistency in the stress computation, but I did not solve it properly.
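The two boundary-handling strategies above can be illustrated with minimal, self-contained sketches. The first one mimics the load/copy pattern used around the VP matrix inversion in limrhg_2. It is not the actual agrif_lim2_interp code: the module, array and routine names (vp_bdy_sketch, u_bdy_store, dyn_lim_load, dyn_lim_copy) are made up for illustration, and in the real code the stored values come from an AGRIF interpolation of the parent fields rather than from a dummy argument.

{{{
#!fortran
MODULE vp_bdy_sketch
   !! Illustrative only, not the NEMO/AGRIF code: boundary values for the
   !! VP solver are stored once ("load") and re-imposed at every iteration
   !! of the relaxation ("copy"), so no interpolation is repeated inside
   !! the solver loop.
   IMPLICIT NONE
   INTEGER, PARAMETER :: jpi = 10, jpj = 10      ! toy domain size
   REAL(8), DIMENSION(jpi,jpj) :: u_bdy_store    ! values taken from the parent grid
CONTAINS

   SUBROUTINE dyn_lim_load( pu_parent )
      !! store the parent values once, before the iterative inversion
      REAL(8), DIMENSION(jpi,jpj), INTENT(in) :: pu_parent
      u_bdy_store(:,:) = pu_parent(:,:)
   END SUBROUTINE dyn_lim_load

   SUBROUTINE dyn_lim_copy( pu_ice )
      !! re-impose the stored values on the boundary rows and columns,
      !! to be called at each iteration of the solver
      REAL(8), DIMENSION(jpi,jpj), INTENT(inout) :: pu_ice
      pu_ice( 1 , : ) = u_bdy_store( 1 , : )
      pu_ice(jpi, : ) = u_bdy_store(jpi, : )
      pu_ice( : , 1 ) = u_bdy_store( : , 1 )
      pu_ice( : ,jpj) = u_bdy_store( : ,jpj)
   END SUBROUTINE dyn_lim_copy

END MODULE vp_bdy_sketch
}}}

In this picture the equivalent of agrif_dyn_lim2_load is called once before the relaxation loop and the equivalent of agrif_dyn_lim2_copy inside it, so the boundary values set by the parent are never degraded by the iterations.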
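The second sketch illustrates the hand-made temporal interpolation used during the EVP time-splitting in limrhg. Again, this is only an illustration under simplifying assumptions: the names (zu_b, zu_n, nn_nevp) are hypothetical, a single scalar stands in for the boundary arrays, and in the real code the weight presumably also depends on where the current child step sits between two parent ice-model calls (the childfreq counter added in sbcice_lim_2.F90).

{{{
#!fortran
PROGRAM evp_bdy_weights
   !! Illustrative only: linear temporal interpolation of a parent
   !! boundary value between its "before" and "next" ice time levels,
   !! applied at each EVP sub-time step of the child grid.
   IMPLICIT NONE
   INTEGER, PARAMETER :: nn_nevp = 120   ! number of EVP sub-steps (illustrative value)
   REAL(8) :: zu_b, zu_n                 ! stored parent values (before / next)
   REAL(8) :: zu_bdy                     ! value imposed at the child boundary
   REAL(8) :: zbeta                      ! temporal weight, growing from 0 to 1
   INTEGER :: jevp

   zu_b = 0.10d0        ! would be saved when the parent ice model was last called
   zu_n = 0.25d0        ! would be saved for the next parent ice step

   DO jevp = 1, nn_nevp
      zbeta  = REAL(jevp,8) / REAL(nn_nevp,8)
      zu_bdy = (1.d0 - zbeta) * zu_b + zbeta * zu_n
      ! zu_bdy would then be copied onto the boundary rows/columns of u_ice
      WRITE(*,'(a,i4,a,f8.5)') ' sub-step', jevp, '   boundary value', zu_bdy
   END DO
END PROGRAM evp_bdy_weights
}}}

As noted above, the same store-and-interpolate trick could in principle be applied to the ice properties as well.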
 * Modified modules:
   * OPA_SRC/nemogcm.F90: add a call to Agrif_Declare_Var_lim2 for initialisation on the mother grid
   * OPA_SRC/SBC/sbcice_lim_2.F90: initialisation for the child grid and update; also add childfreq, the child ice sub-time-step counter used for the hand-made temporal interpolation
   * LIM_SRC_2/limhdf_2.F90: trivial modifications for wrk_nemo
   * LIM_SRC_2/limadv_2.F90: trivial modifications for wrk_nemo
   * LIM_SRC_2/iceini_2.F90: correction of bug #849
   * LIM_SRC_2/limtrp_2.F90: add calls to agrif_sadv_lim2 to interpolate the moments and to agrif_adv_lim2 to interpolate the variables (note that the second call alone could be enough). I also put the 0-order moments as global variables to be able to interpolate them.
   * LIM_SRC_2/ice_2.F90: put the 0-order moments as module variables and allocate them
   * LIM_SRC_2/limrhg_2.F90: add calls to agrif_dyn_lim2_load and agrif_dyn_lim2_copy
   * LIM_SRC_2/limrhg.F90: add a call to agrif_dyn_lim
   * NST_SRC/agrif_opa_update.F90: move nbclineupdate to agrif_oce
   * NST_SRC/agrif_user.F90: add declaration (agrif_declare_var_lim2) and initialisation (agrif_InitValues_cont_lim2)
 * New routines:
   * NST_SRC/agrif_lim2_interp.F90
   * NST_SRC/agrif_ice.F90

----
=== Testing ===

Testing could consider (where appropriate) other configurations in addition to NVTK.

||NVTK Tested||!'''YES/NO!'''||
||Other model configurations||!'''YES/NO!'''||
||Processor configurations tested||[ Enter processor configs tested here ]||
||If adding new functionality please confirm that the [[BR]]New code doesn't change results when it is switched off [[BR]]and !''works!'' when switched on||!'''YES/NO/NA!'''||

(Answering UNSURE is likely to generate further questions from reviewers.)

'Please add further summary details here'

 * Processor configurations tested
 * etc

----
=== Bit Comparability ===

||Does this change preserve answers in your tested standard configurations (to the last bit)?||!'''YES/NO!'''||
||Does this change bit compare across various processor configurations. (1xM, Nx1 and MxN are recommended)||!'''YES/NO!'''||
||Is this change expected to preserve answers in all possible model configurations?||!'''YES/NO!'''||
||Is this change expected to preserve all diagnostics? [[BR]]!''Preserving answers in model runs does not necessarily imply preserved diagnostics.!''||!'''YES/NO!'''||

If you answered !'''NO!''' to any of the above, please provide further details:

 * Which routine(s) are causing the difference?
 * Why the changes are not protected by a logical switch or new section-version
 * What is needed to achieve regression with the previous model release (e.g. a regression branch, hand-edits etc). If this is not possible, explain why not.
 * What do you expect to see occur in the test harness jobs?
 * Which diagnostics have you altered and why have they changed?

Please add details here........

----
=== System Changes ===

||Does your change alter namelists?||!'''YES/NO!'''||
||Does your change require a change in compiler options?||!'''YES/NO!'''||

If any of these apply, please document the changes required here.......
----
=== Resources ===

!''Please summarize any changes in runtime or memory use caused by this change......!''

----
=== IPR issues ===

||Has the code been wholly (100%) produced by NEMO developers staff working exclusively on NEMO?||!'''YES/NO!'''||

If No:
 * Identify the collaboration agreement details
 * Ensure the code routine header is in accordance with the agreement (Copyright/Redistribution etc).

Add further details here if required..........