Changeset 12165 for NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs
- Timestamp: 2019-12-11T09:27:27+01:00
- Location: NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs
- Files: 4 deleted, 7 edited
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/AGRIF_DEMO/EXPREF/namelist_ice_cfg
(r10535 → r12165)

  &namdyn_rhg    !   Ice rheology
  !------------------------------------------------------------------------------
+    ln_aEVP = .false.  ! adaptive rheology (Kimmritz et al. 2016 & 2017)
  /
  !------------------------------------------------------------------------------
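The `ln_aEVP` switch added in this changeset is an ordinary Fortran namelist logical. As an illustrative sketch only (NEMO reads its namelists natively in Fortran; `read_namelist_logical` is a hypothetical helper, not NEMO code), such a flag can be extracted from a namelist block like this:

```python
import re

def read_namelist_logical(text, name):
    """Return a Fortran namelist logical such as `ln_aEVP = .false.`,
    ignoring trailing `!` comments (hypothetical helper, not NEMO code)."""
    pattern = re.compile(
        r"^\s*%s\s*=\s*\.(true|false)\." % re.escape(name),
        re.IGNORECASE | re.MULTILINE,
    )
    match = pattern.search(text)
    if match is None:
        raise KeyError(name)
    return match.group(1).lower() == "true"

block = """
&namdyn_rhg        !   Ice rheology
   ln_aEVP = .false.  ! adaptive rheology (Kimmritz et al. 2016 & 2017)
/
"""
print(read_namelist_logical(block, "ln_aEVP"))  # False
```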
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/AGRIF_DEMO/README.rst
(r10460 → r12165)

  Embedded zooms
  **************

+.. todo::
+
+
+
  .. contents::
…
  ========

-AGRIF (Adaptive Grid Refinement In Fortran) is a library that allows the seamless space and time refinement over
-rectangular regions in NEMO.
+AGRIF (Adaptive Grid Refinement In Fortran) is a library that
+allows the seamless space and time refinement over rectangular regions in NEMO.
  Refinement factors can be odd or even (usually lower than 5 to maintain stability).
-Interaction between grid is "two-ways" in the sense that the parent grid feeds the child grid open boundaries and
-the child grid provides volume averages of prognostic variables once a given number of time step is completed.
+Interaction between grid is "two-ways" in the sense that
+the parent grid feeds the child grid open boundaries and
+the child grid provides volume averages of prognostic variables once
+a given number of time step is completed.
  These pages provide guidelines how to use AGRIF in NEMO.
-For a more technical description of the library itself, please refer to http://agrif.imag.fr.
+For a more technical description of the library itself, please refer to AGRIF_.

  Compilation
  ===========

  Activating AGRIF requires to append the cpp key ``key_agrif`` at compilation time:

  .. code-block:: sh

-   ./makenemo add_key 'key_agrif'
+   ./makenemo [...] add_key 'key_agrif'

-Although this is transparent to users, the way the code is processed during compilation is different from
-the standard case:
-a preprocessing stage (the so called "conv" program) translates the actual code so that
+Although this is transparent to users,
+the way the code is processed during compilation is different from the standard case:
+a preprocessing stage (the so called ``conv`` program) translates the actual code so that
  saved arrays may be switched in memory space from one domain to an other.
…
  ================================

-An additional text file ``AGRIF_FixedGrids.in`` is required at run time.
+An additional text file :file:`AGRIF_FixedGrids.in` is required at run time.
  This is where the grid hierarchy is defined.
-An example of such a file, here taken from the ``ICEDYN`` test case, is given below ::
+An example of such a file, here taken from the ``ICEDYN`` test case, is given below

-   1
-   34 63 34 63 3 3 3
-   0
+.. literalinclude:: ../../../tests/ICE_AGRIF/EXPREF/AGRIF_FixedGrids.in

  The first line indicates the number of zooms (1).
  The second line contains the starting and ending indices in both directions on the root grid
-( imin=34 imax=63 jmin=34 jmax=63) followed by the space and time refinement factors (3 3 3).
+(``imin=34 imax=63 jmin=34 jmax=63``) followed by the space and time refinement factors (3 3 3).
  The last line is the number of child grid nested in the refined region (0).
  A more complex example with telescoping grids can be found below and
-in the ``AGRIF_DEMO`` reference configuration directory.
+in the :file:`AGRIF_DEMO` reference configuration directory.

-[Add some plots here with grid staggering and positioning ?]
+.. todo::
+
+   Add some plots here with grid staggering and positioning?

-When creating the nested domain, one must keep in mind that the child domain is shifted toward north-east and
-depends on the number of ghost cells as illustrated by the (attempted) drawing below for nbghostcells=1 and
-nbghostcells=3.
-The grid refinement is 3 and nxfin is the number of child grid points in i-direction.
+When creating the nested domain, one must keep in mind that
+the child domain is shifted toward north-east and
+depends on the number of ghost cells as illustrated by
+the *attempted* drawing below for ``nbghostcells=1`` and ``nbghostcells=3``.
+The grid refinement is 3 and ``nxfin`` is the number of child grid points in i-direction.

  .. image:: _static/agrif_grid_position.jpg
…
  boundary data exchange and update being only performed between root and child grids.
  Use of east-west periodic or north-fold boundary conditions is not allowed in child grids either.
  Defining for instance a circumpolar zoom in a global model is therefore not possible.

  Preprocessing
  =============

-Knowing the refinement factors and area, a ``NESTING`` pre-processing tool may help to create needed input files
+Knowing the refinement factors and area,
+a ``NESTING`` pre-processing tool may help to create needed input files
  (mesh file, restart, climatological and forcing files).
  The key is to ensure volume matching near the child grid interface,
-a step done by invoking the ``Agrif_create_bathy.exe`` program.
-You may use the namelists provided in the ``NESTING`` directory as a guide.
+a step done by invoking the :file:`Agrif_create_bathy.exe` program.
+You may use the namelists provided in the :file:`NESTING` directory as a guide.
  These correspond to the namelists used to create ``AGRIF_DEMO`` inputs.
…
  Each child grid expects to read its own namelist so that different numerical choices can be made
-(these should be stored in the form ``1_namelist_cfg``, ``2_namelist_cfg``, etc... according to their rank in
-the grid hierarchy).
+(these should be stored in the form :file:`1_namelist_cfg`, :file:`2_namelist_cfg`, etc...
+according to their rank in the grid hierarchy).
  Consistent time steps and number of steps with the chosen time refinement have to be provided.
  Specific to AGRIF is the following block:

-.. code-block:: fortran
-
-   !-----------------------------------------------------------------------
-   &namagrif      !  AGRIF zoom                                            ("key_agrif")
-   !-----------------------------------------------------------------------
-      ln_spc_dyn    = .true.  !  use 0 as special value for dynamics
-      rn_sponge_tra = 2880.   !  coefficient for tracer sponge layer [m2/s]
-      rn_sponge_dyn = 2880.   !  coefficient for dynamics sponge layer [m2/s]
-      ln_chk_bathy  = .false. !  =T check the parent bathymetry
-   /
+.. literalinclude:: ../../namelists/namagrif
+   :language: fortran

  where sponge layer coefficients have to be chosen according to the child grid mesh size.
  The sponge area is hard coded in NEMO and applies on the following grid points:
-2 x refinement factor (from i=1+nbghostcells+1 to i=1+nbghostcells+sponge_area)
+2 x refinement factor (from ``i=1+nbghostcells+1`` to ``i=1+nbghostcells+sponge_area``)

-References
-==========
+.. rubric:: References

  .. bibliography:: zooms.bib
+   :all:
+   :style: unsrt
+   :labelprefix: A
+   :keyprefix: a-
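The `AGRIF_FixedGrids.in` layout described above (number of zooms; then per zoom the root-grid indices, space and time refinement factors; then the number of nested child grids) can be sketched as a small parser. This is illustrative only and handles just the single-zoom ``ICEDYN`` example; real telescoping hierarchies repeat the pattern recursively, and `parse_fixed_grids` is a hypothetical helper, not part of NEMO or AGRIF:

```python
def parse_fixed_grids(text):
    """Parse a single-zoom AGRIF_FixedGrids.in (hypothetical helper):
    line 1: number of zooms,
    line 2: imin imax jmin jmax + space (x, y) and time refinement factors,
    line 3: number of child grids nested in the refined region."""
    lines = [line.split() for line in text.strip().splitlines()]
    n_zooms = int(lines[0][0])
    imin, imax, jmin, jmax, rx, ry, rt = (int(v) for v in lines[1])
    n_children = int(lines[2][0])
    return {"n_zooms": n_zooms,
            "imin": imin, "imax": imax, "jmin": jmin, "jmax": jmax,
            "refine_space": (rx, ry), "refine_time": rt,
            "n_children": n_children}

example = """\
1
34 63 34 63 3 3 3
0
"""
grid = parse_fixed_grids(example)
print(grid["refine_space"], grid["n_children"])  # (3, 3) 0
```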
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/ORCA2_ICE_PISCES/EXPREF/namelist_ice_cfg
(r10535 → r12165)

  &namdyn_rhg    !   Ice rheology
  !------------------------------------------------------------------------------
+    ln_aEVP = .false.  ! adaptive rheology (Kimmritz et al. 2016 & 2017)
  /
  !------------------------------------------------------------------------------
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/README.rst
(r10694 → r12165)

-************************
-Reference configurations
-************************
+********************************
+Run the Reference configurations
+********************************
+
+.. todo::
+
+   Lack of illustrations for ref. cfgs, and more generally in the guide.

  NEMO is distributed with a set of reference configurations allowing both
…
  the developer to test/validate his NEMO developments (using SETTE package).

+.. contents::
+   :local:
+   :depth: 1
+
  .. attention::
…
  ===========================================================

-A user who wants to compile the ORCA2_ICE_PISCES_ reference configuration using ``makenemo``
-should use the following, by selecting among available architecture file or providing a user defined one:
+To compile the ORCA2_ICE_PISCES_ reference configuration using :file:`makenemo`,
+one should use the following, by selecting among available architecture file or
+providing a user defined one:

  .. code-block:: console

-   $ ./makenemo -r 'ORCA2_ICE_PISCES' -m 'my-fortran.fcm' -j '4'
+   $ ./makenemo -r 'ORCA2_ICE_PISCES' -m 'my_arch' -j '4'

  A new ``EXP00`` folder will be created within the selected reference configurations,
-namely ``./cfgs/ORCA2_ICE_PISCES/EXP00``,
-where it will be necessary to uncompress the Input & Forcing Files listed in the above table.
+namely ``./cfgs/ORCA2_ICE_PISCES/EXP00``.
+It will be necessary to uncompress the archives listed in the above table for
+the given reference configuration that includes input & forcing files.

  Then it will be possible to launch the execution of the model through a runscript
  (opportunely adapted to the user system).

  List of Configurations
  ======================

-All forcing files listed below in the table are available from |NEMO archives URL|_
-
-.. |NEMO archives URL| image:: https://www.zenodo.org/badge/DOI/10.5281/zenodo.1472245.svg
-.. _NEMO archives URL: https://doi.org/10.5281/zenodo.1472245
-
-====================== ===== ===== ===== ======== ======= ================================================
-Configuration          Component(s)                        Input & Forcing File(s)
----------------------- ---------------------------------- ------------------------------------------------
-Name                   OPA   SI3   TOP   PISCES   AGRIF
-====================== ===== ===== ===== ======== ======= ================================================
-AGRIF_DEMO_            X     X                    X       AGRIF_DEMO_v4.0.tar, ORCA2_ICE_v4.0.tar
-AMM12_                 X                                  AMM12_v4.0.tar
-C1D_PAPA_              X                                  INPUTS_C1D_PAPA_v4.0.tar
-GYRE_BFM_              X           X                      *none*
-GYRE_PISCES_           X           X     X                *none*
-ORCA2_ICE_PISCES_      X     X     X     X                ORCA2_ICE_v4.0.tar, INPUTS_PISCES_v4.0.tar
-ORCA2_OFF_PISCES_                  X     X                ORCA2_OFF_v4.0.tar, INPUTS_PISCES_v4.0.tar
-ORCA2_OFF_TRC_                     X                      ORCA2_OFF_v4.0.tar
-ORCA2_SAS_ICE_               X                            ORCA2_ICE_v4.0.tar, INPUTS_SAS_v4.0.tar
-SPITZ12_               X     X                            SPITZ12_v4.0.tar
-====================== ===== ===== ===== ======== ======= ================================================
+All forcing files listed below in the table are available from |DOI data|_
+
+=================== === === === === === ==================================
+Configuration        Component(s)       Archives (input & forcing files)
+------------------- ------------------- ----------------------------------
+Name                 O   S   T   P   A
+=================== === === === === === ==================================
+AGRIF_DEMO_          X   X           X  AGRIF_DEMO_v4.0.tar,
+                                        ORCA2_ICE_v4.0.tar
+AMM12_               X                  AMM12_v4.0.tar
+C1D_PAPA_            X                  INPUTS_C1D_PAPA_v4.0.tar
+GYRE_BFM_            X       X          *none*
+GYRE_PISCES_         X       X   X      *none*
+ORCA2_ICE_PISCES_    X   X   X   X      ORCA2_ICE_v4.0.tar,
+                                        INPUTS_PISCES_v4.0.tar
+ORCA2_OFF_PISCES_            X   X      ORCA2_OFF_v4.0.tar,
+                                        INPUTS_PISCES_v4.0.tar
+ORCA2_OFF_TRC_               X          ORCA2_OFF_v4.0.tar
+ORCA2_SAS_ICE_           X              ORCA2_ICE_v4.0.tar,
+                                        INPUTS_SAS_v4.0.tar
+SPITZ12_             X   X              SPITZ12_v4.0.tar
+=================== === === === === === ==================================
+
+.. admonition:: Legend for component combination
+
+   O for OCE, S for SI\ :sup:`3`, T for TOP, P for PISCES and A for AGRIF

  AGRIF_DEMO
…
  particular interest to test sea ice coupling.

+.. image:: _static/AGRIF_DEMO_no_cap.jpg
+   :scale: 66%
+   :align: center
+
  The 1:1 grid can be used alone as a benchmark to check that
  the model solution is not corrupted by grid exchanges.
  Note that since grids interact only at the baroclinic time level,
  numerically exact results can not be achieved in the 1:1 case.
-Perfect reproducibility is obtained only by switching to a fully explicit setup instead of a split explicit free surface scheme.
+Perfect reproducibility is obtained only by switching to a fully explicit setup instead of
+a split explicit free surface scheme.

  AMM12
…
  a regular horizontal grid of ~12 km of resolution (see :cite:`ODEA2012`).

+.. image:: _static/AMM_domain.png
+   :align: center
+
  This configuration allows to tests several features of NEMO specifically addressed to the shelf seas.
  In particular, ``AMM12`` accounts for vertical s-coordinates system, GLS turbulence scheme,
  tidal lateral boundary conditions using a flather scheme (see more in ``BDY``).
…
  --------

-``C1D_PAPA`` is a 1D configuration for the `PAPA station <http://www.pmel.noaa.gov/OCS/Papa/index-Papa.shtml>`_ located in the northern-eastern Pacific Ocean at 50.1°N, 144.9°W.
-See `Reffray et al. (2015) <http://www.geosci-model-dev.net/8/69/2015>`_ for the description of its physical and numerical turbulent-mixing behaviour.
-
-The water column setup, called NEMO1D, is activated with the inclusion of the CPP key ``key_c1d`` and
-has a horizontal domain of 3x3 grid points.
-
-This reference configuration uses 75 vertical levels grid (1m at the surface), GLS turbulence scheme with K-epsilon closure and the NCAR bulk formulae.
+.. figure:: _static/Papa2015.jpg
+   :height: 225px
+   :align: left
+
+``C1D_PAPA`` is a 1D configuration for the `PAPA station`_ located in
+the northern-eastern Pacific Ocean at 50.1°N, 144.9°W.
+See :gmd:`Reffray et al. (2015) <8/69/2015>` for the description of
+its physical and numerical turbulent-mixing behaviour.
+
+| The water column setup, called NEMO1D, is activated with
+  the inclusion of the CPP key ``key_c1d`` and
+  has a horizontal domain of 3x3 grid points.
+| This reference configuration uses 75 vertical levels grid (1m at the surface),
+  GLS turbulence scheme with K-epsilon closure and the NCAR bulk formulae.
+
  Data provided with ``INPUTS_C1D_PAPA_v4.0.tar`` file account for:

-- ``forcing_PAPASTATION_1h_y201[0-1].nc`` : ECMWF operational analysis atmospheric forcing rescaled to 1h (with long and short waves flux correction) for years 2010 and 2011
-- ``init_PAPASTATION_m06d15.nc`` : Initial Conditions from observed data and Levitus 2009 climatology
-- ``chlorophyll_PAPASTATION.nc`` : surface chlorophyll file from Seawifs data
+- :file:`forcing_PAPASTATION_1h_y201[0-1].nc`:
+  ECMWF operational analysis atmospheric forcing rescaled to 1h
+  (with long and short waves flux correction) for years 2010 and 2011
+- :file:`init_PAPASTATION_m06d15.nc`: Initial Conditions from
+  observed data and Levitus 2009 climatology
+- :file:`chlorophyll_PAPASTATION.nc`: surface chlorophyll file from Seawifs data

  GYRE_BFM
  --------

-``GYRE_BFM`` shares the same physical setup of GYRE_PISCES_, but NEMO is coupled with the `BFM <http://www.bfm-community.eu/>`_ biogeochemical model as described in ``./cfgs/GYRE_BFM/README``.
+``GYRE_BFM`` shares the same physical setup of GYRE_PISCES_,
+but NEMO is coupled with the `BFM`_ biogeochemical model as described in ``./cfgs/GYRE_BFM/README``.

  GYRE_PISCES
…
  in the Beta-plane approximation with a regular 1° horizontal resolution and 31 vertical levels,
  with PISCES BGC model :cite:`gmd-8-2465-2015`.
  Analytical forcing for heat, freshwater and wind-stress fields are applied.

-This configuration acts also as demonstrator of the **user defined setup** (``ln_read_cfg = .false.``) and
-grid setting are handled through the ``&namusr_def`` controls in ``namelist_cfg``:
+This configuration acts also as demonstrator of the **user defined setup**
+(``ln_read_cfg = .false.``) and grid setting are handled through
+the ``&namusr_def`` controls in :file:`namelist_cfg`:

  .. literalinclude:: ../../../cfgs/GYRE_PISCES/EXPREF/namelist_cfg
     :language: fortran
-   :lines: 34-42
+   :lines: 35-41

  Note that, the default grid size is 30x20 grid points (with ``nn_GYRE = 1``) and
  vertical levels are set by ``jpkglo``.
-The specific code changes can be inspected in ``./src/OCE/USR``.
-
-**Running GYRE as a benchmark** :
-this simple configuration can be used as a benchmark since it is easy to increase resolution,
-with the drawback of getting results that have a very limited physical meaning.
-
-GYRE grid resolution can be increased at runtime by setting a different value of ``nn_GYRE`` (integer multiplier scaling factor), as described in the following table:
-
-=========== ========= ========== ============ ===================
-``nn_GYRE`` *jpiglo*  *jpjglo*   ``jpkglo``   **Equivalent to**
-=========== ========= ========== ============ ===================
-1           30        20         31           GYRE 1°
-25          750       500        101          ORCA 1/2°
-50          1500      1000       101          ORCA 1/4°
-150         4500      3000       101          ORCA 1/12°
-200         6000      4000       101          ORCA 1/16°
-=========== ========= ========== ============ ===================
-
-Note that, it is necessary to set ``ln_bench = .true.`` in ``namusr_def`` to
-avoid problems in the physics computation and that
-the model timestep should be adequately rescaled.
-
-For example if ``nn_GYRE = 150``, equivalent to an ORCA 1/12° grid,
-the timestep ``rn_rdt = 1200`` should be set to 1200 seconds
-
-Differently from previous versions of NEMO,
-the code uses by default the time-splitting scheme and
-internally computes the number of sub-steps.
+The specific code changes can be inspected in :file:`./src/OCE/USR`.
+
+.. rubric:: Running GYRE as a benchmark
+
+| This simple configuration can be used as a benchmark since it is easy to increase resolution,
+  with the drawback of getting results that have a very limited physical meaning.
+| GYRE grid resolution can be increased at runtime by setting a different value of ``nn_GYRE``
+  (integer multiplier scaling factor), as described in the following table:
+
+=========== ============ ============ ============ ===============
+``nn_GYRE`` ``jpiglo``   ``jpjglo``   ``jpkglo``   Equivalent to
+=========== ============ ============ ============ ===============
+1           30           20           31           GYRE 1°
+25          750          500          101          ORCA 1/2°
+50          1500         1000         101          ORCA 1/4°
+150         4500         3000         101          ORCA 1/12°
+200         6000         4000         101          ORCA 1/16°
+=========== ============ ============ ============ ===============
+
+| Note that, it is necessary to set ``ln_bench = .true.`` in ``&namusr_def`` to
+  avoid problems in the physics computation and that
+  the model timestep should be adequately rescaled.
+| For example if ``nn_GYRE = 150``, equivalent to an ORCA 1/12° grid,
+  the timestep ``rn_rdt`` should be set to 1200 seconds
+Differently from previous versions of NEMO, the code uses by default the time-splitting scheme and
+internally computes the number of sub-steps.

  ORCA2_ICE_PISCES
…
  the ratio of anisotropy is nearly one everywhere

-this configuration uses the three components
-
-- |OPA|, the ocean dynamical core
-- |SI3|, the thermodynamic-dynamic sea ice model.
-- |TOP|, passive tracer transport module and PISCES BGC model :cite:`gmd-8-2465-2015`
+This configuration uses the three components
+
+- |OCE|, the ocean dynamical core
+- |ICE|, the thermodynamic-dynamic sea ice model.
+- |MBG|, passive tracer transport module and PISCES BGC model :cite:`gmd-8-2465-2015`

  All components share the same grid.
-
  The model is forced with CORE-II normal year atmospheric forcing and
  it uses the NCAR bulk formulae.

-In this ``ORCA2_ICE_PISCES`` configuration,
-AGRIF nesting can be activated that includes a nested grid in the Agulhas region.
-
-To set up this configuration, after extracting NEMO:
-
-Build your AGRIF configuration directory from ``ORCA2_ICE_PISCES``,
-with the ``key_agrif`` CPP key activated:
-
-.. code-block:: console
-
-   $ ./makenemo -r 'ORCA2_ICE_PISCES' -n 'AGRIF' add_key 'key_agrif'
-
-By using the input files and namelists for ``ORCA2_ICE_PISCES``,
-the AGRIF test configuration is ready to run.
-
-**Ocean Physics**
-
-- *horizontal diffusion on momentum*: the eddy viscosity coefficient depends on the geographical position. It is taken as 40000 m^2/s, reduced in the equator regions (2000 m^2/s) excepted near the western boundaries.
-- *isopycnal diffusion on tracers*: the diffusion acts along the isopycnal surfaces (neutral surface) with an eddy diffusivity coefficient of 2000 m^2/s.
-- *Eddy induced velocity parametrization* with a coefficient that depends on the growth rate of baroclinic instabilities (it usually varies from 15 m^2/s to 3000 m^2/s).
-- *lateral boundary conditions* : zero fluxes of heat and salt and no-slip conditions are applied through lateral solid boundaries.
-- *bottom boundary condition* : zero fluxes of heat and salt are applied through the ocean bottom.
-  The Beckmann [19XX] simple bottom boundary layer parameterization is applied along continental slopes.
-  A linear friction is applied on momentum.
-- *convection*: the vertical eddy viscosity and diffusivity coefficients are increased to 1 m^2/s in case of static instability.
-- *time step* is 5760sec (1h36') so that there is 15 time steps in one day.
+.. rubric:: Ocean Physics
+
+:horizontal diffusion on momentum:
+   the eddy viscosity coefficient depends on the geographical position.
+   It is taken as 40000 m\ :sup:`2`/s, reduced in the equator regions (2000 m\ :sup:`2`/s)
+   excepted near the western boundaries.
+:isopycnal diffusion on tracers:
+   the diffusion acts along the isopycnal surfaces (neutral surface) with
+   an eddy diffusivity coefficient of 2000 m\ :sup:`2`/s.
+:Eddy induced velocity parametrization:
+   With a coefficient that depends on the growth rate of baroclinic instabilities
+   (it usually varies from 15 m\ :sup:`2`/s to 3000 m\ :sup:`2`/s).
+:lateral boundary conditions:
+   Zero fluxes of heat and salt and no-slip conditions are applied through lateral solid boundaries.
+:bottom boundary condition:
+   Zero fluxes of heat and salt are applied through the ocean bottom.
+   The Beckmann [19XX] simple bottom boundary layer parameterization is applied along
+   continental slopes.
+   A linear friction is applied on momentum.
+:convection:
+   The vertical eddy viscosity and diffusivity coefficients are increased to 1 m\ :sup:`2`/s in
+   case of static instability.
+:time step: is 5760sec (1h36') so that there is 15 time steps in one day.

  ORCA2_OFF_PISCES
…
  but only PISCES model is an active component of TOP.

-
  ORCA2_OFF_TRC
  -------------

-``ORCA2_OFF_TRC`` is based on the ORCA2 global ocean configuration
-(see ORCA2_ICE_PISCES_ for general description) along with the tracer passive transport module (TOP), but dynamical fields are pre-calculated and read with specific time frequency.
-
-This enables for an offline coupling of TOP components,
-here specifically inorganic carbon compounds (cfc11, cfc12, sf6, c14) and water age module (age).
-See ``namelist_top_cfg`` to inspect the selection of each component with the dedicated logical keys.
+| ``ORCA2_OFF_TRC`` is based on the ORCA2 global ocean configuration
+  (see ORCA2_ICE_PISCES_ for general description) along with
+  the tracer passive transport module (TOP),
+  but dynamical fields are pre-calculated and read with specific time frequency.
+| This enables for an offline coupling of TOP components,
+  here specifically inorganic carbon compounds (CFC11, CFC12, SF6, C14) and water age module (age).
+  See :file:`namelist_top_cfg` to inspect the selection of
+  each component with the dedicated logical keys.

  Pre-calculated dynamical fields are provided to NEMO using
-the namelist ``&namdta_dyn`` in ``namelist_cfg``,
+the namelist ``&namdta_dyn`` in :file:`namelist_cfg`,
  in this case with a 5 days frequency (120 hours):

-.. literalinclude:: ../../../cfgs/GYRE_PISCES/EXPREF/namelist_ref
+.. literalinclude:: ../../namelists/namdta_dyn
     :language: fortran
-   :lines: 935-960

-Input dynamical fields for this configuration (``ORCA2_OFF_v4.0.tar``) comes from
+Input dynamical fields for this configuration (:file:`ORCA2_OFF_v4.0.tar`) comes from
  a 2000 years long climatological simulation of ORCA2_ICE using ERA40 atmospheric forcing.

-Note that, this configuration default uses linear free surface (``ln_linssh = .true.``) assuming that
-model mesh is not varying in time and
-it includes the bottom boundary layer parameterization (``ln_trabbl = .true.``) that
-requires the provision of bbl coefficients through ``sn_ubl`` and ``sn_vbl`` fields.
-
-It is also possible to activate PISCES model (see ``ORCA2_OFF_PISCES``) or
-a user defined set of tracers and source-sink terms with ``ln_my_trc = .true.``
-(and adaptation of ``./src/TOP/MY_TRC`` routines).
+| Note that,
+  this configuration default uses linear free surface (``ln_linssh = .true.``) assuming that
+  model mesh is not varying in time and
+  it includes the bottom boundary layer parameterization (``ln_trabbl = .true.``) that
+  requires the provision of BBL coefficients through ``sn_ubl`` and ``sn_vbl`` fields.
+| It is also possible to activate PISCES model (see ``ORCA2_OFF_PISCES``) or
+  a user defined set of tracers and source-sink terms with ``ln_my_trc = .true.``
+  (and adaptation of ``./src/TOP/MY_TRC`` routines).

  In addition, the offline module (OFF) allows for the provision of further fields:
…
  by including an input datastream similarly to the following:

-.. code-block:: fortran
-
-   sn_rnf = 'dyna_grid_T', 120, 'sorunoff' , .true., .true., 'yearly', '', '', ''
-
-2. **VVL dynamical fields**,
-   in the case input data were produced by a dyamical core using variable volume (``ln_linssh = .false.``)
-   it necessary to provide also diverce and E-P at before timestep by
+   .. code-block:: fortran
+
+      sn_rnf = 'dyna_grid_T', 120, 'sorunoff' , .true., .true., 'yearly', '', '', ''
+
+2. **VVL dynamical fields**, in the case input data were produced by a dyamical core using
+   variable volume (``ln_linssh = .false.``)
+   it is necessary to provide also diverce and E-P at before timestep by
   including input datastreams similarly to the following

-.. code-block:: fortran
-
-   sn_div = 'dyna_grid_T', 120, 'e3t' , .true., .true., 'yearly', '', '', ''
-   sn_empb = 'dyna_grid_T', 120, 'sowaflupb', .true., .true., 'yearly', '', '', ''
+   .. code-block:: fortran
+
+      sn_div  = 'dyna_grid_T', 120, 'e3t'      , .true., .true., 'yearly', '', '', ''
+      sn_empb = 'dyna_grid_T', 120, 'sowaflupb', .true., .true., 'yearly', '', '', ''

  More details can be found by inspecting the offline data manager in
-the routine ``./src/OFF/dtadyn.F90``.
+the routine :file:`./src/OFF/dtadyn.F90`.

  ORCA2_SAS_ICE
  -------------

-ORCA2_SAS_ICE is a demonstrator of the Stand-Alone Surface (SAS) module and
-it relies on ORCA2 global ocean configuration (see ORCA2_ICE_PISCES_ for general description).
-
-The standalone surface module allows surface elements such as sea-ice, iceberg drift, and
-surface fluxes to be run using prescribed model state fields.
-It can profitably be used to compare different bulk formulae or
-adjust the parameters of a given bulk formula.
-
-More informations about SAS can be found in NEMO manual.
+| ORCA2_SAS_ICE is a demonstrator of the Stand-Alone Surface (SAS) module and
+  it relies on ORCA2 global ocean configuration (see ORCA2_ICE_PISCES_ for general description).
+| The standalone surface module allows surface elements such as sea-ice, iceberg drift, and
+  surface fluxes to be run using prescribed model state fields.
+  It can profitably be used to compare different bulk formulae or
+  adjust the parameters of a given bulk formula.
+
+More informations about SAS can be found in :doc:`NEMO manual <cite>`.

  SPITZ12
…
  ``SPITZ12`` is a regional configuration around the Svalbard archipelago
  at 1/12° of horizontal resolution and 75 vertical levels.
-See `Rousset et al. (2015) <https://www.geosci-model-dev.net/8/2991/2015/>`_ for more details.
+See :gmd:`Rousset et al. (2015) <8/2991/2015>` for more details.

  This configuration references to year 2002,
…
  while lateral boundary conditions for dynamical fields have 3 days time frequency.

-References
-==========
-
-.. bibliography:: configurations.bib
+.. rubric:: References
+
+.. bibliography:: cfgs.bib
     :all:
     :style: unsrt
     :labelprefix: C
-
-.. Links and substitutions
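The GYRE benchmark table in the README above scales linearly with the integer factor: every row satisfies ``jpiglo = 30 * nn_GYRE`` and ``jpjglo = 20 * nn_GYRE``. A minimal sketch of that relation (`gyre_grid` is an illustrative helper, not a NEMO routine; the equivalence labels are taken from the table):

```python
def gyre_grid(nn_gyre):
    """Global grid size implied by the GYRE scaling table:
    the 30x20 base grid is multiplied by the integer factor nn_GYRE."""
    return 30 * nn_gyre, 20 * nn_gyre

# Reproduce the rows of the table (labels as given in the README).
for nn_gyre, label in [(1, "GYRE 1 deg"), (25, "ORCA 1/2 deg"),
                       (50, "ORCA 1/4 deg"), (150, "ORCA 1/12 deg"),
                       (200, "ORCA 1/16 deg")]:
    jpiglo, jpjglo = gyre_grid(nn_gyre)
    print(f"nn_GYRE={nn_gyre:3d}  {jpiglo}x{jpjglo}  ~ {label}")
```

Remember that scaling the grid also requires ``ln_bench = .true.`` and a rescaled timestep (the README quotes ``rn_rdt = 1200`` seconds at ``nn_GYRE = 150``).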
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SHARED/README.rst
r10598 r12165

***********

.. todo::

.. contents::
   :local:

Output of diagnostics in NEMO is usually done using XIOS.
This is an efficient way of writing diagnostics because
the time averaging, file writing and even some simple arithmetic or regridding are carried out in
parallel to the NEMO model run.
This page gives a basic introduction to using XIOS with NEMO.
Much more information is available from the :xios:`XIOS homepage<>` and from the NEMO manual.

Use of XIOS for diagnostics is activated using the pre-compiler key ``key_iomput``.

Extracting and installing XIOS
==============================

1. Install the NetCDF4 library.
   If you want to use single-file output you will need to compile the HDF & NetCDF libraries to
   allow parallel IO.
2. Download the version of XIOS that you wish to use.
   The recommended version is now XIOS 2.5:

   .. code-block:: console

      $ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5

   and follow the instructions in the :xios:`XIOS documentation <wiki/documentation>` to compile it.
   If you find problems at this stage, support can be found by subscribing to
   the :xios:`XIOS mailing list <../mailman/listinfo.cgi/xios-users>` and sending a mail message to it.

XIOS Configuration files
------------------------

XIOS is controlled using XML input files that should be copied to
your model run directory before running the model.
Examples of these files can be found in the reference configurations (:file:`./cfgs`).
The XIOS executable expects to find a file called :file:`iodef.xml` in the model run directory.
In NEMO we have made the decision to use include statements in the :file:`iodef.xml` file to include:

- :file:`field_def_nemo-oce.xml` (for physics),
- :file:`field_def_nemo-ice.xml` (for ice),
- :file:`field_def_nemo-pisces.xml` (for biogeochemistry) and
- :file:`domain_def.xml` from the :file:`./cfgs/SHARED` directory.

Most users will not need to modify :file:`domain_def.xml` or :file:`field_def_nemo-???.xml` unless
they want to add new diagnostics to the NEMO code.
The definition of the output files is organized into separate :file:`file_definition.xml` files which
are included in the :file:`iodef.xml` file.

Modes
=====

Detached Mode
-------------

In detached mode the XIOS executable is executed on separate cores from the NEMO model.
This is the recommended method for using XIOS for realistic model runs.
To use this mode set ``using_server`` to ``true`` at the bottom of the :file:`iodef.xml` file:

.. code-block:: xml

   <variable id="using_server" type="boolean">true</variable>

Make sure there is a copy of (or a link to) your XIOS executable in the working directory and
allocate processors to XIOS in your job submission script.

Attached Mode
-------------

In attached mode XIOS runs on each of the cores used by NEMO.
This method is less efficient than the detached mode but can be more convenient for testing or
with small configurations.
To activate this mode simply set ``using_server`` to ``false`` in the :file:`iodef.xml` file

.. code-block:: xml

   <variable id="using_server" type="boolean">false</variable>

and don't allocate any cores to XIOS.

.. note::

   Due to the different domain decompositions between XIOS and NEMO,
   if the total number of cores is larger than the number of grid points in the ``j`` direction then
   the model run will fail.

Adding new diagnostics
======================

If you want to add a NEMO diagnostic to the NEMO code you will need to do the following:

1. Add any necessary code to calculate your new diagnostic in NEMO.
2. Send the field to XIOS using ``CALL iom_put( 'field_id', variable )`` where
   ``field_id`` is a unique id for your new diagnostic and
   ``variable`` is the Fortran variable containing the data.
   This should be called at every model timestep regardless of how often you want to output the field.
   No time averaging should be done in the model code.
3. If it is computationally expensive to calculate your new diagnostic
   you should also use ``iom_use`` to determine if it is requested in the current model run.
   For example,

   .. code-block:: fortran

      IF( iom_use('field_id') ) THEN
         ! Some expensive computation
         ! ...
         ! ...
         CALL iom_put('field_id', variable)
      ENDIF

4. Add a variable definition to the :file:`field_def_nemo-???.xml` file.
5. Add the variable to the :file:`iodef.xml` or :file:`file_definition.xml` file.
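Steps 4 and 5 can be sketched as follows. This is a minimal illustration, not taken from the shipped XML files: the ``long_name``, ``unit`` and ``grid_ref`` values, the file ``id``/``name`` and the output frequency are all hypothetical placeholders; only ``field_id`` and the general XIOS attribute names follow the conventions of the files above.

.. code-block:: xml

   <!-- field_def_nemo-???.xml (step 4): declare the new diagnostic once.   -->
   <!-- long_name, unit and grid_ref below are hypothetical placeholders.   -->
   <field id="field_id" long_name="my new diagnostic" unit="1" grid_ref="grid_T_2D" />

   <!-- file_definition.xml (step 5): request the field in an output file.  -->
   <!-- The file id, name and output_freq are likewise hypothetical.        -->
   <file id="file_mydiag" name="my_diagnostics" output_freq="1d">
      <field field_ref="field_id" />
   </file>

With either the detached or the attached mode configured as above, XIOS then accumulates and time-averages ``field_id`` to the requested ``output_freq`` outside the model code.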
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SHARED/namelist_ice_ref
r11586 r12165 57 57 ln_landfast_L16 = .false. ! landfast: parameterization from Lemieux 2016 58 58 rn_depfra = 0.125 ! fraction of ocean depth that ice must reach to initiate landfast 59 ! recommended range: [0.1 ; 0.25] - L16=0.125 - home=0.15 60 rn_icebfr = 15. ! ln_landfast_L16: maximum bottom stress per unit volume [N/m3] 61 ! ln_landfast_home: maximum bottom stress per unit area of contact [N/m2] 62 ! recommended range: ?? L16=15 - home=10 59 ! recommended range: [0.1 ; 0.25] 60 rn_icebfr = 15. ! maximum bottom stress per unit volume [N/m3] 63 61 rn_lfrelax = 1.e-5 ! relaxation time scale to reach static friction [s-1] 64 rn_tensile = 0.2 ! ln_landfast_L16: isotropic tensile strength 62 rn_tensile = 0.2 ! isotropic tensile strength [0-0.5??] 65 63 / 66 64 !------------------------------------------------------------------------------ … … 103 101 &namdyn_adv ! Ice advection 104 102 !------------------------------------------------------------------------------ 105 ln_adv_Pra = .false. ! Advection scheme (Prather) 106 ln_adv_UMx = .true. ! Advection scheme (Ultimate-Macho) 103 ln_adv_Pra = .true. ! Advection scheme (Prather) 104 ln_adv_UMx = .false. ! Advection scheme (Ultimate-Macho) 107 105 nn_UMx = 5 ! order of the scheme for UMx (1-5 ; 20=centered 2nd order) 108 106 / … … 234 232 &namdia ! Diagnostics 235 233 !------------------------------------------------------------------------------ 236 ln_icediachk = .false. ! check online the heat, mass & salt budgets at each time step 237 ! ! rate of ice spuriously gained/lost. For ex., rn_icechk=1. <=> 1mm/year, rn_icechk=0.1 <=> 1mm/10years 238 rn_icechk_cel = 1. ! check at any gridcell => stops the code if violated (and writes a file) 239 rn_icechk_glo = 0.1 ! check over the entire ice cover => only prints warnings 234 ln_icediachk = .false. ! check online heat, mass & salt budgets 235 ! ! rate of ice spuriously gained/lost at each time step => rn_icechk=1 <=> 1.e-6 m/hour 236 rn_icechk_cel = 100. !
check at each gridcell (1.e-4 m/h) => stops the code if violated (and writes a file) 237 rn_icechk_glo = 1. ! check over the entire ice cover (1.e-6 m/h) => only prints warnings 240 238 ln_icediahsb = .false. ! output the heat, mass & salt budgets (T) or not (F) 241 239 ln_icectl = .false. ! ice points output for debug (T or F) -
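Collecting the lines retained by this revision, the landfast block reads roughly as follows. This is a sketch only: the enclosing ``&namdyn`` group name is an assumption, since the group header lies outside the quoted diff; the parameter names, values and comments are those shown above.

.. code-block:: fortran

   &namdyn            ! Ice dynamics (group name assumed; header not shown in the diff)
      ln_landfast_L16 = .false.  ! landfast: parameterization from Lemieux 2016
      rn_depfra       = 0.125    ! fraction of ocean depth that ice must reach to initiate landfast
                                 ! recommended range: [0.1 ; 0.25]
      rn_icebfr       = 15.      ! maximum bottom stress per unit volume [N/m3]
      rn_lfrelax      = 1.e-5    ! relaxation time scale to reach static friction [s-1]
      rn_tensile      = 0.2      ! isotropic tensile strength [0-0.5??]
   /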
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SPITZ12/EXPREF/namelist_ice_cfg
r11587 r12165 44 44 &namdyn_rhg ! Ice rheology 45 45 !------------------------------------------------------------------------------ 46 ln_rhg_EVP = .true. ! EVP rheology 47 ln_aEVP = .true. ! adaptive rheology (Kimmritz et al. 2016 & 2017) 48 46 / 49 47 !------------------------------------------------------------------------------ 50 48 &namdyn_adv ! Ice advection 51 49 !------------------------------------------------------------------------------ 50 ln_adv_Pra = .false. ! Advection scheme (Prather) 51 ln_adv_UMx = .true. ! Advection scheme (Ultimate-Macho) 52 nn_UMx = 5 ! order of the scheme for UMx (1-5 ; 20=centered 2nd order) 52 53 / 53 54 !------------------------------------------------------------------------------