Changeset 12165 for NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs – NEMO

Timestamp:
2019-12-11T09:27:27+01:00
Author:
davestorkey
Message:

2019/dev_ASINTER-01-05_merged: Update to r12072 of trunk.

Location:
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs
Files:
4 deleted
7 edited

Legend:

      (no marker)  unmodified
    - removed
    + added
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/AGRIF_DEMO/EXPREF/namelist_ice_cfg

    r10535 r12165

      &namdyn_rhg     !   Ice rheology
      !------------------------------------------------------------------------------
    +       ln_aEVP       = .false.          !     adaptive rheology (Kimmritz et al. 2016 & 2017)
      /
      !------------------------------------------------------------------------------
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/AGRIF_DEMO/README.rst

    r10460 r12165

      Embedded zooms
      **************
    +
    + .. todo::
    +
    +

      .. contents::

      ========

    - AGRIF (Adaptive Grid Refinement In Fortran) is a library that allows the seamless space and time refinement over
    - rectangular regions in NEMO.
    + AGRIF (Adaptive Grid Refinement In Fortran) is a library that
    + allows the seamless space and time refinement over rectangular regions in NEMO.
      Refinement factors can be odd or even (usually lower than 5 to maintain stability).
    - Interaction between grid is "two-ways" in the sense that the parent grid feeds the child grid open boundaries and
    - the child grid provides volume averages of prognostic variables once a given number of time step is completed.
    + Interaction between grids is "two-way" in the sense that
    + the parent grid feeds the child grid open boundaries and
    + the child grid provides volume averages of prognostic variables once
    + a given number of time steps is completed.
      These pages provide guidelines on how to use AGRIF in NEMO.
    - For a more technical description of the library itself, please refer to http://agrif.imag.fr.
    + For a more technical description of the library itself, please refer to AGRIF_.

      Compilation
      ===========

    - Activating AGRIF requires to append the cpp key ``key_agrif`` at compilation time:
    + Activating AGRIF requires appending the CPP key ``key_agrif`` at compilation time:

      .. code-block:: sh

    -    ./makenemo add_key 'key_agrif'
    +    ./makenemo [...] add_key 'key_agrif'

    - Although this is transparent to users, the way the code is processed during compilation is different from
    - the standard case:
    - a preprocessing stage (the so called "conv" program) translates the actual code so that
    + Although this is transparent to users,
    + the way the code is processed during compilation is different from the standard case:
    + a preprocessing stage (the so-called ``conv`` program) translates the actual code so that
      saved arrays may be switched in memory space from one domain to another.

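    The ``[...]`` above stands for the remaining makenemo arguments. As a sketch (the
    architecture file name 'my_arch' is illustrative; the -r/-m/-j/-n options are the ones used
    in the cfgs/README.rst examples within this same changeset), a complete invocation could be:

       ./makenemo -r 'ORCA2_ICE_PISCES' -n 'MY_AGRIF' -m 'my_arch' -j '4' add_key 'key_agrif'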
     
      ================================

    - An additional text file ``AGRIF_FixedGrids.in`` is required at run time.
    + An additional text file :file:`AGRIF_FixedGrids.in` is required at run time.
      This is where the grid hierarchy is defined.
    - An example of such a file, here taken from the ``ICEDYN`` test case, is given below::
    + An example of such a file, here taken from the ``ICEDYN`` test case, is given below

    -    1
    -    34 63 34 63 3 3 3
    -    0
    + .. literalinclude:: ../../../tests/ICE_AGRIF/EXPREF/AGRIF_FixedGrids.in

      The first line indicates the number of zooms (1).
      The second line contains the starting and ending indices in both directions on the root grid
    - (imin=34 imax=63 jmin=34 jmax=63) followed by the space and time refinement factors (3 3 3).
    + (``imin=34 imax=63 jmin=34 jmax=63``) followed by the space and time refinement factors (3 3 3).
      The last line is the number of child grids nested in the refined region (0).
      A more complex example with telescoping grids can be found below and
    - in the ``AGRIF_DEMO`` reference configuration directory.
    + in the :file:`AGRIF_DEMO` reference configuration directory.

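    As a worked illustration of the format just described, a hypothetical two-level
    (telescoping) hierarchy would read:

       1
       34 63 34 63 3 3 3
       1
       10 30 12 28 3 3 3
       0

    The root grid has one child; the second "1" declares one grid nested inside that child
    (its indices, here illustrative, being expressed in the child's own coordinates), and the
    final "0" terminates the recursion.
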
    - [Add some plots here with grid staggering and positioning ?]
    + .. todo::

    - When creating the nested domain, one must keep in mind that the child domain is shifted toward north-east and
    - depends on the number of ghost cells as illustrated by the (attempted) drawing below for nbghostcells=1 and
    - nbghostcells=3.
    - The grid refinement is 3 and nxfin is the number of child grid points in i-direction.
    +    Add some plots here with grid staggering and positioning?
    +
    + When creating the nested domain, one must keep in mind that
    + the child domain is shifted toward north-east and
    + depends on the number of ghost cells as illustrated by
    + the *attempted* drawing below for ``nbghostcells=1`` and ``nbghostcells=3``.
    + The grid refinement is 3 and ``nxfin`` is the number of child grid points in i-direction.

      .. image:: _static/agrif_grid_position.jpg
     
      boundary data exchange and update being only performed between root and child grids.
      Use of east-west periodic or north-fold boundary conditions is not allowed in child grids either.
    - Defining for instance a circumpolar zoom in a global model is therefore not possible.
    + Defining for instance a circumpolar zoom in a global model is therefore not possible.

      Preprocessing
      =============

    - Knowing the refinement factors and area, a ``NESTING`` pre-processing tool may help to create needed input files
    + Knowing the refinement factors and area,
    + a ``NESTING`` pre-processing tool may help to create needed input files
      (mesh file, restart, climatological and forcing files).
      The key is to ensure volume matching near the child grid interface,
    - a step done by invoking the ``Agrif_create_bathy.exe`` program.
    - You may use the namelists provided in the ``NESTING`` directory as a guide.
    + a step done by invoking the :file:`Agrif_create_bathy.exe` program.
    + You may use the namelists provided in the :file:`NESTING` directory as a guide.
      These correspond to the namelists used to create ``AGRIF_DEMO`` inputs.

     

      Each child grid expects to read its own namelist so that different numerical choices can be made
    - (these should be stored in the form ``1_namelist_cfg``, ``2_namelist_cfg``, etc... according to their rank in
    - the grid hierarchy).
    + (these should be stored in the form :file:`1_namelist_cfg`, :file:`2_namelist_cfg`, etc.,
    + according to their rank in the grid hierarchy).
      Consistent time steps and number of steps with the chosen time refinement have to be provided.
      Specific to AGRIF is the following block:

    - .. code-block:: fortran
    -
    -    !-----------------------------------------------------------------------
    -    &namagrif      !  AGRIF zoom                                            ("key_agrif")
    -    !-----------------------------------------------------------------------
    -       ln_spc_dyn    = .true.  !  use 0 as special value for dynamics
    -       rn_sponge_tra = 2880.   !  coefficient for tracer   sponge layer [m2/s]
    -       rn_sponge_dyn = 2880.   !  coefficient for dynamics sponge layer [m2/s]
    -       ln_chk_bathy  = .false. !  =T  check the parent bathymetry
    -    /
    + .. literalinclude:: ../../namelists/namagrif
    +    :language: fortran

      where sponge layer coefficients have to be chosen according to the child grid mesh size.
      The sponge area is hard coded in NEMO and applies on the following grid points:
    - 2 x refinement factor (from i=1+nbghostcells+1 to i=1+nbghostcells+sponge_area)
    + 2 x refinement factor (from ``i=1+nbghostcells+1`` to ``i=1+nbghostcells+sponge_area``)

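    As a worked example of the time-step consistency mentioned above (values are illustrative):
    for a parent grid running with ``rn_rdt = 5400.`` and a time refinement factor of 3, the
    child's 1_namelist_cfg would rescale the step, e.g.

       &namdom        !   child time stepping (sketch)
          rn_rdt = 1800.   !  parent time step divided by the time refinement factor
       /

    with the number of child time steps increased by the same factor, so parent and child cover
    the same simulated period.
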
    - References
    - ==========
    + .. rubric:: References

      .. bibliography:: zooms.bib
    -    :all:
    -    :style: unsrt
    -    :labelprefix: A
    -    :keyprefix: a-
    +    :all:
    +    :style: unsrt
    +    :labelprefix: A
    +    :keyprefix: a-
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/ORCA2_ICE_PISCES/EXPREF/namelist_ice_cfg

    r10535 r12165

      &namdyn_rhg     !   Ice rheology
      !------------------------------------------------------------------------------
    +       ln_aEVP       = .false.          !     adaptive rheology (Kimmritz et al. 2016 & 2017)
      /
      !------------------------------------------------------------------------------
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/README.rst

    r10694 r12165

    - ************************
    - Reference configurations
    - ************************
    + ********************************
    + Run the Reference configurations
    + ********************************
    +
    + .. todo::
    +
    +    Lack of illustrations for ref. cfgs, and more generally in the guide.

      NEMO is distributed with a set of reference configurations allowing both
     
      the developer to test/validate their NEMO developments (using the SETTE package).

    + .. contents::
    +    :local:
    +    :depth: 1
    +
      .. attention::

     
      ===========================================================

    - A user who wants to compile the ORCA2_ICE_PISCES_ reference configuration using ``makenemo``
    - should use the following, by selecting among available architecture file or providing a user defined one:
    + To compile the ORCA2_ICE_PISCES_ reference configuration using :file:`makenemo`,
    + one should use the following, by selecting among the available architecture files or
    + providing a user defined one:

      .. code-block:: console

    -
    -    $ ./makenemo -r 'ORCA2_ICE_PISCES' -m 'my-fortran.fcm' -j '4'
    +
    +    $ ./makenemo -r 'ORCA2_ICE_PISCES' -m 'my_arch' -j '4'

      A new ``EXP00`` folder will be created within the selected reference configuration,
    - namely ``./cfgs/ORCA2_ICE_PISCES/EXP00``,
    - where it will be necessary to uncompress the Input & Forcing Files listed in the above table.
    + namely ``./cfgs/ORCA2_ICE_PISCES/EXP00``.
    + It will be necessary to uncompress the archives listed in the table below for
    + the given reference configuration (input & forcing files).

      Then it will be possible to launch the execution of the model through a runscript
      (opportunely adapted to the user system).
    -
    +
      List of Configurations
      ======================

    - All forcing files listed below in the table are available from |NEMO archives URL|_
    -
    - .. |NEMO archives URL| image:: https://www.zenodo.org/badge/DOI/10.5281/zenodo.1472245.svg
    - .. _NEMO archives URL: https://doi.org/10.5281/zenodo.1472245
    -
    - ====================== ===== ===== ===== ======== ======= ================================================
    -  Configuration                     Component(s)                            Input & Forcing File(s)
    - ---------------------- ---------------------------------- ------------------------------------------------
    -  Name                   OPA   SI3   TOP   PISCES   AGRIF
    - ====================== ===== ===== ===== ======== ======= ================================================
    -  AGRIF_DEMO_             X     X                     X     AGRIF_DEMO_v4.0.tar, ORCA2_ICE_v4.0.tar
    -  AMM12_                  X                                 AMM12_v4.0.tar
    -  C1D_PAPA_               X                                 INPUTS_C1D_PAPA_v4.0.tar
    -  GYRE_BFM_               X           X                     *none*
    -  GYRE_PISCES_            X           X      X              *none*
    -  ORCA2_ICE_PISCES_       X     X     X      X              ORCA2_ICE_v4.0.tar, INPUTS_PISCES_v4.0.tar
    -  ORCA2_OFF_PISCES_                   X      X              ORCA2_OFF_v4.0.tar, INPUTS_PISCES_v4.0.tar
    -  ORCA2_OFF_TRC_                      X                     ORCA2_OFF_v4.0.tar
    -  ORCA2_SAS_ICE_                X                           ORCA2_ICE_v4.0.tar, INPUTS_SAS_v4.0.tar
    -  SPITZ12_                X     X                           SPITZ12_v4.0.tar
    - ====================== ===== ===== ===== ======== ======= ================================================
    + All forcing files listed below in the table are available from |DOI data|_
    +
    + =================== === === === === === ==================================
    +  Configuration       Component(s)        Archives (input & forcing files)
    + ------------------- ------------------- ----------------------------------
    +  Name                O   S   T   P   A
    + =================== === === === === === ==================================
    +  AGRIF_DEMO_         X   X           X   AGRIF_DEMO_v4.0.tar,
    +                                          ORCA2_ICE_v4.0.tar
    +  AMM12_              X                   AMM12_v4.0.tar
    +  C1D_PAPA_           X                   INPUTS_C1D_PAPA_v4.0.tar
    +  GYRE_BFM_           X       X           *none*
    +  GYRE_PISCES_        X       X   X       *none*
    +  ORCA2_ICE_PISCES_   X   X   X   X       ORCA2_ICE_v4.0.tar,
    +                                          INPUTS_PISCES_v4.0.tar
    +  ORCA2_OFF_PISCES_           X   X       ORCA2_OFF_v4.0.tar,
    +                                          INPUTS_PISCES_v4.0.tar
    +  ORCA2_OFF_TRC_              X           ORCA2_OFF_v4.0.tar
    +  ORCA2_SAS_ICE_          X               ORCA2_ICE_v4.0.tar,
    +                                          INPUTS_SAS_v4.0.tar
    +  SPITZ12_            X   X               SPITZ12_v4.0.tar
    + =================== === === === === === ==================================
    +
    + .. admonition:: Legend for component combination
    +
    +    O for OCE, S for SI\ :sup:`3`, T for TOP, P for PISCES and A for AGRIF

      AGRIF_DEMO
     
      particular interest to test sea ice coupling.

    + .. image:: _static/AGRIF_DEMO_no_cap.jpg
    +    :scale: 66%
    +    :align: center
    +
      The 1:1 grid can be used alone as a benchmark to check that
    - the model solution is not corrupted by grid exchanges.
    + the model solution is not corrupted by grid exchanges.
      Note that since grids interact only at the baroclinic time level,
      numerically exact results cannot be achieved in the 1:1 case.
    - Perfect reproducibility is obtained only by switching to a fully explicit setup instead of a split explicit free surface scheme.
    + Perfect reproducibility is obtained only by switching to a fully explicit setup instead of
    + a split explicit free surface scheme.

      AMM12
     
      a regular horizontal grid of ~12 km of resolution (see :cite:`ODEA2012`).

    - This configuration allows to tests several features of NEMO specifically addressed to the shelf seas.
    + .. image:: _static/AMM_domain.png
    +    :align: center
    +
    + This configuration allows testing several features of NEMO specifically addressed to the shelf seas.
      In particular, ``AMM12`` accounts for a vertical s-coordinate system, the GLS turbulence scheme,
      and tidal lateral boundary conditions using a Flather scheme (see more in ``BDY``).
     
      --------

    - ``C1D_PAPA`` is a 1D configuration for the `PAPA station <http://www.pmel.noaa.gov/OCS/Papa/index-Papa.shtml>`_ located in the northern-eastern Pacific Ocean at 50.1°N, 144.9°W.
    - See `Reffray et al. (2015) <http://www.geosci-model-dev.net/8/69/2015>`_ for the description of its physical and numerical turbulent-mixing behaviour.
    -
    - The water column setup, called NEMO1D, is activated with the inclusion of the CPP key ``key_c1d`` and
    - has a horizontal domain of 3x3 grid points.
    -
    - This reference configuration uses 75 vertical levels grid (1m at the surface), GLS turbulence scheme with K-epsilon closure and the NCAR bulk formulae.
    + .. figure:: _static/Papa2015.jpg
    +    :height: 225px
    +    :align:  left
    +
    + ``C1D_PAPA`` is a 1D configuration for the `PAPA station`_ located in
    + the north-eastern Pacific Ocean at 50.1°N, 144.9°W.
    + See :gmd:`Reffray et al. (2015) <8/69/2015>` for the description of
    + its physical and numerical turbulent-mixing behaviour.
    +
    + | The water column setup, called NEMO1D, is activated with
    +   the inclusion of the CPP key ``key_c1d`` and
    +   has a horizontal domain of 3x3 grid points.
    + | This reference configuration uses a 75 vertical levels grid (1 m at the surface),
    +   the GLS turbulence scheme with K-epsilon closure and the NCAR bulk formulae.

      Data provided with the ``INPUTS_C1D_PAPA_v4.0.tar`` file account for:

    - - ``forcing_PAPASTATION_1h_y201[0-1].nc`` : ECMWF operational analysis atmospheric forcing rescaled to 1h (with long and short waves flux correction) for years 2010 and 2011
    - - ``init_PAPASTATION_m06d15.nc`` : Initial Conditions from observed data and Levitus 2009 climatology
    - - ``chlorophyll_PAPASTATION.nc`` : surface chlorophyll file from Seawifs data
    + - :file:`forcing_PAPASTATION_1h_y201[0-1].nc`:
    +   ECMWF operational analysis atmospheric forcing rescaled to 1h
    +   (with long and short waves flux correction) for years 2010 and 2011
    + - :file:`init_PAPASTATION_m06d15.nc`: initial conditions from
    +   observed data and Levitus 2009 climatology
    + - :file:`chlorophyll_PAPASTATION.nc`: surface chlorophyll file from SeaWiFS data

      GYRE_BFM
      --------

    - ``GYRE_BFM`` shares the same physical setup of GYRE_PISCES_, but NEMO is coupled with the `BFM <http://www.bfm-community.eu/>`_ biogeochemical model as described in ``./cfgs/GYRE_BFM/README``.
    + ``GYRE_BFM`` shares the same physical setup as GYRE_PISCES_,
    + but NEMO is coupled with the `BFM`_ biogeochemical model as described in ``./cfgs/GYRE_BFM/README``.

      GYRE_PISCES
     
      in the Beta-plane approximation with a regular 1° horizontal resolution and 31 vertical levels,
      with the PISCES BGC model :cite:`gmd-8-2465-2015`.
    - Analytical forcing for heat, freshwater and wind-stress fields are applied.
    -
    - This configuration acts also as demonstrator of the **user defined setup** (``ln_read_cfg = .false.``) and
    - grid setting are handled through the ``&namusr_def`` controls in ``namelist_cfg``:
    + Analytical forcing for heat, freshwater and wind-stress fields is applied.
    +
    + This configuration also acts as a demonstrator of the **user defined setup**
    + (``ln_read_cfg = .false.``) and grid settings are handled through
    + the ``&namusr_def`` controls in :file:`namelist_cfg`:

      .. literalinclude:: ../../../cfgs/GYRE_PISCES/EXPREF/namelist_cfg
         :language: fortran
    -    :lines: 34-42
    +    :lines:    35-41

      Note that the default grid size is 30x20 grid points (with ``nn_GYRE = 1``) and
      vertical levels are set by ``jpkglo``.
    - The specific code changes can be inspected in ``./src/OCE/USR``.
    -
    - **Running GYRE as a benchmark** :
    - this simple configuration can be used as a benchmark since it is easy to increase resolution,
    - with the drawback of getting results that have a very limited physical meaning.
    -
    - GYRE grid resolution can be increased at runtime by setting a different value of ``nn_GYRE`` (integer multiplier scaling factor), as described in the following table:
    -
    - =========== ========= ========== ============ ===================
    - ``nn_GYRE``  *jpiglo*  *jpjglo*   ``jpkglo``   **Equivalent to**
    - =========== ========= ========== ============ ===================
    -  1           30        20         31           GYRE 1°
    -  25          750       500        101          ORCA 1/2°
    -  50          1500      1000       101          ORCA 1/4°
    -  150         4500      3000       101          ORCA 1/12°
    -  200         6000      4000       101          ORCA 1/16°
    - =========== ========= ========== ============ ===================
    -
    - Note that, it is necessary to set ``ln_bench = .true.`` in ``namusr_def`` to
    - avoid problems in the physics computation and that
    - the model timestep should be adequately rescaled.
    -
    - For example if ``nn_GYRE = 150``, equivalent to an ORCA 1/12° grid,
    - the timestep ``rn_rdt = 1200`` should be set to 1200 seconds
    -
    - Differently from previous versions of NEMO,
    - the code uses by default the time-splitting scheme and
    - internally computes the number of sub-steps.
    + The specific code changes can be inspected in :file:`./src/OCE/USR`.
    +
    + .. rubric:: Running GYRE as a benchmark
    +
    + | This simple configuration can be used as a benchmark since it is easy to increase resolution,
    +   with the drawback of getting results that have a very limited physical meaning.
    + | GYRE grid resolution can be increased at runtime by setting a different value of ``nn_GYRE``
    +   (integer multiplier scaling factor), as described in the following table:
    +
    + =========== ============ ============ ============ ===============
    + ``nn_GYRE``  ``jpiglo``   ``jpjglo``   ``jpkglo``   Equivalent to
    + =========== ============ ============ ============ ===============
    +  1           30           20           31           GYRE 1°
    +  25          750          500          101          ORCA 1/2°
    +  50          1500         1000         101          ORCA 1/4°
    +  150         4500         3000         101          ORCA 1/12°
    +  200         6000         4000         101          ORCA 1/16°
    + =========== ============ ============ ============ ===============
    +
    + | Note that it is necessary to set ``ln_bench = .true.`` in ``&namusr_def`` to
    +   avoid problems in the physics computation and that
    +   the model timestep should be adequately rescaled.
    + | For example, if ``nn_GYRE = 150``, equivalent to an ORCA 1/12° grid,
    +   the timestep ``rn_rdt`` should be set to 1200 seconds.
    +   Differently from previous versions of NEMO, the code by default uses the time-splitting scheme and
    +   internally computes the number of sub-steps.

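    To make the benchmark recipe concrete, a minimal sketch of the relevant namelist_cfg
    entries for the ``nn_GYRE = 150`` row of the table (parameter placement follows the GYRE
    reference namelists; values are illustrative):

       &namusr_def    !   GYRE user defined namelist
          nn_GYRE  =  150      !  multiplier: grid size equivalent to ORCA 1/12°
          ln_bench = .true.    !  benchmark mode (results have very limited physical meaning)
          jpkglo   =  101      !  number of vertical levels
       /
       &namdom        !   time step rescaled for the finer grid
          rn_rdt   = 1200.     !  time step [s]
       /
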
      ORCA2_ICE_PISCES

      the ratio of anisotropy is nearly one everywhere

    - this configuration uses the three components
    -
    - - |OPA|, the ocean dynamical core
    - - |SI3|, the thermodynamic-dynamic sea ice model.
    - - |TOP|, passive tracer transport module and PISCES BGC model :cite:`gmd-8-2465-2015`
    + This configuration uses the three components
    +
    + - |OCE|, the ocean dynamical core
    + - |ICE|, the thermodynamic-dynamic sea ice model.
    + - |MBG|, the passive tracer transport module and PISCES BGC model :cite:`gmd-8-2465-2015`

      All components share the same grid.
    -
      The model is forced with CORE-II normal year atmospheric forcing and
      it uses the NCAR bulk formulae.

    - In this ``ORCA2_ICE_PISCES`` configuration,
    - AGRIF nesting can be activated that includes a nested grid in the Agulhas region.
    -
    - To set up this configuration, after extracting NEMO:
    -
    - Build your AGRIF configuration directory from ``ORCA2_ICE_PISCES``,
    - with the ``key_agrif`` CPP key activated:
    -
    - .. code-block:: console
    -
    -         $ ./makenemo -r 'ORCA2_ICE_PISCES' -n 'AGRIF' add_key 'key_agrif'
    -
    - By using the input files and namelists for ``ORCA2_ICE_PISCES``,
    - the AGRIF test configuration is ready to run.
    -
    - **Ocean Physics**
    -
    - - *horizontal diffusion on momentum*: the eddy viscosity coefficient depends on the geographical position. It is taken as 40000 m^2/s, reduced in the equator regions (2000 m^2/s) excepted near the western boundaries.
    - - *isopycnal diffusion on tracers*: the diffusion acts along the isopycnal surfaces (neutral surface) with an eddy diffusivity coefficient of 2000 m^2/s.
    - - *Eddy induced velocity parametrization* with a coefficient that depends on the growth rate of baroclinic instabilities (it usually varies from 15 m^2/s to 3000 m^2/s).
    - - *lateral boundary conditions* : zero fluxes of heat and salt and no-slip conditions are applied through lateral solid boundaries.
    - - *bottom boundary condition* : zero fluxes of heat and salt are applied through the ocean bottom.
    -   The Beckmann [19XX] simple bottom boundary layer parameterization is applied along continental slopes.
    -   A linear friction is applied on momentum.
    - - *convection*: the vertical eddy viscosity and diffusivity coefficients are increased to 1 m^2/s in case of static instability.
    - - *time step* is 5760sec (1h36') so that there is 15 time steps in one day.
    + .. rubric:: Ocean Physics
    +
    + :horizontal diffusion on momentum:
    +    the eddy viscosity coefficient depends on the geographical position.
    +    It is taken as 40000 m\ :sup:`2`/s, reduced in the equator regions (2000 m\ :sup:`2`/s)
    +    except near the western boundaries.
    + :isopycnal diffusion on tracers:
    +    the diffusion acts along the isopycnal surfaces (neutral surfaces) with
    +    an eddy diffusivity coefficient of 2000 m\ :sup:`2`/s.
    + :Eddy induced velocity parametrization:
    +    with a coefficient that depends on the growth rate of baroclinic instabilities
    +    (it usually varies from 15 m\ :sup:`2`/s to 3000 m\ :sup:`2`/s).
    + :lateral boundary conditions:
    +    zero fluxes of heat and salt and no-slip conditions are applied through lateral solid boundaries.
    + :bottom boundary condition:
    +    zero fluxes of heat and salt are applied through the ocean bottom.
    +    The Beckmann [19XX] simple bottom boundary layer parameterization is applied along
    +    continental slopes.
    +    A linear friction is applied on momentum.
    + :convection:
    +    the vertical eddy viscosity and diffusivity coefficients are increased to 1 m\ :sup:`2`/s in
    +    case of static instability.
    + :time step: is 5760 s (1h36'), so that there are 15 time steps in one day.

      ORCA2_OFF_PISCES
     
      but only the PISCES model is an active component of TOP.

    -
      ORCA2_OFF_TRC
      -------------

    - ``ORCA2_OFF_TRC`` is based on the ORCA2 global ocean configuration
    - (see ORCA2_ICE_PISCES_ for general description) along with the tracer passive transport module (TOP), but dynamical fields are pre-calculated and read with specific time frequency.
    -
    - This enables for an offline coupling of TOP components,
    - here specifically inorganic carbon compounds (cfc11, cfc12, sf6, c14) and water age module (age).
    - See ``namelist_top_cfg`` to inspect the selection of each component with the dedicated logical keys.
    + | ``ORCA2_OFF_TRC`` is based on the ORCA2 global ocean configuration
    +   (see ORCA2_ICE_PISCES_ for a general description) along with
    +   the tracer passive transport module (TOP),
    +   but dynamical fields are pre-calculated and read with a specific time frequency.
    + | This enables an offline coupling of TOP components,
    +   here specifically inorganic carbon compounds (CFC11, CFC12, SF6, C14) and the water age module (age).
    +   See :file:`namelist_top_cfg` to inspect the selection of
    +   each component with the dedicated logical keys.

      Pre-calculated dynamical fields are provided to NEMO using
    - the namelist ``&namdta_dyn``  in ``namelist_cfg``,
    + the namelist ``&namdta_dyn`` in :file:`namelist_cfg`,
      in this case with a 5 days frequency (120 hours):

    - .. literalinclude:: ../../../cfgs/GYRE_PISCES/EXPREF/namelist_ref
    + .. literalinclude:: ../../namelists/namdta_dyn
         :language: fortran
    -    :lines: 935-960
    -
    +
    - Input dynamical fields for this configuration (``ORCA2_OFF_v4.0.tar``) comes from
    + Input dynamical fields for this configuration (:file:`ORCA2_OFF_v4.0.tar`) come from
      a 2000 years long climatological simulation of ORCA2_ICE using ERA40 atmospheric forcing.

    - Note that, this configuration default uses linear free surface (``ln_linssh = .true.``) assuming that
    - model mesh is not varying in time and
    - it includes the bottom boundary layer parameterization (``ln_trabbl = .true.``) that
    - requires the provision of bbl coefficients through ``sn_ubl`` and ``sn_vbl`` fields.
    -
    - It is also possible to activate PISCES model (see ``ORCA2_OFF_PISCES``) or
    - a user defined set of tracers and source-sink terms with ``ln_my_trc = .true.``
    - (and adaptation of ``./src/TOP/MY_TRC`` routines).
    + | Note that
    +   this configuration by default uses the linear free surface (``ln_linssh = .true.``), assuming that
    +   the model mesh is not varying in time, and
    +   it includes the bottom boundary layer parameterization (``ln_trabbl = .true.``) that
    +   requires the provision of BBL coefficients through the ``sn_ubl`` and ``sn_vbl`` fields.
    + | It is also possible to activate the PISCES model (see ``ORCA2_OFF_PISCES``) or
    +   a user defined set of tracers and source-sink terms with ``ln_my_trc = .true.``
    +   (and adaptation of the ``./src/TOP/MY_TRC`` routines).

      In addition, the offline module (OFF) allows for the provision of further fields:
     
         by including an input datastream similarly to the following:

    - .. code-block:: fortran
    -
    -    sn_rnf  = 'dyna_grid_T', 120, 'sorunoff' , .true., .true., 'yearly', '', '', ''
    -
    - 2. **VVL dynamical fields**,
    -    in the case input data were produced by a dyamical core using variable volume (``ln_linssh = .false.``)
    -    it necessary to provide also diverce and E-P at before timestep by
    +    .. code-block:: fortran
    +
    +       sn_rnf  = 'dyna_grid_T', 120, 'sorunoff' , .true., .true., 'yearly', '', '', ''
    +
    + 2. **VVL dynamical fields**, in the case input data were produced by a dynamical core using
    +    variable volume (``ln_linssh = .false.``),
    +    it is also necessary to provide divergence and E-P at the before time step by
         including input datastreams similarly to the following

    - .. code-block:: fortran
    -
    -    sn_div  = 'dyna_grid_T', 120, 'e3t'      , .true., .true., 'yearly', '', '', ''
    -    sn_empb = 'dyna_grid_T', 120, 'sowaflupb', .true., .true., 'yearly', '', '', ''
    -
    +    .. code-block:: fortran
    +
    +       sn_div  = 'dyna_grid_T', 120, 'e3t'      , .true., .true., 'yearly', '', '', ''
    +       sn_empb = 'dyna_grid_T', 120, 'sowaflupb', .true., .true., 'yearly', '', '', ''

      More details can be found by inspecting the offline data manager in
    - the routine ``./src/OFF/dtadyn.F90``.
    + the routine :file:`./src/OFF/dtadyn.F90`.

      ORCA2_SAS_ICE
      -------------

    - ORCA2_SAS_ICE is a demonstrator of the Stand-Alone Surface (SAS) module and
    - it relies on ORCA2 global ocean configuration (see ORCA2_ICE_PISCES_ for general description).
    -
    - The standalone surface module allows surface elements such as sea-ice, iceberg drift, and
    - surface fluxes to be run using prescribed model state fields.
    - It can profitably be used to compare different bulk formulae or
    - adjust the parameters of a given bulk formula.
    -
    - More informations about SAS can be found in NEMO manual.
    + | ORCA2_SAS_ICE is a demonstrator of the Stand-Alone Surface (SAS) module and
    +   it relies on the ORCA2 global ocean configuration (see ORCA2_ICE_PISCES_ for a general description).
    + | The standalone surface module allows surface elements such as sea ice, iceberg drift, and
    +   surface fluxes to be run using prescribed model state fields.
    +   It can profitably be used to compare different bulk formulae or
    +   adjust the parameters of a given bulk formula.
    +
    + More information about SAS can be found in the :doc:`NEMO manual <cite>`.

      SPITZ12
     
      ``SPITZ12`` is a regional configuration around the Svalbard archipelago
      at 1/12° of horizontal resolution and 75 vertical levels.
    - See `Rousset et al. (2015) <https://www.geosci-model-dev.net/8/2991/2015/>`_ for more details.
    + See :gmd:`Rousset et al. (2015) <8/2991/2015>` for more details.

      This configuration refers to the year 2002,

      while lateral boundary conditions for dynamical fields have a 3 days time frequency.

    - References
    - ==========
    -
    - .. bibliography:: configurations.bib
    + .. rubric:: References
    +
    + .. bibliography:: cfgs.bib
         :all:
         :style: unsrt
         :labelprefix: C
    -
    - .. Links and substitutions
    -
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SHARED/README.rst

    r10598 r12165

      ***********

    + .. todo::
    +
    +
    +
      .. contents::
    -            :local:
    +    :local:

      Output of diagnostics in NEMO is usually done using XIOS.
    - This is an efficient way of writing diagnostics because the time averaging, file writing and even some simple arithmetic or regridding is carried out in parallel to the NEMO model run.
    + This is an efficient way of writing diagnostics because
    + the time averaging, file writing and even some simple arithmetic or regridding is carried out in
    + parallel to the NEMO model run.
      This page gives a basic introduction to using XIOS with NEMO.
    - Much more information is available from the XIOS homepage above and from the NEMO manual.
    + Much more information is available from the :xios:`XIOS homepage<>` above and from the NEMO manual.

    - Use of XIOS for diagnostics is activated using the pre-compiler key ``key_iomput``.
    + Use of XIOS for diagnostics is activated using the pre-compiler key ``key_iomput``.

      Extracting and installing XIOS
    - ------------------------------
    + ==============================

      1. Install the NetCDF4 library.
    -    If you want to use single file output you will need to compile the HDF & NetCDF libraries to allow parallel IO.
    - 2. Download the version of XIOS that you wish to use. The recommended version is now XIOS 2.5:
    -
    - .. code-block:: console
    +    If you want to use single file output you will need to compile the HDF & NetCDF libraries to
    +    allow parallel IO.
    + 2. Download the version of XIOS that you wish to use.
    +    The recommended version is now XIOS 2.5:

    -    $ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5 xios-2.5
    +    .. code-block:: console

    - and follow the instructions in `XIOS documentation <http://forge.ipsl.jussieu.fr/ioserver/wiki/documentation>`_ to compile it.
    -    If you find problems at this stage, support can be found by subscribing to the `XIOS mailing list <http://forge.ipsl.jussieu.fr/mailman/listinfo.cgi/xios-users>`_ and sending a mail message to it.
    +       $ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5
    +
    + and follow the instructions in :xios:`XIOS documentation <wiki/documentation>` to compile it.
    + If you find problems at this stage, support can be found by subscribing to
    + the :xios:`XIOS mailing list <../mailman/listinfo.cgi/xios-users>` and sending a mail message to it.

      XIOS Configuration files
      ------------------------

    - XIOS is controlled using xml input files that should be copied to your model run directory before running the model.
    - Examples of these files can be found in the reference configurations (``cfgs``). The XIOS executable expects to find a file called ``iodef.xml`` in the model run directory.
    - In NEMO we have made the decision to use include statements in the ``iodef.xml`` file to include ``field_def_nemo-oce.xml`` (for physics), ``field_def_nemo-ice.xml`` (for ice), ``field_def_nemo-pisces.xml`` (for biogeochemistry) and ``domain_def.xml`` from the /cfgs/SHARED directory.
    - Most users will not need to modify ``domain_def.xml`` or ``field_def_nemo-???.xml`` unless they want to add new diagnostics to the NEMO code.
    - The definition of the output files is organized into separate ``file_definition.xml`` files which are included in the ``iodef.xml`` file.
    + XIOS is controlled using XML input files that should be copied to
    + your model run directory before running the model.
    + Examples of these files can be found in the reference configurations (:file:`./cfgs`).
    + The XIOS executable expects to find a file called :file:`iodef.xml` in the model run directory.
    + In NEMO we have made the decision to use include statements in the :file:`iodef.xml` file to include:
    +
    + - :file:`field_def_nemo-oce.xml` (for physics),
    + - :file:`field_def_nemo-ice.xml` (for ice),
    + - :file:`field_def_nemo-pisces.xml` (for biogeochemistry) and
    + - :file:`domain_def.xml` from the :file:`./cfgs/SHARED` directory.
    +
    + Most users will not need to modify :file:`domain_def.xml` or :file:`field_def_nemo-???.xml` unless
    + they want to add new diagnostics to the NEMO code.
    + The definition of the output files is organized into separate :file:`file_definition.xml` files which
    + are included in the :file:`iodef.xml` file.

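    The include mechanism described above relies on the ``src`` attribute of the XIOS XML
    elements; a minimal sketch (element nesting abridged, the authoritative layout being the
    one shipped in the reference configurations) could look like:

       <field_definition src="./field_def_nemo-oce.xml"/>
       <domain_definition src="./domain_def.xml"/>
       <file_definition src="./file_definition.xml"/>
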
      Modes
    - -----
    + =====

      Detached Mode

      In detached mode the XIOS executable is executed on separate cores from the NEMO model.
      This is the recommended method for using XIOS for realistic model runs.
    - To use this mode set ``using_server`` to ``true`` at the bottom of the ``iodef.xml`` file:
    + To use this mode set ``using_server`` to ``true`` at the bottom of the :file:`iodef.xml` file:

      .. code-block:: xml

    -    <variable id="using_server" type="boolean">true</variable>
    +    <variable id="using_server" type="boolean">true</variable>

    - Make sure there is a copy (or link to) your XIOS executable in the working directory and in your job submission script allocate processors to XIOS.
    + Make sure there is a copy of (or a link to) your XIOS executable in the working directory and
    + in your job submission script allocate processors to XIOS.

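    Allocating processors to XIOS is typically done with an MPMD launch line in the job
    submission script; one common pattern (a sketch: launcher syntax and core counts vary
    between systems) is:

       mpirun -np 40 ./nemo : -np 4 ./xios_server.exe
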
      Attached Mode

      In attached mode XIOS runs on each of the cores used by NEMO.
    - This method is less efficient than the detached mode but can be more convenient for testing or with small configurations.
    - To activate this mode simply set ``using_server`` to false in the ``iodef.xml`` file
    + This method is less efficient than the detached mode but can be more convenient for testing or
    + with small configurations.
    + To activate this mode simply set ``using_server`` to ``false`` in the :file:`iodef.xml` file

      .. code-block:: xml

    -    <variable id="using_server" type="boolean">false</variable>
    +    <variable id="using_server" type="boolean">false</variable>

      and don't allocate any cores to XIOS.
    - Note that due to the different domain decompositions between XIOS and NEMO if the total number of cores is larger than the number of grid points in the j direction then the model run will fail.
    +
    + .. note::
    +
    +    Due to the different domain decompositions between XIOS and NEMO,
    +    if the total number of cores is larger than the number of grid points in the ``j`` direction then
    +    the model run will fail.

      Adding new diagnostics
    - ----------------------
    + ======================

      If you want to add a NEMO diagnostic to the NEMO code you will need to do the following:

      1. Add any necessary code to calculate your new diagnostic in NEMO
    - 2. Send the field to XIOS using ``CALL iom_put( 'field_id', variable )`` where ``field_id`` is a unique id for your new diagnostics and variable is the fortran variable containing the data.
    -    This should be called at every model timestep regardless of how often you want to output the field. No time averaging should be done in the model code.
    - 3. If it is computationally expensive to calculate your new diagnostic you should also use "iom_use" to determine if it is requested in the current model run. For example,
    -
    - .. code-block:: fortran
    + 2. Send the field to XIOS using ``CALL iom_put( 'field_id', variable )`` where
    +    ``field_id`` is a unique id for your new diagnostic and
    +    ``variable`` is the Fortran variable containing the data.
    +    This should be called at every model timestep regardless of how often you want to output the field.
    +    No time averaging should be done in the model code.
    + 3. If it is computationally expensive to calculate your new diagnostic,
    +    you should also use ``iom_use`` to determine if it is requested in the current model run.
    +    For example,

    -       IF iom_use('field_id') THEN
    -          !Some expensive computation
    -          !...
    -          !...
    -          iom_put('field_id', variable)
    -       ENDIF
    +    .. code-block:: fortran

    - 4. Add a variable definition to the ``field_def_nemo-???.xml`` file.
    - 5. Add the variable to the ``iodef.xml`` or ``file_definition.xml`` file.
    +       IF( iom_use('field_id') ) THEN
    +          !Some expensive computation
    +          !...
    +          !...
    +          iom_put('field_id', variable)
    +       ENDIF
    +
    + 4. Add a variable definition to the :file:`field_def_nemo-???.xml` file.
    + 5. Add the variable to the :file:`iodef.xml` or :file:`file_definition.xml` file.
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SHARED/namelist_ice_ref

    r11586 r12165

         ln_landfast_L16  = .false.         !  landfast: parameterization from Lemieux 2016
            rn_depfra     =   0.125         !        fraction of ocean depth that ice must reach to initiate landfast
    -                                       !          recommended range: [0.1 ; 0.25] - L16=0.125 - home=0.15
    -       rn_icebfr     =  15.            !        ln_landfast_L16:  maximum bottom stress per unit volume [N/m3]
    -                                       !        ln_landfast_home: maximum bottom stress per unit area of contact [N/m2]
    -                                       !          recommended range: ?? L16=15 - home=10
    +                                       !          recommended range: [0.1 ; 0.25]
    +       rn_icebfr     =  15.            !        maximum bottom stress per unit volume [N/m3]
            rn_lfrelax    =   1.e-5         !        relaxation time scale to reach static friction [s-1]
    -       rn_tensile    =   0.2           !        ln_landfast_L16: isotropic tensile strength
    +       rn_tensile    =   0.2           !        isotropic tensile strength [0-0.5??]
      /
      !------------------------------------------------------------------------------

      &namdyn_adv     !   Ice advection
      !------------------------------------------------------------------------------
    -    ln_adv_Pra       = .false.         !  Advection scheme (Prather)
    -    ln_adv_UMx       = .true.          !  Advection scheme (Ultimate-Macho)
    +    ln_adv_Pra       = .true.          !  Advection scheme (Prather)
    +    ln_adv_UMx       = .false.         !  Advection scheme (Ultimate-Macho)
            nn_UMx        =   5             !     order of the scheme for UMx (1-5 ; 20=centered 2nd order)
      /

      &namdia         !   Diagnostics
      !------------------------------------------------------------------------------
    -    ln_icediachk     = .false.         !  check online the heat, mass & salt budgets at each time step
    -       !                               !     rate of ice spuriously gained/lost. For ex., rn_icechk=1. <=> 1mm/year, rn_icechk=0.1 <=> 1mm/10years
    -       rn_icechk_cel =  1.             !     check at any gridcell           => stops the code if violated (and writes a file)
    -       rn_icechk_glo =  0.1            !     check over the entire ice cover => only prints warnings
    +    ln_icediachk     = .false.         !  check online heat, mass & salt budgets
    +       !                               !   rate of ice spuriously gained/lost at each time step => rn_icechk=1 <=> 1.e-6 m/hour
    +       rn_icechk_cel =  100.           !     check at each gridcell          (1.e-4m/h)=> stops the code if violated (and writes a file)
    +       rn_icechk_glo =  1.             !     check over the entire ice cover (1.e-6m/h)=> only prints warnings
         ln_icediahsb     = .false.         !  output the heat, mass & salt budgets (T) or not (F)
         ln_icectl        = .false.         !  ice points output for debug (T or F)
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SPITZ12/EXPREF/namelist_ice_cfg

    r11587 r12165

      &namdyn_rhg     !   Ice rheology
      !------------------------------------------------------------------------------
    -    ln_rhg_EVP       = .true.          !  EVP rheology
    -       ln_aEVP       = .true.          !     adaptive rheology (Kimmritz et al. 2016 & 2017)
      /
      !------------------------------------------------------------------------------
      &namdyn_adv     !   Ice advection
      !------------------------------------------------------------------------------
    +    ln_adv_Pra       = .false.         !  Advection scheme (Prather)
    +    ln_adv_UMx       = .true.          !  Advection scheme (Ultimate-Macho)
    +       nn_UMx        =   5             !     order of the scheme for UMx (1-5 ; 20=centered 2nd order)
      /
      !------------------------------------------------------------------------------