New URL for NEMO forge!   http://forge.nemo-ocean.eu

Since March 2022, along with the NEMO 4.2 release, code development has moved to a self-hosted GitLab.
This forge is now archived and remains online for reference.

Changeset 12165


Timestamp:
2019-12-11T09:27:27+01:00
Author:
davestorkey
Message:

2019/dev_ASINTER-01-05_merged: Update to r12072 of trunk.

Location:
NEMO/branches/2019/dev_ASINTER-01-05_merged
Files:
6 deleted
31 edited
4 copied

  • NEMO/branches/2019/dev_ASINTER-01-05_merged/CONTRIBUTING.rst

    • Property svn:mergeinfo deleted
    r11586 r12165  
    33************ 
    44 
     5.. todo:: 
     6 
     7 
     8 
    59.. contents:: 
    6    :local: 
     10   :local: 
    711 
    812Sending feedbacks 
     
    1721- You have a question: create a topic in the appropriate :forge:`discussion <forum>` 
    1822- You would like to raise an issue: open a new ticket of the right type depending on its severity 
    19    
     23 
    2024  - "Unavoidable" :forge:`newticket?type=Bug       <bug>` 
    21      
     25 
    2226  - "Workable"    :forge:`newticket?type=Defect <defect>` 
    2327 
     
    2731=============== 
    2832 
    29 You have built a development relevant for the NEMO shared reference: an addition to the source code, 
     33You have built a development relevant for the NEMO shared reference: an addition to the source code, 
    3034a full fork of the reference, ... 
    3135 
     
    3438 
    3539The proposals for developments to be included in the shared NEMO reference are first examined by the NEMO Developers 
    36 Committee / Scientific Advisory Board. 
     40Committee / Scientific Advisory Board. 
    3741The implementation of a new development requires some additional work from the initial developer. 
    3842These tasks will need to be scheduled with the NEMO System Team. 
     
    4246---- 
    4347 
    44 You would only like to inform the NEMO community about your developments. 
     48You would only like to inform the NEMO community about your developments. 
    4549You can promote your work on the NEMO forum, which gathers contributions from the community, by creating 
    46 a specific topic here :forge:`discussion/forum/5 <dedicated forum>`  
     50a specific topic here :forge:`discussion/forum/5 <dedicated forum>` 
    4751 
    4852 
     
    5559  routines to the ticket, to highlight the proposed changes by adding to the ticket the output of ``svn diff`` 
    5660  or ``svn patch`` from your working copy. 
    57    
     61 
    5862| Your development seems relevant for addition into a future release of the NEMO shared reference. 
    5963  Implementing it into the NEMO shared reference following the usual quality control will require some additional work 
     
    6165  your suggestion should be sent as a proposed enhancement here :forge:`newticket?type=Enhancement <enhancement>` 
    6266  including description of the development, its implementation, and the existing validations. 
    63    
    64   The proposed enhancement will be examined by the NEMO Developers Committee / Scientific Advisory Board. 
     67 
     68  The proposed enhancement will be examined by the NEMO Developers Committee / Scientific Advisory Board. 
    6569  Once approved by the Committee, the associated development task can be scheduled in the NEMO development work plan, 
    6670  and tasks distributed between you as initial developer and PI of this development action, and the NEMO System Team. 
    67    
     71 
    6872  Once successful (meeting the usual quality control steps), this action will allow the merge of these developments with 
    6973  other developments of the year, building the future NEMO. 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/INSTALL.rst

    • Property svn:mergeinfo deleted
    r11586 r12165  
    33******************* 
    44 
     5.. todo:: 
     6 
     7 
     8 
    59.. contents:: 
    610   :local: 
     
    1014 
    1115| The NEMO source code is written in *Fortran 95* and 
    12   some of its prerequisite tools and libraries are already included in the ``./ext`` subdirectory. 
     16  some of its prerequisite tools and libraries are already included in the download. 
    1317| It contains the AGRIF_ preprocessing program ``conv``; the FCM_ build system and 
    1418  the IOIPSL_ library for parts of the output. 
     
    2327- *Fortran* compiler (``ifort``, ``gfortran``, ``pgfortran``, ...), 
    2428- *Message Passing Interface (MPI)* implementation (e.g. |OpenMPI|_ or |MPICH|_). 
    25 - |NetCDF|_ library with its underlying |HDF|_  
     29- |NetCDF|_ library with its underlying |HDF|_ 
    2630 
    2731**NEMO, by default, takes advantage of some MPI features introduced into the MPI-3 standard.** 
     
    4044   This will limit MPI features to those defined within the MPI-2 standard 
    4145   (but will lose some performance benefits). 
    42  
    43 Specifics for NetCDF and HDF 
    44 ---------------------------- 
    45  
    46 NetCDF and HDF versions from official repositories may not have been compiled with MPI support. 
    47 However, access to all the options available with the XIOS IO-server will require 
    48 the parallel IO support of these libraries, which can be unavailable. 
    49  
    50 | **To satisfy these requirements, it is common to have to compile from source, 
    51   in this order, HDF (C library) then NetCDF (C and Fortran libraries).** 
    52 | It is also necessary to compile these libraries with the same version of the MPI implementation that 
    53   both NEMO and XIOS (see below) are compiled and linked with. 
    54  
    55 .. hint:: 
    56  
    57    | It is difficult to define the options for the compilation as 
    58      they differ from one architecture to another according to 
    59      the hardware used and the software installed. 
    60    | The following is provided without any warranty 
    61  
    62    .. code-block:: console 
    63  
    64       $ ./configure [--{enable-fortran,disable-shared,enable-parallel}] ... 
    65  
    66    It is recommended to build the tests ``--enable-parallel-tests`` and run them with ``make check`` 
    67  
    68 Particular versions of these libraries may have their own restrictions. 
    69 The NetCDF developers state the following requirements for netCDF-4 support: 
    70  
    71 .. caution:: 
    72  
    73    | When building NetCDF-C library versions older than 4.4.1, use only HDF5 1.8.x versions. 
    74    | Combining older NetCDF-C versions with newer HDF5 1.10 versions will create superblock 3 files 
    75      that are not readable by lots of older software. 
    76     
    77 Extract and install XIOS 
    78 ======================== 
    79  
    80 With the sole exception of running NEMO in mono-processor mode 
    81 (in which case output options are limited to those supported by the ``IOIPSL`` library), 
    82 diagnostic outputs from NEMO are handled by the third party ``XIOS`` library. 
    83 This can be used in two different modes: 
    84  
    85 - *attached* - Every NEMO process also acts as a XIOS server 
    86 - *detached* - Every NEMO process runs as a XIOS client. 
    87   Output is collected and collated by external, stand-alone XIOS server processes. 
    88  
    89 .. important:: 
    90  
    91    In either case, XIOS needs to be compiled before NEMO, 
    92    since the libraries are needed to successfully create the NEMO executable. 
    93  
    94 Instructions on how to obtain and install the software can be found on the :xios:`XIOS wiki<wiki>`. 
    95  
    96 .. hint:: 
    97  
    98    It is recommended to use XIOS version 2.5. 
    99    This version should be more stable (in terms of future code changes) than the XIOS trunk. 
    100    It is also the version used by the NEMO system team when testing all developments and new releases. 
    101     
    102    This particular version has its own branch and can be checked out and downloaded with: 
    103  
    104    .. code:: console 
    105  
    106       $ svn co https://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5 
    107  
    108 Download the NEMO source code 
    109 ============================= 
    110  
    111 .. code:: console 
    112  
    113    $ svn co https://forge.ipsl.jussieu.fr/nemo/svn/NEMO/trunk 
    114  
    115 Description of directory tree 
    116 ----------------------------- 
    117  
    118 +-----------+------------------------------------------------------------+ 
    119 | Folder    | Purpose                                                    | 
    120 +===========+============================================================+ 
    121 | ``arch``  | Settings (per architecture-compiler pair)                  | 
    122 +-----------+------------------------------------------------------------+ 
    123 | ``cfgs``  | :doc:`Reference configurations <configurations>`           | 
    124 +-----------+------------------------------------------------------------+ 
    125 | ``doc``   | - ``latex``    : LaTeX source code for ref. manuals        | 
    126 |           | - ``namelists``: k start guide                             | 
    127 |           | - ``rst``      : ReST files for quick start guide          | 
    128 +-----------+------------------------------------------------------------+ 
    129 | ``ext``   | Dependencies included (``AGRIF``, ``FCM`` & ``IOIPSL``)    | 
    130 +-----------+------------------------------------------------------------+ 
    131 | ``mk``    | Building  routines                                         | 
    132 +-----------+------------------------------------------------------------+ 
    133 | ``src``   | Modelling routines                                         | 
    134 |           |                                                            | 
    135 |           | - ``ICE``: |SI3| for sea ice                               | 
    136 |           | - ``NST``: AGRIF for embedded zooms                        | 
    137 |           | - ``OCE``: |OPA| for ocean dynamics                        | 
    138 |           | - ``TOP``: |TOP| for tracers                               | 
    139 +-----------+------------------------------------------------------------+ 
    140 | ``tests`` | :doc:`Test cases <test_cases>` (unsupported)               | 
    141 +-----------+------------------------------------------------------------+ 
    142 | ``tools`` | :doc:`Utilities <tools>` to [pre|post]process data         | 
    143 +-----------+------------------------------------------------------------+ 
    144  
    145 Setup your architecture configuration file 
    146 ========================================== 
    147  
    148 All compiler options in NEMO are controlled using files in 
    149 ``./arch/arch-'my_arch'.fcm`` where 'my_arch' is the name of the computing 
    150 architecture.  It is recommended to copy and rename a configuration file from 
    151 an architecture similar to your own. You will need to set appropriate values 
    152 for all of the variables in the file. In particular, the FCM variables 
    153 ``%NCDF_HOME``, ``%HDF5_HOME`` and ``%XIOS_HOME`` should be set to the 
    154 installation directories of NetCDF, HDF5 and XIOS respectively. 
    155  
    156 .. code-block:: sh 
    157  
    158         %NCDF_HOME           /opt/local 
    159         %HDF5_HOME           /opt/local 
    160         %XIOS_HOME           /Users/$( whoami )/xios-2.5 
    161         %OASIS_HOME          /not/defined 
    162  
    163 Compile and create NEMO executable 
    164 ================================== 
    165  
    166 The main script to compile and create the executable is called makenemo and is located in the CONFIG directory; it is used to identify the routines you need from the source code, to build the makefile and to run it. 
    167 As an example, compile GYRE with 'my_arch' to create a 'MY_GYRE' configuration: 
    168  
    169 .. code-block:: sh 
    170  
    171    ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE' 
    172  
    173 The table below shows the structure and some content of the "MY_GYRE" directory created at the launch of the configuration creation (directories and fundamental files created by makenemo). 
    174  
    175 +------------+----------------------------------------------------+
    176 | Folder     | Purpose                                            |
    177 +============+====================================================+
    178 | ``BLD``    | Compilation folder (executables, header files,     |
    179 |            | libraries, preprocessed routines, flags, ...)      |
    180 +------------+----------------------------------------------------+
    181 | ``EXP00``  | Computation folder for running the model           |
    182 |            | (namelists, xml, executables and inputs-outputs)   |
    183 +------------+----------------------------------------------------+
    184 | ``EXPREF`` | Reference experiment files                         |
    185 |            | (under version control for official configurations)|
    186 +------------+----------------------------------------------------+
    187 | ``MY_SRC`` | Folder intended to contain your customised         |
    188 |            | routines (modified from initial ones or new ones)  |
    189 +------------+----------------------------------------------------+
    190 | ``WORK``   | Folder with symbolic links to all unpreprocessed   |
    191 |            | routines considered in the configuration           |
    192 +------------+----------------------------------------------------+
    193  
    194 After successful execution of the makenemo command, the executable called opa is created in the EXP00 directory (in the example above, the executable is created in CONFIG/MY_GYRE/EXP00). 
    195  
    196 More makenemo options 
    197 --------------------- 
    198  
    199 ``makenemo`` has several other options that can control which source files are selected and 
    200 the operation of the build process itself. 
    201 These are:: 
    202  
    203    Optional: 
    204       -d  Set of new sub-components (space separated list from ./src directory) 
    205       -e  Path for alternative patch  location (default: 'MY_SRC' in configuration folder) 
    206       -h  Print this help 
    207       -j  Number of processes to compile (0: no build) 
    208       -n  Name for new configuration 
    209       -s  Path for alternative source location (default: 'src' root directory) 
    210       -t  Path for alternative build  location (default: 'BLD' in configuration folder) 
    211       -v  Level of verbosity ([0-3]) 
    212  
    213 These options can be useful for maintaining several code versions with only minor differences but 
    214 they should be used sparingly. 
    215 Note however the ``-j`` option which should be used more routinely to speed up the build process. 
    216 For example: 
    217  
    218 .. code-block:: sh 
    219  
    220         ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE' -j 8 
    221  
    222 which will compile up to 8 modules simultaneously. 
    223  
    224  
    225 Default behaviour 
    226 ----------------- 
    227  
    228 At first use, you need the -m option to specify the architecture 
    229 configuration file (compiler and its options, routines and libraries to 
    230 include); for subsequent compilations, it is assumed you will be using the 
    231 same compiler.  If the -n option is not specified, the last compiled configuration 
    232 will be used. 
    233  
    234 Tools used during the process 
    235 ----------------------------- 
    236  
    237 *   functions.sh : bash functions used by makenemo, for instance to create the WORK directory 
    238 *   cfg.txt : text list of configurations and source directories 
    239 *   bld.cfg : FCM rules to compile  
    240  
    241 Examples 
    242 -------- 
    243  
    244 .. code-block:: sh 
    245  
    246         echo "Example to install a new configuration MY_CONFIG"; 
    247         echo "with OPA_SRC and LIM_SRC_2 "; 
    248         echo "makenemo -n MY_CONFIG -d \"OPA_SRC LIM_SRC_2\""; 
    249         echo ""; 
    250         echo "Available configurations :"; cat ${CONFIG_DIR}/cfg.txt; 
    251         echo ""; 
    252         echo "Available unsupported (external) configurations :"; cat ${CONFIG_DIR}/uspcfg.txt; 
    253         echo ""; 
    254         echo "Example to remove bad configuration "; 
    255         echo "./makenemo -n MY_CONFIG clean_config"; 
    256         echo ""; 
    257         echo "Example to clean "; 
    258         echo "./makenemo clean"; 
    259         echo ""; 
    260         echo "Example to list the available keys of a CONFIG "; 
    261         echo "./makenemo list_key"; 
    262         echo ""; 
    263         echo "Example to add and remove keys"; 
    264         echo "./makenemo add_key \"key_iomput key_mpp_mpi\" del_key \"key_agrif\" "; 
    265         echo ""; 
    266         echo "Example to add and remove keys for a new configuration, and do not compile"; 
    267         echo "./makenemo -n MY_CONFIG -j0 add_key \"key_iomput key_mpp_mpi\" del_key \"key_agrif\" "; 
    268  
    269 Running the model 
    270 ================= 
    271  
    272 Once makenemo has run successfully, the opa executable is available in ``CONFIG/MY_CONFIG/EXP00``. 
    273 For the reference configurations, the EXP00 folder also contains the initial input files (namelists, \*xml files for the IOs, ...). If the configuration also needs NetCDF input files, these should be downloaded here from the corresponding tar file (see Users/Reference Configurations). 
    274  
    275 .. code-block:: sh 
    276  
    277         cd 'MY_CONFIG'/EXP00 
    278         mpirun -n $NPROCS ./opa    # $NPROCS is the number of processes ; mpirun is your MPI wrapper 
    279  
    280  
    281 Viewing and changing list of active CPP keys 
    282 ============================================ 
    283  
    284 For a given configuration (here called MY_CONFIG), the list of active CPP keys can be found in: 
    285  
    286 .. code-block:: sh 
    287  
    288         ./cfgs/'MY_CONFIG'/cpp_'MY_CONFIG'.fcm 
    289  
    290  
    291 This text file can be edited to change the list of active CPP keys. Once changed, one needs to recompile the opa executable using the makenemo command in order for this change to be taken into account. 
    292 Note that most NEMO configurations will need to specify the following CPP keys: 
    293 ``key_iomput`` and ``key_mpp_mpi`` 
    294  
    295 .. Links and substitutions 
    29646 
    29747.. |OpenMPI| replace:: *OpenMPI* 
     
    30050.. _MPICH:   https://www.mpich.org 
    30151.. |NetCDF|  replace:: *Network Common Data Form (NetCDF)* 
    302 .. _NetCDF:  https://www.unidata.ucar.edu/downloads/netcdf 
     52.. _NetCDF:  https://www.unidata.ucar.edu 
    30353.. |HDF|     replace:: *Hierarchical Data Form (HDF)* 
    304 .. _HDF:     https://www.hdfgroup.org/downloads 
     54.. _HDF:     https://www.hdfgroup.org 
     55 
     56Specifics for NetCDF and HDF 
     57---------------------------- 
     58 
     59NetCDF and HDF versions from official repositories may not have been compiled with MPI support. 
     60However, access to all the options available with the XIOS IO-server will require 
     61the parallel support of these libraries. 
     62 
     63| **To satisfy these requirements, it is common to have to compile from source, 
     64  in this order, HDF (C library) then NetCDF (C and Fortran libraries).** 
     65| It is also necessary to compile these libraries with the same version of the MPI implementation that 
     66  both NEMO and XIOS (see below) have been compiled and linked with. 
     67 
     68.. hint:: 
     69 
     70   | It is difficult to define the options for the compilation as 
     71     they differ from one architecture to another according to 
     72     the hardware used and the software installed. 
     73   | The following is provided without any warranty 
     74 
     75   .. code-block:: console 
     76 
     77      $ ./configure [--{enable-fortran,disable-shared,enable-parallel}] ... 
     78 
     79   It is recommended to build the tests ``--enable-parallel-tests`` and run them with ``make check`` 
     80 
     81Particular versions of these libraries may have their own restrictions. 
     82The NetCDF developers state the following requirements for netCDF-4 support: 
     83 
     84.. caution:: 
     85 
     86   | When building NetCDF-C library versions older than 4.4.1, use only HDF5 1.8.x versions. 
     87   | Combining older NetCDF-C versions with newer HDF5 1.10 versions will create superblock 3 files 
     88     that are not readable by lots of older software. 
     89 
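
As an illustration only, a minimal build sequence providing parallel support might look like the following
(installation paths, library versions and the MPI compiler wrappers ``mpicc``/``mpif90`` are assumptions to
adapt to your system):

.. code-block:: console

   # HDF5 (C library) with parallel (MPI) support
   $ cd hdf5-<version>
   $ CC=mpicc ./configure --enable-parallel --prefix=$HOME/local
   $ make && make check && make install

   # NetCDF-C, then NetCDF-Fortran, built against the same HDF5 and MPI
   $ cd ../netcdf-c-<version>
   $ CC=mpicc CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib ./configure --prefix=$HOME/local
   $ make && make install
   $ cd ../netcdf-fortran-<version>
   $ CC=mpicc FC=mpif90 CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib ./configure --prefix=$HOME/local
   $ make && make install
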
     90Extract and install XIOS 
     91======================== 
     92 
     93With the sole exception of running NEMO in mono-processor mode 
     94(in which case output options are limited to those supported by the ``IOIPSL`` library), 
     95diagnostic outputs from NEMO are handled by the third party ``XIOS`` library. 
     96It can be used in two different modes: 
     97 
     98:*attached*:  Every NEMO process also acts as a XIOS server 
     99:*detached*:  Every NEMO process runs as a XIOS client. 
     100  Output is collected and collated by external, stand-alone XIOS server processes. 
     101 
     102Instructions on how to install XIOS can be found on its :xios:`wiki<>`. 
     103 
     104.. hint:: 
     105 
     106   It is recommended to use the XIOS 2.5 release. 
     107   This version should be more stable (in terms of future code changes) than the XIOS trunk. 
     108   It is also the one used by the NEMO system team when testing all developments and new releases. 
     109 
     110   This particular version has its own branch and can be checked out with: 
     111 
     112   .. code:: console 
     113 
     114      $ svn co https://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5 
     115 
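
Once checked out, XIOS is typically built with its own ``make_xios`` script against an arch file suited to
your machine, before compiling NEMO (the arch name below is only a placeholder; see the XIOS wiki for the
authoritative instructions):

.. code-block:: console

   $ cd xios-2.5
   $ ./make_xios --arch <MY_ARCH> --job 8
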
     116Download and install the NEMO code 
     117================================== 
     118 
     119Checkout the NEMO sources 
     120------------------------- 
     121 
     122.. code:: console 
     123 
     124   $ svn co https://forge.ipsl.jussieu.fr/nemo/svn/NEMO/trunk 
     125 
     126Description of 1\ :sup:`st` level tree structure 
     127------------------------------------------------ 
     128 
     129+---------------+----------------------------------------+ 
     130| :file:`arch`  | Compilation settings                   | 
     131+---------------+----------------------------------------+ 
     132| :file:`cfgs`  | :doc:`Reference configurations <cfgs>` | 
     133+---------------+----------------------------------------+ 
     134| :file:`doc`   | :doc:`Documentation <doc>`             | 
     135+---------------+----------------------------------------+ 
     136| :file:`ext`   | Dependencies included                  | 
     137|               | (``AGRIF``, ``FCM`` & ``IOIPSL``)      | 
     138+---------------+----------------------------------------+ 
     139| :file:`mk`    | Compilation scripts                    | 
     140+---------------+----------------------------------------+ 
     141| :file:`src`   | :doc:`Modelling routines <src>`        | 
     142+---------------+----------------------------------------+ 
     143| :file:`tests` | :doc:`Test cases <tests>`              | 
     144|               | (unsupported)                          | 
     145+---------------+----------------------------------------+ 
     146| :file:`tools` | :doc:`Utilities <tools>`               | 
     147|               | to {pre,post}process data              | 
     148+---------------+----------------------------------------+ 
     149 
     150Setup your architecture configuration file 
     151------------------------------------------ 
     152 
     153All compiler options in NEMO are controlled using files in :file:`./arch/arch-'my_arch'.fcm` where 
     154``my_arch`` is the name of the computing architecture 
     155(generally following the pattern ``HPCC-compiler`` or ``OS-compiler``). 
     156It is recommended to copy and rename a configuration file from an architecture similar to your own. 
     157You will need to set appropriate values for all of the variables in the file. 
     158In particular, the FCM variables 
     159``%NCDF_HOME``, ``%HDF5_HOME`` and ``%XIOS_HOME`` should be set to 
     160the installation directories of NetCDF, HDF5 and XIOS respectively. 
     161 
     162.. code-block:: sh 
     163 
     164   %NCDF_HOME    /usr/local/path/to/netcdf 
     165   %HDF5_HOME    /usr/local/path/to/hdf5 
     166   %XIOS_HOME    /home/$( whoami )/path/to/xios-2.5 
     167   %OASIS_HOME   /home/$( whoami )/path/to/oasis 
     168 
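
These ``*_HOME`` variables are usually referenced further down the same arch file by the include and
library variables. A sketch of the common pattern is given below; the exact library names and flags vary
between arch files and systems, so treat this only as a guide:

.. code-block:: sh

   %NCDF_INC    -I%NCDF_HOME/include -I%HDF5_HOME/include
   %NCDF_LIB    -L%NCDF_HOME/lib -L%HDF5_HOME/lib -lnetcdff -lnetcdf -lhdf5_hl -lhdf5
   %XIOS_INC    -I%XIOS_HOME/inc
   %XIOS_LIB    -L%XIOS_HOME/lib -lxios -lstdc++
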
     169Create and compile a new configuration 
     170====================================== 
     171 
     172The main script to {re}compile and create the executable is called :file:`makenemo` and is located at 
     173the root of the working copy. 
     174It is used to identify the routines you need from the source code, to build the makefile and to run it. 
     175As an example, compile a :file:`MY_GYRE` configuration from GYRE with 'my_arch': 
     176 
     177.. code-block:: sh 
     178 
     179   ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE' 
     180 
     181Then, at the end of the configuration compilation, 
     182the :file:`MY_GYRE` directory will have the following structure. 
     183 
     184+------------+----------------------------------------------------------------------------+ 
     185| Directory  | Purpose                                                                    | 
     186+============+============================================================================+ 
     187| ``BLD``    | BuiLD folder: target executable, headers, libs, preprocessed routines, ... | 
     188+------------+----------------------------------------------------------------------------+ 
     189| ``EXP00``  | Run   folder: link to executable, namelists, ``*.xml`` and IOs             | 
     190+------------+----------------------------------------------------------------------------+ 
     191| ``EXPREF`` | Files under version control only for :doc:`official configurations <cfgs>` | 
     192+------------+----------------------------------------------------------------------------+ 
     193| ``MY_SRC`` | New routines or modified copies of NEMO sources                            | 
     194+------------+----------------------------------------------------------------------------+ 
     195| ``WORK``   | Links to all raw routines from :file:`./src` considered                    | 
     196+------------+----------------------------------------------------------------------------+ 
     197 
     198After successful execution of the :file:`makenemo` command, 
     199the executable called ``nemo`` is available in the :file:`EXP00` directory. 
     200 
     201More :file:`makenemo` options 
     202----------------------------- 
     203 
     204``makenemo`` has several other options that can control which source files are selected and 
     205the operation of the build process itself. 
     206 
     207.. literalinclude:: ../../../makenemo 
     208   :language: text 
     209   :lines: 119-143 
     210   :caption: Output of ``makenemo -h`` 
     211 
     212These options can be useful for maintaining several code versions with only minor differences but 
     213they should be used sparingly. 
     214Note however the ``-j`` option which should be used more routinely to speed up the build process. 
     215For example: 
     216 
     217.. code-block:: sh 
     218 
     219        ./makenemo -m 'my_arch' -r GYRE -n 'MY_GYRE' -j 8 
     220 
     221will run up to 8 compilation processes simultaneously. 
     222 
     223Default behaviour 
     224----------------- 
     225 
     226At first use, 
     227you need the ``-m`` option to specify the architecture configuration file 
     228(compiler and its options, routines and libraries to include); 
     229for subsequent compilations, it is assumed you will be using the same compiler. 
     230If the ``-n`` option is not specified, the last compiled configuration will be used. 
     231 
     232Tools used during the process 
     233----------------------------- 
     234 
     235* :file:`functions.sh`: bash functions used by ``makenemo``, for instance to create the WORK directory 
     236* :file:`cfg.txt`     : text list of configurations and source directories 
     237* :file:`bld.cfg`     : FCM rules for compilation 
     238 
     239Examples 
     240-------- 
     241 
     242.. literalinclude:: ../../../makenemo 
     243   :language: text 
     244   :lines: 146-153 
     245 
     246Running the model 
     247================= 
     248 
     249Once :file:`makenemo` has run successfully, 
     250the ``nemo`` executable is available in :file:`./cfgs/MY_CONFIG/EXP00`. 
     251For the reference configurations, the :file:`EXP00` folder also contains the initial input files 
     252(namelists, ``*.xml`` files for the IOs, ...). 
     253If the configuration needs other input files, they have to be placed here. 
     254 
     255.. code-block:: sh 
     256 
     257   cd 'MY_CONFIG'/EXP00 
     258   mpirun -n $NPROCS ./nemo   # $NPROCS is the number of processes 
     259                              # mpirun is your MPI wrapper 
     260 
     261Viewing and changing list of active CPP keys 
     262============================================ 
     263 
     264For a given configuration (here called ``MY_CONFIG``), 
     265the list of active CPP keys can be found in :file:`./cfgs/'MY_CONFIG'/cpp_MY_CONFIG.fcm`. 
     266 
     267This text file can be edited by hand or with :file:`makenemo` to change the list of active CPP keys. 
     268Once changed, one needs to recompile ``nemo`` in order for this change to be taken into account. 
     269Note that most NEMO configurations will need to specify the following CPP keys: 
     270``key_iomput`` for IOs and ``key_mpp_mpi`` for parallelism. 
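
Keys can also be changed from the command line with the ``add_key`` / ``del_key`` arguments of
:file:`makenemo` (see the examples section above); for instance, for an existing configuration
(the key names here are just those quoted above):

.. code-block:: sh

   ./makenemo -n 'MY_CONFIG' -m 'my_arch' add_key 'key_iomput key_mpp_mpi' del_key 'key_agrif'
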
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/README.rst

    • Property svn:mergeinfo deleted
    r11586 r12165  
    1 :Release:  |release| 
    2 :Date:     |today| 
    3 :SVN rev.: |revision| 
     1.. todo:: 
    42 
    5 NEMO_ for **Nucleus for European Modelling of the Ocean** is a state-of-the-art modelling framework for 
     3 
     4 
     5NEMO_ for *Nucleus for European Modelling of the Ocean* is a state-of-the-art modelling framework for 
    66research activities and forecasting services in ocean and climate sciences, 
    77developed in a sustainable way by a European consortium since 2008. 
     
    1515The NEMO ocean model has 3 major components: 
    1616 
    17 - |OPA| models the ocean {thermo}dynamics and solves the primitive equations 
    18   (``./src/OCE``) 
    19 - |SI3| simulates seaice {thermo}dynamics, brine inclusions and subgrid-scale thickness variations 
    20   (``./src/ICE``) 
    21 - |TOP| models the {on,off}line oceanic tracers transport and biogeochemical processes 
    22   (``./src/TOP``) 
     17- |OCE| models the ocean {thermo}dynamics and solves the primitive equations 
     18  (:file:`./src/OCE`) 
     19- |ICE| simulates sea-ice {thermo}dynamics, brine inclusions and 
     20  subgrid-scale thickness variations (:file:`./src/ICE`) 
     21- |MBG| models the {on,off}line oceanic tracers transport and biogeochemical processes 
     22  (:file:`./src/TOP`) 
    2323 
    24 These physical core engines are described in their respective `references`_ that 
    25 must be cited for any work related to their use. 
     24These physical core engines are described in 
     25their respective `reference publications <#project-documentation>`_ that 
     26must be cited for any work related to their use (see :doc:`cite`). 
    2627 
    2728Assets and solutions 
     
    3334- Create :doc:`embedded zooms<zooms>` seamlessly thanks to 2-way nesting package AGRIF_. 
    3435- Opportunity to integrate an :doc:`external biogeochemistry model<tracers>` 
    35 - Versatile :doc:`data assimilation<data_assimilation>` 
    36 - Generation of :doc:`diagnostics<diagnostics>` through effective XIOS_ system 
    37 - Roll-out Earth system modeling with :doc:`coupling interface<coupling>` based on OASIS_ 
     36- Versatile :doc:`data assimilation<da>` 
     37- Generation of :doc:`diagnostics<diags>` through effective XIOS_ system 
     38- Roll-out Earth system modeling with :doc:`coupling interface<cplg>` based on OASIS_ 
    3839 
    39 Several :doc:`built-in configurations<configurations>` are provided to 
     40Several :doc:`built-in configurations<cfgs>` are provided to 
    4041evaluate the skills and performances of the model which 
    41 can be used as templates for setting up new configurations (``./cfgs``). 
     42can be used as templates for setting up new configurations (:file:`./cfgs`). 
    4243 
    43 The user can also check out available :doc:`idealized test cases<test_cases>` that 
    44 address specific physical processes (``./tests``). 
     44The user can also check out available :doc:`idealized test cases<tests>` that 
     45address specific physical processes (:file:`./tests`). 
    4546 
    46 A set of :doc:`utilities <tools>` is also provided to {pre,post}process your data (``./tools``). 
     47A set of :doc:`utilities <tools>` is also provided to {pre,post}process your data (:file:`./tools`). 
    4748 
    4849Project documentation 
     
    5051 
    5152A walkthrough tutorial illustrates how to get code dependencies, compile and execute NEMO 
    52 (``./INSTALL.rst``). 
     53(:file:`./INSTALL.rst`). 
    5354 
    5455Reference manuals and quick start guide can be built from source and 
    55 exported to HTML or PDF formats (``./doc``) or 
    56 downloaded directly from the :website:`website<bibliography/documentation>`. 
     56exported to HTML or PDF formats (:file:`./doc`) or 
     57downloaded directly from the :forge:`development platform<wiki/Documentations>`. 
    5758 
    58 =========== ===================== =============== 
    59  Component   Reference Manual      Quick start 
    60 =========== ===================== =============== 
    61  |OPA|       |NEMO manual|_        |NEMO guide| 
    62              :cite:`NEMO_manual` 
    63  |SI3|       |SI3 manual| 
    64              :cite:`SI3_manual` 
    65  |TOP|       |TOP manual| 
    66              :cite:`TOP_manual` 
    67 =========== ===================== =============== 
     59============ ================== =================== 
     60 Component    Reference Manual   Quick Start Guide 
     61============ ================== =================== 
     62 |NEMO-OCE|   |DOI man OCE|_     |DOI qsg| 
     63 |NEMO-ICE|   |DOI man ICE| 
     64 |NEMO-MBG|   |DOI man MBG| 
     65============ ================== =================== 
    6866 
    6967Since 2014 the project has a `Special Issue`_ in the open-access journal 
    70 Geoscientific Model Development (GMD) from the European Geosciences Union (EGU). 
     68Geoscientific Model Development (GMD) from the European Geosciences Union (EGU_). 
    7169The main scope is to collect relevant manuscripts covering various topics and 
    7270to provide a single portal to assess the model potential and evolution. 
     
    7977================= 
    8078 
    81 The NEMO Consortium, pulling together 5 European institutes (CMCC_, CNRS_, MOI_, `Met Office`_ and NERC_), 
    82 has planned the sustainable development of NEMO in order to keep a reliable evolving framework since 2008. 
     79The NEMO Consortium, pulling together 5 European institutes 
     80(CMCC_, CNRS_, MOI_, `Met Office`_ and NERC_), has planned the sustainable development of NEMO in order to 
     81keep a reliable evolving framework since 2008. 
    8382 
    84 It defines the |NEMO strategy|_ that is implemented by the System Team on a yearly basis in order to 
    85 release a new version almost every four years. 
     83It defines the |DOI dev stgy|_ that is implemented by the System Team on a yearly basis 
     84in order to release a new version almost every four years. 
    8685 
    8786When the need arises, :forge:`working groups<wiki/WorkingGroups>` are created or resumed to 
    8887gather the community expertise for advising on the development activities. 
    8988 
     89.. |DOI dev stgy| replace:: multi-year development strategy 
    9090 
    91 .. Substitutions / Links 
     91Disclaimer 
     92========== 
    9293 
    93 .. |NEMO manual| image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1464816.svg 
    94 .. |NEMO guide|  image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1475325.svg 
    95 .. |SI3 manual|  image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1471689.svg 
    96 .. |TOP manual|  image:: https://zenodo.org/badge/DOI/10.5281/zenodo.1471700.svg 
     94The NEMO source code is freely available and distributed under 
     95:download:`CeCILL v2.0 license <../../../LICENSE>` (GNU GPL compatible). 
    9796 
    98 .. |NEMO strategy| replace:: multi-year development strategy 
    99  
    100 .. _Special Issue: https://www.geosci-model-dev.net/special_issue40.html 
     97You can use, modify and/or redistribute the software under its terms, 
     98but users are provided only with a limited warranty, and the software's authors and 
     99the successive licensors have only limited liability. 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/REFERENCES.bib

    • Property svn:mergeinfo deleted
    r11586 r12165  
    1 @manual{NEMO_manual, 
    2    title={NEMO ocean engine}, 
    3    author={Madec Gurvan and NEMO System Team}, 
    4    organization={NEMO Consortium}, 
    5    journal={Notes du Pôle de modélisation de l\'Institut Pierre-Simon Laplace (IPSL)}, 
    6    number={27}, 
    7    publisher={Zenodo}, 
    8    abstract={The ocean engine of NEMO is a primitive equation model adapted to regional and 
    9    global ocean circulation problems. 
    10    It is intended to be a flexible tool for studying the ocean and its interactions with 
    11    the others components of the earth climate system over a wide range of space and time scales.}, 
    12    doi={10.5281/zenodo.1464816}, 
    13    edition={}, 
    14    year={} 
     1@manual{NEMO_man, 
     2   title="NEMO ocean engine", 
     3   author="NEMO System Team", 
     4   series="Scientific Notes of Climate Modelling Center", 
     5   number="27", 
     6   institution="Institut Pierre-Simon Laplace (IPSL)", 
     7   publisher="Zenodo", 
     8   doi="10.5281/zenodo.1464816", 
    159} 
     10%   edition="", 
     11%   year="" 
    1612 
    17 @manual{SI3_manual, 
    18    title={SI³ – Sea Ice modelling Integrated Initiative – The NEMO Sea Ice engine}, 
    19    author={NEMO Sea Ice Working Group}, 
    20    organization={NEMO Consortium}, 
    21    journal={Notes du Pôle de modélisation de l\'Institut Pierre-Simon Laplace (IPSL)}, 
    22    number={31}, 
    23    publisher={Zenodo}, 
    24    abstract={SI³ (Sea Ice modelling Integrated Initiative) is the sea ice engine of NEMO 
    25    (Nucleus for European Modelling of the Ocean). 
    26    SI³ is based on the Arctic Ice Dynamics Joint EXperiment (AIDJEX) framework, 
    27    combining the ice thickness distribution framework, the conservation of horizontal momentum, 
    28    an elastic-viscous plastic rheology, and energy-conserving halo-thermodynamics. 
    29    SI³ is interfaced with the NEMO ocean engine, and, via the OASIS coupler, with 
    30    several atmospheric general circulation models. 
    31    It also supports two-way grid embedding via the AGRIF software.}, 
    32    doi={10.5281/zenodo.1471689}, 
    33    edition={}, 
    34    year={} 
     13@manual{SI3_man, 
     14   title="Sea Ice modelling Integrated Initiative (SI$^3$) -- The NEMO Sea Ice engine", 
     15   author="NEMO Sea Ice Working Group", 
     16   series="Scientific Notes of Climate Modelling Center", 
     17   number="31", 
     18   institution="Institut Pierre-Simon Laplace (IPSL)", 
     19   publisher="Zenodo", 
     20   doi="10.5281/zenodo.1471689", 
    3521} 
     22%   edition="", 
     23%   year="" 
    3624 
    37 @manual{TOP_manual, 
    38    title={TOP – Tracers in Ocean Paradigm – The NEMO Tracers engine}, 
    39    author={NEMO TOP Working Group}, 
    40    organization={NEMO Consortium}, 
    41    journal={Notes du Pôle de modélisation de l\'Institut Pierre-Simon Laplace (IPSL)}, 
    42    number={28}, 
    43    publisher={Zenodo}, 
    44    abstract={}, 
    45    doi={10.5281/zenodo.1471700}, 
    46    edition={}, 
    47    year={} 
     25@manual{TOP_man, 
     26   title="Tracers in Ocean Paradigm (TOP) -- The NEMO Tracers engine", 
     27   author="NEMO TOP Working Group", 
     28   series="Scientific Notes of Climate Modelling Center", 
     29   number="28", 
     30   institution="Institut Pierre-Simon Laplace (IPSL)", 
     31   publisher="Zenodo", 
     32   doi="10.5281/zenodo.1471700", 
    4833} 
     34%   edition="", 
     35%   year="" 
    4936 
    50 @Article{gmd-8-1245-2015, 
    51    author = {Vidard, A. and Bouttier, P.-A. and Vigilant, F.}, 
    52    title = {{NEMOTAM}: {T}angent and {A}djoint {M}odels for the ocean modelling platform {NEMO}}, 
    53    journal = {Geoscientific Model Development}, 
    54    volume = {8}, 
    55    year = {2015}, 
    56    number = {4}, 
    57    pages = {1245--1257}, 
    58    doi = {10.5194/gmd-8-1245-2015} 
     37@article{TAM_pub, 
     38   author = "Vidard, A. and Bouttier, P.-A. and Vigilant, F.", 
     39   title = "NEMOTAM: Tangent and Adjoint Models for the ocean modelling platform NEMO", 
     40   journal = "Geoscientific Model Development", 
     41   volume = "8", 
     42   year = "2015", 
     43   number = "4", 
     44   pages = "1245--1257", 
     45   doi = "10.5194/gmd-8-1245-2015" 
    5946} 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/AGRIF_DEMO/EXPREF/namelist_ice_cfg

    r10535 r12165  
    3838&namdyn_rhg     !   Ice rheology 
    3939!------------------------------------------------------------------------------ 
     40      ln_aEVP       = .false.          !     adaptive rheology (Kimmritz et al. 2016 & 2017) 
    4041/ 
    4142!------------------------------------------------------------------------------ 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/AGRIF_DEMO/README.rst

    r10460 r12165  
    22Embedded zooms 
    33************** 
     4 
     5.. todo:: 
     6 
     7 
    48 
    59.. contents:: 
     
    913======== 
    1014 
    11 AGRIF (Adaptive Grid Refinement In Fortran) is a library that allows the seamless space and time refinement over 
    12 rectangular regions in NEMO. 
     15AGRIF (Adaptive Grid Refinement In Fortran) is a library that 
     16allows the seamless space and time refinement over rectangular regions in NEMO. 
    1317Refinement factors can be odd or even (usually lower than 5 to maintain stability). 
    14 Interaction between grids is "two-way" in the sense that the parent grid feeds the child grid open boundaries and 
    15 the child grid provides volume averages of prognostic variables once a given number of time steps is completed. 
     18Interaction between grids is "two-way" in the sense that 
     19the parent grid feeds the child grid open boundaries and 
     20the child grid provides volume averages of prognostic variables once 
     21a given number of time steps is completed. 
    1622These pages provide guidelines on how to use AGRIF in NEMO. 
    17 For a more technical description of the library itself, please refer to http://agrif.imag.fr. 
     23For a more technical description of the library itself, please refer to AGRIF_. 
    1824 
    1925Compilation 
    2026=========== 
    2127 
    22 Activating AGRIF requires appending the cpp key ``key_agrif`` at compilation time: 
     28Activating AGRIF requires appending the cpp key ``key_agrif`` at compilation time: 
    2329 
    2430.. code-block:: sh 
    2531 
    26    ./makenemo add_key 'key_agrif' 
     32   ./makenemo [...] add_key 'key_agrif' 
    2733 
    28 Although this is transparent to users, the way the code is processed during compilation is different from 
    29 the standard case: 
    30 a preprocessing stage (the so-called "conv" program) translates the actual code so that 
     34Although this is transparent to users, 
     35the way the code is processed during compilation is different from the standard case: 
     36a preprocessing stage (the so-called ``conv`` program) translates the actual code so that 
    3137saved arrays may be switched in memory space from one domain to another. 
    3238 
     
    3440================================ 
    3541 
    36 An additional text file ``AGRIF_FixedGrids.in`` is required at run time. 
     42An additional text file :file:`AGRIF_FixedGrids.in` is required at run time. 
    3743This is where the grid hierarchy is defined. 
    38 An example of such a file, here taken from the ``ICEDYN`` test case, is given below:: 
     44An example of such a file, here taken from the ``ICEDYN`` test case, is given below 
    3945 
    40    1 
    41    34 63 34 63 3 3 3 
    42    0 
     46.. literalinclude:: ../../../tests/ICE_AGRIF/EXPREF/AGRIF_FixedGrids.in 
    4347 
    4448The first line indicates the number of zooms (1). 
    4549The second line contains the starting and ending indices in both directions on the root grid 
    46 (imin=34 imax=63 jmin=34 jmax=63) followed by the space and time refinement factors (3 3 3). 
     50(``imin=34 imax=63 jmin=34 jmax=63``) followed by the space and time refinement factors (3 3 3). 
    4751The last line is the number of child grids nested in the refined region (0). 
    4852A more complex example with telescoping grids can be found below and 
    49 in the ``AGRIF_DEMO`` reference configuration directory. 
     53in the :file:`AGRIF_DEMO` reference configuration directory. 
    5054 
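
As a purely illustrative sketch of the recursive format (the indices and factors below are invented and do
not correspond to the actual ``AGRIF_DEMO`` setup), a telescoping two-level hierarchy lists, after each grid
definition, the number of grids nested inside it::

   1
   34 63 34 63 3 3 3
   1
   10 20 10 20 3 3 3
   0
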
    51 [Add some plots here with grid staggering and positioning ?] 
     55.. todo:: 
    5256 
    53 When creating the nested domain, one must keep in mind that the child domain is shifted toward north-east and 
    54 depends on the number of ghost cells as illustrated by the (attempted) drawing below for nbghostcells=1 and 
    55 nbghostcells=3. 
    56 The grid refinement is 3 and nxfin is the number of child grid points in i-direction.   
     57   Add some plots here with grid staggering and positioning? 
     58 
     59When creating the nested domain, one must keep in mind that 
     60the child domain is shifted toward north-east and 
     61depends on the number of ghost cells as illustrated by 
     62the *attempted* drawing below for ``nbghostcells=1`` and ``nbghostcells=3``. 
     63The grid refinement is 3 and ``nxfin`` is the number of child grid points in i-direction. 
    5764 
    5865.. image:: _static/agrif_grid_position.jpg 
     
    6269boundary data exchange and update being only performed between root and child grids. 
    6370Use of east-west periodic or north-fold boundary conditions is not allowed in child grids either. 
    64 Defining for instance a circumpolar zoom in a global model is therefore not possible.  
     71Defining for instance a circumpolar zoom in a global model is therefore not possible. 
    6572 
    6673Preprocessing 
    6774============= 
    6875 
    69 Knowing the refinement factors and area, a ``NESTING`` pre-processing tool may help to create needed input files 
     76Knowing the refinement factors and area, 
     77a ``NESTING`` pre-processing tool may help to create needed input files 
    7078(mesh file, restart, climatological and forcing files). 
    7179The key is to ensure volume matching near the child grid interface, 
    72 a step done by invoking the ``Agrif_create_bathy.exe`` program. 
    73 You may use the namelists provided in the ``NESTING`` directory as a guide. 
     80a step done by invoking the :file:`Agrif_create_bathy.exe` program. 
     81You may use the namelists provided in the :file:`NESTING` directory as a guide. 
    7482These correspond to the namelists used to create ``AGRIF_DEMO`` inputs. 
    7583 
     
    7886 
    7987Each child grid expects to read its own namelist so that different numerical choices can be made 
    80 (these should be stored in the form ``1_namelist_cfg``, ``2_namelist_cfg``, etc... according to their rank in 
    81 the grid hierarchy). 
     88(these should be stored in the form :file:`1_namelist_cfg`, :file:`2_namelist_cfg`, etc... 
     89according to their rank in the grid hierarchy). 
    8290Consistent time steps and number of steps with the chosen time refinement have to be provided. 
    8391Specific to AGRIF is the following block: 
    8492 
    85 .. code-block:: fortran 
    86  
    87    !----------------------------------------------------------------------- 
    88    &namagrif      !  AGRIF zoom                                            ("key_agrif") 
    89    !----------------------------------------------------------------------- 
    90       ln_spc_dyn    = .true.  !  use 0 as special value for dynamics 
    91       rn_sponge_tra = 2880.   !  coefficient for tracer   sponge layer [m2/s] 
    92       rn_sponge_dyn = 2880.   !  coefficient for dynamics sponge layer [m2/s] 
    93       ln_chk_bathy  = .false. !  =T  check the parent bathymetry 
    94    /              
     93.. literalinclude:: ../../namelists/namagrif 
     94   :language: fortran 
    9595 
    9696where sponge layer coefficients have to be chosen according to the child grid mesh size. 
    9797The sponge area is hard coded in NEMO and applies to the following grid points: 
    98 2 x refinement factor (from i=1+nbghostcells+1 to i=1+nbghostcells+sponge_area)  
     982 x refinement factor (from ``i=1+nbghostcells+1`` to ``i=1+nbghostcells+sponge_area``) 
    9999 
    100 References 
    101 ========== 
     100.. rubric:: References 
    102101 
    103102.. bibliography:: zooms.bib 
    104    :all: 
    105    :style: unsrt 
    106    :labelprefix: A 
    107    :keyprefix: a- 
     103   :all: 
     104   :style: unsrt 
     105   :labelprefix: A 
     106   :keyprefix: a- 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/ORCA2_ICE_PISCES/EXPREF/namelist_ice_cfg

    r10535 r12165  
    3838&namdyn_rhg     !   Ice rheology 
    3939!------------------------------------------------------------------------------ 
     40      ln_aEVP       = .false.          !     adaptive rheology (Kimmritz et al. 2016 & 2017) 
    4041/ 
    4142!------------------------------------------------------------------------------ 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/README.rst

    r10694 r12165  
    1 ************************ 
    2 Reference configurations 
    3 ************************ 
     1******************************** 
     2Run the Reference configurations 
     3******************************** 
     4 
     5.. todo:: 
     6 
     7   Lack of illustrations for ref. cfgs, and more generally in the guide. 
    48 
    59NEMO is distributed with a set of reference configurations allowing both 
     
    711the developer to test/validate their NEMO developments (using the SETTE package). 
    812 
     13.. contents:: 
     14   :local: 
     15   :depth: 1 
     16 
    917.. attention:: 
    1018 
     
    2129=========================================================== 
    2230 
    23 A user who wants to compile the ORCA2_ICE_PISCES_ reference configuration using ``makenemo`` 
    24 should use the following, by selecting among available architecture files or providing a user-defined one: 
     31To compile the ORCA2_ICE_PISCES_ reference configuration using :file:`makenemo`, 
     32one should use the following, by selecting among available architecture files or 
     33providing a user-defined one: 
    2534 
    2635.. code-block:: console 
    27                  
    28    $ ./makenemo -r 'ORCA2_ICE_PISCES' -m 'my-fortran.fcm' -j '4' 
     36 
     37   $ ./makenemo -r 'ORCA2_ICE_PISCES' -m 'my_arch' -j '4' 
    2938 
    3039A new ``EXP00`` folder will be created within the selected reference configurations, 
    31 namely ``./cfgs/ORCA2_ICE_PISCES/EXP00``, 
    32 where it will be necessary to uncompress the Input & Forcing Files listed in the above table. 
     40namely ``./cfgs/ORCA2_ICE_PISCES/EXP00``. 
     41It will be necessary to uncompress the archives listed in the above table for 
     42the given reference configuration that includes input & forcing files. 
    3343 
    3444Then it will be possible to launch the execution of the model through a runscript 
    3545(suitably adapted to the user's system). 
    36     
     46 
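
A minimal sketch of such a runscript is given below; the MPI launcher, the number of processes and the paths
are illustrative and must be adapted to your machine and batch system:

.. code-block:: sh

   #!/bin/bash
   # illustrative runscript: adapt scheduler directives, paths and process count
   cd ./cfgs/ORCA2_ICE_PISCES/EXP00
   mpirun -n 32 ./nemo
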
    3747List of Configurations 
    3848====================== 
    3949 
    40 All forcing files listed below in the table are available from |NEMO archives URL|_ 
    41  
    42 .. |NEMO archives URL| image:: https://www.zenodo.org/badge/DOI/10.5281/zenodo.1472245.svg 
    43 .. _NEMO archives URL: https://doi.org/10.5281/zenodo.1472245 
    44  
    45 ====================== ===== ===== ===== ======== ======= ================================================ 
    46  Configuration                     Component(s)                            Input & Forcing File(s) 
    47 ---------------------- ---------------------------------- ------------------------------------------------ 
    48  Name                   OPA   SI3   TOP   PISCES   AGRIF 
    49 ====================== ===== ===== ===== ======== ======= ================================================ 
    50  AGRIF_DEMO_             X     X                     X     AGRIF_DEMO_v4.0.tar, ORCA2_ICE_v4.0.tar 
    51  AMM12_                  X                                 AMM12_v4.0.tar 
    52  C1D_PAPA_               X                                 INPUTS_C1D_PAPA_v4.0.tar 
    53  GYRE_BFM_               X           X                     *none* 
    54  GYRE_PISCES_            X           X      X              *none* 
    55  ORCA2_ICE_PISCES_       X     X     X      X              ORCA2_ICE_v4.0.tar, INPUTS_PISCES_v4.0.tar 
    56  ORCA2_OFF_PISCES_                   X      X              ORCA2_OFF_v4.0.tar, INPUTS_PISCES_v4.0.tar 
    57  ORCA2_OFF_TRC_                      X                     ORCA2_OFF_v4.0.tar 
    58  ORCA2_SAS_ICE_                X                           ORCA2_ICE_v4.0.tar, INPUTS_SAS_v4.0.tar 
    59  SPITZ12_                X     X                           SPITZ12_v4.0.tar 
    60 ====================== ===== ===== ===== ======== ======= ================================================ 
     50All forcing files listed below in the table are available from |DOI data|_ 
     51 
     52=================== === === === === === ================================== 
     53 Configuration       Component(s)        Archives (input & forcing files) 
     54------------------- ------------------- ---------------------------------- 
     55 Name                O   S   T   P   A 
     56=================== === === === === === ================================== 
     57 AGRIF_DEMO_         X   X           X   AGRIF_DEMO_v4.0.tar, 
     58                                         ORCA2_ICE_v4.0.tar 
     59 AMM12_              X                   AMM12_v4.0.tar 
     60 C1D_PAPA_           X                   INPUTS_C1D_PAPA_v4.0.tar 
     61 GYRE_BFM_           X       X           *none* 
     62 GYRE_PISCES_        X       X   X       *none* 
     63 ORCA2_ICE_PISCES_   X   X   X   X       ORCA2_ICE_v4.0.tar, 
     64                                         INPUTS_PISCES_v4.0.tar 
     65 ORCA2_OFF_PISCES_           X   X       ORCA2_OFF_v4.0.tar, 
     66                                         INPUTS_PISCES_v4.0.tar 
     67 ORCA2_OFF_TRC_              X           ORCA2_OFF_v4.0.tar 
     68 ORCA2_SAS_ICE_          X               ORCA2_ICE_v4.0.tar, 
     69                                         INPUTS_SAS_v4.0.tar 
     70 SPITZ12_            X   X               SPITZ12_v4.0.tar 
     71=================== === === === === === ================================== 
     72 
     73.. admonition:: Legend for component combination 
     74 
     75   O for OCE, S for SI\ :sup:`3`, T for TOP, P for PISCES and A for AGRIF 
    6176 
    6277AGRIF_DEMO 
     
    7287particular interest to test sea ice coupling. 
    7388 
     89.. image:: _static/AGRIF_DEMO_no_cap.jpg 
     90   :scale: 66% 
     91   :align: center 
     92 
    7493The 1:1 grid can be used alone as a benchmark to check that 
    75 the model solution is not corrupted by grid exchanges.  
     94the model solution is not corrupted by grid exchanges. 
    7695Note that since grids interact only at the baroclinic time level, 
    7796numerically exact results can not be achieved in the 1:1 case. 
    78 Perfect reproducibility is obtained only by switching to a fully explicit setup instead of a split explicit free surface scheme. 
     97Perfect reproducibility is obtained only by switching to a fully explicit setup instead of 
     98a split explicit free surface scheme. 
    7999 
    80100AMM12 
     
    85105a regular horizontal grid of ~12 km of resolution (see :cite:`ODEA2012`). 
    86106 
    87 This configuration allows to tests several features of NEMO specifically addressed to the shelf seas.  
     107.. image:: _static/AMM_domain.png 
     108   :align: center 
     109 
     110This configuration allows testing several features of NEMO specifically addressing the shelf seas. 
    88111In particular, ``AMM12`` accounts for a vertical s-coordinate system, the GLS turbulence scheme, 
    89112and tidal lateral boundary conditions using a Flather scheme (see more in ``BDY``). 
     
    99122-------- 
    100123 
    101 ``C1D_PAPA`` is a 1D configuration for the `PAPA station <http://www.pmel.noaa.gov/OCS/Papa/index-Papa.shtml>`_ located in the northern-eastern Pacific Ocean at 50.1°N, 144.9°W. 
    102 See `Reffray et al. (2015) <http://www.geosci-model-dev.net/8/69/2015>`_ for the description of its physical and numerical turbulent-mixing behaviour. 
    103  
    104 The water column setup, called NEMO1D, is activated with the inclusion of the CPP key ``key_c1d`` and 
    105 has a horizontal domain of 3x3 grid points. 
    106  
    107 This reference configuration uses 75 vertical levels grid (1m at the surface), GLS turbulence scheme with K-epsilon closure and the NCAR bulk formulae. 
     124.. figure:: _static/Papa2015.jpg 
     125   :height: 225px 
     126   :align:  left 
     127 
     128``C1D_PAPA`` is a 1D configuration for the `PAPA station`_ located in 
     129the north-eastern Pacific Ocean at 50.1°N, 144.9°W. 
     130See :gmd:`Reffray et al. (2015) <8/69/2015>` for the description of 
     131its physical and numerical turbulent-mixing behaviour. 
     132 
     133| The water column setup, called NEMO1D, is activated with 
     134  the inclusion of the CPP key ``key_c1d`` and 
     135  has a horizontal domain of 3x3 grid points. 
     136| This reference configuration uses a 75-level vertical grid (1 m at the surface), 
     137  GLS turbulence scheme with K-epsilon closure and the NCAR bulk formulae. 
     138 
    108139Data provided with the ``INPUTS_C1D_PAPA_v4.0.tar`` file account for: 
    109140 
    110 - ``forcing_PAPASTATION_1h_y201[0-1].nc`` : ECMWF operational analysis atmospheric forcing rescaled to 1h (with long and short waves flux correction) for years 2010 and 2011 
    111 - ``init_PAPASTATION_m06d15.nc`` : Initial Conditions from observed data and Levitus 2009 climatology 
    112 - ``chlorophyll_PAPASTATION.nc`` : surface chlorophyll file from Seawifs data 
     141- :file:`forcing_PAPASTATION_1h_y201[0-1].nc`: 
     142  ECMWF operational analysis atmospheric forcing rescaled to 1h 
     143  (with long and short waves flux correction) for years 2010 and 2011 
     144- :file:`init_PAPASTATION_m06d15.nc`: Initial Conditions from 
     145  observed data and Levitus 2009 climatology 
     146- :file:`chlorophyll_PAPASTATION.nc`: surface chlorophyll file from Seawifs data 
    113147 
    114148GYRE_BFM 
    115149-------- 
    116150 
    117 ``GYRE_BFM`` shares the same physical setup of GYRE_PISCES_, but NEMO is coupled with the `BFM <http://www.bfm-community.eu/>`_ biogeochemical model as described in ``./cfgs/GYRE_BFM/README``. 
     151``GYRE_BFM`` shares the same physical setup as GYRE_PISCES_, 
     152but NEMO is coupled with the `BFM`_ biogeochemical model as described in ``./cfgs/GYRE_BFM/README``. 
    118153 
    119154GYRE_PISCES 
     
    123158in the Beta-plane approximation with a regular 1° horizontal resolution and 31 vertical levels, 
    124159with PISCES BGC model :cite:`gmd-8-2465-2015`. 
    125 Analytical forcing for heat, freshwater and wind-stress fields are applied.   
    126  
    127 This configuration acts also as demonstrator of the **user defined setup** (``ln_read_cfg = .false.``) and 
    128 grid setting are handled through the ``&namusr_def`` controls in ``namelist_cfg``: 
     160Analytical forcing for heat, freshwater and wind-stress fields is applied. 
     161 
     162This configuration also acts as a demonstrator of the **user defined setup** 
     163(``ln_read_cfg = .false.``), and grid settings are handled through 
     164the ``&namusr_def`` controls in :file:`namelist_cfg`: 
    129165 
    130166.. literalinclude:: ../../../cfgs/GYRE_PISCES/EXPREF/namelist_cfg 
    131167   :language: fortran 
    132    :lines: 34-42 
     168   :lines:    35-41 
    133169 
    134170Note that the default grid size is 30x20 grid points (with ``nn_GYRE = 1``) and 
    135171vertical levels are set by ``jpkglo``. 
    136 The specific code changes can be inspected in ``./src/OCE/USR``. 
    137  
    138 **Running GYRE as a benchmark** : 
    139 this simple configuration can be used as a benchmark since it is easy to increase resolution, 
    140 with the drawback of getting results that have a very limited physical meaning. 
    141  
    142 GYRE grid resolution can be increased at runtime by setting a different value of ``nn_GYRE`` (integer multiplier scaling factor), as described in the following table:  
    143  
    144 =========== ========= ========== ============ =================== 
    145 ``nn_GYRE``  *jpiglo*  *jpjglo*   ``jpkglo``   **Equivalent to** 
    146 =========== ========= ========== ============ =================== 
    147  1           30        20         31           GYRE 1° 
    148  25          750       500        101          ORCA 1/2° 
    149  50          1500      1000       101          ORCA 1/4° 
    150  150         4500      3000       101          ORCA 1/12° 
    151  200         6000      4000       101          ORCA 1/16° 
    152 =========== ========= ========== ============ =================== 
    153  
    154 Note that, it is necessary to set ``ln_bench = .true.`` in ``namusr_def`` to 
    155 avoid problems in the physics computation and that 
    156 the model timestep should be adequately rescaled.  
    157  
    158 For example if ``nn_GYRE = 150``, equivalent to an ORCA 1/12° grid, 
    159 the timestep ``rn_rdt = 1200`` should be set to 1200 seconds 
    160  
    161 Differently from previous versions of NEMO, 
    162 the code uses by default the time-splitting scheme and 
    163 internally computes the number of sub-steps.  
     172The specific code changes can be inspected in :file:`./src/OCE/USR`. 
     173 
     174.. rubric:: Running GYRE as a benchmark 
     175 
     176| This simple configuration can be used as a benchmark since it is easy to increase resolution, 
     177  with the drawback of getting results that have a very limited physical meaning. 
     178| GYRE grid resolution can be increased at runtime by setting a different value of ``nn_GYRE`` 
     179  (integer multiplier scaling factor), as described in the following table: 
     180 
     181=========== ============ ============ ============ =============== 
     182``nn_GYRE``  ``jpiglo``   ``jpjglo``   ``jpkglo``   Equivalent to 
     183=========== ============ ============ ============ =============== 
     184 1           30           20           31           GYRE 1° 
     185 25          750          500          101          ORCA 1/2° 
     186 50          1500         1000         101          ORCA 1/4° 
     187 150         4500         3000         101          ORCA 1/12° 
     188 200         6000         4000         101          ORCA 1/16° 
     189=========== ============ ============ ============ =============== 
     190 
     191| Note that it is necessary to set ``ln_bench = .true.`` in ``&namusr_def`` to 
     192  avoid problems in the physics computation, and that 
     193  the model timestep should be rescaled accordingly. 
     194| For example, if ``nn_GYRE = 150``, equivalent to an ORCA 1/12° grid, 
     195  the timestep ``rn_rdt`` should be set to 1200 seconds. 
     196  Unlike previous versions of NEMO, the code uses the time-splitting scheme by default and 
     197  internally computes the number of sub-steps. 
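
As an illustration, a minimal :file:`namelist_cfg` sketch for such a benchmark run could
look like the block below; only ``nn_GYRE``, ``jpkglo``, ``ln_bench`` and ``rn_rdt`` are
taken from the text above, and the namelist group holding ``rn_rdt`` (``&namdom`` here) is
an assumption to be checked against the reference namelist.

.. code-block:: fortran

   !-----------------------------------------------------------------------
   &namusr_def    !   GYRE user defined namelist
   !-----------------------------------------------------------------------
      nn_GYRE  = 150       ! multiplier: 150 => 4500 x 3000 points (~ ORCA 1/12 deg)
      jpkglo   = 101       ! number of vertical levels
      ln_bench = .true.    ! benchmark mode, mandatory when increasing resolution
   /
   !-----------------------------------------------------------------------
   &namdom        !   space and time domain (group name assumed)
   !-----------------------------------------------------------------------
      rn_rdt   = 1200.     ! time step [s] rescaled for the ~ORCA 1/12 deg equivalent grid
   /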
    164198 
    165199ORCA2_ICE_PISCES 
     
    174208the ratio of anisotropy is nearly one everywhere 
    175209 
    176 this configuration uses the three components  
    177  
    178 - |OPA|, the ocean dynamical core  
    179 - |SI3|, the thermodynamic-dynamic sea ice model. 
    180 - |TOP|, passive tracer transport module and PISCES BGC model :cite:`gmd-8-2465-2015` 
     210This configuration uses the three components 
     211 
     212- |OCE|, the ocean dynamical core 
     213- |ICE|, the thermodynamic-dynamic sea ice model 
     214- |MBG|, the passive tracer transport module and PISCES BGC model :cite:`gmd-8-2465-2015` 
    181215 
    182216All components share the same grid. 
    183  
    184217The model is forced with CORE-II normal year atmospheric forcing and 
    185218it uses the NCAR bulk formulae. 
    186219 
    187 In this ``ORCA2_ICE_PISCES`` configuration, 
    188 AGRIF nesting can be activated that includes a nested grid in the Agulhas region. 
    189  
    190 To set up this configuration, after extracting NEMO: 
    191  
    192 Build your AGRIF configuration directory from ``ORCA2_ICE_PISCES``, 
    193 with the ``key_agrif`` CPP key activated: 
    194  
    195 .. code-block:: console 
    196                  
    197         $ ./makenemo -r 'ORCA2_ICE_PISCES' -n 'AGRIF' add_key 'key_agrif' 
    198  
    199 By using the input files and namelists for ``ORCA2_ICE_PISCES``, 
    200 the AGRIF test configuration is ready to run. 
    201  
    202 **Ocean Physics** 
    203  
    204 - *horizontal diffusion on momentum*: the eddy viscosity coefficient depends on the geographical position. It is taken as 40000 m^2/s, reduced in the equator regions (2000 m^2/s) excepted near the western boundaries. 
    205 - *isopycnal diffusion on tracers*: the diffusion acts along the isopycnal surfaces (neutral surface) with an eddy diffusivity coefficient of 2000 m^2/s. 
    206 - *Eddy induced velocity parametrization* with a coefficient that depends on the growth rate of baroclinic instabilities (it usually varies from 15 m^2/s to 3000 m^2/s). 
    207 - *lateral boundary conditions* : zero fluxes of heat and salt and no-slip conditions are applied through lateral solid boundaries. 
    208 - *bottom boundary condition* : zero fluxes of heat and salt are applied through the ocean bottom. 
    209   The Beckmann [19XX] simple bottom boundary layer parameterization is applied along continental slopes. 
    210   A linear friction is applied on momentum. 
    211 - *convection*: the vertical eddy viscosity and diffusivity coefficients are increased to 1 m^2/s in case of static instability. 
    212 - *time step* is 5760sec (1h36') so that there is 15 time steps in one day. 
     220.. rubric:: Ocean Physics 
     221 
     222:horizontal diffusion on momentum: 
     223   the eddy viscosity coefficient depends on the geographical position. 
     224   It is taken as 40000 m\ :sup:`2`/s, reduced in the equatorial regions (2000 m\ :sup:`2`/s) 
     225   except near the western boundaries. 
     226:isopycnal diffusion on tracers: 
     227   the diffusion acts along the isopycnal surfaces (neutral surface) with 
     228   an eddy diffusivity coefficient of 2000 m\ :sup:`2`/s. 
     229:Eddy induced velocity parametrization: 
     230   With a coefficient that depends on the growth rate of baroclinic instabilities 
     231   (it usually varies from 15 m\ :sup:`2`/s to 3000 m\ :sup:`2`/s). 
     232:lateral boundary conditions: 
     233   Zero fluxes of heat and salt and no-slip conditions are applied through lateral solid boundaries. 
     234:bottom boundary condition: 
     235   Zero fluxes of heat and salt are applied through the ocean bottom. 
     236   The Beckmann [19XX] simple bottom boundary layer parameterization is applied along 
     237   continental slopes. 
     238   A linear friction is applied on momentum. 
     239:convection: 
     240   The vertical eddy viscosity and diffusivity coefficients are increased to 1 m\ :sup:`2`/s in 
     241   case of static instability. 
     242:time step: 5760 s (1h36'), so that there are 15 time steps in one day. 
    213243 
    214244ORCA2_OFF_PISCES 
     
    218248but only the PISCES model is an active component of TOP. 
    219249 
    220  
    221250ORCA2_OFF_TRC 
    222251------------- 
    223252 
    224 ``ORCA2_OFF_TRC`` is based on the ORCA2 global ocean configuration 
    225 (see ORCA2_ICE_PISCES_ for general description) along with the tracer passive transport module (TOP), but dynamical fields are pre-calculated and read with specific time frequency. 
    226  
    227 This enables for an offline coupling of TOP components, 
    228 here specifically inorganic carbon compounds (cfc11, cfc12, sf6, c14) and water age module (age). 
    229 See ``namelist_top_cfg`` to inspect the selection of each component with the dedicated logical keys. 
     253| ``ORCA2_OFF_TRC`` is based on the ORCA2 global ocean configuration 
     254  (see ORCA2_ICE_PISCES_ for general description) along with 
     255  the tracer passive transport module (TOP), 
     256  but dynamical fields are pre-calculated and read with specific time frequency. 
     257| This enables an offline coupling of TOP components, 
     258  here specifically the chemical tracers CFC11, CFC12, SF6 and C14, and the water age module (age). 
     259  See :file:`namelist_top_cfg` to inspect the selection of 
     260  each component with the dedicated logical keys. 
    230261 
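
As an illustration, a minimal sketch of such a selection in :file:`namelist_top_cfg` could
look like the block below. The logical names shown (``ln_age``, ``ln_cfc11``, ``ln_cfc12``,
``ln_sf6``, ``ln_c14``, ``ln_pisces``, ``ln_my_trc``) are assumptions to be checked against
the reference :file:`namelist_top_ref`; the values simply mirror the tracers mentioned above.

.. code-block:: fortran

   !-----------------------------------------------------------------------
   &namtrc        !   tracers definition (logical names assumed)
   !-----------------------------------------------------------------------
      ln_pisces  = .false.   ! no PISCES in ORCA2_OFF_TRC
      ln_my_trc  = .false.   ! no user-defined tracers
      ln_age     = .true.    ! water age module
      ln_cfc11   = .true.    ! CFC11 tracer
      ln_cfc12   = .true.    ! CFC12 tracer
      ln_sf6     = .true.    ! SF6 tracer
      ln_c14     = .true.    ! C14 tracer
   /
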
    231262Pre-calculated dynamical fields are provided to NEMO using 
    232 the namelist ``&namdta_dyn``  in ``namelist_cfg``, 
     263the namelist ``&namdta_dyn`` in :file:`namelist_cfg`, 
    233264in this case with a 5-day frequency (120 hours): 
    234265 
    235 .. literalinclude:: ../../../cfgs/GYRE_PISCES/EXPREF/namelist_ref 
     266.. literalinclude:: ../../namelists/namdta_dyn 
    236267   :language: fortran 
    237    :lines: 935-960 
    238  
    239 Input dynamical fields for this configuration (``ORCA2_OFF_v4.0.tar``) comes from 
     268 
     269Input dynamical fields for this configuration (:file:`ORCA2_OFF_v4.0.tar`) come from 
    240270a 2000-year long climatological simulation of ORCA2_ICE using ERA40 atmospheric forcing. 
    241271 
    242 Note that, this configuration default uses linear free surface (``ln_linssh = .true.``) assuming that 
    243 model mesh is not varying in time and 
    244 it includes the bottom boundary layer parameterization (``ln_trabbl = .true.``) that 
    245 requires the provision of bbl coefficients through ``sn_ubl`` and ``sn_vbl`` fields. 
    246  
    247 It is also possible to activate PISCES model (see ``ORCA2_OFF_PISCES``) or 
    248 a user defined set of tracers and source-sink terms with ``ln_my_trc = .true.`` 
    249 (and adaptation of ``./src/TOP/MY_TRC`` routines). 
     272| Note that 
     273  this configuration uses a linear free surface by default (``ln_linssh = .true.``), assuming that 
     274  the model mesh does not vary in time, and 
     275  it includes the bottom boundary layer parameterization (``ln_trabbl = .true.``) that 
     276  requires the provision of BBL coefficients through the ``sn_ubl`` and ``sn_vbl`` fields. 
     277| It is also possible to activate the PISCES model (see ``ORCA2_OFF_PISCES``) or 
     278  a user defined set of tracers and source-sink terms with ``ln_my_trc = .true.`` 
     279  (and adaptation of the ``./src/TOP/MY_TRC`` routines). 
    250280 
    251281In addition, the offline module (OFF) allows for the provision of further fields: 
     
    254284   by including an input datastream similarly to the following: 
    255285 
    256 .. code-block:: fortran 
    257  
    258    sn_rnf  = 'dyna_grid_T', 120, 'sorunoff' , .true., .true., 'yearly', '', '', '' 
    259  
    260 2. **VVL dynamical fields**, 
    261    in the case input data were produced by a dyamical core using variable volume (``ln_linssh = .false.``) 
    262    it necessary to provide also diverce and E-P at before timestep by 
     286   .. code-block:: fortran 
     287 
     288      sn_rnf  = 'dyna_grid_T', 120, 'sorunoff' , .true., .true., 'yearly', '', '', '' 
     289 
     2902. **VVL dynamical fields**: in case the input data were produced by a dynamical core using 
     291   variable volume (``ln_linssh = .false.``), 
     292   it is also necessary to provide the divergence and E-P at the before time step by 
    263293   including input datastreams similar to the following: 
    264294 
    265 .. code-block:: fortran 
    266  
    267    sn_div  = 'dyna_grid_T', 120, 'e3t'      , .true., .true., 'yearly', '', '', '' 
    268    sn_empb = 'dyna_grid_T', 120, 'sowaflupb', .true., .true., 'yearly', '', '', '' 
    269  
     295   .. code-block:: fortran 
     296 
     297      sn_div  = 'dyna_grid_T', 120, 'e3t'      , .true., .true., 'yearly', '', '', '' 
     298      sn_empb = 'dyna_grid_T', 120, 'sowaflupb', .true., .true., 'yearly', '', '', '' 
    270299 
    271300More details can be found by inspecting the offline data manager in 
    272 the routine ``./src/OFF/dtadyn.F90``. 
     301the routine :file:`./src/OFF/dtadyn.F90`. 
    273302 
    274303ORCA2_SAS_ICE 
    275304------------- 
    276305 
    277 ORCA2_SAS_ICE is a demonstrator of the Stand-Alone Surface (SAS) module and 
    278 it relies on ORCA2 global ocean configuration (see ORCA2_ICE_PISCES_ for general description). 
    279  
    280 The standalone surface module allows surface elements such as sea-ice, iceberg drift, and 
    281 surface fluxes to be run using prescribed model state fields. 
    282 It can profitably be used to compare different bulk formulae or 
    283 adjust the parameters of a given bulk formula. 
    284  
    285 More informations about SAS can be found in NEMO manual. 
     306| ORCA2_SAS_ICE is a demonstrator of the Stand-Alone Surface (SAS) module and 
     307  it relies on ORCA2 global ocean configuration (see ORCA2_ICE_PISCES_ for general description). 
     308| The standalone surface module allows surface elements such as sea-ice, iceberg drift, and 
     309  surface fluxes to be run using prescribed model state fields. 
     310  It can profitably be used to compare different bulk formulae or 
     311  adjust the parameters of a given bulk formula. 
     312 
     313More information about SAS can be found in the :doc:`NEMO manual <cite>`. 
    286314 
    287315SPITZ12 
     
    290318``SPITZ12`` is a regional configuration around the Svalbard archipelago 
    291319at 1/12° of horizontal resolution and 75 vertical levels. 
    292 See `Rousset et al. (2015) <https://www.geosci-model-dev.net/8/2991/2015/>`_ for more details. 
     320See :gmd:`Rousset et al. (2015) <8/2991/2015>` for more details. 
    293321 
    294322This configuration refers to year 2002, 
     
    296324while lateral boundary conditions for dynamical fields have a 3-day time frequency. 
    297325 
    298 References 
    299 ========== 
    300  
    301 .. bibliography:: configurations.bib 
     326.. rubric:: References 
     327 
     328.. bibliography:: cfgs.bib 
    302329   :all: 
    303330   :style: unsrt 
    304331   :labelprefix: C 
    305  
    306 .. Links and substitutions 
    307  
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SHARED/README.rst

    r10598 r12165  
    33*********** 
    44 
     5.. todo:: 
     6 
     7 
     8 
    59.. contents:: 
    6            :local: 
     10   :local: 
    711 
    812Output of diagnostics in NEMO is usually done using XIOS. 
    9 This is an efficient way of writing diagnostics because the time averaging, file writing and even some simple arithmetic or regridding is carried out in parallel to the NEMO model run. 
     13This is an efficient way of writing diagnostics because 
     14the time averaging, file writing and even some simple arithmetic or regridding are carried out in 
     15parallel to the NEMO model run. 
    1016This page gives a basic introduction to using XIOS with NEMO. 
    11 Much more information is available from the XIOS homepage above and from the NEMO manual. 
     17Much more information is available from the :xios:`XIOS homepage<>` and from the NEMO manual. 
    1218 
    13 Use of XIOS for diagnostics is activated using the pre-compiler key ``key_iomput``.  
     19Use of XIOS for diagnostics is activated using the pre-compiler key ``key_iomput``. 
    1420 
    1521Extracting and installing XIOS 
    16 ------------------------------ 
     22============================== 
    1723 
    18241. Install the NetCDF4 library. 
    19    If you want to use single file output you will need to compile the HDF & NetCDF libraries to allow parallel IO. 
    20 2. Download the version of XIOS that you wish to use. The recommended version is now XIOS 2.5: 
    21     
    22 .. code-block:: console 
     25   If you want to use single file output you will need to compile the HDF & NetCDF libraries to 
     26   allow parallel IO. 
     272. Download the version of XIOS that you wish to use. 
     28   The recommended version is now XIOS 2.5: 
    2329 
    24    $ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5 xios-2.5 
     30   .. code-block:: console 
    2531 
    26 and follow the instructions in `XIOS documentation <http://forge.ipsl.jussieu.fr/ioserver/wiki/documentation>`_ to compile it. 
    27    If you find problems at this stage, support can be found by subscribing to the `XIOS mailing list <http://forge.ipsl.jussieu.fr/mailman/listinfo.cgi/xios-users>`_ and sending a mail message to it.  
     32      $ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5 
     33 
     34and follow the instructions in :xios:`XIOS documentation <wiki/documentation>` to compile it. 
     35If you find problems at this stage, support can be found by subscribing to 
     36the :xios:`XIOS mailing list <../mailman/listinfo.cgi/xios-users>` and sending a mail message to it. 
    2837 
    2938XIOS Configuration files 
    3039------------------------ 
    3140 
    32 XIOS is controlled using xml input files that should be copied to your model run directory before running the model. 
    33 Examples of these files can be found in the reference configurations (``cfgs``). The XIOS executable expects to find a file called ``iodef.xml`` in the model run directory. 
    34 In NEMO we have made the decision to use include statements in the ``iodef.xml`` file to include ``field_def_nemo-oce.xml`` (for physics), ``field_def_nemo-ice.xml`` (for ice), ``field_def_nemo-pisces.xml`` (for biogeochemistry) and ``domain_def.xml`` from the /cfgs/SHARED directory. 
    35 Most users will not need to modify ``domain_def.xml`` or ``field_def_nemo-???.xml`` unless they want to add new diagnostics to the NEMO code. 
    36 The definition of the output files is organized into separate ``file_definition.xml`` files which are included in the ``iodef.xml`` file. 
     41XIOS is controlled using XML input files that should be copied to 
     42your model run directory before running the model. 
     43Examples of these files can be found in the reference configurations (:file:`./cfgs`). 
     44The XIOS executable expects to find a file called :file:`iodef.xml` in the model run directory. 
     45In NEMO we have made the decision to use include statements in the :file:`iodef.xml` file to include: 
     46 
     47- :file:`field_def_nemo-oce.xml` (for physics), 
     48- :file:`field_def_nemo-ice.xml` (for ice), 
     49- :file:`field_def_nemo-pisces.xml` (for biogeochemistry) and 
     50- :file:`domain_def.xml` from the :file:`./cfgs/SHARED` directory. 
     51 
     52Most users will not need to modify :file:`domain_def.xml` or :file:`field_def_nemo-???.xml` unless 
     53they want to add new diagnostics to the NEMO code. 
     54The definition of the output files is organized into separate :file:`file_definition.xml` files which 
     55are included in the :file:`iodef.xml` file. 
    3756 
    3857Modes 
    39 ----- 
     58===== 
    4059 
    4160Detached Mode 
     
    4463In detached mode the XIOS executable is executed on separate cores from the NEMO model. 
    4564This is the recommended method for using XIOS for realistic model runs. 
    46 To use this mode set ``using_server`` to ``true`` at the bottom of the ``iodef.xml`` file: 
     65To use this mode set ``using_server`` to ``true`` at the bottom of the :file:`iodef.xml` file: 
    4766 
    4867.. code-block:: xml 
    4968 
    50    <variable id="using_server" type="boolean">true</variable> 
     69   <variable id="using_server" type="boolean">true</variable> 
    5170 
    52 Make sure there is a copy (or link to) your XIOS executable in the working directory and in your job submission script allocate processors to XIOS. 
     71Make sure there is a copy (or link to) your XIOS executable in the working directory and 
     72in your job submission script allocate processors to XIOS. 
    5373 
    5474Attached Mode 
     
    5676 
    5777In attached mode XIOS runs on each of the cores used by NEMO. 
    58 This method is less efficient than the detached mode but can be more convenient for testing or with small configurations. 
    59 To activate this mode simply set ``using_server`` to false in the ``iodef.xml`` file 
     78This method is less efficient than the detached mode but can be more convenient for testing or 
     79with small configurations. 
     80To activate this mode simply set ``using_server`` to false in the :file:`iodef.xml` file 
    6081 
    6182.. code-block:: xml 
    6283 
    63    <variable id="using_server" type="boolean">false</variable> 
     84   <variable id="using_server" type="boolean">false</variable> 
    6485 
    6586and don't allocate any cores to XIOS. 
    66 Note that due to the different domain decompositions between XIOS and NEMO if the total number of cores is larger than the number of grid points in the j direction then the model run will fail. 
     87 
     88.. note:: 
     89 
     90   Due to the different domain decompositions between XIOS and NEMO, 
     91   if the total number of cores is larger than the number of grid points in the ``j`` direction then 
     92   the model run will fail. 
    6793 
    6894Adding new diagnostics 
    69 ---------------------- 
     95====================== 
    7096 
    7197If you want to add a new diagnostic to the NEMO code, you will need to do the following: 
    7298 
    73991. Add any necessary code to calculate your new diagnostic in NEMO 
    74 2. Send the field to XIOS using ``CALL iom_put( 'field_id', variable )`` where ``field_id`` is a unique id for your new diagnostics and variable is the fortran variable containing the data. 
    75    This should be called at every model timestep regardless of how often you want to output the field. No time averaging should be done in the model code.  
    76 3. If it is computationally expensive to calculate your new diagnostic you should also use "iom_use" to determine if it is requested in the current model run. For example, 
    77     
    78 .. code-block:: fortran 
     1002. Send the field to XIOS using ``CALL iom_put( 'field_id', variable )``, where 
     101   ``field_id`` is a unique id for your new diagnostic and 
     102   ``variable`` is the Fortran variable containing the data. 
     103   This should be called at every model timestep, regardless of how often you want to output the field. 
     104   No time averaging should be done in the model code. 
     1053. If it is computationally expensive to calculate your new diagnostic, 
     106   you should also use ``iom_use`` to determine whether it is requested in the current model run. 
     107   For example (a fuller, self-contained sketch follows this list), 
    79108 
    80       IF iom_use('field_id') THEN 
    81          !Some expensive computation 
    82          !... 
    83          !... 
    84          iom_put('field_id', variable) 
    85       ENDIF 
     109   .. code-block:: fortran 
    86110 
    87 4. Add a variable definition to the ``field_def_nemo-???.xml`` file. 
    88 5. Add the variable to the ``iodef.xml`` or ``file_definition.xml`` file. 
     111      IF( iom_use('field_id') ) THEN 
     112         ! some expensive computation 
     113         ! ... 
     114         ! ... 
     115         CALL iom_put( 'field_id', variable ) 
     116      ENDIF 
     117 
     1184. Add a variable definition to the :file:`field_def_nemo-???.xml` file. 
     1195. Add the variable to the :file:`iodef.xml` or :file:`file_definition.xml` file. 
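
Putting the pieces together, a minimal sketch of such a diagnostic routine might look like
the block below. The routine name ``dia_mydiag``, the field id ``'mydiag'`` and the modules
used for the imports are illustrative assumptions, not part of the reference code; only
``iom_put`` and ``iom_use`` come from the steps above.

.. code-block:: fortran

   SUBROUTINE dia_mydiag( kt )
      !! Hypothetical example: compute a 2D diagnostic and send it to XIOS
      USE dom_oce        ! ocean domain (jpi, jpj, wp) - module name assumed
      USE iom            ! XIOS interface (iom_put, iom_use) - module name assumed
      INTEGER, INTENT(in) ::   kt                  ! ocean time step
      REAL(wp), DIMENSION(jpi,jpj) ::   zmydiag    ! local workspace
      !
      IF( iom_use('mydiag') ) THEN             ! skip the work if 'mydiag' is not requested
         zmydiag(:,:) = 0._wp                  ! ... some expensive computation ...
         CALL iom_put( 'mydiag', zmydiag )     ! call every time step; XIOS handles averaging
      ENDIF
   END SUBROUTINE dia_mydiag

With this in place, steps 4 and 5 above would add a matching ``mydiag`` field definition and
output request to the XML files.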
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SHARED/namelist_ice_ref

    r11586 r12165  
    5757   ln_landfast_L16  = .false.         !  landfast: parameterization from Lemieux 2016 
    5858      rn_depfra     =   0.125         !        fraction of ocean depth that ice must reach to initiate landfast 
    59                                       !          recommended range: [0.1 ; 0.25] - L16=0.125 - home=0.15 
    60       rn_icebfr     =  15.            !        ln_landfast_L16:  maximum bottom stress per unit volume [N/m3] 
    61                                       !        ln_landfast_home: maximum bottom stress per unit area of contact [N/m2] 
    62                                       !          recommended range: ?? L16=15 - home=10 
     59                                      !          recommended range: [0.1 ; 0.25] 
     60      rn_icebfr     =  15.            !        maximum bottom stress per unit volume [N/m3] 
    6361      rn_lfrelax    =   1.e-5         !        relaxation time scale to reach static friction [s-1] 
    64       rn_tensile    =   0.2           !        ln_landfast_L16: isotropic tensile strength 
     62      rn_tensile    =   0.2           !        isotropic tensile strength [0-0.5??] 
    6563/ 
    6664!------------------------------------------------------------------------------ 
     
    103101&namdyn_adv     !   Ice advection 
    104102!------------------------------------------------------------------------------ 
    105    ln_adv_Pra       = .false.         !  Advection scheme (Prather) 
    106    ln_adv_UMx       = .true.          !  Advection scheme (Ultimate-Macho) 
     103   ln_adv_Pra       = .true.         !  Advection scheme (Prather) 
     104   ln_adv_UMx       = .false.          !  Advection scheme (Ultimate-Macho) 
    107105      nn_UMx        =   5             !     order of the scheme for UMx (1-5 ; 20=centered 2nd order) 
    108106/ 
     
    234232&namdia         !   Diagnostics 
    235233!------------------------------------------------------------------------------ 
    236    ln_icediachk     = .false.         !  check online the heat, mass & salt budgets at each time step 
    237       !                               !     rate of ice spuriously gained/lost. For ex., rn_icechk=1. <=> 1mm/year, rn_icechk=0.1 <=> 1mm/10years                                    
    238       rn_icechk_cel =  1.             !     check at any gridcell           => stops the code if violated (and writes a file) 
    239       rn_icechk_glo =  0.1            !     check over the entire ice cover => only prints warnings 
     234   ln_icediachk     = .false.         !  check online heat, mass & salt budgets 
     235      !                               !   rate of ice spuriously gained/lost at each time step => rn_icechk=1 <=> 1.e-6 m/hour 
     236      rn_icechk_cel =  100.           !     check at each gridcell          (1.e-4m/h)=> stops the code if violated (and writes a file) 
     237      rn_icechk_glo =  1.             !     check over the entire ice cover (1.e-6m/h)=> only prints warnings 
    240238   ln_icediahsb     = .false.         !  output the heat, mass & salt budgets (T) or not (F) 
    241239   ln_icectl        = .false.         !  ice points output for debug (T or F) 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SPITZ12/EXPREF/namelist_ice_cfg

    r11587 r12165  
    4444&namdyn_rhg     !   Ice rheology 
    4545!------------------------------------------------------------------------------ 
    46    ln_rhg_EVP       = .true.          !  EVP rheology 
    47       ln_aEVP       = .true.          !     adaptive rheology (Kimmritz et al. 2016 & 2017) 
    4846/ 
    4947!------------------------------------------------------------------------------ 
    5048&namdyn_adv     !   Ice advection 
    5149!------------------------------------------------------------------------------ 
     50   ln_adv_Pra       = .false.         !  Advection scheme (Prather) 
     51   ln_adv_UMx       = .true.          !  Advection scheme (Ultimate-Macho) 
     52      nn_UMx        =   5             !     order of the scheme for UMx (1-5 ; 20=centered 2nd order) 
    5253/ 
    5354!------------------------------------------------------------------------------ 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/doc/latex/global/ametsoc.bst

    r11128 r12165  
    99%% *** Bibliography style file for ALL AMS Journals...version 1.0  *** 
    1010%% *** Brian Papa - American Meteorological Society *** 
    11 %%  
     11%% 
    1212%% Copyright 1994-2004 Patrick W Daly 
    1313 % =============================================================== 
     
    519519  duplicate$ empty$ 'skip$ 
    520520    { 
    521       "\href{http://dx.doi.org/" swap$ * "}{DOI}" * 
     521      "\href{http://dx.doi.org/" swap$ * "}{\aiDoi}" * 
    522522    } 
    523523  if$ 
     
    11921192  crossref missing$ 
    11931193    { format.in.ed.booktitle "booktitle" output.check 
    1194       format.publisher.address output       
     1194      format.publisher.address output 
    11951195      format.bvolume output 
    11961196      format.number.series output 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/doc/rst/source/conf.py

    r12063 r12165  
    230230texinfo_documents = [ 
    231231  ('guide', 'NEMO', u'NEMO Documentation', 
    232    u'NEMO System Team', 'NEMO', 'One line description of project.', 
     232   u'NEMO System Team', 'NEMO', 'Community Ocean Model', 
    233233   'Miscellaneous'), 
    234234] 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/doc/rst/source/global.rst

    r12063 r12165  
    1 .. Roles (custom styles related to CSS classes in 'source/_static/style.css') 
     1.. Roles 
     2 
     3.. custom styles related to CSS classes in './_static/style.css' 
    24 
    35.. role:: blue 
     
    57.. role:: grey 
    68.. role:: greysup(sup) 
     9 
     10.. inline code snippets 
     11 
     12.. role:: python(code) 
     13   :language: python 
     14   :class: highlight 
     15 
     16.. role:: fortran(code) 
     17   :language: fortran 
     18   :class: highlight 
     19 
     20.. role:: console(code) 
     21   :language: console 
     22   :class: highlight 
    723 
    824.. Substitutions 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/ice.F90

    r11586 r12165  
    328328   REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:,:) ::   sz_i     !: ice salinity          [PSS] 
    329329 
    330    REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) ::   a_ip       !: melt pond fraction per grid cell area 
     330   REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) ::   a_ip       !: melt pond concentration 
    331331   REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) ::   v_ip       !: melt pond volume per grid cell area      [m] 
    332    REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) ::   a_ip_frac  !: melt pond volume per ice area 
    333    REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) ::   h_ip       !: melt pond thickness                      [m] 
    334  
    335    REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:)   ::   at_ip      !: total melt pond fraction 
     332   REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) ::   a_ip_frac  !: melt pond fraction (a_ip/a_i) 
     333   REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) ::   h_ip       !: melt pond depth                          [m] 
     334 
     335   REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:)   ::   at_ip      !: total melt pond concentration 
    336336   REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:)   ::   hm_ip      !: mean melt pond depth                     [m] 
    337    REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:)   ::   vt_ip      !: total melt pond volume per unit area    [m] 
     337   REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:)   ::   vt_ip      !: total melt pond volume per gridcell area [m] 
    338338 
    339339   !!---------------------------------------------------------------------- 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/icectl.F90

    r11586 r12165  
    4444   PUBLIC   ice_prt3D 
    4545 
    46    ! thresold values for conservation 
     46   ! threshold rates for conservation 
    4747   !    these values are changed by the namelist parameter rn_icechk, so that threshold = zchk * rn_icechk 
    48    REAL(wp), PARAMETER ::   zchk_m   = 1.e-5   ! kg/m2/s <=> 1mm of ice per year spuriously gained/lost 
    49    REAL(wp), PARAMETER ::   zchk_s   = 1.e-4   ! g/m2/s  <=> 1mm of ice per year spuriously gained/lost (considering s=10g/kg) 
    50    REAL(wp), PARAMETER ::   zchk_t   = 3.      ! W/m2    <=> 1mm of ice per year spuriously gained/lost (considering Lf=3e5J/kg) 
     48   REAL(wp), PARAMETER ::   zchk_m   = 2.5e-7   ! kg/m2/s <=> 1e-6 m of ice per hour spuriously gained/lost 
     49   REAL(wp), PARAMETER ::   zchk_s   = 2.5e-6   ! g/m2/s  <=> 1e-6 m of ice per hour spuriously gained/lost (considering s=10g/kg) 
     50   REAL(wp), PARAMETER ::   zchk_t   = 7.5e-2   ! W/m2    <=> 1e-6 m of ice per hour spuriously gained/lost (considering Lf=3e5J/kg) 
    5151    
    5252   !! * Substitutions 
     
    6868      !! ** Method  : This is an online diagnostics which can be activated with ln_icediachk=true 
    6969      !!              It prints in ocean.output if there is a violation of conservation at each time-step 
    70       !!              The thresholds (zchk_m, zchk_s, zchk_t) which determine violations are set to 
    71       !!              a minimum of 1 mm of ice (over the ice area) that is lost/gained spuriously during 100 years. 
     70      !!              The thresholds (zchk_m, zchk_s, zchk_t) determine violations 
    7271      !!              For salt and heat thresholds, ice is considered to have a salinity of 10  
    7372      !!              and a heat content of 3e5 J/kg (=latent heat of fusion)  
     
    133132 
    134133         ! -- advection scheme is conservative? -- ! 
    135          zvtrp = glob_sum( 'icectl', ( diag_trp_vi * rhoi + diag_trp_vs * rhos ) * e1e2t ) ! must be close to 0 
    136          zetrp = glob_sum( 'icectl', ( diag_trp_ei        + diag_trp_es        ) * e1e2t ) ! must be close to 0 
     134         zvtrp = glob_sum( 'icectl', ( diag_trp_vi * rhoi + diag_trp_vs * rhos ) * e1e2t ) ! must be close to 0 (only for Prather) 
     135         zetrp = glob_sum( 'icectl', ( diag_trp_ei        + diag_trp_es        ) * e1e2t ) ! must be close to 0 (only for Prather) 
    137136 
    138137         ! ice area (+epsi10 to set a threshold > 0 when there is no ice)  
     
    157156               &                   WRITE(numout,*)   cd_routine,' : violation a_i > amax      = ',zdiag_amax 
    158157            ! check if advection scheme is conservative 
    159             IF( ABS(zvtrp) > zchk_m * rn_icechk_glo * zarea .AND. cd_routine == 'icedyn_adv' ) & 
    160                &                   WRITE(numout,*)   cd_routine,' : violation adv scheme [kg] = ',zvtrp * rdt_ice 
     158            !    only check for Prather because Ultimate-Macho uses corrective fluxes (wfx etc) 
     159            !    so the formulation for conservation is different (and not coded)  
     160            !    it does not mean UM is not conservative (it is checked with above prints) => update (09/2019): same for Prather now 
     161            !IF( ln_adv_Pra .AND. ABS(zvtrp) > zchk_m * rn_icechk_glo * zarea .AND. cd_routine == 'icedyn_adv' ) & 
     162            !   &                   WRITE(numout,*)   cd_routine,' : violation adv scheme [kg] = ',zvtrp * rdt_ice 
    161163         ENDIF 
    162164         ! 
     
    173175      !! ** Method  : This is an online diagnostics which can be activated with ln_icediachk=true 
    174176      !!              It prints in ocean.output if there is a violation of conservation at each time-step 
    175       !!              The thresholds (zchk_m, zchk_s, zchk_t) which determine the violation are set to 
    176       !!              a minimum of 1 mm of ice (over the ice area) that is lost/gained spuriously during 100 years. 
     177      !!              The thresholds (zchk_m, zchk_s, zchk_t) determine the violations 
    177178      !!              For salt and heat thresholds, ice is considered to have a salinity of 10  
    178179      !!              and a heat content of 3e5 J/kg (=latent heat of fusion)  
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/icedyn_adv_pra.F90

    r10425 r12165  
    1616   !!   adv_pra_rst     : read/write Prather field in ice restart file, or initialized to zero 
    1717   !!---------------------------------------------------------------------- 
     18   USE phycst         ! physical constant 
    1819   USE dom_oce        ! ocean domain 
    1920   USE ice            ! sea-ice variables 
    2021   USE sbc_oce , ONLY : nn_fsbc   ! frequency of sea-ice call 
     22   USE icevar         ! sea-ice: operations 
    2123   ! 
    2224   USE in_out_manager ! I/O manager 
     
    2527   USE lib_fortran    ! fortran utilities (glob_sum + no signed zero) 
    2628   USE lbclnk         ! lateral boundary conditions (or mpp links) 
    27    USE prtctl         ! Print control 
    2829 
    2930   IMPLICIT NONE 
     
    3637   REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:)   ::   sxice, syice, sxxice, syyice, sxyice   ! ice thickness  
    3738   REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:)   ::   sxsn , sysn , sxxsn , syysn , sxysn    ! snow thickness 
    38    REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:)   ::   sxa  , sya  , sxxa  , syya  , sxya     ! lead fraction 
     39   REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:)   ::   sxa  , sya  , sxxa  , syya  , sxya     ! ice concentration 
    3940   REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:)   ::   sxsal, sysal, sxxsal, syysal, sxysal   ! ice salinity 
    4041   REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:)   ::   sxage, syage, sxxage, syyage, sxyage   ! ice age 
    41    REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:)     ::   sxopw, syopw, sxxopw, syyopw, sxyopw   ! open water in sea ice 
    4242   REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:,:) ::   sxc0 , syc0 , sxxc0 , syyc0 , sxyc0    ! snow layers heat content 
    4343   REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:,:) ::   sxe  , sye  , sxxe  , syye  , sxye     ! ice layers heat content 
     
    8181      REAL(wp), DIMENSION(:,:,:,:), INTENT(inout) ::   pe_i       ! ice heat content 
    8282      ! 
    83       INTEGER  ::   jk, jl, jt              ! dummy loop indices 
    84       INTEGER  ::   initad                  ! number of sub-timestep for the advection 
    85       REAL(wp) ::   zcfl , zusnit           !   -      - 
    86       REAL(wp), ALLOCATABLE, DIMENSION(:,:)     ::   zarea 
    87       REAL(wp), ALLOCATABLE, DIMENSION(:,:,:)   ::   z0opw 
    88       REAL(wp), ALLOCATABLE, DIMENSION(:,:,:)   ::   z0ice, z0snw, z0ai, z0smi, z0oi 
    89       REAL(wp), ALLOCATABLE, DIMENSION(:,:,:)   ::   z0ap , z0vp 
    90       REAL(wp), ALLOCATABLE, DIMENSION(:,:,:,:) ::   z0es 
    91       REAL(wp), ALLOCATABLE, DIMENSION(:,:,:,:) ::   z0ei 
     83      INTEGER  ::   ji,jj, jk, jl, jt       ! dummy loop indices 
     84      INTEGER  ::   icycle                  ! number of sub-timestep for the advection 
     85      REAL(wp) ::   zdt                     !   -      - 
     86      REAL(wp), DIMENSION(1)                  ::   zcflprv, zcflnow   ! for global communication 
     87      REAL(wp), DIMENSION(jpi,jpj)            ::   zati1, zati2 
     88      REAL(wp), DIMENSION(jpi,jpj)            ::   zudy, zvdx 
     89      REAL(wp), DIMENSION(jpi,jpj,jpl)        ::   zarea 
     90      REAL(wp), DIMENSION(jpi,jpj,jpl)        ::   z0ice, z0snw, z0ai, z0smi, z0oi 
     91      REAL(wp), DIMENSION(jpi,jpj,jpl)        ::   z0ap , z0vp 
     92      REAL(wp), DIMENSION(jpi,jpj,nlay_s,jpl) ::   z0es 
     93      REAL(wp), DIMENSION(jpi,jpj,nlay_i,jpl) ::   z0ei 
    9294      !!---------------------------------------------------------------------- 
    9395      ! 
    9496      IF( kt == nit000 .AND. lwp )   WRITE(numout,*) '-- ice_dyn_adv_pra: Prather advection scheme' 
    9597      ! 
    96       ALLOCATE( zarea(jpi,jpj)    , z0opw(jpi,jpj, 1 ) , z0ice(jpi,jpj,jpl) , z0snw(jpi,jpj,jpl) ,                       & 
    97          &      z0ai(jpi,jpj,jpl) , z0smi(jpi,jpj,jpl) , z0oi (jpi,jpj,jpl) , z0ap (jpi,jpj,jpl) , z0vp(jpi,jpj,jpl) ,   & 
    98          &      z0es (jpi,jpj,nlay_s,jpl), z0ei(jpi,jpj,nlay_i,jpl) ) 
    99       ! 
    100       ! --- If ice drift field is too fast, use an appropriate time step for advection (CFL test for stability) --- !         
    101       zcfl  =            MAXVAL( ABS( pu_ice(:,:) ) * rdt_ice * r1_e1u(:,:) ) 
    102       zcfl  = MAX( zcfl, MAXVAL( ABS( pv_ice(:,:) ) * rdt_ice * r1_e2v(:,:) ) ) 
    103       CALL mpp_max( 'icedyn_adv_pra', zcfl ) 
     98      ! --- If ice drift is too fast, use  subtime steps for advection (CFL test for stability) --- ! 
     99      !        Note: the advection split is applied at the next time-step in order to avoid blocking global comm. 
     100      !              this should not affect too much the stability 
     101      zcflnow(1) =                  MAXVAL( ABS( pu_ice(:,:) ) * rdt_ice * r1_e1u(:,:) ) 
     102      zcflnow(1) = MAX( zcflnow(1), MAXVAL( ABS( pv_ice(:,:) ) * rdt_ice * r1_e2v(:,:) ) ) 
    104103       
    105       IF( zcfl > 0.5 ) THEN   ;   initad = 2   ;   zusnit = 0.5_wp 
    106       ELSE                    ;   initad = 1   ;   zusnit = 1.0_wp 
     104      ! non-blocking global communication send zcflnow and receive zcflprv 
     105      CALL mpp_delay_max( 'icedyn_adv_pra', 'cflice', zcflnow(:), zcflprv(:), kt == nitend - nn_fsbc + 1 ) 
     106 
     107      IF( zcflprv(1) > .5 ) THEN   ;   icycle = 2 
     108      ELSE                         ;   icycle = 1 
    107109      ENDIF 
     110      zdt = rdt_ice / REAL(icycle) 
    108111       
    109       zarea(:,:) = e1e2t(:,:) 
    110       !------------------------- 
    111       ! transported fields                                         
    112       !------------------------- 
    113       z0opw(:,:,1) = pato_i(:,:) * e1e2t(:,:)              ! Open water area  
    114       DO jl = 1, jpl 
    115          z0snw(:,:,jl) = pv_s (:,:,  jl) * e1e2t(:,:)     ! Snow volume 
    116          z0ice(:,:,jl) = pv_i (:,:,  jl) * e1e2t(:,:)     ! Ice  volume 
    117          z0ai (:,:,jl) = pa_i (:,:,  jl) * e1e2t(:,:)     ! Ice area 
    118          z0smi(:,:,jl) = psv_i(:,:,  jl) * e1e2t(:,:)     ! Salt content 
    119          z0oi (:,:,jl) = poa_i(:,:,  jl) * e1e2t(:,:)     ! Age content 
    120          DO jk = 1, nlay_s 
    121             z0es(:,:,jk,jl) = pe_s(:,:,jk,jl) * e1e2t(:,:) ! Snow heat content 
    122          END DO 
    123          DO jk = 1, nlay_i 
    124             z0ei(:,:,jk,jl) = pe_i(:,:,jk,jl) * e1e2t(:,:) ! Ice  heat content 
    125          END DO 
    126          IF ( ln_pnd_H12 ) THEN 
    127             z0ap(:,:,jl)  = pa_ip(:,:,jl) * e1e2t(:,:)     ! Melt pond fraction 
    128             z0vp(:,:,jl)  = pv_ip(:,:,jl) * e1e2t(:,:)     ! Melt pond volume 
     112      ! --- transport --- ! 
     113      zudy(:,:) = pu_ice(:,:) * e2u(:,:) 
     114      zvdx(:,:) = pv_ice(:,:) * e1v(:,:) 
     115 
     116      DO jt = 1, icycle 
     117 
     118         ! record at_i before advection (for open water) 
     119         zati1(:,:) = SUM( pa_i(:,:,:), dim=3 ) 
     120          
     121         ! --- transported fields --- !                                         
     122         DO jl = 1, jpl 
     123            zarea(:,:,jl) = e1e2t(:,:) 
     124            z0snw(:,:,jl) = pv_s (:,:,jl) * e1e2t(:,:)        ! Snow volume 
     125            z0ice(:,:,jl) = pv_i (:,:,jl) * e1e2t(:,:)        ! Ice  volume 
     126            z0ai (:,:,jl) = pa_i (:,:,jl) * e1e2t(:,:)        ! Ice area 
     127            z0smi(:,:,jl) = psv_i(:,:,jl) * e1e2t(:,:)        ! Salt content 
     128            z0oi (:,:,jl) = poa_i(:,:,jl) * e1e2t(:,:)        ! Age content 
     129            DO jk = 1, nlay_s 
     130               z0es(:,:,jk,jl) = pe_s(:,:,jk,jl) * e1e2t(:,:) ! Snow heat content 
     131            END DO 
     132            DO jk = 1, nlay_i 
     133               z0ei(:,:,jk,jl) = pe_i(:,:,jk,jl) * e1e2t(:,:) ! Ice  heat content 
     134            END DO 
     135            IF ( ln_pnd_H12 ) THEN 
     136               z0ap(:,:,jl)  = pa_ip(:,:,jl) * e1e2t(:,:)     ! Melt pond fraction 
     137               z0vp(:,:,jl)  = pv_ip(:,:,jl) * e1e2t(:,:)     ! Melt pond volume 
     138            ENDIF 
     139         END DO 
     140         ! 
     141         !                                                                  !--------------------------------------------! 
     142         IF( MOD( (kt - 1) / nn_fsbc , 2 ) ==  MOD( (jt - 1) , 2 ) ) THEN   !==  odd ice time step:  adv_x then adv_y  ==! 
     143            !                                                               !--------------------------------------------! 
     144            CALL adv_x( zdt , zudy , 1._wp , zarea , z0ice , sxice , sxxice , syice , syyice , sxyice ) !--- ice volume 
     145            CALL adv_y( zdt , zvdx , 0._wp , zarea , z0ice , sxice , sxxice , syice , syyice , sxyice ) 
     146            CALL adv_x( zdt , zudy , 1._wp , zarea , z0snw , sxsn  , sxxsn  , sysn  , syysn  , sxysn  ) !--- snow volume 
     147            CALL adv_y( zdt , zvdx , 0._wp , zarea , z0snw , sxsn  , sxxsn  , sysn  , syysn  , sxysn  ) 
     148            CALL adv_x( zdt , zudy , 1._wp , zarea , z0smi , sxsal , sxxsal , sysal , syysal , sxysal ) !--- ice salinity 
     149            CALL adv_y( zdt , zvdx , 0._wp , zarea , z0smi , sxsal , sxxsal , sysal , syysal , sxysal ) 
     150            CALL adv_x( zdt , zudy , 1._wp , zarea , z0ai  , sxa   , sxxa   , sya   , syya   , sxya   ) !--- ice concentration 
     151            CALL adv_y( zdt , zvdx , 0._wp , zarea , z0ai  , sxa   , sxxa   , sya   , syya   , sxya   ) 
     152            CALL adv_x( zdt , zudy , 1._wp , zarea , z0oi  , sxage , sxxage , syage , syyage , sxyage ) !--- ice age 
     153            CALL adv_y( zdt , zvdx , 0._wp , zarea , z0oi  , sxage , sxxage , syage , syyage , sxyage ) 
     154            ! 
     155            DO jk = 1, nlay_s                                                                           !--- snow heat content 
     156               CALL adv_x( zdt, zudy, 1._wp, zarea, z0es (:,:,jk,:), sxc0(:,:,jk,:),   & 
     157                  &                                 sxxc0(:,:,jk,:), syc0(:,:,jk,:), syyc0(:,:,jk,:), sxyc0(:,:,jk,:) ) 
     158               CALL adv_y( zdt, zvdx, 0._wp, zarea, z0es (:,:,jk,:), sxc0(:,:,jk,:),   & 
     159                  &                                 sxxc0(:,:,jk,:), syc0(:,:,jk,:), syyc0(:,:,jk,:), sxyc0(:,:,jk,:) ) 
     160            END DO 
     161            DO jk = 1, nlay_i                                                                           !--- ice heat content 
     162               CALL adv_x( zdt, zudy, 1._wp, zarea, z0ei(:,:,jk,:), sxe(:,:,jk,:),   &  
     163                  &                                 sxxe(:,:,jk,:), sye(:,:,jk,:), syye(:,:,jk,:), sxye(:,:,jk,:) ) 
     164               CALL adv_y( zdt, zvdx, 0._wp, zarea, z0ei(:,:,jk,:), sxe(:,:,jk,:),   &  
     165                  &                                 sxxe(:,:,jk,:), sye(:,:,jk,:), syye(:,:,jk,:), sxye(:,:,jk,:) ) 
     166            END DO 
     167            ! 
     168            IF ( ln_pnd_H12 ) THEN 
     169               CALL adv_x( zdt , zudy , 1._wp , zarea , z0ap , sxap , sxxap , syap , syyap , sxyap )    !--- melt pond fraction 
     170               CALL adv_y( zdt , zvdx , 0._wp , zarea , z0ap , sxap , sxxap , syap , syyap , sxyap )  
     171               CALL adv_x( zdt , zudy , 1._wp , zarea , z0vp , sxvp , sxxvp , syvp , syyvp , sxyvp )    !--- melt pond volume 
     172               CALL adv_y( zdt , zvdx , 0._wp , zarea , z0vp , sxvp , sxxvp , syvp , syyvp , sxyvp )  
     173            ENDIF 
     174            !                                                               !--------------------------------------------! 
     175         ELSE                                                               !== even ice time step:  adv_y then adv_x  ==! 
     176            !                                                               !--------------------------------------------! 
     177            CALL adv_y( zdt , zvdx , 1._wp , zarea , z0ice , sxice , sxxice , syice , syyice , sxyice ) !--- ice volume 
     178            CALL adv_x( zdt , zudy , 0._wp , zarea , z0ice , sxice , sxxice , syice , syyice , sxyice ) 
     179            CALL adv_y( zdt , zvdx , 1._wp , zarea , z0snw , sxsn  , sxxsn  , sysn  , syysn  , sxysn  ) !--- snow volume 
     180            CALL adv_x( zdt , zudy , 0._wp , zarea , z0snw , sxsn  , sxxsn  , sysn  , syysn  , sxysn  ) 
     181            CALL adv_y( zdt , zvdx , 1._wp , zarea , z0smi , sxsal , sxxsal , sysal , syysal , sxysal ) !--- ice salinity 
     182            CALL adv_x( zdt , zudy , 0._wp , zarea , z0smi , sxsal , sxxsal , sysal , syysal , sxysal ) 
     183            CALL adv_y( zdt , zvdx , 1._wp , zarea , z0ai  , sxa   , sxxa   , sya   , syya   , sxya   ) !--- ice concentration 
     184            CALL adv_x( zdt , zudy , 0._wp , zarea , z0ai  , sxa   , sxxa   , sya   , syya   , sxya   ) 
     185            CALL adv_y( zdt , zvdx , 1._wp , zarea , z0oi  , sxage , sxxage , syage , syyage , sxyage ) !--- ice age 
     186            CALL adv_x( zdt , zudy , 0._wp , zarea , z0oi  , sxage , sxxage , syage , syyage , sxyage ) 
     187            DO jk = 1, nlay_s                                                                           !--- snow heat content 
     188               CALL adv_y( zdt, zvdx, 1._wp, zarea, z0es (:,:,jk,:), sxc0(:,:,jk,:),   & 
     189                  &                                 sxxc0(:,:,jk,:), syc0(:,:,jk,:), syyc0(:,:,jk,:), sxyc0(:,:,jk,:) ) 
     190               CALL adv_x( zdt, zudy, 0._wp, zarea, z0es (:,:,jk,:), sxc0(:,:,jk,:),   & 
     191                  &                                 sxxc0(:,:,jk,:), syc0(:,:,jk,:), syyc0(:,:,jk,:), sxyc0(:,:,jk,:) ) 
     192            END DO 
     193            DO jk = 1, nlay_i                                                                           !--- ice heat content 
     194               CALL adv_y( zdt, zvdx, 1._wp, zarea, z0ei(:,:,jk,:), sxe(:,:,jk,:),   &  
     195                  &                                 sxxe(:,:,jk,:), sye(:,:,jk,:), syye(:,:,jk,:), sxye(:,:,jk,:) ) 
     196               CALL adv_x( zdt, zudy, 0._wp, zarea, z0ei(:,:,jk,:), sxe(:,:,jk,:),   &  
     197                  &                                 sxxe(:,:,jk,:), sye(:,:,jk,:), syye(:,:,jk,:), sxye(:,:,jk,:) ) 
     198            END DO 
     199            IF ( ln_pnd_H12 ) THEN 
     200               CALL adv_y( zdt , zvdx , 1._wp , zarea , z0ap , sxap , sxxap , syap , syyap , sxyap )    !--- melt pond fraction 
     201               CALL adv_x( zdt , zudy , 0._wp , zarea , z0ap , sxap , sxxap , syap , syyap , sxyap ) 
     202               CALL adv_y( zdt , zvdx , 1._wp , zarea , z0vp , sxvp , sxxvp , syvp , syyvp , sxyvp )    !--- melt pond volume 
     203               CALL adv_x( zdt , zudy , 0._wp , zarea , z0vp , sxvp , sxxvp , syvp , syyvp , sxyvp ) 
     204            ENDIF 
     205            ! 
    129206         ENDIF 
     207 
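         !     NB: the sweep order is swapped every other ice time step (adv_x then adv_y on odd
         !     steps, adv_y then adv_x on even steps) so that the error of the directional
         !     splitting does not accumulate systematically in one direction.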
     208         ! --- Recover the properties from their contents --- ! 
     209         DO jl = 1, jpl 
     210            pv_i (:,:,jl) = z0ice(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
     211            pv_s (:,:,jl) = z0snw(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
     212            psv_i(:,:,jl) = z0smi(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
     213            poa_i(:,:,jl) = z0oi (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
     214            pa_i (:,:,jl) = z0ai (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
     215            DO jk = 1, nlay_s 
     216               pe_s(:,:,jk,jl) = z0es(:,:,jk,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
     217            END DO 
     218            DO jk = 1, nlay_i 
     219               pe_i(:,:,jk,jl) = z0ei(:,:,jk,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
     220            END DO 
     221            IF ( ln_pnd_H12 ) THEN 
     222               pa_ip(:,:,jl) = z0ap(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
     223               pv_ip(:,:,jl) = z0vp(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
     224            ENDIF 
     225         END DO 
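         ! The advected z0* arrays are area-integrated contents (field * cell area), so the
         ! fields themselves are recovered by multiplying by r1_e1e2t, while tmask(:,:,1)
         ! re-imposes the land mask.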
     226         ! 
     227         ! derive open water from ice concentration 
     228         zati2(:,:) = SUM( pa_i(:,:,:), dim=3 ) 
     229         DO jj = 2, jpjm1 
     230            DO ji = fs_2, fs_jpim1 
     231               pato_i(ji,jj) = pato_i(ji,jj) - ( zati2(ji,jj) - zati1(ji,jj) ) &                        !--- open water 
     232                  &                          - ( zudy(ji,jj) - zudy(ji-1,jj) + zvdx(ji,jj) - zvdx(ji,jj-1) ) * r1_e1e2t(ji,jj) * zdt 
     233            END DO 
     234         END DO 
     235         CALL lbc_lnk( 'icedyn_adv_pra', pato_i, 'T',  1. ) 
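         ! Open water is not advected as a Prather field here: it is updated so that the total
         ! (ice + open water) area budget is closed, i.e. it decreases by the increase in total
         ! ice concentration and by the divergence of the ice transport over the cell
         ! (zudy and zvdx presumably holding u*e2u and v*e1v, as their names suggest),
         ! and is then halo-exchanged with lbc_lnk.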
     236         ! 
     237         ! --- Ensure non-negative fields --- ! 
     238         !     Remove negative values (conservation is ensured) 
     239         !     (because advected fields are not perfectly bounded and tiny negative values can occur, e.g. -1.e-20) 
     240         CALL ice_var_zapneg( zdt, pato_i, pv_i, pv_s, psv_i, poa_i, pa_i, pa_ip, pv_ip, pe_s, pe_i ) 
     241         ! 
     242         ! --- Ensure snow load is not too big --- ! 
     243         CALL Hsnow( zdt, pv_i, pv_s, pa_i, pa_ip, pe_s ) 
     244         ! 
    130245      END DO 
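      ! Each sub-time-step thus chains: moment advection of all area-integrated contents,
      ! recovery of the fields, diagnostic update of the open-water fraction, removal of the
      ! tiny negative values left by the limited fluxes (ice_var_zapneg, conservative), and a
      ! cap on the snow load (Hsnow), so that the next sub-step starts from consistent fields.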
    131  
    132       !                                                    !--------------------------------------------! 
    133       IF( MOD( ( kt - 1) / nn_fsbc , 2 ) == 0 ) THEN       !==  odd ice time step:  adv_x then adv_y  ==! 
    134          !                                                 !--------------------------------------------! 
    135          DO jt = 1, initad 
    136             CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0opw (:,:,1), sxopw(:,:),   &             !--- ice open water area 
    137                &                                      sxxopw(:,:)  , syopw(:,:), syyopw(:,:), sxyopw(:,:)  ) 
    138             CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0opw (:,:,1), sxopw(:,:),   & 
    139                &                                      sxxopw(:,:)  , syopw(:,:), syyopw(:,:), sxyopw(:,:)  ) 
    140             DO jl = 1, jpl 
    141                CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0ice (:,:,jl), sxice(:,:,jl),   &    !--- ice volume  --- 
    142                   &                                      sxxice(:,:,jl), syice(:,:,jl), syyice(:,:,jl), sxyice(:,:,jl)  ) 
    143                CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0ice (:,:,jl), sxice(:,:,jl),   & 
    144                   &                                      sxxice(:,:,jl), syice(:,:,jl), syyice(:,:,jl), sxyice(:,:,jl)  ) 
    145                CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0snw (:,:,jl), sxsn (:,:,jl),   &    !--- snow volume  --- 
    146                   &                                      sxxsn (:,:,jl), sysn (:,:,jl), syysn (:,:,jl), sxysn (:,:,jl)  ) 
    147                CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0snw (:,:,jl), sxsn (:,:,jl),   & 
    148                   &                                      sxxsn (:,:,jl), sysn (:,:,jl), syysn (:,:,jl), sxysn (:,:,jl)  ) 
    149                CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0smi (:,:,jl), sxsal(:,:,jl),   &    !--- ice salinity --- 
    150                   &                                      sxxsal(:,:,jl), sysal(:,:,jl), syysal(:,:,jl), sxysal(:,:,jl)  ) 
    151                CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0smi (:,:,jl), sxsal(:,:,jl),   & 
    152                   &                                      sxxsal(:,:,jl), sysal(:,:,jl), syysal(:,:,jl), sxysal(:,:,jl)  ) 
    153                CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0oi  (:,:,jl), sxage(:,:,jl),   &    !--- ice age      ---      
    154                   &                                      sxxage(:,:,jl), syage(:,:,jl), syyage(:,:,jl), sxyage(:,:,jl)  ) 
    155                CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0oi  (:,:,jl), sxage(:,:,jl),   & 
    156                   &                                      sxxage(:,:,jl), syage(:,:,jl), syyage(:,:,jl), sxyage(:,:,jl)  ) 
    157                CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0ai  (:,:,jl), sxa  (:,:,jl),   &    !--- ice concentrations --- 
    158                   &                                      sxxa  (:,:,jl), sya  (:,:,jl), syya  (:,:,jl), sxya  (:,:,jl)  ) 
    159                CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0ai  (:,:,jl), sxa  (:,:,jl),   &  
    160                   &                                      sxxa  (:,:,jl), sya  (:,:,jl), syya  (:,:,jl), sxya  (:,:,jl)  ) 
    161                DO jk = 1, nlay_s                                                               !--- snow heat contents --- 
    162                   CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0es (:,:,jk,jl), sxc0(:,:,jk,jl),   & 
    163                      &                                      sxxc0(:,:,jk,jl), syc0(:,:,jk,jl), syyc0(:,:,jk,jl), sxyc0(:,:,jk,jl) ) 
    164                   CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0es (:,:,jk,jl), sxc0(:,:,jk,jl),   & 
    165                      &                                      sxxc0(:,:,jk,jl), syc0(:,:,jk,jl), syyc0(:,:,jk,jl), sxyc0(:,:,jk,jl) ) 
    166                END DO 
    167                DO jk = 1, nlay_i                                                               !--- ice heat contents --- 
    168                   CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0ei(:,:,jk,jl), sxe(:,:,jk,jl),   &  
    169                      &                                      sxxe(:,:,jk,jl), sye(:,:,jk,jl), syye(:,:,jk,jl), sxye(:,:,jk,jl) ) 
    170                   CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0ei(:,:,jk,jl), sxe(:,:,jk,jl),   &  
    171                      &                                      sxxe(:,:,jk,jl), sye(:,:,jk,jl), syye(:,:,jk,jl), sxye(:,:,jk,jl) ) 
    172                END DO 
    173                IF ( ln_pnd_H12 ) THEN 
    174                   CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0ap  (:,:,jl), sxap (:,:,jl),   &    !--- melt pond fraction -- 
    175                      &                                      sxxap (:,:,jl), syap (:,:,jl), syyap (:,:,jl), sxyap (:,:,jl)  ) 
    176                   CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0ap  (:,:,jl), sxap (:,:,jl),   &  
    177                      &                                      sxxap (:,:,jl), syap (:,:,jl), syyap (:,:,jl), sxyap (:,:,jl)  ) 
    178                   CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0vp  (:,:,jl), sxvp (:,:,jl),   &    !--- melt pond volume   -- 
    179                      &                                      sxxvp (:,:,jl), syvp (:,:,jl), syyvp (:,:,jl), sxyvp (:,:,jl)  ) 
    180                   CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0vp  (:,:,jl), sxvp (:,:,jl),   &  
    181                      &                                      sxxvp (:,:,jl), syvp (:,:,jl), syyvp (:,:,jl), sxyvp (:,:,jl)  ) 
    182                ENDIF 
    183             END DO 
    184          END DO 
    185       !                                                    !--------------------------------------------! 
    186       ELSE                                                 !== even ice time step:  adv_y then adv_x  ==! 
    187          !                                                 !--------------------------------------------! 
    188          DO jt = 1, initad 
    189             CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0opw (:,:,1), sxopw(:,:),   &             !--- ice open water area 
    190                &                                      sxxopw(:,:)  , syopw(:,:), syyopw(:,:), sxyopw(:,:)  ) 
    191             CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0opw (:,:,1), sxopw(:,:),   & 
    192                &                                      sxxopw(:,:)  , syopw(:,:), syyopw(:,:), sxyopw(:,:)  ) 
    193             DO jl = 1, jpl 
    194                CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0ice (:,:,jl), sxice(:,:,jl),   &    !--- ice volume  --- 
    195                   &                                      sxxice(:,:,jl), syice(:,:,jl), syyice(:,:,jl), sxyice(:,:,jl)  ) 
    196                CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0ice (:,:,jl), sxice(:,:,jl),   & 
    197                   &                                      sxxice(:,:,jl), syice(:,:,jl), syyice(:,:,jl), sxyice(:,:,jl)  ) 
    198                CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0snw (:,:,jl), sxsn (:,:,jl),   &    !--- snow volume  --- 
    199                   &                                      sxxsn (:,:,jl), sysn (:,:,jl), syysn (:,:,jl), sxysn (:,:,jl)  ) 
    200                CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0snw (:,:,jl), sxsn (:,:,jl),   & 
    201                   &                                      sxxsn (:,:,jl), sysn (:,:,jl), syysn (:,:,jl), sxysn (:,:,jl)  ) 
    202                CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0smi (:,:,jl), sxsal(:,:,jl),   &    !--- ice salinity --- 
    203                   &                                      sxxsal(:,:,jl), sysal(:,:,jl), syysal(:,:,jl), sxysal(:,:,jl)  ) 
    204                CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0smi (:,:,jl), sxsal(:,:,jl),   & 
    205                   &                                      sxxsal(:,:,jl), sysal(:,:,jl), syysal(:,:,jl), sxysal(:,:,jl)  ) 
    206                CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0oi  (:,:,jl), sxage(:,:,jl),   &   !--- ice age      --- 
    207                   &                                      sxxage(:,:,jl), syage(:,:,jl), syyage(:,:,jl), sxyage(:,:,jl)  ) 
    208                CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0oi  (:,:,jl), sxage(:,:,jl),   & 
    209                   &                                      sxxage(:,:,jl), syage(:,:,jl), syyage(:,:,jl), sxyage(:,:,jl)  ) 
    210                CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0ai  (:,:,jl), sxa  (:,:,jl),   &   !--- ice concentrations --- 
    211                   &                                      sxxa  (:,:,jl), sya  (:,:,jl), syya  (:,:,jl), sxya  (:,:,jl)  ) 
    212                CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0ai  (:,:,jl), sxa  (:,:,jl),   & 
    213                   &                                      sxxa  (:,:,jl), sya  (:,:,jl), syya  (:,:,jl), sxya  (:,:,jl)  ) 
    214                DO jk = 1, nlay_s                                                             !--- snow heat contents --- 
    215                   CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0es (:,:,jk,jl), sxc0(:,:,jk,jl),   & 
    216                      &                                      sxxc0(:,:,jk,jl), syc0(:,:,jk,jl), syyc0(:,:,jk,jl), sxyc0(:,:,jk,jl) ) 
    217                   CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0es (:,:,jk,jl), sxc0(:,:,jk,jl),   & 
    218                      &                                      sxxc0(:,:,jk,jl), syc0(:,:,jk,jl), syyc0(:,:,jk,jl), sxyc0(:,:,jk,jl) ) 
    219                END DO 
    220                DO jk = 1, nlay_i                                                             !--- ice heat contents --- 
    221                   CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0ei(:,:,jk,jl), sxe(:,:,jk,jl),   &  
    222                      &                                      sxxe(:,:,jk,jl), sye(:,:,jk,jl), syye(:,:,jk,jl), sxye(:,:,jk,jl) ) 
    223                   CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0ei(:,:,jk,jl), sxe(:,:,jk,jl),   &  
    224                      &                                      sxxe(:,:,jk,jl), sye(:,:,jk,jl), syye(:,:,jk,jl), sxye(:,:,jk,jl) ) 
    225                END DO 
    226                IF ( ln_pnd_H12 ) THEN 
    227                   CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0ap  (:,:,jl), sxap (:,:,jl),   &   !--- melt pond fraction --- 
    228                      &                                      sxxap (:,:,jl), syap (:,:,jl), syyap (:,:,jl), sxyap (:,:,jl)  ) 
    229                   CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0ap  (:,:,jl), sxap (:,:,jl),   & 
    230                      &                                      sxxap (:,:,jl), syap (:,:,jl), syyap (:,:,jl), sxyap (:,:,jl)  ) 
    231                   CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0vp  (:,:,jl), sxvp (:,:,jl),   &   !--- melt pond volume   --- 
    232                      &                                      sxxvp (:,:,jl), syvp (:,:,jl), syyvp (:,:,jl), sxyvp (:,:,jl)  ) 
    233                   CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0vp  (:,:,jl), sxvp (:,:,jl),   & 
    234                      &                                      sxxvp (:,:,jl), syvp (:,:,jl), syyvp (:,:,jl), sxyvp (:,:,jl)  ) 
    235                ENDIF 
    236             END DO 
    237          END DO 
    238       ENDIF 
    239  
    240       !------------------------------------------- 
    241       ! Recover the properties from their contents 
    242       !------------------------------------------- 
    243       pato_i(:,:) = z0opw(:,:,1) * r1_e1e2t(:,:) * tmask(:,:,1) 
    244       DO jl = 1, jpl 
    245          pv_i (:,:,  jl) = z0ice(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
    246          pv_s (:,:,  jl) = z0snw(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
    247          psv_i(:,:,  jl) = z0smi(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
    248          poa_i(:,:,  jl) = z0oi (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
    249          pa_i (:,:,  jl) = z0ai (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
    250          DO jk = 1, nlay_s 
    251             pe_s(:,:,jk,jl) = z0es(:,:,jk,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
    252          END DO 
    253          DO jk = 1, nlay_i 
    254             pe_i(:,:,jk,jl) = z0ei(:,:,jk,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
    255          END DO 
    256          IF ( ln_pnd_H12 ) THEN 
    257             pa_ip  (:,:,jl) = z0ap (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
    258             pv_ip  (:,:,jl) = z0vp (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 
    259          ENDIF 
    260       END DO 
    261       ! 
    262       DEALLOCATE( zarea , z0opw , z0ice, z0snw , z0ai , z0smi , z0oi , z0ap , z0vp , z0es, z0ei ) 
    263246      ! 
    264247      IF( lrst_ice )   CALL adv_pra_rst( 'WRITE', kt )   !* write Prather fields in the restart file 
     
    267250    
    268251    
    269    SUBROUTINE adv_x( pdf, put , pcrh, psm , ps0 ,   & 
     252   SUBROUTINE adv_x( pdt, put , pcrh, psm , ps0 ,   & 
    270253      &              psx, psxx, psy , psyy, psxy ) 
    271254      !!---------------------------------------------------------------------- 
     
    275258      !!                variable on x axis 
    276259      !!---------------------------------------------------------------------- 
    277       REAL(wp)                    , INTENT(in   ) ::   pdf                ! reduction factor for the time step 
    278       REAL(wp)                    , INTENT(in   ) ::   pcrh               ! call adv_x then adv_y (=1) or the opposite (=0) 
    279       REAL(wp), DIMENSION(jpi,jpj), INTENT(in   ) ::   put                ! i-direction ice velocity at U-point [m/s] 
    280       REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psm                ! area 
    281       REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   ps0                ! field to be advected 
    282       REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psx , psy          ! 1st moments  
    283       REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psxx, psyy, psxy   ! 2nd moments 
     260      REAL(wp)                  , INTENT(in   ) ::   pdt                ! the time step 
     261      REAL(wp)                  , INTENT(in   ) ::   pcrh               ! call adv_x then adv_y (=1) or the opposite (=0) 
     262      REAL(wp), DIMENSION(:,:)  , INTENT(in   ) ::   put                ! i-direction ice velocity at U-point [m/s] 
     263      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   psm                ! area 
     264      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   ps0                ! field to be advected 
     265      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   psx , psy          ! 1st moments  
     266      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   psxx, psyy, psxy   ! 2nd moments 
    284267      !!  
    285       INTEGER  ::   ji, jj                               ! dummy loop indices 
    286       REAL(wp) ::   zs1max, zrdt, zslpmax, ztemp         ! local scalars 
     268      INTEGER  ::   ji, jj, jl, jcat                     ! dummy loop indices 
     269      REAL(wp) ::   zs1max, zslpmax, ztemp               ! local scalars 
    287270      REAL(wp) ::   zs1new, zalf , zalfq , zbt           !   -      - 
    288271      REAL(wp) ::   zs2new, zalf1, zalf1q, zbt1          !   -      - 
     
    291274      REAL(wp), DIMENSION(jpi,jpj) ::   zalg, zalg1, zalg1q         !  -      - 
    292275      !----------------------------------------------------------------------- 
    293  
    294       ! Limitation of moments.                                            
    295  
    296       zrdt = rdt_ice * pdf      ! If ice drift field is too fast, use an appropriate time step for advection. 
    297  
    298       DO jj = 1, jpj 
    299          DO ji = 1, jpi 
    300             zslpmax = MAX( 0._wp, ps0(ji,jj) ) 
    301             zs1max  = 1.5 * zslpmax 
    302             zs1new  = MIN( zs1max, MAX( -zs1max, psx(ji,jj) ) ) 
    303             zs2new  = MIN(  2.0 * zslpmax - 0.3334 * ABS( zs1new ),      & 
    304                &            MAX( ABS( zs1new ) - zslpmax, psxx(ji,jj) )  ) 
    305             rswitch = ( 1.0 - MAX( 0._wp, SIGN( 1._wp, -zslpmax) ) ) * tmask(ji,jj,1)   ! Case of empty boxes & Apply mask 
    306  
    307             ps0 (ji,jj) = zslpmax   
    308             psx (ji,jj) = zs1new      * rswitch 
    309             psxx(ji,jj) = zs2new      * rswitch 
    310             psy (ji,jj) = psy (ji,jj) * rswitch 
    311             psyy(ji,jj) = psyy(ji,jj) * rswitch 
    312             psxy(ji,jj) = MIN( zslpmax, MAX( -zslpmax, psxy(ji,jj) ) ) * rswitch 
    313          END DO 
     276      ! 
     277      jcat = SIZE( ps0 , 3 )   ! size of input arrays 
     278      ! 
     279      DO jl = 1, jcat   ! loop on categories 
     280         ! 
     281         ! Limitation of moments.                                            
     282         DO jj = 2, jpjm1 
     283            DO ji = 1, jpi 
     284               !  Initialize volumes of boxes  (=area if adv_x first called, =psm otherwise)                                      
     285               psm (ji,jj,jl) = MAX( pcrh * e1e2t(ji,jj) + ( 1.0 - pcrh ) * psm(ji,jj,jl) , epsi20 ) 
     286               ! 
     287               zslpmax = MAX( 0._wp, ps0(ji,jj,jl) ) 
     288               zs1max  = 1.5 * zslpmax 
     289               zs1new  = MIN( zs1max, MAX( -zs1max, psx(ji,jj,jl) ) ) 
     290               zs2new  = MIN(  2.0 * zslpmax - 0.3334 * ABS( zs1new ),      & 
     291                  &            MAX( ABS( zs1new ) - zslpmax, psxx(ji,jj,jl) )  ) 
     292               rswitch = ( 1.0 - MAX( 0._wp, SIGN( 1._wp, -zslpmax) ) ) * tmask(ji,jj,1)   ! Case of empty boxes & Apply mask 
     293 
     294               ps0 (ji,jj,jl) = zslpmax   
     295               psx (ji,jj,jl) = zs1new         * rswitch 
     296               psxx(ji,jj,jl) = zs2new         * rswitch 
     297               psy (ji,jj,jl) = psy (ji,jj,jl) * rswitch 
     298               psyy(ji,jj,jl) = psyy(ji,jj,jl) * rswitch 
     299               psxy(ji,jj,jl) = MIN( zslpmax, MAX( -zslpmax, psxy(ji,jj,jl) ) ) * rswitch 
     300            END DO 
     301         END DO 
     302 
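         ! The limitation above implements the positivity constraints of Prather's
         ! second-order-moment scheme, category by category: the mean ps0 is kept >= 0, the
         ! slope psx is bounded by 1.5*ps0, the curvature psxx is bounded accordingly, and
         ! rswitch zeroes every moment in empty or land boxes.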
     303         !  Calculate fluxes and moments between boxes i<-->i+1               
     304         DO jj = 2, jpjm1                      !  Flux from i to i+1 WHEN u GT 0  
     305            DO ji = 1, jpi 
     306               zbet(ji,jj)  =  MAX( 0._wp, SIGN( 1._wp, put(ji,jj) ) ) 
     307               zalf         =  MAX( 0._wp, put(ji,jj) ) * pdt / psm(ji,jj,jl) 
     308               zalfq        =  zalf * zalf 
     309               zalf1        =  1.0 - zalf 
     310               zalf1q       =  zalf1 * zalf1 
     311               ! 
     312               zfm (ji,jj)  =  zalf  *   psm (ji,jj,jl) 
     313               zf0 (ji,jj)  =  zalf  * ( ps0 (ji,jj,jl) + zalf1 * ( psx(ji,jj,jl) + (zalf1 - zalf) * psxx(ji,jj,jl) ) ) 
     314               zfx (ji,jj)  =  zalfq * ( psx (ji,jj,jl) + 3.0 * zalf1 * psxx(ji,jj,jl) ) 
     315               zfxx(ji,jj)  =  zalf  *   psxx(ji,jj,jl) * zalfq 
     316               zfy (ji,jj)  =  zalf  * ( psy (ji,jj,jl) + zalf1 * psxy(ji,jj,jl) ) 
     317               zfxy(ji,jj)  =  zalfq *   psxy(ji,jj,jl) 
     318               zfyy(ji,jj)  =  zalf  *   psyy(ji,jj,jl) 
     319 
     320               !  Readjust moments remaining in the box. 
     321               psm (ji,jj,jl)  =  psm (ji,jj,jl) - zfm(ji,jj) 
     322               ps0 (ji,jj,jl)  =  ps0 (ji,jj,jl) - zf0(ji,jj) 
     323               psx (ji,jj,jl)  =  zalf1q * ( psx(ji,jj,jl) - 3.0 * zalf * psxx(ji,jj,jl) ) 
     324               psxx(ji,jj,jl)  =  zalf1  * zalf1q * psxx(ji,jj,jl) 
     325               psy (ji,jj,jl)  =  psy (ji,jj,jl) - zfy(ji,jj) 
     326               psyy(ji,jj,jl)  =  psyy(ji,jj,jl) - zfyy(ji,jj) 
     327               psxy(ji,jj,jl)  =  zalf1q * psxy(ji,jj,jl) 
     328            END DO 
     329         END DO 
     330 
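         ! In the flux computation above, zalf is the fraction of the box content psm fluxed
         ! through its downstream face during pdt (put*pdt/psm); with put holding the velocity
         ! times the face length, as the caller's zudy/zvdx suggest, this is the local CFL
         ! number and should stay <= 1. The fluxed quantities zf0, zfx, zfxx, ... are the
         ! integrals of the sub-grid distribution over that departing fraction, and the
         ! moments left in the box are rescaled consistently.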
     331         DO jj = 2, jpjm1                      !  Flux from i+1 to i when u LT 0. 
     332            DO ji = 1, fs_jpim1 
     333               zalf          = MAX( 0._wp, -put(ji,jj) ) * pdt / psm(ji+1,jj,jl)  
     334               zalg  (ji,jj) = zalf 
     335               zalfq         = zalf * zalf 
     336               zalf1         = 1.0 - zalf 
     337               zalg1 (ji,jj) = zalf1 
     338               zalf1q        = zalf1 * zalf1 
     339               zalg1q(ji,jj) = zalf1q 
     340               ! 
     341               zfm   (ji,jj) = zfm (ji,jj) + zalf  *    psm (ji+1,jj,jl) 
     342               zf0   (ji,jj) = zf0 (ji,jj) + zalf  * (  ps0 (ji+1,jj,jl) & 
     343                  &                                   - zalf1 * ( psx(ji+1,jj,jl) - (zalf1 - zalf ) * psxx(ji+1,jj,jl) ) ) 
     344               zfx   (ji,jj) = zfx (ji,jj) + zalfq * (  psx (ji+1,jj,jl) - 3.0 * zalf1 * psxx(ji+1,jj,jl) ) 
     345               zfxx  (ji,jj) = zfxx(ji,jj) + zalf  *    psxx(ji+1,jj,jl) * zalfq 
     346               zfy   (ji,jj) = zfy (ji,jj) + zalf  * (  psy (ji+1,jj,jl) - zalf1 * psxy(ji+1,jj,jl) ) 
     347               zfxy  (ji,jj) = zfxy(ji,jj) + zalfq *    psxy(ji+1,jj,jl) 
     348               zfyy  (ji,jj) = zfyy(ji,jj) + zalf  *    psyy(ji+1,jj,jl) 
     349            END DO 
     350         END DO 
     351 
     352         DO jj = 2, jpjm1                     !  Readjust moments remaining in the box.  
     353            DO ji = fs_2, fs_jpim1 
     354               zbt  =       zbet(ji-1,jj) 
     355               zbt1 = 1.0 - zbet(ji-1,jj) 
     356               ! 
     357               psm (ji,jj,jl) = zbt * psm(ji,jj,jl) + zbt1 * ( psm(ji,jj,jl) - zfm(ji-1,jj) ) 
     358               ps0 (ji,jj,jl) = zbt * ps0(ji,jj,jl) + zbt1 * ( ps0(ji,jj,jl) - zf0(ji-1,jj) ) 
     359               psx (ji,jj,jl) = zalg1q(ji-1,jj) * ( psx(ji,jj,jl) + 3.0 * zalg(ji-1,jj) * psxx(ji,jj,jl) ) 
     360               psxx(ji,jj,jl) = zalg1 (ji-1,jj) * zalg1q(ji-1,jj) * psxx(ji,jj,jl) 
     361               psy (ji,jj,jl) = zbt * psy (ji,jj,jl) + zbt1 * ( psy (ji,jj,jl) - zfy (ji-1,jj) ) 
     362               psyy(ji,jj,jl) = zbt * psyy(ji,jj,jl) + zbt1 * ( psyy(ji,jj,jl) - zfyy(ji-1,jj) ) 
     363               psxy(ji,jj,jl) = zalg1q(ji-1,jj) * psxy(ji,jj,jl) 
     364            END DO 
     365         END DO 
     366 
     367         !   Put the temporary moments into appropriate neighboring boxes.     
     368         DO jj = 2, jpjm1                     !   Flux from i to i+1 IF u GT 0. 
     369            DO ji = fs_2, fs_jpim1 
     370               zbt  =       zbet(ji-1,jj) 
     371               zbt1 = 1.0 - zbet(ji-1,jj) 
     372               psm(ji,jj,jl) = zbt * ( psm(ji,jj,jl) + zfm(ji-1,jj) ) + zbt1 * psm(ji,jj,jl) 
     373               zalf          = zbt * zfm(ji-1,jj) / psm(ji,jj,jl) 
     374               zalf1         = 1.0 - zalf 
     375               ztemp         = zalf * ps0(ji,jj,jl) - zalf1 * zf0(ji-1,jj) 
     376               ! 
     377               ps0 (ji,jj,jl) =  zbt  * ( ps0(ji,jj,jl) + zf0(ji-1,jj) ) + zbt1 * ps0(ji,jj,jl) 
     378               psx (ji,jj,jl) =  zbt  * ( zalf * zfx(ji-1,jj) + zalf1 * psx(ji,jj,jl) + 3.0 * ztemp ) + zbt1 * psx(ji,jj,jl) 
     379               psxx(ji,jj,jl) =  zbt  * ( zalf * zalf * zfxx(ji-1,jj) + zalf1 * zalf1 * psxx(ji,jj,jl)                             & 
     380                  &                     + 5.0 * ( zalf * zalf1 * ( psx (ji,jj,jl) - zfx(ji-1,jj) ) - ( zalf1 - zalf ) * ztemp )  ) & 
     381                  &            + zbt1 * psxx(ji,jj,jl) 
     382               psxy(ji,jj,jl) =  zbt  * ( zalf * zfxy(ji-1,jj) + zalf1 * psxy(ji,jj,jl)             & 
     383                  &                     + 3.0 * (- zalf1*zfy(ji-1,jj)  + zalf * psy(ji,jj,jl) ) )   & 
     384                  &            + zbt1 * psxy(ji,jj,jl) 
     385               psy (ji,jj,jl) =  zbt  * ( psy (ji,jj,jl) + zfy (ji-1,jj) ) + zbt1 * psy (ji,jj,jl) 
     386               psyy(ji,jj,jl) =  zbt  * ( psyy(ji,jj,jl) + zfyy(ji-1,jj) ) + zbt1 * psyy(ji,jj,jl) 
     387            END DO 
     388         END DO 
     389 
     390         DO jj = 2, jpjm1                      !  Flux from i+1 to i IF u LT 0. 
     391            DO ji = fs_2, fs_jpim1 
     392               zbt  =       zbet(ji,jj) 
     393               zbt1 = 1.0 - zbet(ji,jj) 
     394               psm(ji,jj,jl) = zbt * psm(ji,jj,jl) + zbt1 * ( psm(ji,jj,jl) + zfm(ji,jj) ) 
     395               zalf          = zbt1 * zfm(ji,jj) / psm(ji,jj,jl) 
     396               zalf1         = 1.0 - zalf 
     397               ztemp         = - zalf * ps0(ji,jj,jl) + zalf1 * zf0(ji,jj) 
     398               ! 
     399               ps0 (ji,jj,jl) = zbt * ps0 (ji,jj,jl) + zbt1 * ( ps0(ji,jj,jl) + zf0(ji,jj) ) 
     400               psx (ji,jj,jl) = zbt * psx (ji,jj,jl) + zbt1 * ( zalf * zfx(ji,jj) + zalf1 * psx(ji,jj,jl) + 3.0 * ztemp ) 
     401               psxx(ji,jj,jl) = zbt * psxx(ji,jj,jl) + zbt1 * ( zalf * zalf * zfxx(ji,jj) + zalf1 * zalf1 * psxx(ji,jj,jl) & 
     402                  &                                           + 5.0 * ( zalf * zalf1 * ( - psx(ji,jj,jl) + zfx(ji,jj) )    & 
     403                  &                                           + ( zalf1 - zalf ) * ztemp ) ) 
     404               psxy(ji,jj,jl) = zbt * psxy(ji,jj,jl) + zbt1 * ( zalf * zfxy(ji,jj) + zalf1 * psxy(ji,jj,jl)  & 
     405                  &                                           + 3.0 * ( zalf1 * zfy(ji,jj) - zalf * psy(ji,jj,jl) ) ) 
     406               psy (ji,jj,jl) = zbt * psy (ji,jj,jl) + zbt1 * ( psy (ji,jj,jl) + zfy (ji,jj) ) 
     407               psyy(ji,jj,jl) = zbt * psyy(ji,jj,jl) + zbt1 * ( psyy(ji,jj,jl) + zfyy(ji,jj) ) 
     408            END DO 
     409         END DO 
     410 
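         ! The two loops above deposit the accumulated face fluxes into the receiving boxes:
         ! zbet (built from the sign of put) selects the upwind side, and the receiving box's
         ! moments are recombined from its own distribution and the incoming one, weighted by
         ! zalf = incoming mass / new box mass.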
    314411      END DO 
    315412 
    316       !  Initialize volumes of boxes  (=area if adv_x first called, =psm otherwise)                                      
    317       psm (:,:)  = MAX( pcrh * e1e2t(:,:) + ( 1.0 - pcrh ) * psm(:,:) , epsi20 ) 
    318  
    319       !  Calculate fluxes and moments between boxes i<-->i+1               
    320       DO jj = 1, jpj                      !  Flux from i to i+1 WHEN u GT 0  
    321          DO ji = 1, jpi 
    322             zbet(ji,jj)  =  MAX( 0._wp, SIGN( 1._wp, put(ji,jj) ) ) 
    323             zalf         =  MAX( 0._wp, put(ji,jj) ) * zrdt * e2u(ji,jj) / psm(ji,jj) 
    324             zalfq        =  zalf * zalf 
    325             zalf1        =  1.0 - zalf 
    326             zalf1q       =  zalf1 * zalf1 
    327             ! 
    328             zfm (ji,jj)  =  zalf  *   psm (ji,jj) 
    329             zf0 (ji,jj)  =  zalf  * ( ps0 (ji,jj) + zalf1 * ( psx(ji,jj) + (zalf1 - zalf) * psxx(ji,jj) )  ) 
    330             zfx (ji,jj)  =  zalfq * ( psx (ji,jj) + 3.0 * zalf1 * psxx(ji,jj) ) 
    331             zfxx(ji,jj)  =  zalf  *   psxx(ji,jj) * zalfq 
    332             zfy (ji,jj)  =  zalf  * ( psy (ji,jj) + zalf1 * psxy(ji,jj) ) 
    333             zfxy(ji,jj)  =  zalfq *   psxy(ji,jj) 
    334             zfyy(ji,jj)  =  zalf  *   psyy(ji,jj) 
    335  
    336             !  Readjust moments remaining in the box. 
    337             psm (ji,jj)  =  psm (ji,jj) - zfm(ji,jj) 
    338             ps0 (ji,jj)  =  ps0 (ji,jj) - zf0(ji,jj) 
    339             psx (ji,jj)  =  zalf1q * ( psx(ji,jj) - 3.0 * zalf * psxx(ji,jj) ) 
    340             psxx(ji,jj)  =  zalf1  * zalf1q * psxx(ji,jj) 
    341             psy (ji,jj)  =  psy (ji,jj) - zfy(ji,jj) 
    342             psyy(ji,jj)  =  psyy(ji,jj) - zfyy(ji,jj) 
    343             psxy(ji,jj)  =  zalf1q * psxy(ji,jj) 
    344          END DO 
    345       END DO 
    346  
    347       DO jj = 1, jpjm1                      !  Flux from i+1 to i when u LT 0. 
    348          DO ji = 1, fs_jpim1 
    349             zalf          = MAX( 0._wp, -put(ji,jj) ) * zrdt * e2u(ji,jj) / psm(ji+1,jj)  
    350             zalg  (ji,jj) = zalf 
    351             zalfq         = zalf * zalf 
    352             zalf1         = 1.0 - zalf 
    353             zalg1 (ji,jj) = zalf1 
    354             zalf1q        = zalf1 * zalf1 
    355             zalg1q(ji,jj) = zalf1q 
    356             ! 
    357             zfm   (ji,jj) = zfm (ji,jj) + zalf  *   psm (ji+1,jj) 
    358             zf0   (ji,jj) = zf0 (ji,jj) + zalf  * ( ps0 (ji+1,jj) - zalf1 * ( psx(ji+1,jj) - (zalf1 - zalf ) * psxx(ji+1,jj) ) ) 
    359             zfx   (ji,jj) = zfx (ji,jj) + zalfq * ( psx (ji+1,jj) - 3.0 * zalf1 * psxx(ji+1,jj) ) 
    360             zfxx  (ji,jj) = zfxx(ji,jj) + zalf  *   psxx(ji+1,jj) * zalfq 
    361             zfy   (ji,jj) = zfy (ji,jj) + zalf  * ( psy (ji+1,jj) - zalf1 * psxy(ji+1,jj) ) 
    362             zfxy  (ji,jj) = zfxy(ji,jj) + zalfq *   psxy(ji+1,jj) 
    363             zfyy  (ji,jj) = zfyy(ji,jj) + zalf  *   psyy(ji+1,jj) 
    364          END DO 
    365       END DO 
    366  
    367       DO jj = 2, jpjm1                     !  Readjust moments remaining in the box.  
    368          DO ji = fs_2, fs_jpim1 
    369             zbt  =       zbet(ji-1,jj) 
    370             zbt1 = 1.0 - zbet(ji-1,jj) 
    371             ! 
    372             psm (ji,jj) = zbt * psm(ji,jj) + zbt1 * ( psm(ji,jj) - zfm(ji-1,jj) ) 
    373             ps0 (ji,jj) = zbt * ps0(ji,jj) + zbt1 * ( ps0(ji,jj) - zf0(ji-1,jj) ) 
    374             psx (ji,jj) = zalg1q(ji-1,jj) * ( psx(ji,jj) + 3.0 * zalg(ji-1,jj) * psxx(ji,jj) ) 
    375             psxx(ji,jj) = zalg1 (ji-1,jj) * zalg1q(ji-1,jj) * psxx(ji,jj) 
    376             psy (ji,jj) = zbt * psy (ji,jj) + zbt1 * ( psy (ji,jj) - zfy (ji-1,jj) ) 
    377             psyy(ji,jj) = zbt * psyy(ji,jj) + zbt1 * ( psyy(ji,jj) - zfyy(ji-1,jj) ) 
    378             psxy(ji,jj) = zalg1q(ji-1,jj) * psxy(ji,jj) 
    379          END DO 
    380       END DO 
    381  
    382       !   Put the temporary moments into appropriate neighboring boxes.     
    383       DO jj = 2, jpjm1                     !   Flux from i to i+1 IF u GT 0. 
    384          DO ji = fs_2, fs_jpim1 
    385             zbt  =       zbet(ji-1,jj) 
    386             zbt1 = 1.0 - zbet(ji-1,jj) 
    387             psm(ji,jj)  = zbt * ( psm(ji,jj) + zfm(ji-1,jj) ) + zbt1 * psm(ji,jj) 
    388             zalf        = zbt * zfm(ji-1,jj) / psm(ji,jj) 
    389             zalf1       = 1.0 - zalf 
    390             ztemp       = zalf * ps0(ji,jj) - zalf1 * zf0(ji-1,jj) 
    391             ! 
    392             ps0 (ji,jj) = zbt * ( ps0(ji,jj) + zf0(ji-1,jj) ) + zbt1 * ps0(ji,jj) 
    393             psx (ji,jj) = zbt * ( zalf * zfx(ji-1,jj) + zalf1 * psx(ji,jj) + 3.0 * ztemp ) + zbt1 * psx(ji,jj) 
    394             psxx(ji,jj) = zbt * ( zalf * zalf * zfxx(ji-1,jj) + zalf1 * zalf1 * psxx(ji,jj)                               & 
    395                &                + 5.0 * ( zalf * zalf1 * ( psx (ji,jj) - zfx(ji-1,jj) ) - ( zalf1 - zalf ) * ztemp )  )   & 
    396                &                                                + zbt1 * psxx(ji,jj) 
    397             psxy(ji,jj) = zbt * ( zalf * zfxy(ji-1,jj) + zalf1 * psxy(ji,jj)             & 
    398                &                + 3.0 * (- zalf1*zfy(ji-1,jj)  + zalf * psy(ji,jj) ) )   & 
    399                &                                                + zbt1 * psxy(ji,jj) 
    400             psy (ji,jj) = zbt * ( psy (ji,jj) + zfy (ji-1,jj) ) + zbt1 * psy (ji,jj) 
    401             psyy(ji,jj) = zbt * ( psyy(ji,jj) + zfyy(ji-1,jj) ) + zbt1 * psyy(ji,jj) 
    402          END DO 
    403       END DO 
    404  
    405       DO jj = 2, jpjm1                     !  Flux from i+1 to i IF u LT 0. 
    406          DO ji = fs_2, fs_jpim1 
    407             zbt  =       zbet(ji,jj) 
    408             zbt1 = 1.0 - zbet(ji,jj) 
    409             psm(ji,jj)  = zbt * psm(ji,jj)  + zbt1 * ( psm(ji,jj) + zfm(ji,jj) ) 
    410             zalf        = zbt1 * zfm(ji,jj) / psm(ji,jj) 
    411             zalf1       = 1.0 - zalf 
    412             ztemp       = - zalf * ps0(ji,jj) + zalf1 * zf0(ji,jj) 
    413             ! 
    414             ps0(ji,jj)  = zbt * ps0 (ji,jj) + zbt1 * ( ps0(ji,jj) + zf0(ji,jj) ) 
    415             psx(ji,jj)  = zbt * psx (ji,jj) + zbt1 * ( zalf * zfx(ji,jj) + zalf1 * psx(ji,jj) + 3.0 * ztemp ) 
    416             psxx(ji,jj) = zbt * psxx(ji,jj) + zbt1 * ( zalf * zalf * zfxx(ji,jj)  + zalf1 * zalf1 * psxx(ji,jj)  & 
    417                &                                      + 5.0 *( zalf * zalf1 * ( - psx(ji,jj) + zfx(ji,jj) )      & 
    418                &                                      + ( zalf1 - zalf ) * ztemp ) ) 
    419             psxy(ji,jj) = zbt * psxy(ji,jj) + zbt1 * (  zalf * zfxy(ji,jj) + zalf1 * psxy(ji,jj)  & 
    420                &                                      + 3.0 * ( zalf1 * zfy(ji,jj) - zalf * psy(ji,jj) )  ) 
    421             psy(ji,jj)  = zbt * psy (ji,jj)  + zbt1 * ( psy (ji,jj) + zfy (ji,jj) ) 
    422             psyy(ji,jj) = zbt * psyy(ji,jj)  + zbt1 * ( psyy(ji,jj) + zfyy(ji,jj) ) 
    423          END DO 
    424       END DO 
    425  
    426413      !-- Lateral boundary conditions 
    427       CALL lbc_lnk_multi( 'icedyn_adv_pra', psm , 'T',  1., ps0 , 'T',  1.   & 
    428          &              , psx , 'T', -1., psy , 'T', -1.   &   ! caution gradient ==> the sign changes 
    429          &              , psxx, 'T',  1., psyy, 'T',  1.   & 
    430          &              , psxy, 'T',  1. ) 
    431  
    432       IF(ln_ctl) THEN 
    433          CALL prt_ctl(tab2d_1=psm  , clinfo1=' adv_x: psm  :', tab2d_2=ps0 , clinfo2=' ps0  : ') 
    434          CALL prt_ctl(tab2d_1=psx  , clinfo1=' adv_x: psx  :', tab2d_2=psxx, clinfo2=' psxx : ') 
    435          CALL prt_ctl(tab2d_1=psy  , clinfo1=' adv_x: psy  :', tab2d_2=psyy, clinfo2=' psyy : ') 
    436          CALL prt_ctl(tab2d_1=psxy , clinfo1=' adv_x: psxy :') 
    437       ENDIF 
     414      CALL lbc_lnk_multi( 'icedyn_adv_pra', psm(:,:,1:jcat) , 'T',  1., ps0 , 'T',  1.   & 
     415         &                                , psx             , 'T', -1., psy , 'T', -1.   &   ! caution gradient ==> the sign changes 
     416         &                                , psxx            , 'T',  1., psyy, 'T',  1. , psxy, 'T',  1. ) 
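      ! (psx and psy are gradient-like and change sign when the halo is folded at the northern
      !  boundary, hence the -1. in the exchange; the field ps0 and the even moments keep +1.)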
    438417      ! 
    439418   END SUBROUTINE adv_x 
    440419 
    441420 
    442    SUBROUTINE adv_y( pdf, pvt , pcrh, psm , ps0 ,   & 
     421   SUBROUTINE adv_y( pdt, pvt , pcrh, psm , ps0 ,   & 
    443422      &              psx, psxx, psy , psyy, psxy ) 
    444423      !!--------------------------------------------------------------------- 
     
    448427      !!                variable on y axis 
    449428      !!--------------------------------------------------------------------- 
    450       REAL(wp)                    , INTENT(in   ) ::   pdf                ! reduction factor for the time step 
    451       REAL(wp)                    , INTENT(in   ) ::   pcrh               ! call adv_x then adv_y (=1) or the opposite (=0) 
    452       REAL(wp), DIMENSION(jpi,jpj), INTENT(in   ) ::   pvt                ! j-direction ice velocity at V-point [m/s] 
    453       REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psm                ! area 
    454       REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   ps0                ! field to be advected 
    455       REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psx , psy          ! 1st moments  
    456       REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psxx, psyy, psxy   ! 2nd moments 
     429      REAL(wp)                  , INTENT(in   ) ::   pdt                ! time step 
     430      REAL(wp)                  , INTENT(in   ) ::   pcrh               ! call adv_x then adv_y (=1) or the opposite (=0) 
     431      REAL(wp), DIMENSION(:,:)  , INTENT(in   ) ::   pvt                ! j-direction ice velocity at V-point [m/s] 
     432      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   psm                ! area 
     433      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   ps0                ! field to be advected 
     434      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   psx , psy          ! 1st moments  
     435      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   psxx, psyy, psxy   ! 2nd moments 
    457436      !! 
    458       INTEGER  ::   ji, jj                               ! dummy loop indices 
    459       REAL(wp) ::   zs1max, zrdt, zslpmax, ztemp         ! temporary scalars 
     437      INTEGER  ::   ji, jj, jl, jcat                     ! dummy loop indices 
     438      REAL(wp) ::   zs1max, zslpmax, ztemp               ! temporary scalars 
    460439      REAL(wp) ::   zs1new, zalf , zalfq , zbt           !    -         - 
    461440      REAL(wp) ::   zs2new, zalf1, zalf1q, zbt1          !    -         - 
     
    464443      REAL(wp), DIMENSION(jpi,jpj) ::   zalg, zalg1, zalg1q     !  -      - 
    465444      !--------------------------------------------------------------------- 
    466  
    467       ! Limitation of moments. 
    468  
    469       zrdt = rdt_ice * pdf ! If ice drift field is too fast, use an appropriate time step for advection. 
    470  
    471       DO jj = 1, jpj 
    472          DO ji = 1, jpi 
    473             zslpmax = MAX( 0._wp, ps0(ji,jj) ) 
    474             zs1max  = 1.5 * zslpmax 
    475             zs1new  = MIN( zs1max, MAX( -zs1max, psy(ji,jj) ) ) 
    476             zs2new  = MIN(  ( 2.0 * zslpmax - 0.3334 * ABS( zs1new ) ),   & 
    477                &             MAX( ABS( zs1new )-zslpmax, psyy(ji,jj) )  ) 
    478             rswitch = ( 1.0 - MAX( 0._wp, SIGN( 1._wp, -zslpmax) ) ) * tmask(ji,jj,1)   ! Case of empty boxes & Apply mask 
    479             ! 
    480             ps0 (ji,jj) = zslpmax   
    481             psx (ji,jj) = psx (ji,jj) * rswitch 
    482             psxx(ji,jj) = psxx(ji,jj) * rswitch 
    483             psy (ji,jj) = zs1new * rswitch 
    484             psyy(ji,jj) = zs2new * rswitch 
    485             psxy(ji,jj) = MIN( zslpmax, MAX( -zslpmax, psxy(ji,jj) ) ) * rswitch 
    486          END DO 
     445      ! 
     446      jcat = SIZE( ps0 , 3 )   ! size of input arrays 
     447      !       
     448      DO jl = 1, jcat   ! loop on categories 
     449         ! 
     450         ! Limitation of moments. 
     451         DO jj = 1, jpj 
     452            DO ji = fs_2, fs_jpim1 
     453               !  Initialize volumes of boxes (=area if adv_x first called, =psm otherwise) 
     454               psm(ji,jj,jl) = MAX(  pcrh * e1e2t(ji,jj) + ( 1.0 - pcrh ) * psm(ji,jj,jl) , epsi20  ) 
     455               ! 
     456               zslpmax = MAX( 0._wp, ps0(ji,jj,jl) ) 
     457               zs1max  = 1.5 * zslpmax 
     458               zs1new  = MIN( zs1max, MAX( -zs1max, psy(ji,jj,jl) ) ) 
     459               zs2new  = MIN(  ( 2.0 * zslpmax - 0.3334 * ABS( zs1new ) ),   & 
     460                  &             MAX( ABS( zs1new )-zslpmax, psyy(ji,jj,jl) )  ) 
     461               rswitch = ( 1.0 - MAX( 0._wp, SIGN( 1._wp, -zslpmax) ) ) * tmask(ji,jj,1)   ! Case of empty boxes & Apply mask 
     462               ! 
     463               ps0 (ji,jj,jl) = zslpmax   
     464               psx (ji,jj,jl) = psx (ji,jj,jl) * rswitch 
     465               psxx(ji,jj,jl) = psxx(ji,jj,jl) * rswitch 
     466               psy (ji,jj,jl) = zs1new         * rswitch 
     467               psyy(ji,jj,jl) = zs2new         * rswitch 
     468               psxy(ji,jj,jl) = MIN( zslpmax, MAX( -zslpmax, psxy(ji,jj,jl) ) ) * rswitch 
     469            END DO 
     470         END DO 
     471  
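            ! Same limitation as in adv_x with the roles of x and y exchanged: here the
            ! y-slope psy and y-curvature psyy are bounded, while psx and psxx are only
            ! zeroed in empty or land boxes through rswitch.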
     472         !  Calculate fluxes and moments between boxes j<-->j+1               
     473         DO jj = 1, jpj                     !  Flux from j to j+1 WHEN v GT 0    
     474            DO ji = fs_2, fs_jpim1 
     475               zbet(ji,jj)  =  MAX( 0._wp, SIGN( 1._wp, pvt(ji,jj) ) ) 
     476               zalf         =  MAX( 0._wp, pvt(ji,jj) ) * pdt / psm(ji,jj,jl) 
     477               zalfq        =  zalf * zalf 
     478               zalf1        =  1.0 - zalf 
     479               zalf1q       =  zalf1 * zalf1 
     480               ! 
     481               zfm (ji,jj)  =  zalf  * psm(ji,jj,jl) 
     482               zf0 (ji,jj)  =  zalf  * ( ps0(ji,jj,jl) + zalf1 * ( psy(ji,jj,jl)  + (zalf1-zalf) * psyy(ji,jj,jl) ) )  
     483               zfy (ji,jj)  =  zalfq *( psy(ji,jj,jl) + 3.0*zalf1*psyy(ji,jj,jl) ) 
     484               zfyy(ji,jj)  =  zalf  * zalfq * psyy(ji,jj,jl) 
     485               zfx (ji,jj)  =  zalf  * ( psx(ji,jj,jl) + zalf1 * psxy(ji,jj,jl) ) 
     486               zfxy(ji,jj)  =  zalfq * psxy(ji,jj,jl) 
     487               zfxx(ji,jj)  =  zalf  * psxx(ji,jj,jl) 
     488               ! 
     489               !  Readjust moments remaining in the box. 
     490               psm (ji,jj,jl)  =  psm (ji,jj,jl) - zfm(ji,jj) 
     491               ps0 (ji,jj,jl)  =  ps0 (ji,jj,jl) - zf0(ji,jj) 
     492               psy (ji,jj,jl)  =  zalf1q * ( psy(ji,jj,jl) -3.0 * zalf * psyy(ji,jj,jl) ) 
     493               psyy(ji,jj,jl)  =  zalf1 * zalf1q * psyy(ji,jj,jl) 
     494               psx (ji,jj,jl)  =  psx (ji,jj,jl) - zfx(ji,jj) 
     495               psxx(ji,jj,jl)  =  psxx(ji,jj,jl) - zfxx(ji,jj) 
     496               psxy(ji,jj,jl)  =  zalf1q * psxy(ji,jj,jl) 
     497            END DO 
     498         END DO 
     499         ! 
     500         DO jj = 1, jpjm1                   !  Flux from j+1 to j when v LT 0. 
     501            DO ji = fs_2, fs_jpim1 
     502               zalf          = MAX( 0._wp, -pvt(ji,jj) ) * pdt / psm(ji,jj+1,jl)  
     503               zalg  (ji,jj) = zalf 
     504               zalfq         = zalf * zalf 
     505               zalf1         = 1.0 - zalf 
     506               zalg1 (ji,jj) = zalf1 
     507               zalf1q        = zalf1 * zalf1 
     508               zalg1q(ji,jj) = zalf1q 
     509               ! 
     510               zfm   (ji,jj) = zfm (ji,jj) + zalf  *    psm (ji,jj+1,jl) 
     511               zf0   (ji,jj) = zf0 (ji,jj) + zalf  * (  ps0 (ji,jj+1,jl) & 
     512                  &                                   - zalf1 * (psy(ji,jj+1,jl) - (zalf1 - zalf ) * psyy(ji,jj+1,jl) ) ) 
     513               zfy   (ji,jj) = zfy (ji,jj) + zalfq * (  psy (ji,jj+1,jl) - 3.0 * zalf1 * psyy(ji,jj+1,jl) ) 
     514               zfyy  (ji,jj) = zfyy(ji,jj) + zalf  *    psyy(ji,jj+1,jl) * zalfq 
     515               zfx   (ji,jj) = zfx (ji,jj) + zalf  * (  psx (ji,jj+1,jl) - zalf1 * psxy(ji,jj+1,jl) ) 
     516               zfxy  (ji,jj) = zfxy(ji,jj) + zalfq *    psxy(ji,jj+1,jl) 
     517               zfxx  (ji,jj) = zfxx(ji,jj) + zalf  *    psxx(ji,jj+1,jl) 
     518            END DO 
     519         END DO 
     520 
     521         !  Readjust moments remaining in the box.  
     522         DO jj = 2, jpjm1 
     523            DO ji = fs_2, fs_jpim1 
     524               zbt  =         zbet(ji,jj-1) 
     525               zbt1 = ( 1.0 - zbet(ji,jj-1) ) 
     526               ! 
     527               psm (ji,jj,jl) = zbt * psm(ji,jj,jl) + zbt1 * ( psm(ji,jj,jl) - zfm(ji,jj-1) ) 
     528               ps0 (ji,jj,jl) = zbt * ps0(ji,jj,jl) + zbt1 * ( ps0(ji,jj,jl) - zf0(ji,jj-1) ) 
     529               psy (ji,jj,jl) = zalg1q(ji,jj-1) * ( psy(ji,jj,jl) + 3.0 * zalg(ji,jj-1) * psyy(ji,jj,jl) ) 
     530               psyy(ji,jj,jl) = zalg1 (ji,jj-1) * zalg1q(ji,jj-1) * psyy(ji,jj,jl) 
     531               psx (ji,jj,jl) = zbt * psx (ji,jj,jl) + zbt1 * ( psx (ji,jj,jl) - zfx (ji,jj-1) ) 
     532               psxx(ji,jj,jl) = zbt * psxx(ji,jj,jl) + zbt1 * ( psxx(ji,jj,jl) - zfxx(ji,jj-1) ) 
     533               psxy(ji,jj,jl) = zalg1q(ji,jj-1) * psxy(ji,jj,jl) 
     534            END DO 
     535         END DO 
     536 
     537         !   Put the temporary moments into appropriate neighboring boxes.     
     538         DO jj = 2, jpjm1                    !   Flux from j to j+1 IF v GT 0. 
     539            DO ji = fs_2, fs_jpim1 
     540               zbt  =       zbet(ji,jj-1) 
     541               zbt1 = 1.0 - zbet(ji,jj-1) 
     542               psm(ji,jj,jl) = zbt * ( psm(ji,jj,jl) + zfm(ji,jj-1) ) + zbt1 * psm(ji,jj,jl)  
     543               zalf          = zbt * zfm(ji,jj-1) / psm(ji,jj,jl)  
     544               zalf1         = 1.0 - zalf 
     545               ztemp         = zalf * ps0(ji,jj,jl) - zalf1 * zf0(ji,jj-1) 
     546               ! 
     547               ps0(ji,jj,jl)  =   zbt  * ( ps0(ji,jj,jl) + zf0(ji,jj-1) ) + zbt1 * ps0(ji,jj,jl) 
     548               psy(ji,jj,jl)  =   zbt  * ( zalf * zfy(ji,jj-1) + zalf1 * psy(ji,jj,jl) + 3.0 * ztemp )  & 
     549                  &             + zbt1 * psy(ji,jj,jl)   
     550               psyy(ji,jj,jl) =   zbt  * ( zalf * zalf * zfyy(ji,jj-1) + zalf1 * zalf1 * psyy(ji,jj,jl)                           & 
     551                  &                      + 5.0 * ( zalf * zalf1 * ( psy(ji,jj,jl) - zfy(ji,jj-1) ) - ( zalf1 - zalf ) * ztemp ) ) &  
     552                  &             + zbt1 * psyy(ji,jj,jl) 
     553               psxy(ji,jj,jl) =   zbt  * (  zalf * zfxy(ji,jj-1) + zalf1 * psxy(ji,jj,jl)            & 
     554                  &                      + 3.0 * (- zalf1 * zfx(ji,jj-1) + zalf * psx(ji,jj,jl) ) )  & 
     555                  &             + zbt1 * psxy(ji,jj,jl) 
     556               psx (ji,jj,jl) =   zbt * ( psx (ji,jj,jl) + zfx (ji,jj-1) ) + zbt1 * psx (ji,jj,jl) 
     557               psxx(ji,jj,jl) =   zbt * ( psxx(ji,jj,jl) + zfxx(ji,jj-1) ) + zbt1 * psxx(ji,jj,jl) 
     558            END DO 
     559         END DO 
     560 
     561         DO jj = 2, jpjm1                      !  Flux from j+1 to j IF v LT 0. 
     562            DO ji = fs_2, fs_jpim1 
     563               zbt  =       zbet(ji,jj) 
     564               zbt1 = 1.0 - zbet(ji,jj) 
     565               psm(ji,jj,jl) = zbt * psm(ji,jj,jl) + zbt1 * ( psm(ji,jj,jl) + zfm(ji,jj) ) 
     566               zalf          = zbt1 * zfm(ji,jj) / psm(ji,jj,jl) 
     567               zalf1         = 1.0 - zalf 
     568               ztemp         = - zalf * ps0(ji,jj,jl) + zalf1 * zf0(ji,jj) 
     569               ! 
     570               ps0 (ji,jj,jl) = zbt * ps0 (ji,jj,jl) + zbt1 * (  ps0(ji,jj,jl) + zf0(ji,jj) ) 
     571               psy (ji,jj,jl) = zbt * psy (ji,jj,jl) + zbt1 * (  zalf * zfy(ji,jj) + zalf1 * psy(ji,jj,jl) + 3.0 * ztemp ) 
     572               psyy(ji,jj,jl) = zbt * psyy(ji,jj,jl) + zbt1 * (  zalf * zalf * zfyy(ji,jj) + zalf1 * zalf1 * psyy(ji,jj,jl) & 
     573                  &                                            + 5.0 * ( zalf * zalf1 * ( - psy(ji,jj,jl) + zfy(ji,jj) )    & 
     574                  &                                            + ( zalf1 - zalf ) * ztemp ) ) 
     575               psxy(ji,jj,jl) = zbt * psxy(ji,jj,jl) + zbt1 * (  zalf * zfxy(ji,jj) + zalf1 * psxy(ji,jj,jl)  & 
     576                  &                                            + 3.0 * ( zalf1 * zfx(ji,jj) - zalf * psx(ji,jj,jl) ) ) 
     577               psx (ji,jj,jl) = zbt * psx (ji,jj,jl) + zbt1 * ( psx (ji,jj,jl) + zfx (ji,jj) ) 
     578               psxx(ji,jj,jl) = zbt * psxx(ji,jj,jl) + zbt1 * ( psxx(ji,jj,jl) + zfxx(ji,jj) ) 
     579            END DO 
     580         END DO 
     581 
    487582      END DO 
    488583 
    489       !  Initialize volumes of boxes (=area if adv_x first called, =psm otherwise) 
    490       psm(:,:)  = MAX(  pcrh * e1e2t(:,:) + ( 1.0 - pcrh ) * psm(:,:) , epsi20  ) 
    491  
    492       !  Calculate fluxes and moments between boxes j<-->j+1               
    493       DO jj = 1, jpj                     !  Flux from j to j+1 WHEN v GT 0    
    494          DO ji = 1, jpi 
    495             zbet(ji,jj)  =  MAX( 0._wp, SIGN( 1._wp, pvt(ji,jj) ) ) 
    496             zalf         =  MAX( 0._wp, pvt(ji,jj) ) * zrdt * e1v(ji,jj) / psm(ji,jj) 
    497             zalfq        =  zalf * zalf 
    498             zalf1        =  1.0 - zalf 
    499             zalf1q       =  zalf1 * zalf1 
    500             ! 
    501             zfm (ji,jj)  =  zalf  * psm(ji,jj) 
    502             zf0 (ji,jj)  =  zalf  * ( ps0(ji,jj) + zalf1 * ( psy(ji,jj)  + (zalf1-zalf) * psyy(ji,jj)  ) )  
    503             zfy (ji,jj)  =  zalfq *( psy(ji,jj) + 3.0*zalf1*psyy(ji,jj) ) 
    504             zfyy(ji,jj)  =  zalf  * zalfq * psyy(ji,jj) 
    505             zfx (ji,jj)  =  zalf  * ( psx(ji,jj) + zalf1 * psxy(ji,jj) ) 
    506             zfxy(ji,jj)  =  zalfq * psxy(ji,jj) 
    507             zfxx(ji,jj)  =  zalf  * psxx(ji,jj) 
    508             ! 
    509             !  Readjust moments remaining in the box. 
    510             psm (ji,jj)  =  psm (ji,jj) - zfm(ji,jj) 
    511             ps0 (ji,jj)  =  ps0 (ji,jj) - zf0(ji,jj) 
    512             psy (ji,jj)  =  zalf1q * ( psy(ji,jj) -3.0 * zalf * psyy(ji,jj) ) 
    513             psyy(ji,jj)  =  zalf1 * zalf1q * psyy(ji,jj) 
    514             psx (ji,jj)  =  psx (ji,jj) - zfx(ji,jj) 
    515             psxx(ji,jj)  =  psxx(ji,jj) - zfxx(ji,jj) 
    516             psxy(ji,jj)  =  zalf1q * psxy(ji,jj) 
     584      !-- Lateral boundary conditions 
     585      CALL lbc_lnk_multi( 'icedyn_adv_pra', psm(:,:,1:jcat) , 'T',  1., ps0 , 'T',  1.   & 
     586         &                                , psx             , 'T', -1., psy , 'T', -1.   &   ! caution gradient ==> the sign changes 
     587         &                                , psxx            , 'T',  1., psyy, 'T',  1. , psxy, 'T',  1. ) 
     588      ! 
     589   END SUBROUTINE adv_y 
     590 
     591 
     592   SUBROUTINE Hsnow( pdt, pv_i, pv_s, pa_i, pa_ip, pe_s ) 
     593      !!------------------------------------------------------------------- 
     594      !!                  ***  ROUTINE Hsnow  *** 
     595      !! 
     596      !! ** Purpose : 1- Check snow load after advection 
     597      !!              2- Correct pond concentration to avoid a_ip > a_i 
     598      !! 
      599      !! ** Method :  If the snow load makes the snow-ice interface sink below the ocean surface,
      600      !!              then the snow excess is put into the ocean
     601      !! 
      602      !! ** Notes :   This correction is crucial because of the subsequent call to routine icecor,
      603      !!              which imposes a minimum ice thickness (rn_himin). This imposed minimum can
      604      !!              artificially make the snow very thick (if the concentration decreases drastically).
      605      !!              This behaviour has been seen with Ultimate-Macho and is presumably also possible with Prather.
     606      !!------------------------------------------------------------------- 
     607      REAL(wp)                    , INTENT(in   ) ::   pdt   ! tracer time-step 
     608      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   pv_i, pv_s, pa_i, pa_ip 
     609      REAL(wp), DIMENSION(:,:,:,:), INTENT(inout) ::   pe_s 
     610      ! 
     611      INTEGER  ::   ji, jj, jl   ! dummy loop indices 
     612      REAL(wp) ::   z1_dt, zvs_excess, zfra 
     613      !!------------------------------------------------------------------- 
     614      ! 
     615      z1_dt = 1._wp / pdt 
     616      ! 
     617      ! -- check snow load -- ! 
     618      DO jl = 1, jpl 
     619         DO jj = 1, jpj 
     620            DO ji = 1, jpi 
     621               IF ( pv_i(ji,jj,jl) > 0._wp ) THEN 
     622                  ! 
     623                  zvs_excess = MAX( 0._wp, pv_s(ji,jj,jl) - pv_i(ji,jj,jl) * (rau0-rhoi) * r1_rhos ) 
     624                  ! 
      625                  IF( zvs_excess > 0._wp ) THEN   ! snow-ice interface sinks below the ocean surface
     626                     ! put snow excess in the ocean 
     627                     zfra = ( pv_s(ji,jj,jl) - zvs_excess ) / MAX( pv_s(ji,jj,jl), epsi20 ) 
     628                     wfx_res(ji,jj) = wfx_res(ji,jj) + zvs_excess * rhos * z1_dt 
     629                     hfx_res(ji,jj) = hfx_res(ji,jj) - SUM( pe_s(ji,jj,1:nlay_s,jl) ) * ( 1._wp - zfra ) * z1_dt ! W.m-2 <0 
     630                     ! correct snow volume and heat content 
     631                     pe_s(ji,jj,1:nlay_s,jl) = pe_s(ji,jj,1:nlay_s,jl) * zfra 
     632                     pv_s(ji,jj,jl)          = pv_s(ji,jj,jl) - zvs_excess 
     633                  ENDIF 
     634                  ! 
     635               ENDIF 
     636            END DO 
    517637         END DO 
    518638      END DO 
    519639      ! 
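      ! The cap zvs_excess follows from the freeboard condition rhoi*v_i + rhos*v_s <= rau0*v_i,
      ! i.e. v_s <= v_i * (rau0-rhoi) / rhos : beyond that load the snow-ice interface would sink
      ! below the sea surface. The excess snow volume is sent to the ocean as a mass flux
      ! (wfx_res) together with its heat content (hfx_res), and the remaining snow volume and
      ! enthalpy are rescaled by the fraction zfra.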
    520       DO jj = 1, jpjm1                   !  Flux from j+1 to j when v LT 0. 
    521          DO ji = 1, jpi 
    522             zalf          = ( MAX(0._wp, -pvt(ji,jj) ) * zrdt * e1v(ji,jj) ) / psm(ji,jj+1)  
    523             zalg  (ji,jj) = zalf 
    524             zalfq         = zalf * zalf 
    525             zalf1         = 1.0 - zalf 
    526             zalg1 (ji,jj) = zalf1 
    527             zalf1q        = zalf1 * zalf1 
    528             zalg1q(ji,jj) = zalf1q 
    529             ! 
    530             zfm   (ji,jj) = zfm (ji,jj) + zalf  *   psm (ji,jj+1) 
    531             zf0   (ji,jj) = zf0 (ji,jj) + zalf  * ( ps0 (ji,jj+1) - zalf1 * (psy(ji,jj+1) - (zalf1 - zalf ) * psyy(ji,jj+1) ) ) 
    532             zfy   (ji,jj) = zfy (ji,jj) + zalfq * ( psy (ji,jj+1) - 3.0 * zalf1 * psyy(ji,jj+1) ) 
    533             zfyy  (ji,jj) = zfyy(ji,jj) + zalf  *   psyy(ji,jj+1) * zalfq 
    534             zfx   (ji,jj) = zfx (ji,jj) + zalf  * ( psx (ji,jj+1) - zalf1 * psxy(ji,jj+1) ) 
    535             zfxy  (ji,jj) = zfxy(ji,jj) + zalfq *   psxy(ji,jj+1) 
    536             zfxx  (ji,jj) = zfxx(ji,jj) + zalf  *   psxx(ji,jj+1) 
    537          END DO 
    538       END DO 
    539  
    540       !  Readjust moments remaining in the box.  
    541       DO jj = 2, jpj 
    542          DO ji = 1, jpi 
    543             zbt  =         zbet(ji,jj-1) 
    544             zbt1 = ( 1.0 - zbet(ji,jj-1) ) 
    545             ! 
    546             psm (ji,jj) = zbt * psm(ji,jj) + zbt1 * ( psm(ji,jj) - zfm(ji,jj-1) ) 
    547             ps0 (ji,jj) = zbt * ps0(ji,jj) + zbt1 * ( ps0(ji,jj) - zf0(ji,jj-1) ) 
    548             psy (ji,jj) = zalg1q(ji,jj-1) * ( psy(ji,jj) + 3.0 * zalg(ji,jj-1) * psyy(ji,jj) ) 
    549             psyy(ji,jj) = zalg1 (ji,jj-1) * zalg1q(ji,jj-1) * psyy(ji,jj) 
    550             psx (ji,jj) = zbt * psx (ji,jj) + zbt1 * ( psx (ji,jj) - zfx (ji,jj-1) ) 
    551             psxx(ji,jj) = zbt * psxx(ji,jj) + zbt1 * ( psxx(ji,jj) - zfxx(ji,jj-1) ) 
    552             psxy(ji,jj) = zalg1q(ji,jj-1) * psxy(ji,jj) 
    553          END DO 
    554       END DO 
    555  
    556       !   Put the temporary moments into appropriate neighboring boxes.     
    557       DO jj = 2, jpjm1                    !   Flux from j to j+1 IF v GT 0. 
    558          DO ji = 1, jpi 
    559             zbt  =         zbet(ji,jj-1) 
    560             zbt1 = ( 1.0 - zbet(ji,jj-1) ) 
    561             psm(ji,jj)  = zbt * ( psm(ji,jj) + zfm(ji,jj-1) ) + zbt1 * psm(ji,jj)  
    562             zalf        = zbt * zfm(ji,jj-1) / psm(ji,jj)  
    563             zalf1       = 1.0 - zalf 
    564             ztemp       = zalf * ps0(ji,jj) - zalf1 * zf0(ji,jj-1) 
    565             ! 
    566             ps0(ji,jj)  = zbt  * ( ps0(ji,jj) + zf0(ji,jj-1) ) + zbt1 * ps0(ji,jj) 
    567             psy(ji,jj)  = zbt  * ( zalf * zfy(ji,jj-1) + zalf1 * psy(ji,jj) + 3.0 * ztemp )   & 
    568                &                                               + zbt1 * psy(ji,jj)   
    569             psyy(ji,jj) = zbt  * ( zalf * zalf * zfyy(ji,jj-1) + zalf1 * zalf1 * psyy(ji,jj)                             & 
    570                &                 + 5.0 * ( zalf * zalf1 * ( psy(ji,jj) - zfy(ji,jj-1) ) - ( zalf1 - zalf ) * ztemp ) )   &  
    571                &                                               + zbt1 * psyy(ji,jj) 
    572             psxy(ji,jj) = zbt  * (  zalf * zfxy(ji,jj-1) + zalf1 * psxy(ji,jj)               & 
    573                &                  + 3.0 * (- zalf1 * zfx(ji,jj-1) + zalf * psx(ji,jj) )  )   & 
    574                &                                                + zbt1 * psxy(ji,jj) 
    575             psx (ji,jj) = zbt * ( psx (ji,jj) + zfx (ji,jj-1) ) + zbt1 * psx (ji,jj) 
    576             psxx(ji,jj) = zbt * ( psxx(ji,jj) + zfxx(ji,jj-1) ) + zbt1 * psxx(ji,jj) 
    577          END DO 
    578       END DO 
    579  
    580       DO jj = 2, jpjm1                   !  Flux from j+1 to j IF v LT 0. 
    581          DO ji = 1, jpi 
    582             zbt  =         zbet(ji,jj) 
    583             zbt1 = ( 1.0 - zbet(ji,jj) ) 
    584             psm(ji,jj)  = zbt * psm(ji,jj) + zbt1 * ( psm(ji,jj) + zfm(ji,jj) ) 
    585             zalf        = zbt1 * zfm(ji,jj) / psm(ji,jj) 
    586             zalf1       = 1.0 - zalf 
    587             ztemp       = - zalf * ps0 (ji,jj) + zalf1 * zf0(ji,jj) 
    588             ps0 (ji,jj) =   zbt  * ps0 (ji,jj) + zbt1  * ( ps0(ji,jj) + zf0(ji,jj) ) 
    589             psy (ji,jj) =   zbt  * psy (ji,jj) + zbt1  * ( zalf * zfy(ji,jj) + zalf1 * psy(ji,jj) + 3.0 * ztemp ) 
    590             psyy(ji,jj) =   zbt  * psyy(ji,jj) + zbt1 * (  zalf * zalf * zfyy(ji,jj) + zalf1 * zalf1 * psyy(ji,jj)   & 
    591                &                                         + 5.0 *( zalf *zalf1 *( -psy(ji,jj) + zfy(ji,jj) )          & 
    592                &                                         + ( zalf1 - zalf ) * ztemp )                                ) 
    593             psxy(ji,jj) =   zbt  * psxy(ji,jj) + zbt1 * (  zalf * zfxy(ji,jj) + zalf1 * psxy(ji,jj)       & 
    594                &                                         + 3.0 * ( zalf1 * zfx(ji,jj) - zalf * psx(ji,jj) )  ) 
    595             psx (ji,jj) =   zbt  * psx (ji,jj) + zbt1 * ( psx (ji,jj) + zfx (ji,jj) ) 
    596             psxx(ji,jj) =   zbt  * psxx(ji,jj) + zbt1 * ( psxx(ji,jj) + zfxx(ji,jj) ) 
    597          END DO 
    598       END DO 
    599  
    600       !-- Lateral boundary conditions 
    601       CALL lbc_lnk_multi( 'icedyn_adv_pra', psm , 'T',  1.,  ps0 , 'T',  1.   & 
    602          &              , psx , 'T', -1.,  psy , 'T', -1.   &   ! caution gradient ==> the sign changes 
    603          &              , psxx, 'T',  1.,  psyy, 'T',  1.   & 
    604          &              , psxy, 'T',  1. ) 
    605  
    606       IF(ln_ctl) THEN 
    607          CALL prt_ctl(tab2d_1=psm  , clinfo1=' adv_y: psm  :', tab2d_2=ps0 , clinfo2=' ps0  : ') 
    608          CALL prt_ctl(tab2d_1=psx  , clinfo1=' adv_y: psx  :', tab2d_2=psxx, clinfo2=' psxx : ') 
    609          CALL prt_ctl(tab2d_1=psy  , clinfo1=' adv_y: psy  :', tab2d_2=psyy, clinfo2=' psyy : ') 
    610          CALL prt_ctl(tab2d_1=psxy , clinfo1=' adv_y: psxy :') 
    611       ENDIF 
    612       ! 
    613    END SUBROUTINE adv_y 
     640      !-- correct pond concentration to avoid a_ip > a_i -- ! 
     641      WHERE( pa_ip(:,:,:) > pa_i(:,:,:) )   pa_ip(:,:,:) = pa_i(:,:,:) 
     642      ! 
     643   END SUBROUTINE Hsnow 
    614644 
    615645 
     
    624654      ! 
    625655      !                             !* allocate prather fields 
    626       ALLOCATE( sxopw(jpi,jpj)     , syopw(jpi,jpj)     , sxxopw(jpi,jpj)     , syyopw(jpi,jpj)     , sxyopw(jpi,jpj)     ,   & 
    627          &      sxice(jpi,jpj,jpl) , syice(jpi,jpj,jpl) , sxxice(jpi,jpj,jpl) , syyice(jpi,jpj,jpl) , sxyice(jpi,jpj,jpl) ,   & 
     656      ALLOCATE( sxice(jpi,jpj,jpl) , syice(jpi,jpj,jpl) , sxxice(jpi,jpj,jpl) , syyice(jpi,jpj,jpl) , sxyice(jpi,jpj,jpl) ,   & 
    628657         &      sxsn (jpi,jpj,jpl) , sysn (jpi,jpj,jpl) , sxxsn (jpi,jpj,jpl) , syysn (jpi,jpj,jpl) , sxysn (jpi,jpj,jpl) ,   & 
    629658         &      sxa  (jpi,jpj,jpl) , sya  (jpi,jpj,jpl) , sxxa  (jpi,jpj,jpl) , syya  (jpi,jpj,jpl) , sxya  (jpi,jpj,jpl) ,   & 
     
    652681      !!                   ***  ROUTINE adv_pra_rst  *** 
    653682      !!                      
    654       !! ** Purpose :   Read or write RHG file in restart file 
      683      !! ** Purpose :   Read or write Prather advection fields in restart file 
    655684      !! 
    656685      !! ** Method  :   use of IOM library 
     
    671700         !                                   !==========================! 
    672701         ! 
    673          IF( ln_rstart ) THEN   ;   id1 = iom_varid( numrir, 'sxopw' , ldstop = .FALSE. )    ! file exist: id1>0 
     702         IF( ln_rstart ) THEN   ;   id1 = iom_varid( numrir, 'sxice' , ldstop = .FALSE. )    ! file exist: id1>0 
    674703         ELSE                   ;   id1 = 0                                                  ! no restart: id1=0 
    675704         ENDIF 
     
    689718            CALL iom_get( numrir, jpdom_autoglo, 'syysn' , syysn  ) 
    690719            CALL iom_get( numrir, jpdom_autoglo, 'sxysn' , sxysn  ) 
    691             !                                                        ! lead fraction 
     720            !                                                        ! ice concentration 
    692721            CALL iom_get( numrir, jpdom_autoglo, 'sxa'   , sxa    ) 
    693722            CALL iom_get( numrir, jpdom_autoglo, 'sya'   , sya    ) 
     
    707736            CALL iom_get( numrir, jpdom_autoglo, 'syyage', syyage ) 
    708737            CALL iom_get( numrir, jpdom_autoglo, 'sxyage', sxyage ) 
    709             !                                                        ! open water in sea ice 
    710             CALL iom_get( numrir, jpdom_autoglo, 'sxopw' , sxopw  ) 
    711             CALL iom_get( numrir, jpdom_autoglo, 'syopw' , syopw  ) 
    712             CALL iom_get( numrir, jpdom_autoglo, 'sxxopw', sxxopw ) 
    713             CALL iom_get( numrir, jpdom_autoglo, 'syyopw', syyopw ) 
    714             CALL iom_get( numrir, jpdom_autoglo, 'sxyopw', sxyopw ) 
    715738            !                                                        ! snow layers heat content 
    716739            DO jk = 1, nlay_s 
     
    752775            sxice = 0._wp   ;   syice = 0._wp   ;   sxxice = 0._wp   ;   syyice = 0._wp   ;   sxyice = 0._wp      ! ice thickness 
    753776            sxsn  = 0._wp   ;   sysn  = 0._wp   ;   sxxsn  = 0._wp   ;   syysn  = 0._wp   ;   sxysn  = 0._wp      ! snow thickness 
    754             sxa   = 0._wp   ;   sya   = 0._wp   ;   sxxa   = 0._wp   ;   syya   = 0._wp   ;   sxya   = 0._wp      ! lead fraction 
     777            sxa   = 0._wp   ;   sya   = 0._wp   ;   sxxa   = 0._wp   ;   syya   = 0._wp   ;   sxya   = 0._wp      ! ice concentration 
    755778            sxsal = 0._wp   ;   sysal = 0._wp   ;   sxxsal = 0._wp   ;   syysal = 0._wp   ;   sxysal = 0._wp      ! ice salinity 
    756779            sxage = 0._wp   ;   syage = 0._wp   ;   sxxage = 0._wp   ;   syyage = 0._wp   ;   sxyage = 0._wp      ! ice age 
    757             sxopw = 0._wp   ;   syopw = 0._wp   ;   sxxopw = 0._wp   ;   syyopw = 0._wp   ;   sxyopw = 0._wp      ! open water in sea ice 
    758780            sxc0  = 0._wp   ;   syc0  = 0._wp   ;   sxxc0  = 0._wp   ;   syyc0  = 0._wp   ;   sxyc0  = 0._wp      ! snow layers heat content 
    759781            sxe   = 0._wp   ;   sye   = 0._wp   ;   sxxe   = 0._wp   ;   syye   = 0._wp   ;   sxye   = 0._wp      ! ice layers heat content 
     
    786808         CALL iom_rstput( iter, nitrst, numriw, 'syysn' , syysn  ) 
    787809         CALL iom_rstput( iter, nitrst, numriw, 'sxysn' , sxysn  ) 
    788          !                                                           ! lead fraction 
     810         !                                                           ! ice concentration 
    789811         CALL iom_rstput( iter, nitrst, numriw, 'sxa'   , sxa    ) 
    790812         CALL iom_rstput( iter, nitrst, numriw, 'sya'   , sya    ) 
     
    804826         CALL iom_rstput( iter, nitrst, numriw, 'syyage', syyage ) 
    805827         CALL iom_rstput( iter, nitrst, numriw, 'sxyage', sxyage ) 
    806          !                                                           ! open water in sea ice 
    807          CALL iom_rstput( iter, nitrst, numriw, 'sxopw' , sxopw  ) 
    808          CALL iom_rstput( iter, nitrst, numriw, 'syopw' , syopw  ) 
    809          CALL iom_rstput( iter, nitrst, numriw, 'sxxopw', sxxopw ) 
    810          CALL iom_rstput( iter, nitrst, numriw, 'syyopw', syyopw ) 
    811          CALL iom_rstput( iter, nitrst, numriw, 'sxyopw', sxyopw ) 
    812828         !                                                           ! snow layers heat content 
    813829         DO jk = 1, nlay_s 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/icedyn_adv_umx.F90

    r10945 r12165  
    8383      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   poa_i      ! age content 
    8484      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   pa_i       ! ice concentration 
    85       REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   pa_ip      ! melt pond fraction 
     85      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   pa_ip      ! melt pond concentration 
    8686      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   pv_ip      ! melt pond volume 
    8787      REAL(wp), DIMENSION(:,:,:,:), INTENT(inout) ::   pe_s       ! snw heat content 
     
    319319         ! 
    320320         !== Ice age ==! 
    321          IF( iom_use('iceage') .OR. iom_use('iceage_cat') ) THEN 
    322             zamsk = 1._wp 
    323             CALL adv_umx( zamsk, kn_umx, jt, kt, zdt, zudy , zvdx , zu_cat, zv_cat, zcu_box, zcv_box, & 
    324                &                                      poa_i, poa_i ) 
    325          ENDIF 
     321         zamsk = 1._wp 
     322         CALL adv_umx( zamsk, kn_umx, jt, kt, zdt, zudy , zvdx , zu_cat, zv_cat, zcu_box, zcv_box, & 
     323            &                                      poa_i, poa_i ) 
    326324         ! 
    327325         !== melt ponds ==! 
    328326         IF ( ln_pnd_H12 ) THEN 
    329             ! fraction 
     327            ! concentration 
    330328            zamsk = 1._wp 
    331329            CALL adv_umx( zamsk, kn_umx, jt, kt, zdt, zudy , zvdx , zu_cat , zv_cat , zcu_box, zcv_box, & 
     
    15291527      !!              3- check whether snow load depletes the snow-ice interface below sea level 
    15301528      !!                 and reduce it by sending the excess in the ocean 
    1531       !!              4- correct pond fraction to avoid a_ip > a_i 
     1529      !!              4- correct pond concentration to avoid a_ip > a_i 
    15321530      !! 
    15331531      !! ** input   : Max thickness of the surrounding 9-points 
     
    15991597         END DO 
    16001598      END DO  
    1601       !                                           !-- correct pond fraction to avoid a_ip > a_i 
     1599      !                                           !-- correct pond concentration to avoid a_ip > a_i 
    16021600      WHERE( pa_ip(:,:,:) > pa_i(:,:,:) )   pa_ip(:,:,:) = pa_i(:,:,:) 
    16031601      ! 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/icedyn_rdgrft.F90

    r11587 r12165  
    8686      !!                ***  ROUTINE ice_dyn_rdgrft_alloc *** 
    8787      !!------------------------------------------------------------------- 
    88       ALLOCATE( closing_net(jpij), opning(jpij)   , closing_gross(jpij),   & 
    89          &      apartf(jpij,0:jpl), hrmin(jpij,jpl), hraft(jpij,jpl)    , aridge(jpij,jpl),  & 
    90          &      hrmax(jpij,jpl), hi_hrdg(jpij,jpl)  , araft (jpij,jpl),  & 
     88      ALLOCATE( closing_net(jpij)  , opning(jpij)      , closing_gross(jpij) ,               & 
     89         &      apartf(jpij,0:jpl) , hrmin  (jpij,jpl) , hraft(jpij,jpl) , aridge(jpij,jpl), & 
     90         &      hrmax (jpij,jpl)   , hi_hrdg(jpij,jpl) , araft(jpij,jpl) ,                   & 
    9191         &      ze_i_2d(jpij,nlay_i,jpl), ze_s_2d(jpij,nlay_s,jpl), STAT=ice_dyn_rdgrft_alloc ) 
    9292 
     
    137137      REAL(wp) ::   zfac                       ! local scalar 
    138138      INTEGER , DIMENSION(jpij) ::   iptidx        ! compute ridge/raft or not 
    139       REAL(wp), DIMENSION(jpij) ::   zdivu_adv     ! divu as implied by transport scheme  (1/s) 
    140139      REAL(wp), DIMENSION(jpij) ::   zdivu, zdelt  ! 1D divu_i & delta_i 
    141140      ! 
     
    175174         
    176175         ! just needed here 
    177          CALL tab_2d_1d( npti, nptidx(1:npti), zdivu   (1:npti)      , divu_i  ) 
    178176         CALL tab_2d_1d( npti, nptidx(1:npti), zdelt   (1:npti)      , delta_i ) 
    179177         ! needed here and in the iteration loop 
     178         CALL tab_2d_1d( npti, nptidx(1:npti), zdivu   (1:npti)      , divu_i) ! zdivu is used as a work array here (no change in divu_i) 
    180179         CALL tab_3d_2d( npti, nptidx(1:npti), a_i_2d  (1:npti,1:jpl), a_i   ) 
    181180         CALL tab_3d_2d( npti, nptidx(1:npti), v_i_2d  (1:npti,1:jpl), v_i   ) 
     
    187186            closing_net(ji) = rn_csrdg * 0.5_wp * ( zdelt(ji) - ABS( zdivu(ji) ) ) - MIN( zdivu(ji), 0._wp ) 
    188187            ! 
    189             ! divergence given by the advection scheme 
    190             !   (which may not be equal to divu as computed from the velocity field) 
    191             IF    ( ln_adv_Pra ) THEN 
    192                zdivu_adv(ji) = ( 1._wp - ato_i_1d(ji) - SUM( a_i_2d(ji,:) ) ) * r1_rdtice 
    193             ELSEIF( ln_adv_UMx ) THEN 
    194                zdivu_adv(ji) = zdivu(ji) 
    195             ENDIF 
    196             ! 
    197             IF( zdivu_adv(ji) < 0._wp )   closing_net(ji) = MAX( closing_net(ji), -zdivu_adv(ji) )   ! make sure the closing rate is large enough 
    198             !                                                                                        ! to give asum = 1.0 after ridging 
     188            IF( zdivu(ji) < 0._wp )   closing_net(ji) = MAX( closing_net(ji), -zdivu(ji) )   ! make sure the closing rate is large enough 
     189            !                                                                                ! to give asum = 1.0 after ridging 
    199190            ! Opening rate (non-negative) that will give asum = 1.0 after ridging. 
    200             opning(ji) = closing_net(ji) + zdivu_adv(ji) 
     191            opning(ji) = closing_net(ji) + zdivu(ji) 
    201192         END DO 
    202193         ! 
     
    215206               ato_i_1d   (ipti)   = ato_i_1d   (ji) 
    216207               closing_net(ipti)   = closing_net(ji) 
    217                zdivu_adv  (ipti)   = zdivu_adv  (ji) 
     208               zdivu      (ipti)   = zdivu      (ji) 
    218209               opning     (ipti)   = opning     (ji) 
    219210            ENDIF 
     
    259250               ELSE 
    260251                  iterate_ridging  = 1 
    261                   zdivu_adv  (ji) = zfac * r1_rdtice 
    262                   closing_net(ji) = MAX( 0._wp, -zdivu_adv(ji) ) 
    263                   opning     (ji) = MAX( 0._wp,  zdivu_adv(ji) ) 
     252                  zdivu      (ji) = zfac * r1_rdtice 
     253                  closing_net(ji) = MAX( 0._wp, -zdivu(ji) ) 
     254                  opning     (ji) = MAX( 0._wp,  zdivu(ji) ) 
    264255               ENDIF 
    265256            END DO 
     
    309300 
    310301      !                       ! Ice thickness needed for rafting 
    311       WHERE( pa_i(1:npti,:) > epsi20 )   ;   zhi(1:npti,:) = pv_i(1:npti,:) / pa_i(1:npti,:) 
     302      WHERE( pa_i(1:npti,:) > epsi10 )   ;   zhi(1:npti,:) = pv_i(1:npti,:) / pa_i(1:npti,:) 
    312303      ELSEWHERE                          ;   zhi(1:npti,:) = 0._wp 
    313304      END WHERE 
     
    328319      zasum(1:npti) = pato_i(1:npti) + SUM( pa_i(1:npti,:), dim=2 ) 
    329320      ! 
    330       WHERE( zasum(1:npti) > epsi20 )   ;   z1_asum(1:npti) = 1._wp / zasum(1:npti) 
     321      WHERE( zasum(1:npti) > epsi10 )   ;   z1_asum(1:npti) = 1._wp / zasum(1:npti) 
    331322      ELSEWHERE                         ;   z1_asum(1:npti) = 0._wp 
    332323      END WHERE 
     
    454445      ! Based on the ITD of ridging and ridged ice, convert the net closing rate to a gross closing rate.   
    455446      ! NOTE: 0 < aksum <= 1 
    456       WHERE( zaksum(1:npti) > epsi20 )   ;   closing_gross(1:npti) = pclosing_net(1:npti) / zaksum(1:npti) 
     447      WHERE( zaksum(1:npti) > epsi10 )   ;   closing_gross(1:npti) = pclosing_net(1:npti) / zaksum(1:npti) 
    457448      ELSEWHERE                          ;   closing_gross(1:npti) = 0._wp 
    458449      END WHERE 
     
    537528            IF( apartf(ji,jl1) > 0._wp .AND. closing_gross(ji) > 0._wp ) THEN   ! only if ice is ridging 
    538529 
    539                IF( a_i_2d(ji,jl1) > epsi20 ) THEN   ;   z1_ai(ji) = 1._wp / a_i_2d(ji,jl1) 
     530               IF( a_i_2d(ji,jl1) > epsi10 ) THEN   ;   z1_ai(ji) = 1._wp / a_i_2d(ji,jl1) 
    540531               ELSE                                 ;   z1_ai(ji) = 0._wp 
    541532               ENDIF 
     
    595586               ! virtual salt flux to keep salinity constant 
    596587               IF( nn_icesal /= 2 )  THEN 
    597                   sirdg2(ji)     = sirdg2(ji)     - vsw * ( sss_1d(ji) - s_i_1d(ji) )        ! ridge salinity = s_i 
     588                  sirdg2(ji)     = sirdg2(ji)     - vsw * ( sss_1d(ji) - s_i_1d(ji) )       ! ridge salinity = s_i 
    598589                  sfx_bri_1d(ji) = sfx_bri_1d(ji) + sss_1d(ji) * vsw * rhoi * r1_rdtice  &  ! put back sss_m into the ocean 
    599590                     &                            - s_i_1d(ji) * vsw * rhoi * r1_rdtice     ! and get  s_i  from the ocean  
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/iceitd.F90

    r11586 r12165  
    211211               CALL itd_glinear( zhb0(1:npti)  , zhb1(1:npti)  , h_ib_1d(1:npti)  , a_i_1d(1:npti)  ,  &   ! in 
    212212                  &              g0  (1:npti,1), g1  (1:npti,1), hL     (1:npti,1), hR    (1:npti,1)   )   ! out 
    213                   ! 
     213               ! 
    214214               ! Area lost due to melting of thin ice 
    215215               DO ji = 1, npti 
     
    218218                     ! 
    219219                     zdh0 =  h_i_1d(ji) - h_ib_1d(ji)                 
    220                      IF( zdh0 < 0.0 ) THEN      !remove area from category 1 
     220                     IF( zdh0 < 0.0 ) THEN      ! remove area from category 1 
    221221                        zdh0 = MIN( -zdh0, hi_max(1) ) 
    222222                        !Integrate g(1) from 0 to dh0 to estimate area melted 
     
    226226                           zx1    = zetamax 
    227227                           zx2    = 0.5 * zetamax * zetamax  
    228                            zda0   = g1(ji,1) * zx2 + g0(ji,1) * zx1                        ! ice area removed 
     228                           zda0   = g1(ji,1) * zx2 + g0(ji,1) * zx1                ! ice area removed 
    229229                           zdamax = a_i_1d(ji) * (1.0 - h_i_1d(ji) / h_ib_1d(ji) ) ! Constrain new thickness <= h_i                 
    230                            zda0   = MIN( zda0, zdamax )                                                  ! ice area lost due to melting  
    231                            !     of thin ice (zdamax > 0) 
     230                           zda0   = MIN( zda0, zdamax )                            ! ice area lost due to melting of thin ice (zdamax > 0) 
    232231                           ! Remove area, conserving volume 
    233232                           h_i_1d(ji) = h_i_1d(ji) * a_i_1d(ji) / ( a_i_1d(ji) - zda0 ) 
     
    349348      DO ji = 1, npti 
    350349         ! 
    351          IF( paice(ji) > epsi10  .AND. phice(ji) > 0._wp )  THEN 
     350         IF( paice(ji) > epsi10  .AND. phice(ji) > epsi10 )  THEN 
    352351            ! 
    353352            ! Initialize hL and hR 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/CRS/README.rst

    r10279 r12165  
    22On line biogeochemistry coarsening 
    33********************************** 
     4 
     5.. todo:: 
     6 
     7 
    48 
    59.. contents:: 
     
    6367                              ! 1, MAX of KZ 
    6468                              ! 2, MIN of KZ 
    65                               ! 3, 10^(MEAN(LOG(KZ))  
    66                               ! 4, MEDIANE of KZ  
      69                              ! 3, 10^(MEAN(LOG(KZ))) 
      70                              ! 4, MEDIAN of KZ 
    6771      ln_crs_wn   = .false.   ! wn coarsened (T) or computed using horizontal divergence ( F ) 
    6872                              !                           ! 
     
    7377  the north-fold lateral boundary condition (ORCA025, ORCA12, ORCA36, ...). 
    7478- ``nn_msh_crs = 1`` will activate the generation of the coarsened grid meshmask. 
    75 - ``nn_crs_kz`` is the operator to coarsen the vertical mixing coefficient.  
     79- ``nn_crs_kz`` is the operator to coarsen the vertical mixing coefficient. 
    7680- ``ln_crs_wn`` 
    7781 
     
    8084  - when ``key_vvl`` is not activated, 
    8185 
    82     - coarsened vertical velocities are computed using horizontal divergence (``ln_crs_wn = .false.``)  
     86    - coarsened vertical velocities are computed using horizontal divergence (``ln_crs_wn = .false.``) 
    8387    - or coarsened vertical velocities are computed with an average operator (``ln_crs_wn = .true.``) 
    8488- ``ln_crs_top = .true.``: should be activated to run the BGC model in coarsened space; 
     
    97101 
    98102In the [attachment:iodef.xml iodef.xml]  file, a "nemo" context is defined and 
    99 some variable defined in [attachment:file_def.xml file_def.xml] are writted on the ocean-dynamic grid.   
      103some variables defined in [attachment:file_def.xml file_def.xml] are written on the ocean-dynamic grid. 
    100104To write variables on the coarsened grid, and in particular the passive tracers, 
    101105a "nemo_crs" context should be defined in [attachment:iodef.xml iodef.xml] and 
     
    111115  interpolated `on-the-fly <http://forge.ipsl.jussieu.fr/nemo/wiki/Users/SetupNewConfiguration/Weight-creator>`_. 
    112116  Example of namelist for PISCES : 
    113    
     117 
    114118   .. code-block:: fortran 
    115119 
     
    134138         rn_trfac(14)  =   1.0e-06  !  -      -      -     - 
    135139         rn_trfac(23)  =   7.6e-06  !  -      -      -     - 
    136        
     140 
    137141         cn_dir        =  './'      !  root directory for the location of the data files 
    138142 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/DYN/dynnxt.F90

    r10425 r12165  
    175175      IF( neuler == 0 .AND. kt == nit000 ) THEN        !* Euler at first time-step: only swap 
    176176         DO jk = 1, jpkm1 
     177            ub(:,:,jk) = un(:,:,jk)                         ! ub <-- un 
     178            vb(:,:,jk) = vn(:,:,jk) 
    177179            un(:,:,jk) = ua(:,:,jk)                         ! un <-- ua 
    178180            vn(:,:,jk) = va(:,:,jk) 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/FLO/flodom.F90

    r11413 r12165  
    433433      IF( ABS(dlx) > 1.0_wp ) dlx = 1.0_wp 
    434434      ! 
    435       dld = ATAN(DSQRT( 1._wp * ( 1._wp-dlx )/( 1._wp+dlx ) )) * 222.24_wp / dls 
     435      dld = ATAN(SQRT( 1._wp * ( 1._wp-dlx )/( 1._wp+dlx ) )) * 222.24_wp / dls 
    436436      flo_dstnce = dld * 1000._wp 
    437437      ! 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/FLO/flowri.F90

    r11413 r12165  
    221221               clname=TRIM(clname)//".nc" 
    222222 
    223                CALL fliocrfd( clname , (/ 'ntraj' , 't' /), (/ jpnfl , -1  /) , numflo ) 
     223               CALL fliocrfd( clname , (/'ntraj' , '    t' /), (/ jpnfl , -1/) , numflo ) 
    224224    
    225225               CALL fliodefv( numflo, 'traj_lon'    , (/1,2/), v_t=flio_r8, long_name="Longitude"           , units="degrees_east"  ) 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/LBC/mppini.F90

    r11586 r12165  
    538538 9401    FORMAT('              '   ,20('   ',i3,'          ') ) 
    539539 9402    FORMAT('       ',i3,' *  ',20(i3,'  x',i3,'   *   ') ) 
    540  9404    FORMAT('           *  '   ,20('      ',i3,'   *   ') ) 
     540 9404    FORMAT('           *  '   ,20('     ' ,i4,'   *   ') ) 
    541541      ENDIF 
    542542          
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/LDF/ldfdyn.F90

    r11348 r12165  
    315315            DO jj = 1, jpj             ! Set local gridscale values 
    316316               DO ji = 1, jpi 
    317                   esqt(ji,jj) = ( e1e2t(ji,jj) / ( e1t(ji,jj) + e2t(ji,jj) ) )**2  
    318                   esqf(ji,jj) = ( e1e2f(ji,jj) / ( e1f(ji,jj) + e2f(ji,jj) ) )**2  
     317                  esqt(ji,jj) = ( 2._wp * e1e2t(ji,jj) / ( e1t(ji,jj) + e2t(ji,jj) ) )**2  
     318                  esqf(ji,jj) = ( 2._wp * e1e2f(ji,jj) / ( e1f(ji,jj) + e2f(ji,jj) ) )**2  
    319319               END DO 
    320320            END DO 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OFF/nemogcm.F90

    r11348 r12165  
    114114#else 
    115115                                CALL dta_dyn    ( istp )         ! Interpolation of the dynamical fields 
     116#endif 
     117                                CALL trc_stp    ( istp )         ! time-stepping 
     118#if ! defined key_sed_off 
    116119         IF( .NOT.ln_linssh )   CALL dta_dyn_swp( istp )         ! swap of sea  surface height and vertical scale factors 
    117120#endif 
    118                                 CALL trc_stp    ( istp )         ! time-stepping 
    119121                                CALL stp_ctl    ( istp, indic )  ! Time loop: control and print 
    120122         istp = istp + 1 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/TOP/README.rst

    r10549 r12165  
    33*************** 
    44 
     5.. todo:: 
     6 
     7 
     8 
    59.. contents:: 
    6    :local: 
    7  
    8 TOP (Tracers in the Ocean Paradigm) is the NEMO hardwired interface toward biogeochemical models and 
    9 provide the physical constraints/boundaries for oceanic tracers. 
    10 It consists of a modular framework to handle multiple ocean tracers, including also a variety of built-in modules. 
     10   :local: 
     11 
     12TOP (Tracers in the Ocean Paradigm) is the NEMO hardwired interface toward 
      13biogeochemical models and provides the physical constraints/boundaries for oceanic tracers. 
     14It consists of a modular framework to handle multiple ocean tracers, 
      15including a variety of built-in modules. 
    1116 
    1217This component of the NEMO framework allows one to exploit available modules (see below) and 
    1318further develop a range of applications, spanning from the implementation of a dye passive tracer to 
    1419evaluate dispersion processes (by means of MY_TRC), track water masses age (AGE module), 
    15 assess the ocean interior penetration of persistent chemical compounds (e.g., gases like CFC or even PCBs), 
    16 up to the full set of equations involving marine biogeochemical cycles. 
     20assess the ocean interior penetration of persistent chemical compounds 
     21(e.g., gases like CFC or even PCBs), up to the full set of equations involving 
     22marine biogeochemical cycles. 
    1723 
    1824Structure 
    1925========= 
    2026 
    21 TOP interface has the following location in the source code ``./src/MBG/`` and 
     27TOP interface has the following location in the source code :file:`./src/TOP` and 
    2228the following modules are available: 
    2329 
    24 ``TRP`` 
    25    Interface to NEMO physical core for computing tracers transport 
    26  
    27 ``CFC`` 
    28    Inert carbon tracers (CFC11,CFC12,SF6) 
    29  
    30 ``C14`` 
    31    Radiocarbon passive tracer 
    32  
    33 ``AGE`` 
    34    Water age tracking 
    35  
    36 ``MY_TRC`` 
    37    Template for creation of new modules and external BGC models coupling 
    38  
    39 ``PISCES`` 
    40    Built in BGC model. 
    41    See [https://www.geosci-model-dev.net/8/2465/2015/gmd-8-2465-2015-discussion.html Aumont et al. (2015)] for 
    42    a throughout description. | 
    43  
    44 The usage of TOP is activated i) by including in the configuration definition  the component ``MBG`` and 
    45 ii) by adding the macro ``key_top`` in the configuration CPP file 
    46 (see for more details [http://forge.ipsl.jussieu.fr/nemo/wiki/Users "Learn more about the model"]). 
     30:file:`TRP` 
     31   Interface to NEMO physical core for computing tracers transport 
     32 
     33:file:`CFC` 
     34   Inert carbon tracers (CFC11,CFC12,SF6) 
     35 
     36:file:`C14` 
     37   Radiocarbon passive tracer 
     38 
     39:file:`AGE` 
     40   Water age tracking 
     41 
     42:file:`MY_TRC` 
     43   Template for creation of new modules and external BGC models coupling 
     44 
     45:file:`PISCES` 
      46   Built-in BGC model. See :cite:`gmd-8-2465-2015` for a thorough description. 
     47 
      48The usage of TOP is activated 
      49*i)* by including the component ``TOP`` in the configuration definition and 
      50*ii)* by adding the macro ``key_top`` in the configuration CPP file 
      51(for more details, see :forge:`"Learn more about the model" <wiki/Users>`). 
    4752 
    4853As an example, the user can refer to already available configurations in the code, 
     
    5156(see also Section 4) . 
    5257 
    53 Note that, since version 4.0, TOP interface core functionalities are activated by means of logical keys and 
     58Note that, since version 4.0, 
     59TOP interface core functionalities are activated by means of logical keys and 
    5460all submodules preprocessing macros from previous versions were removed. 
    5561 
     
    5763 
    5864``key_iomput`` 
    59    use XIOS I/O 
     65   use XIOS I/O 
    6066 
    6167``key_agrif`` 
    62    enable AGRIF coupling 
     68   enable AGRIF coupling 
    6369 
    6470``key_trdtrc`` & ``key_trdmxl_trc`` 
    65    trend computation for tracers 
     71   trend computation for tracers 
    6672 
    6773Synthetic Workflow 
    6874================== 
    6975 
    70 A synthetic description of the TOP interface workflow is given below to summarize the steps involved in 
    71 the computation of biogeochemical and physical trends and their time integration and outputs, 
     76A synthetic description of the TOP interface workflow is given below to 
     77summarize the steps involved in the computation of biogeochemical and physical trends and 
     78their time integration and outputs, 
    7279by reporting also the principal Fortran subroutine herein involved. 
    7380 
    74 **Model initialization (OPA_SRC/nemogcm.F90)** 
    75  
    76 call to trc_init (trcini.F90) 
    77  
    78   ↳ call trc_nam (trcnam.F90) to initialize TOP tracers and run setting 
    79  
    80   ↳ call trc_ini_sms, to initialize each submodule 
    81  
    82   ↳ call trc_ini_trp, to initialize transport for tracers 
    83  
    84   ↳ call trc_ice_ini, to initialize tracers in seaice 
    85  
    86   ↳ call trc_ini_state, read passive tracers from a restart or input data 
    87  
    88   ↳ call trc_sub_ini, setup substepping if {{{nn_dttrc /= 1}}} 
    89  
    90 **Time marching procedure (OPA_SRC/stp.F90)** 
    91  
    92 call to trc_stp.F90 (trcstp.F90) 
    93  
    94   ↳ call trc_sub_stp, averaging physical variables for sub-stepping 
    95  
    96   ↳ call trc_wri, call XIOS for output of data 
    97  
    98   ↳ call trc_sms, compute BGC trends for each submodule 
    99  
    100     ↳ call trc_sms_my_trc, includes also surface and coastal BCs trends 
    101  
    102   ↳ call trc_trp (TRP/trctrp.F90), compute physical trends 
    103  
    104     ↳ call trc_sbc, get trend due to surface concentration/dilution 
    105  
    106     ↳ call trc_adv, compute tracers advection 
    107  
    108     ↳ call to trc_ldf, compute tracers lateral diffusion 
    109  
    110     ↳ call to trc_zdf, vertical mixing and after tracer fields 
    111  
    112     ↳ call to trc_nxt, tracer fields at next time step. Lateral Boundary Conditions are solved in here. 
    113  
    114     ↳ call to trc_rad, Correct artificial negative concentrations 
    115  
    116   ↳ call trc_rst_wri, output tracers restart files 
     81Model initialization (:file:`./src/OCE/nemogcm.F90`) 
     82---------------------------------------------------- 
     83 
     84Call to ``trc_init`` subroutine (:file:`./src/TOP/trcini.F90`) to initialize TOP. 
     85 
     86.. literalinclude:: ../../../src/TOP/trcini.F90 
     87   :language:        fortran 
     88   :lines:           41-86 
     89   :emphasize-lines: 21,30-32,38-40 
     90   :caption:         ``trc_init`` subroutine 
     91 
     92Time marching procedure (:file:`./src/OCE/step.F90`) 
     93---------------------------------------------------- 
     94 
     95Call to ``trc_stp`` subroutine (:file:`./src/TOP/trcstp.F90`) to compute/update passive tracers. 
     96 
     97.. literalinclude:: ../../../src/TOP/trcstp.F90 
     98   :language:        fortran 
     99   :lines:           46-125 
     100   :emphasize-lines: 42,55-57 
     101   :caption:         ``trc_stp`` subroutine 
     102 
     103BGC trends computation for each submodule (:file:`./src/TOP/trcsms.F90`) 
     104------------------------------------------------------------------------ 
     105 
     106.. literalinclude:: ../../../src/TOP/trcsms.F90 
     107   :language:        fortran 
     108   :lines:           21 
     109   :caption:         :file:`trcsms` snippet 
     110 
     111Physical trends computation (:file:`./src/TOP/TRP/trctrp.F90`) 
     112-------------------------------------------------------------- 
     113 
     114.. literalinclude:: ../../../src/TOP/TRP/trctrp.F90 
     115   :language:        fortran 
     116   :lines:           46-95 
     117   :emphasize-lines: 17,21,29,33-35 
     118   :caption:         ``trc_trp`` subroutine 
    117119 
    118120Namelists walkthrough 
    119121===================== 
    120122 
    121 namelist_top 
    122 ------------ 
    123  
    124 Here below are listed the features/options of the TOP interface accessible through the namelist_top_ref and 
    125 modifiable by means of namelist_top_cfg (as for NEMO physical ones). 
    126  
    127 Note that ## is used to refer to a number in an array field. 
     123:file:`namelist_top` 
     124-------------------- 
     125 
      126Listed below are the features/options of the TOP interface accessible through 
      127:file:`namelist_top_ref` and modifiable by means of :file:`namelist_top_cfg` 
      128(as for the NEMO physical ones). 
     129 
     130Note that ``##`` is used to refer to a number in an array field. 
    128131 
    129132.. literalinclude:: ../../namelists/namtrc_run 
     133   :language: fortran 
    130134 
    131135.. literalinclude:: ../../namelists/namtrc 
     136   :language: fortran 
    132137 
    133138.. literalinclude:: ../../namelists/namtrc_dta 
     139   :language: fortran 
    134140 
    135141.. literalinclude:: ../../namelists/namtrc_adv 
     142   :language: fortran 
    136143 
    137144.. literalinclude:: ../../namelists/namtrc_ldf 
     145   :language: fortran 
    138146 
    139147.. literalinclude:: ../../namelists/namtrc_rad 
     148   :language: fortran 
    140149 
    141150.. literalinclude:: ../../namelists/namtrc_snk 
     151   :language: fortran 
    142152 
    143153.. literalinclude:: ../../namelists/namtrc_dmp 
     154   :language: fortran 
    144155 
    145156.. literalinclude:: ../../namelists/namtrc_ice 
     157   :language: fortran 
    146158 
    147159.. literalinclude:: ../../namelists/namtrc_trd 
     160   :language: fortran 
    148161 
    149162.. literalinclude:: ../../namelists/namtrc_bc 
     163   :language: fortran 
    150164 
    151165.. literalinclude:: ../../namelists/namtrc_bdy 
     166   :language: fortran 
    152167 
    153168.. literalinclude:: ../../namelists/namage 
    154  
    155 Two main types of data structure are used within TOP interface to initialize tracer properties (1) and 
     169   :language: fortran 
     170 
      171Two main types of data structure are used within the TOP interface: 
      172one to initialize tracer properties (1) and 
      173one to provide the related initial and boundary conditions (2). 
    157174 
    158 **1. TOP tracers initialization**: sn_tracer (namtrc) 
     1751. TOP tracers initialization: ``sn_tracer`` (``&namtrc``) 
     176^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    159177 
    160178Beside providing name and metadata for tracers, 
    161 here are also defined the use of initial ({{{sn_tracer%llinit}}}) and 
    162 boundary ({{{sn_tracer%llsbc, sn_tracer%llcbc, sn_tracer%llobc}}}) conditions. 
    163  
    164 In the following, an example of the full structure definition is given for two idealized tracers both with 
    165 initial conditions given, while the first has only surface boundary forcing and 
      179the use of initial (``sn_tracer%llinit``) and 
      180boundary (``sn_tracer%llsbc``, ``sn_tracer%llcbc``, ``sn_tracer%llobc``) conditions is also defined here. 
     181 
     182In the following, an example of the full structure definition is given for 
     183two idealized tracers both with initial conditions given, 
     184while the first has only surface boundary forcing and 
    166185the second both surface and coastal forcings: 
    167186 
    168187.. code-block:: fortran 
    169188 
    170    !             !    name   !           title of the field            !   units    ! initial data ! sbc   !   cbc  !   obc  ! 
    171    sn_tracer(1)  = 'TRC1'    , 'Tracer 1 Concentration                ',   ' - '    ,  .true.      , .true., .false., .true. 
    172    sn_tracer(2)  = 'TRC2 '   , 'Tracer 2 Concentration                ',   ' - '    ,  .true.      , .true., .true. , .false. 
     189   !             !    name   !           title of the field            !   units    ! initial data ! sbc   !   cbc  !   obc  ! 
     190   sn_tracer(1)  = 'TRC1'    , 'Tracer 1 Concentration                ',   ' - '    ,  .true.      , .true., .false., .true. 
     191   sn_tracer(2)  = 'TRC2 '   , 'Tracer 2 Concentration                ',   ' - '    ,  .true.      , .true., .true. , .false. 
    173192 
    174193As tracers in BGC models are increasingly growing, 
     
    177196.. code-block:: fortran 
    178197 
    179    !             !    name   !           title of the field            !   units    ! initial data ! 
    180    sn_tracer(1)  = 'TRC1'    , 'Tracer 1 Concentration                ',   ' - '    ,   .true. 
    181    sn_tracer(2)  = 'TRC2 '   , 'Tracer 2 Concentration                ',   ' - '    ,   .true. 
    182    ! sbc 
    183    sn_tracer(1)%llsbc = .true. 
    184    sn_tracer(2)%llsbc = .true. 
    185    ! cbc 
    186    sn_tracer(2)%llcbc = .true. 
     198   !             !    name   !           title of the field            !   units    ! initial data ! 
     199   sn_tracer(1)  = 'TRC1'    , 'Tracer 1 Concentration                ',   ' - '    ,   .true. 
     200   sn_tracer(2)  = 'TRC2 '   , 'Tracer 2 Concentration                ',   ' - '    ,   .true. 
     201   ! sbc 
     202   sn_tracer(1)%llsbc = .true. 
     203   sn_tracer(2)%llsbc = .true. 
     204   ! cbc 
     205   sn_tracer(2)%llcbc = .true. 
    187206 
    188207The data structure is internally initialized by code with dummy names and 
    189 all initialization/forcing logical fields set to .false. . 
    190  
    191 **2. Structures to read input initial and boundary conditions**: namtrc_dta (sn_trcdta), namtrc_bc (sn_trcsbc/sn_trccbc/sn_trcobc) 
     208all initialization/forcing logical fields set to ``.false.`` . 
     209 
     2102. Structures to read input initial and boundary conditions: ``&namtrc_dta`` (``sn_trcdta``), ``&namtrc_bc`` (``sn_trcsbc`` / ``sn_trccbc`` / ``sn_trcobc``) 
     211^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ 
    192212 
    193213The overall data structure (Fortran type) is based on the general one defined for NEMO core in the SBC component 
    194 (see details in User Manual SBC Chapter on Input Data specification). 
    195  
    196 Input fields are prescribed within namtrc_dta (with sn_trcdta structure), 
    197 while Boundary Conditions are applied to the model by means of namtrc_bc, 
    198 with dedicated structure fields for surface (sn_trcsbc), riverine (sn_trccbc), and 
    199 lateral open (sn_trcobc) boundaries. 
     214(see details in ``SBC`` Chapter of :doc:`Reference Manual <cite>` on Input Data specification). 
     215 
     216Input fields are prescribed within ``&namtrc_dta`` (with ``sn_trcdta`` structure), 
     217while Boundary Conditions are applied to the model by means of ``&namtrc_bc``, 
     218with dedicated structure fields for surface (``sn_trcsbc``), riverine (``sn_trccbc``), and 
     219lateral open (``sn_trcobc``) boundaries. 
    200220 
    201221The following example illustrates the data structure in the case of initial condition for 
    202 a single tracer contained in the file named tracer_1_data.nc (.nc is implicitly assumed in namelist filename), 
    203 with a doubled initial value, and located in the usr/work/model/inputdata/ folder: 
     222a single tracer contained in the file named :file:`tracer_1_data.nc` 
     223(``.nc`` is implicitly assumed in namelist filename), 
     224with a doubled initial value, and located in the :file:`usr/work/model/inputdata` folder: 
    204225 
    205226.. code-block:: fortran 
    206227 
    207    !               !  file name             ! frequency (hours) ! variable  ! time interp. !  clim  ! 'yearly'/ ! weights  ! rotation ! land/sea mask ! 
    208    !               !                        !  (if <0  months)  !   name    !   (logical)  !  (T/F) ! 'monthly' ! filename ! pairing  ! filename      ! 
    209      sn_trcdta(1)  = 'tracer_1_data'        ,        -12        ,  'TRC1'   ,    .false.   , .true. , 'yearly'  , ''       , ''       , '' 
    210      rf_trfac(1) = 2.0 
    211      cn_dir = “usr/work/model/inputdata/” 
    212  
    213 Note that, the Lateral Open Boundaries conditions are applied on the segments defined for the physical core of NEMO 
    214 (see BDY description in the User Manual). 
    215  
    216 namelist_trc 
    217 ------------ 
    218  
    219 Here below the description of namelist_trc_ref used to handle Carbon tracers modules, namely CFC and C14. 
    220  
    221 |||| &'''namcfc'''     !   CFC || 
    222  
    223 |||| &'''namc14_typ'''     !  C14 - type of C14 tracer, default values of C14/C and pco2 || 
    224  
    225 |||| &'''namc14_sbc'''     !  C14 - surface BC || 
    226  
    227 |||| &'''namc14_fcg'''     !  files & dates || 
     228   !               !  file name             ! frequency (hours) ! variable  ! time interp. !  clim  ! 'yearly'/ ! weights  ! rotation ! land/sea mask ! 
     229   !               !                        !  (if <0  months)  !   name    !   (logical)  !  (T/F) ! 'monthly' ! filename ! pairing  ! filename      ! 
     230     sn_trcdta(1)  = 'tracer_1_data'        ,        -12        ,  'TRC1'   ,    .false.   , .true. , 'yearly'  , ''       , ''       , '' 
     231     rf_trfac(1) = 2.0 
     232     cn_dir = 'usr/work/model/inputdata/' 
     233 
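
A surface boundary forcing for the same tracer would be declared in ``&namtrc_bc`` through the ``sn_trcsbc`` structure.
The minimal sketch below assumes that ``sn_trcsbc`` follows the same column layout as the ``sn_trcdta`` example above;
the file name and frequency settings are purely illustrative:

.. code-block:: fortran

   !               !  file name      ! frequency (hours) ! variable ! time interp. !  clim  ! 'yearly'/ ! weights  ! rotation ! land/sea mask !
   !               !                 !  (if <0  months)  !   name   !   (logical)  !  (T/F) ! 'monthly' ! filename ! pairing  ! filename      !
     sn_trcsbc(1)  = 'tracer_1_sbc'  ,        -1         ,  'TRC1'  ,    .true.    , .true. , 'yearly'  , ''       , ''       , ''
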
      234Note that the lateral open boundary conditions are applied on 
      235the segments defined for the physical core of NEMO 
      236(see the ``BDY`` description in the :doc:`Reference Manual <cite>`). 
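
For instance, assuming the BDY segments are already defined for the physical core,
open boundary data for the first tracer of the earlier example would be enabled with the same compact syntax shown above:

.. code-block:: fortran

   ! obc
   sn_tracer(1)%llobc = .true.
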
     237 
     238:file:`namelist_trc` 
     239-------------------- 
     240 
      241Here below is the description of :file:`namelist_trc_ref`, used to handle the carbon tracer modules, 
     242namely CFC and C14. 
     243 
     244.. literalinclude:: ../../../cfgs/SHARED/namelist_trc_ref 
     245   :language: fortran 
     246   :lines: 7,17,26,34 
     247   :caption: :file:`namelist_trc_ref` snippet 
    228248 
    229249``MY_TRC`` interface for coupling external BGC models 
    230250===================================================== 
    231251 
    232 The generalized interface is pivoted on MY_TRC module that contains template files to build the coupling between 
      252The generalized interface pivots on the MY_TRC module, which contains template files to 
      253build the coupling between 
    233254NEMO and any external BGC model. 
    234255 
    235 The call to MY_TRC is activated by setting ``ln_my_trc = .true.`` (in namtrc) 
     256The call to MY_TRC is activated by setting ``ln_my_trc = .true.`` (in ``&namtrc``) 
    236257 
    237258The following 6 Fortran files are available in MY_TRC, with the specific purposes described below. 
    238259 
    239 ``par_my_trc.F90`` 
    240    This module allows to define additional arrays and public variables to be used within the MY_TRC interface 
    241  
    242 ``trcini_my_trc.F90`` 
    243    Here are initialized user defined namelists and the call to the external BGC model initialization procedures to 
    244    populate general tracer array (trn and trb). Here are also likely to be defined suport arrays related to 
    245    system metrics that could be needed by the BGC model. 
    246  
    247 ``trcnam_my_trc.F90`` 
    248    This routine is called at the beginning of trcini_my_trc and should contain the initialization of 
    249    additional namelists for the BGC model or user-defined code. 
    250  
    251 ``trcsms_my_trc.F90`` 
    252    The routine performs the call to Boundary Conditions and its main purpose is to 
    253    contain the Source-Minus-Sinks terms due to the biogeochemical processes of the external model. 
    254    Be aware that lateral boundary conditions are applied in trcnxt routine. 
    255    IMPORTANT: the routines to compute the light penetration along the water column and 
    256    the tracer vertical sinking should be defined/called in here, as generalized modules are still missing in 
    257    the code. 
    258  
    259 ``trcice_my_trc.F90`` 
    260    Here it is possible to prescribe the tracers concentrations in the seaice that will be used as 
    261    boundary conditions when ice melting occurs (nn_ice_tr =1 in namtrc_ice). 
    262    See e.g. the correspondent PISCES subroutine. 
    263  
    264 ``trcwri_my_trc.F90`` 
    265    This routine performs the output of the model tracers (only those defined in namtrc) using IOM module 
    266    (see Manual Chapter “Output and Diagnostics”). 
    267    It is possible to place here the output of additional variables produced by the model, 
    268    if not done elsewhere in the code, using the call to iom_put. 
     260:file:`par_my_trc.F90` 
      261   This module allows one to define additional arrays and public variables to 
     262   be used within the MY_TRC interface 
     263 
     264:file:`trcini_my_trc.F90` 
      265   Here user-defined namelists are initialized, together with 
      266   the call to the external BGC model initialization procedures to populate the general tracer arrays 
      267   (``trn`` and ``trb``). 
      268   Support arrays related to system metrics that may be needed by the BGC model 
      269   are also likely to be defined here. 
     270 
     271:file:`trcnam_my_trc.F90` 
     272   This routine is called at the beginning of ``trcini_my_trc`` and 
     273   should contain the initialization of additional namelists for the BGC model or user-defined code. 
     274 
     275:file:`trcsms_my_trc.F90` 
     276   The routine performs the call to Boundary Conditions and its main purpose is to 
     277   contain the Source-Minus-Sinks terms due to the biogeochemical processes of the external model. 
      278   Be aware that lateral boundary conditions are applied in the ``trcnxt`` routine. 
     279 
     280   .. warning:: 
     281      The routines to compute the light penetration along the water column and 
     282      the tracer vertical sinking should be defined/called in here, 
     283      as generalized modules are still missing in the code. 
     284 
     285:file:`trcice_my_trc.F90` 
      286   Here it is possible to prescribe the tracer concentrations in the sea-ice that 
      287   will be used as boundary conditions when ice melting occurs (``nn_ice_tr = 1`` in ``&namtrc_ice``). 
      288   See e.g. the corresponding PISCES subroutine. 
     289 
     290:file:`trcwri_my_trc.F90` 
     291   This routine performs the output of the model tracers (only those defined in ``&namtrc``) using 
     292   IOM module (see chapter “Output and Diagnostics” in the :doc:`Reference Manual <cite>`). 
     293   It is possible to place here the output of additional variables produced by the model, 
     294   if not done elsewhere in the code, using the call to ``iom_put``. 
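
   As an illustration, an extra 3D field computed by the external model could be written from this routine with
   a single call such as the one below; the output name and the array are hypothetical and
   must be matched by a corresponding field definition in the XIOS xml files:

   .. code-block:: fortran

      CALL iom_put( 'MYTRC_diag3d', zdiag3d(:,:,:) )   ! hypothetical extra diagnostic of the BGC model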
    269295 
    270296Coupling an external BGC model using NEMO framework 
     
    273299The coupling with an external BGC model through the NEMO compilation framework can be achieved in 
    274300different ways according to the degree of coding complexity of the Biogeochemical model, like e.g., 
    275 the whole code is made only by one file or it has multiple modules and interfaces spread across several subfolders. 
    276  
    277 Beside the 6 core files of MY_TRC module, let’s assume an external BGC model named *MYBGC* and constituted by 
    278 a rather essential coding structure, likely few Fortran files. 
      301whether the whole code consists of only one file or 
      302has multiple modules and interfaces spread across several subfolders. 
     303 
      304Besides the 6 core files of the MY_TRC module, let’s assume an external BGC model named *MYBGC*, 
      305consisting of a rather simple coding structure, likely a few Fortran files. 
    279306The new coupled configuration name is *NEMO_MYBGC*. 
    280307 
    281 The best solution is to have all files (the modified ``MY_TRC`` routines and the BGC model ones) placed in 
    282 a unique folder with root ``MYBGCPATH`` and to use the makenemo external readdressing of ``MY_SRC`` folder. 
    283  
    284 The coupled configuration listed in ``work_cfgs.txt`` will look like 
     308The best solution is to have all files (the modified ``MY_TRC`` routines and the BGC model ones) 
      309placed in a single folder with root ``MYBGCPATH`` and 
      310to use the makenemo external readdressing of the ``MY_SRC`` folder. 
     311 
     312The coupled configuration listed in :file:`work_cfgs.txt` will look like 
    285313 
    286314:: 
    287315 
    288    NEMO_MYBGC OPA_SRC TOP_SRC 
     316   NEMO_MYBGC OCE TOP 
    289317 
    290318and the related ``cpp_MYBGC.fcm`` content will be 
     
    292320.. code-block:: perl 
    293321 
    294    bld::tool::fppkeys key_iomput key_mpp_mpi key_top 
    295  
    296 the compilation with ``makenemo`` will be executed through the following syntax 
     322   bld::tool::fppkeys key_iomput key_mpp_mpi key_top 
     323 
     324the compilation with :file:`makenemo` will be executed through the following syntax 
    297325 
    298326.. code-block:: console 
    299327 
    300    $ makenemo -n 'NEMO_MYBGC' -m '<arch_my_machine>' -j 8 -e '<MYBGCPATH>' 
    301  
    302 The makenemo feature “-e” was introduced to readdress at compilation time the standard MY_SRC folder 
    303 (usually found in NEMO configurations) with a user defined external one. 
    304  
    305 The compilation of more articulated BGC model code & infrastructure, like in the case of BFM 
    306 ([http://www.bfm-community.eu/publications/bfmnemomanual_r1.0_201508.pdf BFM-NEMO coupling manual]), 
    307 requires some additional features. 
     328   $ makenemo -n 'NEMO_MYBGC' -m '<arch_my_machine>' -j 8 -e '<MYBGCPATH>' 
     329 
     330The makenemo feature ``-e`` was introduced to 
     331readdress at compilation time the standard MY_SRC folder (usually found in NEMO configurations) with 
      332a user-defined external one. 
     333 
     334The compilation of more articulated BGC model code & infrastructure, 
     335like in the case of BFM (|BFM man|_), requires some additional features. 
    308336 
    309337As before, let’s assume a coupled configuration name *NEMO_MYBGC*, 
    310 but in this case MYBGC model root becomes ``<MYBGCPATH>`` that contains 4 different subfolders for 
    311 biogeochemistry, named ``initialization``, ``pelagic``, and ``benthic``, and 
    312 a separate one named ``nemo_coupling`` including the modified ``MY_SRC`` routines. 
      338but in this case the MYBGC model root becomes the :file:`MYBGC` path, which 
      339contains 4 different subfolders: three for biogeochemistry, 
      340named :file:`initialization`, :file:`pelagic`, and :file:`benthic`, 
      341and a separate one named :file:`nemo_coupling` including the modified ``MY_SRC`` routines. 
    313342The latter folder containing the modified NEMO coupling interface will be still linked using 
    314 the makenemo “-e” option. 
     343the makenemo ``-e`` option. 
    315344 
    316345In order to include the BGC model subfolders in the compilation of NEMO code, 
    317 it will be necessary to extend the configuration ``cpp_NEMO_MYBGC.fcm`` file to include the specific paths of 
    318 ``MYBGC`` folders, as in the following example 
     346it will be necessary to extend the configuration :file:`cpp_NEMO_MYBGC.fcm` file to include the specific paths of :file:`MYBGC` folders, as in the following example 
    319347 
    320348.. code-block:: perl 
    321349 
    322    bld::tool::fppkeys  key_iomput key_mpp_mpi key_top 
    323     
    324    src::MYBGC::initialization         <MYBGCPATH>/initialization 
    325    src::MYBGC::pelagic                <MYBGCPATH>/pelagic 
    326    src::MYBGC::benthic                <MYBGCPATH>/benthic 
    327     
    328    bld::pp::MYBGC      1 
    329    bld::tool::fppflags::MYBGC   %FPPFLAGS 
    330    bld::tool::fppkeys           %bld::tool::fppkeys MYBGC_MACROS 
     350   bld::tool::fppkeys  key_iomput key_mpp_mpi key_top 
     351 
     352   src::MYBGC::initialization         <MYBGCPATH>/initialization 
     353   src::MYBGC::pelagic                <MYBGCPATH>/pelagic 
     354   src::MYBGC::benthic                <MYBGCPATH>/benthic 
     355 
     356   bld::pp::MYBGC      1 
     357   bld::tool::fppflags::MYBGC   %FPPFLAGS 
     358   bld::tool::fppkeys           %bld::tool::fppkeys MYBGC_MACROS 
    331359 
    332360where *MYBGC_MACROS* is the space-delimited list of macros used in the *MYBGC* model for 
    333361selecting/excluding specific parts of the code. 
    334 The BGC model code will be preprocessed in the configuration ``BLD`` folder as for NEMO, 
    335 but with an independent path, like ``NEMO_MYBGC/BLD/MYBGC/<subforlders>``. 
     362The BGC model code will be preprocessed in the configuration :file:`BLD` folder as for NEMO, 
      363but with an independent path, like :file:`NEMO_MYBGC/BLD/MYBGC/<subfolders>`. 
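
For instance, after a successful build one could expect the preprocessed BGC sources to appear under
the path quoted above (shown here only as an assumed sketch):

.. code-block:: console

   $ ls NEMO_MYBGC/BLD/MYBGC      # assumed location of the preprocessed MYBGC sources
   benthic/  initialization/  pelagic/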
    336364 
    337365The compilation will be performed similarly to the previous case, with the following command 
     
    339367.. code-block:: console 
    340368 
    341    $ makenemo -n 'NEMO_MYBGC' -m '<arch_my_machine>' -j 8 -e '<MYBGCPATH>/nemo_coupling' 
    342  
    343 Note that, the additional lines specific for the BGC model source and build paths can be written into 
    344 a separate file, e.g. named ``MYBGC.fcm``, and then simply included in the ``cpp_NEMO_MYBGC.fcm`` as follow 
    345  
    346 .. code-block:: perl 
    347  
    348    bld::tool::fppkeys  key_zdftke key_dynspg_ts key_iomput key_mpp_mpi key_top 
    349    inc <MYBGCPATH>/MYBGC.fcm 
    350  
    351 This will enable a more portable compilation structure for all MYBGC related configurations. 
    352  
    353 **Important**: the coupling interface contained in nemo_coupling cannot be added using the FCM syntax, 
    354 as the same files already exists in NEMO and they are overridden only with the readdressing of MY_SRC contents to 
    355 avoid compilation conflicts due to duplicate routines. 
    356  
    357 All modifications illustrated above, can be easily implemented using shell or python scripting to 
    358 edit the NEMO configuration CPP.fcm file and to create the BGC model specific FCM compilation file with code paths. 
     369   $ makenemo -n 'NEMO_MYBGC' -m '<arch_my_machine>' -j 8 -e '<MYBGCPATH>/nemo_coupling' 
     370 
     371.. note:: 
      372   The additional lines specific to the BGC model source and build paths can be written into 
      373   a separate file, e.g. named :file:`MYBGC.fcm`, 
      374   and then simply included in :file:`cpp_NEMO_MYBGC.fcm` as follows 
     375 
     376   .. code-block:: perl 
     377 
     378      bld::tool::fppkeys  key_zdftke key_dynspg_ts key_iomput key_mpp_mpi key_top 
     379      inc <MYBGCPATH>/MYBGC.fcm 
     380 
      381   This will enable a more portable compilation structure for all MYBGC-related configurations. 
     382 
     383.. warning:: 
     384   The coupling interface contained in :file:`nemo_coupling` cannot be added using the FCM syntax, 
      385   as the same files already exist in NEMO and they are overridden only through 
      386   the readdressing of the :file:`MY_SRC` contents, to avoid compilation conflicts due to duplicate routines. 
     387 
      388All the modifications illustrated above can be easily implemented using shell or Python scripting 
      389to edit the NEMO configuration :file:`CPP.fcm` file and 
      390to create the BGC-model-specific FCM compilation file with the code paths, as in the sketch below. 
     391 
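As a minimal sketch of such a script (shell flavour; the :file:`cfgs/NEMO_MYBGC` location and the
``<MYBGCPATH>`` placeholder are assumptions carried over from the examples above, not fixed NEMO paths):

.. code-block:: console

   $ # write the BGC-specific source and build paths into a separate FCM file (sketch)
   $ cat > <MYBGCPATH>/MYBGC.fcm << 'EOF'
   src::MYBGC::initialization   <MYBGCPATH>/initialization
   src::MYBGC::pelagic          <MYBGCPATH>/pelagic
   src::MYBGC::benthic          <MYBGCPATH>/benthic
   bld::pp::MYBGC               1
   bld::tool::fppflags::MYBGC   %FPPFLAGS
   bld::tool::fppkeys           %bld::tool::fppkeys MYBGC_MACROS
   EOF
   $ # hook it into the configuration cpp file (assumed to already contain the base fppkeys line)
   $ echo 'inc <MYBGCPATH>/MYBGC.fcm' >> cfgs/NEMO_MYBGC/cpp_NEMO_MYBGC.fcm
   $ # compile, readdressing MY_SRC to the external coupling folder
   $ makenemo -n 'NEMO_MYBGC' -m '<arch_my_machine>' -j 8 -e '<MYBGCPATH>/nemo_coupling'
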
     392.. |BFM man| replace:: BFM-NEMO coupling manual 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/TOP/TRP/trcnxt.F90

    r10425 r12165  
    139139      ENDIF 
    140140      !                                ! Leap-Frog + Asselin filter time stepping 
    141       IF( (neuler == 0 .AND. kt == nittrc000) .OR. ln_top_euler ) THEN    ! Euler time-stepping (only swap) 
     141      IF( (neuler == 0 .AND. kt == nittrc000) ) THEN 
     142         ! set up for leapfrog on second timestep 
     143         DO jn = 1, jptra 
     144            DO jk = 1, jpkm1 
     145               trb(:,:,jk,jn) = trn(:,:,jk,jn)   
     146               trn(:,:,jk,jn) = tra(:,:,jk,jn) 
     147            END DO 
     148         END DO 
     149      ELSE IF( ln_top_euler ) THEN 
     150         ! always doing euler timestepping 
    142151         DO jn = 1, jptra 
    143152            DO jk = 1, jpkm1 
     
    146155            END DO 
    147156         END DO 
     157      ENDIF 
     158      IF( (neuler == 0 .AND. kt == nittrc000) .OR. ln_top_euler ) THEN    ! Euler time-stepping (only swap) 
    148159         IF (l_trdtrc .AND. .NOT. ln_linssh ) THEN   ! Zero Asselin filter contribution must be explicitly written out since for vvl 
    149160            !                                        ! Asselin filter is output by tra_nxt_vvl that is not called on this time step 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/src/TOP/trcbdy.F90

    r11224 r12165  
    9595         END DO 
    9696         IF( ANY(llsend1) .OR. ANY(llrecv1) ) THEN   ! if need to send/recv in at least one direction 
    97             CALL lbc_lnk( 'bdytra', tsa, 'T',  1., kfillmode=jpfillnothing ,lsend=llsend1, lrecv=llrecv1 ) 
     97            CALL lbc_lnk( 'trcbdy', tra, 'T',  1., kfillmode=jpfillnothing ,lsend=llsend1, lrecv=llrecv1 ) 
    9898         END IF 
    9999         ! 
  • NEMO/branches/2019/dev_ASINTER-01-05_merged/tests/CANAL/MY_SRC/usrdef_nam.F90

    r11586 r12165  
    8686      REWIND( numnam_cfg )          ! Namelist namusr_def (exist in namelist_cfg only) 
    8787      READ  ( numnam_cfg, namusr_def, IOSTAT = ios, ERR = 902 ) 
    88 902   IF( ios /= 0 )   CALL ctl_nam ( ios , 'namusr_def in configuration namelist', cdtxt ) 
     88902   IF( ios /= 0 )   CALL ctl_nam ( ios , 'namusr_def in configuration namelist' ) 
    8989      ! 
    9090      IF(lwm)   WRITE( numond, namusr_def ) 