- Timestamp:
- 2019-12-11T09:27:27+01:00 (4 years ago)
- Location:
- NEMO/branches/2019/dev_ASINTER-01-05_merged
- Files:
-
- 6 deleted
- 31 edited
- 4 copied
NEMO/branches/2019/dev_ASINTER-01-05_merged/CONTRIBUTING.rst
- Property svn:mergeinfo deleted
r11586 → r12165

************

.. todo::

.. contents::
   :local:

Sending feedbacks
…

- You have a question: create a topic in the appropriate :forge:`discussion <forum>`
- You would like to raise an issue: open a new ticket of the right type depending on its severity

  - "Unavoidable" :forge:`newticket?type=Bug <bug>`
  - "Workable" :forge:`newticket?type=Defect <defect>`

…
===============

You have built a development relevant for the NEMO shared reference: an addition to the source code,
a full fork of the reference, ...

…

The proposals for developments to be included in the shared NEMO reference are first examined by the NEMO Developers
Committee / Scientific Advisory Board.
The implementation of a new development requires some additional work from the initial developer.
These tasks will need to be scheduled with the NEMO System Team.

…

----

You would only like to inform the NEMO community about your developments.
You can promote your work on the NEMO forum, which gathers the contributions of the community, by creating
a specific topic here: :forge:`discussion/forum/5 <dedicated forum>`

…

... routines to the ticket, to highlight the proposed changes by adding to the ticket the output of ``svn diff``
or ``svn patch`` from your working copy.

| Your development seems relevant for addition into a future release of the NEMO shared reference.
  Implementing it into the NEMO shared reference following the usual quality control will require some additional work …
  Your suggestion should be sent as a proposed enhancement here: :forge:`newticket?type=Enhancement <enhancement>`,
  including a description of the development, its implementation, and the existing validations.

The proposed enhancement will be examined by the NEMO Developers Committee / Scientific Advisory Board.
Once approved by the Committee, the associated development task can be scheduled in the NEMO development work plan,
and tasks distributed between you, as initial developer and PI of this development action, and the NEMO System Team.

Once successful (meeting the usual quality control steps), this action will allow the merge of these developments with
other developments of the year, building the future NEMO.
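As a sketch of how to produce such an attachment for a ticket (the working-copy path and patch file name below are hypothetical):

.. code-block:: console

   $ cd /path/to/my/nemo-working-copy      # hypothetical working copy
   $ svn status                            # list locally modified/added files
   $ svn diff > my_development.patch       # full diff of local changes, ready to attach

The resulting ``my_development.patch`` file can then be attached to the enhancement ticket alongside the description of the development.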
NEMO/branches/2019/dev_ASINTER-01-05_merged/INSTALL.rst
- Property svn:mergeinfo deleted
r11586 → r12165

*******************

.. todo::

.. contents::
   :local:

…

| The NEMO source code is written in *Fortran 95* and
  some of its prerequisite tools and libraries are already included in the download.
| It contains the AGRIF_ preprocessing program ``conv``; the FCM_ build system and
  the IOIPSL_ library for parts of the output.

…

- *Fortran* compiler (``ifort``, ``gfortran``, ``pgfortran``, ...),
- *Message Passing Interface (MPI)* implementation (e.g. |OpenMPI|_ or |MPICH|_),
- |NetCDF|_ library with its underlying |HDF|_.

**NEMO, by default, takes advantage of some MPI features introduced into the MPI-3 standard.**

…

This will limit MPI features to those defined within the MPI-2 standard
(but will lose some performance benefits).
code-block:: sh276 277 cd 'MY_CONFIG'/EXP00278 mpirun -n $NPROCS ./opa # $NPROCS is the number of processes ; mpirun is your MPI wrapper279 280 281 Viewing and changing list of active CPP keys282 ============================================283 284 For a given configuration (here called MY_CONFIG), the list of active CPP keys can be found in:285 286 .. code-block:: sh287 288 ./cfgs/'MYCONFIG'/cpp_'MY_CONFIG'.fcm289 290 291 This text file can be edited to change the list of active CPP keys. Once changed, one needs to recompile opa executable using makenemo command in order for this change to be taken in account.292 Note that most NEMO configurations will need to specify the following CPP keys:293 ``key_iomput`` and ``key_mpp_mpi``294 295 .. Links and substitutions296 46 297 47 .. |OpenMPI| replace:: *OpenMPI* … … 300 50 .. _MPICH: https://www.mpich.org 301 51 .. |NetCDF| replace:: *Network Common Data Form (NetCDF)* 302 .. _NetCDF: https://www.unidata.ucar.edu /downloads/netcdf52 .. _NetCDF: https://www.unidata.ucar.edu 303 53 .. |HDF| replace:: *Hierarchical Data Form (HDF)* 304 .. _HDF: https://www.hdfgroup.org/downloads 54 .. _HDF: https://www.hdfgroup.org 55 56 Specifics for NetCDF and HDF 57 ---------------------------- 58 59 NetCDF and HDF versions from official repositories may have not been compiled with MPI support. 60 However access to all the options available with the XIOS IO-server will require 61 the parallelism of these libraries. 62 63 | **To satisfy these requirements, it is common to have to compile from source 64 in this order HDF (C library) then NetCDF (C and Fortran libraries)** 65 | It is also necessary to compile these libraries with the same version of the MPI implementation that 66 both NEMO and XIOS (see below) have been compiled and linked with. 67 68 .. hint:: 69 70 | It is difficult to define the options for the compilation as 71 they differ from one architecture to another according to 72 the hardware used and the software installed. 
73 | The following is provided without any warranty 74 75 .. code-block:: console 76 77 $ ./configure [--{enable-fortran,disable-shared,enable-parallel}] ... 78 79 It is recommended to build the tests ``--enable-parallel-tests`` and run them with ``make check`` 80 81 Particular versions of these libraries may have their own restrictions. 82 State the following requirements for netCDF-4 support: 83 84 .. caution:: 85 86 | When building NetCDF-C library versions older than 4.4.1, use only HDF5 1.8.x versions. 87 | Combining older NetCDF-C versions with newer HDF5 1.10 versions will create superblock 3 files 88 that are not readable by lots of older software. 89 90 Extract and install XIOS 91 ======================== 92 93 With the sole exception of running NEMO in mono-processor mode 94 (in which case output options are limited to those supported by the ``IOIPSL`` library), 95 diagnostic outputs from NEMO are handled by the third party ``XIOS`` library. 96 It can be used in two different modes: 97 98 :*attached*: Every NEMO process also acts as a XIOS server 99 :*detached*: Every NEMO process runs as a XIOS client. 100 Output is collected and collated by external, stand-alone XIOS server processors. 101 102 Instructions on how to install XIOS can be found on its :xios:`wiki<>`. 103 104 .. hint:: 105 106 It is recommended to use XIOS 2.5 release. 107 This version should be more stable (in terms of future code changes) than the XIOS trunk. 108 It is also the one used by the NEMO system team when testing all developments and new releases. 109 110 This particular version has its own branch and can be checked out with: 111 112 .. code:: console 113 114 $ svn co https://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5 115 116 Download and install the NEMO code 117 ================================== 118 119 Checkout the NEMO sources 120 ------------------------- 121 122 .. 
code:: console 123 124 $ svn co https://forge.ipsl.jussieu.fr/nemo/svn/NEMO/trunk 125 126 Description of 1\ :sup:`st` level tree structure 127 ------------------------------------------------ 128 129 +---------------+----------------------------------------+ 130 | :file:`arch` | Compilation settings | 131 +---------------+----------------------------------------+ 132 | :file:`cfgs` | :doc:`Reference configurations <cfgs>` | 133 +---------------+----------------------------------------+ 134 | :file:`doc` | :doc:`Documentation <doc>` | 135 +---------------+----------------------------------------+ 136 | :file:`ext` | Dependencies included | 137 | | (``AGRIF``, ``FCM`` & ``IOIPSL``) | 138 +---------------+----------------------------------------+ 139 | :file:`mk` | Compilation scripts | 140 +---------------+----------------------------------------+ 141 | :file:`src` | :doc:`Modelling routines <src>` | 142 +---------------+----------------------------------------+ 143 | :file:`tests` | :doc:`Test cases <tests>` | 144 | | (unsupported) | 145 +---------------+----------------------------------------+ 146 | :file:`tools` | :doc:`Utilities <tools>` | 147 | | to {pre,post}process data | 148 +---------------+----------------------------------------+ 149 150 Setup your architecture configuration file 151 ------------------------------------------ 152 153 All compiler options in NEMO are controlled using files in :file:`./arch/arch-'my_arch'.fcm` where 154 ``my_arch`` is the name of the computing architecture 155 (generally following the pattern ``HPCC-compiler`` or ``OS-compiler``). 156 It is recommended to copy and rename an configuration file from an architecture similar to your owns. 157 You will need to set appropriate values for all of the variables in the file. 158 In particular the FCM variables: 159 ``%NCDF_HOME``; ``%HDF5_HOME`` and ``%XIOS_HOME`` should be set to 160 the installation directories used for XIOS installation 161 162 .. 
code-block:: sh 163 164 %NCDF_HOME /usr/local/path/to/netcdf 165 %HDF5_HOME /usr/local/path/to/hdf5 166 %XIOS_HOME /home/$( whoami )/path/to/xios-2.5 167 %OASIS_HOME /home/$( whoami )/path/to/oasis 168 169 Create and compile a new configuration 170 ====================================== 171 172 The main script to {re}compile and create executable is called :file:`makenemo` located at 173 the root of the working copy. 174 It is used to identify the routines you need from the source code, to build the makefile and run it. 175 As an example, compile a :file:`MY_GYRE` configuration from GYRE with 'my_arch': 176 177 .. code-block:: sh 178 179 ./makenemo –m 'my_arch' –r GYRE -n 'MY_GYRE' 180 181 Then at the end of the configuration compilation, 182 :file:`MY_GYRE` directory will have the following structure. 183 184 +------------+----------------------------------------------------------------------------+ 185 | Directory | Purpose | 186 +============+============================================================================+ 187 | ``BLD`` | BuiLD folder: target executable, headers, libs, preprocessed routines, ... 
| 188 +------------+----------------------------------------------------------------------------+ 189 | ``EXP00`` | Run folder: link to executable, namelists, ``*.xml`` and IOs | 190 +------------+----------------------------------------------------------------------------+ 191 | ``EXPREF`` | Files under version control only for :doc:`official configurations <cfgs>` | 192 +------------+----------------------------------------------------------------------------+ 193 | ``MY_SRC`` | New routines or modified copies of NEMO sources | 194 +------------+----------------------------------------------------------------------------+ 195 | ``WORK`` | Links to all raw routines from :file:`./src` considered | 196 +------------+----------------------------------------------------------------------------+ 197 198 After successful execution of :file:`makenemo` command, 199 the executable called `nemo` is available in the :file:`EXP00` directory 200 201 More :file:`makenemo` options 202 ----------------------------- 203 204 ``makenemo`` has several other options that can control which source files are selected and 205 the operation of the build process itself. 206 207 .. literalinclude:: ../../../makenemo 208 :language: text 209 :lines: 119-143 210 :caption: Output of ``makenemo -h`` 211 212 These options can be useful for maintaining several code versions with only minor differences but 213 they should be used sparingly. 214 Note however the ``-j`` option which should be used more routinely to speed up the build process. 215 For example: 216 217 .. code-block:: sh 218 219 ./makenemo –m 'my_arch' –r GYRE -n 'MY_GYRE' -j 8 220 221 will compile up to 8 processes simultaneously. 222 223 Default behaviour 224 ----------------- 225 226 At the first use, 227 you need the ``-m`` option to specify the architecture configuration file 228 (compiler and its options, routines and libraries to include), 229 then for next compilation, it is assumed you will be using the same compiler. 
230 If the ``-n`` option is not specified the last compiled configuration will be used. 231 232 Tools used during the process 233 ----------------------------- 234 235 * :file:`functions.sh`: bash functions used by ``makenemo``, for instance to create the WORK directory 236 * :file:`cfg.txt` : text list of configurations and source directories 237 * :file:`bld.cfg` : FCM rules for compilation 238 239 Examples 240 -------- 241 242 .. literalinclude:: ../../../makenemo 243 :language: text 244 :lines: 146-153 245 246 Running the model 247 ================= 248 249 Once :file:`makenemo` has run successfully, 250 the ``nemo`` executable is available in :file:`./cfgs/MY_CONFIG/EXP00`. 251 For the reference configurations, the :file:`EXP00` folder also contains the initial input files 252 (namelists, ``*.xml`` files for the IOs, ...). 253 If the configuration needs other input files, they have to be placed here. 254 255 .. code-block:: sh 256 257 cd 'MY_CONFIG'/EXP00 258 mpirun -n $NPROCS ./nemo # $NPROCS is the number of processes 259 # mpirun is your MPI wrapper 260 261 Viewing and changing list of active CPP keys 262 ============================================ 263 264 For a given configuration (here called ``MY_CONFIG``), 265 the list of active CPP keys can be found in :file:`./cfgs/'MYCONFIG'/cpp_MY_CONFIG.fcm` 266 267 This text file can be edited by hand or with :file:`makenemo` to change the list of active CPP keys. 268 Once changed, one needs to recompile ``nemo`` in order for this change to be taken in account. 269 Note that most NEMO configurations will need to specify the following CPP keys: 270 ``key_iomput`` for IOs and ``key_mpp_mpi`` for parallelism. -
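The HDF-then-NetCDF build order recommended above can be sketched as follows. Everything here is an assumption to adapt to your system: the installation prefix, the source directory names and versions, and the MPI compiler wrappers; check each library's own installation notes for the authoritative options.

.. code-block:: console

   $ export CC=mpicc FC=mpif90            # use the same MPI as NEMO and XIOS
   $ cd hdf5-1.8.x                        # hypothetical HDF5 source directory
   $ ./configure --prefix=$HOME/local --enable-parallel
   $ make && make check && make install
   $ cd ../netcdf-c-4.x                   # then NetCDF-C, pointing at that HDF5
   $ CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib \
     ./configure --prefix=$HOME/local --enable-parallel-tests
   $ make && make check && make install
   $ cd ../netcdf-fortran-4.x             # finally the NetCDF Fortran bindings
   $ CPPFLAGS=-I$HOME/local/include LDFLAGS=-L$HOME/local/lib \
     ./configure --prefix=$HOME/local
   $ make && make check && make install

With this single-prefix layout, both ``%NCDF_HOME`` and ``%HDF5_HOME`` in the architecture file would point to ``$HOME/local``.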
NEMO/branches/2019/dev_ASINTER-01-05_merged/README.rst
- Property svn:mergeinfo deleted
r11586 → r12165

.. todo::

NEMO_ (*Nucleus for European Modelling of the Ocean*) is a state-of-the-art modelling framework for
research activities and forecasting services in ocean and climate sciences,
developed in a sustainable way by a European consortium since 2008.

…

The NEMO ocean model has 3 major components:

- |OCE| models the ocean {thermo}dynamics and solves the primitive equations
  (:file:`./src/OCE`)
- |ICE| simulates sea-ice {thermo}dynamics, brine inclusions and
  subgrid-scale thickness variations (:file:`./src/ICE`)
- |MBG| models the {on,off}line oceanic tracers transport and biogeochemical processes
  (:file:`./src/TOP`)

These physical core engines are described in
their respective `reference publications <#project-documentation>`_, which
must be cited for any work related to their use (see :doc:`cite`).

Assets and solutions
…

- Create :doc:`embedded zooms<zooms>` seamlessly, thanks to the 2-way nesting package AGRIF_.
- Opportunity to integrate an :doc:`external biogeochemistry model<tracers>`
- Versatile :doc:`data assimilation<da>`
- Generation of :doc:`diagnostics<diags>` through the effective XIOS_ system
- Roll-out Earth-system modelling with a :doc:`coupling interface<cplg>` based on OASIS_

Several :doc:`built-in configurations<cfgs>` are provided to
evaluate the skills and performances of the model; they
can be used as templates for setting up new configurations (:file:`./cfgs`).

The user can also check out the available :doc:`idealized test cases<tests>` that
address specific physical processes (:file:`./tests`).

A set of :doc:`utilities <tools>` is also provided to {pre,post}process your data (:file:`./tools`).

Project documentation
…

A walkthrough tutorial illustrates how to get the code dependencies, and how to compile and execute NEMO
(:file:`./INSTALL.rst`).

Reference manuals and a quick start guide can be built from source and
exported to HTML or PDF formats (:file:`./doc`), or
downloaded directly from the :forge:`development platform<wiki/Documentations>`.

============ ================== ===================
Component    Reference Manual   Quick Start Guide
============ ================== ===================
|NEMO-OCE|   |DOI man OCE|_     |DOI qsg|
|NEMO-ICE|   |DOI man ICE|
|NEMO-MBG|   |DOI man MBG|
============ ================== ===================

Since 2014 the project has had a `Special Issue`_ in the open-access journal
Geoscientific Model Development (GMD) from the European Geosciences Union (EGU_).
The main scope is to collect relevant manuscripts covering various topics and
to provide a single portal to assess the model's potential and evolution.

…

The NEMO Consortium, pulling together 5 European institutes
(CMCC_, CNRS_, MOI_, `Met Office`_ and NERC_), has planned the sustainable development since 2008
in order to keep a reliable, evolving framework.

It defines the |DOI dev stgy|_ that is implemented by the System Team on a yearly basis
in order to release a new version almost every four years.

When the need arises, :forge:`working groups<wiki/WorkingGroups>` are created or resumed to
gather the community expertise for advising on the development activities.

.. |DOI dev stgy| replace:: multi-year development strategy

Disclaimer
==========

The NEMO source code is freely available and distributed under
the :download:`CeCILL v2.0 license <../../../LICENSE>` (GNU GPL compatible).

You can use, modify and/or redistribute the software under its terms,
but users are provided only with a limited warranty, and the software's authors and
the successive licensors have only limited liability.
NEMO/branches/2019/dev_ASINTER-01-05_merged/REFERENCES.bib
- Property svn:mergeinfo deleted
r11586 → r12165

@manual{NEMO_man,
  title       = "NEMO ocean engine",
  author      = "NEMO System Team",
  series      = "Scientific Notes of Climate Modelling Center",
  number      = "27",
  institution = "Institut Pierre-Simon Laplace (IPSL)",
  publisher   = "Zenodo",
  doi         = "10.5281/zenodo.1464816",
}
% edition="",
% year=""

@manual{SI3_man,
  title       = "Sea Ice modelling Integrated Initiative (SI$^3$) -- The NEMO Sea Ice engine",
  author      = "NEMO Sea Ice Working Group",
  series      = "Scientific Notes of Climate Modelling Center",
  number      = "31",
  institution = "Institut Pierre-Simon Laplace (IPSL)",
  publisher   = "Zenodo",
  doi         = "10.5281/zenodo.1471689",
}
% edition="",
% year=""

@manual{TOP_man,
  title       = "Tracers in Ocean Paradigm (TOP) -- The NEMO Tracers engine",
  author      = "NEMO TOP Working Group",
  series      = "Scientific Notes of Climate Modelling Center",
  number      = "28",
  institution = "Institut Pierre-Simon Laplace (IPSL)",
  publisher   = "Zenodo",
  doi         = "10.5281/zenodo.1471700",
}
% edition="",
% year=""

@article{TAM_pub,
  author  = "Vidard, A. and Bouttier, P.-A. and Vigilant, F.",
  title   = "NEMOTAM: Tangent and Adjoint Models for the ocean modelling platform NEMO",
  journal = "Geoscientific Model Development",
  volume  = "8",
  year    = "2015",
  number  = "4",
  pages   = "1245--1257",
  doi     = "10.5194/gmd-8-1245-2015"
}
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/AGRIF_DEMO/EXPREF/namelist_ice_cfg
r10535 r12165
 38  38 &namdyn_rhg    !   Ice rheology
 39  39 !------------------------------------------------------------------------------
     40    ln_aEVP = .false. ! adaptive rheology (Kimmritz et al. 2016 & 2017)
 40  41 /
 41  42 !------------------------------------------------------------------------------
-
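The hunk above adds an ``ln_aEVP`` switch (default ``.false.``) to the ice rheology block. To experiment with the adaptive EVP rheology it enables, one would flip the flag in the run's ``namelist_ice_cfg`` — a sketch only, not a tested configuration:

```fortran
!------------------------------------------------------------------------------
&namdyn_rhg        !   Ice rheology
!------------------------------------------------------------------------------
   ln_aEVP = .true.   ! adaptive rheology (Kimmritz et al. 2016 & 2017)
/
```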
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/AGRIF_DEMO/README.rst
r10460 r12165 2 2 Embedded zooms 3 3 ************** 4 5 .. todo:: 6 7 4 8 5 9 .. contents:: … … 9 13 ======== 10 14 11 AGRIF (Adaptive Grid Refinement In Fortran) is a library that allows the seamless space and time refinement over12 rectangular regions in NEMO.15 AGRIF (Adaptive Grid Refinement In Fortran) is a library that 16 allows the seamless space and time refinement over rectangular regions in NEMO. 13 17 Refinement factors can be odd or even (usually lower than 5 to maintain stability). 14 Interaction between grid is "two-ways" in the sense that the parent grid feeds the child grid open boundaries and 15 the child grid provides volume averages of prognostic variables once a given number of time step is completed. 18 Interaction between grid is "two-ways" in the sense that 19 the parent grid feeds the child grid open boundaries and 20 the child grid provides volume averages of prognostic variables once 21 a given number of time step is completed. 16 22 These pages provide guidelines how to use AGRIF in NEMO. 17 For a more technical description of the library itself, please refer to http://agrif.imag.fr.23 For a more technical description of the library itself, please refer to AGRIF_. 18 24 19 25 Compilation 20 26 =========== 21 27 22 Activating AGRIF requires to append the cpp key ``key_agrif`` at compilation time: 28 Activating AGRIF requires to append the cpp key ``key_agrif`` at compilation time: 23 29 24 30 .. code-block:: sh 25 31 26 ./makenemoadd_key 'key_agrif'32 ./makenemo [...] 
add_key 'key_agrif' 27 33 28 Although this is transparent to users, the way the code is processed during compilation is different from29 the standard case:30 a preprocessing stage (the so called "conv"program) translates the actual code so that34 Although this is transparent to users, 35 the way the code is processed during compilation is different from the standard case: 36 a preprocessing stage (the so called ``conv`` program) translates the actual code so that 31 37 saved arrays may be switched in memory space from one domain to an other. 32 38 … … 34 40 ================================ 35 41 36 An additional text file ``AGRIF_FixedGrids.in`` is required at run time.42 An additional text file :file:`AGRIF_FixedGrids.in` is required at run time. 37 43 This is where the grid hierarchy is defined. 38 An example of such a file, here taken from the ``ICEDYN`` test case, is given below ::44 An example of such a file, here taken from the ``ICEDYN`` test case, is given below 39 45 40 1 41 34 63 34 63 3 3 3 42 0 46 .. literalinclude:: ../../../tests/ICE_AGRIF/EXPREF/AGRIF_FixedGrids.in 43 47 44 48 The first line indicates the number of zooms (1). 45 49 The second line contains the starting and ending indices in both directions on the root grid 46 ( imin=34 imax=63 jmin=34 jmax=63) followed by the space and time refinement factors (3 3 3).50 (``imin=34 imax=63 jmin=34 jmax=63``) followed by the space and time refinement factors (3 3 3). 47 51 The last line is the number of child grid nested in the refined region (0). 48 52 A more complex example with telescoping grids can be found below and 49 in the ``AGRIF_DEMO`` reference configuration directory.53 in the :file:`AGRIF_DEMO` reference configuration directory. 50 54 51 [Add some plots here with grid staggering and positioning ?] 55 .. 
todo:: 52 56 53 When creating the nested domain, one must keep in mind that the child domain is shifted toward north-east and 54 depends on the number of ghost cells as illustrated by the (attempted) drawing below for nbghostcells=1 and 55 nbghostcells=3. 56 The grid refinement is 3 and nxfin is the number of child grid points in i-direction. 57 Add some plots here with grid staggering and positioning? 58 59 When creating the nested domain, one must keep in mind that 60 the child domain is shifted toward north-east and 61 depends on the number of ghost cells as illustrated by 62 the *attempted* drawing below for ``nbghostcells=1`` and ``nbghostcells=3``. 63 The grid refinement is 3 and ``nxfin`` is the number of child grid points in i-direction. 57 64 58 65 .. image:: _static/agrif_grid_position.jpg … … 62 69 boundary data exchange and update being only performed between root and child grids. 63 70 Use of east-west periodic or north-fold boundary conditions is not allowed in child grids either. 64 Defining for instance a circumpolar zoom in a global model is therefore not possible. 71 Defining for instance a circumpolar zoom in a global model is therefore not possible. 65 72 66 73 Preprocessing 67 74 ============= 68 75 69 Knowing the refinement factors and area, a ``NESTING`` pre-processing tool may help to create needed input files 76 Knowing the refinement factors and area, 77 a ``NESTING`` pre-processing tool may help to create needed input files 70 78 (mesh file, restart, climatological and forcing files). 71 79 The key is to ensure volume matching near the child grid interface, 72 a step done by invoking the ``Agrif_create_bathy.exe`` program.73 You may use the namelists provided in the ``NESTING`` directory as a guide.80 a step done by invoking the :file:`Agrif_create_bathy.exe` program. 81 You may use the namelists provided in the :file:`NESTING` directory as a guide. 74 82 These correspond to the namelists used to create ``AGRIF_DEMO`` inputs. 
75 83 … … 78 86 79 87 Each child grid expects to read its own namelist so that different numerical choices can be made 80 (these should be stored in the form ``1_namelist_cfg``, ``2_namelist_cfg``, etc... according to their rank in81 the grid hierarchy).88 (these should be stored in the form :file:`1_namelist_cfg`, :file:`2_namelist_cfg`, etc... 89 according to their rank in the grid hierarchy). 82 90 Consistent time steps and number of steps with the chosen time refinement have to be provided. 83 91 Specific to AGRIF is the following block: 84 92 85 .. code-block:: fortran 86 87 !----------------------------------------------------------------------- 88 &namagrif ! AGRIF zoom ("key_agrif") 89 !----------------------------------------------------------------------- 90 ln_spc_dyn = .true. ! use 0 as special value for dynamics 91 rn_sponge_tra = 2880. ! coefficient for tracer sponge layer [m2/s] 92 rn_sponge_dyn = 2880. ! coefficient for dynamics sponge layer [m2/s] 93 ln_chk_bathy = .false. ! =T check the parent bathymetry 94 / 93 .. literalinclude:: ../../namelists/namagrif 94 :language: fortran 95 95 96 96 where sponge layer coefficients have to be chosen according to the child grid mesh size. 97 97 The sponge area is hard coded in NEMO and applies on the following grid points: 98 2 x refinement factor (from i=1+nbghostcells+1 to i=1+nbghostcells+sponge_area)98 2 x refinement factor (from ``i=1+nbghostcells+1`` to ``i=1+nbghostcells+sponge_area``) 99 99 100 References 101 ========== 100 .. rubric:: References 102 101 103 102 .. bibliography:: zooms.bib 104 105 106 107 103 :all: 104 :style: unsrt 105 :labelprefix: A 106 :keyprefix: a- -
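The :file:`AGRIF_FixedGrids.in` layout described in the README above (zoom count, then parent-grid indices with refinement factors, then child-grid count) is simple enough to parse mechanically. A minimal Python sketch for the single-zoom ``ICEDYN`` example — a hypothetical helper for illustration, not part of NEMO:

```python
def parse_fixed_grids(text):
    """Parse a single-zoom AGRIF_FixedGrids.in as described in the README."""
    lines = [line.split() for line in text.strip().splitlines()]
    nzoom = int(lines[0][0])                                 # number of zooms
    imin, imax, jmin, jmax, rx, ry, rt = map(int, lines[1])  # indices + refinement
    nchild = int(lines[2][0])                                # nested child grids
    return nzoom, (imin, imax, jmin, jmax), (rx, ry, rt), nchild

# The ICEDYN example quoted in the README:
demo = """1
34 63 34 63 3 3 3
0"""
print(parse_fixed_grids(demo))  # → (1, (34, 63, 34, 63), (3, 3, 3), 0)
```

Telescoping hierarchies repeat this pattern per grid, so a full parser would recurse on the child count.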
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/ORCA2_ICE_PISCES/EXPREF/namelist_ice_cfg
r10535 r12165
 38  38 &namdyn_rhg    !   Ice rheology
 39  39 !------------------------------------------------------------------------------
     40    ln_aEVP = .false. ! adaptive rheology (Kimmritz et al. 2016 & 2017)
 40  41 /
 41  42 !------------------------------------------------------------------------------
-
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/README.rst
r10694 r12165 1 ************************ 2 Reference configurations 3 ************************ 1 ******************************** 2 Run the Reference configurations 3 ******************************** 4 5 .. todo:: 6 7 Lack of illustrations for ref. cfgs, and more generally in the guide. 4 8 5 9 NEMO is distributed with a set of reference configurations allowing both … … 7 11 the developer to test/validate his NEMO developments (using SETTE package). 8 12 13 .. contents:: 14 :local: 15 :depth: 1 16 9 17 .. attention:: 10 18 … … 21 29 =========================================================== 22 30 23 A user who wants to compile the ORCA2_ICE_PISCES_ reference configuration using ``makenemo`` 24 should use the following, by selecting among available architecture file or providing a user defined one: 31 To compile the ORCA2_ICE_PISCES_ reference configuration using :file:`makenemo`, 32 one should use the following, by selecting among available architecture file or 33 providing a user defined one: 25 34 26 35 .. code-block:: console 27 28 $ ./makenemo -r 'ORCA2_ICE_PISCES' -m 'my -fortran.fcm' -j '4'36 37 $ ./makenemo -r 'ORCA2_ICE_PISCES' -m 'my_arch' -j '4' 29 38 30 39 A new ``EXP00`` folder will be created within the selected reference configurations, 31 namely ``./cfgs/ORCA2_ICE_PISCES/EXP00``, 32 where it will be necessary to uncompress the Input & Forcing Files listed in the above table. 40 namely ``./cfgs/ORCA2_ICE_PISCES/EXP00``. 41 It will be necessary to uncompress the archives listed in the above table for 42 the given reference configuration that includes input & forcing files. 33 43 34 44 Then it will be possible to launch the execution of the model through a runscript 35 45 (opportunely adapted to the user system). 36 46 37 47 List of Configurations 38 48 ====================== 39 49 40 All forcing files listed below in the table are available from |NEMO archives URL|_ 41 42 .. 
|NEMO archives URL| image:: https://www.zenodo.org/badge/DOI/10.5281/zenodo.1472245.svg 43 .. _NEMO archives URL: https://doi.org/10.5281/zenodo.1472245 44 45 ====================== ===== ===== ===== ======== ======= ================================================ 46 Configuration Component(s) Input & Forcing File(s) 47 ---------------------- ---------------------------------- ------------------------------------------------ 48 Name OPA SI3 TOP PISCES AGRIF 49 ====================== ===== ===== ===== ======== ======= ================================================ 50 AGRIF_DEMO_ X X X AGRIF_DEMO_v4.0.tar, ORCA2_ICE_v4.0.tar 51 AMM12_ X AMM12_v4.0.tar 52 C1D_PAPA_ X INPUTS_C1D_PAPA_v4.0.tar 53 GYRE_BFM_ X X *none* 54 GYRE_PISCES_ X X X *none* 55 ORCA2_ICE_PISCES_ X X X X ORCA2_ICE_v4.0.tar, INPUTS_PISCES_v4.0.tar 56 ORCA2_OFF_PISCES_ X X ORCA2_OFF_v4.0.tar, INPUTS_PISCES_v4.0.tar 57 ORCA2_OFF_TRC_ X ORCA2_OFF_v4.0.tar 58 ORCA2_SAS_ICE_ X ORCA2_ICE_v4.0.tar, INPUTS_SAS_v4.0.tar 59 SPITZ12_ X X SPITZ12_v4.0.tar 60 ====================== ===== ===== ===== ======== ======= ================================================ 50 All forcing files listed below in the table are available from |DOI data|_ 51 52 =================== === === === === === ================================== 53 Configuration Component(s) Archives (input & forcing files) 54 ------------------- ------------------- ---------------------------------- 55 Name O S T P A 56 =================== === === === === === ================================== 57 AGRIF_DEMO_ X X X AGRIF_DEMO_v4.0.tar, 58 ORCA2_ICE_v4.0.tar 59 AMM12_ X AMM12_v4.0.tar 60 C1D_PAPA_ X INPUTS_C1D_PAPA_v4.0.tar 61 GYRE_BFM_ X X *none* 62 GYRE_PISCES_ X X X *none* 63 ORCA2_ICE_PISCES_ X X X X ORCA2_ICE_v4.0.tar, 64 INPUTS_PISCES_v4.0.tar 65 ORCA2_OFF_PISCES_ X X ORCA2_OFF_v4.0.tar, 66 INPUTS_PISCES_v4.0.tar 67 ORCA2_OFF_TRC_ X ORCA2_OFF_v4.0.tar 68 ORCA2_SAS_ICE_ X ORCA2_ICE_v4.0.tar, 69 INPUTS_SAS_v4.0.tar 70 SPITZ12_ X X SPITZ12_v4.0.tar 71 
=================== === === === === === ================================== 72 73 .. admonition:: Legend for component combination 74 75 O for OCE, S for SI\ :sup:`3`, T for TOP, P for PISCES and A for AGRIF 61 76 62 77 AGRIF_DEMO … … 72 87 particular interest to test sea ice coupling. 73 88 89 .. image:: _static/AGRIF_DEMO_no_cap.jpg 90 :scale: 66% 91 :align: center 92 74 93 The 1:1 grid can be used alone as a benchmark to check that 75 the model solution is not corrupted by grid exchanges. 94 the model solution is not corrupted by grid exchanges. 76 95 Note that since grids interact only at the baroclinic time level, 77 96 numerically exact results can not be achieved in the 1:1 case. 78 Perfect reproducibility is obtained only by switching to a fully explicit setup instead of a split explicit free surface scheme. 97 Perfect reproducibility is obtained only by switching to a fully explicit setup instead of 98 a split explicit free surface scheme. 79 99 80 100 AMM12 … … 85 105 a regular horizontal grid of ~12 km of resolution (see :cite:`ODEA2012`). 86 106 87 This configuration allows to tests several features of NEMO specifically addressed to the shelf seas. 107 .. image:: _static/AMM_domain.png 108 :align: center 109 110 This configuration allows to tests several features of NEMO specifically addressed to the shelf seas. 88 111 In particular, ``AMM12`` accounts for vertical s-coordinates system, GLS turbulence scheme, 89 112 tidal lateral boundary conditions using a flather scheme (see more in ``BDY``). … … 99 122 -------- 100 123 101 ``C1D_PAPA`` is a 1D configuration for the `PAPA station <http://www.pmel.noaa.gov/OCS/Papa/index-Papa.shtml>`_ located in the northern-eastern Pacific Ocean at 50.1°N, 144.9°W. 102 See `Reffray et al. (2015) <http://www.geosci-model-dev.net/8/69/2015>`_ for the description of its physical and numerical turbulent-mixing behaviour. 
103 104 The water column setup, called NEMO1D, is activated with the inclusion of the CPP key ``key_c1d`` and 105 has a horizontal domain of 3x3 grid points. 106 107 This reference configuration uses 75 vertical levels grid (1m at the surface), GLS turbulence scheme with K-epsilon closure and the NCAR bulk formulae. 124 .. figure:: _static/Papa2015.jpg 125 :height: 225px 126 :align: left 127 128 ``C1D_PAPA`` is a 1D configuration for the `PAPA station`_ located in 129 the northern-eastern Pacific Ocean at 50.1°N, 144.9°W. 130 See :gmd:`Reffray et al. (2015) <8/69/2015>` for the description of 131 its physical and numerical turbulent-mixing behaviour. 132 133 | The water column setup, called NEMO1D, is activated with 134 the inclusion of the CPP key ``key_c1d`` and 135 has a horizontal domain of 3x3 grid points. 136 | This reference configuration uses 75 vertical levels grid (1m at the surface), 137 GLS turbulence scheme with K-epsilon closure and the NCAR bulk formulae. 138 108 139 Data provided with ``INPUTS_C1D_PAPA_v4.0.tar`` file account for: 109 140 110 - ``forcing_PAPASTATION_1h_y201[0-1].nc`` : ECMWF operational analysis atmospheric forcing rescaled to 1h (with long and short waves flux correction) for years 2010 and 2011 111 - ``init_PAPASTATION_m06d15.nc`` : Initial Conditions from observed data and Levitus 2009 climatology 112 - ``chlorophyll_PAPASTATION.nc`` : surface chlorophyll file from Seawifs data 141 - :file:`forcing_PAPASTATION_1h_y201[0-1].nc`: 142 ECMWF operational analysis atmospheric forcing rescaled to 1h 143 (with long and short waves flux correction) for years 2010 and 2011 144 - :file:`init_PAPASTATION_m06d15.nc`: Initial Conditions from 145 observed data and Levitus 2009 climatology 146 - :file:`chlorophyll_PAPASTATION.nc`: surface chlorophyll file from Seawifs data 113 147 114 148 GYRE_BFM 115 149 -------- 116 150 117 ``GYRE_BFM`` shares the same physical setup of GYRE_PISCES_, but NEMO is coupled with the `BFM 
<http://www.bfm-community.eu/>`_ biogeochemical model as described in ``./cfgs/GYRE_BFM/README``. 151 ``GYRE_BFM`` shares the same physical setup of GYRE_PISCES_, 152 but NEMO is coupled with the `BFM`_ biogeochemical model as described in ``./cfgs/GYRE_BFM/README``. 118 153 119 154 GYRE_PISCES … … 123 158 in the Beta-plane approximation with a regular 1° horizontal resolution and 31 vertical levels, 124 159 with PISCES BGC model :cite:`gmd-8-2465-2015`. 125 Analytical forcing for heat, freshwater and wind-stress fields are applied. 126 127 This configuration acts also as demonstrator of the **user defined setup** (``ln_read_cfg = .false.``) and 128 grid setting are handled through the ``&namusr_def`` controls in ``namelist_cfg``: 160 Analytical forcing for heat, freshwater and wind-stress fields are applied. 161 162 This configuration acts also as demonstrator of the **user defined setup** 163 (``ln_read_cfg = .false.``) and grid setting are handled through 164 the ``&namusr_def`` controls in :file:`namelist_cfg`: 129 165 130 166 .. literalinclude:: ../../../cfgs/GYRE_PISCES/EXPREF/namelist_cfg 131 167 :language: fortran 132 :lines: 34-42168 :lines: 35-41 133 169 134 170 Note that, the default grid size is 30x20 grid points (with ``nn_GYRE = 1``) and 135 171 vertical levels are set by ``jpkglo``. 136 The specific code changes can be inspected in ``./src/OCE/USR``. 137 138 **Running GYRE as a benchmark** : 139 this simple configuration can be used as a benchmark since it is easy to increase resolution, 140 with the drawback of getting results that have a very limited physical meaning. 
141 142 GYRE grid resolution can be increased at runtime by setting a different value of ``nn_GYRE`` (integer multiplier scaling factor), as described in the following table: 143 144 =========== ========= ========== ============ =================== 145 ``nn_GYRE`` *jpiglo* *jpjglo* ``jpkglo`` **Equivalent to** 146 =========== ========= ========== ============ =================== 147 1 30 20 31 GYRE 1° 148 25 750 500 101 ORCA 1/2° 149 50 1500 1000 101 ORCA 1/4° 150 150 4500 3000 101 ORCA 1/12° 151 200 6000 4000 101 ORCA 1/16° 152 =========== ========= ========== ============ =================== 153 154 Note that, it is necessary to set ``ln_bench = .true.`` in ``namusr_def`` to 155 avoid problems in the physics computation and that 156 the model timestep should be adequately rescaled. 157 158 For example if ``nn_GYRE = 150``, equivalent to an ORCA 1/12° grid, 159 the timestep ``rn_rdt = 1200`` should be set to 1200 seconds 160 161 Differently from previous versions of NEMO, 162 the code uses by default the time-splitting scheme and 163 internally computes the number of sub-steps. 172 The specific code changes can be inspected in :file:`./src/OCE/USR`. 173 174 .. rubric:: Running GYRE as a benchmark 175 176 | This simple configuration can be used as a benchmark since it is easy to increase resolution, 177 with the drawback of getting results that have a very limited physical meaning. 
178 | GYRE grid resolution can be increased at runtime by setting a different value of ``nn_GYRE`` 179 (integer multiplier scaling factor), as described in the following table: 180 181 =========== ============ ============ ============ =============== 182 ``nn_GYRE`` ``jpiglo`` ``jpjglo`` ``jpkglo`` Equivalent to 183 =========== ============ ============ ============ =============== 184 1 30 20 31 GYRE 1° 185 25 750 500 101 ORCA 1/2° 186 50 1500 1000 101 ORCA 1/4° 187 150 4500 3000 101 ORCA 1/12° 188 200 6000 4000 101 ORCA 1/16° 189 =========== ============ ============ ============ =============== 190 191 | Note that, it is necessary to set ``ln_bench = .true.`` in ``&namusr_def`` to 192 avoid problems in the physics computation and that 193 the model timestep should be adequately rescaled. 194 | For example if ``nn_GYRE = 150``, equivalent to an ORCA 1/12° grid, 195 the timestep ``rn_rdt`` should be set to 1200 seconds 196 Differently from previous versions of NEMO, the code uses by default the time-splitting scheme and 197 internally computes the number of sub-steps. 164 198 165 199 ORCA2_ICE_PISCES … … 174 208 the ratio of anisotropy is nearly one everywhere 175 209 176 this configuration uses the three components 177 178 - |O PA|, the ocean dynamical core179 - | SI3|, the thermodynamic-dynamic sea ice model.180 - | TOP|, passive tracer transport module and PISCES BGC model :cite:`gmd-8-2465-2015`210 This configuration uses the three components 211 212 - |OCE|, the ocean dynamical core 213 - |ICE|, the thermodynamic-dynamic sea ice model. 214 - |MBG|, passive tracer transport module and PISCES BGC model :cite:`gmd-8-2465-2015` 181 215 182 216 All components share the same grid. 183 184 217 The model is forced with CORE-II normal year atmospheric forcing and 185 218 it uses the NCAR bulk formulae. 186 219 187 In this ``ORCA2_ICE_PISCES`` configuration, 188 AGRIF nesting can be activated that includes a nested grid in the Agulhas region. 
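The GYRE benchmark table above is a pure linear relation in ``nn_GYRE``; a small sketch of the implied horizontal grid sizes (read off the table, with ``jpkglo`` chosen independently):

```python
def gyre_global_grid(nn_gyre):
    """Horizontal grid size implied by the GYRE benchmark table:
    jpiglo = 30 * nn_GYRE, jpjglo = 20 * nn_GYRE (jpkglo is set separately)."""
    return 30 * nn_gyre, 20 * nn_gyre

# Reproduce the table rows:
for nn in (1, 25, 50, 150, 200):
    jpi, jpj = gyre_global_grid(nn)
    print(f"nn_GYRE={nn:3d} -> jpiglo={jpi}, jpjglo={jpj}")
```

Remember that the timestep must be rescaled accordingly (e.g. ``rn_rdt = 1200`` for ``nn_GYRE = 150``), as noted above.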
189 190 To set up this configuration, after extracting NEMO: 191 192 Build your AGRIF configuration directory from ``ORCA2_ICE_PISCES``, 193 with the ``key_agrif`` CPP key activated: 194 195 .. code-block:: console 196 197 $ ./makenemo -r 'ORCA2_ICE_PISCES' -n 'AGRIF' add_key 'key_agrif' 198 199 By using the input files and namelists for ``ORCA2_ICE_PISCES``, 200 the AGRIF test configuration is ready to run. 201 202 **Ocean Physics** 203 204 - *horizontal diffusion on momentum*: the eddy viscosity coefficient depends on the geographical position. It is taken as 40000 m^2/s, reduced in the equator regions (2000 m^2/s) excepted near the western boundaries. 205 - *isopycnal diffusion on tracers*: the diffusion acts along the isopycnal surfaces (neutral surface) with an eddy diffusivity coefficient of 2000 m^2/s. 206 - *Eddy induced velocity parametrization* with a coefficient that depends on the growth rate of baroclinic instabilities (it usually varies from 15 m^2/s to 3000 m^2/s). 207 - *lateral boundary conditions* : zero fluxes of heat and salt and no-slip conditions are applied through lateral solid boundaries. 208 - *bottom boundary condition* : zero fluxes of heat and salt are applied through the ocean bottom. 209 The Beckmann [19XX] simple bottom boundary layer parameterization is applied along continental slopes. 210 A linear friction is applied on momentum. 211 - *convection*: the vertical eddy viscosity and diffusivity coefficients are increased to 1 m^2/s in case of static instability. 212 - *time step* is 5760sec (1h36') so that there is 15 time steps in one day. 220 .. rubric:: Ocean Physics 221 222 :horizontal diffusion on momentum: 223 the eddy viscosity coefficient depends on the geographical position. 224 It is taken as 40000 m\ :sup:`2`/s, reduced in the equator regions (2000 m\ :sup:`2`/s) 225 excepted near the western boundaries. 
226 :isopycnal diffusion on tracers: 227 the diffusion acts along the isopycnal surfaces (neutral surface) with 228 an eddy diffusivity coefficient of 2000 m\ :sup:`2`/s. 229 :Eddy induced velocity parametrization: 230 With a coefficient that depends on the growth rate of baroclinic instabilities 231 (it usually varies from 15 m\ :sup:`2`/s to 3000 m\ :sup:`2`/s). 232 :lateral boundary conditions: 233 Zero fluxes of heat and salt and no-slip conditions are applied through lateral solid boundaries. 234 :bottom boundary condition: 235 Zero fluxes of heat and salt are applied through the ocean bottom. 236 The Beckmann [19XX] simple bottom boundary layer parameterization is applied along 237 continental slopes. 238 A linear friction is applied on momentum. 239 :convection: 240 The vertical eddy viscosity and diffusivity coefficients are increased to 1 m\ :sup:`2`/s in 241 case of static instability. 242 :time step: is 5760sec (1h36') so that there is 15 time steps in one day. 213 243 214 244 ORCA2_OFF_PISCES … … 218 248 but only PISCES model is an active component of TOP. 219 249 220 221 250 ORCA2_OFF_TRC 222 251 ------------- 223 252 224 ``ORCA2_OFF_TRC`` is based on the ORCA2 global ocean configuration 225 (see ORCA2_ICE_PISCES_ for general description) along with the tracer passive transport module (TOP), but dynamical fields are pre-calculated and read with specific time frequency. 226 227 This enables for an offline coupling of TOP components, 228 here specifically inorganic carbon compounds (cfc11, cfc12, sf6, c14) and water age module (age). 229 See ``namelist_top_cfg`` to inspect the selection of each component with the dedicated logical keys. 253 | ``ORCA2_OFF_TRC`` is based on the ORCA2 global ocean configuration 254 (see ORCA2_ICE_PISCES_ for general description) along with 255 the tracer passive transport module (TOP), 256 but dynamical fields are pre-calculated and read with specific time frequency. 
257 | This enables for an offline coupling of TOP components, 258 here specifically inorganic carbon compounds (CFC11, CFC12, SF6, C14) and water age module (age). 259 See :file:`namelist_top_cfg` to inspect the selection of 260 each component with the dedicated logical keys. 230 261 231 262 Pre-calculated dynamical fields are provided to NEMO using 232 the namelist ``&namdta_dyn`` in ``namelist_cfg``,263 the namelist ``&namdta_dyn`` in :file:`namelist_cfg`, 233 264 in this case with a 5 days frequency (120 hours): 234 265 235 .. literalinclude:: ../../ ../cfgs/GYRE_PISCES/EXPREF/namelist_ref266 .. literalinclude:: ../../namelists/namdta_dyn 236 267 :language: fortran 237 :lines: 935-960 238 239 Input dynamical fields for this configuration (``ORCA2_OFF_v4.0.tar``) comes from 268 269 Input dynamical fields for this configuration (:file:`ORCA2_OFF_v4.0.tar`) comes from 240 270 a 2000 years long climatological simulation of ORCA2_ICE using ERA40 atmospheric forcing. 241 271 242 Note that, this configuration default uses linear free surface (``ln_linssh = .true.``) assuming that 243 model mesh is not varying in time and 244 it includes the bottom boundary layer parameterization (``ln_trabbl = .true.``) that 245 requires the provision of bbl coefficients through ``sn_ubl`` and ``sn_vbl`` fields. 246 247 It is also possible to activate PISCES model (see ``ORCA2_OFF_PISCES``) or248 a user defined set of tracers and source-sink terms with ``ln_my_trc = .true.``249 (and adaptation of ``./src/TOP/MY_TRC`` routines).272 | Note that, 273 this configuration default uses linear free surface (``ln_linssh = .true.``) assuming that 274 model mesh is not varying in time and 275 it includes the bottom boundary layer parameterization (``ln_trabbl = .true.``) that 276 requires the provision of BBL coefficients through ``sn_ubl`` and ``sn_vbl`` fields. 
277 | It is also possible to activate PISCES model (see ``ORCA2_OFF_PISCES``) or 278 a user defined set of tracers and source-sink terms with ``ln_my_trc = .true.`` 279 (and adaptation of ``./src/TOP/MY_TRC`` routines). 250 280 251 281 In addition, the offline module (OFF) allows for the provision of further fields: … … 254 284 by including an input datastream similarly to the following: 255 285 256 .. code-block:: fortran257 258 sn_rnf = 'dyna_grid_T', 120, 'sorunoff' , .true., .true., 'yearly', '', '', ''259 260 2. **VVL dynamical fields**, 261 in the case input data were produced by a dyamical core usingvariable volume (``ln_linssh = .false.``)262 it necessary to provide also diverce and E-P at before timestep by286 .. code-block:: fortran 287 288 sn_rnf = 'dyna_grid_T', 120, 'sorunoff' , .true., .true., 'yearly', '', '', '' 289 290 2. **VVL dynamical fields**, in the case input data were produced by a dyamical core using 291 variable volume (``ln_linssh = .false.``) 292 it is necessary to provide also diverce and E-P at before timestep by 263 293 including input datastreams similarly to the following 264 294 265 .. code-block:: fortran 266 267 sn_div = 'dyna_grid_T', 120, 'e3t' , .true., .true., 'yearly', '', '', '' 268 sn_empb = 'dyna_grid_T', 120, 'sowaflupb', .true., .true., 'yearly', '', '', '' 269 295 .. code-block:: fortran 296 297 sn_div = 'dyna_grid_T', 120, 'e3t' , .true., .true., 'yearly', '', '', '' 298 sn_empb = 'dyna_grid_T', 120, 'sowaflupb', .true., .true., 'yearly', '', '', '' 270 299 271 300 More details can be found by inspecting the offline data manager in 272 the routine ``./src/OFF/dtadyn.F90``.301 the routine :file:`./src/OFF/dtadyn.F90`. 273 302 274 303 ORCA2_SAS_ICE 275 304 ------------- 276 305 277 ORCA2_SAS_ICE is a demonstrator of the Stand-Alone Surface (SAS) module and 278 it relies on ORCA2 global ocean configuration (see ORCA2_ICE_PISCES_ for general description). 
279 280 The standalone surface module allows surface elements such as sea-ice, iceberg drift, and 281 surface fluxes to be run using prescribed model state fields. 282 It can profitably be used to compare different bulk formulae or 283 adjust the parameters of a given bulk formula. 284 285 More informations about SAS can be found in NEMO manual. 306 | ORCA2_SAS_ICE is a demonstrator of the Stand-Alone Surface (SAS) module and 307 it relies on ORCA2 global ocean configuration (see ORCA2_ICE_PISCES_ for general description). 308 | The standalone surface module allows surface elements such as sea-ice, iceberg drift, and 309 surface fluxes to be run using prescribed model state fields. 310 It can profitably be used to compare different bulk formulae or 311 adjust the parameters of a given bulk formula. 312 313 More informations about SAS can be found in :doc:`NEMO manual <cite>`. 286 314 287 315 SPITZ12 … … 290 318 ``SPITZ12`` is a regional configuration around the Svalbard archipelago 291 319 at 1/12° of horizontal resolution and 75 vertical levels. 292 See `Rousset et al. (2015) <https://www.geosci-model-dev.net/8/2991/2015/>`_for more details.320 See :gmd:`Rousset et al. (2015) <8/2991/2015>` for more details. 293 321 294 322 This configuration references to year 2002, … … 296 324 while lateral boundary conditions for dynamical fields have 3 days time frequency. 297 325 298 References 299 ========== 300 301 .. bibliography:: configurations.bib 326 .. rubric:: References 327 328 .. bibliography:: cfgs.bib 302 329 :all: 303 330 :style: unsrt 304 331 :labelprefix: C 305 306 .. Links and substitutions307 -
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SHARED/README.rst
r10598 r12165 3 3 *********** 4 4 5 .. todo:: 6 7 8 5 9 .. contents:: 6 10 :local: 7 11 8 12 Output of diagnostics in NEMO is usually done using XIOS. 9 This is an efficient way of writing diagnostics because the time averaging, file writing and even some simple arithmetic or regridding is carried out in parallel to the NEMO model run. 13 This is an efficient way of writing diagnostics because 14 the time averaging, file writing and even some simple arithmetic or regridding is carried out in 15 parallel to the NEMO model run. 10 16 This page gives a basic introduction to using XIOS with NEMO. 11 Much more information is available from the XIOS homepageabove and from the NEMO manual.17 Much more information is available from the :xios:`XIOS homepage<>` above and from the NEMO manual. 12 18 13 Use of XIOS for diagnostics is activated using the pre-compiler key ``key_iomput``. 19 Use of XIOS for diagnostics is activated using the pre-compiler key ``key_iomput``. 14 20 15 21 Extracting and installing XIOS 16 ------------------------------ 22 ============================== 17 23 18 24 1. Install the NetCDF4 library. 19 If you want to use single file output you will need to compile the HDF & NetCDF libraries to allow parallel IO.20 2. Download the version of XIOS that you wish to use. The recommended version is now XIOS 2.5: 21 22 .. code-block:: console 25 If you want to use single file output you will need to compile the HDF & NetCDF libraries to 26 allow parallel IO. 27 2. Download the version of XIOS that you wish to use. 28 The recommended version is now XIOS 2.5: 23 29 24 $ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5 xios-2.5 30 .. code-block:: console 25 31 26 and follow the instructions in `XIOS documentation <http://forge.ipsl.jussieu.fr/ioserver/wiki/documentation>`_ to compile it. 
27 If you find problems at this stage, support can be found by subscribing to the `XIOS mailing list <http://forge.ipsl.jussieu.fr/mailman/listinfo.cgi/xios-users>`_ and sending a mail message to it. 32 $ svn co http://forge.ipsl.jussieu.fr/ioserver/svn/XIOS/branchs/xios-2.5 33 34 and follow the instructions in :xios:`XIOS documentation <wiki/documentation>` to compile it. 35 If you find problems at this stage, support can be found by subscribing to 36 the :xios:`XIOS mailing list <../mailman/listinfo.cgi/xios-users>` and sending a mail message to it. 28 37 29 38 XIOS Configuration files 30 39 ------------------------ 31 40 32 XIOS is controlled using xml input files that should be copied to your model run directory before running the model. 33 Examples of these files can be found in the reference configurations (``cfgs``). The XIOS executable expects to find a file called ``iodef.xml`` in the model run directory. 34 In NEMO we have made the decision to use include statements in the ``iodef.xml`` file to include ``field_def_nemo-oce.xml`` (for physics), ``field_def_nemo-ice.xml`` (for ice), ``field_def_nemo-pisces.xml`` (for biogeochemistry) and ``domain_def.xml`` from the /cfgs/SHARED directory. 35 Most users will not need to modify ``domain_def.xml`` or ``field_def_nemo-???.xml`` unless they want to add new diagnostics to the NEMO code. 36 The definition of the output files is organized into separate ``file_definition.xml`` files which are included in the ``iodef.xml`` file. 41 XIOS is controlled using XML input files that should be copied to 42 your model run directory before running the model. 43 Examples of these files can be found in the reference configurations (:file:`./cfgs`). 44 The XIOS executable expects to find a file called :file:`iodef.xml` in the model run directory. 
45 In NEMO we have made the decision to use include statements in the :file:`iodef.xml` file to include: 46 47 - :file:`field_def_nemo-oce.xml` (for physics), 48 - :file:`field_def_nemo-ice.xml` (for ice), 49 - :file:`field_def_nemo-pisces.xml` (for biogeochemistry) and 50 - :file:`domain_def.xml` from the :file:`./cfgs/SHARED` directory. 51 52 Most users will not need to modify :file:`domain_def.xml` or :file:`field_def_nemo-???.xml` unless 53 they want to add new diagnostics to the NEMO code. 54 The definition of the output files is organized into separate :file:`file_definition.xml` files which 55 are included in the :file:`iodef.xml` file. 37 56 38 57 Modes 39 ----- 58 ===== 40 59 41 60 Detached Mode … … 44 63 In detached mode the XIOS executable is executed on separate cores from the NEMO model. 45 64 This is the recommended method for using XIOS for realistic model runs. 46 To use this mode set ``using_server`` to ``true`` at the bottom of the ``iodef.xml`` file:65 To use this mode set ``using_server`` to ``true`` at the bottom of the :file:`iodef.xml` file: 47 66 48 67 .. code-block:: xml 49 68 50 69 <variable id="using_server" type="boolean">true</variable> 51 70 52 Make sure there is a copy (or link to) your XIOS executable in the working directory and in your job submission script allocate processors to XIOS. 71 Make sure there is a copy (or link to) your XIOS executable in the working directory and 72 in your job submission script allocate processors to XIOS. 53 73 54 74 Attached Mode … … 56 76 57 77 In attached mode XIOS runs on each of the cores used by NEMO. 58 This method is less efficient than the detached mode but can be more convenient for testing or with small configurations. 59 To activate this mode simply set ``using_server`` to false in the ``iodef.xml`` file 78 This method is less efficient than the detached mode but can be more convenient for testing or 79 with small configurations. 
80 To activate this mode simply set ``using_server`` to ``false`` in the :file:`iodef.xml` file 60 81 61 82 .. code-block:: xml 62 83 63 84 <variable id="using_server" type="boolean">false</variable> 64 85 65 86 and don't allocate any cores to XIOS. 66 87 88 .. note:: 89 90 Due to the different domain decompositions between XIOS and NEMO, 91 if the total number of cores is larger than the number of grid points in the ``j`` direction then 92 the model run will fail. 67 93 68 94 Adding new diagnostics 69 ---------------------- 95 ====================== 70 96 71 97 If you want to add a NEMO diagnostic to the NEMO code you will need to do the following: 72 98 73 99 1. Add any necessary code to calculate your new diagnostic in NEMO 74 100 2. Send the field to XIOS using ``CALL iom_put( 'field_id', variable )`` where 101 ``field_id`` is a unique id for your new diagnostics and 102 variable is the Fortran variable containing the data. 103 This should be called at every model timestep regardless of how often you want to output the field. 104 No time averaging should be done in the model code. 105 3. 
If it is computationally expensive to calculate your new diagnostic 106 you should also use ``iom_use`` to determine if it is requested in the current model run. 107 For example, 79 108 80 IF( iom_use('field_id') ) THEN 81 !Some expensive computation 82 !... 83 !... 84 CALL iom_put('field_id', variable) 85 ENDIF 109 .. code-block:: fortran 86 110 87 111 IF( iom_use('field_id') ) THEN 112 !Some expensive computation 113 !... 114 !... 115 CALL iom_put('field_id', variable) 116 ENDIF 117 118 4. Add a variable definition to the :file:`field_def_nemo-???.xml` file. 119 5. Add the variable to the :file:`iodef.xml` or :file:`file_definition.xml` file. -
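Step 5 above can be illustrated with a minimal, hypothetical file group entry; the ids, file name and output frequency below are placeholders for illustration only and are not part of this changeset:

.. code-block:: xml

   <file_definition>
      <file id="file_sketch" name="my_output" output_freq="1d" enabled=".true.">
         <field field_ref="field_id" name="field_id" operation="average"/>
      </file>
   </file_definition>

With an entry of this shape XIOS time-averages the field sent through ``iom_put`` over each output period and writes it to a NetCDF file based on the ``name`` attribute.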
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SHARED/namelist_ice_ref
r11586 r12165 57 57 ln_landfast_L16 = .false. ! landfast: parameterization from Lemieux 2016 58 58 rn_depfra = 0.125 ! fraction of ocean depth that ice must reach to initiate landfast 59 ! recommended range: [0.1 ; 0.25] - L16=0.125 - home=0.15 60 rn_icebfr = 15. ! ln_landfast_L16: maximum bottom stress per unit volume [N/m3] 61 ! ln_landfast_home: maximum bottom stress per unit area of contact [N/m2] 62 ! recommended range: ?? L16=15 - home=10 59 ! recommended range: [0.1 ; 0.25] 60 rn_icebfr = 15. ! maximum bottom stress per unit volume [N/m3] 63 61 rn_lfrelax = 1.e-5 ! relaxation time scale to reach static friction [s-1] 64 rn_tensile = 0.2 ! ln_landfast_L16: isotropic tensile strength62 rn_tensile = 0.2 ! isotropic tensile strength [0-0.5??] 65 63 / 66 64 !------------------------------------------------------------------------------ … … 103 101 &namdyn_adv ! Ice advection 104 102 !------------------------------------------------------------------------------ 105 ln_adv_Pra = . false. ! Advection scheme (Prather)106 ln_adv_UMx = . true. ! Advection scheme (Ultimate-Macho)103 ln_adv_Pra = .true. ! Advection scheme (Prather) 104 ln_adv_UMx = .false. ! Advection scheme (Ultimate-Macho) 107 105 nn_UMx = 5 ! order of the scheme for UMx (1-5 ; 20=centered 2nd order) 108 106 / … … 234 232 &namdia ! Diagnostics 235 233 !------------------------------------------------------------------------------ 236 ln_icediachk = .false. ! check online the heat, mass & salt budgets at each time step237 ! ! rate of ice spuriously gained/lost. For ex., rn_icechk=1. <=> 1mm/year, rn_icechk=0.1 <=> 1mm/10years238 rn_icechk_cel = 1 . ! check at any gridcell=> stops the code if violated (and writes a file)239 rn_icechk_glo = 0.1 ! check over the entire ice cover=> only prints warnings234 ln_icediachk = .false. ! check online heat, mass & salt budgets 235 ! ! rate of ice spuriously gained/lost at each time step => rn_icechk=1 <=> 1.e-6 m/hour 236 rn_icechk_cel = 100. ! 
check at each gridcell (1.e-4m/h)=> stops the code if violated (and writes a file) 237 rn_icechk_glo = 1. ! check over the entire ice cover (1.e-6m/h)=> only prints warnings 240 238 ln_icediahsb = .false. ! output the heat, mass & salt budgets (T) or not (F) 241 239 ln_icectl = .false. ! ice points output for debug (T or F) -
NEMO/branches/2019/dev_ASINTER-01-05_merged/cfgs/SPITZ12/EXPREF/namelist_ice_cfg
r11587 r12165 44 44 &namdyn_rhg ! Ice rheology 45 45 !------------------------------------------------------------------------------ 46 ln_rhg_EVP = .true. ! EVP rheology47 ln_aEVP = .true. ! adaptive rheology (Kimmritz et al. 2016 & 2017)48 46 / 49 47 !------------------------------------------------------------------------------ 50 48 &namdyn_adv ! Ice advection 51 49 !------------------------------------------------------------------------------ 50 ln_adv_Pra = .false. ! Advection scheme (Prather) 51 ln_adv_UMx = .true. ! Advection scheme (Ultimate-Macho) 52 nn_UMx = 5 ! order of the scheme for UMx (1-5 ; 20=centered 2nd order) 52 53 / 53 54 !------------------------------------------------------------------------------ -
NEMO/branches/2019/dev_ASINTER-01-05_merged/doc/latex/global/ametsoc.bst
r11128 r12165 9 9 %% *** Bibliography style file for ALL AMS Journals...version 1.0 *** 10 10 %% *** Brian Papa - American Meteorological Society *** 11 %% 11 %% 12 12 %% Copyright 1994-2004 Patrick W Daly 13 13 % =============================================================== … … 519 519 duplicate$ empty$ 'skip$ 520 520 { 521 "\href{http://dx.doi.org/" swap$ * "}{ DOI}" *521 "\href{http://dx.doi.org/" swap$ * "}{\aiDoi}" * 522 522 } 523 523 if$ … … 1192 1192 crossref missing$ 1193 1193 { format.in.ed.booktitle "booktitle" output.check 1194 format.publisher.address output 1194 format.publisher.address output 1195 1195 format.bvolume output 1196 1196 format.number.series output -
NEMO/branches/2019/dev_ASINTER-01-05_merged/doc/rst/source/conf.py
r12063 r12165 230 230 texinfo_documents = [ 231 231 ('guide', 'NEMO', u'NEMO Documentation', 232 u'NEMO System Team', 'NEMO', ' One line description of project.',232 u'NEMO System Team', 'NEMO', 'Community Ocean Model', 233 233 'Miscellaneous'), 234 234 ] -
NEMO/branches/2019/dev_ASINTER-01-05_merged/doc/rst/source/global.rst
r12063 r12165 1 .. Roles (custom styles related to CSS classes in 'source/_static/style.css') 1 .. Roles 2 3 .. custom styles related to CSS classes in './_static/style.css' 2 4 3 5 .. role:: blue … … 5 7 .. role:: grey 6 8 .. role:: greysup(sup) 9 10 .. inline code snippets 11 12 .. role:: python(code) 13 :language: python 14 :class: highlight 15 16 .. role:: fortran(code) 17 :language: fortran 18 :class: highlight 19 20 .. role:: console(code) 21 :language: console 22 :class: highlight 7 23 8 24 .. Substitutions -
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/ice.F90
r11586 r12165 328 328 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:,:) :: sz_i !: ice salinity [PSS] 329 329 330 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: a_ip !: melt pond fraction per grid cell area330 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: a_ip !: melt pond concentration 331 331 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: v_ip !: melt pond volume per grid cell area [m] 332 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: a_ip_frac !: melt pond volume per ice area333 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: h_ip !: melt pond thickness[m]334 335 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: at_ip !: total melt pond fraction332 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: a_ip_frac !: melt pond fraction (a_ip/a_i) 333 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: h_ip !: melt pond depth [m] 334 335 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: at_ip !: total melt pond concentration 336 336 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: hm_ip !: mean melt pond depth [m] 337 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: vt_ip !: total melt pond volume per unit area[m]337 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: vt_ip !: total melt pond volume per gridcell area [m] 338 338 339 339 !!---------------------------------------------------------------------- -
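The renamed melt-pond comments above encode simple diagnostic relations: ``a_ip_frac = a_ip / a_i`` is stated explicitly in the comment, while the volume/depth relation below is the usual one and is an assumption here, as are the numbers used; a quick sketch:

```python
# Illustrative relations between the renamed melt-pond variables in ice.F90.
# a_ip_frac = a_ip / a_i is stated in the comments; v_ip = a_ip * h_ip is
# the usual volume/depth relation and is assumed here. Numbers are made up.

a_i  = 0.8    # ice concentration in the grid cell
a_ip = 0.2    # melt pond concentration (per grid cell area)
h_ip = 0.15   # melt pond depth [m]

a_ip_frac = a_ip / a_i    # pond fraction of the ice-covered area
v_ip      = a_ip * h_ip   # pond volume per grid cell area [m]

print(a_ip_frac, v_ip)
```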
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/icectl.F90
r11586 r12165 44 44 PUBLIC ice_prt3D 45 45 46 ! thresold values for conservation 46 ! threshold rates for conservation 47 47 ! these values are changed by the namelist parameter rn_icechk, so that threshold = zchk * rn_icechk 48 REAL(wp), PARAMETER :: zchk_m = 1.e-5 ! kg/m2/s <=> 1mm of ice per year spuriously gained/lost 49 REAL(wp), PARAMETER :: zchk_s = 1.e-4 ! g/m2/s <=> 1mm of ice per year spuriously gained/lost (considering s=10g/kg) 50 REAL(wp), PARAMETER :: zchk_t = 3. ! W/m2 <=> 1mm of ice per year spuriously gained/lost (considering Lf=3e5J/kg) 48 REAL(wp), PARAMETER :: zchk_m = 2.5e-7 ! kg/m2/s <=> 1e-6 m of ice per hour spuriously gained/lost 49 REAL(wp), PARAMETER :: zchk_s = 2.5e-6 ! g/m2/s <=> 1e-6 m of ice per hour spuriously gained/lost (considering s=10g/kg) 50 REAL(wp), PARAMETER :: zchk_t = 7.5e-2 ! W/m2 <=> 1e-6 m of ice per hour spuriously gained/lost (considering Lf=3e5J/kg) 51 51 52 52 !! * Substitutions … … 68 68 !! ** Method : This is an online diagnostics which can be activated with ln_icediachk=true 69 69 !! It prints in ocean.output if there is a violation of conservation at each time-step 70 !! The thresholds (zchk_m, zchk_s, zchk_t) which determine violations are set to 71 !! a minimum of 1 mm of ice (over the ice area) that is lost/gained spuriously during 100 years. 70 !! The thresholds (zchk_m, zchk_s, zchk_t) determine violations 72 71 !! For salt and heat thresholds, ice is considered to have a salinity of 10 73 72 !! and a heat content of 3e5 J/kg (=latent heat of fusion) … … 133 132 134 133 ! -- advection scheme is conservative? -- ! 135 zvtrp = glob_sum( 'icectl', ( diag_trp_vi * rhoi + diag_trp_vs * rhos ) * e1e2t ) ! must be close to 0 136 zetrp = glob_sum( 'icectl', ( diag_trp_ei + diag_trp_es ) * e1e2t ) ! must be close to 0 134 zvtrp = glob_sum( 'icectl', ( diag_trp_vi * rhoi + diag_trp_vs * rhos ) * e1e2t ) ! must be close to 0 (only for Prather) 135 zetrp = glob_sum( 'icectl', ( diag_trp_ei + diag_trp_es ) * e1e2t ) ! 
must be close to 0 (only for Prather) 137 136 138 137 ! ice area (+epsi10 to set a threshold > 0 when there is no ice) … … 157 156 & WRITE(numout,*) cd_routine,' : violation a_i > amax = ',zdiag_amax 158 157 ! check if advection scheme is conservative 159 IF( ABS(zvtrp) > zchk_m * rn_icechk_glo * zarea .AND. cd_routine == 'icedyn_adv' ) & 160 & WRITE(numout,*) cd_routine,' : violation adv scheme [kg] = ',zvtrp * rdt_ice 158 ! only check for Prather because Ultimate-Macho uses corrective fluxes (wfx etc) 159 ! so the formulation for conservation is different (and not coded) 160 ! it does not mean UM is not conservative (it is checked with above prints) => update (09/2019): same for Prather now 161 !IF( ln_adv_Pra .AND. ABS(zvtrp) > zchk_m * rn_icechk_glo * zarea .AND. cd_routine == 'icedyn_adv' ) & 162 ! & WRITE(numout,*) cd_routine,' : violation adv scheme [kg] = ',zvtrp * rdt_ice 161 163 ENDIF 162 164 ! … … 173 175 !! ** Method : This is an online diagnostics which can be activated with ln_icediachk=true 174 176 !! It prints in ocean.output if there is a violation of conservation at each time-step 175 !! The thresholds (zchk_m, zchk_s, zchk_t) which determine the violation are set to 176 !! a minimum of 1 mm of ice (over the ice area) that is lost/gained spuriously during 100 years. 177 !! The thresholds (zchk_m, zchk_s, zchk_t) determine the violations 177 178 !! For salt and heat thresholds, ice is considered to have a salinity of 10 178 179 !! and a heat content of 3e5 J/kg (=latent heat of fusion) -
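The revised thresholds above correspond to a spurious ice gain/loss rate of 1e-6 m per hour, as the comments state. A quick arithmetic check confirms the three values (a sea-ice density of 917 kg/m3 is assumed here; the salinity and latent heat are taken from the code comments):

```python
# Sanity check of the new conservation thresholds in icectl.F90:
# a spurious rate of 1e-6 m of ice per hour, converted with
# rho_i = 917 kg/m3 (assumption, not in the diff), s = 10 g/kg
# and Lf = 3e5 J/kg (both taken from the code comments).

RHO_I = 917.0    # sea-ice density [kg/m3] (assumed)
S_ICE = 10.0     # reference bulk salinity [g/kg]
LF    = 3.0e5    # latent heat of fusion [J/kg]

rate_m = 1.0e-6 * RHO_I / 3600.0   # mass rate [kg/m2/s], compare zchk_m = 2.5e-7
rate_s = rate_m * S_ICE            # salt rate [g/m2/s],  compare zchk_s = 2.5e-6
rate_t = rate_m * LF               # heat rate [W/m2],    compare zchk_t = 7.5e-2

print(rate_m, rate_s, rate_t)
```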
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/icedyn_adv_pra.F90
r10425 r12165 16 16 !! adv_pra_rst : read/write Prather field in ice restart file, or initialized to zero 17 17 !!---------------------------------------------------------------------- 18 USE phycst ! physical constant 18 19 USE dom_oce ! ocean domain 19 20 USE ice ! sea-ice variables 20 21 USE sbc_oce , ONLY : nn_fsbc ! frequency of sea-ice call 22 USE icevar ! sea-ice: operations 21 23 ! 22 24 USE in_out_manager ! I/O manager … … 25 27 USE lib_fortran ! fortran utilities (glob_sum + no signed zero) 26 28 USE lbclnk ! lateral boundary conditions (or mpp links) 27 USE prtctl ! Print control28 29 29 30 IMPLICIT NONE … … 36 37 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: sxice, syice, sxxice, syyice, sxyice ! ice thickness 37 38 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: sxsn , sysn , sxxsn , syysn , sxysn ! snow thickness 38 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: sxa , sya , sxxa , syya , sxya ! lead fraction39 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: sxa , sya , sxxa , syya , sxya ! ice concentration 39 40 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: sxsal, sysal, sxxsal, syysal, sxysal ! ice salinity 40 41 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: sxage, syage, sxxage, syyage, sxyage ! ice age 41 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:) :: sxopw, syopw, sxxopw, syyopw, sxyopw ! open water in sea ice42 42 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:,:) :: sxc0 , syc0 , sxxc0 , syyc0 , sxyc0 ! snow layers heat content 43 43 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:,:) :: sxe , sye , sxxe , syye , sxye ! ice layers heat content … … 81 81 REAL(wp), DIMENSION(:,:,:,:), INTENT(inout) :: pe_i ! ice heat content 82 82 ! 83 INTEGER :: jk, jl, jt ! dummy loop indices 84 INTEGER :: initad ! number of sub-timestep for the advection 85 REAL(wp) :: zcfl , zusnit ! 
- - 86 REAL(wp), ALLOCATABLE, DIMENSION(:,:) :: zarea 87 REAL(wp), ALLOCATABLE, DIMENSION(:,:,:) :: z0opw 88 REAL(wp), ALLOCATABLE, DIMENSION(:,:,:) :: z0ice, z0snw, z0ai, z0smi, z0oi 89 REAL(wp), ALLOCATABLE, DIMENSION(:,:,:) :: z0ap , z0vp 90 REAL(wp), ALLOCATABLE, DIMENSION(:,:,:,:) :: z0es 91 REAL(wp), ALLOCATABLE, DIMENSION(:,:,:,:) :: z0ei 83 INTEGER :: ji,jj, jk, jl, jt ! dummy loop indices 84 INTEGER :: icycle ! number of sub-timestep for the advection 85 REAL(wp) :: zdt ! - - 86 REAL(wp), DIMENSION(1) :: zcflprv, zcflnow ! for global communication 87 REAL(wp), DIMENSION(jpi,jpj) :: zati1, zati2 88 REAL(wp), DIMENSION(jpi,jpj) :: zudy, zvdx 89 REAL(wp), DIMENSION(jpi,jpj,jpl) :: zarea 90 REAL(wp), DIMENSION(jpi,jpj,jpl) :: z0ice, z0snw, z0ai, z0smi, z0oi 91 REAL(wp), DIMENSION(jpi,jpj,jpl) :: z0ap , z0vp 92 REAL(wp), DIMENSION(jpi,jpj,nlay_s,jpl) :: z0es 93 REAL(wp), DIMENSION(jpi,jpj,nlay_i,jpl) :: z0ei 92 94 !!---------------------------------------------------------------------- 93 95 ! 94 96 IF( kt == nit000 .AND. lwp ) WRITE(numout,*) '-- ice_dyn_adv_pra: Prather advection scheme' 95 97 ! 96 ALLOCATE( zarea(jpi,jpj) , z0opw(jpi,jpj, 1 ) , z0ice(jpi,jpj,jpl) , z0snw(jpi,jpj,jpl) , & 97 & z0ai(jpi,jpj,jpl) , z0smi(jpi,jpj,jpl) , z0oi (jpi,jpj,jpl) , z0ap (jpi,jpj,jpl) , z0vp(jpi,jpj,jpl) , & 98 & z0es (jpi,jpj,nlay_s,jpl), z0ei(jpi,jpj,nlay_i,jpl) ) 99 ! 100 ! --- If ice drift field is too fast, use an appropriate time step for advection (CFL test for stability) --- ! 101 zcfl = MAXVAL( ABS( pu_ice(:,:) ) * rdt_ice * r1_e1u(:,:) ) 102 zcfl = MAX( zcfl, MAXVAL( ABS( pv_ice(:,:) ) * rdt_ice * r1_e2v(:,:) ) ) 103 CALL mpp_max( 'icedyn_adv_pra', zcfl ) 98 ! --- If ice drift is too fast, use subtime steps for advection (CFL test for stability) --- ! 99 ! Note: the advection split is applied at the next time-step in order to avoid blocking global comm. 100 ! 
this should not affect too much the stability 101 zcflnow(1) = MAXVAL( ABS( pu_ice(:,:) ) * rdt_ice * r1_e1u(:,:) ) 102 zcflnow(1) = MAX( zcflnow(1), MAXVAL( ABS( pv_ice(:,:) ) * rdt_ice * r1_e2v(:,:) ) ) 104 103 105 IF( zcfl > 0.5 ) THEN ; initad = 2 ; zusnit = 0.5_wp 106 ELSE ; initad = 1 ; zusnit = 1.0_wp 104 ! non-blocking global communication send zcflnow and receive zcflprv 105 CALL mpp_delay_max( 'icedyn_adv_pra', 'cflice', zcflnow(:), zcflprv(:), kt == nitend - nn_fsbc + 1 ) 106 107 IF( zcflprv(1) > .5 ) THEN ; icycle = 2 108 ELSE ; icycle = 1 107 109 ENDIF 110 zdt = rdt_ice / REAL(icycle) 108 111 109 zarea(:,:) = e1e2t(:,:) 110 !------------------------- 111 ! transported fields 112 !------------------------- 113 z0opw(:,:,1) = pato_i(:,:) * e1e2t(:,:) ! Open water area 114 DO jl = 1, jpl 115 z0snw(:,:,jl) = pv_s (:,:, jl) * e1e2t(:,:) ! Snow volume 116 z0ice(:,:,jl) = pv_i (:,:, jl) * e1e2t(:,:) ! Ice volume 117 z0ai (:,:,jl) = pa_i (:,:, jl) * e1e2t(:,:) ! Ice area 118 z0smi(:,:,jl) = psv_i(:,:, jl) * e1e2t(:,:) ! Salt content 119 z0oi (:,:,jl) = poa_i(:,:, jl) * e1e2t(:,:) ! Age content 120 DO jk = 1, nlay_s 121 z0es(:,:,jk,jl) = pe_s(:,:,jk,jl) * e1e2t(:,:) ! Snow heat content 122 END DO 123 DO jk = 1, nlay_i 124 z0ei(:,:,jk,jl) = pe_i(:,:,jk,jl) * e1e2t(:,:) ! Ice heat content 125 END DO 126 IF ( ln_pnd_H12 ) THEN 127 z0ap(:,:,jl) = pa_ip(:,:,jl) * e1e2t(:,:) ! Melt pond fraction 128 z0vp(:,:,jl) = pv_ip(:,:,jl) * e1e2t(:,:) ! Melt pond volume 112 ! --- transport --- ! 113 zudy(:,:) = pu_ice(:,:) * e2u(:,:) 114 zvdx(:,:) = pv_ice(:,:) * e1v(:,:) 115 116 DO jt = 1, icycle 117 118 ! record at_i before advection (for open water) 119 zati1(:,:) = SUM( pa_i(:,:,:), dim=3 ) 120 121 ! --- transported fields --- ! 122 DO jl = 1, jpl 123 zarea(:,:,jl) = e1e2t(:,:) 124 z0snw(:,:,jl) = pv_s (:,:,jl) * e1e2t(:,:) ! Snow volume 125 z0ice(:,:,jl) = pv_i (:,:,jl) * e1e2t(:,:) ! Ice volume 126 z0ai (:,:,jl) = pa_i (:,:,jl) * e1e2t(:,:) ! 
Ice area 127 z0smi(:,:,jl) = psv_i(:,:,jl) * e1e2t(:,:) ! Salt content 128 z0oi (:,:,jl) = poa_i(:,:,jl) * e1e2t(:,:) ! Age content 129 DO jk = 1, nlay_s 130 z0es(:,:,jk,jl) = pe_s(:,:,jk,jl) * e1e2t(:,:) ! Snow heat content 131 END DO 132 DO jk = 1, nlay_i 133 z0ei(:,:,jk,jl) = pe_i(:,:,jk,jl) * e1e2t(:,:) ! Ice heat content 134 END DO 135 IF ( ln_pnd_H12 ) THEN 136 z0ap(:,:,jl) = pa_ip(:,:,jl) * e1e2t(:,:) ! Melt pond fraction 137 z0vp(:,:,jl) = pv_ip(:,:,jl) * e1e2t(:,:) ! Melt pond volume 138 ENDIF 139 END DO 140 ! 141 ! !--------------------------------------------! 142 IF( MOD( (kt - 1) / nn_fsbc , 2 ) == MOD( (jt - 1) , 2 ) ) THEN !== odd ice time step: adv_x then adv_y ==! 143 ! !--------------------------------------------! 144 CALL adv_x( zdt , zudy , 1._wp , zarea , z0ice , sxice , sxxice , syice , syyice , sxyice ) !--- ice volume 145 CALL adv_y( zdt , zvdx , 0._wp , zarea , z0ice , sxice , sxxice , syice , syyice , sxyice ) 146 CALL adv_x( zdt , zudy , 1._wp , zarea , z0snw , sxsn , sxxsn , sysn , syysn , sxysn ) !--- snow volume 147 CALL adv_y( zdt , zvdx , 0._wp , zarea , z0snw , sxsn , sxxsn , sysn , syysn , sxysn ) 148 CALL adv_x( zdt , zudy , 1._wp , zarea , z0smi , sxsal , sxxsal , sysal , syysal , sxysal ) !--- ice salinity 149 CALL adv_y( zdt , zvdx , 0._wp , zarea , z0smi , sxsal , sxxsal , sysal , syysal , sxysal ) 150 CALL adv_x( zdt , zudy , 1._wp , zarea , z0ai , sxa , sxxa , sya , syya , sxya ) !--- ice concentration 151 CALL adv_y( zdt , zvdx , 0._wp , zarea , z0ai , sxa , sxxa , sya , syya , sxya ) 152 CALL adv_x( zdt , zudy , 1._wp , zarea , z0oi , sxage , sxxage , syage , syyage , sxyage ) !--- ice age 153 CALL adv_y( zdt , zvdx , 0._wp , zarea , z0oi , sxage , sxxage , syage , syyage , sxyage ) 154 ! 
155 DO jk = 1, nlay_s !--- snow heat content 156 CALL adv_x( zdt, zudy, 1._wp, zarea, z0es (:,:,jk,:), sxc0(:,:,jk,:), & 157 & sxxc0(:,:,jk,:), syc0(:,:,jk,:), syyc0(:,:,jk,:), sxyc0(:,:,jk,:) ) 158 CALL adv_y( zdt, zvdx, 0._wp, zarea, z0es (:,:,jk,:), sxc0(:,:,jk,:), & 159 & sxxc0(:,:,jk,:), syc0(:,:,jk,:), syyc0(:,:,jk,:), sxyc0(:,:,jk,:) ) 160 END DO 161 DO jk = 1, nlay_i !--- ice heat content 162 CALL adv_x( zdt, zudy, 1._wp, zarea, z0ei(:,:,jk,:), sxe(:,:,jk,:), & 163 & sxxe(:,:,jk,:), sye(:,:,jk,:), syye(:,:,jk,:), sxye(:,:,jk,:) ) 164 CALL adv_y( zdt, zvdx, 0._wp, zarea, z0ei(:,:,jk,:), sxe(:,:,jk,:), & 165 & sxxe(:,:,jk,:), sye(:,:,jk,:), syye(:,:,jk,:), sxye(:,:,jk,:) ) 166 END DO 167 ! 168 IF ( ln_pnd_H12 ) THEN 169 CALL adv_x( zdt , zudy , 1._wp , zarea , z0ap , sxap , sxxap , syap , syyap , sxyap ) !--- melt pond fraction 170 CALL adv_y( zdt , zvdx , 0._wp , zarea , z0ap , sxap , sxxap , syap , syyap , sxyap ) 171 CALL adv_x( zdt , zudy , 1._wp , zarea , z0vp , sxvp , sxxvp , syvp , syyvp , sxyvp ) !--- melt pond volume 172 CALL adv_y( zdt , zvdx , 0._wp , zarea , z0vp , sxvp , sxxvp , syvp , syyvp , sxyvp ) 173 ENDIF 174 ! !--------------------------------------------! 175 ELSE !== even ice time step: adv_y then adv_x ==! 176 ! !--------------------------------------------! 
177 CALL adv_y( zdt , zvdx , 1._wp , zarea , z0ice , sxice , sxxice , syice , syyice , sxyice ) !--- ice volume 178 CALL adv_x( zdt , zudy , 0._wp , zarea , z0ice , sxice , sxxice , syice , syyice , sxyice ) 179 CALL adv_y( zdt , zvdx , 1._wp , zarea , z0snw , sxsn , sxxsn , sysn , syysn , sxysn ) !--- snow volume 180 CALL adv_x( zdt , zudy , 0._wp , zarea , z0snw , sxsn , sxxsn , sysn , syysn , sxysn ) 181 CALL adv_y( zdt , zvdx , 1._wp , zarea , z0smi , sxsal , sxxsal , sysal , syysal , sxysal ) !--- ice salinity 182 CALL adv_x( zdt , zudy , 0._wp , zarea , z0smi , sxsal , sxxsal , sysal , syysal , sxysal ) 183 CALL adv_y( zdt , zvdx , 1._wp , zarea , z0ai , sxa , sxxa , sya , syya , sxya ) !--- ice concentration 184 CALL adv_x( zdt , zudy , 0._wp , zarea , z0ai , sxa , sxxa , sya , syya , sxya ) 185 CALL adv_y( zdt , zvdx , 1._wp , zarea , z0oi , sxage , sxxage , syage , syyage , sxyage ) !--- ice age 186 CALL adv_x( zdt , zudy , 0._wp , zarea , z0oi , sxage , sxxage , syage , syyage , sxyage ) 187 DO jk = 1, nlay_s !--- snow heat content 188 CALL adv_y( zdt, zvdx, 1._wp, zarea, z0es (:,:,jk,:), sxc0(:,:,jk,:), & 189 & sxxc0(:,:,jk,:), syc0(:,:,jk,:), syyc0(:,:,jk,:), sxyc0(:,:,jk,:) ) 190 CALL adv_x( zdt, zudy, 0._wp, zarea, z0es (:,:,jk,:), sxc0(:,:,jk,:), & 191 & sxxc0(:,:,jk,:), syc0(:,:,jk,:), syyc0(:,:,jk,:), sxyc0(:,:,jk,:) ) 192 END DO 193 DO jk = 1, nlay_i !--- ice heat content 194 CALL adv_y( zdt, zvdx, 1._wp, zarea, z0ei(:,:,jk,:), sxe(:,:,jk,:), & 195 & sxxe(:,:,jk,:), sye(:,:,jk,:), syye(:,:,jk,:), sxye(:,:,jk,:) ) 196 CALL adv_x( zdt, zudy, 0._wp, zarea, z0ei(:,:,jk,:), sxe(:,:,jk,:), & 197 & sxxe(:,:,jk,:), sye(:,:,jk,:), syye(:,:,jk,:), sxye(:,:,jk,:) ) 198 END DO 199 IF ( ln_pnd_H12 ) THEN 200 CALL adv_y( zdt , zvdx , 1._wp , zarea , z0ap , sxap , sxxap , syap , syyap , sxyap ) !--- melt pond fraction 201 CALL adv_x( zdt , zudy , 0._wp , zarea , z0ap , sxap , sxxap , syap , syyap , sxyap ) 202 CALL adv_y( zdt , zvdx , 1._wp , zarea , z0vp , sxvp 
, sxxvp , syvp , syyvp , sxyvp ) !--- melt pond volume 203 CALL adv_x( zdt , zudy , 0._wp , zarea , z0vp , sxvp , sxxvp , syvp , syyvp , sxyvp ) 204 ENDIF 205 ! 129 206 ENDIF 207 208 ! --- Recover the properties from their contents --- ! 209 DO jl = 1, jpl 210 pv_i (:,:,jl) = z0ice(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 211 pv_s (:,:,jl) = z0snw(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 212 psv_i(:,:,jl) = z0smi(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 213 poa_i(:,:,jl) = z0oi (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 214 pa_i (:,:,jl) = z0ai (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 215 DO jk = 1, nlay_s 216 pe_s(:,:,jk,jl) = z0es(:,:,jk,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 217 END DO 218 DO jk = 1, nlay_i 219 pe_i(:,:,jk,jl) = z0ei(:,:,jk,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 220 END DO 221 IF ( ln_pnd_H12 ) THEN 222 pa_ip(:,:,jl) = z0ap(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 223 pv_ip(:,:,jl) = z0vp(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1) 224 ENDIF 225 END DO 226 ! 227 ! derive open water from ice concentration 228 zati2(:,:) = SUM( pa_i(:,:,:), dim=3 ) 229 DO jj = 2, jpjm1 230 DO ji = fs_2, fs_jpim1 231 pato_i(ji,jj) = pato_i(ji,jj) - ( zati2(ji,jj) - zati1(ji,jj) ) & !--- open water 232 & - ( zudy(ji,jj) - zudy(ji-1,jj) + zvdx(ji,jj) - zvdx(ji,jj-1) ) * r1_e1e2t(ji,jj) * zdt 233 END DO 234 END DO 235 CALL lbc_lnk( 'icedyn_adv_pra', pato_i, 'T', 1. ) 236 ! 237 ! --- Ensure non-negative fields --- ! 238 ! Remove negative values (conservation is ensured) 239 ! (because advected fields are not perfectly bounded and tiny negative values can occur, e.g. -1.e-20) 240 CALL ice_var_zapneg( zdt, pato_i, pv_i, pv_s, psv_i, poa_i, pa_i, pa_ip, pv_ip, pe_s, pe_i ) 241 ! 242 ! --- Ensure snow load is not too big --- ! 243 CALL Hsnow( zdt, pv_i, pv_s, pa_i, pa_ip, pe_s ) 244 ! 130 245 END DO 131 132 ! !--------------------------------------------!133 IF( MOD( ( kt - 1) / nn_fsbc , 2 ) == 0 ) THEN !== odd ice time step: adv_x then adv_y ==!134 ! 
      !                                                !--------------------------------------------!
      DO jt = 1, initad
         CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0opw (:,:,1), sxopw(:,:),  &   !--- ice open water area
            &                               sxxopw(:,:) , syopw(:,:), syyopw(:,:), sxyopw(:,:)  )
         CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0opw (:,:,1), sxopw(:,:),  &
            &                               sxxopw(:,:) , syopw(:,:), syyopw(:,:), sxyopw(:,:)  )
         DO jl = 1, jpl
            CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0ice (:,:,jl), sxice(:,:,jl),  &   !--- ice volume ---
               &                               sxxice(:,:,jl), syice(:,:,jl), syyice(:,:,jl), sxyice(:,:,jl) )
            CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0ice (:,:,jl), sxice(:,:,jl),  &
               &                               sxxice(:,:,jl), syice(:,:,jl), syyice(:,:,jl), sxyice(:,:,jl) )
            CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0snw (:,:,jl), sxsn (:,:,jl),  &   !--- snow volume ---
               &                               sxxsn (:,:,jl), sysn (:,:,jl), syysn (:,:,jl), sxysn (:,:,jl) )
            CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0snw (:,:,jl), sxsn (:,:,jl),  &
               &                               sxxsn (:,:,jl), sysn (:,:,jl), syysn (:,:,jl), sxysn (:,:,jl) )
            CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0smi (:,:,jl), sxsal(:,:,jl),  &   !--- ice salinity ---
               &                               sxxsal(:,:,jl), sysal(:,:,jl), syysal(:,:,jl), sxysal(:,:,jl) )
            CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0smi (:,:,jl), sxsal(:,:,jl),  &
               &                               sxxsal(:,:,jl), sysal(:,:,jl), syysal(:,:,jl), sxysal(:,:,jl) )
            CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0oi  (:,:,jl), sxage(:,:,jl),  &   !--- ice age ---
               &                               sxxage(:,:,jl), syage(:,:,jl), syyage(:,:,jl), sxyage(:,:,jl) )
            CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0oi  (:,:,jl), sxage(:,:,jl),  &
               &                               sxxage(:,:,jl), syage(:,:,jl), syyage(:,:,jl), sxyage(:,:,jl) )
            CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0ai  (:,:,jl), sxa  (:,:,jl),  &   !--- ice concentrations ---
               &                               sxxa  (:,:,jl), sya  (:,:,jl), syya  (:,:,jl), sxya  (:,:,jl) )
            CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0ai  (:,:,jl), sxa  (:,:,jl),  &
               &                               sxxa  (:,:,jl), sya  (:,:,jl), syya  (:,:,jl), sxya  (:,:,jl) )
            DO jk = 1, nlay_s                                                             !--- snow heat contents ---
               CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0es (:,:,jk,jl), sxc0(:,:,jk,jl),  &
                  &                               sxxc0(:,:,jk,jl), syc0(:,:,jk,jl), syyc0(:,:,jk,jl), sxyc0(:,:,jk,jl) )
               CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0es (:,:,jk,jl), sxc0(:,:,jk,jl),  &
                  &                               sxxc0(:,:,jk,jl), syc0(:,:,jk,jl), syyc0(:,:,jk,jl), sxyc0(:,:,jk,jl) )
            END DO
            DO jk = 1, nlay_i                                                             !--- ice heat contents ---
               CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0ei(:,:,jk,jl), sxe(:,:,jk,jl),  &
                  &                               sxxe(:,:,jk,jl), sye(:,:,jk,jl), syye(:,:,jk,jl), sxye(:,:,jk,jl) )
               CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0ei(:,:,jk,jl), sxe(:,:,jk,jl),  &
                  &                               sxxe(:,:,jk,jl), sye(:,:,jk,jl), syye(:,:,jk,jl), sxye(:,:,jk,jl) )
            END DO
            IF ( ln_pnd_H12 ) THEN
               CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0ap (:,:,jl), sxap (:,:,jl),  &   !--- melt pond fraction ---
                  &                               sxxap (:,:,jl), syap (:,:,jl), syyap (:,:,jl), sxyap (:,:,jl) )
               CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0ap (:,:,jl), sxap (:,:,jl),  &
                  &                               sxxap (:,:,jl), syap (:,:,jl), syyap (:,:,jl), sxyap (:,:,jl) )
               CALL adv_x( zusnit, pu_ice, 1._wp, zarea, z0vp (:,:,jl), sxvp (:,:,jl),  &   !--- melt pond volume ---
                  &                               sxxvp (:,:,jl), syvp (:,:,jl), syyvp (:,:,jl), sxyvp (:,:,jl) )
               CALL adv_y( zusnit, pv_ice, 0._wp, zarea, z0vp (:,:,jl), sxvp (:,:,jl),  &
                  &                               sxxvp (:,:,jl), syvp (:,:,jl), syyvp (:,:,jl), sxyvp (:,:,jl) )
            ENDIF
         END DO
      END DO
      !                                                !--------------------------------------------!
      ELSE                                             !==  even ice time step:  adv_y then adv_x  ==!
      !                                                !--------------------------------------------!
      DO jt = 1, initad
         CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0opw (:,:,1), sxopw(:,:),  &   !--- ice open water area
            &                               sxxopw(:,:) , syopw(:,:), syyopw(:,:), sxyopw(:,:)  )
         CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0opw (:,:,1), sxopw(:,:),  &
            &                               sxxopw(:,:) , syopw(:,:), syyopw(:,:), sxyopw(:,:)  )
         DO jl = 1, jpl
            CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0ice (:,:,jl), sxice(:,:,jl),  &   !--- ice volume ---
               &                               sxxice(:,:,jl), syice(:,:,jl), syyice(:,:,jl), sxyice(:,:,jl) )
            CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0ice (:,:,jl), sxice(:,:,jl),  &
               &                               sxxice(:,:,jl), syice(:,:,jl), syyice(:,:,jl), sxyice(:,:,jl) )
            CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0snw (:,:,jl), sxsn (:,:,jl),  &   !--- snow volume ---
               &                               sxxsn (:,:,jl), sysn (:,:,jl), syysn (:,:,jl), sxysn (:,:,jl) )
            CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0snw (:,:,jl), sxsn (:,:,jl),  &
               &                               sxxsn (:,:,jl), sysn (:,:,jl), syysn (:,:,jl), sxysn (:,:,jl) )
            CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0smi (:,:,jl), sxsal(:,:,jl),  &   !--- ice salinity ---
               &                               sxxsal(:,:,jl), sysal(:,:,jl), syysal(:,:,jl), sxysal(:,:,jl) )
            CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0smi (:,:,jl), sxsal(:,:,jl),  &
               &                               sxxsal(:,:,jl), sysal(:,:,jl), syysal(:,:,jl), sxysal(:,:,jl) )
            CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0oi  (:,:,jl), sxage(:,:,jl),  &   !--- ice age ---
               &                               sxxage(:,:,jl), syage(:,:,jl), syyage(:,:,jl), sxyage(:,:,jl) )
            CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0oi  (:,:,jl), sxage(:,:,jl),  &
               &                               sxxage(:,:,jl), syage(:,:,jl), syyage(:,:,jl), sxyage(:,:,jl) )
            CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0ai  (:,:,jl), sxa  (:,:,jl),  &   !--- ice concentrations ---
               &                               sxxa  (:,:,jl), sya  (:,:,jl), syya  (:,:,jl), sxya  (:,:,jl) )
            CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0ai  (:,:,jl), sxa  (:,:,jl),  &
               &                               sxxa  (:,:,jl), sya  (:,:,jl), syya  (:,:,jl), sxya  (:,:,jl) )
            DO jk = 1, nlay_s                                                             !--- snow heat contents ---
               CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0es (:,:,jk,jl), sxc0(:,:,jk,jl),  &
                  &                               sxxc0(:,:,jk,jl), syc0(:,:,jk,jl), syyc0(:,:,jk,jl), sxyc0(:,:,jk,jl) )
               CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0es (:,:,jk,jl), sxc0(:,:,jk,jl),  &
                  &                               sxxc0(:,:,jk,jl), syc0(:,:,jk,jl), syyc0(:,:,jk,jl), sxyc0(:,:,jk,jl) )
            END DO
            DO jk = 1, nlay_i                                                             !--- ice heat contents ---
               CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0ei(:,:,jk,jl), sxe(:,:,jk,jl),  &
                  &                               sxxe(:,:,jk,jl), sye(:,:,jk,jl), syye(:,:,jk,jl), sxye(:,:,jk,jl) )
               CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0ei(:,:,jk,jl), sxe(:,:,jk,jl),  &
                  &                               sxxe(:,:,jk,jl), sye(:,:,jk,jl), syye(:,:,jk,jl), sxye(:,:,jk,jl) )
            END DO
            IF ( ln_pnd_H12 ) THEN
               CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0ap (:,:,jl), sxap (:,:,jl),  &   !--- melt pond fraction ---
                  &                               sxxap (:,:,jl), syap (:,:,jl), syyap (:,:,jl), sxyap (:,:,jl) )
               CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0ap (:,:,jl), sxap (:,:,jl),  &
                  &                               sxxap (:,:,jl), syap (:,:,jl), syyap (:,:,jl), sxyap (:,:,jl) )
               CALL adv_y( zusnit, pv_ice, 1._wp, zarea, z0vp (:,:,jl), sxvp (:,:,jl),  &   !--- melt pond volume ---
                  &                               sxxvp (:,:,jl), syvp (:,:,jl), syyvp (:,:,jl), sxyvp (:,:,jl) )
               CALL adv_x( zusnit, pu_ice, 0._wp, zarea, z0vp (:,:,jl), sxvp (:,:,jl),  &
                  &                               sxxvp (:,:,jl), syvp (:,:,jl), syyvp (:,:,jl), sxyvp (:,:,jl) )
            ENDIF
         END DO
      END DO
      ENDIF

      !-------------------------------------------
      ! Recover the properties from their contents
      !-------------------------------------------
      pato_i(:,:) = z0opw(:,:,1) * r1_e1e2t(:,:) * tmask(:,:,1)
      DO jl = 1, jpl
         pv_i (:,:,jl) = z0ice(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1)
         pv_s (:,:,jl) = z0snw(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1)
         psv_i(:,:,jl) = z0smi(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1)
         poa_i(:,:,jl) = z0oi (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1)
         pa_i (:,:,jl) = z0ai (:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1)
         DO jk = 1, nlay_s
            pe_s(:,:,jk,jl) = z0es(:,:,jk,jl) * r1_e1e2t(:,:) * tmask(:,:,1)
         END DO
         DO jk = 1, nlay_i
            pe_i(:,:,jk,jl) = z0ei(:,:,jk,jl) * r1_e1e2t(:,:) * tmask(:,:,1)
         END DO
         IF ( ln_pnd_H12 ) THEN
            pa_ip(:,:,jl) = z0ap(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1)
            pv_ip(:,:,jl) = z0vp(:,:,jl) * r1_e1e2t(:,:) * tmask(:,:,1)
         ENDIF
      END DO
      !
      DEALLOCATE( zarea , z0opw , z0ice, z0snw , z0ai , z0smi , z0oi , z0ap , z0vp , z0es, z0ei )
      !
      IF( lrst_ice )   CALL adv_pra_rst( 'WRITE', kt )   !* write Prather fields in the restart file

      …

   SUBROUTINE adv_x( pdt, put , pcrh, psm , ps0 ,  &
      &              psx, psxx, psy , psyy, psxy )
      !!----------------------------------------------------------------------
      …
      !!                variable on x axis
      !!----------------------------------------------------------------------
      REAL(wp)                    , INTENT(in   ) ::   pdf              ! reduction factor for the time step
      REAL(wp)                    , INTENT(in   ) ::   pcrh             ! call adv_x then adv_y (=1) or the opposite (=0)
      REAL(wp), DIMENSION(jpi,jpj), INTENT(in   ) ::   put              ! i-direction ice velocity at U-point [m/s]
      REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psm              ! area
      REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   ps0              ! field to be advected
      REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psx , psy        ! 1st moments
      REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psxx, psyy, psxy ! 2nd moments
      REAL(wp)                    , INTENT(in   ) ::   pdt              !
      REAL(wp)                  , INTENT(in   ) ::   pcrh             ! call adv_x then adv_y (=1) or the opposite (=0)
      REAL(wp), DIMENSION(:,:)  , INTENT(in   ) ::   put              ! i-direction ice velocity at U-point [m/s]
      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   psm              ! area
      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   ps0              ! field to be advected
      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   psx , psy        ! 1st moments
      REAL(wp), DIMENSION(:,:,:), INTENT(inout) ::   psxx, psyy, psxy ! 2nd moments
      !!
      INTEGER  ::   ji, jj
      REAL(wp) ::   zs1max, zrdt, zslpmax, ztemp   ! local scalars
      INTEGER  ::   ji, jj, jl, jcat               ! dummy loop indices
      REAL(wp) ::   zs1max, zslpmax, ztemp         ! local scalars
      REAL(wp) ::   zs1new, zalf , zalfq , zbt     !   -      -
      REAL(wp) ::   zs2new, zalf1, zalf1q, zbt1    !   -      -
      …
      REAL(wp), DIMENSION(jpi,jpj) ::   zalg, zalg1, zalg1q   !   -      -
      !-----------------------------------------------------------------------

      ! Limitation of moments.

      zrdt = rdt_ice * pdf   ! If ice drift field is too fast, use an appropriate time step for advection.

      DO jj = 1, jpj
         DO ji = 1, jpi
            zslpmax = MAX( 0._wp, ps0(ji,jj) )
            zs1max  = 1.5 * zslpmax
            zs1new  = MIN( zs1max, MAX( -zs1max, psx(ji,jj) ) )
            zs2new  = MIN( 2.0 * zslpmax - 0.3334 * ABS( zs1new ),   &
               &           MAX( ABS( zs1new ) - zslpmax, psxx(ji,jj) ) )
            rswitch = ( 1.0 - MAX( 0._wp, SIGN( 1._wp, -zslpmax) ) ) * tmask(ji,jj,1)   ! Case of empty boxes & Apply mask

            ps0 (ji,jj) = zslpmax
            psx (ji,jj) = zs1new * rswitch
            psxx(ji,jj) = zs2new * rswitch
            psy (ji,jj) = psy (ji,jj) * rswitch
            psyy(ji,jj) = psyy(ji,jj) * rswitch
            psxy(ji,jj) = MIN( zslpmax, MAX( -zslpmax, psxy(ji,jj) ) ) * rswitch
         END DO
      !
      jcat = SIZE( ps0 , 3 )   ! size of input arrays
      !
      DO jl = 1, jcat   ! loop on categories
         !
         ! Limitation of moments.
         DO jj = 2, jpjm1
            DO ji = 1, jpi
               !  Initialize volumes of boxes (=area if adv_x first called, =psm otherwise)
               psm (ji,jj,jl) = MAX( pcrh * e1e2t(ji,jj) + ( 1.0 - pcrh ) * psm(ji,jj,jl) , epsi20 )
               !
               zslpmax = MAX( 0._wp, ps0(ji,jj,jl) )
               zs1max  = 1.5 * zslpmax
               zs1new  = MIN( zs1max, MAX( -zs1max, psx(ji,jj,jl) ) )
               zs2new  = MIN( 2.0 * zslpmax - 0.3334 * ABS( zs1new ),   &
                  &           MAX( ABS( zs1new ) - zslpmax, psxx(ji,jj,jl) ) )
               rswitch = ( 1.0 - MAX( 0._wp, SIGN( 1._wp, -zslpmax) ) ) * tmask(ji,jj,1)   ! Case of empty boxes & Apply mask

               ps0 (ji,jj,jl) = zslpmax
               psx (ji,jj,jl) = zs1new * rswitch
               psxx(ji,jj,jl) = zs2new * rswitch
               psy (ji,jj,jl) = psy (ji,jj,jl) * rswitch
               psyy(ji,jj,jl) = psyy(ji,jj,jl) * rswitch
               psxy(ji,jj,jl) = MIN( zslpmax, MAX( -zslpmax, psxy(ji,jj,jl) ) ) * rswitch
            END DO
         END DO

         ! Calculate fluxes and moments between boxes i<-->i+1
         DO jj = 2, jpjm1   ! Flux from i to i+1 WHEN u GT 0
            DO ji = 1, jpi
               zbet(ji,jj) = MAX( 0._wp, SIGN( 1._wp, put(ji,jj) ) )
               zalf        = MAX( 0._wp, put(ji,jj) ) * pdt / psm(ji,jj,jl)
               zalfq       = zalf * zalf
               zalf1       = 1.0 - zalf
               zalf1q      = zalf1 * zalf1
               !
               zfm (ji,jj) = zalf  *   psm (ji,jj,jl)
               zf0 (ji,jj) = zalf  * ( ps0 (ji,jj,jl) + zalf1 * ( psx(ji,jj,jl) + (zalf1 - zalf) * psxx(ji,jj,jl) ) )
               zfx (ji,jj) = zalfq * ( psx (ji,jj,jl) + 3.0 * zalf1 * psxx(ji,jj,jl) )
               zfxx(ji,jj) = zalf  *   psxx(ji,jj,jl) * zalfq
               zfy (ji,jj) = zalf  * ( psy (ji,jj,jl) + zalf1 * psxy(ji,jj,jl) )
               zfxy(ji,jj) = zalfq *   psxy(ji,jj,jl)
               zfyy(ji,jj) = zalf  *   psyy(ji,jj,jl)

               ! Readjust moments remaining in the box.
               psm (ji,jj,jl) = psm (ji,jj,jl) - zfm(ji,jj)
               ps0 (ji,jj,jl) = ps0 (ji,jj,jl) - zf0(ji,jj)
               psx (ji,jj,jl) = zalf1q * ( psx(ji,jj,jl) - 3.0 * zalf * psxx(ji,jj,jl) )
               psxx(ji,jj,jl) = zalf1 * zalf1q * psxx(ji,jj,jl)
               psy (ji,jj,jl) = psy (ji,jj,jl) - zfy(ji,jj)
               psyy(ji,jj,jl) = psyy(ji,jj,jl) - zfyy(ji,jj)
               psxy(ji,jj,jl) = zalf1q * psxy(ji,jj,jl)
            END DO
         END DO

         DO jj = 2, jpjm1   ! Flux from i+1 to i when u LT 0.
            DO ji = 1, fs_jpim1
               zalf          = MAX( 0._wp, -put(ji,jj) ) * pdt / psm(ji+1,jj,jl)
               zalg  (ji,jj) = zalf
               zalfq         = zalf * zalf
               zalf1         = 1.0 - zalf
               zalg1 (ji,jj) = zalf1
               zalf1q        = zalf1 * zalf1
               zalg1q(ji,jj) = zalf1q
               !
               zfm (ji,jj) = zfm (ji,jj) + zalf  *   psm (ji+1,jj,jl)
               zf0 (ji,jj) = zf0 (ji,jj) + zalf  * ( ps0 (ji+1,jj,jl)   &
                  &        - zalf1 * ( psx(ji+1,jj,jl) - (zalf1 - zalf ) * psxx(ji+1,jj,jl) ) )
               zfx (ji,jj) = zfx (ji,jj) + zalfq * ( psx (ji+1,jj,jl) - 3.0 * zalf1 * psxx(ji+1,jj,jl) )
               zfxx(ji,jj) = zfxx(ji,jj) + zalf  *   psxx(ji+1,jj,jl) * zalfq
               zfy (ji,jj) = zfy (ji,jj) + zalf  * ( psy (ji+1,jj,jl) - zalf1 * psxy(ji+1,jj,jl) )
               zfxy(ji,jj) = zfxy(ji,jj) + zalfq *   psxy(ji+1,jj,jl)
               zfyy(ji,jj) = zfyy(ji,jj) + zalf  *   psyy(ji+1,jj,jl)
            END DO
         END DO

         DO jj = 2, jpjm1   ! Readjust moments remaining in the box.
            DO ji = fs_2, fs_jpim1
               zbt  = zbet(ji-1,jj)
               zbt1 = 1.0 - zbet(ji-1,jj)
               !
               psm (ji,jj,jl) = zbt * psm(ji,jj,jl) + zbt1 * ( psm(ji,jj,jl) - zfm(ji-1,jj) )
               ps0 (ji,jj,jl) = zbt * ps0(ji,jj,jl) + zbt1 * ( ps0(ji,jj,jl) - zf0(ji-1,jj) )
               psx (ji,jj,jl) = zalg1q(ji-1,jj) * ( psx(ji,jj,jl) + 3.0 * zalg(ji-1,jj) * psxx(ji,jj,jl) )
               psxx(ji,jj,jl) = zalg1 (ji-1,jj) * zalg1q(ji-1,jj) * psxx(ji,jj,jl)
               psy (ji,jj,jl) = zbt * psy (ji,jj,jl) + zbt1 * ( psy (ji,jj,jl) - zfy (ji-1,jj) )
               psyy(ji,jj,jl) = zbt * psyy(ji,jj,jl) + zbt1 * ( psyy(ji,jj,jl) - zfyy(ji-1,jj) )
               psxy(ji,jj,jl) = zalg1q(ji-1,jj) * psxy(ji,jj,jl)
            END DO
         END DO

         ! Put the temporary moments into appropriate neighboring boxes.
         DO jj = 2, jpjm1   ! Flux from i to i+1 IF u GT 0.
            DO ji = fs_2, fs_jpim1
               zbt   = zbet(ji-1,jj)
               zbt1  = 1.0 - zbet(ji-1,jj)
               psm(ji,jj,jl) = zbt * ( psm(ji,jj,jl) + zfm(ji-1,jj) ) + zbt1 * psm(ji,jj,jl)
               zalf  = zbt * zfm(ji-1,jj) / psm(ji,jj,jl)
               zalf1 = 1.0 - zalf
               ztemp = zalf * ps0(ji,jj,jl) - zalf1 * zf0(ji-1,jj)
               !
               ps0 (ji,jj,jl) = zbt * ( ps0(ji,jj,jl) + zf0(ji-1,jj) ) + zbt1 * ps0(ji,jj,jl)
               psx (ji,jj,jl) = zbt * ( zalf * zfx(ji-1,jj) + zalf1 * psx(ji,jj,jl) + 3.0 * ztemp ) + zbt1 * psx(ji,jj,jl)
               psxx(ji,jj,jl) = zbt * ( zalf * zalf * zfxx(ji-1,jj) + zalf1 * zalf1 * psxx(ji,jj,jl)                           &
                  &                   + 5.0 * ( zalf * zalf1 * ( psx (ji,jj,jl) - zfx(ji-1,jj) ) - ( zalf1 - zalf ) * ztemp ) ) &
                  &           + zbt1 * psxx(ji,jj,jl)
               psxy(ji,jj,jl) = zbt * ( zalf * zfxy(ji-1,jj) + zalf1 * psxy(ji,jj,jl)            &
                  &                   + 3.0 * (- zalf1*zfy(ji-1,jj) + zalf * psy(ji,jj,jl) ) )   &
                  &           + zbt1 * psxy(ji,jj,jl)
               psy (ji,jj,jl) = zbt * ( psy (ji,jj,jl) + zfy (ji-1,jj) ) + zbt1 * psy (ji,jj,jl)
               psyy(ji,jj,jl) = zbt * ( psyy(ji,jj,jl) + zfyy(ji-1,jj) ) + zbt1 * psyy(ji,jj,jl)
            END DO
         END DO

         DO jj = 2, jpjm1   ! Flux from i+1 to i IF u LT 0.
            DO ji = fs_2, fs_jpim1
               zbt   = zbet(ji,jj)
               zbt1  = 1.0 - zbet(ji,jj)
               psm(ji,jj,jl) = zbt * psm(ji,jj,jl) + zbt1 * ( psm(ji,jj,jl) + zfm(ji,jj) )
               zalf  = zbt1 * zfm(ji,jj) / psm(ji,jj,jl)
               zalf1 = 1.0 - zalf
               ztemp = - zalf * ps0(ji,jj,jl) + zalf1 * zf0(ji,jj)
               !
               ps0 (ji,jj,jl) = zbt * ps0 (ji,jj,jl) + zbt1 * ( ps0(ji,jj,jl) + zf0(ji,jj) )
               psx (ji,jj,jl) = zbt * psx (ji,jj,jl) + zbt1 * ( zalf * zfx(ji,jj) + zalf1 * psx(ji,jj,jl) + 3.0 * ztemp )
               psxx(ji,jj,jl) = zbt * psxx(ji,jj,jl) + zbt1 * ( zalf * zalf * zfxx(ji,jj) + zalf1 * zalf1 * psxx(ji,jj,jl)   &
                  &                                           + 5.0 * ( zalf * zalf1 * ( - psx(ji,jj,jl) + zfx(ji,jj) )      &
                  &                                                   + ( zalf1 - zalf ) * ztemp ) )
               psxy(ji,jj,jl) = zbt * psxy(ji,jj,jl) + zbt1 * ( zalf * zfxy(ji,jj) + zalf1 * psxy(ji,jj,jl)        &
                  &                                           + 3.0 * ( zalf1 * zfy(ji,jj) - zalf * psy(ji,jj,jl) ) )
               psy (ji,jj,jl) = zbt * psy (ji,jj,jl) + zbt1 * ( psy (ji,jj,jl) + zfy (ji,jj) )
               psyy(ji,jj,jl) = zbt * psyy(ji,jj,jl) + zbt1 * ( psyy(ji,jj,jl) + zfyy(ji,jj) )
            END DO
         END DO

      END DO

      END DO

      ! Initialize volumes of boxes (=area if adv_x first called, =psm otherwise)
      psm (:,:) = MAX( pcrh * e1e2t(:,:) + ( 1.0 - pcrh ) * psm(:,:) , epsi20 )

      ! Calculate fluxes and moments between boxes i<-->i+1
      DO jj = 1, jpj   ! Flux from i to i+1 WHEN u GT 0
         DO ji = 1, jpi
            zbet(ji,jj) = MAX( 0._wp, SIGN( 1._wp, put(ji,jj) ) )
            zalf        = MAX( 0._wp, put(ji,jj) ) * zrdt * e2u(ji,jj) / psm(ji,jj)
            zalfq       = zalf * zalf
            zalf1       = 1.0 - zalf
            zalf1q      = zalf1 * zalf1
            !
            zfm (ji,jj) = zalf  *   psm (ji,jj)
            zf0 (ji,jj) = zalf  * ( ps0 (ji,jj) + zalf1 * ( psx(ji,jj) + (zalf1 - zalf) * psxx(ji,jj) ) )
            zfx (ji,jj) = zalfq * ( psx (ji,jj) + 3.0 * zalf1 * psxx(ji,jj) )
            zfxx(ji,jj) = zalf  *   psxx(ji,jj) * zalfq
            zfy (ji,jj) = zalf  * ( psy (ji,jj) + zalf1 * psxy(ji,jj) )
            zfxy(ji,jj) = zalfq *   psxy(ji,jj)
            zfyy(ji,jj) = zalf  *   psyy(ji,jj)

            ! Readjust moments remaining in the box.
            psm (ji,jj) = psm (ji,jj) - zfm(ji,jj)
            ps0 (ji,jj) = ps0 (ji,jj) - zf0(ji,jj)
            psx (ji,jj) = zalf1q * ( psx(ji,jj) - 3.0 * zalf * psxx(ji,jj) )
            psxx(ji,jj) = zalf1 * zalf1q * psxx(ji,jj)
            psy (ji,jj) = psy (ji,jj) - zfy(ji,jj)
            psyy(ji,jj) = psyy(ji,jj) - zfyy(ji,jj)
            psxy(ji,jj) = zalf1q * psxy(ji,jj)
         END DO
      END DO

      DO jj = 1, jpjm1   ! Flux from i+1 to i when u LT 0.
         DO ji = 1, fs_jpim1
            zalf          = MAX( 0._wp, -put(ji,jj) ) * zrdt * e2u(ji,jj) / psm(ji+1,jj)
            zalg  (ji,jj) = zalf
            zalfq         = zalf * zalf
            zalf1         = 1.0 - zalf
            zalg1 (ji,jj) = zalf1
            zalf1q        = zalf1 * zalf1
            zalg1q(ji,jj) = zalf1q
            !
            zfm (ji,jj) = zfm (ji,jj) + zalf  *   psm (ji+1,jj)
            zf0 (ji,jj) = zf0 (ji,jj) + zalf  * ( ps0 (ji+1,jj) - zalf1 * ( psx(ji+1,jj) - (zalf1 - zalf ) * psxx(ji+1,jj) ) )
            zfx (ji,jj) = zfx (ji,jj) + zalfq * ( psx (ji+1,jj) - 3.0 * zalf1 * psxx(ji+1,jj) )
            zfxx(ji,jj) = zfxx(ji,jj) + zalf  *   psxx(ji+1,jj) * zalfq
            zfy (ji,jj) = zfy (ji,jj) + zalf  * ( psy (ji+1,jj) - zalf1 * psxy(ji+1,jj) )
            zfxy(ji,jj) = zfxy(ji,jj) + zalfq *   psxy(ji+1,jj)
            zfyy(ji,jj) = zfyy(ji,jj) + zalf  *   psyy(ji+1,jj)
         END DO
      END DO

      DO jj = 2, jpjm1   ! Readjust moments remaining in the box.
         DO ji = fs_2, fs_jpim1
            zbt  = zbet(ji-1,jj)
            zbt1 = 1.0 - zbet(ji-1,jj)
            !
            psm (ji,jj) = zbt * psm(ji,jj) + zbt1 * ( psm(ji,jj) - zfm(ji-1,jj) )
            ps0 (ji,jj) = zbt * ps0(ji,jj) + zbt1 * ( ps0(ji,jj) - zf0(ji-1,jj) )
            psx (ji,jj) = zalg1q(ji-1,jj) * ( psx(ji,jj) + 3.0 * zalg(ji-1,jj) * psxx(ji,jj) )
            psxx(ji,jj) = zalg1 (ji-1,jj) * zalg1q(ji-1,jj) * psxx(ji,jj)
            psy (ji,jj) = zbt * psy (ji,jj) + zbt1 * ( psy (ji,jj) - zfy (ji-1,jj) )
            psyy(ji,jj) = zbt * psyy(ji,jj) + zbt1 * ( psyy(ji,jj) - zfyy(ji-1,jj) )
            psxy(ji,jj) = zalg1q(ji-1,jj) * psxy(ji,jj)
         END DO
      END DO

      ! Put the temporary moments into appropriate neighboring boxes.
      DO jj = 2, jpjm1   ! Flux from i to i+1 IF u GT 0.
         DO ji = fs_2, fs_jpim1
            zbt   = zbet(ji-1,jj)
            zbt1  = 1.0 - zbet(ji-1,jj)
            psm(ji,jj) = zbt * ( psm(ji,jj) + zfm(ji-1,jj) ) + zbt1 * psm(ji,jj)
            zalf  = zbt * zfm(ji-1,jj) / psm(ji,jj)
            zalf1 = 1.0 - zalf
            ztemp = zalf * ps0(ji,jj) - zalf1 * zf0(ji-1,jj)
            !
            ps0 (ji,jj) = zbt * ( ps0(ji,jj) + zf0(ji-1,jj) ) + zbt1 * ps0(ji,jj)
            psx (ji,jj) = zbt * ( zalf * zfx(ji-1,jj) + zalf1 * psx(ji,jj) + 3.0 * ztemp ) + zbt1 * psx(ji,jj)
            psxx(ji,jj) = zbt * ( zalf * zalf * zfxx(ji-1,jj) + zalf1 * zalf1 * psxx(ji,jj)                            &
               &                + 5.0 * ( zalf * zalf1 * ( psx (ji,jj) - zfx(ji-1,jj) ) - ( zalf1 - zalf ) * ztemp ) ) &
               &        + zbt1 * psxx(ji,jj)
            psxy(ji,jj) = zbt * ( zalf * zfxy(ji-1,jj) + zalf1 * psxy(ji,jj)              &
               &                + 3.0 * (- zalf1*zfy(ji-1,jj) + zalf * psy(ji,jj) ) )     &
               &        + zbt1 * psxy(ji,jj)
            psy (ji,jj) = zbt * ( psy (ji,jj) + zfy (ji-1,jj) ) + zbt1 * psy (ji,jj)
            psyy(ji,jj) = zbt * ( psyy(ji,jj) + zfyy(ji-1,jj) ) + zbt1 * psyy(ji,jj)
         END DO
      END DO

      DO jj = 2, jpjm1   ! Flux from i+1 to i IF u LT 0.
         DO ji = fs_2, fs_jpim1
            zbt   = zbet(ji,jj)
            zbt1  = 1.0 - zbet(ji,jj)
            psm(ji,jj) = zbt * psm(ji,jj) + zbt1 * ( psm(ji,jj) + zfm(ji,jj) )
            zalf  = zbt1 * zfm(ji,jj) / psm(ji,jj)
            zalf1 = 1.0 - zalf
            ztemp = - zalf * ps0(ji,jj) + zalf1 * zf0(ji,jj)
            !
            ps0 (ji,jj) = zbt * ps0 (ji,jj) + zbt1 * ( ps0(ji,jj) + zf0(ji,jj) )
            psx (ji,jj) = zbt * psx (ji,jj) + zbt1 * ( zalf * zfx(ji,jj) + zalf1 * psx(ji,jj) + 3.0 * ztemp )
            psxx(ji,jj) = zbt * psxx(ji,jj) + zbt1 * ( zalf * zalf * zfxx(ji,jj) + zalf1 * zalf1 * psxx(ji,jj)   &
               &                                     + 5.0 * ( zalf * zalf1 * ( - psx(ji,jj) + zfx(ji,jj) )      &
               &                                             + ( zalf1 - zalf ) * ztemp ) )
            psxy(ji,jj) = zbt * psxy(ji,jj) + zbt1 * ( zalf * zfxy(ji,jj) + zalf1 * psxy(ji,jj)       &
               &                                     + 3.0 * ( zalf1 * zfy(ji,jj) - zalf * psy(ji,jj) ) )
            psy (ji,jj) = zbt * psy (ji,jj) + zbt1 * ( psy (ji,jj) + zfy (ji,jj) )
            psyy(ji,jj) = zbt * psyy(ji,jj) + zbt1 * ( psyy(ji,jj) + zfyy(ji,jj) )
         END DO
      END DO

      !-- Lateral boundary conditions
      CALL lbc_lnk_multi( 'icedyn_adv_pra', psm , 'T', 1., ps0 , 'T', 1.     &
         &                                , psx , 'T', -1., psy , 'T', -1.   &   ! caution gradient ==> the sign changes
         &                                , psxx, 'T', 1., psyy, 'T', 1.     &
         &                                , psxy, 'T', 1. )

      IF(ln_ctl) THEN
         CALL prt_ctl(tab2d_1=psm  , clinfo1=' adv_x: psm  :', tab2d_2=ps0 , clinfo2=' ps0  : ')
         CALL prt_ctl(tab2d_1=psx  , clinfo1=' adv_x: psx  :', tab2d_2=psxx, clinfo2=' psxx : ')
         CALL prt_ctl(tab2d_1=psy  , clinfo1=' adv_x: psy  :', tab2d_2=psyy, clinfo2=' psyy : ')
         CALL prt_ctl(tab2d_1=psxy , clinfo1=' adv_x: psxy :')
      ENDIF

      CALL lbc_lnk_multi( 'icedyn_adv_pra', psm(:,:,1:jcat) , 'T', 1., ps0 , 'T', 1.   &
         &                                , psx , 'T', -1., psy , 'T', -1.             &   ! caution gradient ==> the sign changes
         &                                , psxx , 'T', 1., psyy, 'T', 1. , psxy, 'T', 1. )
      !
   END SUBROUTINE adv_x


   SUBROUTINE adv_y( pdt, pvt , pcrh, psm , ps0 ,  &
      &              psx, psxx, psy , psyy, psxy )
      !!---------------------------------------------------------------------
      …
      !!                variable on y axis
      !!---------------------------------------------------------------------
      REAL(wp)                    , INTENT(in   ) ::   pdf              ! reduction factor for the time step
      REAL(wp)                    , INTENT(in   ) ::   pcrh             ! call adv_x then adv_y (=1) or the opposite (=0)
      REAL(wp), DIMENSION(jpi,jpj), INTENT(in   ) ::   pvt              ! j-direction ice velocity at V-point [m/s]
      REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psm              ! area
      REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   ps0              ! field to be advected
      REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psx , psy        ! 1st moments
      REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   psxx, psyy, psxy ! 2nd moments
      REAL(wp)                    , INTENT(in   ) ::   pdt              ! time step
      REAL(wp)                    , INTENT(in   ) ::   pcrh             ! call adv_x then adv_y (=1) or the opposite (=0)
      REAL(wp), DIMENSION(:,:)    , INTENT(in   ) ::   pvt              ! j-direction ice velocity at V-point [m/s]
      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   psm              ! area
      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   ps0              ! field to be advected
      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   psx , psy        ! 1st moments
      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   psxx, psyy, psxy ! 2nd moments
      !!
      INTEGER  ::   ji, jj
      REAL(wp) ::   zs1max, zrdt, zslpmax, ztemp   ! temporary scalars
      INTEGER  ::   ji, jj, jl, jcat               ! dummy loop indices
      REAL(wp) ::   zs1max, zslpmax, ztemp         ! temporary scalars
      REAL(wp) ::   zs1new, zalf , zalfq , zbt     !   -      -
      REAL(wp) ::   zs2new, zalf1, zalf1q, zbt1    !   -      -
      …
      REAL(wp), DIMENSION(jpi,jpj) ::   zalg, zalg1, zalg1q   !   -      -
      !---------------------------------------------------------------------

      ! Limitation of moments.

      zrdt = rdt_ice * pdf   ! If ice drift field is too fast, use an appropriate time step for advection.
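The removed 2-D code rescales the advection time step (`zrdt = rdt_ice * pdf`) when the ice drift is fast, and the caller runs `initad` sub-iterations of length `zusnit`. The idea is simply to split the ice time step into enough equal sub-steps that each one respects an advective CFL constraint. A minimal Python sketch of that sub-stepping logic (the function name, the exact criterion, and the variable names are illustrative, not NEMO's):

```python
import math

def advection_substeps(u_max: float, dx_min: float, dt_ice: float):
    """Return (n, dt_sub): the number of equal sub-steps and their length,
    chosen so each sub-step keeps the advective CFL number
    |u_max| * dt_sub / dx_min at or below 1 (illustrative criterion)."""
    cfl = abs(u_max) * dt_ice / dx_min
    n = max(1, math.ceil(cfl))   # at least one sub-step
    return n, dt_ice / n

# fast drift: the full step would give CFL = 1.8, so two sub-steps are used
n, dt_sub = advection_substeps(u_max=1.0, dx_min=2000.0, dt_ice=3600.0)
```

Each field is then advected `n` times with the reduced step, which is exactly the role played by the `DO jt = 1, initad` loop around the `adv_x`/`adv_y` calls above.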
      DO jj = 1, jpj
         DO ji = 1, jpi
            zslpmax = MAX( 0._wp, ps0(ji,jj) )
            zs1max  = 1.5 * zslpmax
            zs1new  = MIN( zs1max, MAX( -zs1max, psy(ji,jj) ) )
            zs2new  = MIN( ( 2.0 * zslpmax - 0.3334 * ABS( zs1new ) ),   &
               &           MAX( ABS( zs1new )-zslpmax, psyy(ji,jj) ) )
            rswitch = ( 1.0 - MAX( 0._wp, SIGN( 1._wp, -zslpmax) ) ) * tmask(ji,jj,1)   ! Case of empty boxes & Apply mask
            !
            ps0 (ji,jj) = zslpmax
            psx (ji,jj) = psx (ji,jj) * rswitch
            psxx(ji,jj) = psxx(ji,jj) * rswitch
            psy (ji,jj) = zs1new * rswitch
            psyy(ji,jj) = zs2new * rswitch
            psxy(ji,jj) = MIN( zslpmax, MAX( -zslpmax, psxy(ji,jj) ) ) * rswitch
         END DO
      !
      jcat = SIZE( ps0 , 3 )   ! size of input arrays
      !
      DO jl = 1, jcat   ! loop on categories
         !
         ! Limitation of moments.
         DO jj = 1, jpj
            DO ji = fs_2, fs_jpim1
               ! Initialize volumes of boxes (=area if adv_x first called, =psm otherwise)
               psm(ji,jj,jl) = MAX( pcrh * e1e2t(ji,jj) + ( 1.0 - pcrh ) * psm(ji,jj,jl) , epsi20 )
               !
               zslpmax = MAX( 0._wp, ps0(ji,jj,jl) )
               zs1max  = 1.5 * zslpmax
               zs1new  = MIN( zs1max, MAX( -zs1max, psy(ji,jj,jl) ) )
               zs2new  = MIN( ( 2.0 * zslpmax - 0.3334 * ABS( zs1new ) ),   &
                  &           MAX( ABS( zs1new )-zslpmax, psyy(ji,jj,jl) ) )
               rswitch = ( 1.0 - MAX( 0._wp, SIGN( 1._wp, -zslpmax) ) ) * tmask(ji,jj,1)   ! Case of empty boxes & Apply mask
               !
               ps0 (ji,jj,jl) = zslpmax
               psx (ji,jj,jl) = psx (ji,jj,jl) * rswitch
               psxx(ji,jj,jl) = psxx(ji,jj,jl) * rswitch
               psy (ji,jj,jl) = zs1new * rswitch
               psyy(ji,jj,jl) = zs2new * rswitch
               psxy(ji,jj,jl) = MIN( zslpmax, MAX( -zslpmax, psxy(ji,jj,jl) ) ) * rswitch
            END DO
         END DO

         ! Calculate fluxes and moments between boxes j<-->j+1
         DO jj = 1, jpj   ! Flux from j to j+1 WHEN v GT 0
            DO ji = fs_2, fs_jpim1
               zbet(ji,jj) = MAX( 0._wp, SIGN( 1._wp, pvt(ji,jj) ) )
               zalf        = MAX( 0._wp, pvt(ji,jj) ) * pdt / psm(ji,jj,jl)
               zalfq       = zalf * zalf
               zalf1       = 1.0 - zalf
               zalf1q      = zalf1 * zalf1
               !
               zfm (ji,jj) = zalf  *   psm(ji,jj,jl)
               zf0 (ji,jj) = zalf  * ( ps0(ji,jj,jl) + zalf1 * ( psy(ji,jj,jl) + (zalf1-zalf) * psyy(ji,jj,jl) ) )
               zfy (ji,jj) = zalfq * ( psy(ji,jj,jl) + 3.0*zalf1*psyy(ji,jj,jl) )
               zfyy(ji,jj) = zalf  *   zalfq * psyy(ji,jj,jl)
               zfx (ji,jj) = zalf  * ( psx(ji,jj,jl) + zalf1 * psxy(ji,jj,jl) )
               zfxy(ji,jj) = zalfq *   psxy(ji,jj,jl)
               zfxx(ji,jj) = zalf  *   psxx(ji,jj,jl)
               !
               ! Readjust moments remaining in the box.
               psm (ji,jj,jl) = psm (ji,jj,jl) - zfm(ji,jj)
               ps0 (ji,jj,jl) = ps0 (ji,jj,jl) - zf0(ji,jj)
               psy (ji,jj,jl) = zalf1q * ( psy(ji,jj,jl) -3.0 * zalf * psyy(ji,jj,jl) )
               psyy(ji,jj,jl) = zalf1 * zalf1q * psyy(ji,jj,jl)
               psx (ji,jj,jl) = psx (ji,jj,jl) - zfx(ji,jj)
               psxx(ji,jj,jl) = psxx(ji,jj,jl) - zfxx(ji,jj)
               psxy(ji,jj,jl) = zalf1q * psxy(ji,jj,jl)
            END DO
         END DO
         !
         DO jj = 1, jpjm1   ! Flux from j+1 to j when v LT 0.
            DO ji = fs_2, fs_jpim1
               zalf          = MAX( 0._wp, -pvt(ji,jj) ) * pdt / psm(ji,jj+1,jl)
               zalg  (ji,jj) = zalf
               zalfq         = zalf * zalf
               zalf1         = 1.0 - zalf
               zalg1 (ji,jj) = zalf1
               zalf1q        = zalf1 * zalf1
               zalg1q(ji,jj) = zalf1q
               !
               zfm (ji,jj) = zfm (ji,jj) + zalf  *   psm (ji,jj+1,jl)
               zf0 (ji,jj) = zf0 (ji,jj) + zalf  * ( ps0 (ji,jj+1,jl)   &
                  &        - zalf1 * (psy(ji,jj+1,jl) - (zalf1 - zalf ) * psyy(ji,jj+1,jl) ) )
               zfy (ji,jj) = zfy (ji,jj) + zalfq * ( psy (ji,jj+1,jl) - 3.0 * zalf1 * psyy(ji,jj+1,jl) )
               zfyy(ji,jj) = zfyy(ji,jj) + zalf  *   psyy(ji,jj+1,jl) * zalfq
               zfx (ji,jj) = zfx (ji,jj) + zalf  * ( psx (ji,jj+1,jl) - zalf1 * psxy(ji,jj+1,jl) )
               zfxy(ji,jj) = zfxy(ji,jj) + zalfq *   psxy(ji,jj+1,jl)
               zfxx(ji,jj) = zfxx(ji,jj) + zalf  *   psxx(ji,jj+1,jl)
            END DO
         END DO

         ! Readjust moments remaining in the box.
         DO jj = 2, jpjm1
            DO ji = fs_2, fs_jpim1
               zbt  = zbet(ji,jj-1)
               zbt1 = ( 1.0 - zbet(ji,jj-1) )
               !
               psm (ji,jj,jl) = zbt * psm(ji,jj,jl) + zbt1 * ( psm(ji,jj,jl) - zfm(ji,jj-1) )
               ps0 (ji,jj,jl) = zbt * ps0(ji,jj,jl) + zbt1 * ( ps0(ji,jj,jl) - zf0(ji,jj-1) )
               psy (ji,jj,jl) = zalg1q(ji,jj-1) * ( psy(ji,jj,jl) + 3.0 * zalg(ji,jj-1) * psyy(ji,jj,jl) )
               psyy(ji,jj,jl) = zalg1 (ji,jj-1) * zalg1q(ji,jj-1) * psyy(ji,jj,jl)
               psx (ji,jj,jl) = zbt * psx (ji,jj,jl) + zbt1 * ( psx (ji,jj,jl) - zfx (ji,jj-1) )
               psxx(ji,jj,jl) = zbt * psxx(ji,jj,jl) + zbt1 * ( psxx(ji,jj,jl) - zfxx(ji,jj-1) )
               psxy(ji,jj,jl) = zalg1q(ji,jj-1) * psxy(ji,jj,jl)
            END DO
         END DO

         ! Put the temporary moments into appropriate neighboring boxes.
         DO jj = 2, jpjm1   ! Flux from j to j+1 IF v GT 0.
            DO ji = fs_2, fs_jpim1
               zbt   = zbet(ji,jj-1)
               zbt1  = 1.0 - zbet(ji,jj-1)
               psm(ji,jj,jl) = zbt * ( psm(ji,jj,jl) + zfm(ji,jj-1) ) + zbt1 * psm(ji,jj,jl)
               zalf  = zbt * zfm(ji,jj-1) / psm(ji,jj,jl)
               zalf1 = 1.0 - zalf
               ztemp = zalf * ps0(ji,jj,jl) - zalf1 * zf0(ji,jj-1)
               !
               ps0(ji,jj,jl) = zbt * ( ps0(ji,jj,jl) + zf0(ji,jj-1) ) + zbt1 * ps0(ji,jj,jl)
               psy(ji,jj,jl) = zbt * ( zalf * zfy(ji,jj-1) + zalf1 * psy(ji,jj,jl) + 3.0 * ztemp )   &
                  &          + zbt1 * psy(ji,jj,jl)
               psyy(ji,jj,jl) = zbt * ( zalf * zalf * zfyy(ji,jj-1) + zalf1 * zalf1 * psyy(ji,jj,jl)                           &
                  &                   + 5.0 * ( zalf * zalf1 * ( psy(ji,jj,jl) - zfy(ji,jj-1) ) - ( zalf1 - zalf ) * ztemp ) ) &
                  &           + zbt1 * psyy(ji,jj,jl)
               psxy(ji,jj,jl) = zbt * ( zalf * zfxy(ji,jj-1) + zalf1 * psxy(ji,jj,jl)              &
                  &                   + 3.0 * (- zalf1 * zfx(ji,jj-1) + zalf * psx(ji,jj,jl) ) )   &
                  &           + zbt1 * psxy(ji,jj,jl)
               psx (ji,jj,jl) = zbt * ( psx (ji,jj,jl) + zfx (ji,jj-1) ) + zbt1 * psx (ji,jj,jl)
               psxx(ji,jj,jl) = zbt * ( psxx(ji,jj,jl) + zfxx(ji,jj-1) ) + zbt1 * psxx(ji,jj,jl)
            END DO
         END DO

         DO jj = 2, jpjm1   ! Flux from j+1 to j IF v LT 0.
            DO ji = fs_2, fs_jpim1
               zbt   = zbet(ji,jj)
               zbt1  = 1.0 - zbet(ji,jj)
               psm(ji,jj,jl) = zbt * psm(ji,jj,jl) + zbt1 * ( psm(ji,jj,jl) + zfm(ji,jj) )
               zalf  = zbt1 * zfm(ji,jj) / psm(ji,jj,jl)
               zalf1 = 1.0 - zalf
               ztemp = - zalf * ps0(ji,jj,jl) + zalf1 * zf0(ji,jj)
               !
               ps0 (ji,jj,jl) = zbt * ps0 (ji,jj,jl) + zbt1 * ( ps0(ji,jj,jl) + zf0(ji,jj) )
               psy (ji,jj,jl) = zbt * psy (ji,jj,jl) + zbt1 * ( zalf * zfy(ji,jj) + zalf1 * psy(ji,jj,jl) + 3.0 * ztemp )
               psyy(ji,jj,jl) = zbt * psyy(ji,jj,jl) + zbt1 * ( zalf * zalf * zfyy(ji,jj) + zalf1 * zalf1 * psyy(ji,jj,jl)   &
                  &                                           + 5.0 * ( zalf * zalf1 * ( - psy(ji,jj,jl) + zfy(ji,jj) )      &
                  &                                                   + ( zalf1 - zalf ) * ztemp ) )
               psxy(ji,jj,jl) = zbt * psxy(ji,jj,jl) + zbt1 * ( zalf * zfxy(ji,jj) + zalf1 * psxy(ji,jj,jl)        &
                  &                                           + 3.0 * ( zalf1 * zfx(ji,jj) - zalf * psx(ji,jj,jl) ) )
               psx (ji,jj,jl) = zbt * psx (ji,jj,jl) + zbt1 * ( psx (ji,jj,jl) + zfx (ji,jj) )
               psxx(ji,jj,jl) = zbt * psxx(ji,jj,jl) + zbt1 * ( psxx(ji,jj,jl) + zfxx(ji,jj) )
            END DO
         END DO

      END DO

      END DO

      ! Initialize volumes of boxes (=area if adv_x first called, =psm otherwise)
      psm(:,:) = MAX( pcrh * e1e2t(:,:) + ( 1.0 - pcrh ) * psm(:,:) , epsi20 )

      ! Calculate fluxes and moments between boxes j<-->j+1
      DO jj = 1, jpj   ! Flux from j to j+1 WHEN v GT 0
         DO ji = 1, jpi
            zbet(ji,jj) = MAX( 0._wp, SIGN( 1._wp, pvt(ji,jj) ) )
            zalf        = MAX( 0._wp, pvt(ji,jj) ) * zrdt * e1v(ji,jj) / psm(ji,jj)
            zalfq       = zalf * zalf
            zalf1       = 1.0 - zalf
            zalf1q      = zalf1 * zalf1
            !
            zfm (ji,jj) = zalf  *   psm(ji,jj)
            zf0 (ji,jj) = zalf  * ( ps0(ji,jj) + zalf1 * ( psy(ji,jj) + (zalf1-zalf) * psyy(ji,jj) ) )
            zfy (ji,jj) = zalfq * ( psy(ji,jj) + 3.0*zalf1*psyy(ji,jj) )
            zfyy(ji,jj) = zalf  *   zalfq * psyy(ji,jj)
            zfx (ji,jj) = zalf  * ( psx(ji,jj) + zalf1 * psxy(ji,jj) )
            zfxy(ji,jj) = zalfq *   psxy(ji,jj)
            zfxx(ji,jj) = zalf  *   psxx(ji,jj)
            !
            ! Readjust moments remaining in the box.
            psm (ji,jj) = psm (ji,jj) - zfm(ji,jj)
            ps0 (ji,jj) = ps0 (ji,jj) - zf0(ji,jj)
            psy (ji,jj) = zalf1q * ( psy(ji,jj) -3.0 * zalf * psyy(ji,jj) )
            psyy(ji,jj) = zalf1 * zalf1q * psyy(ji,jj)
            psx (ji,jj) = psx (ji,jj) - zfx(ji,jj)
            psxx(ji,jj) = psxx(ji,jj) - zfxx(ji,jj)
            psxy(ji,jj) = zalf1q * psxy(ji,jj)

      !-- Lateral boundary conditions
      CALL lbc_lnk_multi( 'icedyn_adv_pra', psm(:,:,1:jcat) , 'T', 1., ps0 , 'T', 1.   &
         &                                , psx , 'T', -1., psy , 'T', -1.             &   ! caution gradient ==> the sign changes
         &                                , psxx , 'T', 1., psyy, 'T', 1. , psxy, 'T', 1. )
      !
   END SUBROUTINE adv_y


   SUBROUTINE Hsnow( pdt, pv_i, pv_s, pa_i, pa_ip, pe_s )
      !!-------------------------------------------------------------------
      !!                  ***  ROUTINE Hsnow  ***
      !!
      !! ** Purpose : 1- Check snow load after advection
      !!              2- Correct pond concentration to avoid a_ip > a_i
      !!
      !! ** Method :  If the snow load makes the snow-ice interface sink below the ocean surface,
      !!              then put the snow excess in the ocean
      !!
      !! ** Notes :   This correction is crucial because of the subsequent call to routine icecor,
      !!              which imposes a minimum ice thickness (rn_himin). This imposed minimum can
      !!              artificially make the snow very thick (if the concentration decreases drastically).
      !!              This behavior has been seen in Ultimate-Macho and presumably can also occur with Prather.
      !!-------------------------------------------------------------------
      REAL(wp)                    , INTENT(in   ) ::   pdt   ! tracer time-step
      REAL(wp), DIMENSION(:,:,:)  , INTENT(inout) ::   pv_i, pv_s, pa_i, pa_ip
      REAL(wp), DIMENSION(:,:,:,:), INTENT(inout) ::   pe_s
      !
      INTEGER  ::   ji, jj, jl   ! dummy loop indices
      REAL(wp) ::   z1_dt, zvs_excess, zfra
      !!-------------------------------------------------------------------
      !
      z1_dt = 1._wp / pdt
      !
      ! -- check snow load -- !
      DO jl = 1, jpl
         DO jj = 1, jpj
            DO ji = 1, jpi
               IF ( pv_i(ji,jj,jl) > 0._wp ) THEN
                  !
                  zvs_excess = MAX( 0._wp, pv_s(ji,jj,jl) - pv_i(ji,jj,jl) * (rau0-rhoi) * r1_rhos )
                  !
                  IF( zvs_excess > 0._wp ) THEN   ! snow-ice interface sinks below the ocean surface
                     ! put snow excess in the ocean
                     zfra = ( pv_s(ji,jj,jl) - zvs_excess ) / MAX( pv_s(ji,jj,jl), epsi20 )
                     wfx_res(ji,jj) = wfx_res(ji,jj) + zvs_excess * rhos * z1_dt
                     hfx_res(ji,jj) = hfx_res(ji,jj) - SUM( pe_s(ji,jj,1:nlay_s,jl) ) * ( 1._wp - zfra ) * z1_dt   ! W.m-2 <0
                     ! correct snow volume and heat content
                     pe_s(ji,jj,1:nlay_s,jl) = pe_s(ji,jj,1:nlay_s,jl) * zfra
                     pv_s(ji,jj,jl)          = pv_s(ji,jj,jl) - zvs_excess
                  ENDIF
                  !
               ENDIF
            END DO
         END DO
      END DO
      !
      DO jj = 1, jpjm1   ! Flux from j+1 to j when v LT 0.
         DO ji = 1, jpi
            zalf          = ( MAX(0._wp, -pvt(ji,jj) ) * zrdt * e1v(ji,jj) ) / psm(ji,jj+1)
            zalg  (ji,jj) = zalf
            zalfq         = zalf * zalf
            zalf1         = 1.0 - zalf
            zalg1 (ji,jj) = zalf1
            zalf1q        = zalf1 * zalf1
            zalg1q(ji,jj) = zalf1q
            !
            zfm (ji,jj) = zfm (ji,jj) + zalf  *   psm (ji,jj+1)
            zf0 (ji,jj) = zf0 (ji,jj) + zalf  * ( ps0 (ji,jj+1) - zalf1 * (psy(ji,jj+1) - (zalf1 - zalf ) * psyy(ji,jj+1) ) )
            zfy (ji,jj) = zfy (ji,jj) + zalfq * ( psy (ji,jj+1) - 3.0 * zalf1 * psyy(ji,jj+1) )
            zfyy(ji,jj) = zfyy(ji,jj) + zalf  *   psyy(ji,jj+1) * zalfq
            zfx (ji,jj) = zfx (ji,jj) + zalf  * ( psx (ji,jj+1) - zalf1 * psxy(ji,jj+1) )
            zfxy(ji,jj) = zfxy(ji,jj) + zalfq *   psxy(ji,jj+1)
            zfxx(ji,jj) = zfxx(ji,jj) + zalf  *   psxx(ji,jj+1)
         END DO
      END DO

      ! Readjust moments remaining in the box.
      DO jj = 2, jpj
         DO ji = 1, jpi
            zbt  = zbet(ji,jj-1)
            zbt1 = ( 1.0 - zbet(ji,jj-1) )
            !
546 psm (ji,jj) = zbt * psm(ji,jj) + zbt1 * ( psm(ji,jj) - zfm(ji,jj-1) ) 547 ps0 (ji,jj) = zbt * ps0(ji,jj) + zbt1 * ( ps0(ji,jj) - zf0(ji,jj-1) ) 548 psy (ji,jj) = zalg1q(ji,jj-1) * ( psy(ji,jj) + 3.0 * zalg(ji,jj-1) * psyy(ji,jj) ) 549 psyy(ji,jj) = zalg1 (ji,jj-1) * zalg1q(ji,jj-1) * psyy(ji,jj) 550 psx (ji,jj) = zbt * psx (ji,jj) + zbt1 * ( psx (ji,jj) - zfx (ji,jj-1) ) 551 psxx(ji,jj) = zbt * psxx(ji,jj) + zbt1 * ( psxx(ji,jj) - zfxx(ji,jj-1) ) 552 psxy(ji,jj) = zalg1q(ji,jj-1) * psxy(ji,jj) 553 END DO 554 END DO 555 556 ! Put the temporary moments into appropriate neighboring boxes. 557 DO jj = 2, jpjm1 ! Flux from j to j+1 IF v GT 0. 558 DO ji = 1, jpi 559 zbt = zbet(ji,jj-1) 560 zbt1 = ( 1.0 - zbet(ji,jj-1) ) 561 psm(ji,jj) = zbt * ( psm(ji,jj) + zfm(ji,jj-1) ) + zbt1 * psm(ji,jj) 562 zalf = zbt * zfm(ji,jj-1) / psm(ji,jj) 563 zalf1 = 1.0 - zalf 564 ztemp = zalf * ps0(ji,jj) - zalf1 * zf0(ji,jj-1) 565 ! 566 ps0(ji,jj) = zbt * ( ps0(ji,jj) + zf0(ji,jj-1) ) + zbt1 * ps0(ji,jj) 567 psy(ji,jj) = zbt * ( zalf * zfy(ji,jj-1) + zalf1 * psy(ji,jj) + 3.0 * ztemp ) & 568 & + zbt1 * psy(ji,jj) 569 psyy(ji,jj) = zbt * ( zalf * zalf * zfyy(ji,jj-1) + zalf1 * zalf1 * psyy(ji,jj) & 570 & + 5.0 * ( zalf * zalf1 * ( psy(ji,jj) - zfy(ji,jj-1) ) - ( zalf1 - zalf ) * ztemp ) ) & 571 & + zbt1 * psyy(ji,jj) 572 psxy(ji,jj) = zbt * ( zalf * zfxy(ji,jj-1) + zalf1 * psxy(ji,jj) & 573 & + 3.0 * (- zalf1 * zfx(ji,jj-1) + zalf * psx(ji,jj) ) ) & 574 & + zbt1 * psxy(ji,jj) 575 psx (ji,jj) = zbt * ( psx (ji,jj) + zfx (ji,jj-1) ) + zbt1 * psx (ji,jj) 576 psxx(ji,jj) = zbt * ( psxx(ji,jj) + zfxx(ji,jj-1) ) + zbt1 * psxx(ji,jj) 577 END DO 578 END DO 579 580 DO jj = 2, jpjm1 ! Flux from j+1 to j IF v LT 0. 
581 DO ji = 1, jpi 582 zbt = zbet(ji,jj) 583 zbt1 = ( 1.0 - zbet(ji,jj) ) 584 psm(ji,jj) = zbt * psm(ji,jj) + zbt1 * ( psm(ji,jj) + zfm(ji,jj) ) 585 zalf = zbt1 * zfm(ji,jj) / psm(ji,jj) 586 zalf1 = 1.0 - zalf 587 ztemp = - zalf * ps0 (ji,jj) + zalf1 * zf0(ji,jj) 588 ps0 (ji,jj) = zbt * ps0 (ji,jj) + zbt1 * ( ps0(ji,jj) + zf0(ji,jj) ) 589 psy (ji,jj) = zbt * psy (ji,jj) + zbt1 * ( zalf * zfy(ji,jj) + zalf1 * psy(ji,jj) + 3.0 * ztemp ) 590 psyy(ji,jj) = zbt * psyy(ji,jj) + zbt1 * ( zalf * zalf * zfyy(ji,jj) + zalf1 * zalf1 * psyy(ji,jj) & 591 & + 5.0 *( zalf *zalf1 *( -psy(ji,jj) + zfy(ji,jj) ) & 592 & + ( zalf1 - zalf ) * ztemp ) ) 593 psxy(ji,jj) = zbt * psxy(ji,jj) + zbt1 * ( zalf * zfxy(ji,jj) + zalf1 * psxy(ji,jj) & 594 & + 3.0 * ( zalf1 * zfx(ji,jj) - zalf * psx(ji,jj) ) ) 595 psx (ji,jj) = zbt * psx (ji,jj) + zbt1 * ( psx (ji,jj) + zfx (ji,jj) ) 596 psxx(ji,jj) = zbt * psxx(ji,jj) + zbt1 * ( psxx(ji,jj) + zfxx(ji,jj) ) 597 END DO 598 END DO 599 600 !-- Lateral boundary conditions 601 CALL lbc_lnk_multi( 'icedyn_adv_pra', psm , 'T', 1., ps0 , 'T', 1. & 602 & , psx , 'T', -1., psy , 'T', -1. & ! caution gradient ==> the sign changes 603 & , psxx, 'T', 1., psyy, 'T', 1. & 604 & , psxy, 'T', 1. ) 605 606 IF(ln_ctl) THEN 607 CALL prt_ctl(tab2d_1=psm , clinfo1=' adv_y: psm :', tab2d_2=ps0 , clinfo2=' ps0 : ') 608 CALL prt_ctl(tab2d_1=psx , clinfo1=' adv_y: psx :', tab2d_2=psxx, clinfo2=' psxx : ') 609 CALL prt_ctl(tab2d_1=psy , clinfo1=' adv_y: psy :', tab2d_2=psyy, clinfo2=' psyy : ') 610 CALL prt_ctl(tab2d_1=psxy , clinfo1=' adv_y: psxy :') 611 ENDIF 612 ! 613 END SUBROUTINE adv_y 640 !-- correct pond concentration to avoid a_ip > a_i -- ! 641 WHERE( pa_ip(:,:,:) > pa_i(:,:,:) ) pa_ip(:,:,:) = pa_i(:,:,:) 642 ! 643 END SUBROUTINE Hsnow 614 644 615 645 … … 624 654 ! 625 655 ! 
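Reviewer note: the adv_y loops above (on both sides of the diff) implement Prather's conservation-of-moments transport. Each cell carries its area psm, tracer content ps0, and first/second moments of a sub-grid distribution (psx, psy, psxx, psyy, psxy); an upwind fraction of each moment is fluxed to the neighbour, the donor's remaining moments are rescaled, and the received moments are recombined. A minimal 1-D Python sketch (illustrative helper name, not NEMO code; zeroth and first moments only, uniform positive Courant fraction, periodic wrap) of the flux / readjust / deposit sequence:

```python
def prather_1d_step(m, s0, sx, alf):
    """One upwind step of Prather's moment-conserving advection in 1-D.

    m   : geometric cell mass (grid-cell area in NEMO, psm)
    s0  : tracer content per cell (zeroth moment, ps0)
    sx  : first moment of the sub-grid tracer distribution (psx)
    alf : fraction of each cell advected to its right neighbour, 0 < alf < 1
    Only zeroth and first moments are kept here; NEMO's adv_pra also
    carries second moments (psxx, psyy, psxy).
    """
    n = len(m)
    alf1 = 1.0 - alf
    # outgoing fluxes from each cell to its right neighbour
    fm = [alf * m[i] for i in range(n)]
    f0 = [alf * (s0[i] + alf1 * sx[i]) for i in range(n)]
    fx = [alf * alf * sx[i] for i in range(n)]
    # readjust moments remaining in the donor cells
    m2  = [m[i] - fm[i] for i in range(n)]
    s02 = [s0[i] - f0[i] for i in range(n)]
    sx2 = [alf1 * alf1 * sx[i] for i in range(n)]
    # put the fluxed moments into the right neighbour (periodic wrap)
    for i in range(n):
        j = (i + 1) % n
        m2[j] += fm[i]
        a = fm[i] / m2[j]                        # mass fraction just received
        ztemp = a * s02[j] - (1.0 - a) * f0[i]   # cross term for the slope
        s02[j] += f0[i]
        sx2[j] = a * fx[i] + (1.0 - a) * sx2[j] + 3.0 * ztemp
    return m2, s02, sx2
```

By construction every unit of mass and tracer content removed from a donor reappears in a receiver, so the global sums are conserved exactly; this is the property the downstream area-sum checks in the ridging code rely on.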
!* allocate prather fields 626 ALLOCATE( sxopw(jpi,jpj) , syopw(jpi,jpj) , sxxopw(jpi,jpj) , syyopw(jpi,jpj) , sxyopw(jpi,jpj) , & 627 & sxice(jpi,jpj,jpl) , syice(jpi,jpj,jpl) , sxxice(jpi,jpj,jpl) , syyice(jpi,jpj,jpl) , sxyice(jpi,jpj,jpl) , & 656 ALLOCATE( sxice(jpi,jpj,jpl) , syice(jpi,jpj,jpl) , sxxice(jpi,jpj,jpl) , syyice(jpi,jpj,jpl) , sxyice(jpi,jpj,jpl) , & 628 657 & sxsn (jpi,jpj,jpl) , sysn (jpi,jpj,jpl) , sxxsn (jpi,jpj,jpl) , syysn (jpi,jpj,jpl) , sxysn (jpi,jpj,jpl) , & 629 658 & sxa (jpi,jpj,jpl) , sya (jpi,jpj,jpl) , sxxa (jpi,jpj,jpl) , syya (jpi,jpj,jpl) , sxya (jpi,jpj,jpl) , & … … 652 681 !! *** ROUTINE adv_pra_rst *** 653 682 !! 654 !! ** Purpose : Read or write RHGfile in restart file683 !! ** Purpose : Read or write file in restart file 655 684 !! 656 685 !! ** Method : use of IOM library … … 671 700 ! !==========================! 672 701 ! 673 IF( ln_rstart ) THEN ; id1 = iom_varid( numrir, 'sx opw' , ldstop = .FALSE. ) ! file exist: id1>0702 IF( ln_rstart ) THEN ; id1 = iom_varid( numrir, 'sxice' , ldstop = .FALSE. ) ! file exist: id1>0 674 703 ELSE ; id1 = 0 ! no restart: id1=0 675 704 ENDIF … … 689 718 CALL iom_get( numrir, jpdom_autoglo, 'syysn' , syysn ) 690 719 CALL iom_get( numrir, jpdom_autoglo, 'sxysn' , sxysn ) 691 ! ! lead fraction720 ! ! ice concentration 692 721 CALL iom_get( numrir, jpdom_autoglo, 'sxa' , sxa ) 693 722 CALL iom_get( numrir, jpdom_autoglo, 'sya' , sya ) … … 707 736 CALL iom_get( numrir, jpdom_autoglo, 'syyage', syyage ) 708 737 CALL iom_get( numrir, jpdom_autoglo, 'sxyage', sxyage ) 709 ! ! open water in sea ice710 CALL iom_get( numrir, jpdom_autoglo, 'sxopw' , sxopw )711 CALL iom_get( numrir, jpdom_autoglo, 'syopw' , syopw )712 CALL iom_get( numrir, jpdom_autoglo, 'sxxopw', sxxopw )713 CALL iom_get( numrir, jpdom_autoglo, 'syyopw', syyopw )714 CALL iom_get( numrir, jpdom_autoglo, 'sxyopw', sxyopw )715 738 ! ! 
snow layers heat content 716 739 DO jk = 1, nlay_s … … 752 775 sxice = 0._wp ; syice = 0._wp ; sxxice = 0._wp ; syyice = 0._wp ; sxyice = 0._wp ! ice thickness 753 776 sxsn = 0._wp ; sysn = 0._wp ; sxxsn = 0._wp ; syysn = 0._wp ; sxysn = 0._wp ! snow thickness 754 sxa = 0._wp ; sya = 0._wp ; sxxa = 0._wp ; syya = 0._wp ; sxya = 0._wp ! lead fraction777 sxa = 0._wp ; sya = 0._wp ; sxxa = 0._wp ; syya = 0._wp ; sxya = 0._wp ! ice concentration 755 778 sxsal = 0._wp ; sysal = 0._wp ; sxxsal = 0._wp ; syysal = 0._wp ; sxysal = 0._wp ! ice salinity 756 779 sxage = 0._wp ; syage = 0._wp ; sxxage = 0._wp ; syyage = 0._wp ; sxyage = 0._wp ! ice age 757 sxopw = 0._wp ; syopw = 0._wp ; sxxopw = 0._wp ; syyopw = 0._wp ; sxyopw = 0._wp ! open water in sea ice758 780 sxc0 = 0._wp ; syc0 = 0._wp ; sxxc0 = 0._wp ; syyc0 = 0._wp ; sxyc0 = 0._wp ! snow layers heat content 759 781 sxe = 0._wp ; sye = 0._wp ; sxxe = 0._wp ; syye = 0._wp ; sxye = 0._wp ! ice layers heat content … … 786 808 CALL iom_rstput( iter, nitrst, numriw, 'syysn' , syysn ) 787 809 CALL iom_rstput( iter, nitrst, numriw, 'sxysn' , sxysn ) 788 ! ! lead fraction810 ! ! ice concentration 789 811 CALL iom_rstput( iter, nitrst, numriw, 'sxa' , sxa ) 790 812 CALL iom_rstput( iter, nitrst, numriw, 'sya' , sya ) … … 804 826 CALL iom_rstput( iter, nitrst, numriw, 'syyage', syyage ) 805 827 CALL iom_rstput( iter, nitrst, numriw, 'sxyage', sxyage ) 806 ! ! open water in sea ice807 CALL iom_rstput( iter, nitrst, numriw, 'sxopw' , sxopw )808 CALL iom_rstput( iter, nitrst, numriw, 'syopw' , syopw )809 CALL iom_rstput( iter, nitrst, numriw, 'sxxopw', sxxopw )810 CALL iom_rstput( iter, nitrst, numriw, 'syyopw', syyopw )811 CALL iom_rstput( iter, nitrst, numriw, 'sxyopw', sxyopw )812 828 ! ! snow layers heat content 813 829 DO jk = 1, nlay_s -
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/icedyn_adv_umx.F90
r10945 r12165 83 83 REAL(wp), DIMENSION(:,:,:) , INTENT(inout) :: poa_i ! age content 84 84 REAL(wp), DIMENSION(:,:,:) , INTENT(inout) :: pa_i ! ice concentration 85 REAL(wp), DIMENSION(:,:,:) , INTENT(inout) :: pa_ip ! melt pond fraction85 REAL(wp), DIMENSION(:,:,:) , INTENT(inout) :: pa_ip ! melt pond concentration 86 86 REAL(wp), DIMENSION(:,:,:) , INTENT(inout) :: pv_ip ! melt pond volume 87 87 REAL(wp), DIMENSION(:,:,:,:), INTENT(inout) :: pe_s ! snw heat content … … 319 319 ! 320 320 !== Ice age ==! 321 IF( iom_use('iceage') .OR. iom_use('iceage_cat') ) THEN 322 zamsk = 1._wp 323 CALL adv_umx( zamsk, kn_umx, jt, kt, zdt, zudy , zvdx , zu_cat, zv_cat, zcu_box, zcv_box, & 324 & poa_i, poa_i ) 325 ENDIF 321 zamsk = 1._wp 322 CALL adv_umx( zamsk, kn_umx, jt, kt, zdt, zudy , zvdx , zu_cat, zv_cat, zcu_box, zcv_box, & 323 & poa_i, poa_i ) 326 324 ! 327 325 !== melt ponds ==! 328 326 IF ( ln_pnd_H12 ) THEN 329 ! fraction327 ! concentration 330 328 zamsk = 1._wp 331 329 CALL adv_umx( zamsk, kn_umx, jt, kt, zdt, zudy , zvdx , zu_cat , zv_cat , zcu_box, zcv_box, & … … 1529 1527 !! 3- check whether snow load deplets the snow-ice interface below sea level$ 1530 1528 !! and reduce it by sending the excess in the ocean 1531 !! 4- correct pond fraction to avoid a_ip > a_i1529 !! 4- correct pond concentration to avoid a_ip > a_i 1532 1530 !! 1533 1531 !! ** input : Max thickness of the surrounding 9-points … … 1599 1597 END DO 1600 1598 END DO 1601 ! !-- correct pond fraction to avoid a_ip > a_i1599 ! !-- correct pond concentration to avoid a_ip > a_i 1602 1600 WHERE( pa_ip(:,:,:) > pa_i(:,:,:) ) pa_ip(:,:,:) = pa_i(:,:,:) 1603 1601 ! -
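Reviewer note: the Hsnow / Hbig corrections above both rest on the same Archimedes bound: floating ice of volume v_i can carry at most v_i*(rau0-rhoi)/rhos of snow before the snow-ice interface sinks below the sea surface, and any excess is sent to the ocean. A scalar Python sketch (illustrative density values standing in for NEMO's rau0, rhoi, rhos constants):

```python
# Illustrative densities (kg m-3): sea water, sea ice, snow
RAU0, RHOI, RHOS = 1026.0, 917.0, 330.0

def snow_load_correction(v_i, v_s):
    """Clip the snow volume to what the ice can carry afloat.

    Returns (corrected snow volume, excess sent to the ocean).
    In NEMO the excess also carries its heat content into the
    residual fluxes wfx_res / hfx_res.
    """
    vs_excess = max(0.0, v_s - v_i * (RAU0 - RHOI) / RHOS)
    return v_s - vs_excess, vs_excess
```

The companion pond fix is even simpler: the melt-pond concentration is clamped so that a_ip never exceeds the ice concentration a_i of its category.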
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/icedyn_rdgrft.F90
r11587 r12165 86 86 !! *** ROUTINE ice_dyn_rdgrft_alloc *** 87 87 !!------------------------------------------------------------------- 88 ALLOCATE( closing_net(jpij) , opning(jpij) , closing_gross(jpij),&89 & apartf(jpij,0:jpl) , hrmin(jpij,jpl), hraft(jpij,jpl) , aridge(jpij,jpl),&90 & hrmax (jpij,jpl), hi_hrdg(jpij,jpl) , araft (jpij,jpl),&88 ALLOCATE( closing_net(jpij) , opning(jpij) , closing_gross(jpij) , & 89 & apartf(jpij,0:jpl) , hrmin (jpij,jpl) , hraft(jpij,jpl) , aridge(jpij,jpl), & 90 & hrmax (jpij,jpl) , hi_hrdg(jpij,jpl) , araft(jpij,jpl) , & 91 91 & ze_i_2d(jpij,nlay_i,jpl), ze_s_2d(jpij,nlay_s,jpl), STAT=ice_dyn_rdgrft_alloc ) 92 92 … … 137 137 REAL(wp) :: zfac ! local scalar 138 138 INTEGER , DIMENSION(jpij) :: iptidx ! compute ridge/raft or not 139 REAL(wp), DIMENSION(jpij) :: zdivu_adv ! divu as implied by transport scheme (1/s)140 139 REAL(wp), DIMENSION(jpij) :: zdivu, zdelt ! 1D divu_i & delta_i 141 140 ! … … 175 174 176 175 ! just needed here 177 CALL tab_2d_1d( npti, nptidx(1:npti), zdivu (1:npti) , divu_i )178 176 CALL tab_2d_1d( npti, nptidx(1:npti), zdelt (1:npti) , delta_i ) 179 177 ! needed here and in the iteration loop 178 CALL tab_2d_1d( npti, nptidx(1:npti), zdivu (1:npti) , divu_i) ! zdivu is used as a work array here (no change in divu_i) 180 179 CALL tab_3d_2d( npti, nptidx(1:npti), a_i_2d (1:npti,1:jpl), a_i ) 181 180 CALL tab_3d_2d( npti, nptidx(1:npti), v_i_2d (1:npti,1:jpl), v_i ) … … 187 186 closing_net(ji) = rn_csrdg * 0.5_wp * ( zdelt(ji) - ABS( zdivu(ji) ) ) - MIN( zdivu(ji), 0._wp ) 188 187 ! 189 ! divergence given by the advection scheme 190 ! (which may not be equal to divu as computed from the velocity field) 191 IF ( ln_adv_Pra ) THEN 192 zdivu_adv(ji) = ( 1._wp - ato_i_1d(ji) - SUM( a_i_2d(ji,:) ) ) * r1_rdtice 193 ELSEIF( ln_adv_UMx ) THEN 194 zdivu_adv(ji) = zdivu(ji) 195 ENDIF 196 ! 197 IF( zdivu_adv(ji) < 0._wp ) closing_net(ji) = MAX( closing_net(ji), -zdivu_adv(ji) ) ! 
make sure the closing rate is large enough 198 ! ! to give asum = 1.0 after ridging 188 IF( zdivu(ji) < 0._wp ) closing_net(ji) = MAX( closing_net(ji), -zdivu(ji) ) ! make sure the closing rate is large enough 189 ! ! to give asum = 1.0 after ridging 199 190 ! Opening rate (non-negative) that will give asum = 1.0 after ridging. 200 opning(ji) = closing_net(ji) + zdivu _adv(ji)191 opning(ji) = closing_net(ji) + zdivu(ji) 201 192 END DO 202 193 ! … … 215 206 ato_i_1d (ipti) = ato_i_1d (ji) 216 207 closing_net(ipti) = closing_net(ji) 217 zdivu _adv (ipti) = zdivu_adv(ji)208 zdivu (ipti) = zdivu (ji) 218 209 opning (ipti) = opning (ji) 219 210 ENDIF … … 259 250 ELSE 260 251 iterate_ridging = 1 261 zdivu _adv(ji) = zfac * r1_rdtice262 closing_net(ji) = MAX( 0._wp, -zdivu _adv(ji) )263 opning (ji) = MAX( 0._wp, zdivu _adv(ji) )252 zdivu (ji) = zfac * r1_rdtice 253 closing_net(ji) = MAX( 0._wp, -zdivu(ji) ) 254 opning (ji) = MAX( 0._wp, zdivu(ji) ) 264 255 ENDIF 265 256 END DO … … 309 300 310 301 ! ! Ice thickness needed for rafting 311 WHERE( pa_i(1:npti,:) > epsi 20 ) ; zhi(1:npti,:) = pv_i(1:npti,:) / pa_i(1:npti,:)302 WHERE( pa_i(1:npti,:) > epsi10 ) ; zhi(1:npti,:) = pv_i(1:npti,:) / pa_i(1:npti,:) 312 303 ELSEWHERE ; zhi(1:npti,:) = 0._wp 313 304 END WHERE … … 328 319 zasum(1:npti) = pato_i(1:npti) + SUM( pa_i(1:npti,:), dim=2 ) 329 320 ! 330 WHERE( zasum(1:npti) > epsi 20 ) ; z1_asum(1:npti) = 1._wp / zasum(1:npti)321 WHERE( zasum(1:npti) > epsi10 ) ; z1_asum(1:npti) = 1._wp / zasum(1:npti) 331 322 ELSEWHERE ; z1_asum(1:npti) = 0._wp 332 323 END WHERE … … 454 445 ! Based on the ITD of ridging and ridged ice, convert the net closing rate to a gross closing rate. 455 446 ! 
NOTE: 0 < aksum <= 1 456 WHERE( zaksum(1:npti) > epsi 20 ) ; closing_gross(1:npti) = pclosing_net(1:npti) / zaksum(1:npti)447 WHERE( zaksum(1:npti) > epsi10 ) ; closing_gross(1:npti) = pclosing_net(1:npti) / zaksum(1:npti) 457 448 ELSEWHERE ; closing_gross(1:npti) = 0._wp 458 449 END WHERE … … 537 528 IF( apartf(ji,jl1) > 0._wp .AND. closing_gross(ji) > 0._wp ) THEN ! only if ice is ridging 538 529 539 IF( a_i_2d(ji,jl1) > epsi 20 ) THEN ; z1_ai(ji) = 1._wp / a_i_2d(ji,jl1)530 IF( a_i_2d(ji,jl1) > epsi10 ) THEN ; z1_ai(ji) = 1._wp / a_i_2d(ji,jl1) 540 531 ELSE ; z1_ai(ji) = 0._wp 541 532 ENDIF … … 595 586 ! virtual salt flux to keep salinity constant 596 587 IF( nn_icesal /= 2 ) THEN 597 sirdg2(ji) = sirdg2(ji) - vsw * ( sss_1d(ji) - s_i_1d(ji) ) 588 sirdg2(ji) = sirdg2(ji) - vsw * ( sss_1d(ji) - s_i_1d(ji) ) ! ridge salinity = s_i 598 589 sfx_bri_1d(ji) = sfx_bri_1d(ji) + sss_1d(ji) * vsw * rhoi * r1_rdtice & ! put back sss_m into the ocean 599 590 & - s_i_1d(ji) * vsw * rhoi * r1_rdtice ! and get s_i from the ocean -
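Reviewer note: the change above replaces the scheme-dependent zdivu_adv with the plain divergence divu_i in the ridging closing/opening budget. The logic being simplified is: a net closing rate is built from shear plus convergence, floored so that ridging can bring the total area back to one, and later normalised by aksum into a gross rate. A scalar Python sketch (illustrative helpers; rn_csrdg defaulted to 0.5 here purely for the example):

```python
def closing_rates(delta, divu, csrdg=0.5):
    """Net closing and opening rates (s-1) from the deformation fields.

    Shear contribution csrdg * 0.5 * (delta - |divu|) plus the full
    convergence, floored by -divu so that asum can return to 1 after
    ridging; the opening rate is then fixed by area conservation.
    """
    closing_net = csrdg * 0.5 * (delta - abs(divu)) - min(divu, 0.0)
    if divu < 0.0:
        closing_net = max(closing_net, -divu)  # enough closing for asum = 1
    opening = closing_net + divu               # non-negative by construction
    return closing_net, opening

def gross_closing(closing_net, aksum, eps=1e-10):
    """Net -> gross closing rate; aksum is the net area removed per unit
    area of ridging/rafting ice, with 0 < aksum <= 1."""
    return closing_net / aksum if aksum > eps else 0.0
```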
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/ICE/iceitd.F90
r11586 r12165 211 211 CALL itd_glinear( zhb0(1:npti) , zhb1(1:npti) , h_ib_1d(1:npti) , a_i_1d(1:npti) , & ! in 212 212 & g0 (1:npti,1), g1 (1:npti,1), hL (1:npti,1), hR (1:npti,1) ) ! out 213 213 ! 214 214 ! Area lost due to melting of thin ice 215 215 DO ji = 1, npti … … 218 218 ! 219 219 zdh0 = h_i_1d(ji) - h_ib_1d(ji) 220 IF( zdh0 < 0.0 ) THEN ! remove area from category 1220 IF( zdh0 < 0.0 ) THEN ! remove area from category 1 221 221 zdh0 = MIN( -zdh0, hi_max(1) ) 222 222 !Integrate g(1) from 0 to dh0 to estimate area melted … … 226 226 zx1 = zetamax 227 227 zx2 = 0.5 * zetamax * zetamax 228 zda0 = g1(ji,1) * zx2 + g0(ji,1) * zx1 228 zda0 = g1(ji,1) * zx2 + g0(ji,1) * zx1 ! ice area removed 229 229 zdamax = a_i_1d(ji) * (1.0 - h_i_1d(ji) / h_ib_1d(ji) ) ! Constrain new thickness <= h_i 230 zda0 = MIN( zda0, zdamax ) ! ice area lost due to melting 231 ! of thin ice (zdamax > 0) 230 zda0 = MIN( zda0, zdamax ) ! ice area lost due to melting of thin ice (zdamax > 0) 232 231 ! Remove area, conserving volume 233 232 h_i_1d(ji) = h_i_1d(ji) * a_i_1d(ji) / ( a_i_1d(ji) - zda0 ) … … 349 348 DO ji = 1, npti 350 349 ! 351 IF( paice(ji) > epsi10 .AND. phice(ji) > 0._wp) THEN350 IF( paice(ji) > epsi10 .AND. phice(ji) > epsi10 ) THEN 352 351 ! 353 352 ! Initialize hL and hR -
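Reviewer note: the thin-ice melting block above removes area while conserving volume, which is why `h_i_1d = h_i_1d * a_i_1d / (a_i_1d - zda0)` slightly thickens the remaining ice. A one-function Python sketch (hypothetical helper name) of that bookkeeping:

```python
def remove_melted_area(a_i, h_i, da):
    """Remove area `da` of melted thin ice while conserving ice volume.

    v = a * h is held fixed, so the surviving ice gets thicker:
    h_new = h * a / (a - da).  `da` must stay below a_i (NEMO caps it
    with zdamax so the new thickness never exceeds the pre-melt one).
    """
    a_new = a_i - da
    h_new = h_i * a_i / a_new
    return a_new, h_new
```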
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/CRS/README.rst
r10279 r12165 2 2 On line biogeochemistry coarsening 3 3 ********************************** 4 5 .. todo:: 6 7 4 8 5 9 .. contents:: … … 63 67 ! 1, MAX of KZ 64 68 ! 2, MIN of KZ 65 ! 3, 10^(MEAN(LOG(KZ)) 66 ! 4, MEDIANE of KZ 69 ! 3, 10^(MEAN(LOG(KZ)) 70 ! 4, MEDIANE of KZ 67 71 ln_crs_wn = .false. ! wn coarsened (T) or computed using horizontal divergence ( F ) 68 72 ! ! … … 73 77 the north-fold lateral boundary condition (ORCA025, ORCA12, ORCA36, ...). 74 78 - ``nn_msh_crs = 1`` will activate the generation of the coarsened grid meshmask. 75 - ``nn_crs_kz`` is the operator to coarsen the vertical mixing coefficient. 79 - ``nn_crs_kz`` is the operator to coarsen the vertical mixing coefficient. 76 80 - ``ln_crs_wn`` 77 81 … … 80 84 - when ``key_vvl`` is not activated, 81 85 82 - coarsened vertical velocities are computed using horizontal divergence (``ln_crs_wn = .false.``) 86 - coarsened vertical velocities are computed using horizontal divergence (``ln_crs_wn = .false.``) 83 87 - or coarsened vertical velocities are computed with an average operator (``ln_crs_wn = .true.``) 84 88 - ``ln_crs_top = .true.``: should be activated to run BCG model in coarsened space; … … 97 101 98 102 In the [attachment:iodef.xml iodef.xml] file, a "nemo" context is defined and 99 some variable defined in [attachment:file_def.xml file_def.xml] are writted on the ocean-dynamic grid. 103 some variable defined in [attachment:file_def.xml file_def.xml] are writted on the ocean-dynamic grid. 100 104 To write variables on the coarsened grid, and in particular the passive tracers, 101 105 a "nemo_crs" context should be defined in [attachment:iodef.xml iodef.xml] and … … 111 115 interpolated `on-the-fly <http://forge.ipsl.jussieu.fr/nemo/wiki/Users/SetupNewConfiguration/Weight-creator>`_. 112 116 Example of namelist for PISCES : 113 117 114 118 .. code-block:: fortran 115 119 … … 134 138 rn_trfac(14) = 1.0e-06 ! - - - - 135 139 rn_trfac(23) = 7.6e-06 ! - - - - 136 140 137 141 cn_dir = './' ! 
root directory for the location of the data files 138 142 -
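Reviewer note: the `nn_crs_kz` options quoted in the CRS namelist above select how the fine-grid vertical mixing coefficient is reduced to one coarse-cell value. A Python sketch of the listed operators (option 0, the volume-weighted mean, is omitted for brevity; this mirrors the namelist documentation, not the NEMO implementation):

```python
import math

def coarsen_kz(kz, nn_crs_kz):
    """Coarsen a list of fine-grid KZ values following nn_crs_kz:
    1 -> MAX, 2 -> MIN, 3 -> 10^mean(log10 KZ) (geometric mean),
    4 -> median."""
    if nn_crs_kz == 1:
        return max(kz)
    if nn_crs_kz == 2:
        return min(kz)
    if nn_crs_kz == 3:
        return 10.0 ** (sum(math.log10(k) for k in kz) / len(kz))
    if nn_crs_kz == 4:
        s = sorted(kz)
        mid = len(s) // 2
        return s[mid] if len(s) % 2 else 0.5 * (s[mid - 1] + s[mid])
    raise ValueError("unknown nn_crs_kz option")
```

The geometric mean (option 3) is the natural choice for a quantity like KZ that varies over several orders of magnitude.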
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/DYN/dynnxt.F90
r10425 r12165 175 175 IF( neuler == 0 .AND. kt == nit000 ) THEN !* Euler at first time-step: only swap 176 176 DO jk = 1, jpkm1 177 ub(:,:,jk) = un(:,:,jk) ! ub <-- un 178 vb(:,:,jk) = vn(:,:,jk) 177 179 un(:,:,jk) = ua(:,:,jk) ! un <-- ua 178 180 vn(:,:,jk) = va(:,:,jk) -
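Reviewer note: the dynnxt fix above adds `ub <-- un` / `vb <-- vn` ahead of the existing `un <-- ua` swap for the first (Euler) step. The assignment order is the whole point: the "before" fields must be refreshed from "now" before "now" is overwritten by "after". A tiny Python sketch with in-place array writes (hypothetical helper; lists stand in for the 3-D fields):

```python
def euler_first_step(ub, un, ua):
    """Euler start of a leapfrog scheme, mutating the fields in place.

    Reversing the two assignments would silently leave ub == ua,
    corrupting the next leapfrog step.
    """
    ub[:] = un   # ub <-- un : save 'now' as the new 'before'
    un[:] = ua   # un <-- ua : promote 'after' to 'now'
```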
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/FLO/flodom.F90
r11413 r12165 433 433 IF( ABS(dlx) > 1.0_wp ) dlx = 1.0_wp 434 434 ! 435 dld = ATAN( DSQRT( 1._wp * ( 1._wp-dlx )/( 1._wp+dlx ) )) * 222.24_wp / dls435 dld = ATAN(SQRT( 1._wp * ( 1._wp-dlx )/( 1._wp+dlx ) )) * 222.24_wp / dls 436 436 flo_dstnce = dld * 1000._wp 437 437 ! -
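Reviewer note: the flodom change above only swaps the double-precision-specific DSQRT for the generic SQRT intrinsic; the distance formula itself is unchanged. It is the half-angle identity atan(sqrt((1-x)/(1+x))) = acos(x)/2, which flo_dstnce uses to recover the great-circle angle from the cosine `dlx`. A Python check of the identity (illustrative helper name):

```python
import math

def central_angle(cos_d):
    """Great-circle central angle from its cosine via the half-angle
    form 2 * atan(sqrt((1 - x) / (1 + x))), valid for -1 < x <= 1.
    Equivalent to acos(cos_d), but numerically gentler near x = 1."""
    return 2.0 * math.atan(math.sqrt((1.0 - cos_d) / (1.0 + cos_d)))
```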
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/FLO/flowri.F90
r11413 r12165 221 221 clname=TRIM(clname)//".nc" 222 222 223 CALL fliocrfd( clname , (/ 'ntraj' , 't' /), (/ jpnfl , -1/) , numflo )223 CALL fliocrfd( clname , (/'ntraj' , ' t' /), (/ jpnfl , -1/) , numflo ) 224 224 225 225 CALL fliodefv( numflo, 'traj_lon' , (/1,2/), v_t=flio_r8, long_name="Longitude" , units="degrees_east" ) -
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/LBC/mppini.F90
r11586 r12165 538 538 9401 FORMAT(' ' ,20(' ',i3,' ') ) 539 539 9402 FORMAT(' ',i3,' * ',20(i3,' x',i3,' * ') ) 540 9404 FORMAT(' * ' ,20(' ',i3,' * ') )540 9404 FORMAT(' * ' ,20(' ' ,i4,' * ') ) 541 541 ENDIF 542 542 -
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OCE/LDF/ldfdyn.F90
r11348 r12165 315 315 DO jj = 1, jpj ! Set local gridscale values 316 316 DO ji = 1, jpi 317 esqt(ji,jj) = ( e1e2t(ji,jj) / ( e1t(ji,jj) + e2t(ji,jj) ) )**2318 esqf(ji,jj) = ( e1e2f(ji,jj) / ( e1f(ji,jj) + e2f(ji,jj) ) )**2317 esqt(ji,jj) = ( 2._wp * e1e2t(ji,jj) / ( e1t(ji,jj) + e2t(ji,jj) ) )**2 318 esqf(ji,jj) = ( 2._wp * e1e2f(ji,jj) / ( e1f(ji,jj) + e2f(ji,jj) ) )**2 319 319 END DO 320 320 END DO -
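Reviewer note: the ldfdyn fix above inserts the factor 2 that turns `e1e2 / (e1 + e2)` into the harmonic mean of the two scale factors (recall e1e2 = e1*e2 in NEMO). Without it, an isotropic cell of side L gets a squared grid scale of (L/2)**2 instead of L**2, quartering the Smagorinsky-like viscosity. A one-line Python check:

```python
def gridscale_sq(e1, e2):
    """Squared local grid scale: the harmonic mean 2*e1*e2/(e1+e2),
    squared.  Reduces to L**2 when e1 = e2 = L."""
    return (2.0 * e1 * e2 / (e1 + e2)) ** 2
```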
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/OFF/nemogcm.F90
r11348 r12165 114 114 #else 115 115 CALL dta_dyn ( istp ) ! Interpolation of the dynamical fields 116 #endif 117 CALL trc_stp ( istp ) ! time-stepping 118 #if ! defined key_sed_off 116 119 IF( .NOT.ln_linssh ) CALL dta_dyn_swp( istp ) ! swap of sea surface height and vertical scale factors 117 120 #endif 118 CALL trc_stp ( istp ) ! time-stepping119 121 CALL stp_ctl ( istp, indic ) ! Time loop: control and print 120 122 istp = istp + 1 -
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/TOP/README.rst
r10549 r12165 3 3 *************** 4 4 5 .. todo:: 6 7 8 5 9 .. contents:: 6 :local: 7 8 TOP (Tracers in the Ocean Paradigm) is the NEMO hardwired interface toward biogeochemical models and 9 provide the physical constraints/boundaries for oceanic tracers. 10 It consists of a modular framework to handle multiple ocean tracers, including also a variety of built-in modules. 10 :local: 11 12 TOP (Tracers in the Ocean Paradigm) is the NEMO hardwired interface toward 13 biogeochemical models and provide the physical constraints/boundaries for oceanic tracers. 14 It consists of a modular framework to handle multiple ocean tracers, 15 including also a variety of built-in modules. 11 16 12 17 This component of the NEMO framework allows one to exploit available modules (see below) and 13 18 further develop a range of applications, spanning from the implementation of a dye passive tracer to 14 19 evaluate dispersion processes (by means of MY_TRC), track water masses age (AGE module), 15 assess the ocean interior penetration of persistent chemical compounds (e.g., gases like CFC or even PCBs), 16 up to the full set of equations involving marine biogeochemical cycles. 20 assess the ocean interior penetration of persistent chemical compounds 21 (e.g., gases like CFC or even PCBs), up to the full set of equations involving 22 marine biogeochemical cycles. 17 23 18 24 Structure 19 25 ========= 20 26 21 TOP interface has the following location in the source code ``./src/MBG/`` and27 TOP interface has the following location in the source code :file:`./src/TOP` and 22 28 the following modules are available: 23 29 24 ``TRP`` 25 Interface to NEMO physical core for computing tracers transport 26 27 ``CFC`` 28 Inert carbon tracers (CFC11,CFC12,SF6) 29 30 ``C14`` 31 Radiocarbon passive tracer 32 33 ``AGE`` 34 Water age tracking 35 36 ``MY_TRC`` 37 Template for creation of new modules and external BGC models coupling 38 39 ``PISCES`` 40 Built in BGC model. 
41 See [https://www.geosci-model-dev.net/8/2465/2015/gmd-8-2465-2015-discussion.html Aumont et al. (2015)] for 42 a thorough description. | 43 44 The usage of TOP is activated i) by including in the configuration definition the component ``MBG`` and 45 ii) by adding the macro ``key_top`` in the configuration CPP file 46 (see for more details [http://forge.ipsl.jussieu.fr/nemo/wiki/Users "Learn more about the model"]). 30 :file:`TRP` 31 Interface to NEMO physical core for computing tracers transport 32 33 :file:`CFC` 34 Inert carbon tracers (CFC11,CFC12,SF6) 35 36 :file:`C14` 37 Radiocarbon passive tracer 38 39 :file:`AGE` 40 Water age tracking 41 42 :file:`MY_TRC` 43 Template for creation of new modules and external BGC models coupling 44 45 :file:`PISCES` 46 Built in BGC model. See :cite:`gmd-8-2465-2015` for a thorough description. 47 48 The usage of TOP is activated 49 *i)* by including in the configuration definition the component ``TOP`` and 50 *ii)* by adding the macro ``key_top`` in the configuration CPP file 51 (see for more details :forge:`"Learn more about the model" <wiki/Users>`). 47 52 48 53 As an example, the user can refer to already available configurations in the code, … 51 56 (see also Section 4). 52 57 53 Note that, since version 4.0, TOP interface core functionalities are activated by means of logical keys and 58 Note that, since version 4.0, 59 TOP interface core functionalities are activated by means of logical keys and 54 60 all submodule preprocessing macros from previous versions were removed.
55 61 … … 57 63 58 64 ``key_iomput`` 59 65 use XIOS I/O 60 66 61 67 ``key_agrif`` 62 68 enable AGRIF coupling 63 69 64 70 ``key_trdtrc`` & ``key_trdmxl_trc`` 65 71 trend computation for tracers 66 72 67 73 Synthetic Workflow 68 74 ================== 69 75 70 A synthetic description of the TOP interface workflow is given below to summarize the steps involved in 71 the computation of biogeochemical and physical trends and their time integration and outputs, 76 A synthetic description of the TOP interface workflow is given below to 77 summarize the steps involved in the computation of biogeochemical and physical trends and 78 their time integration and outputs, 72 79 by reporting also the principal Fortran subroutine herein involved. 73 80 74 **Model initialization (OPA_SRC/nemogcm.F90)** 75 76 call to trc_init (trcini.F90) 77 78 ↳ call trc_nam (trcnam.F90) to initialize TOP tracers and run setting 79 80 ↳ call trc_ini_sms, to initialize each submodule 81 82 ↳ call trc_ini_trp, to initialize transport for tracers 83 84 ↳ call trc_ice_ini, to initialize tracers in seaice 85 86 ↳ call trc_ini_state, read passive tracers from a restart or input data 87 88 ↳ call trc_sub_ini, setup substepping if {{{nn_dttrc /= 1}}} 89 90 **Time marching procedure (OPA_SRC/stp.F90)** 91 92 call to trc_stp.F90 (trcstp.F90) 93 94 ↳ call trc_sub_stp, averaging physical variables for sub-stepping 95 96 ↳ call trc_wri, call XIOS for output of data 97 98 ↳ call trc_sms, compute BGC trends for each submodule 99 100 ↳ call trc_sms_my_trc, includes also surface and coastal BCs trends 101 102 ↳ call trc_trp (TRP/trctrp.F90), compute physical trends 103 104 ↳ call trc_sbc, get trend due to surface concentration/dilution 105 106 ↳ call trc_adv, compute tracers advection 107 108 ↳ call to trc_ldf, compute tracers lateral diffusion 109 110 ↳ call to trc_zdf, vertical mixing and after tracer fields 111 112 ↳ call to trc_nxt, tracer fields at next time step. 
Lateral Boundary Conditions are solved in here. 113 114 ↳ call to trc_rad, Correct artificial negative concentrations 115 116 ↳ call trc_rst_wri, output tracers restart files 81 Model initialization (:file:`./src/OCE/nemogcm.F90`) 82 ---------------------------------------------------- 83 84 Call to ``trc_init`` subroutine (:file:`./src/TOP/trcini.F90`) to initialize TOP. 85 86 .. literalinclude:: ../../../src/TOP/trcini.F90 87 :language: fortran 88 :lines: 41-86 89 :emphasize-lines: 21,30-32,38-40 90 :caption: ``trc_init`` subroutine 91 92 Time marching procedure (:file:`./src/OCE/step.F90`) 93 ---------------------------------------------------- 94 95 Call to ``trc_stp`` subroutine (:file:`./src/TOP/trcstp.F90`) to compute/update passive tracers. 96 97 .. literalinclude:: ../../../src/TOP/trcstp.F90 98 :language: fortran 99 :lines: 46-125 100 :emphasize-lines: 42,55-57 101 :caption: ``trc_stp`` subroutine 102 103 BGC trends computation for each submodule (:file:`./src/TOP/trcsms.F90`) 104 ------------------------------------------------------------------------ 105 106 .. literalinclude:: ../../../src/TOP/trcsms.F90 107 :language: fortran 108 :lines: 21 109 :caption: :file:`trcsms` snippet 110 111 Physical trends computation (:file:`./src/TOP/TRP/trctrp.F90`) 112 -------------------------------------------------------------- 113 114 .. literalinclude:: ../../../src/TOP/TRP/trctrp.F90 115 :language: fortran 116 :lines: 46-95 117 :emphasize-lines: 17,21,29,33-35 118 :caption: ``trc_trp`` subroutine 117 119 118 120 Namelists walkthrough 119 121 ===================== 120 122 121 namelist_top 122 ------------ 123 124 Here below are listed the features/options of the TOP interface accessible through the namelist_top_ref and 125 modifiable by means of namelist_top_cfg (as for NEMO physical ones). 126 127 Note that ## is used to refer to a number in an array field. 
123 :file:`namelist_top` 124 -------------------- 125 126 Here below are listed the features/options of the TOP interface accessible through 127 the :file:`namelist_top_ref` and modifiable by means of :file:`namelist_top_cfg` 128 (as for NEMO physical ones). 129 130 Note that ``##`` is used to refer to a number in an array field. 128 131 129 132 .. literalinclude:: ../../namelists/namtrc_run 133 :language: fortran 130 134 131 135 .. literalinclude:: ../../namelists/namtrc 136 :language: fortran 132 137 133 138 .. literalinclude:: ../../namelists/namtrc_dta 139 :language: fortran 134 140 135 141 .. literalinclude:: ../../namelists/namtrc_adv 142 :language: fortran 136 143 137 144 .. literalinclude:: ../../namelists/namtrc_ldf 145 :language: fortran 138 146 139 147 .. literalinclude:: ../../namelists/namtrc_rad 148 :language: fortran 140 149 141 150 .. literalinclude:: ../../namelists/namtrc_snk 151 :language: fortran 142 152 143 153 .. literalinclude:: ../../namelists/namtrc_dmp 154 :language: fortran 144 155 145 156 .. literalinclude:: ../../namelists/namtrc_ice 157 :language: fortran 146 158 147 159 .. literalinclude:: ../../namelists/namtrc_trd 160 :language: fortran 148 161 149 162 .. literalinclude:: ../../namelists/namtrc_bc 163 :language: fortran 150 164 151 165 .. literalinclude:: ../../namelists/namtrc_bdy 166 :language: fortran 152 167 153 168 .. literalinclude:: ../../namelists/namage 154 155 Two main types of data structure are used within TOP interface to initialize tracer properties (1) and 169 :language: fortran 170 171 Two main types of data structure are used within TOP interface 172 to initialize tracer properties (1) and 156 173 to provide related initial and boundary conditions (2). 157 174 158 **1. TOP tracers initialization**: sn_tracer (namtrc) 175 1. 
TOP tracers initialization: ``sn_tracer`` (``&namtrc``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Beside providing names and metadata for tracers,
the use of initial (``sn_tracer%llinit``) and
boundary (``sn_tracer%llsbc``, ``sn_tracer%llcbc``, ``sn_tracer%llobc``) conditions is also defined here.

In the following, an example of the full structure definition is given for
two idealized tracers, both with initial conditions given,
while the first has only surface boundary forcing and
the second both surface and coastal forcings:

.. code-block:: fortran

   !             !  name  !     title of the field     ! units ! initial data !  sbc   !   cbc   !   obc   !
   sn_tracer(1) = 'TRC1'  , 'Tracer 1 Concentration '  , ' - ' ,    .true.    , .true. , .false. , .true.
   sn_tracer(2) = 'TRC2'  , 'Tracer 2 Concentration '  , ' - ' ,    .true.    , .true. , .true.  , .false.

As the number of tracers in BGC models is steadily growing, …

.. code-block:: fortran

   !             !  name  !     title of the field     ! units ! initial data !
   sn_tracer(1) = 'TRC1'  , 'Tracer 1 Concentration '  , ' - ' ,    .true.
   sn_tracer(2) = 'TRC2'  , 'Tracer 2 Concentration '  , ' - ' ,    .true.
   ! sbc
   sn_tracer(1)%llsbc = .true.
   sn_tracer(2)%llsbc = .true.
   ! cbc
   sn_tracer(2)%llcbc = .true.

The data structure is internally initialized by the code with dummy names and
all initialization/forcing logical fields set to ``.false.``.

2. Structures to read input initial and boundary conditions: ``&namtrc_dta`` (``sn_trcdta``), ``&namtrc_bc`` (``sn_trcsbc`` / ``sn_trccbc`` / ``sn_trcobc``)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The overall data structure (Fortran type) is based on the general one defined for the NEMO core in
the SBC component
(see details on input data specification in the ``SBC`` chapter of the :doc:`Reference Manual <cite>`).

Input fields are prescribed within ``&namtrc_dta`` (with the ``sn_trcdta`` structure),
while boundary conditions are applied to the model by means of ``&namtrc_bc``,
with dedicated structure fields for surface (``sn_trcsbc``), riverine (``sn_trccbc``), and
lateral open (``sn_trcobc``) boundaries.

The following example illustrates the data structure in the case of an initial condition for
a single tracer contained in the file named :file:`tracer_1_data.nc`
(``.nc`` is implicitly assumed in the namelist filename),
with a doubled initial value, and located in the :file:`usr/work/model/inputdata` folder:

.. code-block:: fortran

   !             !   file name     ! frequency (hours) ! variable ! time interp. ! clim  ! 'yearly'/ ! weights  ! rotation ! land/sea mask !
   !             !                 !  (if <0  months)  !   name   !  (logical)   ! (T/F) ! 'monthly' ! filename ! pairing  !   filename    !
   sn_trcdta(1) = 'tracer_1_data'  ,        -12        ,  'TRC1'  ,   .false.    , .true., 'yearly'  ,    ''    ,    ''    ,      ''
   rf_trfac(1)  = 2.0
   cn_dir       = 'usr/work/model/inputdata/'

Note that the lateral open boundary conditions are applied on
the segments defined for the physical core of NEMO
(see the ``BDY`` description in the :doc:`Reference Manual <cite>`).

:file:`namelist_trc`
--------------------

Below is the description of :file:`namelist_trc_ref` used to handle the carbon tracers modules,
namely CFC and C14.

.. literalinclude:: ../../../cfgs/SHARED/namelist_trc_ref
   :language: fortran
   :lines: 7,17,26,34
   :caption: :file:`namelist_trc_ref` snippet

``MY_TRC`` interface for coupling external BGC models
=====================================================

The generalized interface is pivoted on the MY_TRC module, which contains template files to
build the coupling between NEMO and any external BGC model.

The call to MY_TRC is activated by setting ``ln_my_trc = .true.`` (in ``&namtrc``).

The following 6 Fortran files are available in MY_TRC, with the specific purposes described below.

:file:`par_my_trc.F90`
   This module allows the definition of additional arrays and public variables to
   be used within the MY_TRC interface.

:file:`trcini_my_trc.F90`
   Here user-defined namelists are initialized, together with the call to
   the external BGC model initialization procedures that populate the general tracer arrays
   (``trn`` and ``trb``).
   Support arrays related to system metrics that could be needed by the BGC model are also
   likely to be defined here.

:file:`trcnam_my_trc.F90`
   This routine is called at the beginning of ``trcini_my_trc`` and
   should contain the initialization of additional namelists for the BGC model or user-defined code.

:file:`trcsms_my_trc.F90`
   This routine performs the call to boundary conditions, and its main purpose is to
   contain the source-minus-sinks terms due to the biogeochemical processes of the external model.
   Be aware that lateral boundary conditions are applied in the ``trcnxt`` routine.

   .. warning::
      The routines to compute the light penetration along the water column and
      the tracer vertical sinking should be defined/called in here,
      as generalized modules are still missing in the code.

:file:`trcice_my_trc.F90`
   Here it is possible to prescribe the tracer concentrations in the sea-ice that
   will be used as boundary conditions when ice melting occurs (``nn_ice_tr = 1`` in ``&namtrc_ice``).
   See e.g. the corresponding PISCES subroutine.

:file:`trcwri_my_trc.F90`
   This routine performs the output of the model tracers (only those defined in ``&namtrc``) using
   the IOM module (see the chapter “Output and Diagnostics” in the :doc:`Reference Manual <cite>`).
   It is possible to place here the output of additional variables produced by the model,
   if not done elsewhere in the code, using a call to ``iom_put``.

Coupling an external BGC model using the NEMO framework
=======================================================

The coupling with an external BGC model through the NEMO compilation framework can be achieved in
different ways according to the degree of coding complexity of the biogeochemical model:
e.g., the whole code may consist of a single file,
or it may have multiple modules and interfaces spread across several subfolders.

Beside the 6 core files of the MY_TRC module, let's assume an external BGC model named *MYBGC*,
constituted by a rather essential coding structure, likely a few Fortran files.
The new coupled configuration name is *NEMO_MYBGC*.
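As an illustration of the kind of code such a modified MY_TRC routine hosts, here is a minimal sketch of a source-minus-sinks routine in the spirit of :file:`trcsms_my_trc.F90`. The linear decay term and the constant ``zdecay`` are purely illustrative assumptions, not part of the NEMO template; only the arrays ``trn``/``tra`` and the dimensions ``jptra``/``jpkm1`` come from the TOP component.

.. code-block:: fortran

   SUBROUTINE trc_sms_my_trc( kt )
      !! Hypothetical sketch only: apply a linear decay as the
      !! source-minus-sinks term of every passive tracer; a real BGC
      !! model would compute these trends from its own processes.
      INTEGER, INTENT(in) ::   kt                  ! ocean time-step index
      INTEGER             ::   jk, jn              ! loop indices
      REAL(wp), PARAMETER ::   zdecay = 1.e-7_wp   ! illustrative decay rate [s-1]
      !
      DO jn = 1, jptra                             ! loop over passive tracers
         DO jk = 1, jpkm1
            ! accumulate the decay trend into the tendency array tra
            tra(:,:,jk,jn) = tra(:,:,jk,jn) - zdecay * trn(:,:,jk,jn)
         END DO
      END DO
   END SUBROUTINE trc_sms_my_trc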
The best solution is to have all files (the modified ``MY_TRC`` routines and the BGC model ones)
placed in a unique folder with root ``MYBGCPATH``, and
to use the makenemo external readdressing of the ``MY_SRC`` folder.

The coupled configuration listed in :file:`work_cfgs.txt` will look like

::

   NEMO_MYBGC OCE TOP

and the related ``cpp_MYBGC.fcm`` content will be

.. code-block:: perl

   bld::tool::fppkeys key_iomput key_mpp_mpi key_top

The compilation with :file:`makenemo` will then be executed through the following syntax

.. code-block:: console

   $ makenemo -n 'NEMO_MYBGC' -m '<arch_my_machine>' -j 8 -e '<MYBGCPATH>'

The makenemo option ``-e`` was introduced to readdress at compilation time
the standard MY_SRC folder (usually found in NEMO configurations) with a user-defined external one.

The compilation of a more articulated BGC model code & infrastructure,
like in the case of BFM (|BFM man|_), requires some additional features.

As before, let's assume a coupled configuration named *NEMO_MYBGC*,
but in this case the MYBGC model root is the :file:`MYBGC` path, which contains
4 different subfolders: three for biogeochemistry,
named :file:`initialization`, :file:`pelagic`, and :file:`benthic`,
and a separate one named :file:`nemo_coupling` including the modified ``MY_SRC`` routines.
The latter folder, containing the modified NEMO coupling interface, will still be linked using
the makenemo ``-e`` option.

In order to include the BGC model subfolders in the compilation of the NEMO code,
it will be necessary to extend the configuration :file:`cpp_NEMO_MYBGC.fcm` file to
include the specific paths of the :file:`MYBGC` folders, as in the following example

.. code-block:: perl

   bld::tool::fppkeys key_iomput key_mpp_mpi key_top

   src::MYBGC::initialization         <MYBGCPATH>/initialization
   src::MYBGC::pelagic                <MYBGCPATH>/pelagic
   src::MYBGC::benthic                <MYBGCPATH>/benthic

   bld::pp::MYBGC                     1
   bld::tool::fppflags::MYBGC         %FPPFLAGS
   bld::tool::fppkeys                 %bld::tool::fppkeys MYBGC_MACROS

where *MYBGC_MACROS* is the space-delimited list of macros used in the *MYBGC* model for
selecting/excluding specific parts of the code.
The BGC model code will be preprocessed in the configuration :file:`BLD` folder as for NEMO,
but with an independent path, like :file:`NEMO_MYBGC/BLD/MYBGC/<subfolders>`.

The compilation will be performed similarly to the previous case with the following …

.. code-block:: console

   $ makenemo -n 'NEMO_MYBGC' -m '<arch_my_machine>' -j 8 -e '<MYBGCPATH>/nemo_coupling'

.. note::
   The additional lines specific to the BGC model source and build paths can be written into
   a separate file, e.g. named :file:`MYBGC.fcm`,
   and then simply included in :file:`cpp_NEMO_MYBGC.fcm` as follows

   .. code-block:: perl

      bld::tool::fppkeys key_zdftke key_dynspg_ts key_iomput key_mpp_mpi key_top
      inc <MYBGCPATH>/MYBGC.fcm

   This will enable a more portable compilation structure for all MYBGC-related configurations.

.. warning::
   The coupling interface contained in :file:`nemo_coupling` cannot be added using the FCM syntax,
   as the same files already exist in NEMO and they are overridden only with
   the readdressing of MY_SRC contents, to avoid compilation conflicts due to duplicate routines.

All the modifications illustrated above can be easily implemented using shell or python scripting
to edit the NEMO configuration :file:`CPP.fcm` file and
to create the BGC-model-specific FCM compilation file with code paths.

.. |BFM man| replace:: BFM-NEMO coupling manual
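As a sketch of such scripting, the hypothetical helper below (not part of NEMO; names and layout follow the examples above) generates the BGC-specific FCM fragment and a ``cpp_NEMO_MYBGC.fcm`` that includes it:

```python
from pathlib import Path

def write_bgc_fcm(cfg_dir, bgc_root, subfolders, macros):
    """Hypothetical helper: write the MYBGC FCM fragment under the BGC
    model root and a cpp_NEMO_MYBGC.fcm that includes it, mirroring the
    layout illustrated above (names are illustrative, not NEMO API)."""
    cfg_dir, bgc_root = Path(cfg_dir), Path(bgc_root)
    cfg_dir.mkdir(parents=True, exist_ok=True)
    bgc_root.mkdir(parents=True, exist_ok=True)
    # BGC-specific source paths and preprocessing switches
    lines = [f"src::MYBGC::{sub} {bgc_root / sub}" for sub in subfolders]
    lines += [
        "bld::pp::MYBGC 1",
        "bld::tool::fppflags::MYBGC %FPPFLAGS",
        "bld::tool::fppkeys %bld::tool::fppkeys " + " ".join(macros),
    ]
    (bgc_root / "MYBGC.fcm").write_text("\n".join(lines) + "\n")
    # the configuration CPP file simply includes the fragment
    cpp_lines = ["bld::tool::fppkeys key_iomput key_mpp_mpi key_top",
                 f"inc {bgc_root / 'MYBGC.fcm'}"]
    cpp_file = cfg_dir / "cpp_NEMO_MYBGC.fcm"
    cpp_file.write_text("\n".join(cpp_lines) + "\n")
    return cpp_file
```

Keeping the generation in one function makes it easy to rebuild the FCM files for any MYBGC-derived configuration from a single source of truth.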
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/TOP/TRP/trcnxt.F90
r10425 → r12165

       ENDIF
       !                                ! Leap-Frog + Asselin filter time stepping
    -  IF( (neuler == 0 .AND. kt == nittrc000) .OR. ln_top_euler ) THEN   ! Euler time-stepping (only swap)
    +  IF( (neuler == 0 .AND. kt == nittrc000) ) THEN
    +     ! set up for leapfrog on second timestep
    +     DO jn = 1, jptra
    +        DO jk = 1, jpkm1
    +           trb(:,:,jk,jn) = trn(:,:,jk,jn)
    +           trn(:,:,jk,jn) = tra(:,:,jk,jn)
    +        END DO
    +     END DO
    +  ELSE IF( ln_top_euler ) THEN
    +     ! always doing euler timestepping
          DO jn = 1, jptra
             DO jk = 1, jpkm1
                …
             END DO
          END DO
    +  ENDIF
    +  IF( (neuler == 0 .AND. kt == nittrc000) .OR. ln_top_euler ) THEN   ! Euler time-stepping (only swap)
          IF (l_trdtrc .AND. .NOT. ln_linssh ) THEN   ! Zero Asselin filter contribution must be explicitly written out since for vvl
          !                                           ! Asselin filter is output by tra_nxt_vvl that is not called on this time step
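The intent of this change (separating the leapfrog start-up from always-Euler stepping) can be sketched in Python with scalar stand-ins for the 3D tracer arrays. The body of the always-Euler branch is elided in the changeset, so its copy semantics below are an assumption:

```python
def swap_time_levels(trb, trn, tra, first_step, ln_top_euler):
    """Mirror the branch structure introduced in trcnxt.F90: a leapfrog
    run needs distinct 'before' and 'now' levels after the Euler start,
    while pure Euler stepping collapses both onto the new field."""
    if first_step:
        # set up for leapfrog on second timestep:
        # 'before' keeps the previous 'now', 'now' takes the new field
        trb, trn = trn, tra
    elif ln_top_euler:
        # always doing euler timestepping (assumed: both time levels
        # receive the new field, as those lines are elided in the diff)
        trb, trn = tra, tra
    return trb, trn
```

For example, on a leapfrog start `swap_time_levels(1.0, 2.0, 3.0, True, False)` returns `(2.0, 3.0)`, keeping two distinct time levels for the next step.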
NEMO/branches/2019/dev_ASINTER-01-05_merged/src/TOP/trcbdy.F90
r11224 → r12165

       END DO
       IF( ANY(llsend1) .OR. ANY(llrecv1) ) THEN   ! if need to send/recv in at least one direction
    -     CALL lbc_lnk( 'bdytra', tsa, 'T', 1., kfillmode=jpfillnothing ,lsend=llsend1, lrecv=llrecv1 )
    +     CALL lbc_lnk( 'trcbdy', tra, 'T', 1., kfillmode=jpfillnothing ,lsend=llsend1, lrecv=llrecv1 )
       END IF
       !
NEMO/branches/2019/dev_ASINTER-01-05_merged/tests/CANAL/MY_SRC/usrdef_nam.F90
r11586 → r12165

       REWIND( numnam_cfg )   ! Namelist namusr_def (exist in namelist_cfg only)
       READ  ( numnam_cfg, namusr_def, IOSTAT = ios, ERR = 902 )
    -  902 IF( ios /= 0 ) CALL ctl_nam ( ios , 'namusr_def in configuration namelist' , cdtxt )
    +  902 IF( ios /= 0 ) CALL ctl_nam ( ios , 'namusr_def in configuration namelist' )
       !
       IF(lwm) WRITE( numond, namusr_def )