\documentclass{article}

\usepackage{fancyhdr}
\usepackage{times}
\usepackage{graphicx}
\usepackage{hyperref}
\usepackage{minted}
%\usepackage[normalem]{ulem} % strikethrough style with \sout{...}

\hypersetup{pdftitle={NEMO coding conventions}, pdfauthor={NEMO System Team}, colorlinks}
\setminted{style=emacs, breaklines, frame=leftline}
\newmintinline[forcode]{fortran}{fontsize=auto, frame=lines} % \forcode{...}
\newminted[forlines]{fortran}{} % \begin{forlines}

\pagestyle{empty}
\setlength{\leftmargin}{1 cm}
\setlength{\rightmargin}{1 cm}
\setlength{\oddsidemargin}{0 cm}
\setlength{\evensidemargin}{0 cm}
\setlength{\topmargin}{1cm}
\setlength{\textwidth}{16 cm}
\setlength{\textheight}{25cm}
\pagestyle{fancy}

\title{
 \includegraphics[width=0.3\textwidth]{../../../figures/NEMO_grey} \\
 \vspace{1.0cm} \rule{345pt}{1.5pt} \\
 \vspace{0.45cm} {\Huge NEMO coding conventions} \rule{345pt}{1.5pt} \\
}
%\title{NEMO coding conventions}
\author{\Large NEMO System Team
 \thanks{
 To be completed
 }
}
\date{version X.X  month year}

\begin{document}

\maketitle

\newpage

\tableofcontents

\newpage

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Introduction}

This document describes conventions\index{conventions} used in NEMO coding and suggested for its development.
The objectives are to offer a guide to all readers of the NEMO code, and to facilitate the work of
all the developers, including the validation of their developments, and
eventually the implementation of these developments within the NEMO platform.

A first approach of these rules can be found in the code in \path{./src/OCE/module_example} where
all the basics coding conventions are illustrated.
More details can be found below.

This work is based on the coding conventions in use for the Community Climate System Model
\footnote {\href{http://www.cesm.ucar.edu/working_groups/Software/dev_guide/dev_guide/node7.html}{UCAR conventions}},
the previous version of this document (``FORTRAN coding standard in the OPA System'') and
the expertise of the NEMO System Team.
After a general overview below, this document will describe:

\begin{itemize}
\item
 The style rules, $i.e.$ the syntax, appearance and naming conventions chosen to improve readability of the code;
\item
 The content rules, $i.e.$ the conventions to improve the reliability of the different parts of the code;
\item
 The package rules to go a step further by improving the reliability of the whole and
 interfaces between routines and modules.
\end{itemize}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Overview and general conventions}

NEMO has 3 major components: ocean dynamics (\path{./src/OCE}), sea ice (\path{./src/ICE}) and
marine biogeochemistry (\path{./src/MBG}).
%, linear tangent and adjoint of the dynamics ($TAM$), each of them corresponding to a directory.
In each directory, one will find some FORTRAN files and/or subdirectories, one per functionality of the code:
\path{./src/OCE/BDY} (boundaries), \path{./src/OCE/DIA} (diagnostics), \path{./src/OCE/DOM} (domain),
\path{./src/OCE/DYN} (dynamics), \path{./src/OCE/LDF} (lateral diffusion), etc. \\
All names are chosen to be as self-explanatory as possible, in English; all prefixes are 3 letters. \\
English is used for all variables names, comments, and documentation. \\
Physical units are MKS. The only exception to this is the temperature, which is expressed in degrees Celsius,
except in bulk formulae and parts of the SI$^3$ sea-ice model, where it is in Kelvin.
See the \path{./src/OCE/DOM/phycst.F90} file for conversions.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Architecture}

Within each directory, organisation of files is driven by orthogonality\index{orthogonality},
$i.e.$ one functionality of the code is intended to be in one and only one directory, and
one module and all its related routines are in one file.
The functional modules\index{module} are:

\begin{itemize}
\item \path{SBC} surface module
\item \path{IOM} management of the I/O
\item \path{NST} interface to AGRIF (nesting model) for dynamics and biogeochemistry
\item \path{OBC}, \path{BDY} management of structured and unstructured open boundaries
\item \path{C1D} 1D (vertical) configuration for dynamics, seaice and biogeochemistry
\item \path{OFF} offline module: passive tracer or biogeochemistry alone
\item \path{...}
\end{itemize}

For example, the file \textit{domain.F90} contains the module \texttt{domain} and all the subroutines related to
this module (\texttt{dom\_init, dom\_nam, dom\_ctl}).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Style rules}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Argument list format}

Routine argument lists will contain a maximum of 5 variables\index{variable} per line;
continuation lines can be used.
This applies both to the calling routine and the dummy argument list in the routine being called.
The purpose is to simplify matching up the arguments between caller and callee.

\begin{forlines}
SUBROUTINE tra_adv_eiv( kt, pun, pvn, pwn )

 CALL tra_adv_eiv( kt, zun, zvn, zwn )
\end{forlines}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Array syntax}

Except for long loops (see below), array notation should be used if possible.
To improve readability the array shape must be shown in brackets, $e.g.$:

\begin{forlines}
onedarraya(:) = onedarrayb(:) + onedarrayc(:)
twodarray (:,:) = scalar * anothertwodarray(:,:)
\end{forlines}

When accessing sections of arrays, for example in finite difference equations,
do so by using the triplet notation on the full array, $e.g.$:

\begin{forlines}
twodarray(:,2:len2) = scalar                  &
   &     * (   twodarray2(:,1:len2-1)         &
   &         - twodarray2(:,2:len2  ) )
\end{forlines}

For long, complicated loops, explicitly indexed loops should be preferred.
In general when using this syntax, the order of the loops indices should reflect the following scheme
(for best usage of data locality):

\begin{forlines}
DO jk = 1, jpk
   DO jj = 1, jpj
      DO ji = 1, jpi
         threedarray(ji,jj,jk) = ...
      END DO
   END DO
END DO
\end{forlines}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Case}

All FORTRAN keywords are in capital: \forcode{DIMENSION}, \forcode{WRITE}, \forcode{DO}, \forcode{END DO},
\forcode{NAMELIST}, ... All other parts of the NEMO code will be written in lower case.
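For example, a short fragment following this convention (\texttt{ln\_trace} is an illustrative logical, not an actual NEMO variable):

\begin{forlines}
DO jk = 1, jpk                     ! FORTRAN keywords in capitals
   IF( ln_trace ) THEN             ! everything else in lower case
      WRITE(numout,*) 'level ', jk
   ENDIF
END DO
\end{forlines}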

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Comments}

Comments in the code are useful when reading the code and changing or developing it. \\
The full documentation and detailed explanations are to be added in the reference manual
(TeX files, aside from the code itself). \\
In the code, the comments should explain variable content and describe each computational step. \\
Comments in the header start with ``!!''.
For more details on the content of the headers, see Content rules/Headers in this document. \\
Comments in the code start with ``!''. \\
All comments are indented (3, 6, or 9 blank spaces). \\
Short comments may be included on the same line as executable code, and an additional line can be used with
proper alignment.
For example:

\begin{forlines}
zx = zx * zzy   ! Describe what is going on and if it is
   !            ! too long use another ! for proper
   !            ! alignment with automatic indentation
\end{forlines}

More in-depth comments should be written in the form:

\begin{forlines}
 ! Check of some namelist values
\end{forlines}

or

\begin{forlines}
!
! !<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
! ! Bottom boundary condition on tke
! !<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
!
\end{forlines}

Key features of this style are

\begin{enumerate}
\item it starts with a ``!'' in the column required for proper indentation,
\item the text is offset above and below by a blank line or a content line used for underlining.
\end{enumerate}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Continuation lines}

Continuation lines can be used with precise alignment for readability. For example:

\begin{forlines}
avmu(ji,jj,jk) = avmu(ji,jj,jk) * ( un(ji,jj,jk-1) - un(ji,jj,jk) )   &
   &                            * ( ub(ji,jj,jk-1) - ub(ji,jj,jk) )   &
   &                            / (  fse3uw_n(ji,jj,jk)               &
   &                               * fse3uw_b(ji,jj,jk) )
\end{forlines}

Code lines, which are continuation lines of assignment statements, must begin to the right of the column of
the assignment operator.
Due to the possibility of automatic indentation in some editors (emacs for example),
use a ``\&'' as the first character of continuation lines to maintain the alignment.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Declaration of arguments and local variables}

In a routine, input arguments and local variables are declared 1 per line,
with a comment field on the same line as the declaration.
Multiple comment lines describing a single variable are acceptable if needed.
For example:

\begin{forlines}
INTEGER :: kstp ! ocean timestep index
\end{forlines}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{F90 Standard}

NEMO software adheres to the FORTRAN 95 language standard and does not rely on any specific language or
vendor extensions.


%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{FreeForm Source}

Free-form source will be used.
The F90/95 standard allows lines of up to 132 characters, but a self-imposed limit of 80 should enhance readability and
allow source files to be printed with two columns per page.
Multi-line comments that extend to column 100 are unacceptable.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Indentation}

Code as well as comment lines within loops, if-blocks, continuation lines, \forcode{MODULE} or
\forcode{SUBROUTINE} statements will be indented 3 characters for readability
(except for \forcode{CONTAINS} that remains at first column).

\begin{forlines}
MODULE mod1
   REAL(wp) :: xx
CONTAINS
   SUBROUTINE sub76( px, py, pz, pw, pa,   &
      &              pb, pc, pd, pe )
      ...
   END SUBROUTINE sub76
END MODULE mod1
\end{forlines}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Loops}

Loops, if explicit, should be structured with the do-end do construct as opposed to numbered loops.
Nevertheless, non-numeric labels can be used for a big iterative loop of a recursive algorithm.
In the case of a long loop, a self-descriptive label can be used ($i.e.$ not just a number).
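A self-descriptive label on a long explicit loop might be sketched as follows (the label and bounds are illustrative):

\begin{forlines}
vertical_sweep: DO jk = 1, jpk
   ...
END DO vertical_sweep
\end{forlines}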

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Naming Conventions: files}

A file containing a module will have the same name as the module it contains
(because dependency rules used by "make" programs are based on file names).
\footnote{
 For example, if routine A "\forcode{USE}"s module B, then "make" must be told of the dependency relation which
 requires B to be compiled before A.
 If one can assume that module B resides in file B.o,
 building a tool to generate this dependency rule ($e.g.$ A.o: B.o) is quite simple.
 Put another way, it is difficult (to say nothing of CPU-intensive) to search an entire source tree to
 find the file in which module B resides for each routine or module which "\forcode{USE}"s B.
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Naming Conventions: modules}

Use a meaningful English name and the ``3 letters'' naming convention:
first 3 letters for the code section, and last 3 to describe the module.
For example, zdftke, where ``zdf'' stands for vertical diffusion, and ``tke'' for turbulent kinetic energy. \\
Note that by implication multiple modules are not allowed in a single file.
The use of common blocks is deprecated in Fortran 90 and their use in NEMO is strongly discouraged.
Modules are a better way to declare static data.
Among the advantages of modules is the ability to freely mix data of various types, and
to limit access to contained variables through the use of the \forcode{ONLY} and \forcode{PRIVATE} attributes.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Naming Conventions: variables}

All variables should be named as explicitly as possible in English.
The naming convention concerns the prefix letters of these names, in order to identify the variable type and status. \\
Never use a FORTRAN keyword as a routine or variable name. \\
The table below lists the starting letter(s) to be used for variable naming, depending on their type and status:
%TABLE
\begin{table}[htbp]
 \begin{center}
 \begin{tabular}{p{50pt}p{50pt}p{50pt}p{50pt}p{50pt}p{50pt}p{50pt}}
 \hline
 Type \par / Status & integer& real& logical & character& double \par precision& complex \\
 \hline
 public \par or \par module variable& \textbf{m n} \par \textit{but not } \par \textbf{nn\_}& \textbf{a b e f g h o} \textbf{q} \textit{to} \textbf{x} \par but not \par \textbf{fs rn\_}& \textbf{l} \par \textit{but not} \par \textbf{lp ld ll ln\_}& \textbf{c} \par \textit{but not} \par \textbf{cp cd cl cn\_}& \textbf{d} \par \textit{but not} \par \textbf{dp dd dl dn\_}& \textbf{y} \par \textit{but not} \par \textbf{yp yd yl} \\
 \hline
 dummy \par argument& \textbf{k} \par \textit{but not} \par \textbf{kf}& \textbf{p} \par \textit{but not} \par \textbf{pp pf}& \textbf{ld}& \textbf{cd}& \textbf{dd}& \textbf{yd} \\
 \hline
 local \par variable& \textbf{i}& \textbf{z}& \textbf{ll}& \textbf{cl}& \textbf{dl}& \textbf{yl} \\
 \hline
 loop \par control& \textbf{j} \par \textit{but not } \par \textbf{jp}& & & & & \\
 \hline
 parameter& \textbf{jp}& \textbf{pp}& \textbf{lp}& \textbf{cp}& \textbf{dp}& \textbf{yp} \\
 \hline
 namelist& \textbf{nn\_}& \textbf{rn\_}& \textbf{ln\_}& \textbf{cn\_}& \textbf{dn\_}& \\
 \hline
 CPP \par macro& \textbf{kf}& \textbf{sf} \par & & & & \\
 \hline
 \end{tabular}
 \label{tab1}
 \end{center}
\end{table}
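As an illustration of the table above, a sketch of declarations following these prefixes (all names are invented for the example):

\begin{forlines}
INTEGER , PARAMETER :: jpmax = 100    ! "jp" : integer parameter
INTEGER             :: nn_itend       ! "nn_": integer namelist variable
REAL(wp)            :: rn_rdt         ! "rn_": real namelist variable
LOGICAL             :: ln_diag        ! "ln_": logical namelist variable

SUBROUTINE example( kt, pdt, ld_first )
   INTEGER , INTENT(in) :: kt         ! "k" : integer dummy argument
   REAL(wp), INTENT(in) :: pdt        ! "p" : real dummy argument
   LOGICAL , INTENT(in) :: ld_first   ! "ld": logical dummy argument
   INTEGER  :: ji, jj, jk             ! "j" : loop controls
   REAL(wp) :: zfact                  ! "z" : local real variable
END SUBROUTINE example
\end{forlines}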
%

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Operators}

Use of the operators \texttt{<, >, <=, >=, ==, /=} is strongly recommended instead of their deprecated counterparts
(\texttt{.lt., .gt., .le., .ge., .eq., .ne.}).
The motivation is readability.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Pre processor}

Where the use of a language preprocessor is required, it will be the C preprocessor (cpp). \\
The cpp key is the main feature used; it allows parts of the code to be ignored at the compilation step. \\
The advantage is to reduce the memory use; the drawback is that the compilation of the excluded part of the code isn't checked. \\
The cpp key feature should only be used for a few limited options, if it reduces the memory usage.
In all other cases, a logical variable and a FORTRAN \forcode{IF} should be preferred.
When using a cpp key \textit{key\_optionname},
a corresponding logical variable \textit{lk\_optionname} should be declared to
allow FORTRAN \forcode{IF} tests in the code and
a FORTRAN module with the same name ($i.e.$ \textit{optionname.F90}) should be defined.
This module is the only place where a ``\#if defined'' command appears, selecting either the whole FORTRAN code or
a dummy module.
For example, for the TKE vertical physics, the module name is \textit{zdftke.F90},
the CPP key is \textit{key\_zdftke} and the associated logical is \textit{lk\_zdftke}.

The following syntax:

\begin{forlines}
#if defined key_optionname
!! Part of code conditionally compiled if cpp key key_optionname is active
#endif
\end{forlines}

is to be used rather than the \#ifdef abbreviated form, since the latter may conflict with some Unix scripts.

Tests on cpp keys are included in NEMO at the compilation step:

\begin{itemize}
\item
 The CPP keys used are compared to the previous list of cpp keys
 (the compilation will stop if a non-existent key is specified)
\item
 If a change occurs in the CPP keys used for a given experiment, the whole compilation phase is done again.
\end{itemize}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Content rules}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Configurations}

The configuration defines the domain and the grid on which NEMO is running.
It may be useful to associate a cpp key and some variables to a given configuration, although
the part of the code changed under each of those keys should be minimized.
As an example, the "ORCA2" configuration (global ocean, 2 degrees grid size) is associated with
the cpp key \texttt{key\_orca2} for which

\begin{forlines}
cp_cfg = "orca"
jp_cfg = 2
\end{forlines}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Constants}

Physical constants ($e.g.$ $\pi$, gas constants) must never be hardwired into the executable portion of a code.
Instead, a mnemonically named variable or parameter should be set to the appropriate value,
in the setup routine for the package\index{package}.
We realize that many parameterizations rely on empirically derived constants or fudge factors,
which are not easy to name.
In these cases it is not forbidden to leave such factors coded as "magic numbers" buried in executable code, but
comments should be included referring to the source of the empirical formula.
Hardcoded numbers should never be passed through argument lists.
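A minimal sketch of this rule, in the spirit of \path{./src/OCE/DOM/phycst.F90} (the names and values here are illustrative):

\begin{forlines}
REAL(wp) :: rpi    ! pi
REAL(wp) :: rday   ! day length (s)
...
rpi  = 3.141592653589793_wp   ! set once in the setup routine
rday = 86400._wp
...
zarc = rpi * zradius          ! used by name, never as a magic number
\end{forlines}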

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Declaration for variables and constants}

\subsubsection{Rules}

Variables used as constants should be declared with the \forcode{PARAMETER} attribute and
always used without copying to local variables, in order to
prevent the use of different values for the same constant, or its accidental modification.
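For instance (a sketch with an invented name):

\begin{forlines}
INTEGER, PARAMETER :: jpmaxit = 100   ! maximum number of iterations
...
DO ji = 1, jpmaxit    ! the parameter is used directly,
   ...                ! never copied to a local variable
END DO
\end{forlines}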

\begin{itemize}
\item
 Usage of the \forcode{DIMENSION} statement or attribute is required in declaration statements
\item
 The ``::'' notation is quite useful to show that this program unit declaration part is written in
 standard FORTRAN syntax, even if there are no attributes to clarify the declaration section.
 Always use the notation $<$blank$>$::$<$three blanks$>$ to improve readability.
\item
 Declare the length of a character variable using the \forcode{CHARACTER} (len=xxx) syntax
 \footnote {
 The len specifier is important because it is possible to have several kinds for characters
 ($e.g.$ Unicode using two bytes per character, or there might be a different kind for Japanese $e.g.$ NEC).
 }
\item
 All global data (in contrast to local data), that is all data that can be accessed by other modules,
 must be accompanied by a comment field on the same line
 \footnote {
 This allows an easy search of where and how a variable is declared, using the Unix command:
 ``grep var *90 | grep !:''.
 }.
 For example:
 \begin{forlines}
 REAL(wp), DIMENSION(jpi,jpj,jpk) :: ua   !: i-horizontal velocity (m/s)
 \end{forlines}
\end{itemize}

\subsubsection{Implicit None}

All subroutines and functions will include an \forcode{IMPLICIT NONE} statement.
Thus all variables must be explicitly typed.
It also allows the compiler to detect typographical errors in variable names.
For modules, one \forcode{IMPLICIT NONE} statement in the module's definition section is needed.
This also removes the need to have \forcode{IMPLICIT NONE} statements in
any routines that are \forcode{CONTAINS}'ed in the module.
Improper data initialisation is another common source of errors
\footnote{
 A variable could contain an initial value you did not expect.
 This can happen for several reasons, $e.g.$ the variable has never been assigned a value, its value is outdated,
 memory has been allocated for a pointer but you have forgotten to initialise the variable pointed to.
}.
To avoid problems, initialise variables as close as possible to where they are first used.

\subsubsection{Attributes}

\forcode{PRIVATE} / \forcode{PUBLIC}:
All resources of a module are \forcode{PUBLIC} by default.
A reason to store multiple routines and their data in a single module is that
the scope of the data defined in the module can be limited to the routines which are in the same module.
This is accomplished with the \forcode{PRIVATE} attribute. \\
\forcode{INTENT}:
All dummy arguments of a routine must include the \forcode{INTENT} clause in their declaration in order to
improve control of variables in routine calls.
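A sketch of dummy-argument declarations with \forcode{INTENT} clauses (the routine and argument names are illustrative, following the dummy-argument prefixes of the naming conventions):

\begin{forlines}
SUBROUTINE tra_example( kt, pta, ld_print )
   INTEGER ,                         INTENT(in   ) :: kt        ! time-step index
   REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) :: pta       ! tracer trend
   LOGICAL ,                         INTENT(in   ) :: ld_print  ! verbosity flag
   ...
END SUBROUTINE tra_example
\end{forlines}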

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Headers}

Prologues are not used in NEMO for now, although it may become an interesting tool in combination with
ProTeX auto documentation script in the future.
Rules to code the headers and layout of a module or a routine are illustrated in the example module available with
the code: \path{./src/OCE/module_example}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Interface blocks}

Explicit interface blocks are required between routines if optional or keyword arguments are to be used.
They also allow the compiler to check that the type, shape and number of arguments specified in the \forcode{CALL}
are the same as those specified in the subprogram itself.
FORTRAN 95 compilers can automatically provide explicit interface blocks for routines contained in a module.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{I/O Error Conditions}

I/O statements which need to check an error condition will use the \texttt{iostat=} construct
instead of the outmoded \texttt{end=} and \texttt{err=}. \\
Note that a 0 value means success, a positive value means an error has occurred, and
a negative value means the end of record or end of file was encountered.
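A sketch of the construct (the unit and namelist names are illustrative):

\begin{forlines}
INTEGER :: ios   ! local integer for I/O status
!
READ( numnam, namexample, IOSTAT = ios )
IF( ios /= 0 ) THEN   ! 0: success; >0: error; <0: end of file/record
   WRITE(numout,*) 'error reading namexample namelist, iostat = ', ios
ENDIF
\end{forlines}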

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{PRINT  ASCII output files}

Output listing and errors are directed to logical unit \texttt{numout} (= 6), which
produces a file called \textit{ocean.output} (use \texttt{ln\_prt} to have one output file per process in MPP).
The logical variable \texttt{lwp} allows for less verbose output.
To output an error from a routine, one can use the following template:

\begin{forlines}
IF( nstop /= 0 .AND. lwp ) THEN ! error print
 WRITE(numout,cform_err)
 WRITE(numout,*) nstop, ' error have been found'
ENDIF
\end{forlines}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Precision}

Parameterizations should not rely on vendorsupplied flags to supply a default floating point precision or
integer size.
The F95 \forcode{KIND} feature should be used instead.
In order to improve portability between 32 and 64 bit platforms,
it is necessary to make use of kinds by using a specific module \path{./src/OCE/par_kind.F90}
declaring the "kind definitions" to obtain the required numerical precision and range as well as
the size of \forcode{INTEGER}.
It should be noted that numerical constants need to have a suffix of \texttt{\_kindvalue} to
have the according size. \\
Thus \forcode{wp} being the "working precision" as declared in \path{./src/OCE/par_kind.F90},
declaring real array \forcode{zpc} will take the form:

\begin{forlines}
REAL(wp), DIMENSION(jpi,jpj,jpk) :: zpc ! power consumption
\end{forlines}
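Numerical constants appearing in expressions follow the same rule, $e.g.$ (a sketch):

\begin{forlines}
zpc(:,:,:) = 0.5_wp * ( zpc(:,:,:) + 1._wp )   ! constants carry the _wp suffix
\end{forlines}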

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Structures}

The \forcode{TYPE} structure, which allows related variables to be gathered into a single derived type,
is used more and more often in NEMO, especially in the modules dealing with reading fields, or interfaces.
For example:

\begin{forlines}
! Definition of a tracer as a structure
TYPE PTRACER
   CHARACTER(len=20) :: sname   ! short name
   CHARACTER(len=80) :: lname   ! long name
   CHARACTER(len=20) :: unit    ! unit
   LOGICAL           :: lini    ! read in a file or not
   LOGICAL           :: lsav    ! output the tracer or not
END TYPE PTRACER

TYPE(PTRACER), DIMENSION(jptra) :: tracer
\end{forlines}

Missing rule on structure name??

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Packages coding rules}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Bounds checking}

NEMO is able to run when an array bounds checking option is enabled
(provided the cpp key \texttt{key\_vectopt\_loop} is not defined). \\
Thus, constructs of the following form are disallowed:

\begin{forlines}
REAL(wp) :: arr(1)
\end{forlines}

where "arr" is an input argument into which the user wishes to index beyond 1.
Use of the (*) construct in array dimensioning is forbidden also because
it effectively disables array bounds checking.
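Instead, dummy array arguments should be declared with their true extent, passed through the argument list, so that bounds checking remains effective (a sketch with illustrative names):

\begin{forlines}
SUBROUTINE sub( kn, parr )
   INTEGER ,                INTENT(in) :: kn     ! array size
   REAL(wp), DIMENSION(kn), INTENT(in) :: parr   ! explicit shape: bounds checkable
   ...
END SUBROUTINE sub
\end{forlines}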

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Communication}

A package should refer only to its own modules and subprograms and to those intrinsic functions included in
the Fortran standard. \\
All communication with the package will be through the argument list or namelist input.
\footnote{
 The point behind this rule is that packages should not have to know details of
 the surrounding model data structures, or the names of variables outside of the package.
 A notable exception to this rule is model resolution parameters.
 The reason for the exception is to allow compiletime array sizing inside the package.
 This is often important for efficiency.
}

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Error conditions}

When an error condition occurs inside a package, a message describing what went wrong will be printed
(see PRINT  ASCII output files).
The name of the routine in which the error occurred must be included.
It is acceptable to terminate execution within a package, but
the developer may instead wish to return an error flag through the argument list,
see \textit{stpctl.F90}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Memory management}

The main action is to identify and declare which arrays are \forcode{PUBLIC} and which are \forcode{PRIVATE}. \\
As of version 3.3.1 of NEMO, the use of static arrays (size fixed at compile time) has been deprecated.
All module arrays are now declared \forcode{ALLOCATABLE} and
allocated in either the \texttt{\_alloc()} or \texttt{\_init()} routines.
The success or otherwise of each \forcode{ALLOCATE} must be checked using
the \texttt{stat=} optional argument. \\

In addition to arrays contained within modules, many routines in NEMO require local, ``workspace'' arrays to
hold the intermediate results of calculations.
In previous versions of NEMO, these arrays were declared in such a way as to be automatically allocated on
the stack when the routine was called.
An example of an automatic array is:

\begin{forlines}
SUBROUTINE sub(n)
 REAL :: a(n)
 ...
END SUBROUTINE sub
\end{forlines}

The downside of this approach is that the program will crash if it runs out of stack space and
the reason for the crash might not be obvious to the user.

Therefore, as of version 3.3.1, the use of automatic arrays is deprecated.
Instead, a new module, \textit{wrk\_nemo.F90}, has been introduced which
contains 1-, 2-, 3- and 4-dimensional workspace arrays for use in subroutines.
These workspace arrays should be used in preference to declaring new, local (allocatable) arrays whenever possible.
The only exceptions to this are when workspace arrays with lower bounds other than 1 and/or
with extent(s) greater than those in the \textit{wrk\_nemo.F90} module are required. \\

The 2D, 3D and 4D workspace arrays in \textit{wrk\_nemo.F90} have extents \texttt{jpi}, \texttt{jpj},
\texttt{jpk} and \texttt{jpts} ($x$, $y$, $z$ and tracers) in the first, second, third and fourth dimensions,
respectively.
The 1D arrays are allocated with extent MAX($jpi \times jpj, jpk \times jpj, jpi \times jpk$). \\

The \forcode{REAL (KIND = wp)} workspace arrays in \textit{wrk\_nemo.F90}
are named $e.g.$ \texttt{wrk\_1d\_1, wrk\_4d\_2} etc. and
should be accessed by USE'ing the \textit{wrk\_nemo.F90} module.
Since these arrays are available to any routine,
some care must be taken that a given workspace array is not already being used somewhere up the call stack.
To help with this, \textit{wrk\_nemo.F90} also contains some utility routines;
\texttt{wrk\_in\_use()} and \texttt{wrk\_not\_released()}.
The former first checks that the requested arrays are not already in use and then sets internal flags to show that
they are now in use.
The \texttt{wrk\_not\_released()} routine unsets those internal flags.
A subroutine using this functionality for two, 3D workspace arrays named \texttt{zwrk1} and
\texttt{zwrk2} will look something like:

\begin{forlines}
SUBROUTINE sub()
   USE wrk_nemo, ONLY: wrk_in_use, wrk_not_released
   USE wrk_nemo, ONLY: zwrk1 => wrk_3d_5, zwrk2 => wrk_3d_6
   !
   IF( wrk_in_use(3, 5,6) ) THEN
      CALL ctl_stop('sub: requested workspace arrays unavailable.')
      RETURN
   END IF
   ...
   ...
   IF( wrk_not_released(3, 5,6) ) THEN
      CALL ctl_stop('sub: failed to release workspace arrays.')
   END IF
   !
END SUBROUTINE sub
\end{forlines}

The first argument to each of the utility routines is the dimensionality of the required workspace (14).
Following this there must be one or more integers identifying which workspaces are to be used/released.
Note that, in the interests of keeping the code as simple as possible,
there is no use of \forcode{POINTER}s etc. in the \textit{wrk\_nemo.F90} module.
Therefore it is the responsibility of the developer to ensure that the arguments to \texttt{wrk\_in\_use()} and
\texttt{wrk\_not\_released()} match the workspace arrays actually being used by the subroutine. \\

If a workspace array is required that has extent(s) less than those of the arrays in
the \textit{wrk\_nemo.F90} module then the advantages of implicit loops and bounds checking may be retained by
defining a pointer to a subarray as follows:

\begin{forlines}
SUBROUTINE sub()
   USE wrk_nemo, ONLY: wrk_in_use, wrk_not_released
   USE wrk_nemo, ONLY: wrk_3d_5
   !
   REAL(wp), DIMENSION(:,:,:), POINTER :: zwrk1
   !
   IF( wrk_in_use(3, 5) ) THEN
      CALL ctl_stop('sub: requested workspace arrays unavailable.')
      RETURN
   END IF
   !
   zwrk1 => wrk_3d_5(1:10,1:10,1:10)
   ...
END SUBROUTINE sub
\end{forlines}

Here, instead of ``use associating'' the variable \texttt{zwrk1} with the array \texttt{wrk\_3d\_5}
(as in the first example), it is explicitly declared as a pointer to a 3D array.
It is then associated with a subarray of \texttt{wrk\_3d\_5} once the call to
\texttt{wrk\_in\_use()} has completed successfully.
Note that in F95 (to which NEMO conforms) it is not possible for either the upper or lower array bounds of
the pointer object to differ from those of the target array. \\

In addition to the \forcode{REAL (KIND = wp)} workspace arrays,
\textit{wrk\_nemo.F90} also contains 2D integer arrays and 2D REAL arrays with extent (\texttt{jpi}, \texttt{jpk}),
$i.e.$ $xz$.
The utility routines for the integer workspaces are \texttt{iwrk\_in\_use()} and \texttt{iwrk\_not\_released()} while
those for the $xz$ workspaces are \texttt{wrk\_in\_use\_xz()} and \texttt{wrk\_not\_released\_xz()}.

Should a call to one of the \texttt{wrk\_in\_use()} family of utilities fail,
an error message is printed along with a table showing which of the workspace arrays are currently in use.
This should enable the developer to choose alternatives for use in the subroutine being worked on. \\

When compiling NEMO for production runs,
the calls to \texttt{wrk\_in\_use()} / \texttt{wrk\_not\_released()} can be reduced to stubs that just
return \forcode{.false.} by setting the cpp key \texttt{key\_no\_workspace\_check}.
These stubs may then be inlined (and thus effectively removed altogether) by setting appropriate compiler flags
($e.g.$ ``-finline'' for the Intel compiler or ``-Q'' for the IBM compiler).

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Optimisation}

With current computer architectures, optimisation cannot be considered independently of the computer type.
In NEMO, portability is a priority over any machine-specific optimisation.

Some tools are available to help: for vector computers, \texttt{key\_vectopt\_loop} allows loop unrolling.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Package attribute: \forcode{PRIVATE}, \forcode{PUBLIC}, \forcode{USE}, \forcode{ONLY}}

Module variables and routines should be encapsulated by using the \forcode{PRIVATE} attribute.
What shall be used outside the module can be declared \forcode{PUBLIC} instead.
Use \forcode{USE} with the \forcode{ONLY} attribute to specify which of the variables, type definitions etc...
defined in a module are to be made available to the using routine.
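A minimal sketch of this encapsulation pattern (the module, routine and variable names are invented):

\begin{forlines}
MODULE zdfexample
   IMPLICIT NONE
   PRIVATE                    ! everything private by default
   PUBLIC   zdf_example       ! the only routine visible from outside
   REAL(wp) :: rcoef          ! module variable, not accessible elsewhere
CONTAINS
   SUBROUTINE zdf_example
      ...
   END SUBROUTINE zdf_example
END MODULE zdfexample

! in a using routine:
USE zdfexample, ONLY: zdf_example
\end{forlines}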

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Parallelism using MPI}

NEMO is written so that it can run on a single processor, or on several processors using MPI
($i.e.$ activating the cpp key \texttt{key\_mpp\_mpi}).
The domain decomposition divides the global domain into sub-domains (see the NEMO reference manual).
Whilst coding a new development, MPI compatibility has to be taken into account
(see \path{./src/LBC/lib_mpp.F90}) and should be tested.
By default, the horizontal decomposition is chosen to be as square as possible.
However, this may be overridden by specifying the number of subdomains in latitude and longitude in
the \texttt{nammpp} section of the namelist file.
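In practice, MPI compatibility mostly means refreshing the halo rows of any updated array through the generic boundary-exchange routine, so that neighbouring subdomains see consistent values. A sketch is shown below; the exact interface of \texttt{lbc\_lnk} varies between NEMO versions:

\begin{forlines}
! zwrk(:,:) has just been updated on the inner domain of each subdomain
CALL lbc_lnk( zwrk, 'T', 1. )   ! exchange halos: 'T' grid-point type, sign +1
\end{forlines}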

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Features to be avoided}

The code must follow the current standards of FORTRAN and ANSI C.
In particular, the code should not produce any WARNING during compilation, so that
users can easily be alerted to potential bugs when these appear in their new developments.
Below is a list of features to avoid:
\begin{itemize}
\item
 \forcode{COMMON} block
 (use the declaration part of \forcode{MODULE} instead)
\item
 \forcode{EQUIVALENCE}
 (use \forcode{POINTER} or derived data type instead to form data structure)
\item
 Assigned and computed \forcode{GOTO}
 (use the \forcode{CASE} construct instead)
\item
 Arithmetic \forcode{IF} statement
 (use the block \forcode{IF}, \forcode{ELSE}, \forcode{ELSEIF}, \forcode{ENDIF} or
 \forcode{SELECT CASE} construct instead)
\item
 Labeled \forcode{DO} construct
 (use unlabeled \forcode{END DO} instead)
\item
 \forcode{FORMAT} statement
 (use character parameters or
 explicit format specifiers inside the \forcode{READ} or \forcode{WRITE} statement instead)
\item
 \forcode{GOTO} and \forcode{CONTINUE} statements
 (use \forcode{IF}, \forcode{CASE}, \forcode{DO WHILE}, \forcode{EXIT} or \forcode{CYCLE} statements or
 a contained subroutine instead)
\item
 \forcode{PAUSE}
\item
 \forcode{ENTRY} statement: a subprogram must only have one entry point.
\item
 \forcode{RETURN} is obsolete and so not necessary at the end of program units
\item
 \forcode{FUNCTION} statements with side effects
 \footnote{
 First, the code is easier to understand if you can rely on
 the rule that functions do not change their arguments.
 Second, some compilers generate more efficient code for PURE functions
 (FORTRAN 95 provides the PURE and ELEMENTAL attributes), because
 they can store the arguments in different places.
 This is especially important on massively parallel machines as well as on vector machines.
 }
\item
 \forcode{DATA} and \forcode{BLOCK DATA}
 (use initialisers)
\end{itemize}
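As a sketch, a computed \forcode{GOTO} dispatch and a \forcode{DATA} initialisation can be replaced as follows (the variable names are illustrative only):

\begin{forlines}
! instead of:   GO TO (100, 200, 300), kopt
SELECT CASE( kopt )
CASE( 1 )      ;   zcoef = 1._wp
CASE( 2 )      ;   zcoef = 0.5_wp
CASE DEFAULT   ;   CALL ctl_stop( 'unexpected value of kopt' )
END SELECT

! instead of a DATA statement, use an initialiser in the declaration:
REAL(wp), PARAMETER ::   pp_grav = 9.80665_wp   ! gravitational acceleration
\end{forlines}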

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

% \printindex
% \input NEMO_coding.conv.ind

\end{document}
Index: NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/main/appendices.tex
===================================================================
 NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/main/appendices.tex (revision 11314)
+++ NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/main/appendices.tex (revision 11315)
@@ 4,5 +4,4 @@
\subfile{../subfiles/annex_C} %% Discrete invariants of the eqs.
\subfile{../subfiles/annex_iso} %% Isoneutral diffusion using triads
\subfile{../subfiles/annex_D} %% Coding rules
%% Not included
Index: NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/main/bibliography.bib
===================================================================
 NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/main/bibliography.bib (revision 11314)
+++ NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/main/bibliography.bib (revision 11315)
@@ 2306,4 +2306,20 @@
}
+@article{ qiao.yuan.ea_OD10,
+ title = "A three-dimensional surface wave--ocean circulation coupled
+ model and its initial testing",
+ pages = "1339--1355",
+ journal = "Ocean Dynamics",
+ volume = "60",
+ number = "5",
+ author = "F. Qiao and Y. Yuan and T. Ezer and C. Xia and
+ Y. Yang and X. Lu and Z. Song",
+ year = "2010",
+ month = "oct",
+ publisher = "Springer-Verlag",
+ issn = "1616-7341",
+ doi = "10.1007/s10236-010-0326-y"
+}
+
@article{ redi_JPO82,
title = "Oceanic isopycnal mixing by coordinate rotation",
Index: NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/annex_D.tex
===================================================================
 NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/annex_D.tex (revision 11314)
+++ (revision )
@@ 1,207 +1,0 @@
\documentclass[../main/NEMO_manual]{subfiles}

\begin{document}
% ================================================================
% Appendix D Coding Rules
% ================================================================
\chapter{Coding Rules}
\label{apdx:D}

\minitoc

\newpage

A ``model life'' is more than ten years.
Its software, composed of a few hundred modules, is used by many people, scientists or students, who
do not necessarily know every aspect of computing very well.
Moreover, a well thought-out program is easier to read and understand, less difficult to modify,
produces fewer bugs and is easier to maintain.
Therefore, it is essential that the model development follows some rules:

- well planned and designed

- well written

- well documented (both on- and off-line)

- maintainable

- easily portable

- flexible.

To satisfy part of these aims, \NEMO is written with a coding standard which is close to the ECMWF rules,
named DOCTOR \citep{gibson_rpt86}.
These rules offer some advantages:

- providing a well presented program

- using rules for variable names which allow their type to be recognised
(integer, real, parameter, local or shared variables, etc.).

This facilitates both the understanding and the debugging of an algorithm.

% ================================================================
% The program structure
% ================================================================
\section{Program structure}
\label{sec:D_structure}

Each program begins with a set of headline comments containing:

- the program title

- the purpose of the routine

- the method and algorithms used

- the details of the input and output interfaces

- the external routines and functions used (if they exist)

- references (if they exist)

- the author name(s), the date of creation and any updates.

- Each program is split into several well separated sections and
subsections with an underlined title and specific labelled statements.

- A program should be no more than 200 to 300 lines long.

A template of the module style can be found in the NEMO repository in the following file:
NEMO/OPA\_SRC/module\_example.
% ================================================================
% Coding conventions
% ================================================================
\section{Coding conventions}
\label{sec:D_coding}

- Use the universal language \fninety, and try to avoid obsolescent features like statement functions;
do not use GO TO and EQUIVALENCE statements.

- A continuation line begins with the character {\&} indented by three spaces compared to the previous line,
while the previous line ends with the character {\&}.

- All variables must be declared.
The code is usually compiled with implicit none.

- Never use continuation lines in the declaration of a variable, so that
searching for a variable in the code with a \textit{grep} command will always find the complete declaration line.

- In the declaration of a PUBLIC variable, the comment part at the end of the line should start with
the two characters "\verb?!:?".
The following UNIX command, \\
\verb?grep var_name *90 | grep \!: ? \\
will display the module name and the line where the var\_name declaration is.

- Always use three-space indentation in DO loop, CASE, or IF-ELSEIF-ELSE-ENDIF constructs.

- Use a space after a comma, except when it separates the indices of an array.

- Use a call to the ctl\_stop routine instead of just a STOP.
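By way of illustration, several of the rules above combine as follows (the variable names are indicative only):

\begin{forlines}
REAL(wp), PUBLIC ::   rn_rdt      !: time step of the dynamics (s)

DO jj = 1, jpj
   DO ji = 1, jpi                 ! three-space indentation at each level
      zws(ji,jj) = zws(ji,jj)     &
         &       + zcoef          ! continuation line starts with &
   END DO
END DO
\end{forlines}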

\newpage

% ================================================================
% Naming Conventions
% ================================================================
\section{Naming conventions}
\label{sec:D_naming}

The purpose of the naming conventions is to use prefix letters to classify model variables.
These conventions allow the variable type to be easily known and rapidly identified.
The naming conventions are summarised in the Table below:


%TABLE
\begin{table}[htbp]
 \label{tab:VarName}
 \begin{center}
 \begin{tabular}{p{45pt}p{35pt}p{45pt}p{40pt}p{40pt}p{40pt}p{40pt}p{40pt}}
 \hline
 Type \par / Status
 & integer
 & real
 & logical
 & character
 & structure
 & double \par precision
 & complex \\
 \hline
 public \par or \par module variable
 & \textbf{m n} \par \textit{but not} \par \textbf{nn\_ np\_}
 & \textbf{a b e f g h o q r} \par \textbf{t} \textit{to} \textbf{x} \par but not \par \textbf{fs rn\_}
 & \textbf{l} \par \textit{but not} \par \textbf{lp ld} \par \textbf{ ll ln\_}
 & \textbf{c} \par \textit{but not} \par \textbf{cp cd} \par \textbf{cl cn\_}
 & \textbf{s} \par \textit{but not} \par \textbf{sp sd} \par \textbf{sl sn\_}
 & \textbf{d} \par \textit{but not} \par \textbf{dp dd} \par \textbf{dl dn\_}
 & \textbf{y} \par \textit{but not} \par \textbf{yp yd} \par \textbf{yl yn\_} \\
 \hline
 dummy \par argument
 & \textbf{k} \par \textit{but not} \par \textbf{kf}
 & \textbf{p} \par \textit{but not} \par \textbf{pp pf}
 & \textbf{ld}
 & \textbf{cd}
 & \textbf{sd}
 & \textbf{dd}
 & \textbf{yd} \\
 \hline
 local \par variable
 & \textbf{i}
 & \textbf{z}
 & \textbf{ll}
 & \textbf{cl}
 & \textbf{sl}
 & \textbf{dl}
 & \textbf{yl} \\
 \hline
 loop \par control
 & \textbf{j} \par \textit{but not} \par \textbf{jp} &&&&&& \\
 \hline
 parameter
 & \textbf{jp np\_}
 & \textbf{pp}
 & \textbf{lp}
 & \textbf{cp}
 & \textbf{sp}
 & \textbf{dp}
 & \textbf{yp} \\
 \hline
 namelist
 & \textbf{nn\_}
 & \textbf{rn\_}
 & \textbf{ln\_}
 & \textbf{cn\_}
 & \textbf{sn\_}
 & \textbf{dn\_}
 & \textbf{yn\_}
 \\
 \hline
 CPP \par macro
 & \textbf{kf}
 & \textbf{fs} \par &&&&& \\
 \hline
 \end{tabular}
 \end{center}
\end{table}
%

N.B.: ``parameter'' here is not restricted to the \fortran sense of the word;
it is also used for code variables that are read from the namelist and should never be modified during a simulation.
This is the case, for example, for the size of the domain (jpi, jpj, jpk).
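For illustration, some declarations following these naming conventions (names are indicative only):

\begin{forlines}
INTEGER , PUBLIC   ::   nn_itend    ! namelist integer      ("nn_" prefix)
REAL(wp), PUBLIC   ::   rn_rdt      ! namelist real         ("rn_" prefix)
LOGICAL , PUBLIC   ::   ln_zdfexp   ! namelist logical      ("ln_" prefix)
INTEGER            ::   ji, jj, jk  ! loop control          ("j"  prefix)
REAL(wp)           ::   ztmp        ! local real workspace  ("z"  prefix)
INTEGER, PARAMETER ::   jpni = 1    ! integer parameter     ("jp" prefix)
\end{forlines}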

\newpage

% ================================================================
% The program structure
% ================================================================
%\section{Program structure}
%\label{sec:Apdx_D_structure}

%To be done....
\biblio

\pindex

\end{document}
Index: NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/chap_DIA.tex
===================================================================
 NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/chap_DIA.tex (revision 11314)
+++ NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/chap_DIA.tex (revision 11315)
@@ 15,5 +15,5 @@
% Old Model Output
% ================================================================
\section{Old model output (default)}
+\section{Model output}
\label{sec:DIA_io_old}
@@ 1494,5 +1494,5 @@
\textbf{Note that} in the current version (v3.6), many changes have been introduced but not fully tested.
In particular, options associated with \np{ln\_dyn\_mxl}, \np{ln\_vor\_trd}, and \np{ln\_tra\_mxl} are not working,
and none of the options have been tested with variable volume (\ie \key{vvl} defined).
+and none of the options have been tested with variable volume (\ie \np{ln\_linssh}\forcode{ = .true.}).
% 
@@ 1930,5 +1930,5 @@
Third, the discretisation of \autoref{eq:steric_Bq} depends on the type of free surface which is considered.
In the non linear free surface case, \ie \key{vvl} defined, it is given by
+In the non linear free surface case, \ie \np{ln\_linssh}\forcode{ = .true.}, it is given by
\[
@@ 1969,12 +1969,11 @@
where $S_o$ and $p_o$ are the initial salinity and pressure, respectively.
Both steric and thermosteric sea level are computed in \mdl{diaar5} which needs the \key{diaar5} defined to
be called.
+Both steric and thermosteric sea level are computed in \mdl{diaar5}.
% 
% Other Diagnostics
% 
\section[Other diagnostics (\texttt{\textbf{key\_diahth}}, \texttt{\textbf{key\_diaar5}})]
{Other diagnostics (\protect\key{diahth}, \protect\key{diaar5})}
+\section[Other diagnostics]
+{Other diagnostics}
\label{sec:DIA_diag_others}
@@ 1995,23 +1994,4 @@
 the depth of the thermocline (maximum of the vertical temperature gradient) (\mdl{diahth})
% 
% Poleward heat and salt transports
% 

\subsection[Poleward heat and salt transports (\textit{diaptr.F90})]
{Poleward heat and salt transports (\protect\mdl{diaptr})}

%namptr

\nlst{namptr}
%

The poleward heat and salt transports, their advective and diffusive component,
and the meriodional stream function can be computed online in \mdl{diaptr} \np{ln\_diaptr} to true
(see the \textit{\ngn{namptr} } namelist below).
When \np{ln\_subbas}\forcode{ = .true.}, transports and stream function are computed for the Atlantic, Indian,
Pacific and IndoPacific Oceans (defined north of 30\deg{S}) as well as for the World Ocean.
The subbasin decomposition requires an input file (\ifile{subbasins}) which contains three 2D mask arrays,
the IndoPacific mask been deduced from the sum of the Indian and Pacific mask (\autoref{fig:mask_subasins}).
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 2034,11 +2014,23 @@
% CMIP specific diagnostics
% 
\subsection[CMIP specific diagnostics (\textit{diaar5.F90})]
+\subsection[CMIP specific diagnostics (\textit{diaar5.F90}, \textit{diaptr.F90})]
{CMIP specific diagnostics (\protect\mdl{diaar5})}
A series of diagnostics has been added in the \mdl{diaar5}.
They corresponds to outputs that are required for AR5 simulations (CMIP5)
+A series of diagnostics has been added in the \mdl{diaar5} and \mdl{diaptr}.
+In \mdl{diaar5} they correspond to outputs that are required for AR5 simulations (CMIP5)
(see also \autoref{sec:DIA_steric} for one of them).
Activating those outputs requires to define the \key{diaar5} CPP key.
+The module \mdl{diaar5} is active when one of the following outputs is required: global total volume (voltot), global mean ssh (sshtot), global total mass (masstot), global mean temperature (temptot), global mean ssh steric (sshsteric), global mean ssh thermosteric (sshthster), global mean salinity (saltot), sea water pressure at sea floor (botpres), dynamic sea surface height (sshdyn).
+
+In \mdl{diaptr}, when \np{ln\_diaptr}\forcode{ = .true.}
+(see the \textit{\ngn{namptr}} namelist below), the poleward heat and salt transports, their advective and diffusive components, and the meridional stream function can be computed online.
+When \np{ln\_subbas}\forcode{ = .true.}, transports and stream function are computed for the Atlantic, Indian,
+Pacific and Indo-Pacific Oceans (defined north of 30\deg{S}) as well as for the World Ocean.
+The sub-basin decomposition requires an input file (\ifile{subbasins}) which contains three 2D mask arrays,
+the Indo-Pacific mask being deduced from the sum of the Indian and Pacific masks (\autoref{fig:mask_subasins}).
+
+%namptr
+
+\nlst{namptr}
+%
% 
Index: NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/chap_DOM.tex
===================================================================
 NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/chap_DOM.tex (revision 11314)
+++ NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/chap_DOM.tex (revision 11315)
@@ 18,10 +18,21 @@
%  domclo: closed sea and lakes.... management of closea sea area : specific to global configuration, both forced and coupled
+\vfill
+\begin{figure}[b]
+\subsubsection*{Changes record}
+\begin{tabular}{m{0.08\linewidth}m{0.32\linewidth}m{0.6\linewidth}}
+ Release & Author(s) & Modifications \\
+\hline
+ {\em 4.0} & {\em Simon M{\"u}ller \& Andrew Coward} & {\em Compatibility changes for v4.0. Major simplification has moved many of the options to external domain configuration tools. For now this information has been retained in an appendix} \\
+ {\em 3.x} & {\em Sebastien Masson, Gurvan Madec \& Rashid Benshila } & {\em } \\
+\end{tabular}
+\end{figure}
+
\newpage
Having defined the continuous equations in \autoref{chap:PE} and chosen a time discretization \autoref{chap:STP},
we need to choose a discretization on a grid, and numerical algorithms.
+we need to choose a grid for spatial discretization and related numerical algorithms.
In the present chapter, we provide a general description of the staggered grid used in \NEMO,
and other information relevant to the main directory routines as well as the DOM (DOMain) directory.
+and other relevant information about the DOM (DOMain) source-code modules.
% ================================================================
@@ 55,5 +66,5 @@
The numerical techniques used to solve the Primitive Equations in this model are based on the traditional,
centred secondorder finite difference approximation.
Special attention has been given to the homogeneity of the solution in the three space directions.
+Special attention has been given to the homogeneity of the solution in the three spatial directions.
The arrangement of variables is the same in all directions.
It consists of cells centred on scalar points ($t$, $S$, $p$, $\rho$) with vector points $(u, v, w)$ defined in
@@ 71,5 +82,5 @@
Each scale factor is defined as the local analytical value provided by \autoref{eq:scale_factors}.
As a result, the mesh on which partial derivatives $\pd[]{\lambda}$, $\pd[]{\varphi}$ and
$\pd[]{z}$ are evaluated in a uniform mesh with a grid size of unity.
+$\pd[]{z}$ are evaluated is a uniform mesh with a grid size of unity.
Discrete partial derivatives are formulated by the traditional, centred second order finite difference approximation
while the scale factors are chosen equal to their local analytical value.
@@ 79,5 +90,5 @@
the continuous properties (see \autoref{apdx:C}).
A similar, related remark can be made about the domain size:
when needed, an area, volume, or the total ocean depth must be evaluated as the sum of the relevant scale factors
+when needed, an area, volume, or the total ocean depth must be evaluated as the product or sum of the relevant scale factors
(see \autoref{eq:DOM_bar} in the next section).
@@ 87,5 +98,5 @@
\begin{tabular}{p{46pt}p{56pt}p{56pt}p{56pt}}
\hline
 T & $i $ & $j $ & $k $ \\
+ t & $i $ & $j $ & $k $ \\
\hline
u & $i + 1/2$ & $j $ & $k $ \\
@@ 107,10 +118,42 @@
\protect\label{tab:cell}
Location of gridpoints as a function of integer or integer and a half value of the column, line or level.
 This indexing is only used for the writing of the semi discrete equation.
 In the code, the indexing uses integer values only and has a reverse direction in the vertical
+ This indexing is only used for the writing of the semi discrete equations.
+ In the code, the indexing uses integer values only and is positive downwards in the vertical with $k=1$ at the surface.
(see \autoref{subsec:DOM_Num_Index})
}
\end{center}
\end{table}
+%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+
+Note that the definition of the scale factors
+(\ie as the analytical first derivative of the transformation that
+results in $(\lambda,\varphi,z)$ as a function of $(i,j,k)$)
+is specific to the \NEMO model \citep{marti.madec.ea_JGR92}.
+As an example, a scale factor in the $i$ direction is defined locally at a $t$point,
+whereas many other models on a C grid choose to define such a scale factor as
+the distance between the $u$points on each side of the $t$point.
+Relying on an analytical transformation has two advantages:
+firstly, there is no ambiguity in the scale factors appearing in the discrete equations,
+since they are first introduced in the continuous equations;
+secondly, analytical transformations encourage good practice by the definition of smoothly varying grids
+(rather than allowing the user to set arbitrary jumps in thickness between adjacent layers) \citep{treguier.dukowicz.ea_JGR96}.
+An example of the effect of such a choice is shown in \autoref{fig:zgr_e3}.
+%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+\begin{figure}[!t]
+ \begin{center}
+ \includegraphics[width=\textwidth]{Fig_zgr_e3}
+ \caption{
+ \protect\label{fig:zgr_e3}
+ Comparison of (a) traditional definitions of gridpoint position and gridsize in the vertical,
+ and (b) analytically derived gridpoint position and scale factors.
+ For both grids here, the same $w$point depth has been chosen but
+ in (a) the $t$points are set half way between $w$points while
+ in (b) they are defined from an analytical function:
+ $z(k) = 5 \, (k  1/2)^3  45 \, (k  1/2)^2 + 140 \, (k  1/2)  150$.
+ Note the resulting difference between the value of the gridsize $\Delta_k$ and
+ those of the scale factor $e_k$.
+ }
+ \end{center}
+\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 132,5 +175,5 @@
Following \autoref{eq:PE_grad} and \autoref{eq:PE_lap}, the gradient of a variable $q$ defined at
a $t$point has its three components defined at $u$, $v$ and $w$points while
its Laplacian is defined at $t$point.
+its Laplacian is defined at the $t$point.
These operators have the following discrete forms in the curvilinear $s$coordinates system:
\[
@@ 171,6 +214,6 @@
\end{equation}
The vertical average over the whole water column denoted by an overbar becomes for a quantity $q$ which
is a masked field (i.e. equal to zero inside solid area):
+The vertical average over the whole water column is denoted by an overbar and is for
+a masked field $q$ (\ie a quantity that is equal to zero inside solid areas):
\begin{equation}
\label{eq:DOM_bar}
@@ 178,5 +221,5 @@
\end{equation}
where $H_q$ is the ocean depth, which is the masked sum of the vertical scale factors at $q$ points,
$k^b$ and $k^o$ are the bottom and surface $k$indices, and the symbol $k^o$ refers to a summation over
+$k^b$ and $k^o$ are the bottom and surface $k$indices, and the symbol $\sum \limits_k$ refers to a summation over
all grid points of the same type in the direction indicated by the subscript (here $k$).
@@ 193,6 +236,6 @@
vector points $(u,v,w)$.
Let $a$ and $b$ be two fields defined on the mesh, with value zero inside continental area.
Using integration by parts it can be shown that the differencing operators ($\delta_i$, $\delta_j$ and $\delta_k$)
+Let $a$ and $b$ be two fields defined on the mesh, with a value of zero inside continental areas.
+It can be shown that the differencing operators ($\delta_i$, $\delta_j$ and $\delta_k$)
are skewsymmetric linear operators, and further that the averaging operators $\overline{\cdots}^{\, i}$,
$\overline{\cdots}^{\, j}$ and $\overline{\cdots}^{\, k}$) are symmetric linear operators, \ie
@@ 228,10 +271,10 @@
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The array representation used in the \fortran code requires an integer indexing while
the analytical definition of the mesh (see \autoref{subsec:DOM_cell}) is associated with the use of
integer values for $t$points and both integer and integer and a half values for all the other points.
Therefore a specific integer indexing must be defined for points other than $t$points
+The array representation used in the \fortran code requires an integer indexing.
+However, the analytical definition of the mesh (see \autoref{subsec:DOM_cell}) is associated with the use of
+integer values for $t$points only while all the other points involve integer and a half values.
+Therefore, a specific integer indexing has been defined for points other than $t$points
(\ie velocity and vorticity gridpoints).
Furthermore, the direction of the vertical indexing has been changed so that the surface level is at $k = 1$.
+Furthermore, the direction of the vertical indexing has been reversed and the surface level set at $k = 1$.
% 
@@ 253,19 +296,18 @@
\label{subsec:DOM_Num_Index_vertical}
In the vertical, the chosen indexing requires special attention since the $k$axis is reorientated downward in
the \fortran code compared to the indexing used in the semi discrete equations and
+In the vertical, the chosen indexing requires special attention since the direction of the $k$axis in
+the \fortran code is the reverse of that used in the semi discrete equations and
given in \autoref{subsec:DOM_cell}.
The sea surface corresponds to the $w$level $k = 1$ which is the same index as $t$level just below
+The sea surface corresponds to the $w$level $k = 1$, which is the same index as the $t$level just below
(\autoref{fig:index_vert}).
The last $w$level ($k = jpk$) either corresponds to the ocean floor or is inside the bathymetry while
the last $t$level is always inside the bathymetry (\autoref{fig:index_vert}).
Note that for an increasing $k$ index, a $w$point and the $t$point just below have the same $k$ index,
in opposition to what is done in the horizontal plane where
it is the $t$point and the nearest velocity points in the direction of the horizontal axis that
have the same $i$ or $j$ index
+The last $w$level ($k = jpk$) either corresponds to or is below the ocean floor while
+the last $t$level is always outside the ocean domain (\autoref{fig:index_vert}).
+Note that a $w$-point and the directly underlying $t$-point have a common $k$ index (\ie $t$-points and their
+nearest $w$-point neighbour in the negative index direction), in contrast to the indexing on the horizontal plane where
+the $t$-point has the same index as the nearest velocity points in the positive direction of the respective horizontal axis
(compare the dashed area in \autoref{fig:index_hor} and \autoref{fig:index_vert}).
Since the scale factors are chosen to be strictly positive,
a \textit{minus sign} appears in the \fortran code \textit{before all the vertical derivatives} of
the discrete equations given in this documentation.
+a \textit{minus sign} is included in the \fortran implementations of \textit{all the vertical derivatives} of
+the discrete equations given in this manual in order to accommodate the opposing vertical index directions in implementation and documentation.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 276,6 +318,6 @@
\protect\label{fig:index_vert}
Vertical integer indexing used in the \fortran code.
 Note that the $k$axis is orientated downward.
 The dashed area indicates the cell in which variables contained in arrays have the same $k$index.
+ Note that the $k$axis is oriented downward.
+ The dashed area indicates the cell in which variables contained in arrays have a common $k$index.
}
\end{center}
@@ 283,190 +325,146 @@
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+% 
+% Domain configuration
+% 
+\section{Spatial domain configuration}
+\label{subsec:DOM_config}
+
+\nlst{namcfg}
+
+Two typical methods are available to specify the spatial domain
+configuration; they can be selected using the \np{ln\_read\_cfg}
+parameter in namelist \ngn{namcfg}.
+
+If \np{ln\_read\_cfg} is set to \forcode{.true.}, the domainspecific parameters
+and fields are read from a netCDF input file, whose name (without its .nc
+suffix) can be specified as the value of the \np{cn\_domcfg} parameter in
+namelist \ngn{namcfg}.
+
+If \np{ln\_read\_cfg} is set to \forcode{.false.}, the domainspecific
+parameters and fields can be provided (\eg analytically computed) by subroutines
+\mdl{usrdef\_hgr} and \mdl{usrdef\_zgr}. These subroutines can be supplied in
+the \path{MY_SRC} directory of the configuration, and default versions that
+configure the spatial domain for the GYRE reference configuration are present in
+the \path{src/OCE/USR} directory.
+
+In version 4.0 there are no longer any options for reading complex bathymetries and
+performing a vertical discretization at runtime. Whilst it is occasionally convenient
+to have a common bathymetry file and, for example, to run similar models with and
+without partial bottom boxes and/or sigma-coordinates, supporting such choices leads to
+overly complex code. Worse still is the difficulty of ensuring that model configurations
+intended to be identical are indeed so when the model domain itself can be altered by runtime
+selections. The code previously used to perform vertical discretization has been incorporated
+into an external tool (\path{tools/DOMAINcfg}) which is briefly described in appendix F.
+
+The next subsections summarise the parameter and fields related to the
+configuration of the whole model domain. These represent the minimum information
+that must be provided either via the \np{cn\_domcfg} file or set by code
+inserted into usersupplied versions of the \mdl{usrdef\_*} subroutines. The
+requirements are presented in three sections: the domain size
+(\autoref{subsec:DOM_size}), the horizontal mesh
+(\autoref{subsec:DOM_hgr}), and the vertical grid
+(\autoref{subsec:DOM_zgr}).
+
% 
% Domain Size
% 
\subsubsection{Domain size}
+\subsection{Domain size}
\label{subsec:DOM_size}
The total size of the computational domain is set by the parameters \np{jpiglo},
\np{jpjglo} and \np{jpkglo} in the $i$, $j$ and $k$ directions respectively.
Parameters $jpi$ and $jpj$ refer to the size of each processor subdomain when
the code is run in parallel using domain decomposition (\key{mpp\_mpi} defined,
see \autoref{sec:LBC_mpp}).

% ================================================================
% Domain: List of fields needed
% ================================================================
\section{Needed fields}
\label{sec:DOM_fields}
The ocean mesh (\ie the position of all the scalar and vector points) is defined by the transformation that
gives $(\lambda,\varphi,z)$ as a function of $(i,j,k)$.
The gridpoints are located at integer or integer and a half values of as indicated in \autoref{tab:cell}.
The associated scale factors are defined using the analytical first derivative of the transformation
\autoref{eq:scale_factors}.
Necessary fields for configuration definition are:

\begin{itemize}
\item
 Geographic position:
 longitude with \texttt{glamt}, \texttt{glamu}, \texttt{glamv}, \texttt{glamf} and
 latitude with \texttt{gphit}, \texttt{gphiu}, \texttt{gphiv}, \texttt{gphif}
 (all respectively at T, U, V and F point)
\item
 Coriolis parameter (if domain not on the sphere): \texttt{ff\_f} and \texttt{ff\_t}
 (at T and F point)
\item
 Scale factors:
 \texttt{e1t}, \texttt{e1u}, \texttt{e1v} and \texttt{e1f} (on i direction),
 \texttt{e2t}, \texttt{e2u}, \texttt{e2v} and \texttt{e2f} (on j direction) and
 \texttt{ie1e2u\_v}, \texttt{e1e2u}, \texttt{e1e2v}. \\
 \texttt{e1e2u}, \texttt{e1e2v} are u and v surfaces (if gridsize reduction in some straits),
 \texttt{ie1e2u\_v} is to flag set u and v surfaces are neither read nor computed.
\end{itemize}

These fields can be read in an domain input file which name is setted in \np{cn\_domcfg} parameter specified in
\ngn{namcfg}.

\nlst{namcfg}

Or they can be defined in an analytical way in \path{MY_SRC} directory of the configuration.
For Reference Configurations of NEMO input domain files are supplied by NEMO System Team.
For analytical definition of input fields two routines are supplied: \mdl{usrdef\_hgr} and \mdl{usrdef\_zgr}.
They are an example of GYRE configuration parameters, and they are available in \path{src/OCE/USR} directory,
they provide the horizontal and vertical mesh.
% 
% Needed fields
% 
%\subsection{List of needed fields to build DOMAIN}
%\label{subsec:DOM_fields_list}

+The total size of the computational domain is set by the parameters
+\np{jpiglo}, \np{jpjglo} and \np{jpkglo} for the $i$, $j$ and $k$
+directions, respectively. Note that the variables \forcode{jpi} and \forcode{jpj}
+refer to the size of each processor subdomain when the code is run in
+parallel using domain decomposition (\key{mpp\_mpi} defined, see
+\autoref{sec:LBC_mpp}).
+
+The name of the configuration is set through parameter \np{cn\_cfg},
+and the nominal resolution through parameter \np{nn\_cfg} (unless in
+the input file both of variables \forcode{ORCA} and \forcode{ORCA_index}
+are present, in which case \np{cn\_cfg} and \np{nn\_cfg} are set from these
+values accordingly).
+
+The global lateral boundary condition type is selected from 8 options
+using parameter \np{jperio}. See \autoref{sec:LBC_jperio} for
+details on the available options and the corresponding values for
+\np{jperio}.
% ================================================================
% Domain: Horizontal Grid (mesh)
% ================================================================
\section[Horizontal grid mesh (\textit{domhgr.F90})]
{Horizontal grid mesh (\protect\mdl{domhgr})}
\label{sec:DOM_hgr}

% 
% Coordinates and scale factors
% 
\subsection{Coordinates and scale factors}
\label{subsec:DOM_hgr_coord_e}

The ocean mesh (\ie the position of all the scalar and vector points) is defined by
the transformation that gives $(\lambda,\varphi,z)$ as a function of $(i,j,k)$.
The grid-points are located at integer or integer-and-a-half values of $(i,j,k)$ as indicated in \autoref{tab:cell}.
The associated scale factors are defined using the analytical first derivative of the transformation
\autoref{eq:scale_factors}.
These definitions are done in two modules, \mdl{domhgr} and \mdl{domzgr},
which provide the horizontal and vertical meshes, respectively.
This section deals with the horizontal mesh parameters.

In a horizontal plane, the location of all the model grid points is defined from
the analytical expressions of the longitude $\lambda$ and latitude $\varphi$ as a function of $(i,j)$.
The horizontal scale factors are calculated using \autoref{eq:scale_factors}.
For example, when the longitude and latitude are functions of a single argument
($i$ and $j$, respectively), which is the geographical configuration of the mesh,
the horizontal mesh definition reduces to defining the desired $\lambda(i)$, $\varphi(j)$,
and their derivatives $\lambda'(i)$, $\varphi'(j)$ in the \mdl{domhgr} module.
The model computes the gridpoint positions and scale factors in the horizontal plane as follows:
\begin{align*}
 \lambda_t &\equiv \text{glamt} = \lambda (i )
 &\varphi_t &\equiv \text{gphit} = \varphi (j ) \\
 \lambda_u &\equiv \text{glamu} = \lambda (i + 1/2)
 &\varphi_u &\equiv \text{gphiu} = \varphi (j ) \\
 \lambda_v &\equiv \text{glamv} = \lambda (i )
 &\varphi_v &\equiv \text{gphiv} = \varphi (j + 1/2) \\
 \lambda_f &\equiv \text{glamf} = \lambda (i + 1/2)
 &\varphi_f &\equiv \text{gphif} = \varphi (j + 1/2) \\
 e_{1t} &\equiv \text{e1t} = r_a \lambda'(i ) \; \cos\varphi(j ) 
 &e_{2t} &\equiv \text{e2t} = r_a \varphi'(j )  \\
 e_{1u} &\equiv \text{e1u} = r_a \lambda'(i + 1/2) \; \cos\varphi(j ) 
 &e_{2u} &\equiv \text{e2u} = r_a \varphi'(j )  \\
 e_{1v} &\equiv \text{e1v} = r_a \lambda'(i ) \; \cos\varphi(j + 1/2) 
 &e_{2v} &\equiv \text{e2v} = r_a \varphi'(j + 1/2)  \\
 e_{1f} &\equiv \text{e1f} = r_a \lambda'(i + 1/2) \; \cos\varphi(j + 1/2) 
 &e_{2f} &\equiv \text{e2f} = r_a \varphi'(j + 1/2) 
\end{align*}
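For a regular geographical mesh these expressions reduce to a short computation. The sketch below evaluates \texttt{glamt}, \texttt{gphit}, \texttt{e1t} and \texttt{e2t} for an illustrative 1$^\circ$ spacing, assuming the earth radius value $r_a = 6371229$~m used in \mdl{phycst} (the grid origin and spacing are arbitrary choices for the example):

```python
import math

RA = 6371229.0          # earth radius in meters (value assumed from phycst)
DEG = 1.0               # illustrative 1-degree grid spacing

def t_point(i, j, lam0=-180.0, phi0=0.0):
    """Position and scale factors of a t-point on a regular lat-lon mesh."""
    glamt = lam0 + DEG * i                            # lambda(i)
    gphit = phi0 + DEG * j                            # phi(j)
    rad = math.pi / 180.0
    e1t = RA * (DEG * rad) * math.cos(gphit * rad)    # r_a * lambda'(i) * cos(phi)
    e2t = RA * (DEG * rad)                            # r_a * phi'(j)
    return glamt, gphit, e1t, e2t

# at the equator one degree of longitude spans about 111 km, and e1t = e2t
_, _, e1t, e2t = t_point(10, 0)
assert abs(e1t - e2t) < 1e-6
```

Away from the equator, $e_{1t}$ shrinks with $\cos\varphi$ while $e_{2t}$ stays constant, which is exactly the anisotropy visible in global meshes.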
where the last letter of each computational name indicates the grid point considered and
$r_a$ is the earth radius (defined in \mdl{phycst} along with all universal constants).
Note that the horizontal position of, and scale factors at, $w$-points are exactly equal to those of $t$-points,
thus no specific arrays are defined at $w$-points.

Note that the definition of the scale factors
(\ie as the analytical first derivative of the transformation that
gives $(\lambda,\varphi,z)$ as a function of $(i,j,k)$)
is specific to the \NEMO model \citep{marti.madec.ea_JGR92}.
As an example, $e_{1t}$ is defined locally at a $t$-point,
whereas many other models on a C grid choose to define such a scale factor as
the distance between the $U$-points on each side of the $t$-point.
Relying on an analytical transformation has two advantages:
firstly, there is no ambiguity in the scale factors appearing in the discrete equations,
since they are first introduced in the continuous equations;
secondly, analytical transformations encourage good practice by the definition of smoothly varying grids
(rather than allowing the user to set arbitrary jumps in thickness between adjacent layers) \citep{treguier.dukowicz.ea_JGR96}.
An example of the effect of such a choice is shown in \autoref{fig:zgr_e3}.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]
 \begin{center}
 \includegraphics[width=\textwidth]{Fig_zgr_e3}
 \caption{
 \protect\label{fig:zgr_e3}
 Comparison of (a) traditional definitions of gridpoint position and gridsize in the vertical,
 and (b) analytically derived gridpoint position and scale factors.
 For both grids here, the same $w$point depth has been chosen but
 in (a) the $t$points are set half way between $w$points while
 in (b) they are defined from an analytical function:
 $z(k) = 5 \, (k - 1/2)^3 - 45 \, (k - 1/2)^2 + 140 \, (k - 1/2) - 150$.
 Note the resulting difference between the value of the gridsize $\Delta_k$ and
 those of the scale factor $e_k$.
 }
 \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
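The distinction between the discrete grid size $\Delta_k$ and the analytical scale factor $e_k$ illustrated in \autoref{fig:zgr_e3} can be checked numerically with the cubic depth function quoted in the caption (the numbers are purely illustrative):

```python
def z(k):
    # analytical depth function from the caption of the figure
    return 5 * (k - 0.5) ** 3 - 45 * (k - 0.5) ** 2 + 140 * (k - 0.5) - 150

def e(k):
    # analytical scale factor: the first derivative dz/dk
    return 15 * (k - 0.5) ** 2 - 90 * (k - 0.5) + 140

dz = z(2) - z(1)     # discrete grid size between levels 1 and 2
ek = e(1.5)          # scale factor evaluated half-way between the levels
assert dz != ek      # the two definitions differ, as the figure shows
```

Here `dz` is 66.25 while `ek` is 65.0: close, but not equal, which is the point made by the figure.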

% 
% Choice of horizontal grid
% 
\subsection{Choice of horizontal grid}
\label{subsec:DOM_hgr_msh_choice}

% 
% Grid files
% 
\subsection{Output grid files}
\label{subsec:DOM_hgr_files}

All the arrays relating to a particular ocean model configuration (grid-point position, scale factors, masks)
can be saved in files if \np{nn\_msh} $\not = 0$ (namelist variable in \ngn{namdom}).
This can be particularly useful for plots and off-line diagnostics.
In some cases, the user may choose to make a local modification of a scale factor in the code.
This is the case in global configurations when restricting the width of a specific strait
(usually a one-grid-point strait that happens to be too wide due to insufficient model resolution).
An example is the Gibraltar Strait in the ORCA2 configuration.
When such modifications are done,
the output grid written when \np{nn\_msh} $\not = 0$ is no longer equal to the input grid.
\subsection{Horizontal grid mesh (\protect\mdl{domhgr})}
\label{subsec:DOM_hgr}

% ================================================================
% Domain: List of hgr-related fields needed
% ================================================================
\subsubsection{Required fields}
\label{sec:DOM_hgr_fields}
The explicit specification of a range of mesh-related fields is required for the definition of a configuration.
These include:

\begin{Verbatim}[fontsize=\tiny]
int jpiglo, jpjglo, jpkglo /* global domain sizes */
int jperio /* lateral global domain b.c. */
double glamt, glamu, glamv, glamf /* geographic longitude (t, u, v and f points respectively) */
double gphit, gphiu, gphiv, gphif /* geographic latitude */
double e1t, e1u, e1v, e1f /* horizontal scale factors */
double e2t, e2u, e2v, e2f /* horizontal scale factors */
\end{Verbatim}

The values of the geographic longitude and latitude arrays at indices $i,j$ correspond to
the analytical expressions of the longitude $\lambda$ and latitude $\varphi$ as a function of $(i,j)$,
evaluated at the values specified in \autoref{tab:cell} for the respective grid-point position.
The calculation of the values of the horizontal scale factor arrays in general additionally involves
partial derivatives of $\lambda$ and $\varphi$ with respect to $i$ and $j$,
evaluated for the same arguments as $\lambda$ and $\varphi$.
\subsubsection{Optional fields}
\begin{Verbatim}[fontsize=\tiny]
 /* Optional: */
int ORCA, ORCA_index /* configuration name, configuration resolution */
double e1e2u, e1e2v /* U and V surfaces (if grid size reduction in some straits) */
double ff_f, ff_t /* Coriolis parameter (if not on the sphere) */
\end{Verbatim}

NEMO can support the local reduction of key strait widths by altering individual values of
\texttt{e1u} or \texttt{e1v} at the appropriate locations.
This is particularly useful for locations such as the Gibraltar or Indonesian Throughflow
pinch points (see \autoref{sec:MISC_strait} for illustrated examples).
The key is to reduce the faces of the $T$-cell (\ie change the value of
the horizontal scale factors at the $u$- or $v$-point) but not the volume of the cells.
Doing otherwise can lead to numerical instability issues.
In normal operation the surface areas are computed from
$\texttt{e1u} * \texttt{e2u}$ and $\texttt{e1v} * \texttt{e2v}$ but in
cases where a grid-size reduction is required, the unaltered surface areas at $u$ and $v$
grid points (\texttt{e1e2u} and \texttt{e1e2v}, respectively) must be read or pre-computed
in \mdl{usrdef\_hgr}. If these arrays are present in the \np{cn\_domcfg} file they are
read and the internal computation is suppressed. Versions of \mdl{usrdef\_hgr} which set
their own values of \texttt{e1e2u} and \texttt{e1e2v} should set the surface-area
computation flag \texttt{ie1e2u\_v} to a non-zero value to suppress their recomputation.

\smallskip
Similar logic applies to the other optional fields: \texttt{ff\_f} and \texttt{ff\_t},
which can be used to provide the Coriolis parameter at $f$- and $t$-points respectively if the
mesh is not on a sphere. If present, these fields will be read and used and the normal
calculation ($2 \Omega \sin(\varphi)$) suppressed. Versions of \mdl{usrdef\_hgr} which set
their own values of \texttt{ff\_f} and \texttt{ff\_t} should set the Coriolis computation
flag \texttt{iff} to a non-zero value to suppress their recomputation.

Note that longitudes, latitudes, and scale factors at $w$-points are exactly
equal to those of $t$-points, thus no specific arrays are defined at $w$-points.
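The strait-narrowing logic described above can be illustrated with plain numbers: the face of a $u$-cell is reduced while a pre-computed surface retains the unaltered product, so the cell volume seen by the surface-area computation is preserved. A schematic sketch only, not NEMO code:

```python
# reference scale factors at a u-point inside a strait (meters)
e1u, e2u = 1.0e5, 1.0e5
e1e2u = e1u * e2u        # surface stored BEFORE any reduction (as read or pre-computed)

# narrow the strait: halve the width of the u-face
e2u *= 0.5

assert e1u * e2u == 0.5e10   # fluxes through the strait see the reduced face
assert e1e2u == 1.0e10       # the stored surface, hence the cell volume, is unchanged
```

This is exactly why \texttt{e1e2u} and \texttt{e1e2v} must be captured before any local modification: recomputing them from the altered scale factors would shrink the cell volume as well as the face.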
% ================================================================
% Domain: Vertical Grid (domzgr)
% ================================================================
\section[Vertical grid (\textit{domzgr.F90})]
{Vertical grid (\protect\mdl{domzgr})}
\label{sec:DOM_zgr}
%namdom

\nlst{namdom}
%
Variables are defined through the \ngn{namdom} namelist.
In the vertical, the model mesh is determined by four things:
\begin{enumerate}
\item the bathymetry given in meters;
\item the number of levels of the model (\jp{jpk});
\item the analytical transformation $z(i,j,k)$ and the vertical scale factors (derivatives of the transformation); and
\item the masking system, \ie the number of wet model levels at each $(i,j)$ column of points.
\end{enumerate}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The choice of a vertical coordinate, even if it is made through \ngn{namzgr} namelist parameters,
must be made once and for all at the beginning of an experiment.
It is not intended as an option which can be enabled or disabled in the middle of an experiment.
Three main choices are offered (\autoref{fig:z_zps_s_sps}):
$z$-coordinate with full step bathymetry (\np{ln\_zco}\forcode{ = .true.}),
$z$-coordinate with partial step bathymetry (\np{ln\_zps}\forcode{ = .true.}),
or generalized $s$-coordinate (\np{ln\_sco}\forcode{ = .true.}).
Hybrid versions of the three main coordinates are available:
$s$-$z$ or $s$-$zps$ coordinate (\autoref{fig:z_zps_s_sps}).
By default a nonlinear free surface is used: the coordinate follows the time variation of the free surface so that
the transformation is time dependent: $z(i,j,k,t)$ (\autoref{fig:z_zps_s_sps}).
When a linear free surface is assumed (\np{ln\_linssh}\forcode{ = .true.}),
the vertical coordinates are fixed in time, but the seawater can move up and down across the $z_0$ surface
(in other words, the top of the ocean is not a rigid lid).
The last choice in terms of vertical coordinate concerns the presence (or not) in
the model domain of ocean cavities beneath ice shelves.
Setting \np{ln\_isfcav} to true allows ocean cavities to be represented; otherwise they are filled in.
This option is currently only available in $z$- or $zps$-coordinate,
and partial steps are also applied at the ocean/ice shelf interface.

Contrary to the horizontal grid, the vertical grid is computed in the code and no provision is made for
reading it from a file.
The only input file is the bathymetry (in meters) (\ifile{bathy\_meter})
\footnote{
 N.B. in full step $z$-coordinate, a \ifile{bathy\_level} file can replace the \ifile{bathy\_meter} file,
 so that the computation of the number of wet ocean points in each water column is bypassed}.
If \np{ln\_isfcav}\forcode{ = .true.}, an extra input file (\ifile{isf\_draft\_meter}) describing
the ice shelf draft (in meters) is needed.

After reading the bathymetry, the algorithm for vertical grid definition differs between the different options:
\begin{description}
\item[\textit{zco}]
 set a reference coordinate transformation $z_0(k)$, and set $z(i,j,k,t) = z_0(k)$.
\item[\textit{zps}]
 set a reference coordinate transformation $z_0(k)$, and calculate the thickness of the deepest level at
 each $(i,j)$ point using the bathymetry, to obtain the final threedimensional depth and scale factor arrays.
\item[\textit{sco}]
 smooth the bathymetry to fulfill the hydrostatic consistency criteria and
 set the threedimensional transformation.
\item[\textit{sz} and \textit{szps}]
 smooth the bathymetry to fulfill the hydrostatic consistency criteria and
 set the threedimensional transformation $z(i,j,k)$,
 and possibly introduce masking of extra land points to better fit the original bathymetry file.
\end{description}
%%%
\gmcomment{ add the description of the smoothing: envelop topography...}
%%%

Unless a linear free surface is used (\ie \np{ln\_linssh}\forcode{ = .false.}),
the arrays describing the grid point depths and vertical scale factors are three sets of
three-dimensional arrays $(i,j,k)$ defined at the \textit{before}, \textit{now} and \textit{after} time steps.
The time at which they are defined is indicated by a suffix: $\_b$, $\_n$, or $\_a$, respectively.
They are updated at each model time step using a fixed reference coordinate system whose
computational names have a $\_0$ suffix.
When the linear free surface option is used (\np{ln\_linssh}\forcode{ = .true.}), the \textit{before},
\textit{now} and \textit{after} arrays are simply set once and for all to their reference counterparts.

% 
% Meter Bathymetry
% 
\subsection{Meter bathymetry}
\label{subsec:DOM_bathy}

Three options are possible for defining the bathymetry, according to the namelist variable \np{nn\_bathy}
(found in \ngn{namdom} namelist):
\begin{description}
\item[\np{nn\_bathy}\forcode{ = 0}]:
 a flatbottom domain is defined.
 The total depth $z_w (jpk)$ is given by the coordinate transformation.
 The domain can either be a closed basin or a periodic channel depending on the parameter \np{jperio}.
\item[\np{nn\_bathy}\forcode{ = -1}]:
 a domain with a bump of topography one third of the domain width at the central latitude.
 This is meant for the "EEL-R5" configuration, a periodic or open boundary channel with a seamount.
\item[\np{nn\_bathy}\forcode{ = 1}]:
 read a bathymetry and ice shelf draft (if needed).
 The \ifile{bathy\_meter} file (Netcdf format) provides the ocean depth (positive, in meters) at
 each grid point of the model grid.
 The bathymetry is usually built by interpolating a standard bathymetry product (\eg ETOPO2) onto
 the horizontal ocean mesh.
 Defining the bathymetry also defines the coastline: where the bathymetry is zero,
 no model levels are defined (all levels are masked).

 The \ifile{isfdraft\_meter} file (Netcdf format) provides the ice shelf draft (positive, in meters) at
 each grid point of the model grid.
 This file is only needed if \np{ln\_isfcav}\forcode{ = .true.}.
 Defining the ice shelf draft will also define the ice shelf edge and the grounding line position.
\end{description}
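The way the bathymetry defines the coastline and the number of wet levels in each water column can be sketched as follows. The helper name `wet_levels` and the counting rule (a $t$-level is wet if its upper $w$-interface lies above the sea floor) are illustrative assumptions, not the exact NEMO algorithm:

```python
def wet_levels(bathy_ij, gdepw_1d):
    """Number of wet t-levels in a column of ocean depth bathy_ij (meters, >= 0)."""
    if bathy_ij <= 0.0:
        return 0      # zero bathymetry: land point, all levels masked
    # count t-levels whose upper w-interface lies above the sea floor
    return sum(1 for zw in gdepw_1d if zw < bathy_ij)

gdepw = [0.0, 10.0, 20.0, 30.0, 40.0]     # illustrative reference w-depths
assert wet_levels(0.0, gdepw) == 0        # coastline: masked everywhere
assert wet_levels(25.0, gdepw) == 3       # interfaces at 0, 10, 20 m lie above the floor
```

The zero-bathymetry branch is what makes "defining the bathymetry also defines the coastline" literal: no wet levels means a fully masked column.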

When a global ocean is coupled to an atmospheric model it is better to represent all large water bodies
(\eg the Great Lakes, the Caspian Sea, ...) even if the model resolution does not allow their communication with
the rest of the ocean.
This is unnecessary when the ocean is forced by fixed atmospheric conditions,
so these seas can be removed from the ocean domain.
The user has the option to set the bathymetry in closed seas to zero (see \autoref{sec:MISC_closea}),
but the code has to be adapted to the user's configuration.

% 
% zcoordinate and reference coordinate transformation
% 
\subsection[$Z$-coordinate (\forcode{ln_zco = .true.}) and ref. coordinate]
{$Z$-coordinate (\protect\np{ln\_zco}\forcode{ = .true.}) and reference coordinate}
\label{subsec:DOM_zco}

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!tb]
 \begin{center}
 \includegraphics[width=\textwidth]{Fig_zgr}
 \caption{
 \protect\label{fig:zgr}
 Default vertical mesh for ORCA2: 30 ocean levels (L30).
 Vertical level functions for (a) Tpoint depth and (b) the associated scale factor as computed from
 \autoref{eq:DOM_zgr_ana_1} using \autoref{eq:DOM_zgr_coef} in $z$coordinate.
 }
 \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

The reference coordinate transformation $z_0(k)$ defines the arrays $gdept_0$ and $gdepw_0$ for $t$ and $w$points,
respectively.
As indicated in \autoref{fig:index_vert}, \jp{jpk} is the number of $w$-levels.
$gdepw_0(1)$ is the ocean surface.
There are at most \jp{jpk}$-1$ $t$-points inside the ocean,
the additional $t$-point at $jk = jpk$ being below the sea floor and not used.
The vertical location of $w$ and $t$levels is defined from the analytic expression of the depth $z_0(k)$ whose
analytical derivative with respect to $k$ provides the vertical scale factors.
The user must provide the analytical expression of both $z_0$ and its first derivative with respect to $k$.
This is done in routine \mdl{domzgr} through statement functions,
using parameters provided in the \ngn{namcfg} namelist.

It is possible to define a simple regular vertical grid by giving zero stretching (\np{ppacr}\forcode{ = 0}).
In that case, the parameters \jp{jpk} (number of $w$levels) and
\np{pphmax} (total ocean depth in meters) fully define the grid.

For climaterelated studies it is often desirable to concentrate the vertical resolution near the ocean surface.
The following function is proposed as a standard for a $z$coordinate (with either full or partial steps):
\begin{gather}
 \label{eq:DOM_zgr_ana_1}
 z_0 (k) = h_{sur} - h_0 \; k - h_1 \; \log \big[ \cosh ((k - h_{th}) / h_{cr}) \big] \\
 e_3^0(k) = \lt| - h_0 - h_1 \; \tanh \big[ (k - h_{th}) / h_{cr} \big] \rt|
\end{gather}
where $k = 1$ to \jp{jpk} for $w$-levels and $k = 1$ to $jpk - 1$ for $T$-levels.
Such an expression allows us to define a nearly uniform vertical location of levels at the ocean top and bottom with
a smooth hyperbolic tangent transition in between (\autoref{fig:zgr}).

If the ice shelf cavities are opened (\np{ln\_isfcav}\forcode{ = .true.}), the definition of $z_0$ is the same.
However, definition of $e_3^0$ at $t$ and $w$points is respectively changed to:
\begin{equation}
 \label{eq:DOM_zgr_ana_2}
 \begin{split}
 e_3^T(k) &= z_W (k + 1) - z_W (k ) \\
 e_3^W(k) &= z_T (k ) - z_T (k - 1)
 \end{split}
\end{equation}
This formulation decreases the self-generated circulation in the ice shelf cavity
(which can, in extreme cases, lead to a blow-up).\\

The most commonly used vertical grid for ORCA2 has $10~m$ ($500~m$) resolution in the surface (bottom) layers and
a depth which varies from 0 at the sea surface to a minimum of $-5000~m$.
This leads to the following conditions:
\begin{equation}
 \label{eq:DOM_zgr_coef}
 \begin{array}{ll}
 e_3 (1 + 1/2) = 10. & z(1 ) = 0. \\
 e_3 (jpk - 1/2) = 500. & z(jpk) = -5000.
 \end{array}
\end{equation}

With the choice of the stretching $h_{cr} = 3$ and the number of levels \jp{jpk}~$= 31$,
the four coefficients $h_{sur}$, $h_0$, $h_1$, and $h_{th}$ in
\autoref{eq:DOM_zgr_ana_1} have been determined such that
\autoref{eq:DOM_zgr_coef} is satisfied, through an optimisation procedure using a bisection method.
For the first standard ORCA2 vertical grid this led to the following values:
$h_{sur} = -4762.96$, $h_0 = 255.58$, $h_1 = 245.5813$, and $h_{th} = 21.43336$.
The resulting depths and scale factors as a function of the model levels are shown in
\autoref{fig:zgr} and given in \autoref{tab:orca_zgr}.
Those values correspond to the parameters \np{ppsur}, \np{ppa0}, \np{ppa1}, \np{ppkth} in \ngn{namcfg} namelist.
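These coefficients can be checked numerically. The sketch below uses the form of the transformation as the author reads it in \mdl{domzgr} (note the assumption that the $\log\cosh$ term is multiplied by the stretching factor $h_{cr} =$ \np{ppacr} in the code), with the full-precision namelist values, and recovers the surface and bottom conditions of \autoref{eq:DOM_zgr_coef}:

```python
import math

# ORCA2 L31 coefficients (ppsur, ppa0, ppa1, ppkth, ppacr namelist values)
h_sur, h_0, h_1 = -4762.96143546300, 255.58049070440, 245.58132232490
h_th, h_cr, jpk = 21.43336197938, 3.0, 31

def gdepw(k):
    # reference w-level depth, positive downwards (assumed code form)
    return h_sur + h_0 * k + h_1 * h_cr * math.log(math.cosh((k - h_th) / h_cr))

def e3w(k):
    # vertical scale factor: analytical derivative of gdepw with respect to k
    return h_0 + h_1 * math.tanh((k - h_th) / h_cr)

assert abs(gdepw(1)) < 0.5               # surface condition: z(1) ~ 0
assert abs(gdepw(jpk) - 5000.0) < 0.5    # bottom condition: 5000 m total depth
assert abs(e3w(1) - 10.0) < 0.05         # ~10 m resolution in the surface layer
```

The resulting depths and scale factors match the values listed for ORCA2 in \autoref{tab:orca_zgr}.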

Rather than entering parameters $h_{sur}$, $h_0$, and $h_1$ directly, it is possible to recalculate them.
In that case the user sets \np{ppsur}~$=$~\np{ppa0}~$=$~\np{ppa1}~$= 999999$.,
in \ngn{namcfg} namelist, and specifies instead the four following parameters:
\begin{itemize}
\item
 \np{ppacr}~$= h_{cr}$: stretching factor (non-dimensional).
 The larger \np{ppacr}, the smaller the stretching.
 Values from $3$ to $10$ are usual.
\item
 \np{ppkth}~$= h_{th}$: approximately the model level at which maximum stretching occurs
 (non-dimensional, usually of order 1/2 or 2/3 of \jp{jpk}).
\item
 \np{ppdzmin}: minimum thickness for the top layer (in meters).
\item
 \np{pphmax}: total depth of the ocean (meters).
\end{itemize}
As an example, for the $45$ layers used in the DRAKKAR configuration those parameters are:
\jp{jpk}~$= 46$, \np{ppacr}~$= 9$, \np{ppkth}~$= 23.563$, \np{ppdzmin}~$= 6~m$, \np{pphmax}~$= 5750~m$.

Note that in v4.0 the linear free surface option is implemented as a special case of
the nonlinear free surface.
This is computationally wasteful since it uses the structures for time-varying 3D metrics
for fields that (in the linear free surface case) are fixed.
However, the linear free surface is rarely used and implementing it this way means that
a single configuration file can support both options.

Note also that the settings \np{ln\_zco}, \np{ln\_zps}, \np{ln\_sco} and \np{ln\_isfcav}
mentioned in the surrounding sections appear to be namelist options, but they are no longer
truly namelist options for NEMO.
Their values are written to and read from the domain configuration file and
they should be treated as fixed parameters for a particular configuration.
They are namelist options for the \texttt{DOMAINcfg} tool that can be used to build the
configuration file, and they serve both to provide a record of the choices made whilst
building the configuration and to trigger the appropriate code blocks within NEMO.
These values should not be altered in the \np{cn\_domcfg} file;
the decision on these choices must be made when the \np{cn\_domcfg} file is constructed.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{table}
 \begin{center}
 \begin{tabular}{crrrr}
 \hline
 \textbf{LEVEL} & \textbf{gdept\_1d} & \textbf{gdepw\_1d} & \textbf{e3t\_1d } & \textbf{e3w\_1d} \\
 \hline
 1 & \textbf{ 5.00} & 0.00 & \textbf{ 10.00} & 10.00 \\
 \hline
 2 & \textbf{ 15.00} & 10.00 & \textbf{ 10.00} & 10.00 \\
 \hline
 3 & \textbf{ 25.00} & 20.00 & \textbf{ 10.00} & 10.00 \\
 \hline
 4 & \textbf{ 35.01} & 30.00 & \textbf{ 10.01} & 10.00 \\
 \hline
 5 & \textbf{ 45.01} & 40.01 & \textbf{ 10.01} & 10.01 \\
 \hline
 6 & \textbf{ 55.03} & 50.02 & \textbf{ 10.02} & 10.02 \\
 \hline
 7 & \textbf{ 65.06} & 60.04 & \textbf{ 10.04} & 10.03 \\
 \hline
 8 & \textbf{ 75.13} & 70.09 & \textbf{ 10.09} & 10.06 \\
 \hline
 9 & \textbf{ 85.25} & 80.18 & \textbf{ 10.17} & 10.12 \\
 \hline
 10 & \textbf{ 95.49} & 90.35 & \textbf{ 10.33} & 10.24 \\
 \hline
 11 & \textbf{ 105.97} & 100.69 & \textbf{ 10.65} & 10.47 \\
 \hline
 12 & \textbf{ 116.90} & 111.36 & \textbf{ 11.27} & 10.91 \\
 \hline
 13 & \textbf{ 128.70} & 122.65 & \textbf{ 12.47} & 11.77 \\
 \hline
 14 & \textbf{ 142.20} & 135.16 & \textbf{ 14.78} & 13.43 \\
 \hline
 15 & \textbf{ 158.96} & 150.03 & \textbf{ 19.23} & 16.65 \\
 \hline
 16 & \textbf{ 181.96} & 169.42 & \textbf{ 27.66} & 22.78 \\
 \hline
 17 & \textbf{ 216.65} & 197.37 & \textbf{ 43.26} & 34.30 \\
 \hline
 18 & \textbf{ 272.48} & 241.13 & \textbf{ 70.88} & 55.21 \\
 \hline
 19 & \textbf{ 364.30} & 312.74 & \textbf{ 116.11} & 90.99 \\
 \hline
 20 & \textbf{ 511.53} & 429.72 & \textbf{ 181.55} & 146.43 \\
 \hline
 21 & \textbf{ 732.20} & 611.89 & \textbf{ 261.03} & 220.35 \\
 \hline
 22 & \textbf{ 1033.22} & 872.87 & \textbf{ 339.39} & 301.42 \\
 \hline
 23 & \textbf{ 1405.70} & 1211.59 & \textbf{ 402.26} & 373.31 \\
 \hline
 24 & \textbf{ 1830.89} & 1612.98 & \textbf{ 444.87} & 426.00 \\
 \hline
 25 & \textbf{ 2289.77} & 2057.13 & \textbf{ 470.55} & 459.47 \\
 \hline
 26 & \textbf{ 2768.24} & 2527.22 & \textbf{ 484.95} & 478.83 \\
 \hline
 27 & \textbf{ 3257.48} & 3011.90 & \textbf{ 492.70} & 489.44 \\
 \hline
 28 & \textbf{ 3752.44} & 3504.46 & \textbf{ 496.78} & 495.07 \\
 \hline
 29 & \textbf{ 4250.40} & 4001.16 & \textbf{ 498.90} & 498.02 \\
 \hline
 30 & \textbf{ 4749.91} & 4500.02 & \textbf{ 500.00} & 499.54 \\
 \hline
 31 & \textbf{ 5250.23} & 5000.00 & \textbf{ 500.56} & 500.33 \\
 \hline
 \end{tabular}
 \end{center}
 \caption{
 \protect\label{tab:orca_zgr}
 Default vertical mesh in $z$-coordinate for the 30-layer ORCA2 configuration as computed from
 \autoref{eq:DOM_zgr_ana_1} using the coefficients given in \autoref{eq:DOM_zgr_coef}.
 }
\end{table}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% 
% zcoordinate with partial step
% 
\subsection[$Z$-coordinate with partial step (\forcode{ln_zps = .true.})]
{$Z$-coordinate with partial step (\protect\np{ln\_zps}\forcode{ = .true.})}
\label{subsec:DOM_zps}
%namdom

\nlst{namdom}
%

With the $z$-coordinate partial step option,
the depths of the model levels are defined by the reference analytical function $z_0(k)$ as described in
the previous section, \textit{except} in the bottom layer.
The thickness of the bottom layer is allowed to vary as a function of geographical location $(\lambda,\varphi)$ to
allow a better representation of the bathymetry, especially in the case of small slopes
(where the bathymetry varies by less than one level thickness from one grid point to the next).
The reference layer thicknesses $e_{3t}^0$ have been defined in the absence of bathymetry.
With partial steps, layers from 1 to \jp{jpk}$-2$ can have a thickness smaller than $e_{3t}(jk)$.
The model deepest layer (\jp{jpk}$-1$) is allowed to have either a smaller or larger thickness than $e_{3t}(jpk)$:
the maximum thickness allowed is $2*e_{3t}(jpk - 1)$.
This has to be kept in mind when specifying the maximum depth \np{pphmax} in partial steps in
the \ngn{namdom} namelist:
for example, with \np{pphmax}~$= 5750~m$ for the DRAKKAR 45 layer grid,
the maximum ocean depth allowed is actually $6000~m$ (the default thickness $e_{3t}(jpk - 1)$ being $250~m$).
Two variables in the \ngn{namdom} namelist are used to define the partial step vertical grid.
The minimum water thickness (in meters) allowed for a cell partially filled with bathymetry at level $jk$ is
the minimum of \np{rn\_e3zps\_min} (thickness in meters, usually $20~m$) and $e_{3t}(jk)*$\np{rn\_e3zps\_rat}
(a fraction, usually 10\%, of the default thickness $e_{3t}(jk)$).
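The resulting minimum-thickness rule can be written compactly. A sketch only, with the default values of \np{rn\_e3zps\_min} and \np{rn\_e3zps\_rat} quoted above:

```python
def e3zps_min_allowed(e3t_k, rn_e3zps_min=20.0, rn_e3zps_rat=0.1):
    """Minimum thickness (m) allowed for a partially filled cell at level jk."""
    # the binding limit is whichever is smaller: the absolute floor,
    # or the fixed fraction of the reference thickness at that level
    return min(rn_e3zps_min, e3t_k * rn_e3zps_rat)

# thin upper levels: the 10% fraction is the binding limit
assert e3zps_min_allowed(100.0) == 10.0
# thick deep levels (e.g. 500 m): the 20 m floor is the binding limit
assert e3zps_min_allowed(500.0) == 20.0
```

Taking the minimum of the two limits prevents vanishingly thin bottom cells near the surface while still allowing substantial partial filling of the thick deep levels.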

\gmcomment{ \colorbox{yellow}{Add a figure here of pstep especially at last ocean level } }

% 
% scoordinate
% 
\subsection[$S$-coordinate (\forcode{ln_sco = .true.})]
{$S$-coordinate (\protect\np{ln\_sco}\forcode{ = .true.})}
\label{subsec:DOM_sco}
%nam_zgr_sco
%
%\nlst{namzgr_sco}
%
Options are defined in \ngn{namzgr\_sco}.
In $s$-coordinate (\np{ln\_sco}\forcode{ = .true.}), the depth and thickness of the model levels are defined from
the product of a depth field and either a stretching function or its derivative, respectively:

\begin{align*}
 % \label{eq:DOM_sco_ana}
 z(k) &= h(i,j) \; z_0 (k) \\
 e_3(k) &= h(i,j) \; z_0'(k)
\end{align*}

where $h$ is the depth of the last $w$-level ($z_0(k)$) defined at the $t$-point location in the horizontal and
$z_0(k)$ is a function which varies from $0$ at the sea surface to $1$ at the ocean bottom.
The depth field $h$ is not necessarily the ocean depth,
since a mixed step-like and bottom-following representation of the topography can be used
(\autoref{fig:z_zps_s_sps}) or an envelope bathymetry can be defined (\autoref{fig:z_zps_s_sps}).
The namelist parameter \np{rn\_rmax} determines the slope at which
the terrain-following coordinate intersects the sea bed and becomes a pseudo $z$-coordinate.
The coordinate can also be hybridised by specifying \np{rn\_sbot\_min} and \np{rn\_sbot\_max} as
the minimum and maximum depths at which the terrain-following vertical coordinate is calculated.

Options for stretching the coordinate are provided as examples,
but care must be taken to ensure that the vertical stretch used is appropriate for the application.

The original default NEMO $s$-coordinate stretching is available if neither of the other options is specified as true
(\np{ln\_s\_SH94}\forcode{ = .false.} and \np{ln\_s\_SF12}\forcode{ = .false.}).
This uses a depth-independent $\tanh$ function for the stretching \citep{madec.delecluse.ea_JPO96}:

\[
 z = s_{min} + C (s) (H  s_{min})
 % \label{eq:SH94_1}
\]

where $s_{min}$ is the depth at which the $s$-coordinate stretching starts and
allows a $z$-coordinate to be placed on top of the stretched coordinate,
and $z$ is the depth (negative down from the sea surface).
\begin{gather*}
 s = - \frac{k}{n - 1} \quad \text{and} \quad 0 \leq k \leq n - 1
 % \label{eq:DOM_s}
 \\
 % \label{eq:DOM_sco_function}
 C(s) = \frac{[\tanh(\theta \, (s + b)) - \tanh(\theta \, b)]}{2 \; \sinh(\theta)}
\end{gather*}
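A direct transcription of this stretching is straightforward to check. The sketch below uses illustrative values for $\theta$ and $b$ (they are not defaults taken from the namelist):

```python
import math

def C(s, theta=6.0, b=0.5):
    # depth-independent tanh stretching (Madec et al. 1996 form)
    return (math.tanh(theta * (s + b)) - math.tanh(theta * b)) / (2.0 * math.sinh(theta))

assert abs(C(0.0)) < 1e-12     # no displacement at the surface (s = 0)
assert C(-1.0) < 0.0           # negative (downward) at the bottom (s = -1)
# C is monotonic in s, so levels cannot cross
vals = [C(-k / 10.0) for k in range(11)]
assert all(a > b for a, b in zip(vals, vals[1:]))
```

Monotonicity of $C(s)$ is the property that guarantees the stretched levels remain ordered in depth.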

A stretching function,
modified from the commonly used \citet{song.haidvogel_JCP94} stretching (\np{ln\_s\_SH94}\forcode{ = .true.}),
is also available and is more commonly used for shelf seas modelling:

\[
 C(s) = (1 - b) \frac{\sinh(\theta s)}{\sinh(\theta)}
 + b \frac{\tanh \lt[ \theta \lt(s + \frac{1}{2} \rt) \rt] - \tanh \lt( \frac{\theta}{2} \rt)}
 { 2 \tanh \lt( \frac{\theta}{2} \rt)}
 % \label{eq:SH94_2}
\]
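This stretching satisfies $C(0) = 0$ and $C(-1) = -1$ by construction, which can be verified directly (a sketch with illustrative $\theta$ and $b$ values, not namelist defaults):

```python
import math

def C_sh94(s, theta=6.0, b=0.8):
    # Song and Haidvogel (1994) style stretching, in the modified form above
    term1 = (1.0 - b) * math.sinh(theta * s) / math.sinh(theta)
    term2 = b * (math.tanh(theta * (s + 0.5)) - math.tanh(theta / 2.0)) \
            / (2.0 * math.tanh(theta / 2.0))
    return term1 + term2

assert abs(C_sh94(0.0)) < 1e-12        # surface maps to the surface
assert abs(C_sh94(-1.0) + 1.0) < 1e-12 # bottom maps to the bottom
```

Because the end points are pinned for any $\theta$ and $b$, the two parameters only redistribute resolution between surface and bottom, as illustrated in \autoref{fig:sco_function}.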

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]
 \begin{center}
 \includegraphics[width=\textwidth]{Fig_sco_function}
 \caption{
 \protect\label{fig:sco_function}
 Examples of the stretching function applied to a seamount;
 from left to right: surface, surface and bottom, and bottom intensified resolutions
 }
 \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

where $H_c$ is the critical depth (\np{rn\_hc}) at which the coordinate transitions from pure $\sigma$ to
the stretched coordinate, and $\theta$ (\np{rn\_theta}) and $b$ (\np{rn\_bb}) are the surface and
bottom control parameters such that $0 \leqslant \theta \leqslant 20$, and $0 \leqslant b \leqslant 1$.
$b$ has been designed to allow surface and/or bottom increase of the vertical resolution
(\autoref{fig:sco_function}).

Another example has been provided at version 3.5 (\np{ln\_s\_SF12}) that allows a fixed surface resolution in
an analytical terrain-following stretching \citep{siddorn.furner_OM13}.
In this case a stretching function $\gamma$ is defined such that:

\begin{equation}
 z = - \gamma h \quad \text{with} \quad 0 \leq \gamma \leq 1
 % \label{eq:z}
\end{equation}

The function is defined with respect to $\sigma$, the unstretched terrainfollowing coordinate:

\begin{gather*}
 % \label{eq:DOM_gamma_deriv}
 \gamma = A \lt( \sigma - \frac{1}{2} (\sigma^2 + f (\sigma)) \rt)
 + B \lt( \sigma^3 - f (\sigma) \rt) + f (\sigma) \\
 \intertext{Where:}
 % \label{eq:DOM_gamma}
 f(\sigma) = (\alpha + 2) \sigma^{\alpha + 1} - (\alpha + 1) \sigma^{\alpha + 2}
 \quad \text{and} \quad \sigma = \frac{k}{n - 1}
\end{gather*}
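The boundary values $\gamma(0) = 0$ and $\gamma(1) = 1$ hold for any choice of $A$, $B$ and $\alpha$ (since $f(0) = 0$ and $f(1) = 1$), which is what makes the surface and bottom constraints solvable in $A$ and $B$. A numerical check, with illustrative parameter values:

```python
def f(sigma, alpha):
    # f(0) = 0 and f(1) = (alpha + 2) - (alpha + 1) = 1 for any alpha
    return (alpha + 2.0) * sigma ** (alpha + 1.0) - (alpha + 1.0) * sigma ** (alpha + 2.0)

def gamma(sigma, A=0.4, B=0.2, alpha=2.0):
    fs = f(sigma, alpha)
    return A * (sigma - 0.5 * (sigma ** 2 + fs)) + B * (sigma ** 3 - fs) + fs

assert gamma(0.0) == 0.0                  # gamma(0) = 0: the surface
assert abs(gamma(1.0) - 1.0) < 1e-12      # gamma(1) = 1: the bottom, for any A, B, alpha
```

At $\sigma = 1$ both the $A$ and $B$ terms vanish identically, so the bottom constraint is carried entirely by $f$; the free parameters only shape the interior distribution of levels.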

This gives an analytical stretching of $\sigma$ that is solvable in $A$ and $B$ as a function of
the user prescribed stretching parameter $\alpha$ (\np{rn\_alpha}) that stretches towards
the surface ($\alpha > 1.0$) or the bottom ($\alpha < 1.0$) and
user prescribed surface (\np{rn\_zs}) and bottom depths.
The bottom cell depth in this example is given as a function of water depth:

\[
 % \label{eq:DOM_zb}
 Z_b = h a + b
\]

where the namelist parameters \np{rn\_zb\_a} and \np{rn\_zb\_b} are $a$ and $b$ respectively.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]
 \includegraphics[width=\textwidth]{Fig_DOM_compare_coordinates_surface}
 \caption{
 A comparison of the \citet{song.haidvogel_JCP94} $S$coordinate (solid lines),
 a 50 level $Z$coordinate (contoured surfaces) and
 the \citet{siddorn.furner_OM13} $S$coordinate (dashed lines) in the surface $100~m$ for
 an idealised bathymetry that goes from $50~m$ to $5500~m$ depth.
 For clarity every third coordinate surface is shown.
 }
 \label{fig:fig_compare_coordinates_surface}
\end{figure}
 % >>>>>>>>>>>>>>>>>>>>>>>>>>>>

This gives a smooth analytical stretching in computational space that is constrained to
specified surface and bottom grid cell thicknesses in real space.
This is not to be confused with the hybrid schemes that
superimpose geopotential coordinates on terrain-following coordinates, thus
creating a non-analytical vertical coordinate that
may therefore suffer from large gradients in the vertical resolution.
This stretching is less straightforward to implement than the \citet{song.haidvogel_JCP94} stretching,
but has the advantage of resolving diurnal processes in deep water and has generally flatter slopes.

As with the \citet{song.haidvogel_JCP94} stretching, the stretch is only applied at depths greater than
the critical depth $h_c$.
In this example two options are available at depths shallower than $h_c$:
pure sigma is applied if \np{ln\_sigcrit} is true, and pure $z$-coordinates if it is false
(the $z$-coordinate being equal to the depths of the stretched coordinate at $h_c$).

Minimising the horizontal slope of the vertical coordinate is important in terrain-following systems as
large slopes lead to hydrostatic inconsistency.
A hydrostatic consistency parameter diagnostic following \citet{haney_JPO91} has been implemented,
and is output as part of the model mesh file at the start of the run.

% 
% z* or s*coordinate
% 
\subsection[\zstar or \sstarcoordinate (\forcode{ln_linssh = .false.})]
{\zstar or \sstarcoordinate (\protect\np{ln\_linssh}\forcode{ = .false.})}
\label{subsec:DOM_zgr_star}

This option is described in the Report by Levier \textit{et al.} (2007), available on the \NEMO web site.

%gm% key advantage: minimise the diffusion/dispersion associated with advection in response to high frequency surface disturbances
+
+Additionally, hybrid combinations of the three main coordinates are available:
+$sz$ or $szps$ coordinate (\autoref{fig:z_zps_s_sps}d and \autoref{fig:z_zps_s_sps}e).
+
+A further choice related to the vertical coordinate concerns the presence (or not) of ocean
+cavities beneath ice shelves within the model domain. A setting of \np{ln\_isfcav} as
+\forcode{.true.} indicates that the domain contains ocean cavities, otherwise the top,
+wet layer of the ocean will always be at the ocean surface. This option is currently only
+available for $z$- or $zps$-coordinates. In the latter case, partial steps are also applied
+at the ocean/ice shelf interface.
+
+Within the model, the arrays describing the grid point depths and vertical scale factors
+are three sets of three-dimensional arrays $(i,j,k)$ defined at the \textit{before},
+\textit{now} and \textit{after} time steps. The time at which they are defined is
+indicated by a suffix: $\_b$, $\_n$, or $\_a$, respectively. They are updated at each
+model time step. The initial, fixed reference coordinate system is held in variable names
+with a $\_0$ suffix. When the linear free surface option is used
+(\np{ln\_linssh}\forcode{ = .true.}), the \textit{before}, \textit{now} and \textit{after}
+arrays are initially set to their reference counterparts and remain fixed.
+
+\subsubsection{Required fields}
+\label{sec:DOM_zgr_fields}
+The explicit specification of a range of fields related to the vertical grid is required for the definition of a configuration. These include:
+
+\begin{Verbatim}[fontsize=\tiny]
+int ln_zco, ln_zps, ln_sco /* flags for z-coord, z-coord with partial steps and s-coord */
+int ln_isfcav /* flag for ice shelf cavities */
+double e3t_1d, e3w_1d /* reference vertical scale factors at T and W points */
+double e3t_0, e3u_0, e3v_0, e3f_0, e3w_0 /* vertical scale factors 3D coordinate at T,U,V,F and W points */
+double e3uw_0, e3vw_0 /* vertical scale factors 3D coordinate at UW and VW points */
+int bottom_level, top_level /* last wet Tpoints, 1st wet Tpoints (for ice shelf cavities) */
+ /* For reference: */
+float bathy_metry /* bathymetry used in setting top and bottom levels */
+\end{Verbatim}
+
+This set of vertical metrics is sufficient to describe the initial depth and thickness of
+every grid cell in the model regardless of the choice of vertical coordinate. With constant
+z-levels, e3 metrics will be uniform across each horizontal level. In the partial step
+case each e3 at the \np{bottom\_level} (and, possibly, \np{top\_level} if ice cavities are
+present) may vary from its horizontal neighbours. And, in s-coordinates, variations can
+occur throughout the water column. With the non-linear free surface, all the coordinates
+behave more like the s-coordinate in that variations occur throughout the water column
+with displacements related to the sea surface height. These variations are typically much
+smaller than those arising from bottom-fitted coordinates. The values for vertical metrics
+supplied in the domain configuration file can be considered as those arising from a flat
+sea surface with zero elevation.
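The defining property of these metrics can be illustrated with a toy single column (plain Python with made-up values, not a real domain configuration file): summing the e3t thicknesses over the wet cells recovers the column depth, and with constant z-levels the reference 1D thicknesses suffice.

```python
# Toy single-column example: with constant z-levels the e3t metrics are
# uniform with depth, and summing them over the wet cells recovers the
# column depth. All values here are illustrative.
nlev = 10
e3t_1d = [50.0] * nlev          # ten uniform 50 m reference thicknesses
bottom_level = 7                # last wet T-level in this column (1-based)
column_depth = sum(e3t_1d[:bottom_level])
print(column_depth)  # 350.0
```

In the partial-step or s-coordinate cases, the 3D `e3t_0` field would replace the uniform 1D list, but the same column sum still gives the flat-surface depth.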
+
+The \np{bottom\_level} and \np{top\_level} 2D arrays define the bottom and top
+wet levels in each grid column. Without ice cavities, \np{top\_level} is essentially a land
+mask (0 on land; 1 everywhere else). With ice cavities, \np{top\_level} determines the
+first wet point below the overlying ice shelf.
+
+
% 
% level bathymetry and mask
% 
\subsection{Level bathymetry and mask}
+\subsubsection{Level bathymetry and mask}
\label{subsec:DOM_msk}
Whatever the vertical coordinate used, the model offers the possibility of representing the bottom topography with
steps that follow the face of the model cells (step-like topography) \citep{madec.delecluse.ea_JPO96}.
The distribution of the steps in the horizontal is defined in a 2D integer array, mbathy, which
gives the number of ocean levels (\ie those that are not masked) at each $t$-point.
mbathy is computed from the meter bathymetry using the definition of gdept, as the number of $t$-points for which
gdept $\leq$ bathy.

Modifications of the model bathymetry are performed in the \textit{bat\_ctl} routine (see \mdl{domzgr} module) after
mbathy is computed.
Isolated grid points that do not communicate with another ocean point at the same level are eliminated.

As for the representation of bathymetry, a 2D integer array, misfdep, is created.
misfdep defines the level of the first wet $t$point.
All the cells between $k = 1$ and $misfdep(i,j) - 1$ are masked.
By default, $misfdep(:,:) = 1$ and no cells are masked.

In case of ice shelf cavities, modifications of the model bathymetry and ice shelf draft into
the cavities are performed in the \textit{zgr\_isf} routine.
The compatibility between ice shelf draft and bathymetry is checked.
All the locations where the isf cavity is thinner than \np{rn\_isfhmin} meters are grounded (\ie masked).
If only one cell on the water column is open at $t$-, $u$- or $v$-points,
the bathymetry or the ice shelf draft is dug to fit this constraint.
If the incompatibility is too strong (more than one cell would need to be dug), the cell is masked.

From the \textit{mbathy} and \textit{misfdep} arrays, the mask fields are defined as follows:
+
+From \np{top\_level} and \np{bottom\_level} fields, the mask fields are defined as follows:
\begin{alignat*}{2}
tmask(i,j,k) &= & &
\begin{cases}
 0 &\text{if $ k < misfdep(i,j)$} \\
 1 &\text{if $misfdep(i,j) \leq k \leq mbathy(i,j)$} \\
 0 &\text{if $ k > mbathy(i,j)$}
+ 0 &\text{if $ k < top\_level(i,j)$} \\
+ 1 &\text{if $top\_level(i,j) \leq k \leq bottom\_level(i,j)$} \\
+ 0 &\text{if $ k > bottom\_level(i,j)$}
\end{cases}
\\
@@ 1010,9 +606,54 @@
exactly in the same way as for the bottom boundary.
The specification of closed lateral boundaries requires that at least
the first and last rows and columns of the \textit{mbathy} array are set to zero.
In the particular case of an east-west cyclic boundary condition, \textit{mbathy} has its last column equal to
the second one and its first column equal to the last but one (and so too the mask arrays)
(see \autoref{fig:LBC_jperio}).
+%% The specification of closed lateral boundaries requires that at least
+%% the first and last rows and columns of the \textit{mbathy} array are set to zero.
+%% In the particular case of an eastwest cyclical boundary condition, \textit{mbathy} has its last column equal to
+%% the second one and its first column equal to the last but one (and so too the mask arrays)
+%% (see \autoref{fig:LBC_jperio}).
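The tmask definition from \np{top\_level} and \np{bottom\_level} amounts to a simple per-column level test. A minimal sketch (plain Python/NumPy with a made-up $2 \times 2$ grid, not NEMO source) is:

```python
import numpy as np

def make_tmask(top_level, bottom_level, nlev):
    """tmask(k,j,i) = 1 where top_level <= k <= bottom_level (1-based
    levels), 0 elsewhere; top_level = 0 marks a land column."""
    k = np.arange(1, nlev + 1)[:, None, None]          # shape (k, 1, 1)
    wet = (top_level > 0) & (k >= top_level) & (k <= bottom_level)
    return wet.astype(int)

# Illustrative 2x2 grid: one land column, one column under an ice shelf.
top = np.array([[0, 1], [2, 1]])   # 0 = land; 2 = first wet level under shelf
bot = np.array([[0, 5], [6, 3]])   # last wet level in each column
tmask = make_tmask(top, bot, nlev=6)
print(tmask[:, 0, 0].sum())  # 0 wet cells in the land column
print(tmask[:, 1, 0].sum())  # 5 wet cells under the ice shelf (levels 2..6)
```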
+
+
+%
+% Closed seas
+%
+\subsection{Closed seas} \label{subsec:DOM_closea}
+
+When a global ocean is coupled to an atmospheric model it is better to represent all large
+water bodies (\eg Great Lakes, Caspian Sea...) even if the model resolution does not allow
+their communication with the rest of the ocean. This is unnecessary when the ocean is
+forced by fixed atmospheric conditions, so these seas can be removed from the ocean
+domain. The user has the option to set the bathymetry in closed seas to zero (see
+\autoref{sec:MISC_closea}) and to optionally decide on the fate of any freshwater
+imbalance over the area. The options are explained in \autoref{sec:MISC_closea}, but it
+should be noted here that successful use of these options requires appropriate mask
+fields to be present in the domain configuration file. Among the possibilities are:
+
+\begin{Verbatim}[fontsize=\tiny]
+int closea_mask /* nonzero values in closed sea areas for optional masking */
+int closea_mask_rnf /* nonzero values in closed sea areas with runoff locations (precip only) */
+int closea_mask_emp /* nonzero values in closed sea areas with runoff locations (total emp) */
+\end{Verbatim}
+
+% 
+% Grid files
+% 
+\subsection{Output grid files}
+\label{subsec:DOM_meshmask}
+
+\nlst{namcfg}
+
+Most of the arrays relating to a particular ocean model configuration discussed in this
+chapter (grid-point position, scale factors) can be saved in a file if the namelist parameter
+\np{ln\_write\_cfg} (namelist \ngn{namcfg}) is set to \forcode{.true.}; the output
+filename is set through the parameter \np{cn\_domcfg\_out}. This is only really useful
+if the fields are computed in subroutines \mdl{usrdef\_hgr} or \mdl{usrdef\_zgr} and
+checking or confirmation is required.
+
+\nlst{namdom}
+
+Alternatively, all the arrays relating to a particular ocean model configuration
+(grid-point position, scale factors, depths and masks) can be saved in a file called
+\texttt{mesh\_mask} if the namelist parameter \np{ln\_meshmask} (namelist \ngn{namdom}) is set
+to \forcode{.true.}. This file contains additional fields that can be useful for
+post-processing applications.
% ================================================================
@@ 1023,22 +664,22 @@
\label{sec:DTA_tsd}
%namtsd

\nlst{namtsd}
%
Options are defined in \ngn{namtsd}.
By default, the ocean starts from rest (the velocity field is set to zero) and the initialization of temperature and
salinity fields is controlled through the \np{ln\_tsd\_ini} namelist parameter.
+Basic initial state options are defined in \ngn{namtsd}. By default, the ocean starts
+from rest (the velocity field is set to zero) and the initialization of temperature and
+salinity fields is controlled through the \np{ln\_tsd\_init} namelist parameter.
+
\begin{description}
\item[\np{ln\_tsd\_init}\forcode{ = .true.}]
 use T and S input files that can be given on the model grid itself or on their native input data grid.
 In the latter case,
 the data will be interpolated onthefly both in the horizontal and the vertical to the model grid
 (see \autoref{subsec:SBC_iof}).
 The information relative to the input files are given in the \np{sn\_tem} and \np{sn\_sal} structures.
 The computation is done in the \mdl{dtatsd} module.
\item[\np{ln\_tsd\_init}\forcode{ = .false.}]
 use constant salinity value of $35.5~psu$ and an analytical profile of temperature
 (typical of the tropical ocean), see \rou{istate\_t\_s} subroutine called from \mdl{istate} module.
+\item[\np{ln\_tsd\_init}\forcode{= .true.}]
+ Use T and S input files that can be given on the model grid itself or on their native
+ input data grids. In the latter case, the data will be interpolated onthefly both in
+ the horizontal and the vertical to the model grid (see \autoref{subsec:SBC_iof}). The
+ information relating to the input files are specified in the \np{sn\_tem} and
+ \np{sn\_sal} structures. The computation is done in the \mdl{dtatsd} module.
+\item[\np{ln\_tsd\_init}\forcode{= .false.}]
+ Initial values for T and S are set via a user supplied \rou{usr\_def\_istate} routine
+ contained in \mdl{userdef\_istate}. The default version sets horizontally uniform T and
+ S profiles as used in the GYRE configuration (see \autoref{sec:CFG_gyre}).
\end{description}
Index: NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/chap_ZDF.tex
===================================================================
 NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/chap_ZDF.tex (revision 11314)
+++ NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/chap_ZDF.tex (revision 11315)
@@ 1235,5 +1235,30 @@
\label{subsec:ZDF_swm}
TBC ...
+Surface waves produce an enhanced mixing through wave-turbulence interaction.
+In addition to the turbulence induced by breaking waves (\autoref{subsec:ZDF_tke}),
+the influence of non-breaking waves can be accounted for by introducing
+wave-induced viscosity and diffusivity as a function of the wave number spectrum.
+Following \citet{qiao.yuan.ea_OD10}, a formulation of the wave-induced mixing coefficient
+is provided as a function of wave amplitude, Stokes drift and wave number:
+
+\begin{equation}
+ \label{eq:Bv}
+ B_{v} = \alpha {A} {U}_{st} \exp(3kz)
+\end{equation}
+
+where $B_{v}$ is the wave-induced mixing coefficient, $A$ is the wave amplitude,
+${U}_{st}$ is the Stokes drift velocity, $k$ is the wave number and $\alpha$
+is a constant which should be determined by observations or
+numerical experiments; it is set to 1.
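With $z$ negative downward ($z = 0$ at the surface), the coefficient is largest at the surface and decays exponentially with depth. A minimal sketch (plain Python; the wave amplitude, Stokes drift and wave number values are purely illustrative) evaluates the profile:

```python
import math

def bv(z, amp=1.0, u_st=0.1, k=0.05, alpha=1.0):
    """Wave-induced mixing coefficient B_v = alpha * A * U_st * exp(3 k z).

    z is in metres, negative downward (z = 0 at the surface), so the
    coefficient decays with depth. Parameter values are illustrative only.
    """
    return alpha * amp * u_st * math.exp(3.0 * k * z)

# Largest at the surface, decaying with depth:
print(bv(0.0))    # alpha * A * U_st at the surface
print(bv(-50.0))  # much smaller 50 m down
```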
+
+The coefficient $B_{v}$ is then directly added to the vertical viscosity
+and diffusivity coefficients.
+
+In order to account for this contribution, set \forcode{ln_zdfswm = .true.}.
+Wave interaction must also be activated through \forcode{ln_wave = .true.} and
+the Stokes drift evaluated by setting \forcode{ln_sdw = .true.}
+(see \autoref{subsec:SBC_wave_sdw}).
+The needed wave fields can be provided either in forcing or coupled mode
+(for more information on wave parameters and settings see \autoref{sec:SBC_wave}).
% ================================================================
Index: NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/introduction.tex
===================================================================
 NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/introduction.tex (revision 11314)
+++ NEMO/branches/2019/dev_r11233_obsasm_docfixes/doc/latex/NEMO/subfiles/introduction.tex (revision 11315)
@@ 151,5 +151,5 @@
The coding rules for OPA include conventions for naming variables,
with different starting letters for different types of variables (real, integer, parameter\ldots).
Those rules are briefly presented in \autoref{apdx:D} and a more complete document is available.
+Those rules are briefly presented in \autoref{apdx:coding} and a more complete document is available.
The model is organized with a high internal modularity based on physics.
@@ 158,5 +158,5 @@
To make it easier for the user to find his way around the code, the module names follow a threeletter rule.
For example, \mdl{traldf} is a module related to the TRAcers equation, computing the Lateral DiFfusion.
%The complete list of module names is presented in \autoref{apdx:D}. %====>>>> to be done !
+%The complete list of module names is presented in \autoref{apdx:coding}. %====>>>> to be done !
Furthermore, modules are organized in a few directories that correspond to their category,
as indicated by the first three letters of their name (\autoref{tab:chapters}).