Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/abstract_foreword.tex
===================================================================
--- NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/abstract_foreword.tex	(revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/abstract_foreword.tex (revision 10368)
@@ -11,17 +11,18 @@
{\small
The ocean engine of NEMO (Nucleus for European Modelling of the Ocean) is a primitive
equation model adapted to regional and global ocean circulation problems. It is intended to
be a flexible tool for studying the ocean and its interactions with the other components of
the Earth climate system over a wide range of space and time scales.
Prognostic variables are the three-dimensional velocity field, a non-linear sea surface height,
the \textit{Conservative} Temperature and the \textit{Absolute} Salinity.
In the horizontal direction, the model uses a curvilinear orthogonal grid and in the vertical direction,
a full or partial step $z$-coordinate, or $s$-coordinate, or a mixture of the two.
The distribution of variables is a three-dimensional Arakawa C-type grid.
Various physical choices are available to describe ocean physics, including TKE and GLS vertical physics.
Within NEMO, the ocean is interfaced with a sea-ice model (LIM or CICE), passive tracer and
biogeochemical models (TOP) and, via the OASIS coupler, with several atmospheric general circulation models.
It also supports two-way grid embedding via the AGRIF software.
+ The ocean engine of NEMO (Nucleus for European Modelling of the Ocean) is a primitive equation model adapted to
+ regional and global ocean circulation problems.
+ It is intended to be a flexible tool for studying the ocean and its interactions with
+ the other components of the Earth climate system over a wide range of space and time scales.
+ Prognostic variables are the three-dimensional velocity field, a non-linear sea surface height,
+ the \textit{Conservative} Temperature and the \textit{Absolute} Salinity.
+ In the horizontal direction, the model uses a curvilinear orthogonal grid and in the vertical direction,
+ a full or partial step $z$-coordinate, or $s$-coordinate, or a mixture of the two.
+ The distribution of variables is a three-dimensional Arakawa C-type grid.
+ Various physical choices are available to describe ocean physics, including TKE and GLS vertical physics.
+ Within NEMO, the ocean is interfaced with a sea-ice model (LIM or CICE),
+ passive tracer and biogeochemical models (TOP) and,
+ via the OASIS coupler, with several atmospheric general circulation models.
+ It also supports two-way grid embedding via the AGRIF software.
}
@@ -31,12 +32,14 @@
\chapter*{Disclaimer}
Like all components of NEMO, the ocean component is developed under the \href{http://www.cecill.info/}{CECILL license},
which is a French adaptation of the GNU GPL (General Public License). Anyone may use it
freely for research purposes, and is encouraged to communicate back to the NEMO team
their own developments and improvements. The model and the present document have been
made available as a service to the community. We cannot certify that the code and its manual
are free of errors. Bugs are inevitable and some have undoubtedly survived the testing phase.
Users are encouraged to bring them to our attention. The author assumes no responsibility
for problems, errors, or incorrect usage of NEMO.
+Like all components of NEMO,
+the ocean component is developed under the \href{http://www.cecill.info/}{CECILL license},
+which is a French adaptation of the GNU GPL (General Public License).
+Anyone may use it freely for research purposes,
+and is encouraged to communicate back to the NEMO team their own developments and improvements.
+The model and the present document have been made available as a service to the community.
+We cannot certify that the code and its manual are free of errors.
+Bugs are inevitable and some have undoubtedly survived the testing phase.
+Users are encouraged to bring them to our attention.
+The author assumes no responsibility for problems, errors, or incorrect usage of NEMO.
\vspace{1cm}
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_A.tex
===================================================================
--- NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_A.tex	(revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_A.tex (revision 10368)
@@ -19,10 +19,11 @@
In order to establish the set of Primitive Equations in curvilinear $s$-coordinates
($i.e.$ an orthogonal curvilinear coordinate in the horizontal and an Arbitrary Lagrangian
Eulerian (ALE) coordinate in the vertical), we start from the set of equations established
in \autoref{subsec:PE_zco_Eq} for the special case $k = z$ and thus $e_3 = 1$, and we introduce
an arbitrary vertical coordinate $s = s(i,j,z,t)$. Let us define a new vertical scale factor by
$e_3 = \partial z / \partial s$ (which now depends on $(i,j,z,t)$) and the horizontal
slope of $s$-surfaces by:
+($i.e.$ an orthogonal curvilinear coordinate in the horizontal and
+an Arbitrary Lagrangian Eulerian (ALE) coordinate in the vertical),
+we start from the set of equations established in \autoref{subsec:PE_zco_Eq} for
+the special case $k = z$ and thus $e_3 = 1$,
+and we introduce an arbitrary vertical coordinate $s = s(i,j,z,t)$.
+Let us define a new vertical scale factor by $e_3 = \partial z / \partial s$ (which now depends on $(i,j,z,t)$) and
+the horizontal slope of $s$-surfaces by:
\begin{equation} \label{apdx:A_s_slope}
\sigma _1 =\frac{1}{e_1 }\;\left. {\frac{\partial z}{\partial i}} \right|_s
@@ 31,6 +32,5 @@
\end{equation}
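For intuition, a worked special case (an editorial addition, not part of the patched manual text): with the classical terrain-following coordinate $s = z/H(i,j)$, the slope definition reduces to a bottom-slope term.

```latex
% Editorial illustration (assumption: classical sigma coordinate s = z/H(i,j)):
% here z = s\,H, so the i-derivative of z at fixed s is s\,\partial H/\partial i,
% and the slope of the s-surfaces is proportional to the bottom slope:
\begin{equation*}
\sigma_1 = \frac{1}{e_1}\left.\frac{\partial z}{\partial i}\right|_s
         = \frac{s}{e_1}\,\frac{\partial H}{\partial i}
         = \frac{z}{e_1\,H}\,\frac{\partial H}{\partial i}
\end{equation*}
```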
The chain rule to establish the model equations in the curvilinear $s$-coordinate
system is:
+The chain rule to establish the model equations in the curvilinear $s$-coordinate system is:
\begin{equation} \label{apdx:A_s_chain_rule}
\begin{aligned}
@@ -52,6 +52,6 @@
\end{equation}
In particular applying the time derivative chain rule to $z$ provides the expression
for $w_s$, the vertical velocity of the $s$-surfaces referenced to a fixed $z$-coordinate:
+In particular applying the time derivative chain rule to $z$ provides the expression for $w_s$,
+the vertical velocity of the $s$-surfaces referenced to a fixed $z$-coordinate:
\begin{equation} \label{apdx:A_w_in_s}
w_s = \left. \frac{\partial z }{\partial t} \right|_s
@@ -67,8 +67,8 @@
\label{sec:A_continuity}
Using (\autoref{apdx:A_s_chain_rule}) and the fact that the horizontal scale factors
$e_1$ and $e_2$ do not depend on the vertical coordinate, the divergence of
the velocity relative to the ($i$,$j$,$z$) coordinate system is transformed as follows
in order to obtain its expression in the curvilinear $s$-coordinate system:
+Using (\autoref{apdx:A_s_chain_rule}) and
+the fact that the horizontal scale factors $e_1$ and $e_2$ do not depend on the vertical coordinate,
+the divergence of the velocity relative to the ($i$,$j$,$z$) coordinate system is transformed as follows in order to
+obtain its expression in the curvilinear $s$-coordinate system:
\begin{subequations}
@@ -128,12 +128,12 @@
\end{subequations}
Here, $w$ is the vertical velocity relative to the $z$-coordinate system.
Introducing the dia-surface velocity component, $\omega$, defined as
the volume flux across the moving $s$-surfaces per unit horizontal area:
+Here, $w$ is the vertical velocity relative to the $z$-coordinate system.
+Introducing the dia-surface velocity component,
+$\omega$, defined as the volume flux across the moving $s$-surfaces per unit horizontal area:
\begin{equation} \label{apdx:A_w_s}
\omega = w - w_s - \sigma _1 \,u - \sigma _2 \,v \\
\end{equation}
with $w_s$ given by \autoref{apdx:A_w_in_s}, we obtain the expression for
the divergence of the velocity in the curvilinear $s$-coordinate system:
+with $w_s$ given by \autoref{apdx:A_w_in_s},
+we obtain the expression for the divergence of the velocity in the curvilinear $s$-coordinate system:
\begin{subequations}
\begin{align*} {\begin{array}{*{20}l}
@@ -167,6 +167,5 @@
\end{subequations}
As a result, the continuity equation \autoref{eq:PE_continuity} in the
$s$-coordinates is:
+As a result, the continuity equation \autoref{eq:PE_continuity} in the $s$-coordinates is:
\begin{equation} \label{apdx:A_sco_Continuity}
\frac{1}{e_3 } \frac{\partial e_3}{\partial t}
@@ -176,6 +175,6 @@
+\frac{1}{e_3 }\frac{\partial \omega }{\partial s} = 0
\end{equation}
An additional term has appeared that takes into account the contribution of the time variation
of the vertical coordinate to the volume budget.
+An additional term has appeared that takes into account
+the contribution of the time variation of the vertical coordinate to the volume budget.
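As a quick consistency check (an editorial addition): for a fixed vertical grid the extra term vanishes and the usual continuity equation is recovered.

```latex
% Editorial sanity check (assumption: s = z, fixed geopotential levels):
% e_3 = 1 is time independent, sigma_1 = sigma_2 = w_s = 0, hence omega = w,
% and the s-coordinate continuity equation collapses to the z-coordinate form:
\begin{equation*}
\frac{1}{e_1\,e_2}\left[ \frac{\partial (e_2\,u)}{\partial i}
                        +\frac{\partial (e_1\,v)}{\partial j} \right]
+ \frac{\partial w}{\partial k} = 0
\end{equation*}
```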
@@ -186,5 +185,5 @@
\label{sec:A_momentum}
Here we only consider the first component of the momentum equation,
+Here we only consider the first component of the momentum equation,
the generalization to the second one being straightforward.
@@ -193,7 +192,7 @@
$\bullet$ \textbf{Total derivative in vector invariant form}
Let us consider \autoref{eq:PE_dyn_vect}, the first component of the momentum
equation in the vector invariant form. Its total $z$-coordinate time derivative,
$\left. \frac{D u}{D t} \right|_z$ can be transformed as follows in order to obtain
+Let us consider \autoref{eq:PE_dyn_vect}, the first component of the momentum equation in the vector invariant form.
+Its total $z$-coordinate time derivative,
+$\left. \frac{D u}{D t} \right|_z$ can be transformed as follows in order to obtain
its expression in the curvilinear $s$-coordinate system:
@@ -258,7 +257,6 @@
\end{subequations}
%
Applying the time derivative chain rule (first equation of (\autoref{apdx:A_s_chain_rule}))
to $u$ and using (\autoref{apdx:A_w_in_s}) provides the expression of the last term
of the right-hand side,
+Applying the time derivative chain rule (first equation of (\autoref{apdx:A_s_chain_rule})) to $u$ and
+using (\autoref{apdx:A_w_in_s}) provides the expression of the last term of the right-hand side,
\begin{equation*} {\begin{array}{*{20}l}
w_s \;\frac{\partial u}{\partial s}
@@ -267,5 +265,5 @@
\end{array} }
\end{equation*}
leads to the $s$-coordinate formulation of the total $z$-coordinate time derivative,
+leads to the $s$-coordinate formulation of the total $z$-coordinate time derivative,
$i.e.$ the total $s$-coordinate time derivative:
\begin{align} \label{apdx:A_sco_Dt_vect}
@@ -276,7 +274,7 @@
+ \frac{1}{e_3 } \omega \;\frac{\partial u}{\partial s}
\end{align}
Therefore, the vector invariant form of the total time derivative has exactly the same
mathematical form in $z$- and $s$-coordinates. This is not the case for the flux form
as shown in the next paragraph.
+Therefore, the vector invariant form of the total time derivative has exactly the same mathematical form in
+$z$- and $s$-coordinates.
+This is not the case for the flux form as shown in the next paragraph.
$\ $\newline % force a new line
@@ -284,7 +282,6 @@
$\bullet$ \textbf{Total derivative in flux form}
Let us start from the total time derivative in the curvilinear $s$-coordinate system
we have just established. Following the procedure used to establish (\autoref{eq:PE_flux_form}),
it can be transformed into:
+Let us start from the total time derivative in the curvilinear $s$-coordinate system we have just established.
+Following the procedure used to establish (\autoref{eq:PE_flux_form}), it can be transformed into:
%\begin{subequations}
\begin{align*} {\begin{array}{*{20}l}
@@ -355,5 +352,5 @@
\end{subequations}
which leads to the $s$-coordinate flux formulation of the total $s$-coordinate time derivative,
$i.e.$ the total $s$-coordinate time derivative in flux form:
+$i.e.$ the total $s$-coordinate time derivative in flux form:
\begin{flalign}\label{apdx:A_sco_Dt_flux}
\left. \frac{D u}{D t} \right|_s = \frac{1}{e_3} \left. \frac{\partial ( e_3\,u)}{\partial t} \right|_s
@@ -363,7 +360,8 @@
\end{flalign}
which is the total time derivative expressed in the curvilinear $s$-coordinate system.
It has the same form as in the $z$-coordinate but for the vertical scale factor
that has appeared inside the time derivative which comes from the modification
of (\autoref{apdx:A_sco_Continuity}), the continuity equation.
+It has the same form as in the $z$-coordinate but for
+the vertical scale factor that has appeared inside the time derivative which
+comes from the modification of (\autoref{apdx:A_sco_Continuity}),
+the continuity equation.
$\ $\newline % force a new line
@@ -380,6 +378,6 @@
\end{split}
\end{equation*}
Applying a similar manipulation to the second component and replacing
$\sigma _1$ and $\sigma _2$ by their expression \autoref{apdx:A_s_slope}, one obtains:
+Applying a similar manipulation to the second component and
+replacing $\sigma _1$ and $\sigma _2$ by their expression \autoref{apdx:A_s_slope}, one obtains:
\begin{equation} \label{apdx:A_grad_p_1}
\begin{split}
@@ -394,10 +392,11 @@
\end{equation}
An additional term appears in (\autoref{apdx:A_grad_p_1}) which accounts for the
tilt of $s$-surfaces with respect to geopotential $z$-surfaces.

As in the $z$-coordinate, the horizontal pressure gradient can be split into two parts
following \citet{Marsaleix_al_OM08}. Let us define a density anomaly, $d$, by $d=(\rho - \rho_o)/ \rho_o$,
and a hydrostatic pressure anomaly, $p_h'$, by $p_h'= g \; \int_z^\eta d \; e_3 \; dk$.
+An additional term appears in (\autoref{apdx:A_grad_p_1}) which accounts for
+the tilt of $s$-surfaces with respect to geopotential $z$-surfaces.
+
+As in the $z$-coordinate,
+the horizontal pressure gradient can be split into two parts following \citet{Marsaleix_al_OM08}.
+Let us define a density anomaly, $d$, by $d=(\rho - \rho_o)/ \rho_o$,
+and a hydrostatic pressure anomaly, $p_h'$, by $p_h'= g \; \int_z^\eta d \; e_3 \; dk$.
The pressure is then given by:
\begin{equation*}
@@ -416,6 +415,6 @@
\end{equation*}
Substituting \autoref{apdx:A_pressure} in \autoref{apdx:A_grad_p_1} and using the definition of
the density anomaly, one obtains the expression in two parts:
+Substituting \autoref{apdx:A_pressure} in \autoref{apdx:A_grad_p_1} and
+using the definition of the density anomaly, one obtains the expression in two parts:
\begin{equation} \label{apdx:A_grad_p_2}
\begin{split}
@@ -429,13 +428,13 @@
\end{split}
\end{equation}
This formulation of the pressure gradient is characterised by the appearance of a term depending on
the sea surface height only (last term on the right-hand side of expression \autoref{apdx:A_grad_p_2}).
This term will be loosely termed the \textit{surface pressure gradient}
whereas the first term will be termed the
\textit{hydrostatic pressure gradient} by analogy to the $z$-coordinate formulation.
In fact, the true surface pressure gradient is $1/\rho_o \nabla (\rho \eta)$, and
$\eta$ is implicitly included in the computation of $p_h'$ through the upper bound of
the vertical integration.

+This formulation of the pressure gradient is characterised by the appearance of
+a term depending on the sea surface height only
+(last term on the right-hand side of expression \autoref{apdx:A_grad_p_2}).
+This term will be loosely termed the \textit{surface pressure gradient} whereas
+the first term will be termed the \textit{hydrostatic pressure gradient} by analogy to
+the $z$-coordinate formulation.
+In fact, the true surface pressure gradient is $1/\rho_o \nabla (\rho \eta)$,
+and $\eta$ is implicitly included in the computation of $p_h'$ through the upper bound of the vertical integration.
+
$\ $\newline % force a new line
@@ -443,7 +442,7 @@
$\bullet$ \textbf{The other terms of the momentum equation}
The Coriolis and forcing terms as well as the vertical physics remain unchanged
as they involve neither time nor space derivatives. The form of the lateral physics is
discussed in \autoref{apdx:B}.
+The Coriolis and forcing terms as well as the vertical physics remain unchanged as
+they involve neither time nor space derivatives.
+The form of the lateral physics is discussed in \autoref{apdx:B}.
@@ -452,7 +451,7 @@
$\bullet$ \textbf{Full momentum equation}
To sum up, in a curvilinear $s$-coordinate system, the vector invariant momentum equation
solved by the model has the same mathematical expression as the one in a curvilinear
$z$-coordinate, except for the pressure gradient term:
+To sum up, in a curvilinear $s$-coordinate system,
+the vector invariant momentum equation solved by the model has the same mathematical expression as
+the one in a curvilinear $z$-coordinate, except for the pressure gradient term:
\begin{subequations} \label{apdx:A_dyn_vect}
\begin{multline} \label{apdx:A_PE_dyn_vect_u}
@@ -475,6 +474,6 @@
\end{multline}
\end{subequations}
whereas the flux form momentum equation differs from it by the formulation of both
the time derivative and the pressure gradient term:
+whereas the flux form momentum equation differs from it by
+the formulation of both the time derivative and the pressure gradient term:
\begin{subequations} \label{apdx:A_dyn_flux}
\begin{multline} \label{apdx:A_PE_dyn_flux_u}
@@ -503,11 +502,10 @@
\end{equation}
It is important to realize that the change in coordinate system has only concerned
the position on the vertical. It has not affected (\textbf{i},\textbf{j},\textbf{k}), the
orthogonal curvilinear set of unit vectors. ($u$,$v$) are always horizontal velocities
so that their evolution is driven by \emph{horizontal} forces, in particular
the pressure gradient. By contrast, $\omega$ is not $w$, the third component of the velocity,
but the dia-surface velocity component, $i.e.$ the volume flux across the moving
$s$-surfaces per unit horizontal area.
+It is important to realize that the change in coordinate system has only concerned the position on the vertical.
+It has not affected (\textbf{i},\textbf{j},\textbf{k}), the orthogonal curvilinear set of unit vectors.
+($u$,$v$) are always horizontal velocities so that their evolution is driven by \emph{horizontal} forces,
+in particular the pressure gradient.
+By contrast, $\omega$ is not $w$, the third component of the velocity, but the dia-surface velocity component,
+$i.e.$ the volume flux across the moving $s$-surfaces per unit horizontal area.
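To make the distinction between $\omega$ and $w$ concrete (an editorial addition): taking the geopotential limit in the definition of the dia-surface velocity shows that the two coincide only when the coordinate surfaces are flat and motionless.

```latex
% Editorial check (assumption: s = z): the s-surfaces neither tilt nor move,
% so sigma_1 = sigma_2 = 0 and w_s = 0, and the dia-surface velocity reduces
% to the third component of the velocity:
\begin{equation*}
\omega = w - w_s - \sigma_1\,u - \sigma_2\,v = w
\end{equation*}
```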
@@ -518,6 +516,6 @@
\label{sec:A_tracer}
The tracer equation is obtained using the same calculation as for the continuity
equation and then regrouping the time derivative terms on the left-hand side:
+The tracer equation is obtained using the same calculation as for the continuity equation and then
+regrouping the time derivative terms on the left-hand side:
\begin{multline} \label{apdx:A_tracer}
@@ -531,6 +529,6 @@
The expression for the advection term is a direct consequence of (A.4), the
expression of the 3D divergence in the $s$-coordinates established above.
+The expression for the advection term is a direct consequence of (A.4),
+the expression of the 3D divergence in the $s$-coordinates established above.
\end{document}
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_B.tex
===================================================================
--- NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_B.tex	(revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_B.tex (revision 10368)
@@ -19,6 +19,5 @@
\subsubsection*{In z-coordinates}
In $z$-coordinates, the horizontal/vertical second order tracer diffusion operator
is given by:
+In $z$-coordinates, the horizontal/vertical second order tracer diffusion operator is given by:
\begin{eqnarray} \label{apdx:B1}
&D^T = \frac{1}{e_1 \, e_2} \left[
@@ -30,7 +29,7 @@
\subsubsection*{In generalized vertical coordinates}
In $s$-coordinates, we defined the slopes of $s$-surfaces, $\sigma_1$ and
$\sigma_2$ by \autoref{apdx:A_s_slope} and the vertical/horizontal ratio of diffusion
coefficients by $\epsilon = A^{vT} / A^{lT}$. The diffusion operator is given by:
+In $s$-coordinates, we defined the slopes of $s$-surfaces, $\sigma_1$ and $\sigma_2$ by \autoref{apdx:A_s_slope} and
+the vertical/horizontal ratio of diffusion coefficients by $\epsilon = A^{vT} / A^{lT}$.
+The diffusion operator is given by:
\begin{equation} \label{apdx:B2}
@@ -56,12 +55,12 @@
\end{subequations}
Equation \autoref{apdx:B2} is obtained from \autoref{apdx:B1} without any
additional assumption. Indeed, for the special case $k=z$ and thus $e_3 =1$,
we introduce an arbitrary vertical coordinate $s = s (i,j,z)$ as in \autoref{apdx:A}
and use \autoref{apdx:A_s_slope} and \autoref{apdx:A_s_chain_rule}.
Since no cross horizontal derivative $\partial _i \partial _j $ appears in
\autoref{apdx:B1}, the ($i$,$z$) and ($j$,$z$) planes are independent.
The derivation can then be demonstrated for the ($i$,$z$)~$\to$~($i$,$s$)
transformation without any loss of generality:
+Equation \autoref{apdx:B2} is obtained from \autoref{apdx:B1} without any additional assumption.
+Indeed, for the special case $k=z$ and thus $e_3 =1$,
+we introduce an arbitrary vertical coordinate $s = s (i,j,z)$ as in \autoref{apdx:A} and
+use \autoref{apdx:A_s_slope} and \autoref{apdx:A_s_chain_rule}.
+Since no cross horizontal derivative $\partial _i \partial _j $ appears in \autoref{apdx:B1},
+the ($i$,$z$) and ($j$,$z$) planes are independent.
+The derivation can then be demonstrated for the ($i$,$z$)~$\to$~($i$,$s$) transformation without
+any loss of generality:
\begin{subequations}
@@ -143,7 +142,8 @@
\subsubsection*{In z-coordinates}
The iso/diapycnal diffusive tensor $\textbf {A}_{\textbf I}$ expressed in the ($i$,$j$,$k$)
curvilinear coordinate system in which the equations of the ocean circulation model are
formulated, takes the following form \citep{Redi_JPO82}:
+The iso/diapycnal diffusive tensor $\textbf {A}_{\textbf I}$ expressed in
+the ($i$,$j$,$k$) curvilinear coordinate system in which
+the equations of the ocean circulation model are formulated,
+takes the following form \citep{Redi_JPO82}:
\begin{equation} \label{apdx:B3}
@@ -155,6 +155,5 @@
\end{array} }} \right]
\end{equation}
where ($a_1$, $a_2$) are the isopycnal slopes in ($\textbf{i}$,
$\textbf{j}$) directions, relative to geopotentials:
+where ($a_1$, $a_2$) are the isopycnal slopes in ($\textbf{i}$, $\textbf{j}$) directions, relative to geopotentials:
\begin{equation*}
a_1 =\frac{e_3 }{e_1 }\left( {\frac{\partial \rho }{\partial i}} \right)\left( {\frac{\partial \rho }{\partial k}} \right)^{-1}
@@ -164,6 +163,6 @@
\end{equation*}
In practice, isopycnal slopes are generally less than $10^{-2}$ in the ocean, so
$\textbf {A}_{\textbf I}$ can be simplified appreciably \citep{Cox1987}:
+In practice, isopycnal slopes are generally less than $10^{-2}$ in the ocean,
+so $\textbf {A}_{\textbf I}$ can be simplified appreciably \citep{Cox1987}:
\begin{subequations} \label{apdx:B4}
\begin{equation} \label{apdx:B4a}
@@ -183,15 +182,13 @@
Physically, the full tensor \autoref{apdx:B3}
represents strong isoneutral diffusion on a plane parallel to the isoneutral
surface and weak dianeutral diffusion perpendicular to this plane.
However, the approximate `weak-slope' tensor \autoref{apdx:B4a} represents strong
diffusion along the isoneutral surface, with weak
\emph{vertical} diffusion -- the principal axes of the tensor are no
longer orthogonal. This simplification also decouples
the ($i$,$z$) and ($j$,$z$) planes of the tensor. The weak-slope operator therefore takes the same
form, \autoref{apdx:B4}, as \autoref{apdx:B2}, the diffusion operator for geopotential
diffusion written in non-orthogonal $i,j,s$-coordinates. Written out
explicitly,
+Physically, the full tensor \autoref{apdx:B3} represents strong isoneutral diffusion on a plane parallel to
+the isoneutral surface and weak dianeutral diffusion perpendicular to this plane.
+However,
+the approximate `weak-slope' tensor \autoref{apdx:B4a} represents strong diffusion along the isoneutral surface,
+with weak \emph{vertical} diffusion -- the principal axes of the tensor are no longer orthogonal.
+This simplification also decouples the ($i$,$z$) and ($j$,$z$) planes of the tensor.
+The weak-slope operator therefore takes the same form, \autoref{apdx:B4}, as \autoref{apdx:B2},
+the diffusion operator for geopotential diffusion written in non-orthogonal $i,j,s$-coordinates.
+Written out explicitly,
\begin{multline} \label{apdx:B_ldfiso}
@@ -204,7 +201,7 @@
The isopycnal diffusion operator \autoref{apdx:B4},
\autoref{apdx:B_ldfiso} conserves tracer quantity and dissipates its
square. The demonstration of the first property is trivial as \autoref{apdx:B4} is the divergence
of fluxes. Let us demonstrate the second one:
+\autoref{apdx:B_ldfiso} conserves tracer quantity and dissipates its square.
+The demonstration of the first property is trivial as \autoref{apdx:B4} is the divergence of fluxes.
+Let us demonstrate the second one:
\begin{equation*}
\iiint\limits_D T\;\nabla .\left( {\textbf{A}}_{\textbf{I}} \nabla T \right)\,dv
@@ -229,12 +226,13 @@
\end{subequations}
\addtocounter{equation}{1}
 the property becomes obvious.
+the property becomes obvious.
\subsubsection*{In generalized vertical coordinates}
Because the weak-slope operator \autoref{apdx:B4}, \autoref{apdx:B_ldfiso} is decoupled
in the ($i$,$z$) and ($j$,$z$) planes, it may be transformed into
generalized $s$-coordinates in the same way as \autoref{sec:B_1} was transformed into
\autoref{sec:B_2}. The resulting operator then takes the simple form
+Because the weak-slope operator \autoref{apdx:B4},
+\autoref{apdx:B_ldfiso} is decoupled in the ($i$,$z$) and ($j$,$z$) planes,
+it may be transformed into generalized $s$-coordinates in the same way as
+\autoref{sec:B_1} was transformed into \autoref{sec:B_2}.
+The resulting operator then takes the simple form
\begin{equation} \label{apdx:B_ldfiso_s}
@@ -249,6 +247,6 @@
\end{equation}
where ($r_1$, $r_2$) are the isopycnal slopes in ($\textbf{i}$,
$\textbf{j}$) directions, relative to $s$-coordinate surfaces:
+where ($r_1$, $r_2$) are the isopycnal slopes in ($\textbf{i}$, $\textbf{j}$) directions,
+relative to $s$-coordinate surfaces:
\begin{equation*}
r_1 =\frac{e_3 }{e_1 }\left( {\frac{\partial \rho }{\partial i}} \right)\left( {\frac{\partial \rho }{\partial s}} \right)^{-1}
@@ -258,9 +256,7 @@
\end{equation*}
To prove \autoref{apdx:B5} by direct re-expression of \autoref{apdx:B_ldfiso} is
straightforward, but laborious. An easier way is first to note (by reversing the
derivation of \autoref{sec:B_2} from \autoref{sec:B_1}) that the
weak-slope operator may be \emph{exactly} re-expressed in
non-orthogonal $i,j,\rho$-coordinates as
+To prove \autoref{apdx:B5} by direct re-expression of \autoref{apdx:B_ldfiso} is straightforward, but laborious.
+An easier way is first to note (by reversing the derivation of \autoref{sec:B_2} from \autoref{sec:B_1}) that
+the weak-slope operator may be \emph{exactly} re-expressed in non-orthogonal $i,j,\rho$-coordinates as
\begin{equation} \label{apdx:B5}
@@ -273,13 +269,11 @@
\end{array} }} \right).
\end{equation}
Then direct transformation from $i,j,\rho$-coordinates to
$i,j,s$-coordinates gives \autoref{apdx:B_ldfiso_s} immediately.

Note that the weak-slope approximation is only made in
transforming from the (rotated, orthogonal) isoneutral axes to the
non-orthogonal $i,j,\rho$-coordinates. The further transformation
into $i,j,s$-coordinates is exact, whatever the steepness of
the $s$-surfaces, in the same way as the transformation of
horizontal/vertical Laplacian diffusion in $z$-coordinates,
+Then direct transformation from $i,j,\rho$-coordinates to $i,j,s$-coordinates gives
+\autoref{apdx:B_ldfiso_s} immediately.
+
+Note that the weak-slope approximation is only made in transforming from
+the (rotated, orthogonal) isoneutral axes to the non-orthogonal $i,j,\rho$-coordinates.
+The further transformation into $i,j,s$-coordinates is exact, whatever the steepness of the $s$-surfaces,
+in the same way as the transformation of horizontal/vertical Laplacian diffusion in $z$-coordinates,
\autoref{sec:B_1} onto $s$-coordinates is exact, however steep the $s$-surfaces.
@@ -291,7 +285,7 @@
\label{sec:B_3}
The second order momentum diffusion operator (Laplacian) in the $z$-coordinate
is found by applying \autoref{eq:PE_lap_vector}, the expression for the Laplacian
of a vector, to the horizontal velocity vector:
+The second order momentum diffusion operator (Laplacian) in the $z$-coordinate is found by
+applying \autoref{eq:PE_lap_vector}, the expression for the Laplacian of a vector,
+to the horizontal velocity vector:
\begin{align*}
\Delta {\textbf{U}}_h
@@ -329,16 +323,16 @@
\end{array} }} \right)
\end{align*}
Using \autoref{eq:PE_div}, the definition of the horizontal divergence, the third
component of the second vector is obviously zero and thus:
+Using \autoref{eq:PE_div}, the definition of the horizontal divergence,
+the third component of the second vector is obviously zero and thus:
\begin{equation*}
\Delta {\textbf{U}}_h = \nabla _h \left( \chi \right)  \nabla _h \times \left( \zeta \right) + \frac {1}{e_3 } \frac {\partial }{\partial k} \left( {\frac {1}{e_3 } \frac{\partial {\textbf{ U}}_h }{\partial k}} \right)
\end{equation*}
Note that this operator ensures a full separation between the vorticity and horizontal
divergence fields (see \autoref{apdx:C}). It is only equal to a Laplacian
applied to each component in Cartesian coordinates, not on the sphere.

The horizontal/vertical second order (Laplacian type) operator used to diffuse
horizontal momentum in the $z$-coordinate therefore takes the following form:
+Note that this operator ensures a full separation between
+the vorticity and horizontal divergence fields (see \autoref{apdx:C}).
+It is only equal to a Laplacian applied to each component in Cartesian coordinates, not on the sphere.
+
+The horizontal/vertical second order (Laplacian type) operator used to diffuse horizontal momentum in
+the $z$-coordinate therefore takes the following form:
\begin{equation} \label{apdx:B_Lap_U}
{\textbf{D}}^{\textbf{U}} =
@@ -360,9 +354,9 @@
\end{align*}
Nota Bene: introducing a rotation in \autoref{apdx:B_Lap_U} does not lead to a
useful expression for the iso/diapycnal Laplacian operator in the $z$-coordinate.
Similarly, we did not find an expression of practical use for the geopotential
horizontal/vertical Laplacian operator in the $s$-coordinate. Generally,
\autoref{apdx:B_Lap_U} is used in both $z$- and $s$-coordinate systems, that is,
a Laplacian diffusion is applied to momentum along the coordinate directions.
+Nota Bene: introducing a rotation in \autoref{apdx:B_Lap_U} does not lead to
+a useful expression for the iso/diapycnal Laplacian operator in the $z$-coordinate.
+Similarly, we did not find an expression of practical use for
+the geopotential horizontal/vertical Laplacian operator in the $s$-coordinate.
+Generally, \autoref{apdx:B_Lap_U} is used in both $z$- and $s$-coordinate systems,
+that is, a Laplacian diffusion is applied to momentum along the coordinate directions.
\end{document}
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_C.tex
===================================================================
--- NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_C.tex	(revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_C.tex (revision 10368)
@@ -22,5 +22,5 @@
\label{sec:C.0}
Notation used in this appendix in the demonstrations:
+Notation used in this appendix in the demonstrations:
fluxes at the faces of a $T$-box:
@@ -37,7 +37,7 @@
$dv=e_1\,e_2\,e_3 \,di\,dj\,dk$ is the volume element, with only $e_3$ that depends on time.
$D$ and $S$ are the ocean domain volume and surface, respectively.
No wetting/drying is allow ($i.e.$ $\frac{\partial S}{\partial t} = 0$)
Let $k_s$ and $k_b$ be the ocean surface and bottom, resp.
+$D$ and $S$ are the ocean domain volume and surface, respectively.
+No wetting/drying is allow ($i.e.$ $\frac{\partial S}{\partial t} = 0$).
+Let $k_s$ and $k_b$ be the ocean surface and bottom, resp.
($i.e.$ $s(k_s) = \eta$ and $s(k_b)=-H$, where $H$ is the bottom depth).
\begin{flalign*}
@@ 60,6 +60,7 @@
= \int_D { \frac{1}{e_3} \partial_t \left( e_3 \, Q \right) dv } =0
\end{equation*}
equation of evolution of $Q$ written as the time evolution of the vertical content of $Q$
like for tracers, or momentum in flux form, the quadratic quantity $\frac{1}{2}Q^2$ is conserved when :
+When the equation of evolution of $Q$ is written as
+the time evolution of the vertical content of $Q$ (as for tracers, or momentum in flux form),
+the quadratic quantity $\frac{1}{2}Q^2$ is conserved when:
\begin{flalign*}
\partial_t \left( \int_D{ \frac{1}{2} \,Q^2\;dv } \right)
@@ 74,6 +75,6 @@
 \frac{1}{2} \int_D { \frac{Q^2}{e_3} \partial_t (e_3) \;dv }
\end{flalign}
equation of evolution of $Q$ written as the time evolution of $Q$
like for momentum in vector invariant form, the quadratic quantity $\frac{1}{2}Q^2$ is conserved when :
+When the equation of evolution of $Q$ is written as the time evolution of $Q$
+(as for momentum in vector invariant form),
+the quadratic quantity $\frac{1}{2}Q^2$ is conserved when:
\begin{flalign*}
\partial_t \left( \int_D {\frac{1}{2} Q^2\;dv} \right)
@@ 82,5 +83,5 @@
+ \int_D { \frac{1}{2} Q^2 \, \partial_t e_3 \;e_1e_2\;di\,dj\,dk } \\
\end{flalign*}
that is in a more compact form :
+that is, in a more compact form:
\begin{flalign} \label{eq:Q2_vect}
\partial_t \left( \int_D {\frac{1}{2} Q^2\;dv} \right)
@@ 97,25 +98,23 @@
The discretization of pimitive equation in $s$coordinate ($i.e.$ time and space varying
vertical coordinate) must be chosen so that the discrete equation of the model satisfy
integral constrains on energy and enstrophy.
+The discretization of the primitive equations in $s$-coordinate ($i.e.$ a time and space varying vertical coordinate)
+must be chosen so that the discrete equations of the model satisfy integral constraints on energy and enstrophy.
Let us first establish those constraints in the continuous world.
The total energy ($i.e.$ kinetic plus potential energies) is conserved :
+The total energy ($i.e.$ kinetic plus potential energies) is conserved:
\begin{flalign} \label{eq:Tot_Energy}
\partial_t \left( \int_D \left( \frac{1}{2} {\textbf{U}_h}^2 + \rho \, g \, z \right) \;dv \right) = & 0
\end{flalign}
under the following assumptions: no dissipation, no forcing
(wind, buoyancy flux, atmospheric pressure variations), mass
conservation, and closed domain.

This equation can be transformed to obtain several subequalities.
The transformation for the advection term depends on whether
the vector invariant form or the flux form is used for the momentum equation.
Using \autoref{eq:Q2_vect} and introducing \autoref{apdx:A_dyn_vect} in \autoref{eq:Tot_Energy}
for the former form and
Using \autoref{eq:Q2_flux} and introducing \autoref{apdx:A_dyn_flux} in \autoref{eq:Tot_Energy}
for the latter form leads to:
+under the following assumptions: no dissipation, no forcing (wind, buoyancy flux, atmospheric pressure variations),
+mass conservation, and closed domain.
+
+This equation can be transformed to obtain several subequalities.
+The transformation for the advection term depends on whether the vector invariant form or
+the flux form is used for the momentum equation.
+Using \autoref{eq:Q2_vect} and introducing \autoref{apdx:A_dyn_vect} in
+\autoref{eq:Tot_Energy} for the former form and
+using \autoref{eq:Q2_flux} and introducing \autoref{apdx:A_dyn_flux} in
+\autoref{eq:Tot_Energy} for the latter form leads to:
\begin{subequations} \label{eq:E_tot}
@@ 348,5 +347,5 @@
Substituting the discrete expression of the time derivative of the velocity either in vector invariant,
leads to the discrete equivalent of the four equations \autoref{eq:E_tot_flux}.
+leads to the discrete equivalent of the four equations \autoref{eq:E_tot_flux}.
% 
@@ 356,8 +355,7 @@
\label{subsec:C_vor}
Let $q$, located at $f$points, be either the relative ($q=\zeta / e_{3f}$), or
the planetary ($q=f/e_{3f}$), or the total potential vorticity ($q=(\zeta +f) /e_{3f}$).
Two discretisation of the vorticity term (ENE and EEN) allows the conservation of
the kinetic energy.
+Let $q$, located at $f$points, be either the relative ($q=\zeta / e_{3f}$),
+or the planetary ($q=f/e_{3f}$), or the total potential vorticity ($q=(\zeta +f) /e_{3f}$).
+Two discretisations of the vorticity term (ENE and EEN) allow the conservation of the kinetic energy.
% 
% Vorticity Term with ENE scheme
@@ 366,5 +364,5 @@
\label{subsec:C_vorENE}
For the ENE scheme, the two components of the vorticity term are given by :
+For the ENE scheme, the two components of the vorticity term are given by:
\begin{equation*}
 e_3 \, q \;{\textbf{k}}\times {\textbf {U}}_h \equiv
@@ 377,8 +375,7 @@
\end{equation*}
This formulation does not conserve the enstrophy but it does conserve the
total kinetic energy. Indeed, the kinetic energy tendency associated to the
vorticity term and averaged over the ocean domain can be transformed as
follows:
+This formulation does not conserve the enstrophy but it does conserve the total kinetic energy.
+Indeed, the kinetic energy tendency associated with the vorticity term and
+averaged over the ocean domain can be transformed as follows:
\begin{flalign*}
&\int\limits_D  \left( e_3 \, q \;\textbf{k} \times \textbf{U}_h \right) \cdot \textbf{U}_h \; dv && \\
@@ 412,6 +409,5 @@
\end{aligned} } \right.
\end{equation}
where the indices $i_p$ and $j_p$ take the following value:
$i_p = 1/2$ or $1/2$ and $j_p = 1/2$ or $1/2$,
+where the indices $i_p$ and $j_p$ take the following values: $i_p = -1/2$ or $1/2$ and $j_p = -1/2$ or $1/2$,
and the vorticity triads, ${^i_j}\mathbb{Q}^{i_p}_{j_p}$, defined at $T$point, are given by:
\begin{equation} \tag{\ref{eq:Q_triads}}
@@ 420,5 +416,6 @@
\end{equation}
This formulation does conserve the total kinetic energy. Indeed,
+This formulation does conserve the total kinetic energy.
+Indeed,
\begin{flalign*}
&\int\limits_D  \textbf{U}_h \cdot \left( \zeta \;\textbf{k} \times \textbf{U}_h \right) \; dv && \\
@@ 473,6 +470,5 @@
\label{subsec:C_zad}
The change of Kinetic Energy (KE) due to the vertical advection is exactly
balanced by the change of KE due to the horizontal gradient of KE~:
+The change of Kinetic Energy (KE) due to the vertical advection is exactly balanced by
+the change of KE due to the horizontal gradient of KE:
\begin{equation*}
\int_D \textbf{U}_h \cdot \frac{1}{e_3 } \omega \partial_k \textbf{U}_h \;dv
@@ 480,9 +476,8 @@
+ \frac{1}{2} \int_D { \frac{{\textbf{U}_h}^2}{e_3} \partial_t ( e_3) \;dv } \\
\end{equation*}
Indeed, using successively \autoref{eq:DOM_di_adj} ($i.e.$ the skew symmetry
property of the $\delta$ operator) and the continuity equation, then
\autoref{eq:DOM_di_adj} again, then the commutativity of operators
$\overline {\,\cdot \,}$ and $\delta$, and finally \autoref{eq:DOM_mi_adj}
($i.e.$ the symmetry property of the $\overline {\,\cdot \,}$ operator)
+Indeed, using successively \autoref{eq:DOM_di_adj} ($i.e.$ the skew symmetry property of the $\delta$ operator)
+and the continuity equation, then \autoref{eq:DOM_di_adj} again,
+then the commutativity of operators $\overline {\,\cdot \,}$ and $\delta$, and finally \autoref{eq:DOM_mi_adj}
+($i.e.$ the symmetry property of the $\overline {\,\cdot \,}$ operator)
applied in the horizontal and vertical directions, it becomes:
\begin{flalign*}
@@ 543,9 +538,10 @@
\end{flalign*}
There is two main points here. First, the satisfaction of this property links the choice of
the discrete formulation of the vertical advection and of the horizontal gradient
of KE. Choosing one imposes the other. For example KE can also be discretized
as $1/2\,({\overline u^{\,i}}^2 + {\overline v^{\,j}}^2)$. This leads to the following
expression for the vertical advection:
+There are two main points here.
+First, the satisfaction of this property links the choice of the discrete formulation of the vertical advection and
+of the horizontal gradient of KE.
+Choosing one imposes the other.
+For example, KE can also be discretized as $1/2\,({\overline u^{\,i}}^2 + {\overline v^{\,j}}^2)$.
+This leads to the following expression for the vertical advection:
\begin{equation*}
\frac{1} {e_3 }\; \omega\; \partial_k \textbf{U}_h
@@ 557,10 +553,10 @@
\end{array}} } \right)
\end{equation*}
a formulation that requires an additional horizontal mean in contrast with
the one used in NEMO. Nine velocity points have to be used instead of 3.
+a formulation that requires an additional horizontal mean in contrast with the one used in NEMO.
+Nine velocity points have to be used instead of 3.
This is the reason why it has not been chosen.
Second, as soon as the chosen $s$coordinate depends on time, an extra constraint
arises on the time derivative of the volume at $u$ and $v$points:
+Second, as soon as the chosen $s$coordinate depends on time,
+an extra constraint arises on the time derivative of the volume at $u$ and $v$points:
\begin{flalign*}
e_{1u}\,e_{2u}\,\partial_t (e_{3u}) =\overline{ e_{1t}\,e_{2t}\;\partial_t (e_{3t}) }^{\,i+1/2} \\
@@ 583,13 +579,13 @@
\gmcomment{
A pressure gradient has no contribution to the evolution of the vorticity as the
curl of a gradient is zero. In the $z$coordinate, this property is satisfied locally
on a Cgrid with 2nd order finite differences (property \autoref{eq:DOM_curl_grad}).
+ A pressure gradient has no contribution to the evolution of the vorticity as the curl of a gradient is zero.
+ In the $z$coordinate, this property is satisfied locally on a Cgrid with 2nd order finite differences
+ (property \autoref{eq:DOM_curl_grad}).
}
When the equation of state is linear ($i.e.$ when an advectiondiffusion equation
for density can be derived from those of temperature and salinity) the change of
KE due to the work of pressure forces is balanced by the change of potential
energy due to buoyancy forces:
+When the equation of state is linear
+($i.e.$ when an advectiondiffusion equation for density can be derived from those of temperature and salinity)
+the change of KE due to the work of pressure forces is balanced by
+the change of potential energy due to buoyancy forces:
\begin{equation*}
 \int_D \left. \nabla p \right|_z \cdot \textbf{U}_h \;dv
@@ 598,8 +594,8 @@
\end{equation*}
This property can be satisfied in a discrete sense for both $z$ and $s$coordinates.
Indeed, defining the depth of a $T$point, $z_t$, as the sum of the vertical scale
factors at $w$points starting from the surface, the work of pressure forces can be
written as:
+This property can be satisfied in a discrete sense for both $z$ and $s$coordinates.
+Indeed, defining the depth of a $T$point, $z_t$,
+as the sum of the vertical scale factors at $w$points starting from the surface,
+the work of pressure forces can be written as:
\begin{flalign*}
& \int_D \left. \nabla p \right|_z \cdot \textbf{U}_h \;dv
@@ 658,6 +654,7 @@
\end{flalign*}
The first term is exactly the first term of the right hand side of \autoref{eq:KE+PE_vect_discrete}.
It remains to demonstrate that the last term, which is obviously a discrete analogue of
$\int_D \frac{p}{e_3} \partial_t (e_3)\;dv$ is equal to the last term of \autoref{eq:KE+PE_vect_discrete}.
+It remains to demonstrate that the last term,
+which is obviously a discrete analogue of $\int_D \frac{p}{e_3} \partial_t (e_3)\;dv$, is equal to
+the last term of \autoref{eq:KE+PE_vect_discrete}.
In other words, the following property must be satisfied:
\begin{flalign*}
@@ 666,5 +663,5 @@
\end{flalign*}
Let introduce $p_w$ the pressure at $w$point such that $\delta_k [p_w] =  \rho \,g\,e_{3t}$.
+Let us introduce $p_w$, the pressure at $w$-point, such that $\delta_k [p_w] = -\rho \,g\,e_{3t}$.
The right hand side of the above equation can be transformed as follows:
@@ 718,8 +715,7 @@
Note that this property strongly constrains the discrete expression of both
the depth of $T$points and of the term added to the pressure gradient in the
$s$coordinate. Nevertheless, it is almost never satisfied since a linear equation
of state is rarely used.
+Note that this property strongly constrains the discrete expression of both the depth of $T$points and
+of the term added to the pressure gradient in the $s$coordinate.
+Nevertheless, it is almost never satisfied since a linear equation of state is rarely used.
@@ 755,6 +751,6 @@
\end{flalign*}
Substituting the discrete expression of the time derivative of the velocity either in vector invariant or in flux form,
leads to the discrete equivalent of the
+Substituting the discrete expression of the time derivative of the velocity either in
+vector invariant or in flux form leads to the discrete equivalent of the ????
@@ 771,7 +767,8 @@
\label{subsec:C.3.3}
In flux from the vorticity term reduces to a Coriolis term in which the Coriolis
parameter has been modified to account for the ``metric'' term. This altered
Coriolis parameter is discretised at an fpoint. It is given by:
+In flux form, the vorticity term reduces to a Coriolis term in which
+the Coriolis parameter has been modified to account for the ``metric'' term.
+This altered Coriolis parameter is discretised at an $f$-point.
+It is given by:
\begin{equation*}
f+\frac{1} {e_1 e_2 } \left( v \frac{\partial e_2 } {\partial i} - u \frac{\partial e_1 } {\partial j}\right)\;
@@ 781,7 +778,7 @@
\end{equation*}
Either the ENE or EEN scheme is then applied to obtain the vorticity term in flux form.
It therefore conserves the total KE. The derivation is the same as for the
vorticity term in the vector invariant form (\autoref{subsec:C_vor}).
+Either the ENE or EEN scheme is then applied to obtain the vorticity term in flux form.
+It therefore conserves the total KE.
+The derivation is the same as for the vorticity term in the vector invariant form (\autoref{subsec:C_vor}).
% 
@@ 791,9 +788,8 @@
\label{subsec:C.3.4}
The flux form operator of the momentum advection is evaluated using a
centered second order finite difference scheme. Because of the flux form,
the discrete operator does not contribute to the global budget of linear
momentum. Because of the centered second order scheme, it conserves
the horizontal kinetic energy, that is :
+The flux form operator of the momentum advection is evaluated using
+a centered second order finite difference scheme.
+Because of the flux form, the discrete operator does not contribute to the global budget of linear momentum.
+Because of the centered second order scheme, it conserves the horizontal kinetic energy, that is:
\begin{equation} \label{eq:C_ADV_KE_flux}
@@ 804,6 +800,6 @@
\end{equation}
Let us first consider the first term of the scalar product ($i.e.$ just the the terms
associated with the icomponent of the advection) :
+Let us first consider the first term of the scalar product
+($i.e.$ just the terms associated with the $i$-component of the advection):
\begin{flalign*}
&  \int_D u \cdot \nabla \cdot \left( \textbf{U}\,u \right) \; dv \\
@@ 845,6 +841,5 @@
\biggl\{ \left( \frac{1}{e_{3t}} \frac{\partial e_{3t}}{\partial t} \right) \; b_t \biggr\} &&& \\
\end{flalign*}
Applying similar manipulation applied to the second term of the scalar product
leads to :
+Applying a similar manipulation to the second term of the scalar product leads to:
\begin{equation*}
 \int_D \textbf{U}_h \cdot \left( {{\begin{array} {*{20}c}
@@ 854,13 +849,11 @@
\biggl\{ \left( \frac{1}{e_{3t}} \frac{\partial e_{3t}}{\partial t} \right) \; b_t \biggr\}
\end{equation*}
which is the discrete form of
$ \frac{1}{2} \int_D u \cdot \nabla \cdot \left( \textbf{U}\,u \right) \; dv $.
+which is the discrete form of $ \frac{1}{2} \int_D u \cdot \nabla \cdot \left( \textbf{U}\,u \right) \; dv $.
\autoref{eq:C_ADV_KE_flux} is thus satisfied.
When the UBS scheme is used to evaluate the flux form momentum advection,
the discrete operator does not contribute to the global budget of linear momentum
(flux form). The horizontal kinetic energy is not conserved, but forced to decay
($i.e.$ the scheme is diffusive).
+When the UBS scheme is used to evaluate the flux form momentum advection,
+the discrete operator does not contribute to the global budget of linear momentum (flux form).
+The horizontal kinetic energy is not conserved, but forced to decay ($i.e.$ the scheme is diffusive).
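The diffusive character of an upwind-biased flux can be illustrated with a one-dimensional tracer analogue (a minimal Python sketch, not NEMO code; the periodic grid, the field, and the third-order upwind-biased interpolation below are assumptions of the illustration): with a constant positive velocity, the flux form still conserves the content exactly, while the quadratic quantity is forced to decay.

```python
import math

# 1-D periodic sketch: flux-form advection with a 3rd-order upwind-biased
# interpolation at i+1/2, assuming u > 0:
#   tau[i+1/2] = (-T[i-1] + 5*T[i] + 2*T[i+1]) / 6
# i.e. the centred value minus (1/6) of the curvature: a built-in diffusion.

n = 64
T = [math.sin(2.0 * math.pi * i / n) for i in range(n)]
u = 1.0  # constant, non-divergent velocity (u > 0)

tau = [(-T[i - 1] + 5.0 * T[i] + 2.0 * T[(i + 1) % n]) / 6.0 for i in range(n)]
flux = [u * t for t in tau]
tend = [-(flux[i] - flux[i - 1]) for i in range(n)]  # flux-form tendency

content_tend = sum(tend)                               # conserved (telescoping)
variance_tend = sum(T[i] * tend[i] for i in range(n))  # strictly negative: decay

print(abs(content_tend) < 1e-12, variance_tend < 0.0)
```

The centred part of the flux contributes nothing to the variance budget; only the curvature correction remains, with a definite sign.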
@@ 894,14 +887,15 @@
\end{equation}
The scheme does not allow but the conservation of the total kinetic energy but the conservation
of $q^2$, the potential enstrophy for a horizontally nondivergent flow ($i.e.$ when $\chi$=$0$).
Indeed, using the symmetry or skew symmetry properties of the operators ( \autoref{eq:DOM_mi_adj}
and \autoref{eq:DOM_di_adj}), it can be shown that:
+The scheme does not allow the conservation of the total kinetic energy but conserves $q^2$,
+the potential enstrophy for a horizontally non-divergent flow ($i.e.$ when $\chi$=$0$).
+Indeed, using the symmetry or skew symmetry properties of the operators
+(\autoref{eq:DOM_mi_adj} and \autoref{eq:DOM_di_adj}),
+it can be shown that:
\begin{equation} \label{eq:C_1.1}
\int_D {q\,\;{\textbf{k}}\cdot \frac{1} {e_3} \nabla \times \left( {e_3 \, q \;{\textbf{k}}\times {\textbf{U}}_h } \right)\;dv} \equiv 0
\end{equation}
where $dv=e_1\,e_2\,e_3 \; di\,dj\,dk$ is the volume element. Indeed, using
\autoref{eq:dynvor_ens}, the discrete form of the right hand side of \autoref{eq:C_1.1}
can be transformed as follow:
+where $dv=e_1\,e_2\,e_3 \; di\,dj\,dk$ is the volume element.
+Indeed, using \autoref{eq:dynvor_ens},
+the discrete form of the right hand side of \autoref{eq:C_1.1} can be transformed as follows:
\begin{flalign*}
&\int_D q \,\; \textbf{k} \cdot \frac{1} {e_3 } \nabla \times
@@ 955,5 +949,5 @@
\end{aligned} } \right.
\end{equation}
where the indices $i_p$ and $k_p$ take the following value:
+where the indices $i_p$ and $j_p$ take the following values:
$i_p = -1/2$ or $1/2$ and $j_p = -1/2$ or $1/2$,
and the vorticity triads, ${^i_j}\mathbb{Q}^{i_p}_{j_p}$, defined at $T$point, are given by:
@@ 966,7 +960,8 @@
This formulation does conserve the potential enstrophy for a horizontally nondivergent flow ($i.e.$ $\chi=0$).
Let consider one of the vorticity triad, for example ${^{i}_j}\mathbb{Q}^{+1/2}_{+1/2} $,
similar manipulation can be done for the 3 others. The discrete form of the right hand
side of \autoref{eq:C_1.1} applied to this triad only can be transformed as follow:
+Let us consider one of the vorticity triads, for example ${^{i}_j}\mathbb{Q}^{+1/2}_{+1/2}$;
+similar manipulations can be done for the three others.
+The discrete form of the right hand side of \autoref{eq:C_1.1} applied to
+this triad only can be transformed as follows:
\begin{flalign*}
@@ 1020,13 +1015,14 @@
All the numerical schemes used in NEMO are written such that the tracer content
is conserved by the internal dynamics and physics (equations in flux form).
For advection, only the CEN2 scheme ($i.e.$ $2^{nd}$ order finite different scheme)
conserves the global variance of tracer. Nevertheless the other schemes ensure
that the global variance decreases ($i.e.$ they are at least slightly diffusive).
For diffusion, all the schemes ensure the decrease of the total tracer variance,
except the isoneutral operator. There is generally no strict conservation of mass,
as the equation of state is non linear with respect to $T$ and $S$. In practice,
the mass is conserved to a very high accuracy.
+All the numerical schemes used in NEMO are written such that the tracer content is conserved by
+the internal dynamics and physics (equations in flux form).
+For advection,
+only the CEN2 scheme ($i.e.$ $2^{nd}$ order finite difference scheme) conserves the global variance of tracer.
+Nevertheless, the other schemes ensure that the global variance decreases
+($i.e.$ they are at least slightly diffusive).
+For diffusion, all the schemes ensure the decrease of the total tracer variance, except the isoneutral operator.
+There is generally no strict conservation of mass,
+as the equation of state is non-linear with respect to $T$ and $S$.
+In practice, the mass is conserved to a very high accuracy.
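The two conservation statements above can be checked in a minimal one-dimensional sketch (plain Python, not NEMO code; the periodic unit grid, tracer field, and constant non-divergent velocity are assumptions of the illustration): the flux-form tendency telescopes to zero for the tracer content, and with the CEN2 interpolation $\tau_u = \overline{T}^{\,i+1/2}$ the variance tendency also vanishes.

```python
import math

# 1-D periodic, flux-form advection with the CEN2 (2nd-order centred)
# interpolation tau[i+1/2] = (T[i] + T[i+1]) / 2 and a constant
# (hence non-divergent) velocity u.

n = 64
T = [math.sin(2.0 * math.pi * i / n) + 2.0 for i in range(n)]
u = 1.0

tau = [0.5 * (T[i] + T[(i + 1) % n]) for i in range(n)]  # value at i+1/2
flux = [u * t for t in tau]
tend = [-(flux[i] - flux[i - 1]) for i in range(n)]      # -delta_i(flux)

content_tend = sum(tend)                               # telescopes to zero
variance_tend = sum(T[i] * tend[i] for i in range(n))  # zero for CEN2 only

print(abs(content_tend) < 1e-12, abs(variance_tend) < 1e-12)
```

Content conservation holds for any interpolation (the flux sum telescopes on a periodic grid); the variance identity relies specifically on the centred choice, mirroring the skew-symmetry argument used in the demonstrations of this appendix.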
% 
% Advection Term
@@ 1049,7 +1045,8 @@
Whatever the advection scheme considered it conserves of the tracer content as all
the scheme are written in flux form. Indeed, let $T$ be the tracer and $\tau_u$, $\tau_v$,
and $\tau_w$ its interpolated values at velocity point (whatever the interpolation is),
+Whatever the advection scheme considered, it conserves the tracer content as
+all the schemes are written in flux form.
+Indeed, let $T$ be the tracer and $\tau_u$, $\tau_v$, and $\tau_w$ its interpolated values at velocity points
+(whatever the interpolation is),
the conservation of the tracer content due to the advection tendency is obtained as follows:
\begin{flalign*}
@@ 1067,7 +1064,6 @@
\end{flalign*}
The conservation of the variance of tracer due to the advection tendency
can be achieved only with the CEN2 scheme, $i.e.$ when
$\tau_u= \overline T^{\,i+1/2}$, $\tau_v= \overline T^{\,j+1/2}$, and $\tau_w= \overline T^{\,k+1/2}$.
+The conservation of the variance of tracer due to the advection tendency can be achieved only with the CEN2 scheme,
+$i.e.$ when $\tau_u= \overline T^{\,i+1/2}$, $\tau_v= \overline T^{\,j+1/2}$, and $\tau_w= \overline T^{\,k+1/2}$.
It can be demonstrated as follows:
\begin{flalign*}
@@ 1103,19 +1099,18 @@
The discrete formulation of the horizontal diffusion of momentum ensures the
conservation of potential vorticity and the horizontal divergence, and the
dissipation of the square of these quantities ($i.e.$ enstrophy and the
variance of the horizontal divergence) as well as the dissipation of the
horizontal kinetic energy. In particular, when the eddy coefficients are
horizontally uniform, it ensures a complete separation of vorticity and
horizontal divergence fields, so that diffusion (dissipation) of vorticity
(enstrophy) does not generate horizontal divergence (variance of the
horizontal divergence) and \textit{vice versa}.

These properties of the horizontal diffusion operator are a direct consequence
of properties \autoref{eq:DOM_curl_grad} and \autoref{eq:DOM_div_curl}.
When the vertical curl of the horizontal diffusion of momentum (discrete sense)
is taken, the term associated with the horizontal gradient of the divergence is
locally zero.
+The discrete formulation of the horizontal diffusion of momentum ensures
+the conservation of potential vorticity and the horizontal divergence,
+and the dissipation of the square of these quantities
+($i.e.$ enstrophy and the variance of the horizontal divergence) as well as
+the dissipation of the horizontal kinetic energy.
+In particular, when the eddy coefficients are horizontally uniform,
+it ensures a complete separation of vorticity and horizontal divergence fields,
+so that diffusion (dissipation) of vorticity (enstrophy) does not generate horizontal divergence
+(variance of the horizontal divergence) and \textit{vice versa}.
+
+These properties of the horizontal diffusion operator are a direct consequence of
+properties \autoref{eq:DOM_curl_grad} and \autoref{eq:DOM_div_curl}.
+When the vertical curl of the horizontal diffusion of momentum (discrete sense) is taken,
+the term associated with the horizontal gradient of the divergence is locally zero.
% 
@@ 1125,5 +1120,5 @@
\label{subsec:C.6.1}
The lateral momentum diffusion term conserves the potential vorticity :
+The lateral momentum diffusion term conserves the potential vorticity:
\begin{flalign*}
&\int \limits_D \frac{1} {e_3 } \textbf{k} \cdot \nabla \times
@@ 1211,6 +1206,5 @@
\label{subsec:C.6.3}
The lateral momentum diffusion term dissipates the enstrophy when the eddy
coefficients are horizontally uniform:
+The lateral momentum diffusion term dissipates the enstrophy when the eddy coefficients are horizontally uniform:
\begin{flalign*}
&\int\limits_D \zeta \; \textbf{k} \cdot \nabla \times
@@ 1236,9 +1230,7 @@
\label{subsec:C.6.4}
When the horizontal divergence of the horizontal diffusion of momentum
(discrete sense) is taken, the term associated with the vertical curl of the
vorticity is zero locally, due to \autoref{eq:DOM_div_curl}.
The resulting term conserves the $\chi$ and dissipates $\chi^2$
when the eddy coefficients are horizontally uniform.
+When the horizontal divergence of the horizontal diffusion of momentum (discrete sense) is taken,
+the term associated with the vertical curl of the vorticity is zero locally, due to \autoref{eq:DOM_div_curl}.
+The resulting term conserves $\chi$ and dissipates $\chi^2$ when the eddy coefficients are horizontally uniform.
\begin{flalign*}
& \int\limits_D \nabla_h \cdot
@@ 1291,7 +1283,7 @@
\label{sec:C.7}
As for the lateral momentum physics, the continuous form of the vertical diffusion
of momentum satisfies several integral constraints. The first two are associated
with the conservation of momentum and the dissipation of horizontal kinetic energy:
+As for the lateral momentum physics,
+the continuous form of the vertical diffusion of momentum satisfies several integral constraints.
+The first two are associated with the conservation of momentum and the dissipation of horizontal kinetic energy:
\begin{align*}
\int\limits_D \frac{1} {e_3 }\; \frac{\partial } {\partial k}
@@ 1306,5 +1298,6 @@
\end{align*}
The first property is obvious. The second results from:
+The first property is obvious.
+The second results from:
\begin{flalign*}
\int\limits_D
@@ 1326,5 +1319,6 @@
\end{flalign*}
The vorticity is also conserved. Indeed:
+The vorticity is also conserved.
+Indeed:
\begin{flalign*}
\int \limits_D
@@ 1346,6 +1340,5 @@
\end{flalign*}
If the vertical diffusion coefficient is uniform over the whole domain, the
enstrophy is dissipated, $i.e.$
+If the vertical diffusion coefficient is uniform over the whole domain, the enstrophy is dissipated, $i.e.$
\begin{flalign*}
\int\limits_D \zeta \, \textbf{k} \cdot \nabla \times
@@ 1378,8 +1371,7 @@
&\left[ \frac{A_u^{\,vm}} {e_{3uw}} \delta_{k+1/2} \left[ \delta_{j+1/2} \left[ e_{1u}\,u \right] \right] \right] \biggr\} &&\\
\end{flalign*}
Using the fact that the vertical diffusion coefficients are uniform, and that in
$z$coordinate, the vertical scale factors do not depend on $i$ and $j$ so
that: $e_{3f} =e_{3u} =e_{3v} =e_{3t} $ and $e_{3w} =e_{3uw} =e_{3vw} $,
it follows:
+Using the fact that the vertical diffusion coefficients are uniform,
+and that in $z$-coordinate, the vertical scale factors do not depend on $i$ and $j$ so that:
+$e_{3f} =e_{3u} =e_{3v} =e_{3t} $ and $e_{3w} =e_{3uw} =e_{3vw} $, it follows:
\begin{flalign*}
\equiv A^{\,vm} \sum\limits_{i,j,k} \zeta \;\delta_k
@@ 1398,7 +1390,6 @@
\left( \frac{A^{\,vm}} {e_3 }\; \frac{\partial \textbf{U}_h } {\partial k} \right) \right)\; dv = 0 &&&\\
\end{flalign*}
and the square of the horizontal divergence decreases ($i.e.$ the horizontal
divergence is dissipated) if the vertical diffusion coefficient is uniform over the
whole domain:
+and the square of the horizontal divergence decreases ($i.e.$ the horizontal divergence is dissipated) if
+the vertical diffusion coefficient is uniform over the whole domain:
\begin{flalign*}
@@ 1463,8 +1454,8 @@
\label{sec:C.8}
The numerical schemes used for tracer subgridscale physics are written such
that the heat and salt contents are conserved (equations in flux form).
Since a flux form is used to compute the temperature and salinity,
the quadratic form of these quantities ($i.e.$ their variance) globally tends to diminish.
+The numerical schemes used for tracer subgridscale physics are written such that
+the heat and salt contents are conserved (equations in flux form).
+Since a flux form is used to compute the temperature and salinity,
+the quadratic form of these quantities ($i.e.$ their variance) globally tends to diminish.
As for the advection term, there is conservation of mass only if the equation of state of seawater is linear.
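The decrease of the variance under a flux-form diffusion can be sketched in one dimension (illustrative Python, not NEMO code; the grid, tracer field, and uniform diffusivity are invented for the illustration): the tendency $\delta_i \left[ A\,\delta_{i+1/2}[T] \right]$ sums to zero, while multiplying by $T_i$ and summing gives $-A \sum_i \left( \delta_{i+1/2}[T] \right)^2 \le 0$.

```python
import math

# 1-D periodic sketch: flux-form Laplacian diffusion with a uniform
# diffusivity A. Content is conserved exactly; variance can only decrease.

n = 64
T = [math.sin(2.0 * math.pi * i / n) + 0.5 * math.sin(6.0 * math.pi * i / n)
     for i in range(n)]
A = 0.1  # uniform eddy diffusivity (arbitrary value for the sketch)

dflux = [A * (T[(i + 1) % n] - T[i]) for i in range(n)]  # flux at i+1/2
tend = [dflux[i] - dflux[i - 1] for i in range(n)]       # delta_i(flux)

content_tend = sum(tend)                               # exactly conserved
variance_tend = sum(T[i] * tend[i] for i in range(n))  # = -A * sum(dT^2) <= 0

print(abs(content_tend) < 1e-12, variance_tend < 0.0)
```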
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_D.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_D.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_D.tex (revision 10368)
@@ 13,10 +13,10 @@
A "model life" is more than ten years. Its software, composed of a few
hundred modules, is used by many people who are scientists or students
and do not necessarily know every aspect of computing very well.
Moreover, a well thoughtout program is easier to read and understand,
less difficult to modify, produces fewer bugs and is easier to maintain.
Therefore, it is essential that the model development follows some rules :
+A ``model life'' is more than ten years.
+Its software, composed of a few hundred modules, is used by many people who are scientists or students and
+do not necessarily know every aspect of computing very well.
+Moreover, a well thought-out program is easier to read and understand, less difficult to modify,
+produces fewer bugs and is easier to maintain.
+Therefore, it is essential that the model development follows some rules:
 - well planned and designed
@@ 32,11 +32,11 @@
 - flexible.
To satisfy part of these aims, \NEMO is written with a coding standard which
is close to the ECMWF rules, named DOCTOR \citep{Gibson_TR86}.
These rules present some advantages like :
+To satisfy part of these aims, \NEMO is written with a coding standard which is close to the ECMWF rules,
+named DOCTOR \citep{Gibson_TR86}.
+These rules present some advantages like:
 - to provide a well presented program
 to use rules for variable names which allow recognition of their type
+ - to use rules for variable names which allow recognition of their type
(integer, real, parameter, local or shared variables, etc.).
@@ 49,5 +49,5 @@
\label{sec:D_structure}
Each program begins with a set of headline comments containing :
+Each program begins with a set of headline comments containing:
 - the program title
@@ 65,11 +65,11 @@
 - the author name(s), the date of creation and any updates.
 Each program is split into several well separated sections and
+ - Each program is split into several well-separated sections and
subsections with an underlined title and specific labelled statements.
 - A program has no more than 200 to 300 lines.
A template of a module style can be found on the NEMO depository
in the following file : NEMO/OPA\_SRC/module\_example.
+A template of a module style can be found on the NEMO repository in the following file:
+NEMO/OPA\_SRC/module\_example.
% ================================================================
% Coding conventions
@@ 78,22 +78,23 @@
\label{sec:D_coding}
 Use of the universal language \textsc{Fortran} 90, and try to avoid obsolescent
features like statement functions, do not use GO TO and EQUIVALENCE statements.

 A continuation line begins with the character {\&} indented by three spaces
compared to the previous line, while the previous line ended with the character {\&}.

 All the variables must be declared. The code is usually compiled with implicit none.
+ - Use of the universal language \textsc{Fortran} 90; try to avoid obsolescent features like statement functions,
+and do not use GO TO and EQUIVALENCE statements.
+
+ - A continuation line begins with the character {\&} indented by three spaces compared to the previous line,
+while the previous line ended with the character {\&}.
+
+ - All the variables must be declared.
+The code is usually compiled with implicit none.
 Never use continuation lines in the declaration of a variable. When searching a
variable in the code through a \textit{grep} command, the declaration line will be found.

 In the declaration of a PUBLIC variable, the comment part at the end of the line
should start with the two characters "\verb?!:?". the following UNIX command, \\
+ - Never use continuation lines in the declaration of a variable.
+When searching a variable in the code through a \textit{grep} command, the declaration line will be found.
+
+ In the declaration of a PUBLIC variable, the comment part at the end of the line should start with
+the two characters "\verb?!:?".
+The following UNIX command, \\
\verb?grep var_name *90 | grep \!: ? \\
will display the module name and the line where the var\_name declaration is.
 Always use a three spaces indentation in DO loop, CASE, or IFELSEIFELSEENDIF
statements.
+ Always use a three-space indentation in DO loops, CASE, or IF-ELSEIF-ELSE-ENDIF statements.
 Use a space after a comma, except when it appears to separate the indices of an array.
@@ 109,8 +110,7 @@
\label{sec:D_naming}
The purpose of the naming conventions is to use prefix letters to classify
model variables. These conventions allow the variable type to be easily
known and rapidly identified. The naming conventions are summarised
in the Table below:
+The purpose of the naming conventions is to use prefix letters to classify model variables.
+These conventions allow the variable type to be easily known and rapidly identified.
+The naming conventions are summarised in the Table below:
@@ 192,6 +192,6 @@
%
N.B. Parameter here, in not only parameter in the \textsc{Fortran} acceptation, it is also used for code variables
that are read in namelist and should never been modified during a simulation.
+N.B. Parameter here is not only a parameter in the \textsc{Fortran} sense;
+it is also used for code variables that are read in a namelist and should never be modified during a simulation.
It is the case, for example, for the size of a domain (jpi,jpj,jpk).
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_E.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_E.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_E.tex (revision 10368)
@@ 10,7 +10,6 @@
\newpage
$\ $\newline % force a new line

 This appendix some on going consideration on algorithms used or planned to be used
in \NEMO.
+
+This appendix presents some ongoing considerations on algorithms used or planned to be used in \NEMO.
$\ $\newline % force a new line
@@ 22,8 +21,8 @@
\label{sec:TRA_adv_ubs}
The UBS advection scheme is an upstream biased third order scheme based on
an upstreambiased parabolic interpolation. It is also known as Cell Averaged
QUICK scheme (Quadratic Upstream Interpolation for Convective
Kinematics). For example, in the $i$direction :
+The UBS advection scheme is an upstream-biased third-order scheme based on
+an upstream-biased parabolic interpolation.
+It is also known as the Cell Averaged QUICK scheme (Quadratic Upstream Interpolation for Convective Kinematics).
+For example, in the $i$-direction:
\begin{equation} \label{eq:tra_adv_ubs2}
\tau _u^{ubs} = \left\{ \begin{aligned}
@@ 38,8 +37,8 @@
 \frac{1}{2}\, U_{i+1/2} \;\frac{1}{6} \;\delta_{i+1/2}[\tau"_i]
\end{equation}
where $U_{i+1/2} = e_{1u}\,e_{3u}\,u_{i+1/2}$ and
$\tau "_i =\delta _i \left[ {\delta _{i+1/2} \left[ \tau \right]} \right]$.
By choosing this expression for $\tau "$ we consider a fourth order approximation
of $\partial_i^2$ with a constant igrid spacing ($\Delta i=1$).
+where $U_{i+1/2} = e_{1u}\,e_{3u}\,u_{i+1/2}$ and
+$\tau "_i =\delta _i \left[ {\delta _{i+1/2} \left[ \tau \right]} \right]$.
+By choosing this expression for $\tau "$ we consider a fourth-order approximation of $\partial_i^2$ with
+a constant $i$-grid spacing ($\Delta i=1$).
Alternative choice: introduce the scale factors:
@@ 47,45 +46,40 @@
This results in a dissipatively dominant (i.e. hyperdiffusive) truncation
error \citep{Shchepetkin_McWilliams_OM05}. The overall performance of the
advection scheme is similar to that reported in \cite{Farrow1995}.
It is a relatively good compromise between accuracy and smoothness. It is
not a \emph{positive} scheme meaning false extrema are permitted but the
amplitude of such are significantly reduced over the centred second order
method. Nevertheless it is not recommended to apply it to a passive tracer
that requires positivity.

The intrinsic diffusion of UBS makes its use risky in the vertical direction
where the control of artificial diapycnal fluxes is of paramount importance.
It has therefore been preferred to evaluate the vertical flux using the TVD
scheme when \np{ln\_traadv\_ubs}\forcode{ = .true.}.

For stability reasons, in \autoref{eq:tra_adv_ubs}, the first term which corresponds
to a second order centred scheme is evaluated using the \textit{now} velocity
(centred in time) while the second term which is the diffusive part of the scheme,
is evaluated using the \textit{before} velocity (forward in time. This is discussed
by \citet{Webb_al_JAOT98} in the context of the Quick advection scheme. UBS and QUICK
schemes only differ by one coefficient. Substituting 1/6 with 1/8 in
(\autoref{eq:tra_adv_ubs}) leads to the QUICK advection scheme \citep{Webb_al_JAOT98}.
This option is not available through a namelist parameter, since the 1/6
coefficient is hard coded. Nevertheless it is quite easy to make the
substitution in \mdl{traadv\_ubs} module and obtain a QUICK scheme

NB 1: When a high vertical resolution $O(1m)$ is used, the model stability can
be controlled by vertical advection (not vertical diffusion which is usually
solved using an implicit scheme). Computer time can be saved by using a
timesplitting technique on vertical advection. This possibility have been
implemented and validated in ORCA05L301. It is not currently offered in the
current reference version.

NB 2 : In a forthcoming release four options will be proposed for the
vertical component used in the UBS scheme. $\tau _w^{ubs}$ will be
evaluated using either \textit{(a)} a centred $2^{nd}$ order scheme ,
or \textit{(b)} a TVD scheme, or \textit{(c)} an interpolation based on conservative
parabolic splines following \citet{Shchepetkin_McWilliams_OM05} implementation of UBS in ROMS,
or \textit{(d)} an UBS. The $3^{rd}$ case has dispersion properties similar to an
eightorder accurate conventional scheme.

NB 3 : It is straight forward to rewrite \autoref{eq:tra_adv_ubs} as follows:
+This results in a dissipatively dominant (i.e. hyper-diffusive) truncation error
+\citep{Shchepetkin_McWilliams_OM05}.
+The overall performance of the advection scheme is similar to that reported in \cite{Farrow1995}.
+It is a relatively good compromise between accuracy and smoothness.
+It is not a \emph{positive} scheme, meaning that false extrema are permitted,
+but their amplitude is significantly reduced compared to the centred second-order method.
+Nevertheless it is not recommended to apply it to a passive tracer that requires positivity.
+
+The intrinsic diffusion of UBS makes its use risky in the vertical direction where
+the control of artificial diapycnal fluxes is of paramount importance.
+It has therefore been preferred to evaluate the vertical flux using the TVD scheme when
+\np{ln\_traadv\_ubs}\forcode{ = .true.}.
+
+For stability reasons, in \autoref{eq:tra_adv_ubs}, the first term, which corresponds to
+a second-order centred scheme, is evaluated using the \textit{now} velocity (centred in time) while
+the second term, which is the diffusive part of the scheme, is evaluated using the \textit{before} velocity
+(forward in time).
+This is discussed by \citet{Webb_al_JAOT98} in the context of the QUICK advection scheme.
+The UBS and QUICK schemes differ by only one coefficient.
+Substituting 1/6 with 1/8 in (\autoref{eq:tra_adv_ubs}) leads to the QUICK advection scheme \citep{Webb_al_JAOT98}.
+This option is not available through a namelist parameter, since the 1/6 coefficient is hard coded.
+Nevertheless it is quite easy to make the substitution in the \mdl{traadv\_ubs} module and obtain a QUICK scheme.
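The coefficient substitution is easy to illustrate with a short Python sketch (illustrative only, not NEMO code): on a uniform grid, the face value for $u>0$ is the centred mean minus $c\,\tau "_i$, with $c=1/6$ for UBS and $c=1/8$ for QUICK; the QUICK value reconstructs a parabolic tracer profile exactly at the cell face.

```python
# Sketch (not NEMO code): tracer face value at i+1/2 for positive velocity,
# built as the 2nd-order centred mean minus c * tau'', where
# tau''_i = delta_i[ delta_{i+1/2}[tau] ] is the discrete curvature.
def face_value(tm1, t0, tp1, c):
    mean = 0.5 * (t0 + tp1)        # centred 2nd-order part
    curv = tp1 - 2.0 * t0 + tm1    # upstream-biased curvature tau''_i (u > 0)
    return mean - c * curv

# tau = x**2 sampled at x = -1, 0, 1; the exact face value at x = 1/2 is 0.25
quick = face_value(1.0, 0.0, 1.0, 1.0 / 8.0)   # QUICK: exact for a parabola
ubs = face_value(1.0, 0.0, 1.0, 1.0 / 6.0)     # UBS: slightly more diffusive
print(quick, ubs)
```

The 1/8 coefficient recovers the exact mid-face value of the parabola, while 1/6 departs from it slightly, which is precisely the extra upstream-biased diffusion discussed above.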
+
+NB 1: When a high vertical resolution $O(1m)$ is used, the model stability can be controlled by vertical advection
+(not vertical diffusion, which is usually solved using an implicit scheme).
+Computer time can be saved by using a time-splitting technique on vertical advection.
+This possibility has been implemented and validated in ORCA05L301.
+It is not offered in the current reference version.
+
+NB 2: In a forthcoming release, four options will be proposed for the vertical component used in the UBS scheme.
+$\tau _w^{ubs}$ will be evaluated using either \textit{(a)} a centred $2^{nd}$ order scheme,
+or \textit{(b)} a TVD scheme, or \textit{(c)} an interpolation based on conservative parabolic splines following
+the \citet{Shchepetkin_McWilliams_OM05} implementation of UBS in ROMS, or \textit{(d)} a UBS scheme.
+The $3^{rd}$ case has dispersion properties similar to an eighth-order accurate conventional scheme.
+
+NB 3: It is straightforward to rewrite \autoref{eq:tra_adv_ubs} as follows:
\begin{equation} \label{eq:tra_adv_ubs2}
\tau _u^{ubs} = \left\{ \begin{aligned}
@@ 102,11 +96,11 @@
\end{split}
\end{equation}
\autoref{eq:tra_adv_ubs2} has several advantages. First it clearly evidence that
the UBS scheme is based on the fourth order scheme to which is added an
upstream biased diffusive term. Second, this emphasises that the $4^{th}$ order
part have to be evaluated at \emph{now} time step, not only the $2^{th}$ order
part as stated above using \autoref{eq:tra_adv_ubs}. Third, the diffusive term is
in fact a biharmonic operator with a eddy coefficient with is simply proportional
to the velocity.
+\autoref{eq:tra_adv_ubs2} has several advantages.
+First, it clearly shows that the UBS scheme is based on the fourth-order scheme to which
+an upstream-biased diffusive term is added.
+Second, this emphasises that the $4^{th}$ order part has to be evaluated at the \emph{now} time step,
+not only the $2^{nd}$ order part as stated above using \autoref{eq:tra_adv_ubs}.
+Third, the diffusive term is in fact a biharmonic operator with an eddy coefficient which
+is simply proportional to the velocity.
laplacian diffusion:
@@ 135,5 +129,5 @@
with ${A_u^{lT}}^2 = \frac{1}{12} {e_{1u}}^3\ u$,
$i.e.$ $A_u^{lT} = \frac{1}{\sqrt{12}} \,e_{1u}\ \sqrt{ e_{1u}\,u\,}$
it comes :
+it follows that:
\begin{equation} \label{eq:tra_ldf_lap}
\begin{split}
@@ 163,5 +157,6 @@
\end{split}
\end{equation}
if the velocity is uniform ($i.e.$ $u=cst$) and choosing $\tau "_i =\frac{e_{1T}}{e_{2T}\,e_{3T}}\delta _i \left[ \frac{e_{2u} e_{3u} }{e_{1u} } \delta _{i+1/2}[\tau] \right]$
+if the velocity is uniform ($i.e.$ $u=cst$) and
+choosing $\tau "_i =\frac{e_{1T}}{e_{2T}\,e_{3T}}\delta _i \left[ \frac{e_{2u} e_{3u} }{e_{1u} } \delta _{i+1/2}[\tau] \right]$
sol 1 coefficient at Tpoint ( add $e_{1u}$ and $e_{1T}$ on both side of first $\delta$):
@@ 191,5 +186,7 @@
\label{sec:LF}
We adopt the following semidiscrete notation for time derivative. Given the values of a variable $q$ at successive time step, the time derivation and averaging operators at the mid time step are:
+We adopt the following semi-discrete notation for the time derivative.
+Given the values of a variable $q$ at successive time steps,
+the time derivative and averaging operators at the mid time step are:
\begin{subequations} \label{eq:dt_mt}
\begin{align}
@@ 198,7 +195,7 @@
\end{align}
\end{subequations}
As for space operator, the adjoint of the derivation and averaging time operators are
$\delta_t^*=\delta_{t+\rdt/2}$ and $\overline{\cdot}^{\,t\,*}= \overline{\cdot}^{\,t+\Delta/2}$
, respectively.
+As for the space operators,
+the adjoints of the derivation and averaging time operators are $\delta_t^*=\delta_{t+\rdt/2}$ and
+$\overline{\cdot}^{\,t\,*}= \overline{\cdot}^{\,t+\rdt/2}$, respectively.
The Leapfrog time stepping given by \autoref{eq:DOM_nxt} can be defined as:
@@ 208,8 +205,8 @@
= \frac{q^{t+\rdt} - q^{t-\rdt}}{2\rdt}
\end{equation}
Note that \autoref{chap:LF} shows that the leapfrog time step is $\rdt$, not $2\rdt$
as it can be found sometime in literature.
The leapFrog time stepping is a second order centered scheme. As such it respects
the quadratic invariant in integral forms, $i.e.$ the following continuous property,
+Note that \autoref{chap:LF} shows that the leapfrog time step is $\rdt$,
+not $2\rdt$ as is sometimes found in the literature.
+The leapfrog time stepping is a second-order centred scheme.
+As such it respects the quadratic invariant in integral form, $i.e.$ the following continuous property,
\begin{equation} \label{eq:Energy}
\int_{t_0}^{t_1} {q\, \frac{\partial q}{\partial t} \;dt}
@@ 217,5 +214,6 @@
= \frac{1}{2} \left( {q_{t_1}}^2 - {q_{t_0}}^2 \right) ,
\end{equation}
is satisfied in discrete form. Indeed,
+is satisfied in discrete form.
+Indeed,
\begin{equation} \begin{split}
\int_{t_0}^{t_1} {q\, \frac{\partial q}{\partial t} \;dt}
@@ 228,8 +226,8 @@
\equiv \frac{1}{2} \left( {q_{t_1}}^2 - {q_{t_0}}^2 \right)
\end{split} \end{equation}
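The discrete form of this property rests on a telescoping sum: in $\sum_t q^t\,(q^{t+\rdt}-q^{t-\rdt})$ the cross products cancel pairwise and only boundary terms survive. A minimal Python check (the sequence is arbitrary and stands in for $q$ at successive time steps; the $1/(2\rdt)$ factor cancels against the $\rdt$ of the time integral and is omitted):

```python
# Sketch: the leapfrog form of q * dq/dt telescopes in time, so its sum
# reduces to boundary terms only, mirroring (q_{t1}^2 - q_{t0}^2)/2.
q = [0.3, 1.2, -0.7, 2.5, 0.9, -1.1, 0.4]   # arbitrary values at successive steps

# interior sum of q^t (q^{t+dt} - q^{t-dt})
total = sum(q[t] * (q[t + 1] - q[t - 1]) for t in range(1, len(q) - 1))

# telescoping leaves only the products at the two ends
boundary = q[-2] * q[-1] - q[0] * q[1]
print(total, boundary)
```

The two numbers agree to round-off for any sequence, which is the discrete analogue of reducing the time integral to its boundary values.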
NB here pb of boundary condition when applying the adjoin! In space, setting to 0
the quantity in land area is sufficient to get rid of the boundary condition
(equivalently of the boundary value of the integration by part). In time this boundary
condition is not physical and \textbf{add something here!!!}
+NB: there is a problem of boundary conditions when applying the adjoint!
+In space, setting the quantity to 0 in land areas is sufficient to get rid of the boundary condition
+(equivalently, of the boundary value of the integration by parts).
+In time this boundary condition is not physical and \textbf{add something here!!!}
@@ 249,25 +247,26 @@
\subsection{Griffies isoneutral diffusion operator}
Let try to define a scheme that get its inspiration from the \citet{Griffies_al_JPO98}
scheme, but is formulated within the \NEMO framework ($i.e.$ using scale
factors rather than gridsize and having a position of $T$points that is not
necessary in the middle of vertical velocity points, see \autoref{fig:zgr_e3}).

In the formulation \autoref{eq:tra_ldf_iso} introduced in 1995 in OPA, the ancestor of \NEMO,
the offdiagonal terms of the small angle diffusion tensor contain several double
spatial averages of a gradient, for example $\overline{\overline{\delta_k \cdot}}^{\,i,k}$.
It is apparent that the combination of a $k$ average and a $k$ derivative of the tracer
allows for the presence of grid point oscillation structures that will be invisible
to the operator. These structures are \textit{computational modes}. They
will not be damped by the isoneutral operator, and even possibly amplified by it.
In other word, the operator applied to a tracer does not warranties the decrease of
its global average variance. To circumvent this, we have introduced a smoothing of
the slopes of the isoneutral surfaces (see \autoref{chap:LDF}). Nevertheless, this technique
works fine for $T$ and $S$ as they are active tracers ($i.e.$ they enter the computation
of density), but it does not work for a passive tracer. \citep{Griffies_al_JPO98} introduce
a different way to discretise the offdiagonal terms that nicely solve the problem.
The idea is to get rid of combinations of an averaged in one direction combined
with a derivative in the same direction by considering triads. For example in the
(\textbf{i},\textbf{k}) plane, the four triads are defined at the $(i,k)$ $T$point as follows:
+Let us try to define a scheme inspired by the \citet{Griffies_al_JPO98} scheme,
+but formulated within the \NEMO framework
+($i.e.$ using scale factors rather than grid sizes and having a position of $T$-points that
+is not necessarily in the middle of vertical velocity points, see \autoref{fig:zgr_e3}).
+
+In the formulation \autoref{eq:tra_ldf_iso} introduced in 1995 in OPA, the ancestor of \NEMO,
+the off-diagonal terms of the small angle diffusion tensor contain several double spatial averages of a gradient,
+for example $\overline{\overline{\delta_k \cdot}}^{\,i,k}$.
+It is apparent that the combination of a $k$ average and a $k$ derivative of the tracer allows for
+the presence of grid-point oscillation structures that will be invisible to the operator.
+These structures are \textit{computational modes}.
+They will not be damped by the isoneutral operator, and may even be amplified by it.
+In other words, the operator applied to a tracer does not guarantee the decrease of its global average variance.
+To circumvent this, we have introduced a smoothing of the slopes of the isoneutral surfaces
+(see \autoref{chap:LDF}).
+Nevertheless, this technique works fine for $T$ and $S$ as they are active tracers
+($i.e.$ they enter the computation of density), but it does not work for a passive tracer.
+\citet{Griffies_al_JPO98} introduce a different way to discretise the off-diagonal terms that
+nicely solves the problem.
+The idea is to get rid of the combination of an average in one direction with
+a derivative in the same direction by considering triads.
+For example in the (\textbf{i},\textbf{k}) plane, the four triads are defined at the $(i,k)$ $T$-point as follows:
\begin{equation} \label{eq:Gf_triads}
_i^k \mathbb{T}_{i_p}^{k_p} (T)
@@ 277,9 +276,9 @@
\right)
\end{equation}
where the indices $i_p$ and $k_p$ define the four triads and take the following value:
$i_p = 1/2$ or $1/2$ and $k_p = 1/2$ or $1/2$,
$b_u= e_{1u}\,e_{2u}\,e_{3u}$ is the volume of $u$cells,
+where the indices $i_p$ and $k_p$ define the four triads and take the values
+$i_p = -1/2$ or $1/2$ and $k_p = -1/2$ or $1/2$;
+$b_u= e_{1u}\,e_{2u}\,e_{3u}$ is the volume of $u$-cells,
$A_i^k$ is the lateral eddy diffusivity coefficient defined at $T$point,
and $_i^k \mathbb{R}_{i_p}^{k_p}$ is the slope associated with each triad :
+and $_i^k \mathbb{R}_{i_p}^{k_p}$ is the slope associated with each triad:
\begin{equation} \label{eq:Gf_slopes}
_i^k \mathbb{R}_{i_p}^{k_p}
@@ 288,20 +287,18 @@
{\left(\alpha / \beta \right)_i^k \ \delta_{k+k_p}[T^i ] - \delta_{k+k_p}[S^i ] }
\end{equation}
Note that in \autoref{eq:Gf_slopes} we use the ratio $\alpha / \beta$ instead of
multiplying the temperature derivative by $\alpha$ and the salinity derivative
by $\beta$. This is more efficient as the ratio $\alpha / \beta$ can to be
evaluated directly.

Note that in \autoref{eq:Gf_triads}, we chose to use ${b_u}_{\,i+i_p}^{\,k}$ instead of
${b_{uw}}_{\,i+i_p}^{\,k+k_p}$. This choice has been motivated by the decrease
of tracer variance and the presence of partial cell at the ocean bottom
(see \autoref{apdx:Gf_operator}).
+Note that in \autoref{eq:Gf_slopes} we use the ratio $\alpha / \beta$ instead of
+multiplying the temperature derivative by $\alpha$ and the salinity derivative by $\beta$.
+This is more efficient as the ratio $\alpha / \beta$ can be evaluated directly.
+
+Note that in \autoref{eq:Gf_triads}, we chose to use ${b_u}_{\,i+i_p}^{\,k}$ instead of ${b_{uw}}_{\,i+i_p}^{\,k+k_p}$.
+This choice has been motivated by the decrease of tracer variance and
+the presence of partial cells at the ocean bottom (see \autoref{apdx:Gf_operator}).
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht] \begin{center}
\includegraphics[width=0.70\textwidth]{Fig_ISO_triad}
\caption{ \protect\label{fig:ISO_triad}
Triads used in the Griffies's like isoneutral diffision scheme for
$u$component (upper panel) and $w$component (lower panel).}
+\caption{ \protect\label{fig:ISO_triad}
+  Triads used in the Griffies-like isoneutral diffusion scheme for
+  the $u$-component (upper panel) and $w$-component (lower panel).}
\end{center}
\end{figure}
@@ 309,5 +306,5 @@
The four isoneutral fluxes associated with the triads are defined at $T$point.
They take the following expression :
+They take the following expression:
\begin{flalign} \label{eq:Gf_fluxes}
\begin{split}
@@ 320,6 +317,6 @@
\end{flalign}
The resulting isoneutral fluxes at $u$ and $w$points are then given by the
sum of the fluxes that cross the $u$ and $w$face (\autoref{fig:ISO_triad}):
+The resulting isoneutral fluxes at $u$ and $w$points are then given by
+the sum of the fluxes that cross the $u$ and $w$face (\autoref{fig:ISO_triad}):
\begin{flalign} \label{eq:iso_flux}
\textbf{F}_{iso}(T)
@@ 350,6 +347,6 @@
% \end{pmatrix}
\end{flalign}
resulting in a isoneutral diffusion tendency on temperature given by the divergence
of the sum of all the four triad fluxes :
+resulting in an isoneutral diffusion tendency on temperature given by
+the divergence of the sum of all the four triad fluxes:
\begin{equation} \label{eq:Gf_operator}
D_l^T = \frac{1}{b_T} \sum_{\substack{i_p,\,k_p}} \left\{
@@ 359,9 +356,9 @@
where $b_T= e_{1T}\,e_{2T}\,e_{3T}$ is the volume of $T$cells.
This expression of the isoneutral diffusion has been chosen in order to satisfy
the following six properties:
+This expression of the isoneutral diffusion has been chosen in order to satisfy the following six properties:
\begin{description}
\item[$\bullet$ horizontal diffusion] The discretization of the diffusion operator
recovers the traditional fivepoint Laplacian in the limit of flat isoneutral direction :
+\item[$\bullet$ horizontal diffusion]
+  The discretization of the diffusion operator recovers the traditional five-point Laplacian in
+  the limit of flat isoneutral directions:
\begin{equation} \label{eq:Gf_property1a}
D_l^T = \frac{1}{b_T} \ \delta_{i}
@@ 371,11 +368,11 @@
\end{equation}
\item[$\bullet$ implicit treatment in the vertical] In the diagonal term associated
with the vertical divergence of the isoneutral fluxes (i.e. the term associated
with a second order vertical derivative) appears only tracer values associated
with a single water column. This is of paramount importance since it means
that the implicit in time algorithm for solving the vertical diffusion equation can
be used to evaluate this term. It is a necessity since the vertical eddy diffusivity
associated with this term,
+\item[$\bullet$ implicit treatment in the vertical]
+  In the diagonal term associated with the vertical divergence of the isoneutral fluxes
+  (i.e. the term associated with a second-order vertical derivative),
+  only tracer values associated with a single water column appear.
+  This is of paramount importance since it means that
+  the implicit-in-time algorithm for solving the vertical diffusion equation can be used to evaluate this term.
+  It is a necessity since the vertical eddy diffusivity associated with this term,
\begin{equation}
\sum_{\substack{i_p, \,k_p}} \left\{
@@ 385,6 +382,6 @@
can be quite large.
\item[$\bullet$ pure isoneutral operator] The isoneutral flux of locally referenced
potential density is zero, $i.e.$
+\item[$\bullet$ pure isoneutral operator]
+ The isoneutral flux of locally referenced potential density is zero, $i.e.$
\begin{align} \label{eq:Gf_property2}
\begin{matrix}
@@ 397,35 +394,35 @@
\end{matrix}
\end{align}
This result is trivially obtained using the \autoref{eq:Gf_triads} applied to $T$ and $S$
and the definition of the triads' slopes \autoref{eq:Gf_slopes}.

\item[$\bullet$ conservation of tracer] The isoneutral diffusion term conserve the
total tracer content, $i.e.$
+This result is trivially obtained using \autoref{eq:Gf_triads} applied to $T$ and $S$ and
+the definition of the triads' slopes \autoref{eq:Gf_slopes}.
+
+\item[$\bullet$ conservation of tracer]
+  The isoneutral diffusion term conserves the total tracer content, $i.e.$
\begin{equation} \label{eq:Gf_property1}
\sum_{i,j,k} \left\{ D_l^T \ b_T \right\} = 0
\end{equation}
This property is trivially satisfied since the isoneutral diffusive operator
is written in flux form.

\item[$\bullet$ decrease of tracer variance] The isoneutral diffusion term does
not increase the total tracer variance, $i.e.$
+This property is trivially satisfied since the isoneutral diffusive operator is written in flux form.
+
+\item[$\bullet$ decrease of tracer variance]
+ The isoneutral diffusion term does not increase the total tracer variance, $i.e.$
\begin{equation} \label{eq:Gf_property4}
\sum_{i,j,k} \left\{ T \ D_l^T \ b_T \right\} \leq 0
\end{equation}
The property is demonstrated in the \autoref{apdx:Gf_operator}. It is a
key property for a diffusion term. It means that the operator is also a dissipation
term, $i.e.$ it is a sink term for the square of the quantity on which it is applied.
It therfore ensure that, when the diffusivity coefficient is large enough, the field
on which it is applied become free of gridpoint noise.

\item[$\bullet$ selfadjoint operator] The isoneutral diffusion operator is selfadjoint,
$i.e.$
+The property is demonstrated in \autoref{apdx:Gf_operator}.
+It is a key property for a diffusion term.
+It means that the operator is also a dissipation term,
+$i.e.$ it is a sink term for the square of the quantity on which it is applied.
+It therefore ensures that, when the diffusivity coefficient is large enough,
+the field on which it is applied becomes free of grid-point noise.
+
+\item[$\bullet$ selfadjoint operator]
+ The isoneutral diffusion operator is selfadjoint, $i.e.$
\begin{equation} \label{eq:Gf_property5}
\sum_{i,j,k} \left\{ S \ D_l^T \ b_T \right\} = \sum_{i,j,k} \left\{ D_l^S \ T \ b_T \right\}
\end{equation}
In other word, there is no needs to develop a specific routine from the adjoint of this
operator. We just have to apply the same routine. This properties can be demonstrated
quite easily in a similar way the "non increase of tracer variance" property has been
proved (see \autoref{apdx:Gf_operator}).
+In other words, there is no need to develop a specific routine for the adjoint of this operator.
+We just have to apply the same routine.
+This property can be demonstrated quite easily in a similar way to how the "non increase of tracer variance" property
+has been proved (see \autoref{apdx:Gf_operator}).
\end{description}
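These properties can be illustrated numerically on a much simpler analogue. The sketch below is illustrative only: a one-dimensional flux-form diffusion operator ($i.e.$ the flat-slope limit, not the triad operator itself), for which conservation, variance decrease and self-adjointness with respect to the volume-weighted sum can be checked directly:

```python
import random

# Sketch: 1-D flux-form diffusion D(T) = (1/b) delta[ A delta(T) ] with
# no-flux boundaries, standing in for the flat-slope limit of the operator.
random.seed(0)
n = 16
b = [1.0 + 0.5 * random.random() for _ in range(n)]   # cell volumes b_T
A = [0.5 + random.random() for _ in range(n - 1)]     # face diffusivities

def D(T):
    flux = [A[i] * (T[i + 1] - T[i]) for i in range(n - 1)]   # face fluxes
    flux = [0.0] + flux + [0.0]                               # no boundary flux
    return [(flux[i + 1] - flux[i]) / b[i] for i in range(n)]

T = [random.random() for _ in range(n)]
S = [random.random() for _ in range(n)]
dT, dS = D(T), D(S)

conservation = sum(d * v for d, v in zip(dT, b))             # = 0 (flux form)
variance = sum(t * d * v for t, d, v in zip(T, dT, b))       # <= 0 (dissipation)
adjoint_gap = (sum(s * d * v for s, d, v in zip(S, dT, b))
               - sum(t * d * v for t, d, v in zip(T, dS, b)))  # = 0 (self-adjoint)
print(conservation, variance, adjoint_gap)
```

Conservation and self-adjointness hold to round-off because the operator is in flux form and symmetric under summation by parts; the variance sum equals $-\sum A\,(\delta T)^2$ over the faces, hence is never positive.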
@@ 437,11 +434,11 @@
\subsection{Eddy induced velocity and skew flux formulation}
When Gent and McWilliams [1990] diffusion is used (\key{traldf\_eiv} defined),
an additional advection term is added. The associated velocity is the so called
eddy induced velocity, the formulation of which depends on the slopes of iso
neutral surfaces. Contrary to the case of isoneutral mixing, the slopes used
here are referenced to the geopotential surfaces, $i.e.$ \autoref{eq:ldfslp_geo}
is used in $z$coordinate, and the sum \autoref{eq:ldfslp_geo}
+ \autoref{eq:ldfslp_iso} in $z^*$ or $s$coordinates.
+When Gent and McWilliams [1990] diffusion is used (\key{traldf\_eiv} defined),
+an additional advection term is added.
+The associated velocity is the so-called eddy-induced velocity,
+the formulation of which depends on the slopes of the isoneutral surfaces.
+Contrary to the case of isoneutral mixing, the slopes used here are referenced to the geopotential surfaces,
+$i.e.$ \autoref{eq:ldfslp_geo} is used in $z$-coordinate,
+and the sum \autoref{eq:ldfslp_geo} + \autoref{eq:ldfslp_iso} in $z^*$- or $s$-coordinates.
The eddy induced velocity is given by:
@@ 456,17 +453,17 @@
\end{split}
\end{equation}
where $A_{e}$ is the eddy induced velocity coefficient, and $r_i$ and $r_j$ the
slopes between the isoneutral and the geopotential surfaces.
+where $A_{e}$ is the eddy-induced velocity coefficient,
+and $r_i$ and $r_j$ are the slopes between the isoneutral and the geopotential surfaces.
%%gm wrong: to be modified with 2 2D streamfunctions
 In other words,
the eddy induced velocity can be derived from a vector streamfuntion, $\phi$, which
is given by $\phi = A_e\,\textbf{r}$ as $\textbf{U}^* = \textbf{k} \times \nabla \phi$
+In other words, the eddy-induced velocity can be derived from a vector streamfunction, $\phi$,
+which is given by $\phi = A_e\,\textbf{r}$ as $\textbf{U}^* = \textbf{k} \times \nabla \phi$.
%%end gm
A traditional way to implement this additional advection is to add it to the eulerian
velocity prior to compute the tracer advection. This allows us to take advantage of
all the advection schemes offered for the tracers (see \autoref{sec:TRA_adv}) and not just
a $2^{nd}$ order advection scheme. This is particularly useful for passive tracers
where \emph{positivity} of the advection scheme is of paramount importance.
+A traditional way to implement this additional advection is to add it to the Eulerian velocity prior to
+computing the tracer advection.
+This allows us to take advantage of all the advection schemes offered for the tracers
+(see \autoref{sec:TRA_adv}) and not just a $2^{nd}$ order advection scheme.
+This is particularly useful for passive tracers where
+\emph{positivity} of the advection scheme is of paramount importance.
% give here the expression using the triads. It is different from the one given in \autoref{eq:ldfeiv}
% see just below a copy of this equation:
@@ 490,9 +487,7 @@
\end{equation}
\citep{Griffies_JPO98} introduces another way to implement the eddy induced advection,
the socalled skew form. It is based on a transformation of the advective fluxes
using the nondivergent nature of the eddy induced velocity.
For example in the (\textbf{i},\textbf{k}) plane, the tracer advective fluxes can be
transformed as follows:
+\citet{Griffies_JPO98} introduces another way to implement the eddy-induced advection, the so-called skew form.
+It is based on a transformation of the advective fluxes using the non-divergent nature of the eddy-induced velocity.
+For example in the (\textbf{i},\textbf{k}) plane, the tracer advective fluxes can be transformed as follows:
\begin{flalign*}
\begin{split}
@@ 519,6 +514,6 @@
\end{split}
\end{flalign*}
and since the eddy induces velocity field is nodivergent, we end up with the skew
form of the eddy induced advective fluxes:
+and since the eddy-induced velocity field is non-divergent,
+we end up with the skew form of the eddy-induced advective fluxes:
\begin{equation} \label{eq:eiv_skew_continuous}
\textbf{F}_{eiv}^T = \begin{pmatrix}
@@ 527,10 +522,10 @@
\end{pmatrix}
\end{equation}
The tendency associated with eddy induced velocity is then simply the divergence
of the \autoref{eq:eiv_skew_continuous} fluxes. It naturally conserves the tracer
content, as it is expressed in flux form and, as the advective form, it preserve the
tracer variance. Another interesting property of \autoref{eq:eiv_skew_continuous}
form is that when $A=A_e$, a simplification occurs in the sum of the isoneutral
diffusion and eddy induced velocity terms:
+The tendency associated with the eddy-induced velocity is then simply the divergence of
+the \autoref{eq:eiv_skew_continuous} fluxes.
+It naturally conserves the tracer content, as it is expressed in flux form and,
+as with the advective form, it preserves the tracer variance.
+Another interesting property of the \autoref{eq:eiv_skew_continuous} form is that when $A=A_e$,
+a simplification occurs in the sum of the isoneutral diffusion and eddy-induced velocity terms:
\begin{flalign} \label{eq:eiv_skew+eiv_continuous}
\textbf{F}_{iso}^T + \textbf{F}_{eiv}^T &=
@@ 549,12 +544,12 @@
\end{pmatrix}
\end{flalign}
The horizontal component reduces to the one use for an horizontal laplacian
operator and the vertical one keep the same complexity, but not more. This property
has been used to reduce the computational time \citep{Griffies_JPO98}, but it is
not of practical use as usually $A \neq A_e$. Nevertheless this property can be used to
choose a discret form of \autoref{eq:eiv_skew_continuous} which is consistent with the
isoneutral operator \autoref{eq:Gf_operator}. Using the slopes \autoref{eq:Gf_slopes}
and defining $A_e$ at $T$point($i.e.$ as $A$, the eddy diffusivity coefficient),
the resulting discret form is given by:
+The horizontal component reduces to the one used for a horizontal Laplacian operator and
+the vertical one keeps the same complexity, but not more.
+This property has been used to reduce the computational time \citep{Griffies_JPO98},
+but it is not of practical use as usually $A \neq A_e$.
+Nevertheless this property can be used to choose a discrete form of \autoref{eq:eiv_skew_continuous} which
+is consistent with the isoneutral operator \autoref{eq:Gf_operator}.
+Using the slopes \autoref{eq:Gf_slopes} and defining $A_e$ at $T$-points ($i.e.$ as $A$,
+the eddy diffusivity coefficient), the resulting discrete form is given by:
\begin{equation} \label{eq:eiv_skew}
\textbf{F}_{eiv}^T \equiv \frac{1}{4} \left( \begin{aligned}
@@ 569,11 +564,11 @@
\end{equation}
Note that \autoref{eq:eiv_skew} is valid in $z$coordinate with or without partial cells.
In $z^*$ or $s$coordinate, the slope between the level and the geopotential surfaces
must be added to $\mathbb{R}$ for the discret form to be exact.

Such a choice of discretisation is consistent with the isoneutral operator as it uses the
same definition for the slopes. It also ensures the conservation of the tracer variance
(see Appendix \autoref{apdx:eiv_skew}), $i.e.$ it does not include a diffusive component
but is a "pure" advection term.
+In $z^*$ or $s$-coordinate, the slope between the level and the geopotential surfaces must be added to
+$\mathbb{R}$ for the discrete form to be exact.
+
+Such a choice of discretisation is consistent with the isoneutral operator as
+it uses the same definition for the slopes.
+It also ensures the conservation of the tracer variance (see Appendix \autoref{apdx:eiv_skew}),
+$i.e.$ it does not include a diffusive component but is a ``pure'' advection term.
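The variance-preserving character of the skew form lends itself to a quick numerical check. The following sketch is an illustration only, not the NEMO triad stencil: it assumes a uniform, doubly periodic grid, a constant hypothetical coefficient `A_e`, and plain centred differences in place of the model's finite-volume operators.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 32
T = rng.standard_normal((n, n))    # tracer field on a doubly periodic grid
A_e = 0.7                          # constant eddy coefficient (hypothetical value)

def ddx(f):
    """Centred difference in the first (horizontal) index, periodic."""
    return 0.5 * (np.roll(f, -1, axis=0) - np.roll(f, 1, axis=0))

def ddz(f):
    """Centred difference in the second (vertical) index, periodic."""
    return 0.5 * (np.roll(f, -1, axis=1) - np.roll(f, 1, axis=1))

# Skew (antisymmetric-tensor) flux: F = (-A_e dT/dz, +A_e dT/dx)
Fx = -A_e * ddz(T)
Fz = A_e * ddx(T)
tend = -(ddx(Fx) + ddz(Fz))        # tendency = minus the divergence of the skew flux

print(abs(tend.sum()), abs((T * tend).sum()))   # both ~0 (round-off)
```

Both sums vanish to round-off because the centred-difference operators are skew-adjoint on a periodic domain, mirroring the continuous integration by parts: the tendency conserves tracer content (flux form) and tracer variance (pure advection).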
@@ 591,5 +586,5 @@
This part will be moved in an Appendix.
The continuous property to be demonstrated is :
+The continuous property to be demonstrated is:
\begin{align*}
\int_D D_l^T \; T \;dv \leq 0
@@ 642,5 +637,9 @@
%
\allowdisplaybreaks
\intertext{The summation is done over all $i$ and $k$ indices, it is therefore possible to introduce a shift of $1$ either in $i$ or $k$ direction in order to regroup all the terms of the summation by triad at a ($i$,$k$) point. In other words, we regroup all the terms in the neighbourhood that contain a triad at the same ($i$,$k$) indices. It becomes: }
+ \intertext{The summation is done over all $i$ and $k$ indices;
+ it is therefore possible to introduce a shift of $1$ in either the $i$ or the $k$ direction in order to
+ regroup all the terms of the summation by triad at a given ($i$,$k$) point.
+ In other words, we regroup all the terms in the neighbourhood that contain a triad at the same ($i$,$k$) indices.
+ It becomes: }
%
&\equiv \sum_{i,k}
@@ 672,5 +671,7 @@
%
\allowdisplaybreaks
\intertext{Then outing in factor the triad in each of the four terms of the summation and substituting the triads by their expression given in \autoref{eq:Gf_triads}. It becomes: }
+ \intertext{Then, factoring out the triad in each of the four terms of the summation and
+ substituting the triads by their expression given in \autoref{eq:Gf_triads},
+ it becomes: }
%
&\equiv \sum_{i,k}
@@ 710,5 +711,6 @@
The last inequality is obviously obtained as we succeed in obtaining a negative summation of square quantities.
Note that, if instead of multiplying $D_l^T$ by $T$, we were using another tracer field, let say $S$, then the previous demonstration would have let to:
+Note that, if instead of multiplying $D_l^T$ by $T$ we were using another tracer field, say $S$,
+then the previous demonstration would have led to:
\begin{align*}
\int_D S \; D_l^T \;dv &\equiv \sum_{i,k} \left\{ S \ D_l^T \ b_T \right\} \\
@@ 729,5 +731,6 @@
&\equiv \sum_{i,k} \left\{ D_l^S \ T \ b_T \right\}
\end{align*}
This means that the isoneutral operator is selfadjoint. There is no need to develop a specific to obtain it.
+This means that the isoneutral operator is self-adjoint.
+There is no need to develop a specific adjoint operator to obtain it.
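Self-adjointness can be illustrated with a one-dimensional flux-form analogue. This is a hedged sketch: the operator `D`, the periodic grid, and the face diffusivities `K` are hypothetical stand-ins for the triad operator, not NEMO code.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
K = rng.uniform(0.1, 1.0, n)        # diffusivity at i+1/2 faces, periodic
T = rng.standard_normal(n)          # two arbitrary tracer fields
S = rng.standard_normal(n)

def D(f):
    """Flux-form diffusion: delta_i of ( K_{i+1/2} * delta_{i+1/2}[f] )."""
    flux = K * (np.roll(f, -1) - f)  # diffusive flux across each i+1/2 face
    return flux - np.roll(flux, 1)   # divergence: flux in minus flux out

lhs = np.sum(S * D(T))
rhs = np.sum(T * D(S))
print(abs(lhs - rhs))                # ~0: the operator is self-adjoint
```

The identity holds because, after summation by parts, both sides reduce to the same symmetric sum over faces, $-\sum K\,\delta T\,\delta S$, which is exactly why no separate adjoint needs to be coded.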
@@ 745,5 +748,5 @@
This have to be moved in an Appendix.
The continuous property to be demonstrated is :
+The continuous property to be demonstrated is:
\begin{align*}
\int_D \nabla \cdot \textbf{F}_{eiv}(T) \; T \;dv \equiv 0
@@ 797,13 +800,13 @@
\end{matrix}
\end{align*}
The two terms associated with the triad ${_i^k \mathbb{R}_{+1/2}^{+1/2}}$ are the
same but of opposite signs, they cancel out.
Exactly the same thing occurs for the triad ${_i^k \mathbb{R}_{1/2}^{1/2}}$.
The two terms associated with the triad ${_i^k \mathbb{R}_{+1/2}^{1/2}}$ are the
same but both of opposite signs and shifted by 1 in $k$ direction. When summing over $k$
they cancel out with the neighbouring grid points.
Exactly the same thing occurs for the triad ${_i^k \mathbb{R}_{1/2}^{+1/2}}$ in the
$i$ direction. Therefore the sum over the domain is zero, $i.e.$ the variance of the
tracer is preserved by the discretisation of the skew fluxes.
+The two terms associated with the triad ${_i^k \mathbb{R}_{+1/2}^{+1/2}}$ are the same but of opposite signs,
+so they cancel out.
+Exactly the same thing occurs for the triad ${_i^k \mathbb{R}_{-1/2}^{-1/2}}$.
+The two terms associated with the triad ${_i^k \mathbb{R}_{+1/2}^{-1/2}}$ are also the same but of opposite signs and
+shifted by $1$ in the $k$ direction.
+When summing over $k$ they cancel out with the neighbouring grid points.
+Exactly the same thing occurs for the triad ${_i^k \mathbb{R}_{-1/2}^{+1/2}}$ in the $i$ direction.
+Therefore the sum over the domain is zero,
+$i.e.$ the variance of the tracer is preserved by the discretisation of the skew fluxes.
\end{document}
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_iso.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_iso.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/annex_iso.tex (revision 10368)
@@ 15,16 +15,15 @@
%
Two scheme are available to perform the isoneutral diffusion.
If the namelist logical \np{ln\_traldf\_triad} is set true,
\NEMO updates both active and passive tracers using the Griffies triad representation
of isoneutral diffusion and the eddyinduced advective skew (GM) fluxes.
If the namelist logical \np{ln\_traldf\_iso} is set true,
the filtered version of Cox's original scheme (the Standard scheme) is employed (\autoref{sec:LDF_slp}).
In the present implementation of the Griffies scheme,
+Two schemes are available to perform the isoneutral diffusion.
+If the namelist logical \np{ln\_traldf\_triad} is set true,
+\NEMO updates both active and passive tracers using the Griffies triad representation of isoneutral diffusion and
+the eddy-induced advective skew (GM) fluxes.
+If the namelist logical \np{ln\_traldf\_iso} is set true,
+the filtered version of Cox's original scheme (the Standard scheme) is employed (\autoref{sec:LDF_slp}).
+In the present implementation of the Griffies scheme,
the advective skew fluxes are implemented even if \np{ln\_traldf\_eiv} is false.
Values of isoneutral diffusivity and GM coefficient are set as
described in \autoref{sec:LDF_coef}. Note that when GM fluxes are used,
the eddyadvective (GM) velocities are output for diagnostic purposes using xIOS,
+Values of isoneutral diffusivity and GM coefficient are set as described in \autoref{sec:LDF_coef}.
+Note that when GM fluxes are used, the eddy-advective (GM) velocities are output for diagnostic purposes using xIOS,
even though the eddy advection is accomplished by means of the skew fluxes.
@@ 32,19 +31,23 @@
The options specific to the Griffies scheme include:
\begin{description}[font=\normalfont]
\item[\np{ln\_triad\_iso}] See \autoref{sec:taper}. If this is set false (the default), then
 `isoneutral' mixing is accomplished within the surface mixedlayer
 along slopes linearly decreasing with depth from the value immediately below
 the mixedlayer to zero (flat) at the surface (\autoref{sec:lintaper}).
 This is the same treatment as used in the default implementation \autoref{subsec:LDF_slp_iso}; \autoref{fig:eiv_slp}.
 Where \np{ln\_triad\_iso} is set true, the vertical skew flux is further reduced
 to ensure no vertical buoyancy flux, giving an almost pure
 horizontal diffusive tracer flux within the mixed layer. This is similar to
 the tapering suggested by \citet{Gerdes1991}. See \autoref{subsec:Gerdestaper}
\item[\np{ln\_botmix\_triad}] See \autoref{sec:iso_bdry}.
+\item[\np{ln\_triad\_iso}]
+ See \autoref{sec:taper}.
+ If this is set false (the default),
+ then `isoneutral' mixing is accomplished within the surface mixed layer along slopes linearly decreasing with
+ depth from the value immediately below the mixed layer to zero (flat) at the surface (\autoref{sec:lintaper}).
+ This is the same treatment as used in the default implementation
+ \autoref{subsec:LDF_slp_iso}; \autoref{fig:eiv_slp}.
+ Where \np{ln\_triad\_iso} is set true,
+ the vertical skew flux is further reduced to ensure no vertical buoyancy flux,
+ giving an almost pure horizontal diffusive tracer flux within the mixed layer.
+ This is similar to the tapering suggested by \citet{Gerdes1991}; see \autoref{subsec:Gerdestaper}.
+\item[\np{ln\_botmix\_triad}]
+ See \autoref{sec:iso_bdry}.
If this is set false (the default) then the lateral diffusive fluxes
associated with triads partly masked by topography are neglected.
If it is set true, however, then these lateral diffusive fluxes are applied,
giving smoother bottom tracer fields at the cost of introducing diapycnal mixing.
\item[\np{rn\_sw\_triad}] blah blah to be added....
+\item[\np{rn\_sw\_triad}]
+ blah blah to be added....
\end{description}
The options shared with the Standard scheme include:
@@ 56,11 +59,10 @@
\section{Triad formulation of isoneutral diffusion}
\label{sec:iso}
We have implemented into \NEMO a scheme inspired by \citet{Griffies_al_JPO98},
+We have implemented into \NEMO a scheme inspired by \citet{Griffies_al_JPO98},
but formulated within the \NEMO framework, using scale factors rather than gridsizes.
\subsection{Isoneutral diffusion operator}
The isoneutral second order tracer diffusive operator for small
angles between isoneutral surfaces and geopotentials is given by
\autoref{eq:iso_tensor_1}:
+The isoneutral second order tracer diffusive operator for small angles between
+isoneutral surfaces and geopotentials is given by \autoref{eq:iso_tensor_1}:
\begin{subequations} \label{eq:iso_tensor_1}
\begin{equation}
@@ 94,5 +96,5 @@
% {r_1 } \hfill & {r_2 } \hfill & {r_1 ^2+r_2 ^2} \hfill \\
% \end{array} }} \right)
 Here \autoref{eq:PE_iso_slopes}
+Here \autoref{eq:PE_iso_slopes}
\begin{align*}
r_1 &=\frac{e_3 }{e_1 } \left( \frac{\partial \rho }{\partial i}
@@ 104,21 +106,19 @@
}{\partial k} \right)^{-1}
\end{align*}
is the $i$component of the slope of the isoneutral surface relative to the computational
surface, and $r_2$ is the $j$component.

We will find it useful to consider the fluxes per unit area in $i,j,k$
space; we write
+is the $i$component of the slope of the isoneutral surface relative to the computational surface,
+and $r_2$ is the $j$component.
+
+We will find it useful to consider the fluxes per unit area in $i,j,k$ space; we write
\begin{equation}
\label{eq:Fijk}
\vect{F}_{\mathrm{iso}}=\left(f_1^{lT}e_2e_3, f_2^{lT}e_1e_3, f_3^{lT}e_1e_2\right).
\end{equation}
Additionally, we will sometimes write the contributions towards the
fluxes $\vect{f}$ and $\vect{F}_{\mathrm{iso}}$ from the component
$R_{ij}$ of $\Re$ as $f_{ij}$, $F_{\mathrm{iso}\: ij}$, with
$f_{ij}=R_{ij}e_i^{1}\partial T/\partial x_i$ (no summation) etc.
+Additionally, we will sometimes write the contributions towards the fluxes $\vect{f}$ and
+$\vect{F}_{\mathrm{iso}}$ from the component $R_{ij}$ of $\Re$ as $f_{ij}$, $F_{\mathrm{iso}\: ij}$,
+with $f_{ij}=R_{ij}e_j^{-1}\partial T/\partial x_j$ (no summation) etc.
The offdiagonal terms of the small angle diffusion tensor
\autoref{eq:iso_tensor_1}, \autoref{eq:iso_tensor_2} produce skewfluxes along the
$i$ and $j$directions resulting from the vertical tracer gradient:
+\autoref{eq:iso_tensor_1}, \autoref{eq:iso_tensor_2} produce skewfluxes along
+the $i$ and $j$directions resulting from the vertical tracer gradient:
\begin{align}
\label{eq:i13c}
@@ 129,6 +129,5 @@
\end{align}
The vertical diffusive flux associated with the $_{33}$
component of the small angle diffusion tensor is
+The vertical diffusive flux associated with the $_{33}$ component of the small angle diffusion tensor is
\begin{equation}
\label{eq:i33c}
@@ 136,30 +135,24 @@
\end{equation}
Since there are no cross terms involving $r_1$ and $r_2$ in the above, we can
consider the isoneutral diffusive fluxes separately in the $i$$k$ and $j$$k$
planes, just adding together the vertical components from each
plane. The following description will describe the fluxes on the $i$$k$
plane.

There is no natural discretization for the $i$component of the
skewflux, \autoref{eq:i13c}, as
although it must be evaluated at $u$points, it involves vertical
gradients (both for the tracer and the slope $r_1$), defined at
$w$points. Similarly, the vertical skew flux, \autoref{eq:i31c}, is evaluated at
$w$points but involves horizontal gradients defined at $u$points.
+Since there are no cross terms involving $r_1$ and $r_2$ in the above,
+we can consider the isoneutral diffusive fluxes separately in the $i$-$k$ and $j$-$k$ planes,
+just adding together the vertical components from each plane.
+The following description covers the fluxes in the $i$-$k$ plane.
+
+There is no natural discretization for the $i$component of the skewflux, \autoref{eq:i13c},
+as although it must be evaluated at $u$points,
+it involves vertical gradients (both for the tracer and the slope $r_1$), defined at $w$points.
+Similarly, the vertical skew flux, \autoref{eq:i31c},
+is evaluated at $w$points but involves horizontal gradients defined at $u$points.
\subsection{Standard discretization}
The straightforward approach to discretize the lateral skew flux
\autoref{eq:i13c} from tracer cell $i,k$ to $i+1,k$, introduced in 1995
into OPA, \autoref{eq:tra_ldf_iso}, is to calculate a mean vertical
gradient at the $u$point from the average of the four surrounding
vertical tracer gradients, and multiply this by a mean slope at the
$u$point, calculated from the averaged surrounding vertical density
gradients. The total areaintegrated skewflux (flux per unit area in
$ijk$ space) from tracer cell $i,k$
to $i+1,k$, noting that the $e_{{3}_{i+1/2}^k}$ in the area
$e{_{3}}_{i+1/2}^k{e_{2}}_{i+1/2}i^k$ at the $u$point cancels out with
the $1/{e_{3}}_{i+1/2}^k$ associated with the vertical tracer
gradient, is then \autoref{eq:tra_ldf_iso}
+\autoref{eq:i13c} from tracer cell $i,k$ to $i+1,k$, introduced in 1995 into OPA,
+\autoref{eq:tra_ldf_iso}, is to calculate a mean vertical gradient at the $u$-point from
+the average of the four surrounding vertical tracer gradients, and multiply this by a mean slope at the $u$-point,
+calculated from the averaged surrounding vertical density gradients.
+The total area-integrated skew-flux (flux per unit area in $ijk$ space) from tracer cell $i,k$ to $i+1,k$,
+noting that the ${e_{3}}_{i+1/2}^k$ in the area ${e_{3}}_{i+1/2}^k\,{e_{2}}_{i+1/2}^k$ at the $u$-point cancels out with
+the $1/{e_{3}}_{i+1/2}^k$ associated with the vertical tracer gradient, is then \autoref{eq:tra_ldf_iso}
\begin{equation*}
\left(F_u^{13} \right)_{i+\hhalf}^k = \Alts_{i+\hhalf}^k
@@ 173,22 +166,17 @@
\frac{\delta_{i+1/2} [\rho]}{\overline{\overline{\delta_k \rho}}^{\,i,k}},
\end{equation*}
and here and in the following we drop the $^{lT}$ superscript from
$\Alt$ for simplicity.
Unfortunately the resulting combination $\overline{\overline{\delta_k
 \bullet}}^{\,i,k}$ of a $k$ average and a $k$ difference %of the tracer
reduces to $\bullet_{k+1}\bullet_{k1}$, so twogridpoint oscillations are
invisible to this discretization of the isoneutral operator. These
\emph{computational modes} will not be damped by this operator, and
may even possibly be amplified by it. Consequently, applying this
operator to a tracer does not guarantee the decrease of its
globalaverage variance. To correct this, we introduced a smoothing of
the slopes of the isoneutral surfaces (see \autoref{chap:LDF}). This
technique works for $T$ and $S$ in so far as they are active tracers
($i.e.$ they enter the computation of density), but it does not work
for a passive tracer.
+and here and in the following we drop the $^{lT}$ superscript from $\Alt$ for simplicity.
+Unfortunately the resulting combination $\overline{\overline{\delta_k\bullet}}^{\,i,k}$ of a $k$ average and
+a $k$ difference of the tracer reduces to $\bullet_{k+1}-\bullet_{k-1}$,
+so two-grid-point oscillations are invisible to this discretization of the isoneutral operator.
+These \emph{computational modes} will not be damped by this operator, and may even possibly be amplified by it.
+Consequently, applying this operator to a tracer does not guarantee the decrease of its globally-averaged variance.
+To correct this, we introduced a smoothing of the slopes of the isoneutral surfaces (see \autoref{chap:LDF}).
+This technique works for $T$ and $S$ in so far as they are active tracers
+($i.e.$ they enter the computation of density), but it does not work for a passive tracer.
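The invisibility of the two-grid-point mode is easy to exhibit: applying a $k$ difference followed by a $k$ average to the oscillation $(-1)^k$ returns identically zero, so the combination $\overline{\overline{\delta_k\bullet}}^{\,i,k}$ cannot feel it. A minimal sketch (plain Python, illustrative only):

```python
import numpy as np

k = np.arange(12)
T = (-1.0) ** k                   # two-grid-point oscillation in the vertical

# k-difference at w-points, then a k-average back to interior T-points
dk = T[1:] - T[:-1]               # delta_k: alternates -2, +2, ...
avg = 0.5 * (dk[1:] + dk[:-1])    # equals (T[k+1] - T[k-1]) / 2

print(np.abs(avg).max())          # 0.0: the mode is invisible to the operator
```

Because the difference-then-average combination collapses to $(\bullet_{k+1}-\bullet_{k-1})/2$, the oscillation passes through untouched and undamped, which is the computational mode discussed above.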
\subsection{Expression of the skewflux in terms of triad slopes}
\citep{Griffies_al_JPO98} introduce a different discretization of the
offdiagonal terms that nicely solves the problem.
+\citet{Griffies_al_JPO98} introduce a different discretization of the off-diagonal terms that
+nicely solves the problem.
% Instead of multiplying the mean slope calculated at the $u$point by
% the mean vertical gradient at the $u$point,
@@ 203,12 +191,10 @@
\end{center} \end{figure}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>
They get the skew flux from the products of the vertical gradients at
each $w$point surrounding the $u$point with the corresponding `triad'
slope calculated from the lateral density gradient across the $u$point
divided by the vertical density gradient at the same $w$point as the
tracer gradient. See \autoref{fig:ISO_triad}a, where the thick lines
denote the tracer gradients, and the thin lines the corresponding
triads, with slopes $s_1, \dotsc s_4$. The total areaintegrated
skewflux from tracer cell $i,k$ to $i+1,k$
+They get the skew flux from the products of the vertical gradients at each $w$point surrounding the $u$point with
+the corresponding `triad' slope calculated from the lateral density gradient across the $u$point divided by
+the vertical density gradient at the same $w$point as the tracer gradient.
+See \autoref{fig:ISO_triad}a, where the thick lines denote the tracer gradients,
+and the thin lines the corresponding triads, with slopes $s_1, \dotsc s_4$.
+The total areaintegrated skewflux from tracer cell $i,k$ to $i+1,k$
\begin{multline}
\label{eq:i13}
@@ 222,13 +208,11 @@
_{k\frac{1}{2}} \left[ T^i \right]/e_{{3w}_{i+1}}^{k+\frac{1}{2}},
\end{multline}
where the contributions of the triad fluxes are weighted by areas
$a_1, \dotsc a_4$, and $\Alts$ is now defined at the tracer points
rather than the $u$points. This discretization gives a much closer
stencil, and disallows the twopoint computational modes.

 The vertical skew flux \autoref{eq:i31c} from tracer cell $i,k$ to $i,k+1$ at the
$w$point $i,k+\hhalf$ is constructed similarly (\autoref{fig:ISO_triad}b)
by multiplying lateral tracer gradients from each of the four
surrounding $u$points by the appropriate triad slope:
+where the contributions of the triad fluxes are weighted by areas $a_1, \dotsc a_4$,
+and $\Alts$ is now defined at the tracer points rather than the $u$points.
+This discretization gives a much closer stencil, and disallows the twopoint computational modes.
+
+The vertical skew flux \autoref{eq:i31c} from tracer cell $i,k$ to $i,k+1$ at
+the $w$point $i,k+\hhalf$ is constructed similarly (\autoref{fig:ISO_triad}b) by
+multiplying lateral tracer gradients from each of the four surrounding $u$points by the appropriate triad slope:
\begin{multline}
\label{eq:i31}
@@ 241,7 +225,7 @@
We notate the triad slopes $s_i$ and $s'_i$ in terms of the `anchor point' $i,k$
(appearing in both the vertical and lateral gradient), and the $u$ and
$w$points $(i+i_p,k)$, $(i,k+k_p)$ at the centres of the `arms' of the
triad as follows (see also \autoref{fig:ISO_triad}):
+(appearing in both the vertical and lateral gradient),
+and the $u$ and $w$points $(i+i_p,k)$, $(i,k+k_p)$ at the centres of the `arms' of the triad as follows
+(see also \autoref{fig:ISO_triad}):
\begin{equation}
\label{eq:R}
@@ 253,6 +237,6 @@
{ \alpha_i^k \ \delta_{k+k_p}[T^i] - \beta_i^k \ \delta_{k+k_p}[S^i] }.
\end{equation}
In calculating the slopes of the local neutral surfaces,
the expansion coefficients $\alpha$ and $\beta$ are evaluated at the anchor points of the triad,
+In calculating the slopes of the local neutral surfaces,
+the expansion coefficients $\alpha$ and $\beta$ are evaluated at the anchor points of the triad,
while the metrics are calculated at the $u$ and $w$points on the arms.
@@ 261,31 +245,29 @@
\includegraphics[width=0.80\textwidth]{Fig_GRIFF_qcells}
\caption{ \protect\label{fig:qcells}
 Triad notation for quarter cells. $T$cells are inside
 boxes, while the $i+\half,k$ $u$cell is shaded in green and the
 $i,k+\half$ $w$cell is shaded in pink.}
+ Triad notation for quarter cells. $T$cells are inside boxes,
+ while the $i+\half,k$ $u$cell is shaded in green and
+ the $i,k+\half$ $w$cell is shaded in pink.}
\end{center} \end{figure}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>
Each triad $\{_i^{k}\:_{i_p}^{k_p}\}$ is associated (\autoref{fig:qcells}) with the quarter
cell that is the intersection of the $i,k$ $T$cell, the $i+i_p,k$ $u$cell and the $i,k+k_p$ $w$cell.
Expressing the slopes $s_i$ and $s'_i$ in \autoref{eq:i13} and \autoref{eq:i31} in this notation,
we have $e.g.$ \ $s_1=s'_1={\:}_i^k \mathbb{R}_{1/2}^{1/2}$.
Each triad slope $_i^k\mathbb{R}_{i_p}^{k_p}$ is used once (as an $s$)
to calculate the lateral flux along its $u$arm, at $(i+i_p,k)$,
and then again as an $s'$ to calculate the vertical flux along its $w$arm at $(i,k+k_p)$.
Each vertical area $a_i$ used to calculate the lateral flux and horizontal area $a'_i$ used
to calculate the vertical flux can also be identified as the area across the $u$ and $w$arms
of a unique triad, and we notate these areas, similarly to the triad slopes,
as $_i^k{\mathbb{A}_u}_{i_p}^{k_p}$, $_i^k{\mathbb{A}_w}_{i_p}^{k_p}$,
where $e.g.$ in \autoref{eq:i13} $a_{1}={\:}_i^k{\mathbb{A}_u}_{1/2}^{1/2}$,
+Each triad $\{_i^{k}\:_{i_p}^{k_p}\}$ is associated (\autoref{fig:qcells}) with the quarter cell that is
+the intersection of the $i,k$ $T$cell, the $i+i_p,k$ $u$cell and the $i,k+k_p$ $w$cell.
+Expressing the slopes $s_i$ and $s'_i$ in \autoref{eq:i13} and \autoref{eq:i31} in this notation,
+we have $e.g.$ \ $s_1=s'_1={\:}_i^k \mathbb{R}_{-1/2}^{-1/2}$.
+Each triad slope $_i^k\mathbb{R}_{i_p}^{k_p}$ is used once (as an $s$) to
+calculate the lateral flux along its $u$arm, at $(i+i_p,k)$,
+and then again as an $s'$ to calculate the vertical flux along its $w$arm at $(i,k+k_p)$.
+Each vertical area $a_i$ used to calculate the lateral flux and horizontal area $a'_i$ used to
+calculate the vertical flux can also be identified as the area across the $u$ and $w$arms of a unique triad,
+and we notate these areas, similarly to the triad slopes,
+as $_i^k{\mathbb{A}_u}_{i_p}^{k_p}$, $_i^k{\mathbb{A}_w}_{i_p}^{k_p}$,
+where $e.g.$ in \autoref{eq:i13} $a_{1}={\:}_i^k{\mathbb{A}_u}_{-1/2}^{-1/2}$,
and in \autoref{eq:i31} $a'_{1}={\:}_i^k{\mathbb{A}_w}_{-1/2}^{-1/2}$.
\subsection{Full triad fluxes}
A key property of isoneutral diffusion is that it should not affect
the (locally referenced) density. In particular there should be no
lateral or vertical density flux. The lateral density flux disappears so long as the
areaintegrated lateral diffusive flux from tracer cell $i,k$ to
$i+1,k$ coming from the $_{11}$ term of the diffusion tensor takes the
form
+A key property of isoneutral diffusion is that it should not affect the (locally referenced) density.
+In particular there should be no lateral or vertical density flux.
+The lateral density flux disappears so long as the areaintegrated lateral diffusive flux from
+tracer cell $i,k$ to $i+1,k$ coming from the $_{11}$ term of the diffusion tensor takes the form
\begin{equation}
\label{eq:i11}
@@ 295,8 +277,7 @@
\frac{\delta _{i+1/2} \left[ T^k\right]}{{e_{1u}}_{\,i+1/2}^{\,k}},
\end{equation}
where the areas $a_i$ are as in \autoref{eq:i13}. In this case,
separating the total lateral flux, the sum of \autoref{eq:i13} and
\autoref{eq:i11}, into triad components, a lateral tracer
flux
+where the areas $a_i$ are as in \autoref{eq:i13}.
+In this case, separating the total lateral flux, the sum of \autoref{eq:i13} and \autoref{eq:i11},
+into triad components, a lateral tracer flux
\begin{equation}
\label{eq:latfluxtriad}
@@ 308,21 +289,18 @@
\right)
\end{equation}
can be identified with each triad. Then, because the
same metric factors ${e_{3w}}_{\,i}^{\,k+k_p}$ and
${e_{1u}}_{\,i+i_p}^{\,k}$ are employed for both the density gradients
in $ _i^k \mathbb{R}_{i_p}^{k_p}$ and the tracer gradients, the lateral
density flux associated with each triad separately disappears.
+can be identified with each triad.
+Then, because the same metric factors ${e_{3w}}_{\,i}^{\,k+k_p}$ and ${e_{1u}}_{\,i+i_p}^{\,k}$ are employed for both
+the density gradients in $ _i^k \mathbb{R}_{i_p}^{k_p}$ and the tracer gradients,
+the lateral density flux associated with each triad separately disappears.
\begin{equation}
\label{eq:latfluxrho}
{\mathbb{F}_u}_{i_p}^{k_p} (\rho)=-\alpha _i^k {\:}_i^k {\mathbb{F}_u}_{i_p}^{k_p} (T) + \beta_i^k {\:}_i^k {\mathbb{F}_u}_{i_p}^{k_p} (S)=0
\end{equation}
Thus the total flux $\left( F_u^{31} \right) ^i _{i,k+\frac{1}{2}} +
\left( F_u^{11} \right) ^i _{i,k+\frac{1}{2}}$ from tracer cell $i,k$
to $i+1,k$ must also vanish since it is a sum of four such triad fluxes.

The squared slope $r_1^2$ in the expression \autoref{eq:i33c} for the
$_{33}$ component is also expressed in terms of areaweighted
squared triad slopes, so the areaintegrated vertical flux from tracer
cell $i,k$ to $i,k+1$ resulting from the $r_1^2$ term is
+Thus the total flux $\left( F_u^{13} \right)^k_{\,i+\frac{1}{2}} + \left( F_u^{11} \right)^k_{\,i+\frac{1}{2}}$ from
+tracer cell $i,k$ to $i+1,k$ must also vanish since it is a sum of four such triad fluxes.
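The cancellation mechanism, namely that the same discrete gradients and metrics enter both the slope and the flux, can be checked with single-triad scalars. In the sketch below every value (`alpha`, `beta`, the metrics, the sign convention of the flux `F`) is hypothetical; only the algebraic structure matters.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical scalar values for one triad: expansion coefficients at the
# anchor point, tracer differences along the u- and w-arms, arm metrics.
alpha, beta = 2.0e-4, 7.5e-4
dTi, dTk, dSi, dSk = rng.standard_normal(4)
e1u, e3w, A = 1.0e5, 10.0, 1000.0

# Discrete density differences along the two arms
drho_i = -alpha * dTi + beta * dSi
drho_k = -alpha * dTk + beta * dSk

# Triad slope built from the SAME discrete gradients and metrics
R = -(e3w / e1u) * drho_i / drho_k

def F(dXi, dXk):
    """Lateral triad flux of a tracer X (illustrative sign convention)."""
    return A * (dXi / e1u + R * dXk / e3w)

# The density flux carried by the triad cancels identically
F_rho = -alpha * F(dTi, dTk) + beta * F(dSi, dSk)
print(abs(F_rho))   # ~0 up to round-off
```

Substituting `R` into `F_rho` makes the two terms equal and opposite by construction, which is the discrete analogue of the statement that each triad carries no density flux.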
+
+The squared slope $r_1^2$ in the expression \autoref{eq:i33c} for the $_{33}$ component is also expressed in
+terms of areaweighted squared triad slopes,
+so the areaintegrated vertical flux from tracer cell $i,k$ to $i,k+1$ resulting from the $r_1^2$ term is
\begin{equation}
\label{eq:i33}
@@ 333,8 +311,7 @@
+ \Alts_i^k a_{4}' s_{4}'^2 \right)\delta_{k+\frac{1}{2}} \left[ T^{i+1} \right],
\end{equation}
where the areas $a'$ and slopes $s'$ are the same as in
\autoref{eq:i31}.
Then, separating the total vertical flux, the sum of \autoref{eq:i31} and
\autoref{eq:i33}, into triad components, a vertical flux
+where the areas $a'$ and slopes $s'$ are the same as in \autoref{eq:i31}.
+Then, separating the total vertical flux, the sum of \autoref{eq:i31} and \autoref{eq:i33},
+into triad components, a vertical flux
\begin{align}
\label{eq:vertfluxtriad}
@@ 349,17 +326,15 @@
{_i^k\mathbb{R}_{i_p}^{k_p}}{\: }_i^k{\mathbb{F}_u}_{i_p}^{k_p} (T) \label{eq:vertfluxtriad2}
\end{align}
may be associated with each triad. Each vertical density flux $_i^k {\mathbb{F}_w}_{i_p}^{k_p} (\rho)$
associated with a triad then separately disappears (because the
lateral flux $_i^k{\mathbb{F}_u}_{i_p}^{k_p} (\rho)$
disappears). Consequently the total vertical density flux $\left( F_w^{31} \right)_i ^{k+\frac{1}{2}} +
\left( F_w^{33} \right)_i^{k+\frac{1}{2}}$ from tracer cell $i,k$
to $i,k+1$ must also vanish since it is a sum of four such triad
fluxes.

We can explicitly identify (\autoref{fig:qcells}) the triads associated with the $s_i$, $a_i$, and $s'_i$, $a'_i$ used in the definition of
the $u$fluxes and $w$fluxes in
\autoref{eq:i31}, \autoref{eq:i13}, \autoref{eq:i11} \autoref{eq:i33} and
\autoref{fig:ISO_triad} to write out the isoneutral fluxes at $u$ and
$w$points as sums of the triad fluxes that cross the $u$ and $w$faces:
+may be associated with each triad.
+Each vertical density flux $_i^k {\mathbb{F}_w}_{i_p}^{k_p} (\rho)$ associated with a triad then
+separately disappears (because the lateral flux $_i^k{\mathbb{F}_u}_{i_p}^{k_p} (\rho)$ disappears).
+Consequently the total vertical density flux
+$\left( F_w^{31} \right)_i ^{k+\frac{1}{2}} + \left( F_w^{33} \right)_i^{k+\frac{1}{2}}$ from
+tracer cell $i,k$ to $i,k+1$ must also vanish since it is a sum of four such triad fluxes.
+
+We can explicitly identify (\autoref{fig:qcells}) the triads associated with the $s_i$, $a_i$,
+and $s'_i$, $a'_i$ used in the definition of the $u$-fluxes and $w$-fluxes in \autoref{eq:i31},
+\autoref{eq:i13}, \autoref{eq:i11}, \autoref{eq:i33} and \autoref{fig:ISO_triad} to write out
+the isoneutral fluxes at $u$- and $w$-points as sums of the triad fluxes that cross the $u$- and $w$-faces:
%(\autoref{fig:ISO_triad}):
\begin{flalign} \label{eq:iso_flux} \vect{F}_{\mathrm{iso}}(T) &\equiv
@@ 375,6 +350,5 @@
\label{subsec:variance}
We now require that this operator should not increase the
globallyintegrated tracer variance.
+We now require that this operator should not increase the globally-integrated tracer variance.
%This changes according to
% \begin{align*}
@@ 387,9 +361,8 @@
% + {_i^{k+1/2k_p} {\mathbb{F}_w}_{i_p}^{k_p}} \ \delta_{k+1/2} [T] \right\} \\
% \end{align*}
Each triad slope $_i^k\mathbb{R}_{i_p}^{k_p}$ drives a lateral flux
$_i^k{\mathbb{F}_u}_{i_p}^{k_p} (T)$ across the $u$point $i+i_p,k$ and
a vertical flux $_i^k{\mathbb{F}_w}_{i_p}^{k_p} (T)$ across the
$w$point $i,k+k_p$. The lateral flux drives a net rate of change of
variance, summed over the two $T$points $i+i_p\half,k$ and $i+i_p+\half,k$, of
+Each triad slope $_i^k\mathbb{R}_{i_p}^{k_p}$ drives a lateral flux $_i^k{\mathbb{F}_u}_{i_p}^{k_p} (T)$ across
+the $u$-point $i+i_p,k$ and a vertical flux $_i^k{\mathbb{F}_w}_{i_p}^{k_p} (T)$ across the $w$-point $i,k+k_p$.
+The lateral flux drives a net rate of change of variance,
+summed over the two $T$-points $i+i_p-\half,k$ and $i+i_p+\half,k$, of
\begin{multline}
{b_T}_{i+i_p-1/2}^k\left(\frac{\partial T}{\partial t}T\right)_{i+i_p-1/2}^k+
@@ 402,15 +375,13 @@
\end{aligned}
\end{multline}
while the vertical flux similarly drives a net rate of change of
variance summed over the $T$points $i,k+k_p\half$ (above) and
$i,k+k_p+\half$ (below) of
+while the vertical flux similarly drives a net rate of change of variance summed over
+the $T$-points $i,k+k_p-\half$ (above) and $i,k+k_p+\half$ (below) of
\begin{equation}
\label{eq:dvar_iso_k}
_i^k{\mathbb{F}_w}_{i_p}^{k_p} (T) \,\delta_{k+ k_p}[T^i].
\end{equation}
The total variance tendency driven by the triad is the sum of these
two. Expanding $_i^k{\mathbb{F}_u}_{i_p}^{k_p} (T)$ and
$_i^k{\mathbb{F}_w}_{i_p}^{k_p} (T)$ with \autoref{eq:latfluxtriad} and
\autoref{eq:vertfluxtriad}, it is
+The total variance tendency driven by the triad is the sum of these two.
+Expanding $_i^k{\mathbb{F}_u}_{i_p}^{k_p} (T)$ and $_i^k{\mathbb{F}_w}_{i_p}^{k_p} (T)$ with
+\autoref{eq:latfluxtriad} and \autoref{eq:vertfluxtriad}, it is
\begin{multline*}
\Alts_i^k\left \{
@@ 428,7 +399,6 @@
\right \}.
\end{multline*}
The key point is then that if we require
$_i^k{\mathbb{A}_u}_{i_p}^{k_p}$ and $_i^k{\mathbb{A}_w}_{i_p}^{k_p}$
to be related to a triad volume $_i^k\mathbb{V}_{i_p}^{k_p}$ by
+The key point is then that if we require $_i^k{\mathbb{A}_u}_{i_p}^{k_p}$ and $_i^k{\mathbb{A}_w}_{i_p}^{k_p}$ to
+be related to a triad volume $_i^k\mathbb{V}_{i_p}^{k_p}$ by
\begin{equation}
\label{eq:VA}
@@ 447,13 +417,11 @@
\right)^2\leq 0.
\end{equation}
Thus, the constraint \autoref{eq:VA} ensures that the fluxes (\autoref{eq:latfluxtriad}, \autoref{eq:vertfluxtriad}) associated
with a given slope triad $_i^k\mathbb{R}_{i_p}^{k_p}$ do not increase
the net variance. Since the total fluxes are sums of such fluxes from
the various triads, this constraint, applied to all triads, is
sufficient to ensure that the globally integrated variance does not
increase.

The expression \autoref{eq:VA} can be interpreted as a discretization
of the global integral
+Thus, the constraint \autoref{eq:VA} ensures that the fluxes
+(\autoref{eq:latfluxtriad}, \autoref{eq:vertfluxtriad}) associated with
+a given slope triad $_i^k\mathbb{R}_{i_p}^{k_p}$ do not increase the net variance.
+Since the total fluxes are sums of such fluxes from the various triads, this constraint, applied to all triads,
+is sufficient to ensure that the globally integrated variance does not increase.
+
+The expression \autoref{eq:VA} can be interpreted as a discretization of the global integral
\begin{equation}
\label{eq:ctsvar}
@@ -461,6 +429,5 @@
\int\!\mathbf{F}\cdot\nabla T\, dV,
\end{equation}
where, within each triad volume $_i^k\mathbb{V}_{i_p}^{k_p}$, the
lateral and vertical fluxes/unit area
+where, within each triad volume $_i^k\mathbb{V}_{i_p}^{k_p}$, the lateral and vertical fluxes/unit area
\[
\mathbf{F}=\left(
@@ -477,20 +444,18 @@
\subsection{Triad volumes in Griffies's scheme and in \NEMO}
To complete the discretization we now need only specify the triad
volumes $_i^k\mathbb{V}_{i_p}^{k_p}$. \citet{Griffies_al_JPO98} identify
these $_i^k\mathbb{V}_{i_p}^{k_p}$ as the volumes of the quarter
cells, defined in terms of the distances between $T$, $u$,$f$ and
$w$points. This is the natural discretization of
\autoref{eq:ctsvar}. The \NEMO model, however, operates with scale
factors instead of grid sizes, and scale factors for the quarter
cells are not defined. Instead, therefore we simply choose
+To complete the discretization we now need only specify the triad volumes $_i^k\mathbb{V}_{i_p}^{k_p}$.
+\citet{Griffies_al_JPO98} identify these $_i^k\mathbb{V}_{i_p}^{k_p}$ as the volumes of the quarter cells,
+defined in terms of the distances between $T$-, $u$-, $f$- and $w$-points.
+This is the natural discretization of \autoref{eq:ctsvar}.
+The \NEMO model, however, operates with scale factors instead of grid sizes,
+and scale factors for the quarter cells are not defined.
+Instead, therefore, we simply choose
\begin{equation}
\label{eq:VNEMO}
_i^k\mathbb{V}_{i_p}^{k_p}=\quarter {b_u}_{i+i_p}^k,
\end{equation}
as a quarter of the volume of the $u$cell inside which the triad
quartercell lies. This has the nice property that when the slopes
$\mathbb{R}$ vanish, the lateral flux from tracer cell $i,k$ to
$i+1,k$ reduces to the classical form
+as a quarter of the volume of the $u$-cell inside which the triad quarter-cell lies.
+This has the nice property that when the slopes $\mathbb{R}$ vanish,
+the lateral flux from tracer cell $i,k$ to $i+1,k$ reduces to the classical form
\begin{equation}
\label{eq:latnormal}
@@ -500,13 +465,12 @@
= \overline\Alts_{\,i+1/2}^k\;\frac{{e_{1w}}_{\,i + 1/2}^{\,k}\:{e_{1v}}_{\,i + 1/2}^{\,k}\;\delta_{i+ 1/2}[T^k]}{{e_{1u}}_{\,i + 1/2}^{\,k}}.
\end{equation}
In fact if the diffusive coefficient is defined at $u$points, so that
we employ $\Alts_{i+i_p}^k$ instead of $\Alts_i^k$ in the definitions of the
triad fluxes \autoref{eq:latfluxtriad} and \autoref{eq:vertfluxtriad},
+In fact, if the diffusive coefficient is defined at $u$-points,
+so that we employ $\Alts_{i+i_p}^k$ instead of $\Alts_i^k$ in the definitions of the triad fluxes
+\autoref{eq:latfluxtriad} and \autoref{eq:vertfluxtriad},
we can replace $\overline{A}_{\,i+1/2}^k$ by $A_{i+1/2}^k$ in the above.
\subsection{Summary of the scheme}
The isoneutral fluxes at $u$ and
$w$points are the sums of the triad fluxes that cross the $u$ and
$w$faces \autoref{eq:iso_flux}:
+The isoneutral fluxes at $u$- and $w$-points are the sums of the triad fluxes that
+cross the $u$- and $w$-faces \autoref{eq:iso_flux}:
\begin{subequations}\label{eq:alltriadflux}
\begin{flalign}\label{eq:vect_isoflux}
@@ -545,5 +509,5 @@
\end{subequations}
 The divergence of the expression \autoref{eq:iso_flux} for the fluxes gives the isoneutral diffusion tendency at
+The divergence of the expression \autoref{eq:iso_flux} for the fluxes gives the isoneutral diffusion tendency at
each tracer point:
\begin{equation} \label{eq:iso_operator} D_l^T = \frac{1}{b_T}
@@ -555,7 +519,7 @@
The diffusion scheme satisfies the following six properties:
\begin{description}
\item[$\bullet$ horizontal diffusion] The discretization of the
 diffusion operator recovers \autoref{eq:latnormal} the traditional fivepoint Laplacian in
 the limit of flat isoneutral direction :
+\item[$\bullet$ horizontal diffusion]
+ The discretization of the diffusion operator recovers the traditional five-point Laplacian
+ \autoref{eq:latnormal} in the limit of flat isoneutral direction:
\begin{equation} \label{eq:iso_property0} D_l^T = \frac{1}{b_T} \
\delta_{i} \left[ \frac{e_{2u}\,e_{3u}}{e_{1u}} \;
@@ -564,12 +528,10 @@
\end{equation}
\item[$\bullet$ implicit treatment in the vertical] Only tracer values
 associated with a single water column appear in the expression
 \autoref{eq:i33} for the $_{33}$ fluxes, vertical fluxes driven by
 vertical gradients. This is of paramount importance since it means
 that a timeimplicit algorithm can be used to solve the vertical
 diffusion equation. This is necessary
 since the vertical eddy
 diffusivity associated with this term,
+\item[$\bullet$ implicit treatment in the vertical]
+ Only tracer values associated with a single water column appear in the expression \autoref{eq:i33} for
+ the $_{33}$ fluxes, vertical fluxes driven by vertical gradients.
+ This is of paramount importance since it means that a time-implicit algorithm can be used to
+ solve the vertical diffusion equation.
+ This is necessary since the vertical eddy diffusivity associated with this term,
\begin{equation}
\frac{1}{b_w}\sum_{\substack{i_p, \,k_p}} \left\{
@@ -579,45 +541,41 @@
{b_u}_{i+i_p}^k\: \Alts_i^k \: \left(_i^k \mathbb{R}_{i_p}^{k_p}\right)^2
\right\},
 \end{equation}
+ \end{equation}
(where $b_w= e_{1w}\,e_{2w}\,e_{3w}$ is the volume of $w$-cells) can be quite large.
\item[$\bullet$ pure isoneutral operator] The isoneutral flux of
 locally referenced potential density is zero. See
 \autoref{eq:latfluxrho} and \autoref{eq:vertfluxtriad2}.

\item[$\bullet$ conservation of tracer] The isoneutral diffusion
 conserves tracer content, $i.e.$
+\item[$\bullet$ pure isoneutral operator]
+ The isoneutral flux of locally referenced potential density is zero.
+ See \autoref{eq:latfluxrho} and \autoref{eq:vertfluxtriad2}.
+
+\item[$\bullet$ conservation of tracer]
+ The isoneutral diffusion conserves tracer content, $i.e.$
\begin{equation} \label{eq:iso_property1} \sum_{i,j,k} \left\{ D_l^T \
b_T \right\} = 0
\end{equation}
 This property is trivially satisfied since the isoneutral diffusive
 operator is written in flux form.

\item[$\bullet$ no increase of tracer variance] The isoneutral diffusion
 does not increase the tracer variance, $i.e.$
+ This property is trivially satisfied since the isoneutral diffusive operator is written in flux form.
+
+\item[$\bullet$ no increase of tracer variance]
+ The isoneutral diffusion does not increase the tracer variance, $i.e.$
\begin{equation} \label{eq:iso_property2} \sum_{i,j,k} \left\{ T \ D_l^T
\ b_T \right\} \leq 0
\end{equation}
 The property is demonstrated in
 \autoref{subsec:variance} above. It is a key property for a diffusion
 term. It means that it is also a dissipation term, $i.e.$ it
 dissipates the square of the quantity on which it is applied. It
 therefore ensures that, when the diffusivity coefficient is large
 enough, the field on which it is applied becomes free of gridpoint
 noise.

\item[$\bullet$ selfadjoint operator] The isoneutral diffusion
 operator is selfadjoint, $i.e.$
+ The property is demonstrated in \autoref{subsec:variance} above.
+ It is a key property for a diffusion term.
+ It means that it is also a dissipation term,
+ $i.e.$ it dissipates the square of the quantity on which it is applied.
+ It therefore ensures that, when the diffusivity coefficient is large enough,
+ the field on which it is applied becomes free of grid-point noise.
+
+\item[$\bullet$ selfadjoint operator]
+ The isoneutral diffusion operator is selfadjoint, $i.e.$
\begin{equation} \label{eq:iso_property3} \sum_{i,j,k} \left\{ S \ D_l^T
\ b_T \right\} = \sum_{i,j,k} \left\{ D_l^S \ T \ b_T \right\}
\end{equation}
 In other word, there is no need to develop a specific routine from
 the adjoint of this operator. We just have to apply the same
 routine. This property can be demonstrated similarly to the proof of
 the `no increase of tracer variance' property. The contribution by a
 single triad towards the left hand side of \autoref{eq:iso_property3}, can
 be found by replacing $\delta[T]$ by $\delta[S]$ in \autoref{eq:dvar_iso_i}
 and \autoref{eq:dvar_iso_k}. This results in a term similar to
 \autoref{eq:perfectsquare},
+ In other words, there is no need to develop a specific routine from the adjoint of this operator.
+ We just have to apply the same routine.
+ This property can be demonstrated similarly to the proof of the `no increase of tracer variance' property.
+ The contribution by a single triad towards the left hand side of \autoref{eq:iso_property3} can
+ be found by replacing $\delta[T]$ by $\delta[S]$ in \autoref{eq:dvar_iso_i} and \autoref{eq:dvar_iso_k}.
+ This results in a term similar to \autoref{eq:perfectsquare},
\begin{equation}
\label{eq:TScovar}
@@ -634,66 +592,59 @@
\right).
\end{equation}
This is symmetrical in $T $ and $S$, so exactly the same term arises
from the discretization of this triad's contribution towards the
RHS of \autoref{eq:iso_property3}.
+This is symmetrical in $T$ and $S$, so exactly the same term arises from
+the discretization of this triad's contribution towards the RHS of \autoref{eq:iso_property3}.
\end{description}
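The implicit-in-the-vertical treatment listed above amounts, per water column, to one tridiagonal solve. The sketch below is purely illustrative and not the NEMO code: the uniform grid, zero-flux boundaries and all names are assumptions. It takes one backward-Euler step of $\partial_t T=\partial_z(A\,\partial_z T)$ with the Thomas algorithm:

```python
# Illustrative sketch (not the NEMO implementation): one backward-Euler step of
# vertical diffusion for a single water column, solved with the Thomas
# (tridiagonal) algorithm.  Uniform grid and zero-flux boundaries are assumed.

def implicit_vdiff_step(T, A_w, dt, dz):
    """T: tracer at n cell centres; A_w: n-1 interface diffusivities;
    dt: time step; dz: uniform cell thickness."""
    n = len(T)
    r = dt / dz**2
    # Tridiagonal coefficients of (I - dt*D) T^{n+1} = T^n.
    lower = [0.0] + [-r * A_w[k - 1] for k in range(1, n)]
    upper = [-r * A_w[k] for k in range(n - 1)] + [0.0]
    diag = [1.0 - lower[k] - upper[k] for k in range(n)]
    # Thomas algorithm: forward sweep ...
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = upper[0] / diag[0], T[0] / diag[0]
    for k in range(1, n):
        m = diag[k] - lower[k] * cp[k - 1]
        cp[k] = upper[k] / m
        dp[k] = (T[k] - lower[k] * dp[k - 1]) / m
    # ... and back substitution.
    out = [0.0] * n
    out[-1] = dp[-1]
    for k in range(n - 2, -1, -1):
        out[k] = dp[k] - cp[k] * out[k + 1]
    return out
```

With zero-flux boundaries the column total is conserved and the step is stable however large $A\,\Delta t/\Delta z^2$ becomes, which is why the implicit treatment matters when the $_{33}$ diffusivity is large.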
\subsection{Treatment of the triads at the boundaries}\label{sec:iso_bdry}
The triad slope can only be defined where both the grid boxes centred at
the end of the arms exist. Triads that would poke up
through the upper ocean surface into the atmosphere, or down into the
ocean floor, must be masked out. See \autoref{fig:bdry_triads}. Surface layer triads
$\triad{i}{1}{R}{1/2}{1/2}$ (magenta) and
$\triad{i+1}{1}{R}{1/2}{1/2}$ (blue) that require density to be
specified above the ocean surface are masked (\autoref{fig:bdry_triads}a): this ensures that lateral
tracer gradients produce no flux through the ocean surface. However,
to prevent surface noise, it is customary to retain the $_{11}$ contributions towards
the lateral triad fluxes $\triad[u]{i}{1}{F}{1/2}{1/2}$ and
$\triad[u]{i+1}{1}{F}{1/2}{1/2}$; this drives diapycnal tracer
fluxes. Similar comments apply to triads that would intersect the
ocean floor (\autoref{fig:bdry_triads}b). Note that both near bottom
triad slopes $\triad{i}{k}{R}{1/2}{1/2}$ and
$\triad{i+1}{k}{R}{1/2}{1/2}$ are masked when either of the $i,k+1$
or $i+1,k+1$ tracer points is masked, i.e.\ the $i,k+1$ $u$point is
masked. The associated lateral fluxes (greyblack dashed line) are
masked if \np{ln\_botmix\_triad}\forcode{ = .false.}, but left unmasked,
giving bottom mixing, if \np{ln\_botmix\_triad}\forcode{ = .true.}.

The default option \np{ln\_botmix\_triad}\forcode{ = .false.} is suitable when the
bbl mixing option is enabled (\key{trabbl}, with \np{nn\_bbl\_ldf}\forcode{ = 1}),
or for simple idealized problems. For setups with topography without
bbl mixing, \np{ln\_botmix\_triad}\forcode{ = .true.} may be necessary.
+The triad slope can only be defined where both the grid boxes centred at the end of the arms exist.
+Triads that would poke up through the upper ocean surface into the atmosphere,
+or down into the ocean floor, must be masked out.
+See \autoref{fig:bdry_triads}.
+Surface layer triads $\triad{i}{1}{R}{1/2}{-1/2}$ (magenta) and $\triad{i+1}{1}{R}{-1/2}{-1/2}$ (blue) that
+require density to be specified above the ocean surface are masked (\autoref{fig:bdry_triads}a):
+this ensures that lateral tracer gradients produce no flux through the ocean surface.
+However, to prevent surface noise, it is customary to retain the $_{11}$ contributions towards
+the lateral triad fluxes $\triad[u]{i}{1}{F}{1/2}{-1/2}$ and $\triad[u]{i+1}{1}{F}{-1/2}{-1/2}$;
+this drives diapycnal tracer fluxes.
+Similar comments apply to triads that would intersect the ocean floor (\autoref{fig:bdry_triads}b).
+Note that both near bottom triad slopes $\triad{i}{k}{R}{1/2}{1/2}$ and
+$\triad{i+1}{k}{R}{-1/2}{1/2}$ are masked when either of the $i,k+1$ or $i+1,k+1$ tracer points is masked,
+i.e.\ the $i,k+1$ $u$-point is masked.
+The associated lateral fluxes (grey-black dashed line) are masked if \np{ln\_botmix\_triad}\forcode{ = .false.},
+but left unmasked, giving bottom mixing, if \np{ln\_botmix\_triad}\forcode{ = .true.}.
+
+The default option \np{ln\_botmix\_triad}\forcode{ = .false.} is suitable when the bbl mixing option is enabled
+(\key{trabbl}, with \np{nn\_bbl\_ldf}\forcode{ = 1}), or for simple idealized problems.
+For setups with topography without bbl mixing, \np{ln\_botmix\_triad}\forcode{ = .true.} may be necessary.
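As a configuration sketch, the options above would be selected through the tracer lateral-diffusion namelist; the group name \texttt{namtra\_ldf} and the exact layout shown here are assumptions to be checked against the reference namelist of the NEMO release in use:

```fortran
!-----------------------------------------------------------------------
&namtra_ldf    ! lateral diffusion of tracers (assumed namelist group name)
!-----------------------------------------------------------------------
   ln_traldf_triad = .true.   ! use the Griffies triad scheme
   ln_botmix_triad = .false.  ! default: mask lateral fluxes on bottom cells
                              ! (.true. gives bottom mixing; may be needed
                              !  when bbl mixing, key_trabbl, is not used)
/
```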
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[h] \begin{center}
\includegraphics[width=0.60\textwidth]{Fig_GRIFF_bdry_triads}
\caption{ \protect\label{fig:bdry_triads}
 (a) Uppermost model layer $k=1$ with $i,1$ and $i+1,1$ tracer
 points (black dots), and $i+1/2,1$ $u$point (blue square). Triad
 slopes $\triad{i}{1}{R}{1/2}{1/2}$ (magenta) and $\triad{i+1}{1}{R}{1/2}{1/2}$
 (blue) poking through the ocean surface are masked (faded in
 figure). However, the lateral $_{11}$ contributions towards
 $\triad[u]{i}{1}{F}{1/2}{1/2}$ and $\triad[u]{i+1}{1}{F}{1/2}{1/2}$
 (yellow line) are still applied, giving diapycnal diffusive
 fluxes.\newline
+ (a) Uppermost model layer $k=1$ with $i,1$ and $i+1,1$ tracer points (black dots),
+ and $i+1/2,1$ $u$-point (blue square).
+ Triad slopes $\triad{i}{1}{R}{1/2}{-1/2}$ (magenta) and $\triad{i+1}{1}{R}{-1/2}{-1/2}$ (blue) poking through
+ the ocean surface are masked (faded in figure).
+ However, the lateral $_{11}$ contributions towards $\triad[u]{i}{1}{F}{1/2}{-1/2}$ and
+ $\triad[u]{i+1}{1}{F}{-1/2}{-1/2}$ (yellow line) are still applied,
+ giving diapycnal diffusive fluxes.\newline
(b) Both near bottom triad slopes $\triad{i}{k}{R}{1/2}{1/2}$ and
 $\triad{i+1}{k}{R}{1/2}{1/2}$ are masked when either of the $i,k+1$
 or $i+1,k+1$ tracer points is masked, i.e.\ the $i,k+1$ $u$point
 is masked. The associated lateral fluxes (greyblack dashed
 line) are masked if \protect\np{botmix\_triad}\forcode{ = .false.}, but left
 unmasked, giving bottom mixing, if \protect\np{botmix\_triad}\forcode{ = .true.}}
+ $\triad{i+1}{k}{R}{-1/2}{1/2}$ are masked when either of the $i,k+1$ or $i+1,k+1$ tracer points is masked,
+ i.e.\ the $i,k+1$ $u$-point is masked.
+ The associated lateral fluxes (grey-black dashed line) are masked if
+ \protect\np{ln\_botmix\_triad}\forcode{ = .false.}, but left unmasked,
+ giving bottom mixing, if \protect\np{ln\_botmix\_triad}\forcode{ = .true.}}
\end{center} \end{figure}
% >>>>>>>>>>>>>>>>>>>>>>>>>>>>
\subsection{Limiting of the slopes within the interior}\label{sec:limit}
As discussed in \autoref{subsec:LDF_slp_iso}, isoneutral slopes relative to
geopotentials must be bounded everywhere, both for consistency with the smallslope
approximation and for numerical stability \citep{Cox1987,
 Griffies_Bk04}. The bound chosen in \NEMO is applied to each
component of the slope separately and has a value of $1/100$ in the ocean interior.
+As discussed in \autoref{subsec:LDF_slp_iso},
+isoneutral slopes relative to geopotentials must be bounded everywhere,
+both for consistency with the small-slope approximation and for numerical stability \citep{Cox1987, Griffies_Bk04}.
+The bound chosen in \NEMO is applied to each component of the slope separately and
+has a value of $1/100$ in the ocean interior.
%, ramping linearly down above 70~m depth to zero at the surface
It is of course relevant to the isoneutral slopes $\tilde{r}_i=r_i+\sigma_i$ relative to
geopotentials (here the $\sigma_i$ are the slopes of the coordinate surfaces relative to
geopotentials) \autoref{eq:PE_slopes_eiv} rather than the slope $r_i$ relative to coordinate
surfaces, so we require
+It is of course relevant to the isoneutral slopes $\tilde{r}_i=r_i+\sigma_i$ relative to geopotentials
+(here the $\sigma_i$ are the slopes of the coordinate surfaces relative to geopotentials)
+\autoref{eq:PE_slopes_eiv} rather than the slope $r_i$ relative to coordinate surfaces, so we require
\begin{equation*}
|\tilde{r}_i|\leq \tilde{r}_\mathrm{max}=0.01.
@@ -701,35 +652,33 @@
and then recalculate the slopes $r_i$ relative to coordinates.
Each individual triad slope
 \begin{equation}
 \label{eq:Rtilde}
_i^k\tilde{\mathbb{R}}_{i_p}^{k_p} = {}_i^k\mathbb{R}_{i_p}^{k_p} + \frac{\delta_{i+i_p}[z_T^k]}{{e_{1u}}_{\,i + i_p}^{\,k}}
 \end{equation}
is limited like this and then the corresponding
$_i^k\mathbb{R}_{i_p}^{k_p} $ are recalculated and combined to form the fluxes.
Note that where the slopes have been limited, there is now a nonzero
isoneutral density flux that drives dianeutral mixing. In particular this isoneutral density flux
is always downwards, and so acts to reduce gravitational potential energy.
+\begin{equation}
+ \label{eq:Rtilde}
+ _i^k\tilde{\mathbb{R}}_{i_p}^{k_p} = {}_i^k\mathbb{R}_{i_p}^{k_p} + \frac{\delta_{i+i_p}[z_T^k]}{{e_{1u}}_{\,i + i_p}^{\,k}}
+\end{equation}
+is limited like this and then the corresponding $_i^k\mathbb{R}_{i_p}^{k_p} $ are recalculated and
+combined to form the fluxes.
+Note that where the slopes have been limited, there is now a nonzero isoneutral density flux that
+drives dianeutral mixing.
+In particular this isoneutral density flux is always downwards,
+and so acts to reduce gravitational potential energy.
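As a minimal numeric sketch of this limiting step (all names are illustrative; only the interior bound of $1/100$ and the conversion between coordinate-relative and geopotential-relative slopes are taken from the text), each triad slope is converted, clipped, and converted back:

```python
# Minimal sketch of the interior slope limiting described above.
# Names are illustrative; RMAX = 0.01 is the interior bound quoted in the text.

RMAX = 0.01

def limit_triad_slope(R, dz_T, e1u):
    """R: triad slope relative to coordinate surfaces;
    dz_T: delta_{i+i_p}[z_T], depth difference between adjacent T-points;
    e1u: horizontal scale factor at the u-point."""
    R_tilde = R + dz_T / e1u                   # slope relative to geopotentials
    R_tilde = max(-RMAX, min(RMAX, R_tilde))   # clip each component to +/- 1/100
    return R_tilde - dz_T / e1u                # back to coordinate-relative slope
```

Where the clip is active, the recomputed coordinate-relative slope differs from the original, which is exactly the limited-slope dianeutral flux discussed above.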
\subsection{Tapering within the surface mixed layer}\label{sec:taper}
Additional tapering of the isoneutral fluxes is necessary within the
surface mixed layer. When the Griffies triads are used, we offer two
options for this.
+Additional tapering of the isoneutral fluxes is necessary within the surface mixed layer.
+When the Griffies triads are used, we offer two options for this.
\subsubsection{Linear slope tapering within the surface mixed layer}\label{sec:lintaper}
This is the option activated by the default choice
\np{ln\_triad\_iso}\forcode{ = .false.}. Slopes $\tilde{r}_i$ relative to
geopotentials are tapered linearly from their value immediately below the mixed layer to zero at the
surface, as described in option (c) of \autoref{fig:eiv_slp}, to values
+This is the option activated by the default choice \np{ln\_triad\_iso}\forcode{ = .false.}.
+Slopes $\tilde{r}_i$ relative to geopotentials are tapered linearly from their value immediately below
+the mixed layer to zero at the surface, as described in option (c) of \autoref{fig:eiv_slp}, to values
\begin{subequations}
\begin{equation}
  \label{eq:rmtilde}
  \rMLt =
  -\frac{z}{h}\left.\tilde{r}_i\right|_{z=-h}\quad \text{ for } z>-h,
+ \label{eq:rmtilde}
+ \rMLt =
+ -\frac{z}{h}\left.\tilde{r}_i\right|_{z=-h}\quad \text{ for } z>-h,
\end{equation}
and then the $r_i$ relative to vertical coordinate surfaces are appropriately
adjusted to
+ and then the $r_i$ relative to vertical coordinate surfaces are appropriately adjusted to
\begin{equation}
  \label{eq:rm}
  \rML =\rMLt -\sigma_i \quad \text{ for } z>-h.
+ \label{eq:rm}
+ \rML =\rMLt -\sigma_i \quad \text{ for } z>-h.
\end{equation}
\end{subequations}
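The linear taper can be sketched numerically (illustrative names; $z$ negative downwards and $h$ the positive tapering depth, as in the equations above):

```python
# Sketch of the linear mixed-layer tapering: the geopotential-relative slope is
# reduced linearly from its value at z = -h to zero at the surface, then
# converted back to a coordinate-relative slope.  Purely illustrative.

def tapered_slope(z, h, r_tilde_base, sigma):
    """z: depth (negative downwards); h: positive tapering depth;
    r_tilde_base: slope relative to geopotentials at z = -h;
    sigma: slope of the coordinate surface."""
    if z > -h:                                # inside the mixed layer
        r_tilde = -(z / h) * r_tilde_base     # linear: r_tilde_base at z=-h, 0 at z=0
    else:
        r_tilde = r_tilde_base                # untapered below
    return r_tilde - sigma                    # slope relative to coordinate surfaces
```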
@@ -744,47 +693,37 @@
\end{equation}
This slope tapering gives a natural connection between tracer in the
mixedlayer and in isopycnal layers immediately below, in the
thermocline. It is consistent with the way the $\tilde{r}_i$ are
tapered within the mixed layer (see \autoref{sec:taperskew} below)
so as to ensure a uniform GM eddyinduced velocity throughout the
mixed layer. However, it gives a downwards density flux and so acts so
as to reduce potential energy in the same way as does the slope
limiting discussed above in \autoref{sec:limit}.
+This slope tapering gives a natural connection between tracer in the mixed layer and
+in isopycnal layers immediately below, in the thermocline.
+It is consistent with the way the $\tilde{r}_i$ are tapered within the mixed layer
+(see \autoref{sec:taperskew} below) so as to ensure a uniform GM eddyinduced velocity throughout the mixed layer.
+However, it gives a downwards density flux and so acts so as to reduce potential energy in the same way as
+does the slope limiting discussed above in \autoref{sec:limit}.
As in \autoref{sec:limit} above, the tapering
\autoref{eq:rmtilde} is applied separately to each triad
$_i^k\tilde{\mathbb{R}}_{i_p}^{k_p}$, and the
$_i^k\mathbb{R}_{i_p}^{k_p}$ adjusted. For clarity, we assume
$z$coordinates in the following; the conversion from
$\mathbb{R}$ to $\tilde{\mathbb{R}}$ and back to $\mathbb{R}$ follows exactly as described
above by \autoref{eq:Rtilde}.
+As in \autoref{sec:limit} above, the tapering \autoref{eq:rmtilde} is applied separately to
+each triad $_i^k\tilde{\mathbb{R}}_{i_p}^{k_p}$, and the $_i^k\mathbb{R}_{i_p}^{k_p}$ adjusted.
+For clarity, we assume $z$coordinates in the following;
+the conversion from $\mathbb{R}$ to $\tilde{\mathbb{R}}$ and back to $\mathbb{R}$ follows exactly as
+described above by \autoref{eq:Rtilde}.
\begin{enumerate}
\item Mixedlayer depth is defined so as to avoid including regions of weak
vertical stratification in the slope definition.
 At each $i,j$ (simplified to $i$ in
\autoref{fig:MLB_triad}), we define the mixedlayer by setting
the vertical index of the tracer point immediately below the mixed
layer, $k_{\mathrm{ML}}$, as the maximum $k$ (shallowest tracer point)
such that the potential density
${\rho_0}_{i,k}>{\rho_0}_{i,k_{10}}+\Delta\rho_c$, where $i,k_{10}$ is
the tracer gridbox within which the depth reaches 10~m. See the left
side of \autoref{fig:MLB_triad}. We use the $k_{10}$gridbox
instead of the surface gridbox to avoid problems e.g.\ with thin
daytime mixedlayers. Currently we use the same
$\Delta\rho_c=0.01\;\mathrm{kg\:m^{3}}$ for ML triad tapering as is
used to output the diagnosed mixedlayer depth
$h_{\mathrm{ML}}=z_{W}_{k_{\mathrm{ML}}+1/2}$, the depth of the $w$point
above the $i,k_{\mathrm{ML}}$ tracer point.

\item We define `basal' triad slopes
${\:}_i{\mathbb{R}_{\mathrm{base}}}_{\,i_p}^{k_p}$ as the slopes
of those triads whose vertical `arms' go down from the
$i,k_{\mathrm{ML}}$ tracer point to the $i,k_{\mathrm{ML}}1$ tracer point
below. This is to ensure that the vertical density gradients
associated with these basal triad slopes
${\:}_i{\mathbb{R}_{\mathrm{base}}}_{\,i_p}^{k_p}$ are
representative of the thermocline. The four basal triads defined in the bottom part
of \autoref{fig:MLB_triad} are then
+\item
+ Mixedlayer depth is defined so as to avoid including regions of weak vertical stratification in
+ the slope definition.
+ At each $i,j$ (simplified to $i$ in \autoref{fig:MLB_triad}),
+ we define the mixed layer by setting the vertical index of the tracer point immediately below the mixed layer,
+ $k_{\mathrm{ML}}$, as the maximum $k$ (shallowest tracer point) such that
+ the potential density ${\rho_0}_{i,k}>{\rho_0}_{i,k_{10}}+\Delta\rho_c$,
+ where $i,k_{10}$ is the tracer gridbox within which the depth reaches 10~m.
+ See the left side of \autoref{fig:MLB_triad}.
+ We use the $k_{10}$-gridbox instead of the surface gridbox to avoid problems e.g.\ with thin daytime mixed layers.
+ Currently we use the same $\Delta\rho_c=0.01\;\mathrm{kg\:m^{-3}}$ for ML triad tapering as is used to
+ output the diagnosed mixed-layer depth $h_{\mathrm{ML}}=|z_{W}|_{k_{\mathrm{ML}}+1/2}$,
+ the depth of the $w$-point above the $i,k_{\mathrm{ML}}$ tracer point.
+\item
+ We define `basal' triad slopes ${\:}_i{\mathbb{R}_{\mathrm{base}}}_{\,i_p}^{k_p}$ as
+ the slopes of those triads whose vertical `arms' go down from the $i,k_{\mathrm{ML}}$ tracer point to
+ the $i,k_{\mathrm{ML}}-1$ tracer point below.
+ This is to ensure that the vertical density gradients associated with
+ these basal triad slopes ${\:}_i{\mathbb{R}_{\mathrm{base}}}_{\,i_p}^{k_p}$ are representative of the thermocline.
+ The four basal triads defined in the bottom part of \autoref{fig:MLB_triad} are then
\begin{align}
{\:}_i{\mathbb{R}_{\mathrm{base}}}_{\,i_p}^{k_p} &=
@@ -795,20 +734,17 @@
{\:}^{k_{\mathrm{ML}}}_i{\mathbb{R}_{\mathrm{base}}}_{\,1/2}^{1/2}. \notag
\end{align}
The vertical flux associated with each of these triads passes through the $w$point
$i,k_{\mathrm{ML}}1/2$ lying \emph{below} the $i,k_{\mathrm{ML}}$ tracer point,
so it is this depth
+The vertical flux associated with each of these triads passes through
+the $w$-point $i,k_{\mathrm{ML}}-1/2$ lying \emph{below} the $i,k_{\mathrm{ML}}$ tracer point, so it is this depth
\begin{equation}
\label{eq:zbase}
{z_\mathrm{base}}_{\,i}={z_{w}}_{k_\mathrm{ML}-1/2}
\end{equation}
(one gridbox deeper than the
diagnosed ML depth $z_{\mathrm{ML}})$ that sets the $h$ used to taper
the slopes in \autoref{eq:rmtilde}.
\item Finally, we calculate the adjusted triads
${\:}_i^k{\mathbb{R}_{\mathrm{ML}}}_{\,i_p}^{k_p}$ within the mixed
layer, by multiplying the appropriate
${\:}_i{\mathbb{R}_{\mathrm{base}}}_{\,i_p}^{k_p}$ by the ratio of
the depth of the $w$point ${z_w}_{k+k_p}$ to ${z_{\mathrm{base}}}_{\,i}$. For
instance the green triad centred on $i,k$
+(one gridbox deeper than the diagnosed ML depth $z_{\mathrm{ML}}$) that sets the $h$ used to taper the slopes in
+\autoref{eq:rmtilde}.
+\item
+ Finally, we calculate the adjusted triads ${\:}_i^k{\mathbb{R}_{\mathrm{ML}}}_{\,i_p}^{k_p}$ within
+ the mixed layer, by multiplying the appropriate ${\:}_i{\mathbb{R}_{\mathrm{base}}}_{\,i_p}^{k_p}$ by
+ the ratio of the depth of the $w$-point ${z_w}_{k+k_p}$ to ${z_{\mathrm{base}}}_{\,i}$.
+ For instance the green triad centred on $i,k$
\begin{align}
{\:}_i^k{\mathbb{R}_{\mathrm{ML}}}_{\,1/2}^{1/2} &=
@@ -824,22 +760,20 @@
\begin{figure}[h]
% \fcapside {
 \caption{\protect\label{fig:MLB_triad} Definition of
 mixedlayer depth and calculation of linearly tapered
 triads. The figure shows a water column at a given $i,j$
 (simplified to $i$), with the ocean surface at the top. Tracer points are denoted by
 bullets, and black lines the edges of the tracer cells; $k$
 increases upwards. \newline
 \hspace{5 em}We define the mixedlayer by setting the vertical index
 of the tracer point immediately below the mixed layer,
 $k_{\mathrm{ML}}$, as the maximum $k$ (shallowest tracer point)
 such that ${\rho_0}_{i,k}>{\rho_0}_{i,k_{10}}+\Delta\rho_c$,
 where $i,k_{10}$ is the tracer gridbox within which the depth
 reaches 10~m. We calculate the triad slopes within the mixed
 layer by linearly tapering them from zero (at the surface) to
 the `basal' slopes, the slopes of the four triads passing through the
 $w$point $i,k_{\mathrm{ML}}1/2$ (blue square),
 ${\:}_i{\mathbb{R}_{\mathrm{base}}}_{\,i_p}^{k_p}$. Triads with
 different $i_p,k_p$, denoted by different colours, (e.g. the green
 triad $i_p=1/2,k_p=1/2$) are tapered to the appropriate basal triad.}
+ \caption{\protect\label{fig:MLB_triad}
+ Definition of mixedlayer depth and calculation of linearly tapered triads.
+ The figure shows a water column at a given $i,j$ (simplified to $i$), with the ocean surface at the top.
+ Tracer points are denoted by bullets, and black lines the edges of the tracer cells;
+ $k$ increases upwards. \newline
+ \hspace{5 em}
+ We define the mixed layer by setting the vertical index of the tracer point immediately below the mixed layer,
+ $k_{\mathrm{ML}}$, as the maximum $k$ (shallowest tracer point) such that
+ ${\rho_0}_{i,k}>{\rho_0}_{i,k_{10}}+\Delta\rho_c$,
+ where $i,k_{10}$ is the tracer gridbox within which the depth reaches 10~m.
+ We calculate the triad slopes within the mixed layer by linearly tapering them from zero
+ (at the surface) to the `basal' slopes,
+ the slopes of the four triads passing through the $w$point $i,k_{\mathrm{ML}}1/2$ (blue square),
+ ${\:}_i{\mathbb{R}_{\mathrm{base}}}_{\,i_p}^{k_p}$.
+ Triads with different $i_p,k_p$, denoted by different colours,
+ (e.g. the green triad $i_p=1/2,k_p=1/2$) are tapered to the appropriate basal triad.}
%}
{\includegraphics[width=0.60\textwidth]{Fig_GRIFF_MLB_triads}}
@@ -849,9 +783,8 @@
\subsubsection{Additional truncation of skew isoneutral flux components}
\label{subsec:Gerdestaper}
The alternative option is activated by setting \np{ln\_triad\_iso} =
 true. This retains the same tapered slope $\rML$ described above for the
calculation of the $_{33}$ term of the isoneutral diffusion tensor (the
vertical tracer flux driven by vertical tracer gradients), but
replaces the $\rML$ in the skew term by
+The alternative option is activated by setting \np{ln\_triad\_iso}\forcode{ = .true.}.
+This retains the same tapered slope $\rML$ described above for the calculation of the $_{33}$ term of
+the isoneutral diffusion tensor (the vertical tracer flux driven by vertical tracer gradients),
+but replaces the $\rML$ in the skew term by
\begin{equation}
\label{eq:rm*}
@@ -868,20 +801,19 @@
\end{equation}
This operator
\footnote{To ensure good behaviour where horizontal density
 gradients are weak, we in fact follow \citet{Gerdes1991} and set
$\rML^*=\mathrm{sgn}(\tilde{r}_i)\min(\rMLt^2/\tilde{r}_i,\tilde{r}_i)\sigma_i$.}
then has the property it gives no vertical density flux, and so does
not change the potential energy.
This approach is similar to multiplying the isoneutral diffusion
coefficient by $\tilde{r}_{\mathrm{max}}^{2}\tilde{r}_i^{2}$ for steep
slopes, as suggested by \citet{Gerdes1991} (see also \citet{Griffies_Bk04}).
+\footnote{
+ To ensure good behaviour where horizontal density gradients are weak,
+ we in fact follow \citet{Gerdes1991} and
+ set $\rML^*=\mathrm{sgn}(\tilde{r}_i)\min(\rMLt^2/\tilde{r}_i,\tilde{r}_i)-\sigma_i$.}
+then has the property that it gives no vertical density flux, and so does not change the potential energy.
+This approach is similar to multiplying the isoneutral diffusion coefficient by
+$\tilde{r}_{\mathrm{max}}^{2}\tilde{r}_i^{-2}$ for steep slopes,
+as suggested by \citet{Gerdes1991} (see also \citet{Griffies_Bk04}).
Again it is applied separately to each triad $_i^k\mathbb{R}_{i_p}^{k_p}$.
In practice, this approach gives weak vertical tracer fluxes through
the mixedlayer, as well as vanishing density fluxes. While it is
theoretically advantageous that it does not change the potential
energy, it may give a discontinuity between the
fluxes within the mixedlayer (purely horizontal) and just below (along
isoneutral surfaces).
+In practice, this approach gives weak vertical tracer fluxes through the mixed layer,
+as well as vanishing density fluxes.
+While it is theoretically advantageous that it does not change the potential energy,
+it may give a discontinuity between the fluxes within the mixed layer (purely horizontal) and
+just below (along isoneutral surfaces).
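The skew-term replacement of the tapered slope can be sketched as follows (illustrative names; the formula follows the Gerdes-style expression quoted in the footnote, with the magnitude/sign handling an assumption of this sketch):

```python
import math

# Sketch of the alternative (ln_triad_iso = .true.) replacement of the slope in
# the skew term: sgn(r_tilde) * min(r_tilde_ml^2/|r_tilde|, |r_tilde|) - sigma.
# Purely illustrative, not the NEMO implementation.

def skew_slope(r_tilde, r_tilde_ml, sigma):
    """r_tilde: full isoneutral slope relative to geopotentials;
    r_tilde_ml: linearly tapered mixed-layer slope; sigma: coordinate slope."""
    if r_tilde == 0.0:
        return -sigma
    # Quadratic reduction where |r_tilde| exceeds the tapered slope,
    # unchanged where it does not.
    mag = min(r_tilde_ml**2 / abs(r_tilde), abs(r_tilde))
    return math.copysign(mag, r_tilde) - sigma
```

For a steep slope the skew component is damped by the factor $(\rMLt/\tilde{r}_i)^2$, while gentle slopes pass through unchanged.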
% This may give strange looking results,
% particularly where the mixedlayer depth varies strongly laterally.
@@ -893,11 +825,10 @@
\subsection{Continuous skew flux formulation}\label{sec:continuousskewflux}
 When Gent and McWilliams's [1990] diffusion is used,
an additional advection term is added. The associated velocity is the so called
eddy induced velocity, the formulation of which depends on the slopes of iso
neutral surfaces. Contrary to the case of isoneutral mixing, the slopes used
here are referenced to the geopotential surfaces, $i.e.$ \autoref{eq:ldfslp_geo}
is used in $z$coordinate, and the sum \autoref{eq:ldfslp_geo}
+ \autoref{eq:ldfslp_iso} in $z^*$ or $s$coordinates.
+When Gent and McWilliams's [1990] diffusion is used, an additional advection term is added.
+The associated velocity is the so-called eddy induced velocity,
+the formulation of which depends on the slopes of isoneutral surfaces.
+Contrary to the case of isoneutral mixing, the slopes used here are referenced to the geopotential surfaces,
+$i.e.$ \autoref{eq:ldfslp_geo} is used in $z$-coordinate,
+and the sum \autoref{eq:ldfslp_geo} + \autoref{eq:ldfslp_iso} in $z^*$ or $s$-coordinates.
The eddy induced velocity is given by:
@@ -919,23 +850,21 @@
\end{equation}
\end{subequations}
with $A_{e}$ the eddy induced velocity coefficient, and $\tilde{r}_1$ and $\tilde{r}_2$ the slopes between the isoneutral and the geopotential surfaces.

The traditional way to implement this additional advection is to add
it to the Eulerian velocity prior to computing the tracer
advection. This is implemented if \key{traldf\_eiv} is set in the
default implementation, where \np{ln\_traldf\_triad} is set
false. This allows us to take advantage of all the advection schemes
offered for the tracers (see \autoref{sec:TRA_adv}) and not just a $2^{nd}$
order advection scheme. This is particularly useful for passive
tracers where \emph{positivity} of the advection scheme is of
paramount importance.

However, when \np{ln\_traldf\_triad} is set true, \NEMO instead
implements eddy induced advection according to the socalled skew form
\citep{Griffies_JPO98}. It is based on a transformation of the advective fluxes
using the nondivergent nature of the eddy induced velocity.
For example in the (\textbf{i},\textbf{k}) plane, the tracer advective
fluxes per unit area in $ijk$ space can be
transformed as follows:
+with $A_{e}$ the eddy induced velocity coefficient,
+and $\tilde{r}_1$ and $\tilde{r}_2$ the slopes between the iso-neutral and the geopotential surfaces.
+
+The traditional way to implement this additional advection is to add it to the Eulerian velocity prior to
+computing the tracer advection.
+This is implemented if \key{traldf\_eiv} is set in the default implementation,
+where \np{ln\_traldf\_triad} is set false.
+This allows us to take advantage of all the advection schemes offered for the tracers
+(see \autoref{sec:TRA_adv}) and not just a $2^{nd}$ order advection scheme.
+This is particularly useful for passive tracers where
+\emph{positivity} of the advection scheme is of paramount importance.
+
+However, when \np{ln\_traldf\_triad} is set true,
+\NEMO instead implements eddy induced advection according to the so-called skew form \citep{Griffies_JPO98}.
+It is based on a transformation of the advective fluxes using the non-divergent nature of the eddy induced velocity.
+For example in the (\textbf{i},\textbf{k}) plane,
+the tracer advective fluxes per unit area in $ijk$ space can be transformed as follows:
\begin{flalign*}
\begin{split}
@@ 962,6 +891,6 @@
\end{split}
\end{flalign*}
and since the eddy induced velocity field is nondivergent, we end up with the skew
form of the eddy induced advective fluxes per unit area in $ijk$ space:
+and since the eddy induced velocity field is non-divergent,
+we end up with the skew form of the eddy induced advective fluxes per unit area in $ijk$ space:
\begin{equation} \label{eq:eiv_skew_ijk}
\textbf{F}_\mathrm{eiv}^T = \begin{pmatrix}
@@ 979,9 +908,9 @@
\end{split}
\end{equation}
Note that \autoref{eq:eiv_skew_physical} takes the same form whatever the
vertical coordinate, though of course the slopes
$\tilde{r}_i$ which define the $\psi_i$ in \autoref{eq:eiv_psi} are relative to geopotentials.
The tendency associated with eddy induced velocity is then simply the convergence
of the fluxes (\autoref{eq:eiv_skew_ijk}, \autoref{eq:eiv_skew_physical}), so
+Note that \autoref{eq:eiv_skew_physical} takes the same form whatever the vertical coordinate,
+though of course the slopes $\tilde{r}_i$ which define the $\psi_i$ in \autoref{eq:eiv_psi} are relative to
+geopotentials.
+The tendency associated with eddy induced velocity is then simply the convergence of the fluxes
+(\autoref{eq:eiv_skew_ijk}, \autoref{eq:eiv_skew_physical}), so
\begin{equation} \label{eq:skew_eiv_conv}
\frac{\partial T}{\partial t}= \frac{1}{e_1 \, e_2 \, e_3 } \left[
@@ 992,16 +921,15 @@
+ e_{1} \psi_2 \partial_j T \right) \right]
\end{equation}
 It naturally conserves the tracer content, as it is expressed in flux
 form. Since it has the same divergence as the advective form it also
 preserves the tracer variance.
+It naturally conserves the tracer content, as it is expressed in flux form.
+Since it has the same divergence as the advective form it also preserves the tracer variance.
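The statement that the skew form has the same divergence as the advective form can be checked numerically. The sketch below is illustrative only (not NEMO code): it assumes a 2-D $(x,z)$ plane with unit scale factors, an arbitrary smooth streamfunction and tracer, and simple centred differences; all names are invented for the example.

```python
import numpy as np

# 2-D (x, z) grid; psi and T are arbitrary smooth fields chosen for the demo
n = 600
x = np.linspace(0.0, 2.0 * np.pi, n)
dx = x[1] - x[0]
X, Z = np.meshgrid(x, x, indexing="ij")

psi = np.sin(X) * np.cos(Z)            # eddy-induced streamfunction (assumed)
T = np.sin(2.0 * X) * np.cos(3.0 * Z)  # tracer field (assumed)

def ddx(f):  # centred derivative along x
    return np.gradient(f, dx, axis=0)

def ddz(f):  # centred derivative along z
    return np.gradient(f, dx, axis=1)

# non-divergent eddy-induced velocity derived from psi
u, w = ddz(psi), -ddx(psi)

# advective fluxes (u T, w T) versus skew fluxes (-psi dT/dz, psi dT/dx):
# the flux vectors differ, but their divergences agree
div_adv = ddx(u * T) + ddz(w * T)
div_skew = ddx(-psi * ddz(T)) + ddz(psi * ddx(T))

interior = (slice(5, -5), slice(5, -5))  # avoid one-sided boundary stencils
assert np.max(np.abs(div_adv - div_skew)[interior]) < 1e-2
```

The two flux vectors themselves differ by the rotational field $(\partial_z(\psi T),\,-\partial_x(\psi T))$, which is why the tendencies coincide while the fluxes do not.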
\subsection{Discrete skew flux formulation}
The skew fluxes in (\autoref{eq:eiv_skew_physical}, \autoref{eq:eiv_skew_ijk}), like the offdiagonal terms
(\autoref{eq:i13c}, \autoref{eq:i31c}) of the small angle diffusion tensor, are best
expressed in terms of the triad slopes, as in \autoref{fig:ISO_triad}
and (\autoref{eq:i13}, \autoref{eq:i31}); but now in terms of the triad slopes
$\tilde{\mathbb{R}}$ relative to geopotentials instead of the
$\mathbb{R}$ relative to coordinate surfaces. The discrete form of
\autoref{eq:eiv_skew_ijk} using the slopes \autoref{eq:R} and
+The skew fluxes in (\autoref{eq:eiv_skew_physical}, \autoref{eq:eiv_skew_ijk}),
+like the off-diagonal terms (\autoref{eq:i13c}, \autoref{eq:i31c}) of the small angle diffusion tensor,
+are best expressed in terms of the triad slopes, as in \autoref{fig:ISO_triad} and
+(\autoref{eq:i13}, \autoref{eq:i31});
+but now in terms of the triad slopes $\tilde{\mathbb{R}}$ relative to geopotentials instead of
+the $\mathbb{R}$ relative to coordinate surfaces.
+The discrete form of \autoref{eq:eiv_skew_ijk} using the slopes \autoref{eq:R} and
defining $A_e$ at $T$-points is then given by:
@@ 1017,6 +945,6 @@
\end{pmatrix},
\end{flalign}
 where the skew flux in the $i$direction associated with a given
 triad is (\autoref{eq:latfluxtriad}, \autoref{eq:triadfluxu}):
+ where the skew flux in the $i$-direction associated with a given triad is (\autoref{eq:latfluxtriad},
+ \autoref{eq:triadfluxu}):
\begin{align}
\label{eq:skewfluxu}
@@ 1034,41 +962,33 @@
\end{subequations}
Such a discretisation is consistent with the isoneutral
operator as it uses the same definition for the slopes. It also
ensures the following two key properties.
+Such a discretisation is consistent with the iso-neutral operator as it uses the same definition for the slopes.
+It also ensures the following two key properties.
\subsubsection{No change in tracer variance}
The discretization conserves tracer variance, $i.e.$ it does not
include a diffusive component but is a `pure' advection term. This can
be seen
%either from Appendix \autoref{apdx:eiv_skew} or
by considering the
fluxes associated with a given triad slope
$_i^k{\mathbb{R}}_{i_p}^{k_p} (T)$. For, following
\autoref{subsec:variance} and \autoref{eq:dvar_iso_i}, the
associated horizontal skewflux $_i^k{\mathbb{S}_u}_{i_p}^{k_p} (T)$
drives a net rate of change of variance, summed over the two
$T$points $i+i_p\half,k$ and $i+i_p+\half,k$, of
+The discretization conserves tracer variance, $i.e.$ it does not include a diffusive component but is a `pure' advection term.
+This can be seen %either from Appendix \autoref{apdx:eiv_skew} or
+by considering the fluxes associated with a given triad slope $_i^k{\mathbb{R}}_{i_p}^{k_p} (T)$.
+For, following \autoref{subsec:variance} and \autoref{eq:dvar_iso_i},
+the associated horizontal skew-flux $_i^k{\mathbb{S}_u}_{i_p}^{k_p} (T)$ drives a net rate of change of variance,
+summed over the two $T$-points $i+i_p-\half,k$ and $i+i_p+\half,k$, of
\begin{equation}
\label{eq:dvar_eiv_i}
_i^k{\mathbb{S}_u}_{i_p}^{k_p} (T)\,\delta_{i+ i_p}[T^k],
\end{equation}
while the associated vertical skewflux gives a variance change summed over the
$T$points $i,k+k_p\half$ (above) and $i,k+k_p+\half$ (below) of
+while the associated vertical skew-flux gives a variance change summed over
+the $T$-points $i,k+k_p-\half$ (above) and $i,k+k_p+\half$ (below) of
\begin{equation}
\label{eq:dvar_eiv_k}
_i^k{\mathbb{S}_w}_{i_p}^{k_p} (T) \,\delta_{k+ k_p}[T^i].
\end{equation}
Inspection of the definitions (\autoref{eq:skewfluxu}, \autoref{eq:skewfluxw})
shows that these two variance changes (\autoref{eq:dvar_eiv_i}, \autoref{eq:dvar_eiv_k})
sum to zero. Hence the two fluxes associated with each triad make no
net contribution to the variance budget.
+Inspection of the definitions (\autoref{eq:skewfluxu}, \autoref{eq:skewfluxw}) shows that
+these two variance changes (\autoref{eq:dvar_eiv_i}, \autoref{eq:dvar_eiv_k}) sum to zero.
+Hence the two fluxes associated with each triad make no net contribution to the variance budget.
\subsubsection{Reduction in gravitational PE}
The vertical density flux associated with the vertical skewflux
always has the same sign as the vertical density gradient; thus, so
long as the fluid is stable (the vertical density gradient is
negative) the vertical density flux is negative (downward) and hence
reduces the gravitational PE.
+The vertical density flux associated with the vertical skew-flux always has the same sign as
+the vertical density gradient;
+thus, so long as the fluid is stable (the vertical density gradient is negative)
+the vertical density flux is negative (downward) and hence reduces the gravitational PE.
For the change in gravitational PE driven by the $k$-flux is
@@ 1091,8 +1011,7 @@
\frac{\alpha_i^k \delta_{k+ k_p}[T^i]+ \beta_i^k\delta_{k+ k_p}[S^i]} {{e_{3w}}_{\,i}^{\,k+k_p}},
\end{align}
using the definition of the triad slope $\rtriad{R}$,
\autoref{eq:R} to express $\alpha _i^k\delta_{i+ i_p}[T^k]+
\beta_i^k\delta_{i+ i_p}[S^k]$ in terms of $\alpha_i^k \delta_{k+
 k_p}[T^i]+ \beta_i^k\delta_{k+ k_p}[S^i]$.
+using the definition of the triad slope $\rtriad{R}$, \autoref{eq:R} to
+express $\alpha _i^k\delta_{i+ i_p}[T^k]+\beta_i^k\delta_{i+ i_p}[S^k]$ in terms of
+$\alpha_i^k \delta_{k+ k_p}[T^i]+ \beta_i^k\delta_{k+ k_p}[S^i]$.
Where the coordinates slope, the $i$-flux gives a PE change
@@ 1108,6 +1027,6 @@
\frac{\alpha_i^k \delta_{k+ k_p}[T^i]+ \beta_i^k\delta_{k+ k_p}[S^i]} {{e_{3w}}_{\,i}^{\,k+k_p}},
\end{multline}
(using \autoref{eq:skewfluxu}) and so the total PE change
\autoref{eq:vert_densityPE} + \autoref{eq:lat_densityPE} associated with the triad fluxes is
+(using \autoref{eq:skewfluxu}) and so the total PE change \autoref{eq:vert_densityPE} +
+\autoref{eq:lat_densityPE} associated with the triad fluxes is
\begin{multline}
\label{eq:tot_densityPE}
@@ 1122,51 +1041,43 @@
\subsection{Treatment of the triads at the boundaries}\label{sec:skew_bdry}
Triad slopes \rtriadt{R} used for the calculation of the eddyinduced skewfluxes
are masked at the boundaries in exactly the same way as are the triad
slopes \rtriad{R} used for the isoneutral diffusive fluxes, as
described in \autoref{sec:iso_bdry} and
\autoref{fig:bdry_triads}. Thus surface layer triads
$\triadt{i}{1}{R}{1/2}{1/2}$ and $\triadt{i+1}{1}{R}{1/2}{1/2}$ are
masked, and both near bottom triad slopes $\triadt{i}{k}{R}{1/2}{1/2}$
and $\triadt{i+1}{k}{R}{1/2}{1/2}$ are masked when either of the
$i,k+1$ or $i+1,k+1$ tracer points is masked, i.e.\ the $i,k+1$
$u$point is masked. The namelist parameter \np{ln\_botmix\_triad} has
no effect on the eddyinduced skewfluxes.
+Triad slopes \rtriadt{R} used for the calculation of the eddy-induced skew-fluxes are masked at the boundaries
+in exactly the same way as are the triad slopes \rtriad{R} used for the iso-neutral diffusive fluxes,
+as described in \autoref{sec:iso_bdry} and \autoref{fig:bdry_triads}.
+Thus surface layer triads $\triadt{i}{1}{R}{1/2}{-1/2}$ and $\triadt{i+1}{1}{R}{-1/2}{-1/2}$ are masked,
+and both near bottom triad slopes $\triadt{i}{k}{R}{1/2}{1/2}$ and $\triadt{i+1}{k}{R}{-1/2}{1/2}$ are masked when
+either of the $i,k+1$ or $i+1,k+1$ tracer points is masked, i.e.\ the $i,k+1$ $u$-point is masked.
+The namelist parameter \np{ln\_botmix\_triad} has no effect on the eddy-induced skew-fluxes.
\subsection{Limiting of the slopes within the interior}\label{sec:limitskew}
Presently, the isoneutral slopes $\tilde{r}_i$ relative
to geopotentials are limited to be less than $1/100$, exactly as in
calculating the isoneutral diffusion, \S \autoref{sec:limit}. Each
individual triad \rtriadt{R} is so limited.
+Presently, the iso-neutral slopes $\tilde{r}_i$ relative to geopotentials are limited to be less than $1/100$,
+exactly as in calculating the iso-neutral diffusion, \S \autoref{sec:limit}.
+Each individual triad \rtriadt{R} is so limited.
\subsection{Tapering within the surface mixed layer}\label{sec:taperskew}
The slopes $\tilde{r}_i$ relative to
geopotentials (and thus the individual triads \rtriadt{R}) are always tapered linearly from their value immediately below the mixed layer to zero at the
surface \autoref{eq:rmtilde}, as described in \autoref{sec:lintaper}. This is
option (c) of \autoref{fig:eiv_slp}. This linear tapering for the
slopes used to calculate the eddyinduced fluxes is
unaffected by the value of \np{ln\_triad\_iso}.

The justification for this linear slope tapering is that, for $A_e$
that is constant or varies only in the horizontal (the most commonly
used options in \NEMO: see \autoref{sec:LDF_coef}), it is
equivalent to a horizontal eiv (eddyinduced velocity) that is uniform
within the mixed layer \autoref{eq:eiv_v}. This ensures that the
eiv velocities do not restratify the mixed layer \citep{Treguier1997,
 Danabasoglu_al_2008}. Equivantly, in terms
of the skewflux formulation we use here, the
linear slope tapering within the mixedlayer gives a linearly varying
vertical flux, and so a tracer convergence uniform in depth (the
horizontal flux convergence is relatively insignificant within the mixedlayer).
+The slopes $\tilde{r}_i$ relative to geopotentials (and thus the individual triads \rtriadt{R})
+are always tapered linearly from their value immediately below the mixed layer to zero at the surface
+\autoref{eq:rmtilde}, as described in \autoref{sec:lintaper}.
+This is option (c) of \autoref{fig:eiv_slp}.
+This linear tapering for the slopes used to calculate the eddy-induced fluxes is unaffected by
+the value of \np{ln\_triad\_iso}.
+
+The justification for this linear slope tapering is that, for $A_e$ that is constant or varies only in
+the horizontal (the most commonly used options in \NEMO: see \autoref{sec:LDF_coef}),
+it is equivalent to a horizontal eiv (eddy-induced velocity) that is uniform within the mixed layer
+\autoref{eq:eiv_v}.
+This ensures that the eiv velocities do not restratify the mixed layer \citep{Treguier1997,Danabasoglu_al_2008}.
+Equivalently, in terms of the skew-flux formulation we use here,
+the linear slope tapering within the mixed-layer gives a linearly varying vertical flux,
+and so a tracer convergence uniform in depth
+(the horizontal flux convergence is relatively insignificant within the mixed-layer).
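The combined treatment of the two preceding subsections (interior limiting of $\tilde{r}_i$ at $1/100$, then linear tapering to zero through the mixed layer, option (c)) can be sketched as follows. This is a hedged illustration, not the NEMO implementation: the function name, the depth-positive-downwards convention, and the numerical values are assumptions.

```python
import numpy as np

def tapered_slope(r_tilde, depth, mld, r_max=0.01):
    """Illustrative slope treatment: clip the geopotential-referenced slope
    at r_max (= 1/100) in the interior, then taper it linearly from its
    value at the mixed-layer base to zero at the surface."""
    r = np.clip(r_tilde, -r_max, r_max)              # interior slope limiting
    depth = np.asarray(depth, dtype=float)
    taper = np.where(depth < mld, depth / mld, 1.0)  # linear within the ML
    return r * taper

# a linearly varying slope within the ML gives a linearly varying vertical
# skew-flux, hence a depth-uniform tracer convergence (for constant A_e)
depths = np.array([0.0, 25.0, 50.0, 100.0, 500.0])   # metres (assumed)
s = tapered_slope(0.05, depths, mld=100.0)
assert s[0] == 0.0                                   # zero at the surface
assert np.isclose(s[3], 0.01) and np.isclose(s[4], 0.01)  # clipped below ML
```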
\subsection{Streamfunction diagnostics}\label{sec:sfdiag}
Where the namelist parameter \np{ln\_traldf\_gdia}\forcode{ = .true.}, diagnosed
mean eddyinduced velocities are output. Each time step,
streamfunctions are calculated in the $i$$k$ and $j$$k$ planes at
$uw$ (integer +1/2 $i$, integer $j$, integer +1/2 $k$) and $vw$
(integer $i$, integer +1/2 $j$, integer +1/2 $k$) points (see Table
\autoref{tab:cell}) respectively. We follow \citep{Griffies_Bk04} and
calculate the streamfunction at a given $uw$point from the
surrounding four triads according to:
+Where the namelist parameter \np{ln\_traldf\_gdia}\forcode{ = .true.},
+diagnosed mean eddy-induced velocities are output.
+Each time step, streamfunctions are calculated in the $i$-$k$ and $j$-$k$ planes at
+$uw$ (integer +1/2 $i$, integer $j$, integer +1/2 $k$) and $vw$ (integer $i$, integer +1/2 $j$, integer +1/2 $k$)
+points (see \autoref{tab:cell}) respectively.
+We follow \citet{Griffies_Bk04} and calculate the streamfunction at a given $uw$-point from
+the surrounding four triads according to:
\begin{equation}
\label{eq:sfdiagi}
@@ 1175,6 +1086,5 @@
\end{equation}
The streamfunction $\psi_1$ is calculated similarly at $vw$ points.
The eddyinduced velocities are then calculated from the
straightforward discretisation of \autoref{eq:eiv_v}:
+The eddyinduced velocities are then calculated from the straightforward discretisation of \autoref{eq:eiv_v}:
\begin{equation}\label{eq:eiv_v_discrete}
\begin{split}
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_ASM.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_ASM.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_ASM.tex (revision 10368)
@@ 15,11 +15,11 @@
$\ $\newline % force a new line
The ASM code adds the functionality to apply increments to the model variables:
temperature, salinity, sea surface height, velocity and sea ice concentration.
These are read into the model from a NetCDF file which may be produced by separate data
assimilation code. The code can also output model background fields which are used
as an input to data assimilation code. This is all controlled by the namelist
\textit{\ngn{nam\_asminc} }. There is a brief description of all the namelist options
provided. To build the ASM code \key{asminc} must be set.
+The ASM code adds the functionality to apply increments to the model variables: temperature, salinity,
+sea surface height, velocity and sea ice concentration.
+These are read into the model from a NetCDF file which may be produced by separate data assimilation code.
+The code can also output model background fields which are used as an input to data assimilation code.
+This is all controlled by the namelist \textit{\ngn{nam\_asminc} }.
+A brief description of all the namelist options is provided.
+To build the ASM code \key{asminc} must be set.
%===============================================================
@@ 28,6 +28,6 @@
\label{sec:ASM_DI}
Direct initialization (DI) refers to the instantaneous correction
of the model background state using the analysis increment.
+Direct initialization (DI) refers to the instantaneous correction of the model background state using
+the analysis increment.
DI is used when \np{ln\_asmdin} is set to true.
@@ 36,26 +36,23 @@
Rather than updating the model state directly with the analysis increment,
it may be preferable to introduce the increment gradually into the ocean
model in order to minimize spurious adjustment processes. This technique
is referred to as Incremental Analysis Updates (IAU) \citep{Bloom_al_MWR96}.
+it may be preferable to introduce the increment gradually into the ocean model in order to
+minimize spurious adjustment processes.
+This technique is referred to as Incremental Analysis Updates (IAU) \citep{Bloom_al_MWR96}.
IAU is a common technique used with 3D assimilation methods such as 3DVar or OI.
IAU is used when \np{ln\_asmiau} is set to true.
With IAU, the model state trajectory ${\bf x}$ in the assimilation window
($t_{0} \leq t_{i} \leq t_{N}$)
is corrected by adding the analysis increments for temperature, salinity, horizontal velocity and SSH
as additional tendency terms to the prognostic equations:
+With IAU, the model state trajectory ${\bf x}$ in the assimilation window ($t_{0} \leq t_{i} \leq t_{N}$)
+is corrected by adding the analysis increments for temperature, salinity, horizontal velocity and SSH as
+additional tendency terms to the prognostic equations:
\begin{eqnarray} \label{eq:wa_traj_iau}
{\bf x}^{a}(t_{i}) = M(t_{i}, t_{0})[{\bf x}^{b}(t_{0})]
\; + \; F_{i} \delta \tilde{\bf x}^{a}
\end{eqnarray}
where $F_{i}$ is a weighting function for applying the increments $\delta
\tilde{\bf x}^{a}$ defined such that $\sum_{i=1}^{N} F_{i}=1$.
${\bf x}^b$ denotes the model initial state and ${\bf x}^a$ is the model state
after the increments are applied.
+where $F_{i}$ is a weighting function for applying the increments $\delta\tilde{\bf x}^{a}$ defined such that
+$\sum_{i=1}^{N} F_{i}=1$.
+${\bf x}^b$ denotes the model initial state and ${\bf x}^a$ is the model state after the increments are applied.
To control the adjustment time of the model to the increment,
the increment can be applied over an arbitrary subwindow,
$t_{m} \leq t_{i} \leq t_{n}$, of the main assimilation window,
where $t_{0} \leq t_{m} \leq t_{i}$ and $t_{i} \leq t_{n} \leq t_{N}$,
+the increment can be applied over an arbitrary subwindow, $t_{m} \leq t_{i} \leq t_{n}$,
+of the main assimilation window, where $t_{0} \leq t_{m} \leq t_{i}$ and $t_{i} \leq t_{n} \leq t_{N}$.
Typically the increments are spread evenly over the full window.
In addition, two different weighting functions have been implemented.
@@ 70,7 +67,6 @@
\end{eqnarray}
where $M = n - m$.
The second function employs peaked hatlike weights in order to give maximum
weight in the centre of the subwindow, with the weighting reduced
linearly to a small value at the window endpoints:
+The second function employs peaked hat-like weights in order to give maximum weight in the centre of the sub-window,
+with the weighting reduced linearly to a small value at the window endpoints:
\begin{eqnarray} \label{eq:F2_i}
F^{(2)}_{i}
@@ 83,7 +79,6 @@
\end{eqnarray}
where $\alpha^{-1} = \sum_{i=1}^{M/2} 2i$ and $M$ is assumed to be even.
The weights described by \autoref{eq:F2_i} provide a
smoother transition of the analysis trajectory from one assimilation cycle
to the next than that described by \autoref{eq:F1_i}.
+The weights described by \autoref{eq:F2_i} provide a smoother transition of the analysis trajectory from
+one assimilation cycle to the next than that described by \autoref{eq:F1_i}.
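The two weighting functions can be sketched numerically as follows. This is a hedged illustration, not the Fortran implementation: the function names are invented, and the sub-window indices `m`, `n` are taken as zero-based array bounds. Both sets of weights sum to one, as required by $\sum_{i=1}^{N} F_{i}=1$.

```python
import numpy as np

def iau_weights_const(m, n, N):
    # F1: constant weights over the sub-window [m, n), zero outside
    F = np.zeros(N)
    F[m:n] = 1.0 / (n - m)
    return F

def iau_weights_hat(m, n, N):
    # F2: peaked hat-like weights, maximum at the sub-window centre,
    # decreasing linearly towards the endpoints; M = n - m assumed even
    M = n - m
    half = np.arange(1, M // 2 + 1)
    w = np.concatenate([half, half[::-1]]).astype(float)
    F = np.zeros(N)
    F[m:n] = w / w.sum()     # normalisation: alpha^{-1} = sum_{i=1}^{M/2} 2i
    return F

F1 = iau_weights_const(2, 10, 20)
F2 = iau_weights_hat(2, 10, 20)
assert np.isclose(F1.sum(), 1.0) and np.isclose(F2.sum(), 1.0)
```

The hat weights peak mid-window, which is what gives the smoother cycle-to-cycle transition noted above.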
%==========================================================================
@@ 92,7 +87,6 @@
\label{sec:ASM_div_dmp}
The velocity increments may be initialized by the iterative application of
a divergence damping operator. In iteration step $n$ new estimates of
velocity increments $u^{n}_I$ and $v^{n}_I$ are updated by:
+The velocity increments may be initialized by the iterative application of a divergence damping operator.
+In iteration step $n$ new estimates of velocity increments $u^{n}_I$ and $v^{n}_I$ are updated by:
\begin{equation} \label{eq:asm_dmp}
\left\{ \begin{aligned}
@@ 110,14 +104,16 @@
+\delta _j \left[ {e_{1v}\,e_{3v}\,v^{n-1}_I} \right]} \right).
\end{equation}
By the application of \autoref{eq:asm_dmp} and \autoref{eq:asm_dmp} the divergence is filtered
in each iteration, and the vorticity is left unchanged. In the presence of coastal boundaries
with zero velocity increments perpendicular to the coast the divergence is strongly damped.
This type of the initialisation reduces the vertical velocity magnitude and alleviates the
problem of the excessive unphysical vertical mixing in the first steps of the model
integration \citep{Talagrand_JAS72, Dobricic_al_OS07}. Diffusion coefficients are defined as
$A_D = \alpha e_{1t} e_{2t}$, where $\alpha = 0.2$. The divergence damping is activated by
assigning to \np{nn\_divdmp} in the \textit{nam\_asminc} namelist a value greater than zero.
By choosing this value to be of the order of 100 the increments in the vertical velocity will
be significantly reduced.
+By the application of \autoref{eq:asm_dmp} the divergence is filtered in each iteration,
+and the vorticity is left unchanged.
+In the presence of coastal boundaries with zero velocity increments perpendicular to the coast
+the divergence is strongly damped.
+This type of initialisation reduces the vertical velocity magnitude and
+alleviates the problem of the excessive unphysical vertical mixing in the first steps of the model integration
+\citep{Talagrand_JAS72, Dobricic_al_OS07}.
+Diffusion coefficients are defined as $A_D = \alpha e_{1t} e_{2t}$, where $\alpha = 0.2$.
+The divergence damping is activated by assigning a value greater than zero to \np{nn\_divdmp} in
+the \textit{nam\_asminc} namelist.
+By choosing this value to be of the order of 100, the increments in
+the vertical velocity will be significantly reduced.
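The damping iteration can be illustrated on a toy C-grid; this is a sketch under assumptions (unit scale factors, a closed rectangular domain, invented array layout), not the \mdl{asminc} code. Because the increment added to $(u_I, v_I)$ is the gradient of the divergence, each pass diffuses the divergence while the vorticity is untouched, and holding the boundary faces fixed mimics the zero increments perpendicular to the coast.

```python
import numpy as np

ny, nx = 40, 40
rng = np.random.default_rng(0)
u = rng.standard_normal((ny, nx + 1))   # u increments at east/west faces
v = rng.standard_normal((ny + 1, nx))   # v increments at north/south faces

def div(u, v):   # divergence at cell centres (unit scale factors assumed)
    return np.diff(u, axis=1) + np.diff(v, axis=0)

def vort(u, v):  # vorticity at interior cell corners
    return np.diff(v, axis=1)[1:-1, :] - np.diff(u, axis=0)[:, 1:-1]

alpha = 0.2                     # A_D = alpha e1t e2t with e1t = e2t = 1
d0 = np.abs(div(u, v)).mean()
z0 = vort(u, v).copy()
for _ in range(100):            # iterative application of the damping operator
    d = div(u, v)
    u[:, 1:-1] += alpha * np.diff(d, axis=1)  # interior faces only: zero
    v[1:-1, :] += alpha * np.diff(d, axis=0)  # increments normal to the coast

assert np.abs(div(u, v)).mean() < 0.1 * d0    # divergence strongly damped
assert np.allclose(vort(u, v), z0)            # vorticity left unchanged
```

Each pass is an explicit diffusion step for the divergence field (stable here since $\alpha \le 1/4$ on a unit grid), which is why repeated application with a large \np{nn\_divdmp} leaves mostly non-divergent increments.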
@@ 127,6 +123,6 @@
\label{sec:ASM_details}
Here we show an example \ngn{namasm} namelist and the header of an example assimilation
increments file on the ORCA2 grid.
+Here we show an example \ngn{namasm} namelist and the header of an example assimilation increments file on
+the ORCA2 grid.
%namasm
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_CONFIG.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_CONFIG.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_CONFIG.tex (revision 10368)
@@ 18,10 +18,10 @@
The purpose of this part of the manual is to introduce the \NEMO reference configurations.
These configurations are offered as means to explore various numerical and physical options,
thus allowing the user to verify that the code is performing in a manner consistent with that
we are running. This form of verification is critical as one adopts the code for his or her particular
research purposes. The reference configurations also provide a sense for some of the options available
in the code, though by no means are all options exercised in the reference configurations.
+The purpose of this part of the manual is to introduce the \NEMO reference configurations.
+These configurations are offered as a means to explore various numerical and physical options,
+thus allowing the user to verify that the code is performing in a manner consistent with what we are running.
+This form of verification is critical as one adopts the code for his or her particular research purposes.
+The reference configurations also provide a sense of some of the options available in the code,
+though by no means are all options exercised in the reference configurations.
%namcfg
@@ 40,38 +40,43 @@
$\ $\newline
The 1D model option simulates a stand alone water column within the 3D \NEMO system.
It can be applied to the ocean alone or to the oceanice system and can include passive tracers
or a biogeochemical model. It is set up by defining the position of the 1D water column in the grid
+The 1D model option simulates a stand-alone water column within the 3D \NEMO system.
+It can be applied to the ocean alone or to the ocean-ice system and can include passive tracers or a biogeochemical model.
+It is set up by defining the position of the 1D water column in the grid
(see \textit{CONFIG/SHARED/namelist\_ref} ).
The 1D model is a very useful tool
\textit{(a)} to learn about the physics and numerical treatment of vertical mixing processes ;
\textit{(b)} to investigate suitable parameterisations of unresolved turbulence (surface wave
breaking, Langmuir circulation, ...) ;
\textit{(c)} to compare the behaviour of different vertical mixing schemes ;
\textit{(d)} to perform sensitivity studies on the vertical diffusion at a particular point of an ocean domain ;
+The 1D model is a very useful tool
+\textit{(a)} to learn about the physics and numerical treatment of vertical mixing processes;
+\textit{(b)} to investigate suitable parameterisations of unresolved turbulence
+(surface wave breaking, Langmuir circulation, ...);
+\textit{(c)} to compare the behaviour of different vertical mixing schemes;
+\textit{(d)} to perform sensitivity studies on the vertical diffusion at a particular point of an ocean domain;
\textit{(e)} to produce extra diagnostics, without the large memory requirement of the full 3D model.
The methodology is based on the use of the zoom functionality over the smallest possible
domain : a 3x3 domain centered on the grid point of interest,
with some extra routines. There is no need to define a new mesh, bathymetry,
initial state or forcing, since the 1D model will use those of the configuration it is a zoom of.
The chosen grid point is set in \textit{\ngn{namcfg}} namelist by setting the \np{jpizoom} and \np{jpjzoom}
parameters to the indices of the location of the chosen grid point.

The 1D model has some specifies. First, all the horizontal derivatives are assumed to be zero, and
second, the two components of the velocity are moved on a $T$point.
+The methodology is based on the use of the zoom functionality over the smallest possible domain:
+a 3x3 domain centered on the grid point of interest, with some extra routines.
+There is no need to define a new mesh, bathymetry, initial state or forcing,
+since the 1D model will use those of the configuration it is a zoom of.
+The chosen grid point is set in \textit{\ngn{namcfg}} namelist by
+setting the \np{jpizoom} and \np{jpjzoom} parameters to the indices of the location of the chosen grid point.
+
+The 1D model has some specificities. First, all the horizontal derivatives are assumed to be zero,
+and second, the two components of the velocity are moved onto a $T$-point.
Therefore, defining \key{c1d} changes five main things in the code behaviour:
\begin{description}
\item[(1)] the lateral boundary condition routine (\rou{lbc\_lnk}) set the value of the central column
of the 3x3 domain is imposed over the whole domain ;
\item[(3)] a call to \rou{lbc\_lnk} is systematically done when reading input data ($i.e.$ in \mdl{iom}) ;
\item[(3)] a simplified \rou{stp} routine is used (\rou{stp\_c1d}, see \mdl{step\_c1d} module) in which
both lateral tendancy terms and lateral physics are not called ;
\item[(4)] the vertical velocity is zero (so far, no attempt at introducing a Ekman pumping velocity
has been made) ;
\item[(5)] a simplified treatment of the Coriolis term is performed as $U$ and $V$points are the same
(see \mdl{dyncor\_c1d}).
+\item[(1)]
+  the lateral boundary condition routine (\rou{lbc\_lnk}) imposes the value of the central column of
+  the 3x3 domain over the whole domain;
+\item[(2)]
+  a call to \rou{lbc\_lnk} is systematically done when reading input data ($i.e.$ in \mdl{iom});
+\item[(3)]
+  a simplified \rou{stp} routine is used (\rou{stp\_c1d}, see \mdl{step\_c1d} module) in which
+  neither the lateral tendency terms nor the lateral physics are called;
+\item[(4)]
+  the vertical velocity is zero
+  (so far, no attempt at introducing an Ekman pumping velocity has been made);
+\item[(5)]
+  a simplified treatment of the Coriolis term is performed as $U$- and $V$-points are the same
+  (see \mdl{dyncor\_c1d}).
\end{description}
All the relevant \textit{\_c1d} modules can be found in the NEMOGCM/NEMO/OPA\_SRC/C1D directory of
+All the relevant \textit{\_c1d} modules can be found in the NEMOGCM/NEMO/OPA\_SRC/C1D directory of
the \NEMO distribution.
@@ 84,23 +89,26 @@
\label{sec:CFG_orca}
The ORCA family is a series of global ocean configurations that are run together with
the LIM seaice model (ORCALIM) and possibly with PISCES biogeochemical model
(ORCALIMPISCES), using various resolutions.
An appropriate namelist is available in \path{CONFIG/ORCA2_LIM3_PISCES/EXP00/namelist_cfg}
for ORCA2.
The domain of ORCA2 configuration is defined in \ifile{ORCA\_R2\_zps\_domcfg} file, this file is available in tar file in the wiki of NEMO : \\
+The ORCA family is a series of global ocean configurations that are run together with
+the LIM sea-ice model (ORCA-LIM) and possibly with the PISCES biogeochemical model (ORCA-LIM-PISCES),
+using various resolutions.
+An appropriate namelist is available in \path{CONFIG/ORCA2_LIM3_PISCES/EXP00/namelist_cfg} for ORCA2.
+The domain of the ORCA2 configuration is defined in the \ifile{ORCA\_R2\_zps\_domcfg} file,
+which is available in a tar file on the NEMO wiki: \\
https://forge.ipsl.jussieu.fr/nemo/wiki/Users/ReferenceConfigurations/ORCA2\_LIM3\_PISCES \\
In this namelist\_cfg, the name of the domain input file is set in the \ngn{namcfg} block of the namelist.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.98\textwidth]{Fig_ORCA_NH_mesh}
\caption{ \protect\label{fig:MISC_ORCA_msh}
ORCA mesh conception. The departure from an isotropic Mercator grid start poleward of 20\degN.
The two "north pole" are the foci of a series of embedded ellipses (blue curves)
which are determined analytically and form the ilines of the ORCA mesh (pseudo latitudes).
Then, following \citet{Madec_Imbard_CD96}, the normal to the series of ellipses (red curves) is computed
which provide the jlines of the mesh (pseudo longitudes). }
\end{center} \end{figure}
+\begin{figure}[!t]
+ \begin{center}
+ \includegraphics[width=0.98\textwidth]{Fig_ORCA_NH_mesh}
+ \caption{ \protect\label{fig:MISC_ORCA_msh}
+ ORCA mesh conception.
+  The departure from an isotropic Mercator grid starts poleward of 20\degN.
+  The two "north poles" are the foci of a series of embedded ellipses (blue curves) which
+  are determined analytically and form the i-lines of the ORCA mesh (pseudo latitudes).
+  Then, following \citet{Madec_Imbard_CD96}, the normal to the series of ellipses (red curves) is computed which
+  provides the j-lines of the mesh (pseudo longitudes). }
+ \end{center}
+\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 111,39 +119,39 @@
\label{subsec:CFG_orca_grid}
The ORCA grid is a tripolar is based on the semianalytical method of \citet{Madec_Imbard_CD96}.
It allows to construct a global orthogonal curvilinear ocean mesh which has no singularity point inside
+The ORCA grid is a tripolar grid based on the semi-analytical method of \citet{Madec_Imbard_CD96}.
+It allows the construction of a global orthogonal curvilinear ocean mesh which has no singularity point inside
the computational domain since two north mesh poles are introduced and placed on lands.
The method involves defining an analytical set of mesh parallels in the stereographic polar plan,
computing the associated set of mesh meridians, and projecting the resulting mesh onto the sphere.
The set of mesh parallels used is a series of embedded ellipses which foci are the two mesh north
poles (\autoref{fig:MISC_ORCA_msh}). The resulting mesh presents no loss of continuity in
either the mesh lines or the scale factors, or even the scale factor derivatives over the whole
ocean domain, as the mesh is not a composite mesh.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!tbp] \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_ORCA_NH_msh05_e1_e2}
\includegraphics[width=0.80\textwidth]{Fig_ORCA_aniso}
\caption { \protect\label{fig:MISC_ORCA_e1e2}
\textit{Top}: Horizontal scale factors ($e_1$, $e_2$) and
\textit{Bottom}: ratio of anisotropy ($e_1 / e_2$)
for ORCA 0.5\deg ~mesh. South of 20\degN a Mercator grid is used ($e_1 = e_2$)
so that the anisotropy ratio is 1. Poleward of 20\degN, the two "north pole"
introduce a weak anisotropy over the ocean areas ($< 1.2$) except in vicinity of Victoria Island
(Canadian Arctic Archipelago). }
+The method involves defining an analytical set of mesh parallels in the stereographic polar plane,
+computing the associated set of mesh meridians, and projecting the resulting mesh onto the sphere.
+The set of mesh parallels used is a series of embedded ellipses whose foci are the two mesh north poles
+(\autoref{fig:MISC_ORCA_msh}).
+The resulting mesh presents no loss of continuity in either the mesh lines or the scale factors,
+or even the scale factor derivatives over the whole ocean domain, as the mesh is not a composite mesh.
+%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+\begin{figure}[!tbp]
+ \begin{center}
+ \includegraphics[width=1.0\textwidth]{Fig_ORCA_NH_msh05_e1_e2}
+ \includegraphics[width=0.80\textwidth]{Fig_ORCA_aniso}
+ \caption { \protect\label{fig:MISC_ORCA_e1e2}
+ \textit{Top}: Horizontal scale factors ($e_1$, $e_2$) and
+ \textit{Bottom}: ratio of anisotropy ($e_1 / e_2$)
+    for the ORCA 0.5\deg~mesh.
+    South of 20\degN a Mercator grid is used ($e_1 = e_2$) so that the anisotropy ratio is 1.
+    Poleward of 20\degN, the two "north poles" introduce a weak anisotropy over the ocean areas ($< 1.2$) except in
+    the vicinity of Victoria Island (Canadian Arctic Archipelago). }
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The method is applied to Mercator grid ($i.e.$ same zonal and meridional grid spacing) poleward
of 20\degN, so that the Equator is a mesh line, which provides a better numerical solution
for equatorial dynamics. The choice of the series of embedded ellipses (position of the foci and
variation of the ellipses) is a compromise between maintaining the ratio of mesh anisotropy
($e_1 / e_2$) close to one in the ocean (especially in area of strong eddy activities such as
the Gulf Stream) and keeping the smallest scale factor in the northern hemisphere larger
than the smallest one in the southern hemisphere.
The resulting mesh is shown in \autoref{fig:MISC_ORCA_msh} and \autoref{fig:MISC_ORCA_e1e2}
for a half a degree grid (ORCA\_R05).
The smallest ocean scale factor is found in along Antarctica, while the ratio of anisotropy remains close to one except near the Victoria Island
in the Canadian Archipelago.
+The method is applied to a Mercator grid ($i.e.$ same zonal and meridional grid spacing) poleward of 20\degN,
+so that the Equator is a mesh line, which provides a better numerical solution for equatorial dynamics.
+The choice of the series of embedded ellipses (position of the foci and variation of the ellipses)
+is a compromise between maintaining the ratio of mesh anisotropy ($e_1 / e_2$) close to one in the ocean
+(especially in areas of strong eddy activity such as the Gulf Stream) and keeping the smallest scale factor in
+the northern hemisphere larger than the smallest one in the southern hemisphere.
+The resulting mesh is shown in \autoref{fig:MISC_ORCA_msh} and \autoref{fig:MISC_ORCA_e1e2} for
+a half-degree grid (ORCA\_R05).
+The smallest ocean scale factor is found along Antarctica,
+while the ratio of anisotropy remains close to one except near Victoria Island in the Canadian Archipelago.
% 
@@ 154,8 +162,10 @@
The NEMO system is provided with five builtin ORCA configurations which differ in the
horizontal resolution. The value of the resolution is given by the resolution at the Equator
expressed in degrees. Each of configuration is set through the \textit{domain\_cfg} domain configuration file,
which sets the grid size and configuration name parameters. The NEMO System Team provides only ORCA2 domain input file "\ifile{ORCA\_R2\_zps\_domcfg}" file (Tab. \autoref{tab:ORCA}).
+The NEMO system is provided with five built-in ORCA configurations which differ in horizontal resolution.
+The value of the resolution is given by the resolution at the Equator expressed in degrees.
+Each configuration is set through the \textit{domain\_cfg} domain configuration file,
+which sets the grid size and configuration name parameters.
+The NEMO System Team provides only the ORCA2 domain input file \ifile{ORCA\_R2\_zps\_domcfg}
+(\autoref{tab:ORCA}).
@@ 176,7 +186,7 @@
\hline \hline
\end{tabular}
\caption{ \protect\label{tab:ORCA}
Domain size of ORCA family configurations.
The flag for configurations of ORCA family need to be set in \textit{domain\_cfg} file. }
+\caption{ \protect\label{tab:ORCA}
+ Domain size of ORCA family configurations.
+  The flag for ORCA family configurations needs to be set in the \textit{domain\_cfg} file. }
\end{center}
\end{table}
@@ 184,38 +194,40 @@
The ORCA\_R2 configuration has the following specificity : starting from a 2\deg~ORCA mesh,
local mesh refinements were applied to the Mediterranean, Red, Black and Caspian Seas,
so that the resolution is 1\deg \time 1\deg there. A local transformation were also applied
with in the Tropics in order to refine the meridional resolution up to 0.5\deg at the Equator.

The ORCA\_R1 configuration has only a local tropical transformation to refine the meridional
resolution up to 1/3\deg~at the Equator. Note that the tropical mesh refinements in ORCA\_R2
and R1 strongly increases the mesh anisotropy there.
+The ORCA\_R2 configuration has the following specificity: starting from a 2\deg~ORCA mesh,
+local mesh refinements were applied to the Mediterranean, Red, Black and Caspian Seas,
+so that the resolution is 1\deg $\times$ 1\deg there.
+A local transformation was also applied in the Tropics in order to refine the meridional resolution up to
+0.5\deg at the Equator.
+
+The ORCA\_R1 configuration has only a local tropical transformation to refine the meridional resolution up to
+1/3\deg~at the Equator.
+Note that the tropical mesh refinements in ORCA\_R2 and R1 strongly increase the mesh anisotropy there.
The ORCA\_R05 and higher global configurations do not incorporate any regional refinements.
For ORCA\_R1 and R025, setting the configuration key to 75 allows to use 75 vertical levels,
otherwise 46 are used. In the other ORCA configurations, 31 levels are used
+For ORCA\_R1 and R025, setting the configuration key to 75 allows the use of 75 vertical levels; otherwise 46 are used.
+In the other ORCA configurations, 31 levels are used
(see \autoref{tab:orca_zgr} \sfcomment{HERE I need to put new table for ORCA2 values} and \autoref{fig:zgr}).
Only the ORCA\_R2 is provided with all its input files in the \NEMO distribution.
It is very similar to that used as part of the climate model developed at IPSL for the 4th IPCC
assessment of climate change (Marti et al., 2009). It is also the basis for the \NEMO contribution
to the Coordinate Oceanice Reference Experiments (COREs) documented in \citet{Griffies_al_OM09}.

This version of ORCA\_R2 has 31 levels in the vertical, with the highest resolution (10m)
in the upper 150m (see \autoref{tab:orca_zgr} and \autoref{fig:zgr}).
+Only the ORCA\_R2 is provided with all its input files in the \NEMO distribution.
+It is very similar to that used as part of the climate model developed at IPSL for the 4th IPCC assessment of
+climate change (Marti et al., 2009).
+It is also the basis for the \NEMO contribution to the Coordinated Ocean-ice Reference Experiments (COREs)
+documented in \citet{Griffies_al_OM09}.
+
+This version of ORCA\_R2 has 31 levels in the vertical, with the highest resolution (10m) in the upper 150m
+(see \autoref{tab:orca_zgr} and \autoref{fig:zgr}).
The bottom topography and the coastlines are derived from the global atlas of Smith and Sandwell (1997).
The default forcing uses the boundary forcing from \citet{Large_Yeager_Rep04} (see \autoref{subsec:SBC_blk_core}),
which was developed for the purpose of running global coupled oceanice simulations
without an interactive atmosphere. This \citet{Large_Yeager_Rep04} dataset is available
through the \href{http://nomads.gfdl.noaa.gov/nomads/forms/mom4/CORE.html}{GFDL web site}.
The "normal year" of \citet{Large_Yeager_Rep04} has been chosen of the \NEMO distribution
since release v3.3.

ORCA\_R2 predefined configuration can also be run with an AGRIF zoom over the Agulhas
current area ( \key{agrif} defined) and, by setting the appropriate variables, see \path{CONFIG/SHARED/namelist_ref}
a regional Arctic or periAntarctic configuration is extracted from an ORCA\_R2 or R05 configurations
using sponge layers at open boundaries.
+which was developed for the purpose of running global coupled oceanice simulations without
+an interactive atmosphere.
+This \citet{Large_Yeager_Rep04} dataset is available through
+the \href{http://nomads.gfdl.noaa.gov/nomads/forms/mom4/CORE.html}{GFDL web site}.
+The "normal year" of \citet{Large_Yeager_Rep04} has been chosen of the \NEMO distribution since release v3.3.
+
+The ORCA\_R2 predefined configuration can also be run with an AGRIF zoom over the Agulhas current area
+(\key{agrif} defined).
+By setting the appropriate variables (see \path{CONFIG/SHARED/namelist_ref}),
+a regional Arctic or peri-Antarctic configuration can be extracted from the ORCA\_R2 or R05 configurations using
+sponge layers at open boundaries.
% 
@@ 225,51 +237,56 @@
\label{sec:CFG_gyre}
The GYRE configuration \citep{Levy_al_OM10} has been built to simulate
the seasonal cycle of a doublegyre box model. It consists in an idealized domain
similar to that used in the studies of \citet{Drijfhout_JPO94} and \citet{Hazeleger_Drijfhout_JPO98,
Hazeleger_Drijfhout_JPO99, Hazeleger_Drijfhout_JGR00, Hazeleger_Drijfhout_JPO00},
over which an analytical seasonal forcing is applied. This allows to investigate the
spontaneous generation of a large number of interacting, transient mesoscale eddies
+The GYRE configuration \citep{Levy_al_OM10} has been built to
+simulate the seasonal cycle of a double-gyre box model.
+It consists of an idealized domain similar to that used in the studies of \citet{Drijfhout_JPO94} and
+\citet{Hazeleger_Drijfhout_JPO98, Hazeleger_Drijfhout_JPO99, Hazeleger_Drijfhout_JGR00, Hazeleger_Drijfhout_JPO00},
+over which an analytical seasonal forcing is applied.
+This makes it possible to investigate the spontaneous generation of a large number of interacting, transient mesoscale eddies
and their contribution to the large scale circulation.
The domain geometry is a closed rectangular basin on the $\beta$plane centred
at $\sim$ 30\degN and rotated by 45\deg, 3180~km long, 2120~km wide
and 4~km deep (\autoref{fig:MISC_strait_hand}).
The domain is bounded by vertical walls and by a flat bottom. The configuration is
meant to represent an idealized North Atlantic or North Pacific basin.
The circulation is forced by analytical profiles of wind and buoyancy fluxes.
The applied forcings vary seasonally in a sinusoidal manner between winter
and summer extrema \citep{Levy_al_OM10}.
The wind stress is zonal and its curl changes sign at 22\degN and 36\degN.
It forces a subpolar gyre in the north, a subtropical gyre in the wider part of the domain
and a small recirculation gyre in the southern corner.
The net heat flux takes the form of a restoring toward a zonal apparent air
temperature profile. A portion of the net heat flux which comes from the solar radiation
is allowed to penetrate within the water column.
The fresh water flux is also prescribed and varies zonally.
It is determined such as, at each time step, the basinintegrated flux is zero.
The basin is initialised at rest with vertical profiles of temperature and salinity
uniformly applied to the whole domain.

The GYRE configuration is set like an analytical configuration. Through \np{ln\_read\_cfg}\forcode{ = .false.} in \textit{namcfg} namelist defined in the reference configuration \path{CONFIG/GYRE/EXP00/namelist_cfg} anaylitical definition of grid in GYRE is done in usrdef\_hrg, usrdef\_zgr routines. Its horizontal resolution
(and thus the size of the domain) is determined by setting \np{nn\_GYRE} in \ngn{namusr\_def}: \\
+The domain geometry is a closed rectangular basin on the $\beta$-plane centred at $\sim$ 30\degN and
+rotated by 45\deg, 3180~km long, 2120~km wide and 4~km deep (\autoref{fig:MISC_strait_hand}).
+The domain is bounded by vertical walls and by a flat bottom.
+The configuration is meant to represent an idealized North Atlantic or North Pacific basin.
+The circulation is forced by analytical profiles of wind and buoyancy fluxes.
+The applied forcings vary seasonally in a sinusoidal manner between winter and summer extrema \citep{Levy_al_OM10}.
+The wind stress is zonal and its curl changes sign at 22\degN and 36\degN.
+It forces a subpolar gyre in the north, a subtropical gyre in the wider part of the domain and
+a small recirculation gyre in the southern corner.
+The net heat flux takes the form of a restoring toward a zonal apparent air temperature profile.
+A portion of the net heat flux which comes from the solar radiation is allowed to penetrate into the water column.
+The fresh water flux is also prescribed and varies zonally.
+It is determined such that, at each time step, the basin-integrated flux is zero.
+The basin is initialised at rest with vertical profiles of temperature and salinity uniformly applied to
+the whole domain.
+
+The GYRE configuration is set up as an analytical configuration.
+With \np{ln\_read\_cfg}\forcode{ = .false.} in the \textit{namcfg} namelist of
+the reference configuration \path{CONFIG/GYRE/EXP00/namelist_cfg},
+the grid of GYRE is defined analytically in the usrdef\_hrg and usrdef\_zgr routines.
+Its horizontal resolution (and thus the size of the domain) is determined by
+setting \np{nn\_GYRE} in \ngn{namusr\_def}: \\
\np{jpiglo} $= 30 \times$ \np{nn\_GYRE} + 2 \\
\np{jpjglo} $= 20 \times$ \np{nn\_GYRE} + 2 \\
Obviously, the namelist parameters have to be adjusted to the chosen resolution, see the Configurations
pages on the NEMO web site (Using NEMO\/Configurations) .
+Obviously, the namelist parameters have to be adjusted to the chosen resolution;
+see the Configurations pages on the NEMO web site (Using NEMO\/Configurations).
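+For example, with \np{nn\_GYRE}\forcode{ = 9} (the GYRE R9 case shown in \autoref{fig:GYRE}),
+these formulas give \np{jpiglo} $= 30 \times 9 + 2 = 272$ and \np{jpjglo} $= 20 \times 9 + 2 = 182$.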
In the vertical, GYRE uses the default 30 ocean levels (\jp{jpk}\forcode{ = 31}) (\autoref{fig:zgr}).
The GYRE configuration is also used in benchmark test as it is very simple to increase
its resolution and as it does not requires any input file. For example, keeping a same model size
on each processor while increasing the number of processor used is very easy, even though the
physical integrity of the solution can be compromised. Benchmark is activate via \np{ln\_bench}\forcode{ = .true.} in \ngn{namusr\_def} in namelist \path{CONFIG/GYRE/EXP00/namelist_cfg}.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_GYRE}
\caption{ \protect\label{fig:GYRE}
Snapshot of relative vorticity at the surface of the model domain
in GYRE R9, R27 and R54. From \citet{Levy_al_OM10}.}
\end{center} \end{figure}
+The GYRE configuration is also used in benchmark tests as it is very simple to increase its resolution and
+as it does not require any input file.
+For example, keeping the same model size on each processor while increasing the number of processors used is very easy,
+even though the physical integrity of the solution can be compromised.
+The benchmark is activated via \np{ln\_bench}\forcode{ = .true.} in \ngn{namusr\_def} in
+the namelist \path{CONFIG/GYRE/EXP00/namelist_cfg}.
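+A minimal sketch of the corresponding \ngn{namusr\_def} block is given below
+(parameter values are illustrative; check the distributed namelist for the exact defaults):
+\begin{verbatim}
+!-----------------------------------------------------------------------
+&namusr_def    !   GYRE user defined namelist
+!-----------------------------------------------------------------------
+   nn_GYRE  =      1   ! GYRE resolution [1/degrees]
+   ln_bench = .true.   ! =T benchmark with GYRE: the grid size is kept constant
+   jpkglo   =     31   ! number of model levels
+/
+\end{verbatim}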
+
+%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+\begin{figure}[!t]
+ \begin{center}
+ \includegraphics[width=1.0\textwidth]{Fig_GYRE}
+ \caption{ \protect\label{fig:GYRE}
+ Snapshot of relative vorticity at the surface of the model domain in GYRE R9, R27 and R54.
+ From \citet{Levy_al_OM10}.}
+ \end{center}
+\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 280,27 +297,21 @@
\label{sec:MISC_config_AMM}
The AMM, Atlantic Margins Model, is a regional model covering the
Northwest European Shelf domain on a regular latlon grid at
approximately 12km horizontal resolution. The appropriate
\textit{\&namcfg} namelist is available in \textit{CONFIG/AMM12/EXP00/namelist\_cfg}.
+The AMM, Atlantic Margins Model, is a regional model covering the Northwest European Shelf domain on
+a regular lat-lon grid at approximately 12~km horizontal resolution.
+The appropriate \textit{\&namcfg} namelist is available in \textit{CONFIG/AMM12/EXP00/namelist\_cfg}.
It is used to build the correct dimensions of the AMM domain.
This configuration tests several features of NEMO functionality specific
to the shelf seas.
In particular, the AMM uses $S$coordinates in the vertical rather than
$z$coordinates and is forced with tidal lateral boundary conditions
using a flather boundary condition from the BDY module.
The AMM configuration uses the GLS (\key{zdfgls}) turbulence scheme, the
VVL nonlinear free surface(\key{vvl}) and timesplitting
(\key{dynspg\_ts}).

In addition to the tidal boundary condition the model may also take
open boundary conditions from a North Atlantic model. Boundaries may be
completely omitted by setting \np{ln\_bdy} to false.
Sample surface fluxes, river forcing and a sample initial restart file
are included to test a realistic model run. The Baltic boundary is
included within the river input file and is specified as a river source.
Unlike ordinary river points the Baltic inputs also include salinity and
temperature data.
+This configuration tests several features of NEMO functionality specific to the shelf seas.
+In particular, the AMM uses $S$-coordinates in the vertical rather than $z$-coordinates and
+is forced with tidal lateral boundary conditions using a Flather boundary condition from the BDY module.
+The AMM configuration uses the GLS (\key{zdfgls}) turbulence scheme,
+the VVL nonlinear free surface (\key{vvl}) and time-splitting (\key{dynspg\_ts}).
+
+In addition to the tidal boundary condition the model may also take open boundary conditions from
+a North Atlantic model.
+Boundaries may be completely omitted by setting \np{ln\_bdy} to false.
+Sample surface fluxes, river forcing and a sample initial restart file are included to test a realistic model run.
+The Baltic boundary is included within the river input file and is specified as a river source.
+Unlike ordinary river points the Baltic inputs also include salinity and temperature data.
\end{document}
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DIA.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DIA.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DIA.tex (revision 10368)
@@ 17,17 +17,15 @@
\label{sec:DIA_io_old}
The model outputs are of three types: the restart file, the output listing, and
the diagnostic output file(s).
The restart file is used internally by the code when the user wants to start the model with
+The model outputs are of three types: the restart file, the output listing, and the diagnostic output file(s).
+The restart file is used internally by the code when the user wants to start the model with
initial conditions defined by a previous simulation.
It contains all the information that is necessary in order for there to be no changes in
the model results (even at the computer precision) between a run performed with several restarts and
+It contains all the information that is necessary in order for there to be no changes in the model results
+(even at the computer precision) between a run performed with several restarts and
the same run performed in one step.
It should be noted that this requires that the restart file contains two consecutive time steps for
all the prognostic variables, and that it is saved in the same binary format as the one used by
the computer that is to read it (in particular, 32 bits binary IEEE format must not be used for this file).

The output listing and file(s) are predefined but should be checked and eventually adapted to
the user's needs.
+It should be noted that this requires that the restart file contains two consecutive time steps for
+all the prognostic variables, and that it is saved in the same binary format as the one used by the computer that
+is to read it (in particular, 32-bit binary IEEE format must not be used for this file).
+
+The output listing and file(s) are predefined but should be checked and, if necessary, adapted to the user's needs.
The output listing is stored in the $ocean.output$ file.
The information is printed from within the code on the logical unit $numout$.
@@ 35,16 +33,16 @@
By default, diagnostic output files are written in NetCDF format.
Since version 3.2, when defining \key{iomput}, an I/O server has been added which
provides more flexibility in the choice of the fields to be written as well as
how the writing work is distributed over the processors in massively parallel computing.
A complete description of the use of this I/O server is presented in the next section.

By default, \key{iomput} is not defined, NEMO produces NetCDF with the old IOIPSL library which
has been kept for compatibility and its easy installation.
However, the IOIPSL library is quite inefficient on parallel machines and, since version 3.2,
many diagnostic options have been added presuming the use of \key{iomput}.
The usefulness of the default IOIPSLbased option is expected to reduce with each new release.
If \key{iomput} is not defined, output files and content are defined in the \mdl{diawri} module and
contain mean (or instantaneous if \key{diainstant} is defined) values over a regular period of
+Since version 3.2, when defining \key{iomput}, an I/O server has been added which
+provides more flexibility in the choice of the fields to be written as well as how
+the writing work is distributed over the processors in massively parallel computing.
+A complete description of the use of this I/O server is presented in the next section.
+
+By default, \key{iomput} is not defined and NEMO produces NetCDF output with the old IOIPSL library,
+which has been kept for compatibility and its easy installation.
+However, the IOIPSL library is quite inefficient on parallel machines and, since version 3.2,
+many diagnostic options have been added presuming the use of \key{iomput}.
+The usefulness of the default IOIPSLbased option is expected to reduce with each new release.
+If \key{iomput} is not defined, output files and content are defined in the \mdl{diawri} module and
+contain mean (or instantaneous if \key{diainstant} is defined) values over a regular period of
nn\_write timesteps (namelist parameter).
@@ 57,13 +55,15 @@
\label{sec:DIA_iom}
Since version 3.2, iomput is the NEMO output interface of choice.
It has been designed to be simple to use, flexible and efficient.
+Since version 3.2, iomput is the NEMO output interface of choice.
+It has been designed to be simple to use, flexible and efficient.
The two main purposes of iomput are:
\begin{enumerate}
 \item The complete and flexible control of the output files through external XML files adapted by
 the user from standard templates.
 \item To achieve high performance and scalable output through the optional distribution of
 all diagnostic output related tasks to dedicated processes.
+\item
+ The complete and flexible control of the output files through external XML files adapted by
+ the user from standard templates.
+\item
+ To achieve high performance and scalable output through the optional distribution of
+ all diagnostic output related tasks to dedicated processes.
\end{enumerate}
@@ 72,47 +72,46 @@
\begin{itemize}
 \item The choice of output frequencies that can be different for each file
 (including real months and years).
 \item The choice of file contents; includes complete flexibility over which data are written in
 which files (the same data can be written in different files).
 \item The possibility to split output files at a chosen frequency.
 \item The possibility to extract a vertical or an horizontal subdomain.
 \item The choice of the temporal operation to perform,
 e.g.: average, accumulate, instantaneous, min, max and once.
 \item Control over metadata via a large XML "database" of possible output fields.
+\item
+ The choice of output frequencies that can be different for each file (including real months and years).
+\item
+ The choice of file contents; includes complete flexibility over which data are written in which files
+ (the same data can be written in different files).
+\item
+ The possibility to split output files at a chosen frequency.
+\item
+  The possibility to extract a vertical or a horizontal subdomain.
+\item
+ The choice of the temporal operation to perform, $e.g.$: average, accumulate, instantaneous, min, max and once.
+\item
+ Control over metadata via a large XML "database" of possible output fields.
\end{itemize}
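+As an illustrative sketch only (identifiers and attributes are indicative and may differ between XIOS versions;
+see the example XML files listed below for the exact syntax), a daily-mean output file could be requested as:
+\begin{verbatim}
+  <file_definition>
+    <file_group id="1d" output_freq="1d" enabled=".true.">
+      <file id="file1" name_suffix="_grid_T">
+        <field field_ref="toce" name="thetao" operation="average"/>
+      </file>
+    </file_group>
+  </file_definition>
+\end{verbatim}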
In addition, iomput allows the user to add in the code the output of any new variable (scalar, 2D or
3D) in a very easy way.
+In addition, iomput allows the user to add in the code the output of any new variable (scalar, 2D or 3D)
+in a very easy way.
All details of iomput functionalities are listed in the following subsections.
Examples of the XML files that control the outputs can be found in:
\path{NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef.xml}, \path{NEMOGCM/CONFIG/SHARED/field_def.xml} and
\path{NEMOGCM/CONFIG/SHARED/domain_def.xml}. \\
+Examples of the XML files that control the outputs can be found in: \path{NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef.xml},
+\path{NEMOGCM/CONFIG/SHARED/field_def.xml} and \path{NEMOGCM/CONFIG/SHARED/domain_def.xml}. \\
The second functionality targets output performance when running in parallel (\key{mpp\_mpi}).
Iomput provides the possibility to specify N dedicated I/O processes (in addition to
the NEMO processes) to collect and write the outputs.
With an appropriate choice of N by the user, the bottleneck associated with the writing of
+Iomput provides the possibility to specify N dedicated I/O processes (in addition to the NEMO processes)
+to collect and write the outputs.
+With an appropriate choice of N by the user, the bottleneck associated with the writing of
the output files can be greatly reduced.
In version 3.6, the iom\_put interface depends on an external code called
\href{https://forge.ipsl.jussieu.fr/ioserver/browser/XIOS/branchs/xios1.0}{XIOS1.0}
+In version 3.6, the iom\_put interface depends on
+an external code called \href{https://forge.ipsl.jussieu.fr/ioserver/browser/XIOS/branchs/xios1.0}{XIOS1.0}
(use of revision 618 or higher is required).
This new IO server can take advantage of the parallel I/O functionality of NetCDF4 to
+This new IO server can take advantage of the parallel I/O functionality of NetCDF4 to
create a single output file and therefore to bypass the rebuilding phase.
Note that writing in parallel into the same NetCDF files requires that
your NetCDF4 library is linked to an HDF5 library that has been correctly compiled ($i.e.$ with
the configure option $$enableparallel).
+Note that writing in parallel into the same NetCDF files requires that your NetCDF4 library is linked to
+an HDF5 library that has been correctly compiled ($i.e.$ with the configure option \texttt{--enable-parallel}).
Note that the files created by iomput through XIOS are incompatible with NetCDF3.
All postprocesssing and visualization tools must therefore be compatible with
NetCDF4 and not only NetCDF3.

Even if not using the parallel I/O functionality of NetCDF4, using N dedicated I/O servers,
where N is typically much less than the number of NEMO processors,
will reduce the number of output files created.
This can greatly reduce the postprocessing burden usually associated with using
large numbers of NEMO processors.
Note that for smaller configurations, the rebuilding phase can be avoided, even without
a parallelenabled NetCDF4 library, simply by employing only one dedicated I/O server.
+All post-processing and visualization tools must therefore be compatible with NetCDF4 and not only NetCDF3.
+
+Even if not using the parallel I/O functionality of NetCDF4, using N dedicated I/O servers,
+where N is typically much less than the number of NEMO processors, will reduce the number of output files created.
+This can greatly reduce the postprocessing burden usually associated with using large numbers of NEMO processors.
+Note that for smaller configurations, the rebuilding phase can be avoided,
+even without a parallelenabled NetCDF4 library, simply by employing only one dedicated I/O server.
\subsection{XIOS: XML InputsOutputs Server}
@@ 120,5 +119,5 @@
\subsubsection{Attached or detached mode?}
Iomput is based on \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS},
+Iomput is based on \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS},
the io\_server developed by Yann Meurdesoif from IPSL.
The behaviour of the I/O subsystem is controlled by settings in the external XML files listed above.
@@ 127,27 +126,26 @@
\xmlline
The {\tt using\_server} setting determines whether or not the server will be used in
\textit{attached mode} (as a library) [{\tt> false <}] or in \textit{detached mode} (as
an external executable on N additional, dedicated cpus) [{\tt > true <}].
+The {\tt using\_server} setting determines whether or not the server will be used in \textit{attached mode}
+(as a library) [{\tt> false <}] or in \textit{detached mode}
+(as an external executable on N additional, dedicated cpus) [{\tt > true <}].
The \textit{attached mode} is simpler to use but much less efficient for massively parallel applications.
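+As an illustrative sketch (the grouping of variables inside \path{iodef.xml} may differ between XIOS versions),
+the setting appears in the XIOS context of the XML input file:
+\begin{verbatim}
+  <context id="xios">
+    <variable_definition>
+      <variable id="using_server" type="boolean">false</variable>
+    </variable_definition>
+  </context>
+\end{verbatim}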
The type of each file can be either ''multiple\_file'' or ''one\_file''.
In \textit{attached mode} and if the type of file is ''multiple\_file'',
+In \textit{attached mode} and if the type of file is ''multiple\_file'',
then each NEMO process will also act as an IO server and produce its own set of output files.
Superficially, this emulates the standard behaviour in previous versions.
However, the subdomain written out by each process does not correspond to
the \forcode{jpi x jpj x jpk} domain actually computed by the process (although it may if \forcode{jpni=1}).
Instead each process will have collected and written out a number of complete longitudinal strips.
If the ``one\_file'' option is chosen then all processes will collect their longitudinal strips and
write (in parallel) to a single output file.
In \textit{detached mode} and if the type of file is ``multiple\_file'',
then each stand-alone XIOS process will collect data for a range of complete longitudinal strips and
write to its own set of output files.
If the ``one\_file'' option is chosen then all XIOS processes will collect their longitudinal strips and
write (in parallel) to a single output file.
Note that running in detached mode requires launching a Multiple Process Multiple Data (MPMD) parallel job.
The following subsection provides a typical example but the syntax will vary in different MPP environments.
\subsubsection{Number of cpu used by XIOS in detached mode}
The number of cores dedicated to XIOS should be from \texttildelow1/10 to \texttildelow1/50 of the number of
cores dedicated to NEMO.
Some manufacturers suggest using O($\sqrt{N}$) dedicated IO processors for N processors but
this is a general recommendation and not specific to NEMO.
It is difficult to provide precise recommendations because the optimal choice will depend on
the particular hardware properties of the target system
(parallel filesystem performance, available memory, memory bandwidth etc.)
and the volume and frequency of data to be created.
Here is an example of 2 cpus for the io\_server and 62 cpus for NEMO using mpirun:
\cmd{mpirun -np 62 ./nemo.exe : -np 2 ./xios_server.exe}
\subsubsection{Control of XIOS: the context in iodef.xml}
As well as the {\tt using\_server} flag, other controls on the use of XIOS are set in the XIOS context in iodef.xml.
See the XML basics section below for more details on XML syntax and rules.
\subsubsection{Installation}
As mentioned, XIOS is supported separately and must be downloaded and compiled before it can be used with NEMO.
See the installation guide on the \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS} wiki for help and guidance.
NEMO will need to link to the compiled XIOS library.
The \href{https://forge.ipsl.jussieu.fr/nemo/wiki/Users/ModelInterfacing/InputsOutputs#InputsOutputsusingXIOS}
{XIOS with NEMO} guide provides an example illustration of how this can be achieved.
\subsubsection{Add your own outputs}
It is very easy to add your own outputs with iomput.
Many standard fields and diagnostics are already prepared ($i.e.$, steps 1 to 3 below have been done) and
simply need to be activated by including the required output in a file definition in iodef.xml (step 4).
To add new output variables, all 4 of the following steps must be taken.
\begin{enumerate}
\item[1.]
  in NEMO code, add a \forcode{CALL iom\_put( 'identifier', array )} where you want to output a 2D or 3D array.
\item[2.]
  If necessary, add \forcode{USE iom ! I/O manager library} to the list of used modules in
  the upper part of your module.
\item[3.]
  in the field\_def.xml file, add the definition of your variable using the same identifier you used in the f90 code
  (see subsequent sections for details of the XML syntax and rules).
  For example:
\begin{xmllines}
\end{xmllines}
Note your definition must be added to the field\_group whose reference grid is consistent with the size of
the array passed to iomput.
The grid\_ref attribute refers to definitions set in iodef.xml which, in turn,
reference grids and axes either defined in the code
(iom\_set\_domain\_attr and iom\_set\_axis\_attr in \mdl{iom}) or defined in the domain\_def.xml file.
$e.g.$:
\end{xmllines}
Note, if your array is computed within the surface module each \np{nn\_fsbc} time\_step,
add the field definition within the field\_group defined with the id "SBC":
\xmlcode{} which has been defined with the correct frequency of operations
(iom\_set\_field\_attr in \mdl{iom})
\item[4.]
  add your field in one of the output files defined in iodef.xml
  (again see subsequent sections for syntax and rules)
\begin{xmllines}
XML tags begin with the lessthan character ("$<$") and end with the greaterthan character ("$>$").
You use tags to mark the start and end of elements, which are the logical units of information in an XML document.
In addition to marking the beginning of an element, XML start tags also provide a place to specify attributes.
An attribute specifies a single property for an element, using a name/value pair, for example:
\xmlcode{ ... }.
See \href{http://www.xmlnews.org/docs/xmlbasics.html}{here} for more details.
\subsubsection{Structure of the XML file used in NEMO}
The XML file used in XIOS is structured by 7 families of tags:
context, axis, domain, grid, field, file and variable.
Each tag family has a hierarchy of three flavors (except for context):
Each element may have several attributes.
Some attributes are mandatory, others are optional but have a default value and others are completely optional.
Id is a special attribute used to identify an element or a group of elements.
It must be unique for a kind of element.
It is optional, but no reference to the corresponding element can be made if it is not defined.
The XML file is split into context tags that are used to isolate IO definitions from
different codes or different parts of a code.
No interference is possible between 2 different contexts.
Each context has its own calendar and an associated timestep.
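As an illustrative sketch (the context ids and root tag shown are typical of an iodef.xml used with NEMO, but are assumptions to be checked against your own configuration file), the model and server definitions live in separate, non-interfering contexts:

\begin{xmllines}
<?xml version="1.0"?>
<simulation>
   <context id="nemo">
      <!-- definitions for NEMO output: calendar, fields, files -->
   </context>
   <context id="xios">
      <!-- variables controlling the behaviour of the XIOS server itself -->
   </context>
</simulation>
\end{xmllines}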
\noindent In NEMO, by default, the field and domain definition is done in 2 separate files:
\path{NEMOGCM/CONFIG/SHARED/field_def.xml} and \path{NEMOGCM/CONFIG/SHARED/domain_def.xml} that
are included in the main iodef.xml file through the following commands:
\begin{xmllines}
XML extensively uses the concept of inheritance.
XML has a tree-based structure with a parent-child oriented relation: all children inherit attributes from parent,
but an attribute defined in a child replaces the inherited attribute value.
Note that the special attribute ``id'' is never inherited.
\\
\\
example 1: Direct inheritance.
\end{xmllines}
The field ``sst'' which is part (or a child) of the field\_definition will inherit the value ``average'' of
the attribute ``operation'' from its parent.
Note that a child can overwrite the attribute definition inherited from its parents.
In the example above, the field ``sss'' will for example output instantaneous values instead of average values.
\\
\\
example 2: Inheritance by reference.
Groups can be used for 2 purposes.
Firstly, the group can be used to define common attributes to be shared by the elements of
the group through inheritance.
In the following example, we define a group of field that will share a common grid ''grid\_T\_2D''.
Note that for the field ``toce'', we overwrite the grid definition inherited from the group by ``grid\_T\_3D''.
\begin{xmllines}
\subsection{Detailed functionalities}
The file \path{NEMOGCM/CONFIG/ORCA2_LIM/iodef_demo.xml} provides several examples of the use of
the new functionalities offered by the XML interface of XIOS.
\subsubsection{Define horizontal subdomains}
Horizontal subdomains are defined through the attributes zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj of
the tag family domain.
It must therefore be done in the domain part of the XML file.
\end{xmllines}
The use of this subdomain is done through the redefinition of the attribute domain\_ref of the tag family field.
For example:
Moorings are seen as an extreme case corresponding to a 1 by 1 subdomain.
The Equatorial section, the TAO, RAMA and PIRATA moorings are already registered in the code and
can therefore be outputted without taking care of their (i,j) position in the grid.
These predefined domains can be activated by the use of specific domain\_ref:
``EqT'', ``EqU'' or ``EqW'' for the equatorial sections and
the mooring position for TAO, RAMA and PIRATA followed by ``T'' (for example: ``8s137eT'', ``1.5s80.5eT'' ...)
\begin{xmllines}
\subsubsection{Define vertical zooms}
Vertical zooms are defined through the attributes zoom\_begin and zoom\_end of the tag family axis.
It must therefore be done in the axis part of the XML file.
For example, in \path{NEMOGCM/CONFIG/ORCA2_LIM/iodef_demo.xml}, we provide the following example:
\end{xmllines}
The use of this vertical zoom is done through the redefinition of the attribute axis\_ref of the tag family field.
For example:
\end{xmllines}
However it is often very convenient to define the file name with the name of the experiment,
the output file frequency and the date of the beginning and the end of the simulation
(which are information stored either in the namelist or in the XML file).
To do so, we added the following rule:
if the id of the tag file is ``fileN'' (where N = 1 to 999 on 1 to 3 digits) or
one of the predefined sections or moorings (see next subsection),
the following part of the name and the name\_suffix (that can be inherited) will be automatically replaced by:
\begin{table} \scriptsize
\end{forlines}
\noindent will give the following file name radical: \ifile{myfile\_ORCA2\_19891231\_freq1d}
\subsubsection{Other controls of the XML attributes from NEMO}
The values of some attributes are defined by subroutine calls within NEMO
(calls to iom\_set\_domain\_attr, iom\_set\_axis\_attr and iom\_set\_field\_attr in \mdl{iom}).
Any definition given in the XML file will be overwritten.
By convention, these attributes are defined to ``auto'' (for string) or ``0000'' (for integer) in the XML file
(but this is not necessary).
\\

Here is the list of these attributes:
\\
\begin{table} \scriptsize
\begin{enumerate}
\item
  Simple computation: directly define the computation when referring to the variable in the file definition.
\begin{xmllines}
\end{xmllines}
\item
  Simple computation: define a new variable and use it in the file definition.
in field\_definition:
\end{xmllines}
Note that in this case, the following syntax \xmlcode{} is not working as
sst2 won't be evaluated.
\item
  Change of variable precision:
\begin{xmllines}
Note that, when the code is crashing, writing real-4 variables forces a numerical conversion from
real-8 to real-4 which will create an internal error in NetCDF and will prevent the creation of the output files.
Forcing double precision outputs with prec="8" (for example in the field\_definition) will avoid this problem.

\item
  add user defined attributes:
\begin{xmllines}
\end{xmllines}
\item
  use of the ``@'' function: example 1, weighted temporal average
  - define a new variable in field\_definition
The freq\_op="5d" attribute is used to define the operation frequency of the ``@'' function: here 5 day.
The temporal operation done by the ``@'' is the one defined in the field definition:
here we use the default, average.
So, in the above case, @toce\_e3t will do the 5-day mean of toce*e3t.
Operation="instant" refers to the temporal operation to be performed on the field ``@toce\_e3t / @e3t'':
here the temporal average is already done by the ``@'' function so we just use instant to do the ratio of
the 2 mean values.
field\_ref="toce" means that attributes not explicitly defined are inherited from the toce field.
Note that in this case, freq\_op must be equal to the file output\_freq.
\item
  use of the ``@'' function: example 2, monthly SSH standard deviation
  - define a new variable in field\_definition
The freq\_op="1m" attribute is used to define the operation frequency of the ``@'' function: here 1 month.
The temporal operation done by the ``@'' is the one defined in the field definition:
here we use the default, average.
So, in the above case, @ssh2 will do the monthly mean of ssh*ssh.
Operation="instant" refers to the temporal operation to be performed on the field ``sqrt( @ssh2 - @ssh * @ssh )'':
here the temporal average is already done by the ``@'' function so we just use instant.
field\_ref="ssh" means that attributes not explicitly defined are inherited from the ssh field.
Note that in this case, freq\_op must be equal to the file output\_freq.
\item
  use of the ``@'' function: example 3, monthly average of SST diurnal cycle
  - define 2 new variables in field\_definition
The freq\_op="1d" attribute is used to define the operation frequency of the ``@'' function: here 1 day.
The temporal operation done by the ``@'' is the one defined in the field definition:
here maximum for sstmax and minimum for sstmin.
So, in the above case, @sstmax will do the daily max and @sstmin the daily min.
Operation="average" refers to the temporal operation to be performed on the field ``@sstmax - @sstmin'':
here monthly mean (of daily max - daily min of the sst).
field\_ref="sst" means that attributes not explicitly defined are inherited from the sst field.
Output from the XIOS-1.0 IO server is compliant with
\href{http://cfconventions.org/Data/cf-conventions/cf-conventions-1.5/build/cf-conventions.html}{version 1.5} of
the CF metadata standard.
Therefore while a user may wish to add their own metadata to the output files (as demonstrated in example 4 of
section \autoref{subsec:IOM_xmlref}) the metadata should, for the most part, comply with the CF-1.5 standard.

Some metadata that may significantly increase the file size (horizontal cell areas and vertices) are controlled by
the namelist parameter \np{ln\_cfmeta} in the \ngn{namrun} namelist.
This must be set to true if these metadata are to be included in the output files.
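For example, a minimal \ngn{namrun} fragment enabling these metadata could read (a sketch; the other \ngn{namrun} parameters are omitted for brevity):

\begin{forlines}
&namrun        !  parameters of the run
   ln_cfmeta = .true.   !  add horizontal cell areas and vertices to the output files
/
\end{forlines}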
Since version 3.3, support for NetCDF4 chunking and (lossless) compression has been included.
These options build on the standard NetCDF output and allow the user control over the size of the chunks via
namelist settings.
Chunking and compression can lead to significant reductions in file sizes for a small runtime overhead.
For a fuller discussion on chunking and other performance issues the reader is referred to
the NetCDF4 documentation found \href{http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#Chunking}{here}.

The new features are only available when the code has been linked with a NetCDF4 library
(version 4.1 onwards, recommended) which has been built with HDF5 support (version 1.8.4 onwards, recommended).
Datasets created with chunking and compression are not backwards compatible with NetCDF3 "classic" format but
most analysis codes can be relinked simply with the new libraries and will then read both NetCDF3 and NetCDF4 files.
NEMO executables linked with NetCDF4 libraries can be made to produce NetCDF3 files by
setting the \np{ln\_nc4zip} logical to false in the \textit{namnc4} namelist:
If \key{netcdf4} has not been defined, these namelist parameters are not read.
In this case, \np{ln\_nc4zip} is set false and dummy routines for a few NetCDF4-specific functions are defined.
These functions will not be used but need to be included so that compilation is possible with NetCDF3 libraries.

When using NetCDF4 libraries, \key{netcdf4} should be defined even if the intention is to
create only NetCDF3-compatible files.
This is necessary to avoid duplication between the dummy routines and the actual routines present in the library.
Most compilers will fail at compile time when faced with such duplication.
Thus when linking with NetCDF4 libraries the user must define \key{netcdf4} and
control the type of NetCDF file produced via the namelist parameter.
Chunking and compression is applied only to 4D fields and
there is no advantage in chunking across more than one time dimension since
previously written chunks would have to be read back and decompressed before being added to.
Therefore, user control over chunk sizes is provided only for the three space dimensions.
The user sets an approximate number of chunks along each spatial axis.
The actual size of the chunks will depend on global domain size for mono-processors or, more likely,
the local processor domain size for distributed processing.
The derived values are subject to practical minimum values (to avoid wastefully small chunk sizes) and
cannot be greater than the domain size in any dimension.
The algorithm used is:
\end{forlines}
\noindent for a standard ORCA2\_LIM configuration gives chunksizes of {\small\tt 46x38x1} respectively in
the mono-processor case (i.e. global domain of {\small\tt 182x149x31}).
An illustration of the potential space savings that NetCDF4 chunking and compression provides is given in
table \autoref{tab:NC4} which compares the results of two short runs of the ORCA2\_LIM reference configuration with
a 4x2 mpi partitioning.
Note the variation in the compression ratio achieved which reflects chiefly the dry to wet volume ratio of
each processing region.
%TABLE
%
When \key{iomput} is activated with \key{netcdf4}, chunking and compression parameters for fields produced via
\np{iom\_put} calls are set via an equivalent and identically named namelist to \textit{namnc4} in
\np{xmlio\_server.def}.
Typically this namelist serves the mean files whilst the \ngn{namnc4} in the main namelist file continues to
serve the restart files.
This duplication is unfortunate but appropriate since, if using io\_servers, the domain sizes of
the individual files produced by the io\_server processes may be different to those produced by
the individual processing regions and different chunking choices may be desired.
%
Each trend of the dynamics and/or temperature and salinity time evolution equations can be sent to
\mdl{trddyn} and/or \mdl{trdtra} modules (see TRD directory) just after their computation
($i.e.$ at the end of each $dyn\cdots.F90$ and/or $tra\cdots.F90$ routines).
This capability is controlled by options offered in \ngn{namtrd} namelist.
Note that the outputs are done with XIOS, and therefore the \key{IOM} is required.
\begin{description}
\item[\np{ln\_glo\_trd}]:
  at each \np{nn\_trd} time-step a check of the basin averaged properties of
  the momentum and tracer equations is performed.
  This also includes a check of $T^2$, $S^2$, $\tfrac{1}{2} (u^2+v^2)$,
  and potential energy time evolution equations properties;
\item[\np{ln\_dyn\_trd}]:
  each 3D trend of the evolution of the two momentum components is output;
\item[\np{ln\_dyn\_mxl}]:
  each 3D trend of the evolution of the two momentum components averaged over the mixed layer is output;
\item[\np{ln\_vor\_trd}]:
  a vertical summation of the momentum tendencies is performed,
  then the curl is computed to obtain the barotropic vorticity tendencies which are output;
\item[\np{ln\_KE\_trd}]:
  each 3D trend of the Kinetic Energy equation is output;
\item[\np{ln\_tra\_trd}]:
  each 3D trend of the evolution of temperature and salinity is output;
\item[\np{ln\_tra\_mxl}]:
  each 2D trend of the evolution of temperature and salinity averaged over the mixed layer is output;
\end{description}
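As an illustrative sketch (the values shown are assumptions, not reference defaults; consult the reference namelist of your configuration), a \ngn{namtrd} block requesting the 3D momentum and tracer trends could read:

\begin{forlines}
&namtrd        !  trend diagnostics
   nn_trd     = 365      !  time-step frequency of the basin averaged checks
   ln_glo_trd = .false.  !  basin averaged momentum and tracer budget checks
   ln_dyn_trd = .true.   !  3D momentum trends
   ln_KE_trd  = .false.  !  3D Kinetic Energy trends
   ln_tra_trd = .true.   !  3D temperature and salinity trends
/
\end{forlines}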
\textbf{Note that} in the current version (v3.6), many changes have been introduced but not fully tested.
In particular, options associated with \np{ln\_dyn\_mxl}, \np{ln\_vor\_trd}, and \np{ln\_tra\_mxl} are not working,
and none of the options have been tested with variable volume ($i.e.$ \key{vvl} defined).
% 
@@ 1311,17 +1303,16 @@
%
The online computation of floats advected either by the three dimensional velocity field or
constraint to remain at a given depth ($w = 0$ in the computation) have been introduced in
the system during the CLIPPER project.
+The online computation of floats advected either by the three-dimensional velocity field or constrained to
+remain at a given depth ($w = 0$ in the computation) has been introduced in the system during the CLIPPER project.
Options are defined by \ngn{namflo} namelist variables.
The algorithm used is based either on the work of \cite{Blanke_Raynaud_JPO97} (default option), or
on a $4^th$ RungeHutta algorithm (\np{ln\_flork4}\forcode{ = .true.}).
Note that the \cite{Blanke_Raynaud_JPO97} algorithm have the advantage of providing trajectories which
+The algorithm used is based either on the work of \cite{Blanke_Raynaud_JPO97} (default option),
+or on a $4^{th}$ order Runge-Kutta algorithm (\np{ln\_flork4}\forcode{ = .true.}).
+Note that the \cite{Blanke_Raynaud_JPO97} algorithm has the advantage of providing trajectories which
are consistent with the numerics of the code, so that the trajectories never intercept the bathymetry.
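As an illustration of the \np{ln\_flork4} option, here is a classical fourth-order Runge-Kutta step advecting a float in a steady 2D velocity field. This is a schematic stand-alone sketch with an invented analytic gyre, not the NEMO implementation:

```python
import math

def velocity(x, y):
    # Hypothetical steady, non-divergent velocity field (a single gyre).
    return (-math.sin(x) * math.cos(y), math.cos(x) * math.sin(y))

def rk4_step(x, y, dt):
    # Classical 4th-order Runge-Kutta advection of a float position.
    k1 = velocity(x, y)
    k2 = velocity(x + 0.5 * dt * k1[0], y + 0.5 * dt * k1[1])
    k3 = velocity(x + 0.5 * dt * k2[0], y + 0.5 * dt * k2[1])
    k4 = velocity(x + dt * k3[0], y + dt * k3[1])
    x += dt * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6.0
    y += dt * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6.0
    return x, y

x, y = 1.0, 0.5
for _ in range(100):
    x, y = rk4_step(x, y, 0.01)
print(x, y)
```

Unlike this generic integrator, the default \cite{Blanke_Raynaud_JPO97} scheme uses the model's own discrete fluxes, which is what guarantees trajectories consistent with the bathymetry.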
\subsubsection{Input data: initial coordinates}
Initial coordinates can be given with Ariane Tools convention (IJK coordinates,
(\np{ln\_ariane}\forcode{ = .true.}) ) or with longitude and latitude.
+Initial coordinates can be given with the Ariane Tools convention
+(IJK coordinates, \np{ln\_ariane}\forcode{ = .true.}) or with longitude and latitude.
In the case of the Ariane convention, the input filename is \np{init\_float\_ariane}.
@@ 1368,11 +1359,11 @@
\np{jpnfl} is the total number of floats during the run.
When initial positions are read in a restart file (\np{ln\_rstflo}\forcode{ = .true.} ),
+When initial positions are read in a restart file (\np{ln\_rstflo}\forcode{ = .true.} ),
\np{jpnflnewflo} can be added in the initialization file.
\subsubsection{Output data}
\np{nn\_writefl} is the frequency of writing in float output file and \np{nn\_stockfl} is
the frequency of creation of the float restart file.
+\np{nn\_writefl} is the frequency of writing the float output file and \np{nn\_stockfl} is the frequency of
+creation of the float restart file.
Output data can be written in ascii files (\np{ln\_flo\_ascii}\forcode{ = .true.}).
@@ 1382,6 +1373,6 @@
There are 2 possibilities:
  if (\key{iomput}) is used, outputs are selected in iodef.xml.
 Here it is an example of specification to put in files description section:
+ if (\key{iomput}) is used, outputs are selected in iodef.xml.
+Here is an example of the specification to put in the file description section:
\begin{xmllines}
@@ 1401,6 +1392,6 @@
 if (\key{iomput}) is not used, a file called \ifile{trajec\_float} will be created by IOIPSL library.
See also \href{http://stockage.univbrest.fr/~grima/Ariane/}{here} the web site describing
the offline use of this marvellous diagnostic tool.
+ See also the web site \href{http://stockage.univ-brest.fr/~grima/Ariane/}{here} describing the offline use of
+ this marvellous diagnostic tool.
% 
@@ 1418,5 +1409,5 @@
This online Harmonic analysis is activated with \key{diaharm}.
Some parameters are available in namelist \ngn{namdia\_harm} :
+Some parameters are available in namelist \ngn{namdia\_harm}:
 \np{nit000\_han} is the first time step used for harmonic analysis
@@ 1430,13 +1421,12 @@
 \np{tname} is an array with names of tidal constituents to analyse
\np{nit000\_han} and \np{nitend\_han} must be between \np{nit000} and \np{nitend} of the simulation.
The restart capability is not implemented.

The Harmonic analysis solve the following equation:
+ \np{nit000\_han} and \np{nitend\_han} must be between \np{nit000} and \np{nitend} of the simulation.
+ The restart capability is not implemented.
+
+ The harmonic analysis solves the following equation:
\[ h_{i} - A_{0} - \sum^{nb\_ana}_{j=1} [ A_{j} \cos( \nu_{j} t_{i} - \phi_{j} ) ] = e_{i} \]
With $A_{j}$, $\nu_{j}$, $\phi_{j}$, the amplitude, frequency and phase for each wave and
$e_{i}$ the error.
+With $A_{j}$, $\nu_{j}$ and $\phi_{j}$ the amplitude, frequency and phase of each wave, and $e_{i}$ the error.
$h_{i}$ is the sea level for the time $t_{i}$ and $A_{0}$ is the mean sea level. \\
We can rewrite this equation:
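Numerically, the amplitudes and phases are obtained from a least-squares fit after rewriting each $A_{j}\cos(\nu_{j}t-\phi_{j})$ as $C_{j}\cos(\nu_{j}t)+S_{j}\sin(\nu_{j}t)$, which makes the problem linear. A minimal sketch in Python, assuming synthetic hourly sea-level data and two hypothetical constituents (not the \key{diaharm} code itself):

```python
import numpy as np

# Hypothetical constituent frequencies (rad/hour), e.g. M2- and S2-like tides.
freqs = np.array([2 * np.pi / 12.42, 2 * np.pi / 12.0])

t = np.arange(0.0, 30 * 24.0, 1.0)   # 30 days of hourly sea level (hours)
h = 0.2 + 1.0 * np.cos(freqs[0] * t - 0.3) + 0.4 * np.cos(freqs[1] * t - 1.1)

# Design matrix [1, cos(nu_1 t), sin(nu_1 t), cos(nu_2 t), sin(nu_2 t)].
G = np.column_stack([np.ones_like(t)]
                    + [f(w * t) for w in freqs for f in (np.cos, np.sin)])
coef, *_ = np.linalg.lstsq(G, h, rcond=None)

A0 = coef[0]
amp = np.hypot(coef[1::2], coef[2::2])    # A_j = sqrt(C_j^2 + S_j^2)
pha = np.arctan2(coef[2::2], coef[1::2])  # phi_j = atan2(S_j, C_j)
print(A0, amp, pha)  # recovers 0.2, [1.0, 0.4], [0.3, 1.1]
```

Resolving two constituents this close in frequency requires a record longer than their beat period (about 14.8 days for M2/S2), hence the 30-day window.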
@@ 1459,13 +1449,11 @@
%
A module is available to compute the transport of volume, heat and salt through sections.
+A module is available to compute the transport of volume, heat and salt through sections.
This diagnostic is activated with \key{diadct}.
Each section is defined by the coordinates of its 2 extremities.
The pathways between them are contructed using tools which can be found in
\texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT}
and are written in a binary file
\texttt{section\_ijglobal.diadct\_ORCA2\_LIM}
which is later read in by NEMO to compute online transports.
+The pathways between them are constructed using tools which can be found in \texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT}
+and are written in a binary file \texttt{section\_ijglobal.diadct\_ORCA2\_LIM} which is later read in by
+NEMO to compute online transports.
The online transports module creates three output ascii files:
@@ 1477,6 +1465,6 @@
 \texttt{salt\_transport} for salt transports (unit: $10^{9}\,Kg\,s^{-1}$) \\
Namelist variables in \ngn{namdct} control how frequently the flows are summed and the time scales over
which they are averaged, as well as the level of output for debugging:
+Namelist variables in \ngn{namdct} control how frequently the flows are summed and the time scales over which
+they are averaged, as well as the level of output for debugging:
\np{nn\_dct} : frequency of instantaneous transports computing
\np{nn\_dctwri}: frequency of writing ( mean of instantaneous transports )
@@ 1485,7 +1473,7 @@
\subsubsection{Creating a binary file containing the pathway of each section}
In \texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT/run},
the file \textit{ {list\_sections.ascii\_global}} contains a list of all the sections that are to
be computed (this list of sections is based on MERSEA project metrics).
+In \texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT/run},
+the file \textit{ {list\_sections.ascii\_global}} contains a list of all the sections that are to be computed
+(this list of sections is based on MERSEA project metrics).
Another file is available for the GYRE configuration (\texttt{ {list\_sections.ascii\_GYRE}}).
@@ 1505,8 +1493,8 @@
 \texttt{ice} to compute surface and volume ice transports, \texttt{noice} if no. \\
\noindent The results of the computing of transports, and the directions of positive and
negative flow do not depend on the order of the 2 extremities in this file. \\

\noindent If nclass $\neq$ 0,the next lines contain the class type and the nclass bounds: \\
+ \noindent The results of the computing of transports, and the directions of positive and
+ negative flow do not depend on the order of the 2 extremities in this file. \\
+
+\noindent If nclass $\neq$ 0, the next lines contain the class type and the nclass bounds: \\
{\scriptsize \texttt{
long1 lat1 long2 lat2 nclass (ok/no)strpond (no)ice section\_name \\
@@ 1531,12 +1519,12 @@
 \texttt{zsigp} for potential density classes \\
The script \texttt{job.ksh} computes the pathway for each section and
creates a binary file \texttt{section\_ijglobal.diadct\_ORCA2\_LIM} which is read by NEMO. \\

It is possible to use this tools for new configuations: \texttt{job.ksh} has to be updated with
the coordinates file name and path. \\

Examples of two sections, the ACC\_Drake\_Passage with no classes, and the ATL\_Cuba\_Florida with
4 temperature clases (5 class bounds), are shown: \\
+ The script \texttt{job.ksh} computes the pathway for each section and creates a binary file
+ \texttt{section\_ijglobal.diadct\_ORCA2\_LIM} which is read by NEMO. \\
+
+ It is possible to use these tools for new configurations: \texttt{job.ksh} has to be updated with
+ the coordinates file name and path. \\
+
+ Examples of two sections, the ACC\_Drake\_Passage with no classes,
+ and the ATL\_Cuba\_Florida with 4 temperature classes (5 class bounds), are shown: \\
\noindent {\scriptsize \texttt{
-68. -54.5 -60. -64.7 00 okstrpond noice ACC\_Drake\_Passage \\
@@ 1559,7 +1547,7 @@
transport\_total}} \\
For sections with classes, the first \texttt{nclass1} lines correspond to the transport for
each class and the last line corresponds to the total transport summed over all classes.
For sections with no classes, class number \texttt{1} corresponds to \texttt{total class} and
+For sections with classes, the first \texttt{nclass-1} lines correspond to the transport for each class and
+the last line corresponds to the total transport summed over all classes.
+For sections with no classes, class number \texttt{1} corresponds to \texttt{total class} and
this class is called \texttt{N}, meaning \texttt{none}.
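Schematically, the transport through a section is a weighted sum of the velocity normal to the section over the section cells, split by sign into the two flow directions. A stand-alone sketch with hypothetical section and scale-factor arrays (not the \key{diadct} code):

```python
import numpy as np

rng = np.random.default_rng(1)

# Normal velocity (m/s) and cell face areas e1*e3 (m^2) along a hypothetical section.
v_normal = rng.normal(0.0, 0.1, size=(10, 31))  # 10 levels x 31 horizontal cells
face_area = np.full_like(v_normal, 5e7)

flux = v_normal * face_area            # m^3/s through each section cell
total = flux.sum() / 1e6               # in Sverdrups (1 Sv = 1e6 m^3/s)
positive = flux[flux > 0].sum() / 1e6  # "direction 1" part (>= 0)
negative = flux[flux < 0].sum() / 1e6  # "direction 2" part (<= 0)

print(f"total {total:+.2f} Sv = {positive:+.2f} Sv {negative:+.2f} Sv")
```

Class-wise transports follow the same pattern, with the sum restricted to the cells falling inside each depth, temperature, salinity or density class.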
@@ 1568,6 +1556,6 @@
 \texttt{transport\_direction2} is the negative part of the transport ($\leq$ 0). \\
\noindent The \texttt{section slope coefficient} gives information about the significance of
transports signs and direction: \\
+\noindent The \texttt{section slope coefficient} gives information about the significance of transport signs and
+directions: \\
\begin{table} \scriptsize
@@ 1591,23 +1579,22 @@
Changes in steric sea level are caused when changes in the density of the water column imply an
expansion or contraction of the column.
It is essentially produced through surface heating/cooling and to a lesser extent through
nonlinear effects of the equation of state (cabbeling, thermobaricity...).
+Changes in steric sea level are caused when changes in the density of the water column imply an expansion or
+contraction of the column.
+It is essentially produced through surface heating/cooling and to a lesser extent through nonlinear effects of
+the equation of state (cabbeling, thermobaricity...).
NonBoussinesq models contain all ocean effects within the ocean acting on the sea level.
In particular, they include the steric effect.
In contrast, Boussinesq models, such as \NEMO, conserve volume, rather than mass,
+In contrast, Boussinesq models, such as \NEMO, conserve volume, rather than mass,
and so do not properly represent expansion or contraction.
The steric effect is therefore not explicitely represented.
This approximation does not represent a serious error with respect to the flow field calculated by
the model \citep{Greatbatch_JGR94}, but extra attention is required when investigating sea level,
as steric changes are an important contribution to local changes in sea level on seasonal and
climatic time scales.
This is especially true for investigation into sea level rise due to global warming.

Fortunately, the steric contribution to the sea level consists of a spatially uniform component that
+This approximation does not represent a serious error with respect to the flow field calculated by the model
+\citep{Greatbatch_JGR94}, but extra attention is required when investigating sea level,
+as steric changes are an important contribution to local changes in sea level on seasonal and climatic time scales.
+This is especially true for investigation into sea level rise due to global warming.
+
+Fortunately, the steric contribution to the sea level consists of a spatially uniform component that
can be diagnosed by considering the mass budget of the world ocean \citep{Greatbatch_JGR94}.
In order to better understand how global mean sea level evolves and thus how the steric sea level can
be diagnosed, we compare, in the following, the nonBoussinesq and Boussinesq cases.
+In order to better understand how global mean sea level evolves and thus how the steric sea level can be diagnosed,
+we compare, in the following, the nonBoussinesq and Boussinesq cases.
Let us denote
@@ 1628,5 +1615,5 @@
\]
Temporal changes in total mass is obtained from the density conservation equation :
+Temporal changes in total mass are obtained from the density conservation equation:
\[ \frac{1}{e_3} \partial_t ( e_3\,\rho) + \nabla( \rho \, \textbf{U} )
@@ 1634,6 +1621,6 @@
\label{eq:Co_nBq} \]
where $\rho$ is the \textit{in situ} density, and \textit{emp} the surface mass
exchanges with the other media of the Earth system (atmosphere, seaice, land).
+where $\rho$ is the \textit{in situ} density, and \textit{emp} the surface mass exchanges with the other media of
+the Earth system (atmosphere, sea-ice, land).
Its global average leads to the total mass change
@@ 1642,5 +1629,5 @@
where $\overline{\textit{emp}} = \int_S \textit{emp}\,ds$ is the net mass flux through the ocean surface.
Bringing \autoref{eq:Mass_nBq} and the time derivative of \autoref{eq:MV_nBq} together leads to
+Bringing \autoref{eq:Mass_nBq} and the time derivative of \autoref{eq:MV_nBq} together leads to
the evolution equation of the mean sea level
@@ 1649,12 +1636,10 @@
\label{eq:ssh_nBq} \]
The first term in equation \autoref{eq:ssh_nBq} alters sea level by adding or
subtracting mass from the ocean.
The second term arises from temporal changes in the global mean density; $i.e.$ from steric effects.

In a Boussinesq fluid, $\rho$ is replaced by $\rho_o$ in all the equation except when
$\rho$ appears multiplied by the gravity ($i.e.$ in the hydrostatic balance of the primitive Equations).
In particular, the mass conservation equation, \autoref{eq:Co_nBq}, degenerates into
the incompressibility equation:
+The first term in equation \autoref{eq:ssh_nBq} alters sea level by adding or subtracting mass from the ocean.
+The second term arises from temporal changes in the global mean density; $i.e.$ from steric effects.
+
+In a Boussinesq fluid, $\rho$ is replaced by $\rho_o$ in all the equations except when $\rho$ appears multiplied by
+the gravity ($i.e.$ in the hydrostatic balance of the primitive equations).
+In particular, the mass conservation equation, \autoref{eq:Co_nBq}, degenerates into the incompressibility equation:
\[ \frac{1}{e_3} \partial_t ( e_3 ) + \nabla( \textbf{U} )
@@ 1667,17 +1652,16 @@
\label{eq:V_Bq} \]
Only the volume is conserved, not mass, or, more precisely, the mass which is conserved is
the Boussinesq mass, $\mathcal{M}_o = \rho_o \mathcal{V}$.
The total volume (or equivalently the global mean sea level) is altered only by net volume fluxes across
the ocean surface, not by changes in mean mass of the ocean: the steric effect is missing in
a Boussinesq fluid.

Nevertheless, following \citep{Greatbatch_JGR94}, the steric effect on the volume can be diagnosed by
+Only the volume is conserved, not mass, or, more precisely, the mass which is conserved is the Boussinesq mass,
+$\mathcal{M}_o = \rho_o \mathcal{V}$.
+The total volume (or equivalently the global mean sea level) is altered only by net volume fluxes across
+the ocean surface, not by changes in mean mass of the ocean: the steric effect is missing in a Boussinesq fluid.
+
+Nevertheless, following \citet{Greatbatch_JGR94}, the steric effect on the volume can be diagnosed by
considering the mass budget of the ocean.
The apparent changes in $\mathcal{M}$, mass of the ocean, which are not induced by
surface mass flux must be compensated by a spatially uniform change in the mean sea level due to
expansion/contraction of the ocean \citep{Greatbatch_JGR94}.
In others words, the Boussinesq mass, $\mathcal{M}_o$, can be related to $\mathcal{M}$,
the total mass of the ocean seen by the Boussinesq model, via the steric contribution to the sea level,
+The apparent changes in $\mathcal{M}$, mass of the ocean, which are not induced by surface mass flux
+must be compensated by a spatially uniform change in the mean sea level due to expansion/contraction of the ocean
+\citep{Greatbatch_JGR94}.
+In other words, the Boussinesq mass, $\mathcal{M}_o$, can be related to $\mathcal{M}$,
+the total mass of the ocean seen by the Boussinesq model, via the steric contribution to the sea level,
$\eta_s$, a spatially uniform variable, as follows:
@@ 1685,9 +1669,9 @@
\label{eq:M_Bq} \]
Any change in $\mathcal{M}$ which cannot be explained by the net mass flux through
the ocean surface is converted into a mean change in sea level.
Introducing the total density anomaly, $\mathcal{D}= \int_D d_a \,dv$, where
$d_a = (\rho \rho_o ) / \rho_o$ is the density anomaly used in \NEMO (cf. \autoref{subsec:TRA_eos}) in
\autoref{eq:M_Bq} leads to a very simple form for the steric height:
+Any change in $\mathcal{M}$ which cannot be explained by the net mass flux through the ocean surface
+is converted into a mean change in sea level.
+Introducing the total density anomaly, $\mathcal{D}= \int_D d_a \,dv$,
+where $d_a = (\rho - \rho_o ) / \rho_o$ is the density anomaly used in \NEMO (cf. \autoref{subsec:TRA_eos})
+in \autoref{eq:M_Bq} leads to a very simple form for the steric height:
\[ \eta_s = - \frac{1}{\mathcal{A}} \mathcal{D}
@@ 1699,18 +1683,17 @@
We do not recommend that.
Indeed, in this case $\rho_o$ depends on the initial state of the ocean.
Since $\rho_o$ has a direct effect on the dynamics of the ocean (it appears in
the pressure gradient term of the momentum equation) it is definitively not a good idea when
intercomparing experiments.
+Since $\rho_o$ has a direct effect on the dynamics of the ocean
+(it appears in the pressure gradient term of the momentum equation)
+it is definitely not a good idea when intercomparing experiments.
We rather recommend fixing $\rho_o$ once and for all to $1035\;Kg\,m^{-3}$.
This value is a sensible choice for the reference density used in a Boussinesq ocean climate model since,
with the exception of only a small percentage of the ocean, density in the World Ocean varies by
no more than 2$\%$ from this value (\cite{Gill1982}, page 47).

Second, we have assumed here that the total ocean surface, $\mathcal{A}$, does not change when
the sea level is changing as it is the case in all global ocean GCMs
+This value is a sensible choice for the reference density used in a Boussinesq ocean climate model since,
+with the exception of only a small percentage of the ocean, density in the World Ocean varies by no more than
+2$\%$ from this value (\cite{Gill1982}, page 47).
+
+Second, we have assumed here that the total ocean surface, $\mathcal{A}$,
+does not change when the sea level is changing, as is the case in all global ocean GCMs
(wetting and drying of grid points is not allowed).
Third, the discretisation of \autoref{eq:steric_Bq} depends on the type of
free surface which is considered.
+Third, the discretisation of \autoref{eq:steric_Bq} depends on the type of free surface which is considered.
In the non-linear free surface case, $i.e.$ \key{vvl} defined, it is given by
@@ 1719,6 +1702,7 @@
\label{eq:discrete_steric_Bq_nfs} \]
whereas in the linear free surface, the volume above the \textit{z=0} surface must be explicitly
taken into account to better approximate the total ocean mass and thus the steric sea level:
+whereas in the linear free surface,
+the volume above the \textit{z=0} surface must be explicitly taken into account to
+better approximate the total ocean mass and thus the steric sea level:
\[ \eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t}e_{2t}e_{3t} + \sum_{i,\,j} d_a\; e_{1t}e_{2t} \eta }
@@ 1729,12 +1713,11 @@
In the real ocean, sea ice (and snow above it) depresses the liquid seawater through its mass loading.
This depression is a result of the mass of sea ice/snow system acting on the liquid ocean.
There is, however, no dynamical effect associated with these depressions in the liquid ocean sea level,
+There is, however, no dynamical effect associated with these depressions in the liquid ocean sea level,
so that there are no associated ocean currents.
Hence, the dynamically relevant sea level is the effective sea level, $i.e.$ the sea level as if
sea ice (and snow) were converted to liquid seawater \citep{Campin_al_OM08}.
However, in the current version of \NEMO the seaice is levitating above the ocean without
mass exchanges between ice and ocean.
Therefore the model effective sea level is always given by $\eta + \eta_s$,
whether or not there is sea ice present.
+Hence, the dynamically relevant sea level is the effective sea level,
+$i.e.$ the sea level as if sea ice (and snow) were converted to liquid seawater \citep{Campin_al_OM08}.
+However, in the current version of \NEMO the sea ice is levitating above the ocean without mass exchanges between
+ice and ocean.
+Therefore the model effective sea level is always given by $\eta + \eta_s$, whether or not there is sea ice present.
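Pulling the pieces above together, the steric correction $\eta_s$ is just a volume-weighted sum of the density anomaly divided by the ocean surface area. A minimal Python sketch of the non-linear free-surface form, with hypothetical density-anomaly and scale-factor arrays:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical density anomaly d_a = (rho - rho_o)/rho_o and T-point scale factors.
d_a = rng.normal(0.0, 1e-3, size=(4, 6, 8))  # dimensionless, (k, j, i)
e1t = np.full((6, 8), 1.0e5)                 # m
e2t = np.full((6, 8), 1.0e5)                 # m
e3t = np.full((4, 6, 8), 100.0)              # m

# eta_s = - sum_{i,j,k} d_a e1t e2t e3t / A, with A = sum_{i,j} e1t e2t.
area = (e1t * e2t).sum()
eta_s = -(d_a * e1t * e2t * e3t).sum() / area
print(f"steric sea level eta_s = {eta_s:+.4f} m")
```

In the linear free-surface case the extra surface term $\sum_{i,j} d_a\, e_{1t}e_{2t}\,\eta$ would be added to the numerator, as in the second discrete expression above.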
In AR5 outputs, the thermosteric sea level is requested.
@@ 1747,6 +1730,6 @@
where $S_o$ and $p_o$ are the initial salinity and pressure, respectively.
Both steric and thermosteric sea level are computed in \mdl{diaar5} which needs
the \key{diaar5} defined to be called.
+Both steric and thermosteric sea level are computed in \mdl{diaar5}, which requires \key{diaar5} to
+be defined.
% 
@@ 1761,5 +1744,5 @@
\subsection{Depth of various quantities (\protect\mdl{diahth})}
Among the available diagnostics the following ones are obtained when defining the \key{diahth} CPP key:
+Among the available diagnostics the following ones are obtained when defining the \key{diahth} CPP key:
 the mixed layer depth (based on a density criterion \citep{de_Boyer_Montegut_al_JGR04}) (\mdl{diahth})
@@ 1782,23 +1765,22 @@
%
The poleward heat and salt transports, their advective and diffusive component, and
the meriodional stream function can be computed online in \mdl{diaptr} \np{ln\_diaptr} to true
+The poleward heat and salt transports, their advective and diffusive components,
+and the meridional stream function can be computed online in \mdl{diaptr} by setting \np{ln\_diaptr} to true
(see the \textit{\ngn{namptr} } namelist below).
When \np{ln\_subbas}\forcode{ = .true.}, transports and stream function are computed for the Atlantic,
Indian, Pacific and IndoPacific Oceans (defined north of 30\deg S) as well as for the World Ocean.
The subbasin decomposition requires an input file (\ifile{subbasins}) which contains
three 2D mask arrays, the IndoPacific mask been deduced from the sum of the Indian and
Pacific mask (\autoref{fig:mask_subasins}).
+When \np{ln\_subbas}\forcode{ = .true.}, transports and stream function are computed for the Atlantic, Indian,
+Pacific and Indo-Pacific Oceans (defined north of 30\deg S) as well as for the World Ocean.
+The sub-basin decomposition requires an input file (\ifile{subbasins}) which contains three 2D mask arrays,
+the Indo-Pacific mask being deduced from the sum of the Indian and Pacific masks (\autoref{fig:mask_subasins}).
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
 \includegraphics[width=1.0\textwidth]{Fig_mask_subasins}
 \caption{ \protect\label{fig:mask_subasins}
 Decomposition of the World Ocean (here ORCA2) into subbasin used in to compute the heat and
 salt transports as well as the meridional streamfunction: Atlantic basin (red),
 Pacific basin (green), Indian basin (bleue), IndoPacific basin (bleue+green).
 Note that semienclosed seas (Red, Med and Baltic seas) as well as Hudson Bay are removed from
 the subbasins. Note also that the Arctic Ocean has been split into Atlantic and
 Pacific basins along the North fold line.}
+\begin{figure}[!t]
+ \begin{center}
+ \includegraphics[width=1.0\textwidth]{Fig_mask_subasins}
+ \caption{ \protect\label{fig:mask_subasins}
+ Decomposition of the World Ocean (here ORCA2) into the sub-basins used to
+ compute the heat and salt transports as well as the meridional streamfunction:
+ Atlantic basin (red), Pacific basin (green), Indian basin (blue), Indo-Pacific basin (blue+green).
+ Note that semi-enclosed seas (Red, Med and Baltic seas) as well as Hudson Bay are removed from the sub-basins.
+ Note also that the Arctic Ocean has been split into Atlantic and Pacific basins along the North fold line.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 1827,5 +1809,5 @@
The 25 hour mean is available for daily runs by summing up the 25 instantaneous hourly values from
midnight at the start of the day to midnight at the day end.
This diagnostic is actived with the logical $ln\_dia25h$
+This diagnostic is activated with the logical \np{ln\_dia25h}.
% 
@@ 1839,11 +1821,9 @@
%
A module is available to output the surface (top), mid water and bed diagnostics of a set of
standard variables.
This can be a useful diagnostic when hourly or subhourly output is required in
high resolution tidal outputs.
+A module is available to output the surface (top), mid water and bed diagnostics of a set of standard variables.
+This can be a useful diagnostic when hourly or subhourly output is required in high resolution tidal outputs.
The tidal signal is retained but the overall data usage is cut to just three vertical levels.
Also the bottom level is calculated for each cell.
This diagnostic is actived with the logical $ln\_diatmb$
+This diagnostic is activated with the logical \np{ln\_diatmb}.
% 
@@ 1859,17 +1839,15 @@
in the zonal, meridional and vertical directions respectively.
The vertical component is included although it is not strictly valid as the vertical velocity is
calculated from the continuity equation rather than as a prognostic variable.
+The vertical component is included although it is not strictly valid as the vertical velocity is calculated from
+the continuity equation rather than as a prognostic variable.
Physically this represents the rate at which information is propagated across a grid cell.
Values greater than 1 indicate that information is propagated across more than one grid cell in
a single time step.

The variables can be activated by setting the \np{nn\_diacfl} namelist parameter to 1 in
the \ngn{namctl} namelist.
+Values greater than 1 indicate that information is propagated across more than one grid cell in a single time step.
+
+The variables can be activated by setting the \np{nn\_diacfl} namelist parameter to 1 in the \ngn{namctl} namelist.
The diagnostics will be written out to an ascii file named cfl\_diagnostics.ascii.
In this file the maximum value of $C_u$, $C_v$, and $C_w$ are printed at each timestep along with
the coordinates of where the maximum value occurs.
At the end of the model run the maximum value of $C_u$, $C_v$, and $C_w$ for the whole model run is
printed along with the coordinates of each.
+In this file the maximum values of $C_u$, $C_v$, and $C_w$ are printed at each timestep along with the coordinates of
+where the maximum value occurs.
+At the end of the model run the maximum value of $C_u$, $C_v$, and $C_w$ for the whole model run is printed along
+with the coordinates of each.
The maximum values from the run are also copied to the ocean.output file.
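The Courant numbers described above are straightforward to reproduce offline from velocity and grid-spacing fields; a sketch with hypothetical arrays (the zonal case, $C_u = |u|\,\Delta t / e_{1u}$):

```python
import numpy as np

rng = np.random.default_rng(3)

dt = 3600.0                                 # time step (s)
u = rng.normal(0.0, 0.5, size=(31, 20, 20)) # zonal velocity (m/s), (k, j, i)
e1u = np.full_like(u, 1.0e5)                # zonal scale factor (m)

# Courant number C_u: fraction of a grid cell crossed in one time step.
c_u = np.abs(u) * dt / e1u
k, j, i = np.unravel_index(np.argmax(c_u), c_u.shape)
print(f"max C_u = {c_u.max():.4f} at (k={k}, j={j}, i={i})")
```

Values of $C_u$ above 1 flag cells where information propagates across more than one grid cell per step, the situation the diagnostic is designed to catch.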
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DIU.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DIU.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DIU.tex (revision 10368)
@@ 18,18 +18,20 @@
The skin temperature can be split into three parts:
\begin{itemize}
\item A foundation SST which is free from diurnal warming.
\item A warm layer, typically ~3\,m thick, where heating from solar radiation can
cause a warm stably stratified layer during the daytime
\item A cool skin, a thin layer, approximately ~1\,mm thick, where long wave cooling
is dominant and cools the immediate ocean surface.
+\item
+ A foundation SST which is free from diurnal warming.
+\item
+ A warm layer, typically 3\,m thick,
+ where heating from solar radiation can cause a warm stably stratified layer during the daytime.
+\item
+ A cool skin, a thin layer, approximately 1\,mm thick,
+ where long wave cooling is dominant and cools the immediate ocean surface.
\end{itemize}
Models are provided for both the warm layer, \mdfl{diurnal_bulk}, and the cool skin,
\mdl{cool_skin}. Foundation SST is not considered as it can be obtained
either from the main NEMO model ($i.e.$ from the temperature of the top few model levels)
or from some other source.
It must be noted that both the cool skin and warm layer models produce estimates of
the change in temperature ($\Delta T_{\rm{cs}}$ and $\Delta T_{\rm{wl}}$)
and both must be added to a foundation SST to obtain the true skin temperature.
+Models are provided for both the warm layer, \mdfl{diurnal_bulk}, and the cool skin, \mdl{cool_skin}.
+Foundation SST is not considered as it can be obtained either from the main NEMO model
+($i.e.$ from the temperature of the top few model levels) or from some other source.
+It must be noted that both the cool skin and warm layer models produce estimates of the change in temperature
+($\Delta T_{\rm{cs}}$ and $\Delta T_{\rm{wl}}$) and
+both must be added to a foundation SST to obtain the true skin temperature.
Both the cool skin and warm layer models are controlled through the namelist \ngn{namdiu}:
@@ 38,18 +40,17 @@
This namelist contains only two variables:
\begin{description}
\item[\np{ln\_diurnal}] A logical switch for turning on/off both the cool skin and warm layer.
\item[\np{ln\_diurnal\_only}] A logical switch which if \forcode{.true.} will run the diurnal model
without the other dynamical parts of NEMO.
\np{ln\_diurnal\_only} must be \forcode{.false.} if \np{ln\_diurnal} is \forcode{.false.}.
+\item[\np{ln\_diurnal}]
+ A logical switch for turning on/off both the cool skin and warm layer.
+\item[\np{ln\_diurnal\_only}]
+ A logical switch which if \forcode{.true.} will run the diurnal model without the other dynamical parts of NEMO.
+ \np{ln\_diurnal\_only} must be \forcode{.false.} if \np{ln\_diurnal} is \forcode{.false.}.
\end{description}
Output for the diurnal model is through the variables `sst\_wl' (warm\_layer) and
`sst\_cs' (cool skin). These are 2D variables which will be included in the model
output if they are specified in the iodef.xml file.
+Output for the diurnal model is through the variables `sst\_wl' (warm layer) and `sst\_cs' (cool skin).
+These are 2D variables which will be included in the model output if they are specified in the iodef.xml file.
Initialisation is through the restart file. Specifically the code will expect
the presence of the 2D variable ``Dsst'' to initialise the warm layer.
The cool skin model, which is determined purely by the instantaneous fluxes,
has no initialisation variable.
+Initialisation is through the restart file.
+Specifically the code will expect the presence of the 2D variable ``Dsst'' to initialise the warm layer.
+The cool skin model, which is determined purely by the instantaneous fluxes, has no initialisation variable.
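As stated above, both models produce temperature differences rather than absolute temperatures, so reconstructing the true skin temperature is a simple sum. A trivial sketch with hypothetical values (not model defaults):

```python
# Reconstruct the skin temperature from the diurnal model outputs.
# All numbers below are hypothetical illustrations.
t_fnd = 18.50   # foundation SST (degC), free of diurnal warming
dt_wl = +0.85   # warm-layer warming, Delta T_wl (daytime solar heating)
dt_cs = -0.20   # cool-skin cooling, Delta T_cs (long-wave cooling)

t_skin = t_fnd + dt_wl + dt_cs
print(f"skin temperature: {t_skin:.2f} degC")  # 19.15 degC
```

Note that forgetting either correction (e.g. adding only `dt_wl` to a foundation SST taken from the top model level) biases the reconstructed skin temperature.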
%===============================================================
@@ 58,6 +59,6 @@
%===============================================================
The warm layer is calculated using the model of \citet{Takaya_al_JGR10} (TAKAYA10 model
hereafter). This is a simple flux based model that is defined by the equations
+The warm layer is calculated using the model of \citet{Takaya_al_JGR10} (TAKAYA10 model hereafter).
+This is a simple flux based model that is defined by the equations
\begin{eqnarray}
\frac{\partial{\Delta T_{\rm{wl}}}}{\partial{t}}&=&\frac{Q(\nu+1)}{D_T\rho_w c_p
@@ 66,38 +67,30 @@
L&=&\frac{\rho_w c_p u^{*^3}_{w}}{\kappa g \alpha_w Q }\mbox{,}\label{eq:ecmwf2}
\end{eqnarray}
where $\Delta T_{\rm{wl}}$ is the temperature difference between the top of the warm
layer and the depth $D_T=3$\,m at which there is assumed to be no diurnal signal. In
equation (\autoref{eq:ecmwf1}) $\alpha_w=2\times10^{4}$ is the thermal expansion
coefficient of water, $\kappa=0.4$ is von K\'{a}rm\'{a}n's constant, $c_p$ is the heat
capacity at constant pressure of sea water, $\rho_w$ is the
water density, and $L$ is the MoninObukhov length. The tunable
variable $\nu$ is a shape parameter that defines the expected
subskin temperature profile via $T(z)=T(0)\left(\frac{z}{D_T}\right)^\nu\Delta
T_{\rm{wl}}$,
where $T$ is the absolute temperature and $z\le D_T$ is the depth
below the top of the warm layer.
The influence of wind on TAKAYA10 comes through the magnitude of the friction velocity
of the water
$u^*_{w}$, which can be related to the 10\,m wind speed $u_{10}$ through the relationship
$u^*_{w} = u_{10}\sqrt{\frac{C_d\rho_a}{\rho_w}}$, where $C_d$ is
the drag coefficient, and $\rho_a$ is the density of air. The symbol $Q$ in equation
(\autoref{eq:ecmwf1}) is the instantaneous total thermal energy
flux into
+where $\Delta T_{\rm{wl}}$ is the temperature difference between the top of the warm layer and the depth $D_T=3$\,m at which there is assumed to be no diurnal signal.
+In equation (\autoref{eq:ecmwf1}) $\alpha_w=2\times10^{-4}$ is the thermal expansion coefficient of water,
+$\kappa=0.4$ is von K\'{a}rm\'{a}n's constant, $c_p$ is the heat capacity at constant pressure of sea water,
+$\rho_w$ is the water density, and $L$ is the Monin-Obukhov length.
+The tunable variable $\nu$ is a shape parameter that defines the expected subskin temperature profile via
+$T(z)=T(0)-\left(\frac{z}{D_T}\right)^\nu\Delta T_{\rm{wl}}$,
+where $T$ is the absolute temperature and $z\le D_T$ is the depth below the top of the warm layer.
+The influence of wind on TAKAYA10 comes through the magnitude of the friction velocity of the water $u^*_{w}$,
+which can be related to the 10\,m wind speed $u_{10}$ through
+the relationship $u^*_{w} = u_{10}\sqrt{\frac{C_d\rho_a}{\rho_w}}$, where $C_d$ is the drag coefficient,
+and $\rho_a$ is the density of air.
+The symbol $Q$ in equation (\autoref{eq:ecmwf1}) is the instantaneous total thermal energy flux into
the diurnal layer, $i.e.$
\begin{equation}
Q = Q_{\rm{sol}} + Q_{\rm{lw}} + Q_{\rm{h}}\mbox{,} \label{eq:e_flux_eqn}
\end{equation}
where $Q_{\rm{h}}$ is the sensible and latent heat flux, $Q_{\rm{lw}}$ is the long
wave flux, and $Q_{\rm{sol}}$ is the solar flux absorbed
within the diurnal warm layer. For $Q_{\rm{sol}}$ the 9 term
representation of \citet{Gentemann_al_JGR09} is used. In equation \autoref{eq:ecmwf1}
the function $f(L_a)=\max(1,L_a^{\frac{2}{3}})$, where $L_a=0.3$\footnote{This
is a global average value, more accurately $L_a$ could be computed as
$L_a=(u^*_{w}/u_s)^{\frac{1}{2}}$, where $u_s$ is the stokes drift, but this is not
currently done} is the turbulent Langmuir number and is a
parametrization of the effect of waves.
+where $Q_{\rm{h}}$ is the sensible and latent heat flux, $Q_{\rm{lw}}$ is the long wave flux,
+and $Q_{\rm{sol}}$ is the solar flux absorbed within the diurnal warm layer.
+For $Q_{\rm{sol}}$ the 9-term representation of \citet{Gentemann_al_JGR09} is used.
+In equation \autoref{eq:ecmwf1} the function $f(L_a)=\max(1,L_a^{-\frac{2}{3}})$,
+where $L_a=0.3$\footnote{
+  This is a global average value; more accurately, $L_a$ could be computed as $L_a=(u^*_{w}/u_s)^{\frac{1}{2}}$,
+  where $u_s$ is the Stokes drift, but this is not currently done.
+} is the turbulent Langmuir number and is a parametrization of the effect of waves.
The function $\Phi\!\left(\frac{D_T}{L}\right)$ is the similarity function that
parametrizes the stability of the water column and
is given by:
+parametrizes the stability of the water column and is given by:
\begin{equation}
\Phi(\zeta) = \left\{ \begin{array}{cc} 1 + \frac{5\zeta +
@@ 106,16 +99,13 @@
\end{array} \right. \label{eq:stab_func_eqn}
\end{equation}
where $\zeta=\frac{D_T}{L}$. It is clear that the first derivative of
(\autoref{eq:stab_func_eqn}), and thus of (\autoref{eq:ecmwf1}),
is discontinuous at $\zeta=0$ ($i.e.$ $Q\rightarrow0$ in equation (\autoref{eq:ecmwf2})).
+where $\zeta=\frac{D_T}{L}$. It is clear that the first derivative of (\autoref{eq:stab_func_eqn}),
+and thus of (\autoref{eq:ecmwf1}), is discontinuous at $\zeta=0$ ($i.e.$ $Q\rightarrow0$ in
+equation (\autoref{eq:ecmwf2})).
The two terms on the right hand side of (\autoref{eq:ecmwf1}) represent different processes.
The first term is simply the diabatic heating or cooling of the
diurnal warm
layer due to thermal energy
fluxes into and out of the layer. The second term
parametrizes turbulent fluxes of heat out of the diurnal warm layer due to wind
induced mixing. In practice the second term acts as a relaxation
on the temperature.
+The first term is simply the diabatic heating or cooling of the diurnal warm layer due to
+thermal energy fluxes into and out of the layer.
+The second term parametrizes turbulent fluxes of heat out of the diurnal warm layer due to wind-induced mixing.
+In practice the second term acts as a relaxation on the temperature.
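The two-term balance can be made concrete with a minimal sketch (not the NEMO code) of one explicit step of a TAKAYA10-style warm layer: a diabatic heating term plus a wind-mixing relaxation, as described above. The exact tendency prefactors, the value of the shape parameter $\nu$, and the function names here are assumptions for illustration only; the $\zeta \ge 0$ branch of the stability function follows the published TAKAYA10 form.

```python
import math

# Illustrative sketch of a TAKAYA10-style warm-layer tendency.
# All constants and the tendency prefactors are assumptions, not NEMO's.
RHO_W, RHO_A = 1025.0, 1.2    # densities of sea water and air (kg/m^3)
CP, KAPPA, G = 3990.0, 0.4, 9.81
ALPHA_W = 2e-4                # thermal expansion coefficient (1/K)
D_T, NU = 3.0, 0.3            # warm-layer depth (m); NU is a guessed shape parameter

def phi(zeta):
    """Stability function; the zeta >= 0 branch follows TAKAYA10,
    the unstable branch is the usual empirical form (assumed here)."""
    if zeta >= 0.0:
        return 1.0 + (5.0 * zeta + 4.0 * zeta ** 2) / (1.0 + 3.0 * zeta + 0.25 * zeta ** 2)
    return (1.0 - 16.0 * zeta) ** -0.5

def warm_layer_step(dT, Q, u10, Cd=1.3e-3, La=0.3, dt=100.0):
    """Advance the warm-layer temperature difference dT by one Euler step."""
    u_star_w = u10 * math.sqrt(Cd * RHO_A / RHO_W)   # friction velocity in water
    f_la = max(1.0, La ** (-2.0 / 3.0))              # Langmuir enhancement factor
    if Q != 0.0:                                     # Monin-Obukhov length (eq. ecmwf2)
        L = RHO_W * CP * u_star_w ** 3 / (KAPPA * G * ALPHA_W * Q)
    else:
        L = math.inf
    heating = Q * (NU + 1.0) / (D_T * RHO_W * CP * NU)   # diabatic term (assumed factors)
    mixing = (NU + 1.0) * KAPPA * u_star_w * f_la / (D_T * phi(D_T / L)) * dT
    return dT + dt * (heating - mixing)
```

With daytime heating ($Q>0$) the difference grows, while a negative flux or stronger wind relaxes it back, reproducing the qualitative behaviour described in the text.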
%===============================================================
@@ 126,9 +116,6 @@
%===============================================================
The cool skin is modelled using the framework of \citet{Saunders_JAS82} who used a
formulation of the near surface temperature difference based upon the heat flux and
the friction velocity $u^*_{w}$. As the cool skin
is so thin (~1\,mm) we ignore the solar flux component to the heat flux and the
Saunders equation for the cool skin temperature difference $\Delta T_{\rm{cs}}$ becomes
+The cool skin is modelled using the framework of \citet{Saunders_JAS82} who used a formulation of
+the near-surface temperature difference based upon the heat flux and the friction velocity $u^*_{w}$.
+As the cool skin is so thin ($\sim$1\,mm) we ignore the solar flux contribution to the heat flux and
+the Saunders equation for the cool skin temperature difference $\Delta T_{\rm{cs}}$ becomes
\begin{equation}
\label{eq:sunders_eqn}
@@ 136,17 +123,17 @@
\end{equation}
where $Q_{\rm{ns}}$ is the, usually negative, nonsolar heat flux into the ocean and
$k_t$ is the thermal conductivity of sea water. $\delta$ is the thickness of the
skin layer and is given by
+$k_t$ is the thermal conductivity of sea water.
+$\delta$ is the thickness of the skin layer and is given by
\begin{equation}
\label{eq:sunders_thick_eqn}
\delta=\frac{\lambda \mu}{u^*_{w}} \mbox{,}
\end{equation}
where $\mu$ is the kinematic viscosity of sea water and $\lambda$ is a constant of
proportionality which \citet{Saunders_JAS82} suggested varied between 5 and 10.
+where $\mu$ is the kinematic viscosity of sea water and $\lambda$ is a constant of proportionality which
+\citet{Saunders_JAS82} suggested varied between 5 and 10.
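A hedged sketch of the Saunders cool-skin relations as described here: the skin thickness $\delta=\lambda\mu/u^*_{w}$ and a temperature difference proportional to the non-solar flux. A constant $\lambda$ inside Saunders' suggested 5-10 range is used instead of the Artale et al. formulation, and the form of the temperature equation is an assumption since only its ingredients appear above.

```python
import math

# Hedged sketch of the Saunders cool-skin relations; not the NEMO code.
RHO_W, RHO_A = 1025.0, 1.2   # densities of sea water and air (kg/m^3)
K_T = 0.6                    # thermal conductivity of sea water (W/m/K)
MU = 1.0e-6                  # kinematic viscosity of sea water (m^2/s)

def cool_skin(Q_ns, u10, lam=6.0, Cd=1.3e-3):
    """Return (skin thickness in m, cool-skin temperature difference in K)."""
    u_star_w = u10 * math.sqrt(Cd * RHO_A / RHO_W)  # friction velocity in water
    delta = lam * MU / u_star_w                     # skin thickness (sunders_thick_eqn)
    dT_cs = Q_ns * delta / K_T                      # assumed form of the Saunders equation
    return delta, dT_cs
```

For a 5\,m/s wind the thickness comes out near 1\,mm, consistent with the thin-skin assumption made in the text.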
The value of $\lambda$ used in equation (\autoref{eq:sunders_thick_eqn}) is that of
\citet{Artale_al_JGR02},
which is shown in \citet{Tu_Tsuang_GRL05} to outperform a number of other
parametrisations at both low and high wind speeds. Specifically,
+The value of $\lambda$ used in equation (\autoref{eq:sunders_thick_eqn}) is that of \citet{Artale_al_JGR02},
+which is shown in \citet{Tu_Tsuang_GRL05} to outperform a number of other parametrisations at
+both low and high wind speeds.
+Specifically,
\begin{equation}
\label{eq:artale_lambda_eqn}
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DOM.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DOM.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DOM.tex (revision 10368)
@@ 20,9 +20,8 @@
$\ $\newline % force a new line
Having defined the continuous equations in \autoref{chap:PE} and chosen a time
discretization \autoref{chap:STP}, we need to choose a discretization on a grid,
and numerical algorithms. In the present chapter, we provide a general description
of the staggered grid used in \NEMO, and other information relevant to the main
directory routines as well as the DOM (DOMain) directory.
+Having defined the continuous equations in \autoref{chap:PE} and chosen a time discretization (\autoref{chap:STP}),
+we need to choose a discretization on a grid, and numerical algorithms.
+In the present chapter, we provide a general description of the staggered grid used in \NEMO,
+and other information relevant to the main directory routines as well as the DOM (DOMain) directory.
$\ $\newline % force a new line
@@ 43,42 +42,42 @@
\begin{figure}[!tb] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_cell}
\caption{ \protect\label{fig:cell}
Arrangement of variables. $t$ indicates scalar points where temperature,
salinity, density, pressure and horizontal divergence are defined. ($u$,$v$,$w$)
indicates vector points, and $f$ indicates vorticity points where both relative and
planetary vorticities are defined}
+\caption{ \protect\label{fig:cell}
+ Arrangement of variables.
+ $t$ indicates scalar points where temperature, salinity, density, pressure and horizontal divergence are defined.
+ ($u$,$v$,$w$) indicates vector points,
+ and $f$ indicates vorticity points where both relative and planetary vorticities are defined}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The numerical techniques used to solve the Primitive Equations in this model are
based on the traditional, centred secondorder finite difference approximation.
Special attention has been given to the homogeneity of the solution in the three
space directions. The arrangement of variables is the same in all directions.
It consists of cells centred on scalar points ($t$, $S$, $p$, $\rho$) with vector
points $(u, v, w)$ defined in the centre of each face of the cells (\autoref{fig:cell}).
This is the generalisation to three dimensions of the wellknown ``C'' grid in
Arakawa's classification \citep{Mesinger_Arakawa_Bk76}. The relative and
planetary vorticity, $\zeta$ and $f$, are defined in the centre of each vertical edge
and the barotropic stream function $\psi$ is defined at horizontal points overlying
the $\zeta$ and $f$points.

The ocean mesh ($i.e.$ the position of all the scalar and vector points) is defined
by the transformation that gives ($\lambda$ ,$\varphi$ ,$z$) as a function of $(i,j,k)$.
The gridpoints are located at integer or integer and a half value of $(i,j,k)$ as
indicated on \autoref{tab:cell}. In all the following, subscripts $u$, $v$, $w$,
$f$, $uw$, $vw$ or $fw$ indicate the position of the gridpoint where the scale
factors are defined. Each scale factor is defined as the local analytical value
provided by \autoref{eq:scale_factors}. As a result, the mesh on which partial
derivatives $\frac{\partial}{\partial \lambda}, \frac{\partial}{\partial \varphi}$, and
$\frac{\partial}{\partial z} $ are evaluated is a uniform mesh with a grid size of unity.
Discrete partial derivatives are formulated by the traditional, centred second order
finite difference approximation while the scale factors are chosen equal to their
local analytical value. An important point here is that the partial derivative of the
scale factors must be evaluated by centred finite difference approximation, not
from their analytical expression. This preserves the symmetry of the discrete set
of equations and therefore satisfies many of the continuous properties (see
\autoref{apdx:C}). A similar, related remark can be made about the domain
size: when needed, an area, volume, or the total ocean depth must be evaluated
as the sum of the relevant scale factors (see \autoref{eq:DOM_bar}) in the next section).
+The numerical techniques used to solve the Primitive Equations in this model are based on the traditional,
+centred second-order finite difference approximation.
+Special attention has been given to the homogeneity of the solution in the three space directions.
+The arrangement of variables is the same in all directions.
+It consists of cells centred on scalar points ($t$, $S$, $p$, $\rho$) with vector points $(u, v, w)$ defined in
+the centre of each face of the cells (\autoref{fig:cell}).
+This is the generalisation to three dimensions of the well-known ``C'' grid in Arakawa's classification
+\citep{Mesinger_Arakawa_Bk76}.
+The relative and planetary vorticity, $\zeta$ and $f$, are defined in the centre of each vertical edge and
+the barotropic stream function $\psi$ is defined at horizontal points overlying the $\zeta$ and $f$points.
+
+The ocean mesh ($i.e.$ the position of all the scalar and vector points) is defined by
+the transformation that gives ($\lambda$ ,$\varphi$ ,$z$) as a function of $(i,j,k)$.
+The gridpoints are located at integer or integer and a half value of $(i,j,k)$ as indicated on \autoref{tab:cell}.
+In all the following, subscripts $u$, $v$, $w$, $f$, $uw$, $vw$ or $fw$ indicate the position of
+the gridpoint where the scale factors are defined.
+Each scale factor is defined as the local analytical value provided by \autoref{eq:scale_factors}.
+As a result,
+the mesh on which partial derivatives $\frac{\partial}{\partial \lambda}, \frac{\partial}{\partial \varphi}$,
+and $\frac{\partial}{\partial z} $ are evaluated is a uniform mesh with a grid size of unity.
+Discrete partial derivatives are formulated by the traditional,
+centred second-order finite difference approximation while
+the scale factors are chosen equal to their local analytical value.
+An important point here is that the partial derivative of the scale factors must be evaluated by
+centred finite difference approximation, not from their analytical expression.
+This preserves the symmetry of the discrete set of equations and
+therefore satisfies many of the continuous properties (see \autoref{apdx:C}).
+A similar, related remark can be made about the domain size:
+when needed, an area, volume, or the total ocean depth must be evaluated as the sum of the relevant scale factors
+(see \autoref{eq:DOM_bar} in the next section).
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 96,8 +95,8 @@
\end{tabular}
\caption{ \protect\label{tab:cell}
Location of gridpoints as a function of integer or integer and a half value of the column,
line or level. This indexing is only used for the writing of the semidiscrete equation.
In the code, the indexing uses integer values only and has a reverse direction
in the vertical (see \autoref{subsec:DOM_Num_Index})}
+ Location of gridpoints as a function of integer or integer and a half value of the column, line or level.
+ This indexing is only used for the writing of the semidiscrete equation.
+ In the code, the indexing uses integer values only and has a reverse direction in the vertical
+ (see \autoref{subsec:DOM_Num_Index})}
\end{center}
\end{table}
@@ 110,6 +109,6 @@
\label{subsec:DOM_operators}
Given the values of a variable $q$ at adjacent points, the differencing and
averaging operators at the midpoint between them are:
+Given the values of a variable $q$ at adjacent points,
+the differencing and averaging operators at the midpoint between them are:
\begin{subequations} \label{eq:di_mi}
\begin{align}
@@ 119,9 +118,9 @@
\end{subequations}
Similar operators are defined with respect to $i+1/2$, $j$, $j+1/2$, $k$, and
$k+1/2$. Following \autoref{eq:PE_grad} and \autoref{eq:PE_lap}, the gradient of a
variable $q$ defined at a $t$point has its three components defined at $u$, $v$
and $w$points while its Laplacien is defined at $t$point. These operators have
the following discrete forms in the curvilinear $s$coordinate system:
+Similar operators are defined with respect to $i+1/2$, $j$, $j+1/2$, $k$, and $k+1/2$.
+Following \autoref{eq:PE_grad} and \autoref{eq:PE_lap}, the gradient of a variable $q$ defined at
+a $t$point has its three components defined at $u$, $v$ and $w$points while
+its Laplacian is defined at $t$-points.
+These operators have the following discrete forms in the curvilinear $s$coordinate system:
\begin{equation} \label{eq:DOM_grad}
\nabla q\equiv \frac{1}{e_{1u} } \delta _{i+1/2 } [q] \;\,\mathbf{i}
@@ 136,7 +135,7 @@
\end{multline}
Following \autoref{eq:PE_curl} and \autoref{eq:PE_div}, a vector ${\rm {\bf A}}=\left( a_1,a_2,a_3\right)$
defined at vector points $(u,v,w)$ has its three curl components defined at $vw$, $uw$,
and $f$points, and its divergence defined at $t$points:
+Following \autoref{eq:PE_curl} and \autoref{eq:PE_div}, a vector ${\rm {\bf A}}=\left( a_1,a_2,a_3\right)$
+defined at vector points $(u,v,w)$ has its three curl components defined at $vw$, $uw$, and $f$points,
+and its divergence defined at $t$points:
\begin{eqnarray} \label{eq:DOM_curl}
\nabla \times {\rm{\bf A}}\equiv &
@@ 151,14 +150,14 @@
\end{eqnarray}
The vertical average over the whole water column denoted by an overbar becomes
for a quantity $q$ which is a masked field (i.e. equal to zero inside solid area):
+The vertical average over the whole water column denoted by an overbar becomes for a quantity $q$ which
+is a masked field (i.e. equal to zero inside solid area):
\begin{equation} \label{eq:DOM_bar}
\bar q = \frac{1}{H} \int_{k^b}^{k^o} {q\;e_{3q} \,dk}
\equiv \frac{1}{H_q }\sum\limits_k {q\;e_{3q} }
\end{equation}
where $H_q$ is the ocean depth, which is the masked sum of the vertical scale
factors at $q$ points, $k^b$ and $k^o$ are the bottom and surface $k$indices,
and the symbol $k^o$ refers to a summation over all grid points of the same type
in the direction indicated by the subscript (here $k$).
+where $H_q$ is the ocean depth, which is the masked sum of the vertical scale factors at $q$ points,
+$k^b$ and $k^o$ are the bottom and surface $k$indices,
+and the symbol $k^o$ refers to a summation over all grid points of the same type in the direction indicated by
+the subscript (here $k$).
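The masked vertical average above amounts to a scale-factor-weighted sum; a minimal NumPy illustration (the array shapes and values are invented for the example, not NEMO's):

```python
import numpy as np

# Numerical illustration of the masked vertical average (eq. DOM_bar):
# sum of q * e3 over ocean levels, divided by the masked depth H.
def vertical_average(q, e3, mask):
    """q, e3, mask have shape (jpk,); mask is 1 in the ocean, 0 in solid areas."""
    H = np.sum(e3 * mask)                 # masked ocean depth
    return np.sum(q * e3 * mask) / H

q    = np.array([10.0, 8.0, 6.0, 0.0])   # field, zero inside the solid area
e3   = np.array([1.0, 2.0, 4.0, 8.0])    # vertical scale factors (m)
mask = np.array([1.0, 1.0, 1.0, 0.0])    # last level is inside the bathymetry

print(vertical_average(q, e3, mask))     # (10*1 + 8*2 + 6*4) / (1+2+4) = 50/7
```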
In continuous form, the following properties are satisfied:
@@ 170,14 +169,14 @@
\end{equation}
It is straightforward to demonstrate that these properties are verified locally in
discrete form as soon as the scalar $q$ is taken at $t$points and the vector
\textbf{A} has its components defined at vector points $(u,v,w)$.

Let $a$ and $b$ be two fields defined on the mesh, with value zero inside
continental area. Using integration by parts it can be shown that the differencing
operators ($\delta_i$, $\delta_j$ and $\delta_k$) are skewsymmetric linear operators,
and further that the averaging operators $\overline{\,\cdot\,}^{\,i}$,
$\overline{\,\cdot\,}^{\,k}$ and $\overline{\,\cdot\,}^{\,k}$) are symmetric linear
operators, $i.e.$
+It is straightforward to demonstrate that these properties are verified locally in discrete form as soon as
+the scalar $q$ is taken at $t$points and
+the vector \textbf{A} has its components defined at vector points $(u,v,w)$.
+
+Let $a$ and $b$ be two fields defined on the mesh, with value zero inside continental area.
+Using integration by parts it can be shown that
+the differencing operators ($\delta_i$, $\delta_j$ and $\delta_k$) are skewsymmetric linear operators,
+and further that the averaging operators $\overline{\,\cdot\,}^{\,i}$, $\overline{\,\cdot\,}^{\,j}$ and
+$\overline{\,\cdot\,}^{\,k}$ are symmetric linear operators,
+$i.e.$
\begin{align}
\label{eq:DOM_di_adj}
@@ 189,8 +188,7 @@
\end{align}
In other words, the adjoint of the differencing and averaging operators are
$\delta_i^*=\delta_{i+1/2}$ and
+In other words, the adjoints of the differencing and averaging operators are $\delta_i^*=\delta_{i+1/2}$ and
${(\overline{\,\cdot \,}^{\,i})}^*= \overline{\,\cdot\,}^{\,i+1/2}$, respectively.
These two properties will be used extensively in the \autoref{apdx:C} to
+These two properties will be used extensively in \autoref{apdx:C} to
demonstrate integral conservative properties of the discrete formulation chosen.
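The skew-symmetry of the differencing operator can be checked numerically with a few lines of NumPy (the array names and discrete boundary convention are ad hoc, not NEMO's):

```python
import numpy as np

# Check of the summation-by-parts (adjoint) property stated in the text:
# for fields that vanish on the boundary ("zero inside continental area"),
# sum_i a_i * delta_{i+1/2}[b]  ==  - sum_j delta_{j-1/2}[a] * b_j.
rng = np.random.default_rng(1)
n = 16
a = rng.standard_normal(n); a[0] = a[-1] = 0.0
b = rng.standard_normal(n); b[0] = b[-1] = 0.0

lhs = np.sum(a[:-1] * (b[1:] - b[:-1]))                       # a times delta_{i+1/2}[b]
rhs = -np.sum((a - np.concatenate(([0.0], a[:-1]))) * b)      # minus delta_{j-1/2}[a] times b
assert np.isclose(lhs, rhs)
print("skew-symmetry holds:", lhs, rhs)
```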
@@ 204,17 +202,16 @@
\begin{figure}[!tb] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_index_hor}
\caption{ \protect\label{fig:index_hor}
Horizontal integer indexing used in the \textsc{Fortran} code. The dashed area indicates
the cell in which variables contained in arrays have the same $i$ and $j$indices}
+\caption{ \protect\label{fig:index_hor}
+ Horizontal integer indexing used in the \textsc{Fortran} code.
+ The dashed area indicates the cell in which variables contained in arrays have the same $i$ and $j$indices}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The array representation used in the \textsc{Fortran} code requires an integer
indexing while the analytical definition of the mesh (see \autoref{subsec:DOM_cell}) is
associated with the use of integer values for $t$points and both integer and
integer and a half values for all the other points. Therefore a specific integer
indexing must be defined for points other than $t$points ($i.e.$ velocity and
vorticity gridpoints). Furthermore, the direction of the vertical indexing has
been changed so that the surface level is at $k=1$.
+The array representation used in the \textsc{Fortran} code requires an integer indexing while
+the analytical definition of the mesh (see \autoref{subsec:DOM_cell}) is associated with the use of
+integer values for $t$points and both integer and integer and a half values for all the other points.
+Therefore a specific integer indexing must be defined for points other than $t$points
+($i.e.$ velocity and vorticity gridpoints).
+Furthermore, the direction of the vertical indexing has been changed so that the surface level is at $k=1$.
% 
@@ 224,7 +221,8 @@
\label{subsec:DOM_Num_Index_hor}
The indexing in the horizontal plane has been chosen as shown in \autoref{fig:index_hor}.
For an increasing $i$ index ($j$ index), the $t$point and the eastward $u$point
(northward $v$point) have the same index (see the dashed area in \autoref{fig:index_hor}).
+The indexing in the horizontal plane has been chosen as shown in \autoref{fig:index_hor}.
+For an increasing $i$ index ($j$ index),
+the $t$point and the eastward $u$point (northward $v$point) have the same index
+(see the dashed area in \autoref{fig:index_hor}).
A $t$point and its nearest northeast $f$point have the same $i$and $j$indices.
@@ 235,27 +233,26 @@
\label{subsec:DOM_Num_Index_vertical}
In the vertical, the chosen indexing requires special attention since the
$k$axis is reorientated downward in the \textsc{Fortran} code compared
to the indexing used in the semidiscrete equations and given in \autoref{subsec:DOM_cell}.
The sea surface corresponds to the $w$level $k=1$ which is the same index
as $t$level just below (\autoref{fig:index_vert}). The last $w$level ($k=jpk$)
either corresponds to the ocean floor or is inside the bathymetry while the last
$t$level is always inside the bathymetry (\autoref{fig:index_vert}). Note that
for an increasing $k$ index, a $w$point and the $t$point just below have the
same $k$ index, in opposition to what is done in the horizontal plane where
it is the $t$point and the nearest velocity points in the direction of the horizontal
axis that have the same $i$ or $j$ index (compare the dashed area in
\autoref{fig:index_hor} and \autoref{fig:index_vert}). Since the scale factors are
chosen to be strictly positive, a \emph{minus sign} appears in the \textsc{Fortran}
code \emph{before all the vertical derivatives} of the discrete equations given in
this documentation.
+In the vertical, the chosen indexing requires special attention since
+the $k$axis is reorientated downward in the \textsc{Fortran} code compared to
+the indexing used in the semidiscrete equations and given in \autoref{subsec:DOM_cell}.
+The sea surface corresponds to the $w$level $k=1$ which is the same index as $t$level just below
+(\autoref{fig:index_vert}).
+The last $w$level ($k=jpk$) either corresponds to the ocean floor or is inside the bathymetry while
+the last $t$level is always inside the bathymetry (\autoref{fig:index_vert}).
+Note that for an increasing $k$ index, a $w$point and the $t$point just below have the same $k$ index,
+in opposition to what is done in the horizontal plane where
+it is the $t$point and the nearest velocity points in the direction of the horizontal axis that
+have the same $i$ or $j$ index
+(compare the dashed area in \autoref{fig:index_hor} and \autoref{fig:index_vert}).
+Since the scale factors are chosen to be strictly positive, a \emph{minus sign} appears in the \textsc{Fortran}
+code \emph{before all the vertical derivatives} of the discrete equations given in this documentation.
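The sign convention can be illustrated with a toy water column (a sketch; the arrays and names are illustrative, not NEMO's):

```python
import numpy as np

# With the k-axis reoriented downward (k=1 at the surface) and strictly
# positive scale factors, an upward vertical derivative dq/dz appears in
# code-style arrays with a minus sign before the vertical difference.
z = np.array([0.0, -1.0, -2.0, -3.0])   # physical depths (m); k increases downward
q = 2.0 * z                              # field with dq/dz = 2 everywhere
e3 = np.abs(np.diff(z))                  # strictly positive scale factors

dqdz_code = -(q[1:] - q[:-1]) / e3       # note the minus sign before the delta
assert np.allclose(dqdz_code, 2.0)
```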
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!pt] \begin{center}
\includegraphics[width=.90\textwidth]{Fig_index_vert}
\caption{ \protect\label{fig:index_vert}
Vertical integer indexing used in the \textsc{Fortran } code. Note that
the $k$axis is orientated downward. The dashed area indicates the cell in
which variables contained in arrays have the same $k$index.}
+\caption{ \protect\label{fig:index_vert}
+ Vertical integer indexing used in the \textsc{Fortran } code.
+ Note that the $k$axis is orientated downward.
+ The dashed area indicates the cell in which variables contained in arrays have the same $k$index.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 267,11 +264,12 @@
\label{subsec:DOM_size}
The total size of the computational domain is set by the parameters \np{jpiglo},
\np{jpjglo} and \np{jpkglo} in the $i$, $j$ and $k$ directions respectively.
+The total size of the computational domain is set by the parameters \np{jpiglo},
+\np{jpjglo} and \np{jpkglo} in the $i$, $j$ and $k$ directions respectively.
%%%
%%%
%%%
Parameters $jpi$ and $jpj$ refer to the size of each processor subdomain when the code is
run in parallel using domain decomposition (\key{mpp\_mpi} defined, see \autoref{sec:LBC_mpp}).
+Parameters $jpi$ and $jpj$ refer to the size of each processor subdomain when
+the code is run in parallel using domain decomposition (\key{mpp\_mpi} defined,
+see \autoref{sec:LBC_mpp}).
@@ 283,15 +281,14 @@
\section{Needed fields}
\label{sec:DOM_fields}
The ocean mesh ($i.e.$ the position of all the scalar and vector points) is defined
by the transformation that gives $(\lambda,\varphi,z)$ as a function of $(i,j,k)$.
The gridpoints are located at integer or integer and a half values of as indicated
in \autoref{tab:cell}. The associated scale factors are defined using the
analytical first derivative of the transformation \autoref{eq:scale_factors}.
+The ocean mesh ($i.e.$ the position of all the scalar and vector points) is defined by the transformation that gives $(\lambda,\varphi,z)$ as a function of $(i,j,k)$.
+The gridpoints are located at integer or integer and a half values of $(i,j,k)$ as indicated in \autoref{tab:cell}.
+The associated scale factors are defined using the analytical first derivative of the transformation
+\autoref{eq:scale_factors}.
Necessary fields for configuration definition are: \\
Geographic position :
longitude : glamt , glamu , glamv and glamf (at T, U, V and F point)

latitude : gphit , gphiu , gphiv and gphif (at T, U, V and F point)\\
+longitude: glamt, glamu, glamv and glamf (at T, U, V and F point)
+
+latitude: gphit, gphiu, gphiv and gphif (at T, U, V and F point)\\
Coriolis parameter (if domain not on the sphere):
@@ 301,16 +298,20 @@
e1t, e1u, e1v and e1f (on i direction),
 e2t, e2u, e2v and e2f (on j direction)

 and ie1e2u\_v, e1e2u , e1e2v
+ e2t, e2u, e2v and e2f (on j direction) and
+
+ ie1e2u\_v, e1e2u , e1e2v
 e1e2u and e1e2v are the u and v surfaces (if grid-size reduction is applied in some straits)\\
 ie1e2u\_v is a flag to set whether the u and v surfaces are read or computed.\\
These fields can be read in an domain input file which name is setted in \np{cn\_domcfg} parameter specified in \ngn{namcfg}.
+These fields can be read from a domain input file whose name is set in
+the \np{cn\_domcfg} parameter specified in \ngn{namcfg}.
\nlst{namcfg}
or they can be defined in an analytical way in MY\_SRC directory of the configuration.
For Reference Configurations of NEMO input domain files are supplied by NEMO System Team. For analytical definition of input fields two routines are supplied: \mdl{userdef\_hgr} and \mdl{userdef\_zgr}. They are an example of GYRE configuration parameters, and they are available in NEMO/OPA\_SRC/USR directory, they provide the horizontal and vertical mesh.
+For the Reference Configurations of NEMO, input domain files are supplied by the NEMO System Team.
+For an analytical definition of the input fields, two routines are supplied:
+\mdl{userdef\_hgr} and \mdl{userdef\_zgr}.
+They are an example for the GYRE configuration, are available in the NEMO/OPA\_SRC/USR directory,
+and provide the horizontal and vertical mesh.
% 
% Needed fields
@@ 332,22 +333,21 @@
\label{subsec:DOM_hgr_coord_e}
The ocean mesh ($i.e.$ the position of all the scalar and vector points) is defined
by the transformation that gives $(\lambda,\varphi,z)$ as a function of $(i,j,k)$.
The gridpoints are located at integer or integer and a half values of as indicated
in \autoref{tab:cell}. The associated scale factors are defined using the
analytical first derivative of the transformation \autoref{eq:scale_factors}. These
definitions are done in two modules, \mdl{domhgr} and \mdl{domzgr}, which
provide the horizontal and vertical meshes, respectively. This section deals with
the horizontal mesh parameters.

In a horizontal plane, the location of all the model grid points is defined from the
analytical expressions of the longitude $\lambda$ and latitude $\varphi$ as a
function of $(i,j)$. The horizontal scale factors are calculated using
\autoref{eq:scale_factors}. For example, when the longitude and latitude are
function of a single value ($i$ and $j$, respectively) (geographical configuration
of the mesh), the horizontal mesh definition reduces to define the wanted
$\lambda(i)$, $\varphi(j)$, and their derivatives $\lambda'(i)$ $\varphi'(j)$ in the
\mdl{domhgr} module. The model computes the gridpoint positions and scale
factors in the horizontal plane as follows:
+The ocean mesh ($i.e.$ the position of all the scalar and vector points) is defined by
+the transformation that gives $(\lambda,\varphi,z)$ as a function of $(i,j,k)$.
+The gridpoints are located at integer or integer and a half values of $(i,j,k)$ as indicated in \autoref{tab:cell}.
+The associated scale factors are defined using the analytical first derivative of the transformation
+\autoref{eq:scale_factors}.
+These definitions are done in two modules, \mdl{domhgr} and \mdl{domzgr},
+which provide the horizontal and vertical meshes, respectively.
+This section deals with the horizontal mesh parameters.
+
+In a horizontal plane, the location of all the model grid points is defined from
+the analytical expressions of the longitude $\lambda$ and latitude $\varphi$ as a function of $(i,j)$.
+The horizontal scale factors are calculated using \autoref{eq:scale_factors}.
+For example, when the longitude and latitude are functions of a single value
+($i$ and $j$, respectively), $i.e.$ for a geographical configuration of the mesh,
+the horizontal mesh definition reduces to defining the wanted $\lambda(i)$, $\varphi(j)$,
+and their derivatives $\lambda'(i)$ and $\varphi'(j)$ in the \mdl{domhgr} module.
+The model computes the gridpoint positions and scale factors in the horizontal plane as follows:
\begin{flalign*}
\lambda_t &\equiv \text{glamt}= \lambda(i) & \varphi_t &\equiv \text{gphit} = \varphi(j)\\
@@ 366,33 +366,32 @@
e_{2f} &\equiv \text{e2f} = r_a \varphi'(j+1/2)
\end{flalign*}
where the last letter of each computational name indicates the grid point
considered and $r_a$ is the earth radius (defined in \mdl{phycst} along with
all universal constants). Note that the horizontal position of and scale factors
at $w$points are exactly equal to those of $t$points, thus no specific arrays
are defined at $w$points.

Note that the definition of the scale factors ($i.e.$ as the analytical first derivative
of the transformation that gives $(\lambda,\varphi,z)$ as a function of $(i,j,k)$) is
specific to the \NEMO model \citep{Marti_al_JGR92}. As an example, $e_{1t}$ is defined
locally at a $t$point, whereas many other models on a C grid choose to define
such a scale factor as the distance between the $U$points on each side of the
$t$point. Relying on an analytical transformation has two advantages: firstly, there
is no ambiguity in the scale factors appearing in the discrete equations, since they
are first introduced in the continuous equations; secondly, analytical transformations
encourage good practice by the definition of smoothly varying grids (rather than
allowing the user to set arbitrary jumps in thickness between adjacent layers)
\citep{Treguier1996}. An example of the effect of such a choice is shown in
\autoref{fig:zgr_e3}.
+where the last letter of each computational name indicates the grid point considered and
+$r_a$ is the earth radius (defined in \mdl{phycst} along with all universal constants).
+Note that the horizontal position of, and scale factors at, $w$-points are exactly equal to those of $t$-points,
+thus no specific arrays are defined at $w$-points.
+
+Note that the definition of the scale factors
+($i.e.$ as the analytical first derivative of the transformation that
+gives $(\lambda,\varphi,z)$ as a function of $(i,j,k)$)
+is specific to the \NEMO model \citep{Marti_al_JGR92}.
+As an example, $e_{1t}$ is defined locally at a $t$-point,
+whereas many other models on a C grid choose to define such a scale factor as
+the distance between the $U$-points on each side of the $t$-point.
+Relying on an analytical transformation has two advantages:
+firstly, there is no ambiguity in the scale factors appearing in the discrete equations,
+since they are first introduced in the continuous equations;
+secondly, analytical transformations encourage good practice by the definition of smoothly varying grids
+(rather than allowing the user to set arbitrary jumps in thickness between adjacent layers) \citep{Treguier1996}.
+An example of the effect of such a choice is shown in \autoref{fig:zgr_e3}.
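The geographical case above can be illustrated with a short numerical sketch (illustrative only, not NEMO code; the function and variable names are hypothetical, and the earth radius is taken here as $r_a \simeq 6371229~m$):

```python
import math

def horizontal_mesh(jpi, jpj, lam0=0.0, phi0=-80.0, dlam=1.0, dphi=1.0,
                    ra=6371229.0):
    """Sketch of an analytical geographical mesh: lambda(i) and phi(j) are
    linear, so lambda'(i) = dlam and phi'(j) = dphi (degrees per grid point).
    Returns t-point positions (degrees) and scale factors e1t, e2t (metres)."""
    rad = math.pi / 180.0                        # degrees -> radians
    glamt = [lam0 + dlam * i for i in range(jpi)]
    gphit = [phi0 + dphi * j for j in range(jpj)]
    # e1t = ra * cos(phi) * lambda'(i) and e2t = ra * phi'(j),
    # with the derivatives converted to radians
    e1t = [[ra * math.cos(gphit[j] * rad) * dlam * rad for _ in range(jpi)]
           for j in range(jpj)]
    e2t = [[ra * dphi * rad for _ in range(jpi)] for j in range(jpj)]
    return glamt, gphit, e1t, e2t
```

With a one-degree spacing this gives $e_{1t} \simeq 111~km$ at the equator, decreasing as $\cos\varphi$ towards the poles.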
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_zgr_e3}
\caption{ \protect\label{fig:zgr_e3}
Comparison of (a) traditional definitions of gridpoint position and gridsize in the vertical,
and (b) analytically derived gridpoint position and scale factors.
For both grids here, the same $w$point depth has been chosen but in (a) the
$t$points are set half way between $w$points while in (b) they are defined from
an analytical function: $z(k)=5\,(k1/2)^3  45\,(k1/2)^2 + 140\,(k1/2)  150$.
Note the resulting difference between the value of the gridsize $\Delta_k$ and
those of the scale factor $e_k$. }
+\caption{ \protect\label{fig:zgr_e3}
+ Comparison of (a) traditional definitions of grid-point position and grid-size in the vertical,
+ and (b) analytically derived grid-point position and scale factors.
+ For both grids here,
+ the same $w$-point depth has been chosen but in (a) the $t$-points are set half way between $w$-points while
+ in (b) they are defined from an analytical function: $z(k) = -5\,(k-1/2)^3 - 45\,(k-1/2)^2 + 140\,(k-1/2) - 150$.
+ Note the resulting difference between the value of the grid-size $\Delta_k$ and those of the scale factor $e_k$. }
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -411,12 +410,12 @@
\label{subsec:DOM_hgr_files}
All the arrays relating to a particular ocean model configuration (gridpoint
position, scale factors, masks) can be saved in files if \np{nn\_msh} $\not= 0$
(namelist variable in \ngn{namdom}). This can be particularly useful for plots and offline
diagnostics. In some cases, the user may choose to make a local modification
of a scale factor in the code. This is the case in global configurations when
restricting the width of a specific strait (usually a onegridpoint strait that
happens to be too wide due to insufficient model resolution). An example
is Gibraltar Strait in the ORCA2 configuration. When such modifications are done,
+All the arrays relating to a particular ocean model configuration (grid-point position, scale factors, masks)
+can be saved in files if \np{nn\_msh} $\not= 0$ (namelist variable in \ngn{namdom}).
+This can be particularly useful for plots and offline diagnostics.
+In some cases, the user may choose to make a local modification of a scale factor in the code.
+This is the case in global configurations when restricting the width of a specific strait
+(usually a one-grid-point strait that happens to be too wide due to insufficient model resolution).
+An example is Gibraltar Strait in the ORCA2 configuration.
+When such modifications are done,
the output grid written when \np{nn\_msh} $\not= 0$ is no longer equal to the input grid.
@@ -437,9 +436,8 @@
Variables are defined through the \ngn{namzgr} and \ngn{namdom} namelists.
In the vertical, the model mesh is determined by four things:
(1) the bathymetry given in meters ;
(2) the number of levels of the model (\jp{jpk}) ;
(3) the analytical transformation $z(i,j,k)$ and the vertical scale factors
(derivatives of the transformation) ;
and (4) the masking system, $i.e.$ the number of wet model levels at each
+(1) the bathymetry given in meters;
+(2) the number of levels of the model (\jp{jpk});
+(3) the analytical transformation $z(i,j,k)$ and the vertical scale factors (derivatives of the transformation); and
+(4) the masking system, $i.e.$ the number of wet model levels at each
$(i,j)$ column of points.
@@ -447,55 +445,60 @@
\begin{figure}[!tb] \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_z_zps_s_sps}
\caption{ \protect\label{fig:z_zps_s_sps}
The ocean bottom as seen by the model:
(a) $z$coordinate with full step,
(b) $z$coordinate with partial step,
(c) $s$coordinate: terrain following representation,
(d) hybrid $sz$ coordinate,
(e) hybrid $sz$ coordinate with partial step, and
(f) same as (e) but in the nonlinear free surface (\protect\np{ln\_linssh}\forcode{ = .false.}).
Note that the nonlinear free surface can be used with any of the
5 coordinates (a) to (e).}
+\caption{ \protect\label{fig:z_zps_s_sps}
+ The ocean bottom as seen by the model:
+ (a) $z$-coordinate with full step,
+ (b) $z$-coordinate with partial step,
+ (c) $s$-coordinate: terrain-following representation,
+ (d) hybrid $s-z$ coordinate,
+ (e) hybrid $s-z$ coordinate with partial step, and
+ (f) same as (e) but in the non-linear free surface (\protect\np{ln\_linssh}\forcode{ = .false.}).
+ Note that the non-linear free surface can be used with any of the 5 coordinates (a) to (e).}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The choice of a vertical coordinate, even if it is made through \ngn{namzgr} namelist parameters,
must be done once of all at the beginning of an experiment. It is not intended as an
option which can be enabled or disabled in the middle of an experiment. Three main
choices are offered (\autoref{fig:z_zps_s_sps}a to c): $z$coordinate with full step
bathymetry (\np{ln\_zco}\forcode{ = .true.}), $z$coordinate with partial step bathymetry
(\np{ln\_zps}\forcode{ = .true.}), or generalized, $s$coordinate (\np{ln\_sco}\forcode{ = .true.}).
Hybridation of the three main coordinates are available: $sz$ or $szps$ coordinate
(\autoref{fig:z_zps_s_sps}d and \autoref{fig:z_zps_s_sps}e). By default a nonlinear free surface is used:
the coordinate follow the timevariation of the free surface so that the transformation is time dependent:
$z(i,j,k,t)$ (\autoref{fig:z_zps_s_sps}f). When a linear free surface is assumed (\np{ln\_linssh}\forcode{ = .true.}),
the vertical coordinate are fixed in time, but the seawater can move up and down across the z=0 surface
+must be made once and for all at the beginning of an experiment.
+It is not intended as an option which can be enabled or disabled in the middle of an experiment.
+Three main choices are offered (\autoref{fig:z_zps_s_sps}a to c):
+$z$-coordinate with full step bathymetry (\np{ln\_zco}\forcode{ = .true.}),
+$z$-coordinate with partial step bathymetry (\np{ln\_zps}\forcode{ = .true.}),
+or generalized $s$-coordinate (\np{ln\_sco}\forcode{ = .true.}).
+Hybrid combinations of the three main coordinates are available:
+$s-z$ or $s-zps$ coordinate (\autoref{fig:z_zps_s_sps}d and \autoref{fig:z_zps_s_sps}e).
+By default a non-linear free surface is used: the coordinates follow the time variation of the free surface so that
+the transformation is time dependent: $z(i,j,k,t)$ (\autoref{fig:z_zps_s_sps}f).
+When a linear free surface is assumed (\np{ln\_linssh}\forcode{ = .true.}),
+the vertical coordinates are fixed in time, but the seawater can move up and down across the $z=0$ surface
(in other words, the top of the ocean is not a rigid lid).
The last choice in terms of vertical coordinate concerns the presence (or not) in the model domain
of ocean cavities beneath ice shelves. Setting \np{ln\_isfcav} to true allows to manage ocean cavities,
otherwise they are filled in. This option is currently only available in $z$ or $zps$coordinate,
and partial step are also applied at the ocean/ice shelf interface.

Contrary to the horizontal grid, the vertical grid is computed in the code and no
provision is made for reading it from a file. The only input file is the bathymetry
(in meters) (\ifile{bathy\_meter}).
\footnote{N.B. in full step $z$coordinate, a \ifile{bathy\_level} file can replace the
\ifile{bathy\_meter} file, so that the computation of the number of wet ocean point
in each water column is bypassed}.
If \np{ln\_isfcav}\forcode{ = .true.}, an extra file input file describing the ice shelf draft
(in meters) (\ifile{isf\_draft\_meter}) is needed.

After reading the bathymetry, the algorithm for vertical grid definition differs
between the different options:
+The last choice in terms of vertical coordinate concerns the presence (or not) in
+the model domain of ocean cavities beneath ice shelves.
+Setting \np{ln\_isfcav} to true allows ocean cavities to be represented, otherwise they are filled in.
+This option is currently only available in $z$- or $zps$-coordinate,
+and partial steps are also applied at the ocean/ice shelf interface.
+
+Contrary to the horizontal grid, the vertical grid is computed in the code and
+no provision is made for reading it from a file.
+The only input file is the bathymetry (in meters) (\ifile{bathy\_meter})
+\footnote{
+ N.B. in full step $z$-coordinate, a \ifile{bathy\_level} file can replace the \ifile{bathy\_meter} file,
+ so that the computation of the number of wet ocean points in each water column is bypassed}.
+If \np{ln\_isfcav}\forcode{ = .true.},
+an extra input file describing the ice shelf draft (in meters) (\ifile{isf\_draft\_meter}) is needed.
+
+After reading the bathymetry, the algorithm for vertical grid definition differs between the different options:
\begin{description}
\item[\textit{zco}] set a reference coordinate transformation $z_0 (k)$, and set $z(i,j,k,t)=z_0 (k)$.
\item[\textit{zps}] set a reference coordinate transformation $z_0 (k)$, and
calculate the thickness of the deepest level at each $(i,j)$ point using the
bathymetry, to obtain the final threedimensional depth and scale factor arrays.
\item[\textit{sco}] smooth the bathymetry to fulfil the hydrostatic consistency
criteria and set the threedimensional transformation.
\item[\textit{sz} and \textit{szps}] smooth the bathymetry to fulfil the hydrostatic
consistency criteria and set the threedimensional transformation $z(i,j,k)$, and
possibly introduce masking of extra land points to better fit the original bathymetry file
+\item[\textit{zco}]
+ set a reference coordinate transformation $z_0 (k)$, and set $z(i,j,k,t)=z_0 (k)$.
+\item[\textit{zps}]
+ set a reference coordinate transformation $z_0 (k)$,
+ and calculate the thickness of the deepest level at each $(i,j)$ point using the bathymetry,
+ to obtain the final three-dimensional depth and scale factor arrays.
+\item[\textit{sco}]
+ smooth the bathymetry to fulfil the hydrostatic consistency criteria and
+ set the three-dimensional transformation.
+\item[\textit{s-z} and \textit{s-zps}]
+ smooth the bathymetry to fulfil the hydrostatic consistency criteria and
+ set the three-dimensional transformation $z(i,j,k)$,
+ and possibly introduce masking of extra land points to better fit the original bathymetry file.
\end{description}
%%%
@@ -503,11 +506,12 @@
%%%
Unless a linear free surface is used (\np{ln\_linssh}\forcode{ = .false.}), the arrays describing
the grid point depths and vertical scale factors are three set of three dimensional arrays $(i,j,k)$
defined at \textit{before}, \textit{now} and \textit{after} time step. The time at which they are
defined is indicated by a suffix:$\_b$, $\_n$, or $\_a$, respectively. They are updated at each model time step
using a fixed reference coordinate system which computer names have a $\_0$ suffix.
When the linear free surface option is used (\np{ln\_linssh}\forcode{ = .true.}), \textit{before}, \textit{now}
and \textit{after} arrays are simply set one for all to their reference counterpart.
+Unless a linear free surface is used (\np{ln\_linssh}\forcode{ = .false.}),
+the arrays describing the grid point depths and vertical scale factors are three sets of
+three-dimensional arrays $(i,j,k)$ defined at \textit{before}, \textit{now} and \textit{after} time steps.
+The time at which they are defined is indicated by a suffix: $\_b$, $\_n$, or $\_a$, respectively.
+They are updated at each model time step using a fixed reference coordinate system whose
+computer names have a $\_0$ suffix.
+When the linear free surface option is used (\np{ln\_linssh}\forcode{ = .true.}),
+\textit{before}, \textit{now} and \textit{after} arrays are simply set once and for all to their reference counterparts.
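The suffix convention can be made concrete with a small sketch (hypothetical names, not NEMO code; it assumes the $z^\star$-type rescaling in which the now thicknesses are the reference thicknesses dilated by the instantaneous column thickness):

```python
def e3t_now(e3t_0, ssh, ht_0):
    """Sketch of a z*-style dilation used with the non-linear free surface:
    each reference thickness e3t_0(k) is rescaled by the ratio of the
    instantaneous column thickness (ht_0 + ssh) to the reference one ht_0.
    With a linear free surface the '_b', '_n' and '_a' arrays would all
    simply equal the reference '_0' array."""
    r = 1.0 + ssh / ht_0
    return [e3 * r for e3 in e3t_0]
```

The rescaled column sums to the instantaneous depth $H + \eta$, which is the property the time-varying arrays must preserve.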
@@ -518,32 +522,36 @@
\label{subsec:DOM_bathy}
Three options are possible for defining the bathymetry, according to the
namelist variable \np{nn\_bathy} (found in \ngn{namdom} namelist):
+Three options are possible for defining the bathymetry, according to the namelist variable \np{nn\_bathy}
+(found in \ngn{namdom} namelist):
\begin{description}
\item[\np{nn\_bathy}\forcode{ = 0}]: a flatbottom domain is defined. The total depth $z_w (jpk)$
is given by the coordinate transformation. The domain can either be a closed
basin or a periodic channel depending on the parameter \np{jperio}.
\item[\np{nn\_bathy}\forcode{ = 1}]: a domain with a bump of topography one third of the
domain width at the central latitude. This is meant for the "EELR5" configuration,
a periodic or open boundary channel with a seamount.
\item[\np{nn\_bathy}\forcode{ = 1}]: read a bathymetry and ice shelf draft (if needed).
 The \ifile{bathy\_meter} file (Netcdf format) provides the ocean depth (positive, in meters)
 at each grid point of the model grid. The bathymetry is usually built by interpolating a standard bathymetry product
($e.g.$ ETOPO2) onto the horizontal ocean mesh. Defining the bathymetry also
defines the coastline: where the bathymetry is zero, no model levels are defined
(all levels are masked).

The \ifile{isfdraft\_meter} file (Netcdf format) provides the ice shelf draft (positive, in meters)
 at each grid point of the model grid. This file is only needed if \np{ln\_isfcav}\forcode{ = .true.}.
Defining the ice shelf draft will also define the ice shelf edge and the grounding line position.
+\item[\np{nn\_bathy}\forcode{ = 0}]:
+ a flat-bottom domain is defined.
+ The total depth $z_w (jpk)$ is given by the coordinate transformation.
+ The domain can either be a closed basin or a periodic channel depending on the parameter \np{jperio}.
+\item[\np{nn\_bathy}\forcode{ = -1}]:
+ a domain with a bump of topography one third of the domain width at the central latitude.
+ This is meant for the "EEL-R5" configuration, a periodic or open boundary channel with a seamount.
+\item[\np{nn\_bathy}\forcode{ = 1}]:
+ read a bathymetry and ice shelf draft (if needed).
+ The \ifile{bathy\_meter} file (Netcdf format) provides the ocean depth (positive, in meters) at
+ each grid point of the model grid.
+ The bathymetry is usually built by interpolating a standard bathymetry product ($e.g.$ ETOPO2) onto
+ the horizontal ocean mesh.
+ Defining the bathymetry also defines the coastline: where the bathymetry is zero,
+ no model levels are defined (all levels are masked).
+
+ The \ifile{isfdraft\_meter} file (Netcdf format) provides the ice shelf draft (positive, in meters) at
+ each grid point of the model grid.
+ This file is only needed if \np{ln\_isfcav}\forcode{ = .true.}.
+ Defining the ice shelf draft will also define the ice shelf edge and the grounding line position.
\end{description}
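The link between bathymetry and coastline described above can be sketched for a single water column (illustrative only; the names are hypothetical, not NEMO arrays):

```python
def tmask_column(bathy, gdepw_1d):
    """Sketch of the masking rule: where the bathymetry is zero no wet level
    is defined, so the whole column is masked; otherwise levels whose upper
    w-depth lies above the sea floor are ocean (1) and the rest are land (0).
    gdepw_1d holds the reference w-level depths (positive, metres)."""
    return [1 if (bathy > 0.0 and gdepw_1d[k] < bathy) else 0
            for k in range(len(gdepw_1d))]
```

A zero bathymetry thus yields an all-masked column, which is exactly how the coastline emerges from the depth field.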
When a global ocean is coupled to an atmospheric model it is better to represent
all large water bodies (e.g, great lakes, Caspian sea...) even if the model
resolution does not allow their communication with the rest of the ocean.
This is unnecessary when the ocean is forced by fixed atmospheric conditions,
so these seas can be removed from the ocean domain. The user has the option
to set the bathymetry in closed seas to zero (see \autoref{sec:MISC_closea}), but the
code has to be adapted to the user's configuration.
+When a global ocean is coupled to an atmospheric model it is better to represent all large water bodies
+($e.g.$ great lakes, Caspian Sea...)
+even if the model resolution does not allow their communication with the rest of the ocean.
+This is unnecessary when the ocean is forced by fixed atmospheric conditions,
+so these seas can be removed from the ocean domain.
+The user has the option to set the bathymetry in closed seas to zero (see \autoref{sec:MISC_closea}),
+but the code has to be adapted to the user's configuration.
% 
@@ -557,29 +565,28 @@
\begin{figure}[!tb] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_zgr}
\caption{ \protect\label{fig:zgr}
Default vertical mesh for ORCA2: 30 ocean levels (L30). Vertical level functions for
(a) Tpoint depth and (b) the associated scale factor as computed
from \autoref{eq:DOM_zgr_ana_1} using \autoref{eq:DOM_zgr_coef} in $z$coordinate.}
+\caption{ \protect\label{fig:zgr}
+ Default vertical mesh for ORCA2: 30 ocean levels (L30).
+ Vertical level functions for (a) $T$-point depth and (b) the associated scale factor as computed from
+ \autoref{eq:DOM_zgr_ana_1} using \autoref{eq:DOM_zgr_coef} in $z$-coordinate.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The reference coordinate transformation $z_0 (k)$ defines the arrays $gdept_0$
and $gdepw_0$ for $t$ and $w$points, respectively. As indicated on
\autoref{fig:index_vert} \jp{jpk} is the number of $w$levels. $gdepw_0(1)$ is the
ocean surface. There are at most \jp{jpk}1 $t$points inside the ocean, the
additional $t$point at $jk=jpk$ is below the sea floor and is not used.
The vertical location of $w$ and $t$levels is defined from the analytic expression
of the depth $z_0(k)$ whose analytical derivative with respect to $k$ provides the
vertical scale factors. The user must provide the analytical expression of both
$z_0$ and its first derivative with respect to $k$. This is done in routine \mdl{domzgr}
through statement functions, using parameters provided in the \ngn{namcfg} namelist.

It is possible to define a simple regular vertical grid by giving zero stretching (\np{ppacr=0}).
In that case, the parameters \jp{jpk} (number of $w$levels) and \np{pphmax}
(total ocean depth in meters) fully define the grid.

For climaterelated studies it is often desirable to concentrate the vertical resolution
near the ocean surface. The following function is proposed as a standard for a
$z$coordinate (with either full or partial steps):
+The reference coordinate transformation $z_0 (k)$ defines the arrays $gdept_0$ and $gdepw_0$ for
+$t$- and $w$-points, respectively.
+As indicated on \autoref{fig:index_vert}, \jp{jpk} is the number of $w$-levels. $gdepw_0(1)$ is the ocean surface.
+There are at most \jp{jpk}-1 $t$-points inside the ocean;
+the additional $t$-point at $jk=jpk$ is below the sea floor and is not used.
+The vertical location of $w$- and $t$-levels is defined from the analytic expression of the depth $z_0(k)$ whose
+analytical derivative with respect to $k$ provides the vertical scale factors.
+The user must provide the analytical expression of both $z_0$ and its first derivative with respect to $k$.
+This is done in routine \mdl{domzgr} through statement functions,
+using parameters provided in the \ngn{namcfg} namelist.
+
+It is possible to define a simple regular vertical grid by giving zero stretching (\np{ppacr=0}).
+In that case,
+the parameters \jp{jpk} (number of $w$-levels) and \np{pphmax} (total ocean depth in meters) fully define the grid.
+
+For climate-related studies it is often desirable to concentrate the vertical resolution near the ocean surface.
+The following function is proposed as a standard for a $z$-coordinate (with either full or partial steps):
\begin{equation} \label{eq:DOM_zgr_ana_1}
\begin{split}
@@ -588,10 +595,9 @@
\end{split}
\end{equation}
where $k=1$ to \jp{jpk} for $w$levels and $k=1$ to $k=1$ for $T$levels. Such an
expression allows us to define a nearly uniform vertical location of levels at the
ocean top and bottom with a smooth hyperbolic tangent transition in between
(\autoref{fig:zgr}).

If the ice shelf cavities are opened (\np{ln\_isfcav}\forcode{ = .true.}), the definition of $z_0$ is the same.
+where $k = 1$ to \jp{jpk} for $w$-levels, the $t$-levels being given by the same expression evaluated at $k + 1/2$.
+Such an expression allows us to define a nearly uniform vertical location of levels at the ocean top and bottom with
+a smooth hyperbolic tangent transition in between (\autoref{fig:zgr}).
+
+If the ice shelf cavities are opened (\np{ln\_isfcav}\forcode{ = .true.}), the definition of $z_0$ is the same.
However, definition of $e_3^0$ at $t$- and $w$-points is respectively changed to:
\begin{equation} \label{eq:DOM_zgr_ana_2}
@@ -605,7 +611,7 @@
The most used vertical grid for ORCA2 has $10~m$ ($500~m)$ resolution in the
surface (bottom) layers and a depth which varies from 0 at the sea surface to a
minimum of $5000~m$. This leads to the following conditions:
+The most used vertical grid for ORCA2 has $10~m$ ($500~m$) resolution in the surface (bottom) layers and
+a depth which varies from 0 at the sea surface to a minimum of $-5000~m$.
+This leads to the following conditions:
\begin{equation} \label{eq:DOM_zgr_coef}
\begin{split}
@@ -617,28 +623,33 @@
\end{equation}
With the choice of the stretching $h_{cr} =3$ and the number of levels
\jp{jpk}=$31$, the four coefficients $h_{sur}$, $h_{0}$, $h_{1}$, and $h_{th}$ in
\autoref{eq:DOM_zgr_ana_2} have been determined such that \autoref{eq:DOM_zgr_coef} is
satisfied, through an optimisation procedure using a bisection method. For the first
standard ORCA2 vertical grid this led to the following values: $h_{sur} =4762.96$,
$h_0 =255.58, h_1 =245.5813$, and $h_{th} =21.43336$. The resulting depths and
scale factors as a function of the model levels are shown in \autoref{fig:zgr} and
given in \autoref{tab:orca_zgr}. Those values correspond to the parameters
\np{ppsur}, \np{ppa0}, \np{ppa1}, \np{ppkth} in \ngn{namcfg} namelist.

Rather than entering parameters $h_{sur}$, $h_{0}$, and $h_{1}$ directly, it is
possible to recalculate them. In that case the user sets
\np{ppsur}\forcode{ = }\np{ppa0}\forcode{ = }\np{ppa1}\forcode{ = 999999}., in \ngn{namcfg} namelist,
and specifies instead the four following parameters:
+With the choice of the stretching $h_{cr} = 3$ and the number of levels \jp{jpk}=$31$,
+the four coefficients $h_{sur}$, $h_{0}$, $h_{1}$, and $h_{th}$ in
+\autoref{eq:DOM_zgr_ana_1} have been determined such that
+\autoref{eq:DOM_zgr_coef} is satisfied, through an optimisation procedure using a bisection method.
+For the first standard ORCA2 vertical grid this led to the following values:
+$h_{sur} = -4762.96$, $h_0 = 255.58$, $h_1 = 245.5813$, and $h_{th} = 21.43336$.
+The resulting depths and scale factors as a function of the model levels are shown in
+\autoref{fig:zgr} and given in \autoref{tab:orca_zgr}.
+Those values correspond to the parameters \np{ppsur}, \np{ppa0}, \np{ppa1}, \np{ppkth} in \ngn{namcfg} namelist.
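These coefficients can be checked numerically; the following sketch evaluates the analytic grid function and its derivative with the values quoted above, written in the form used by the model code (depths in metres, positive downwards as in \autoref{tab:orca_zgr}; not NEMO code):

```python
import math

# Coefficients quoted above for the ORCA2 L31 grid (h_cr = 3)
ppsur, ppa0, ppa1, ppkth, ppacr = -4762.96, 255.58, 245.5813, 21.43336, 3.0

def gdepw_0(k):
    """Reference w-level depth z_0(k) in metres (positive downwards)."""
    return ppsur + ppa0 * k + ppa1 * ppacr * math.log(math.cosh((k - ppkth) / ppacr))

def e3w_0(k):
    """Vertical scale factor: the analytical derivative of z_0 with respect to k."""
    return ppa0 + ppa1 * math.tanh((k - ppkth) / ppacr)
```

This reproduces \autoref{eq:DOM_zgr_coef}: $gdepw_0(1) \simeq 0$, $e3w_0(1) \simeq 10~m$, $gdepw_0(31) \simeq 5000~m$ and $e3w_0(31) \simeq 500~m$.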
+
+Rather than entering parameters $h_{sur}$, $h_{0}$, and $h_{1}$ directly, it is possible to recalculate them.
+In that case the user sets \np{ppsur}\forcode{ = }\np{ppa0}\forcode{ = }\np{ppa1}\forcode{ = 999999.},
+in \ngn{namcfg} namelist, and specifies instead the four following parameters:
\begin{itemize}
\item \np{ppacr}=$h_{cr} $: stretching factor (nondimensional). The larger
\np{ppacr}, the smaller the stretching. Values from $3$ to $10$ are usual.
\item \np{ppkth}=$h_{th} $: is approximately the model level at which maximum
stretching occurs (nondimensional, usually of order 1/2 or 2/3 of \jp{jpk})
\item \np{ppdzmin}: minimum thickness for the top layer (in meters)
\item \np{pphmax}: total depth of the ocean (meters).
+\item
+ \np{ppacr}=$h_{cr}$: stretching factor (non-dimensional).
+ The larger \np{ppacr}, the smaller the stretching.
+ Values from $3$ to $10$ are usual.
+\item
+ \np{ppkth}=$h_{th}$: approximately the model level at which maximum stretching occurs
+ (non-dimensional, usually of order 1/2 or 2/3 of \jp{jpk}).
+\item
+ \np{ppdzmin}: minimum thickness for the top layer (in meters).
+\item
+ \np{pphmax}: total depth of the ocean (meters).
\end{itemize}
As an example, for the $45$ layers used in the DRAKKAR configuration those
parameters are: \jp{jpk}\forcode{ = 46}, \np{ppacr}\forcode{ = 9}, \np{ppkth}\forcode{ = 23.563}, \np{ppdzmin}\forcode{ = 6}m, \np{pphmax}\forcode{ = 5750}m.
+As an example, for the $45$ layers used in the DRAKKAR configuration those parameters are:
+\jp{jpk}\forcode{ = 46}, \np{ppacr}\forcode{ = 9}, \np{ppkth}\forcode{ = 23.563},
+\np{ppdzmin}\forcode{ = 6}m, \np{pphmax}\forcode{ = 5750}m.
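A sketch of that recomputation (hypothetical names, not the \mdl{domzgr} code itself; the three constraints imposed here are a first $w$-level at the surface, a top-layer thickness of \np{ppdzmin} and a bottom $w$-level at \np{pphmax}):

```python
import math

def zgr_coefficients(jpk, ppacr, ppkth, ppdzmin, pphmax):
    """Sketch of the recomputation done when ppsur = ppa0 = ppa1 = 999999.:
    solve z_0(1) = 0, e_3(1) = ppdzmin and z_0(jpk) = pphmax for the three
    coefficients h_sur, h_0, h_1 of the analytic grid function."""
    lc = lambda k: ppacr * math.log(math.cosh((k - ppkth) / ppacr))
    t1 = math.tanh((1.0 - ppkth) / ppacr)
    ppa1 = (ppdzmin - pphmax / (jpk - 1.0)) \
           / (t1 - (lc(jpk) - lc(1.0)) / (jpk - 1.0))
    ppa0 = ppdzmin - ppa1 * t1
    ppsur = -ppa0 - ppa1 * lc(1.0)
    return ppsur, ppa0, ppa1
```

For the DRAKKAR values quoted above the recovered coefficients satisfy the three constraints exactly.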
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -678,7 +689,7 @@
31 & \textbf{5250.23}& 5000.00 & \textbf{500.56} & 500.33 \\ \hline
\end{tabular} \end{center}
\caption{ \protect\label{tab:orca_zgr}
Default vertical mesh in $z$coordinate for 30 layers ORCA2 configuration as computed
from \autoref{eq:DOM_zgr_ana_2} using the coefficients given in \autoref{eq:DOM_zgr_coef}}
+\caption{ \protect\label{tab:orca_zgr}
+ Default vertical mesh in $z$-coordinate for 30 layers ORCA2 configuration as computed from
+ \autoref{eq:DOM_zgr_ana_1} using the coefficients given in \autoref{eq:DOM_zgr_coef}}
\end{table}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -694,24 +705,22 @@
%
In $z$coordinate partial step, the depths of the model levels are defined by the
reference analytical function $z_0 (k)$ as described in the previous
section, \emph{except} in the bottom layer. The thickness of the bottom layer is
allowed to vary as a function of geographical location $(\lambda,\varphi)$ to allow a
better representation of the bathymetry, especially in the case of small
slopes (where the bathymetry varies by less than one level thickness from
one grid point to the next). The reference layer thicknesses $e_{3t}^0$ have been
defined in the absence of bathymetry. With partial steps, layers from 1 to
\jp{jpk}2 can have a thickness smaller than $e_{3t}(jk)$. The model deepest layer (\jp{jpk}1)
is allowed to have either a smaller or larger thickness than $e_{3t}(jpk)$: the
maximum thickness allowed is $2*e_{3t}(jpk1)$. This has to be kept in mind when
specifying values in \ngn{namdom} namelist, as the maximum depth \np{pphmax}
in partial steps: for example, with
\np{pphmax}$=5750~m$ for the DRAKKAR 45 layer grid, the maximum ocean depth
allowed is actually $6000~m$ (the default thickness $e_{3t}(jpk1)$ being $250~m$).
Two variables in the namdom namelist are used to define the partial step
vertical grid. The mimimum water thickness (in meters) allowed for a cell
partially filled with bathymetry at level jk is the minimum of \np{rn\_e3zps\_min}
(thickness in meters, usually $20~m$) or $e_{3t}(jk)*$\np{rn\_e3zps\_rat} (a fraction,
usually 10\%, of the default thickness $e_{3t}(jk)$).
+In $z$-coordinate with partial steps,
+the depths of the model levels are defined by the reference analytical function $z_0 (k)$ as described in
+the previous section, \emph{except} in the bottom layer.
+The thickness of the bottom layer is allowed to vary as a function of geographical location $(\lambda,\varphi)$ to
+allow a better representation of the bathymetry, especially in the case of small slopes
+(where the bathymetry varies by less than one level thickness from one grid point to the next).
+The reference layer thicknesses $e_{3t}^0$ have been defined in the absence of bathymetry.
+With partial steps, layers from 1 to \jp{jpk}-2 can have a thickness smaller than $e_{3t}(jk)$.
+The model deepest layer (\jp{jpk}-1) is allowed to have either a smaller or larger thickness than $e_{3t}(jpk)$:
+the maximum thickness allowed is $2*e_{3t}(jpk-1)$.
+This has to be kept in mind when specifying values in \ngn{namdom} namelist,
+as \np{pphmax} is no longer strictly the maximum depth in partial steps:
+for example, with \np{pphmax}$=5750~m$ for the DRAKKAR 45 layer grid,
+the maximum ocean depth allowed is actually $6000~m$ (the default thickness $e_{3t}(jpk-1)$ being $250~m$).
+Two variables in the \ngn{namdom} namelist are used to define the partial step vertical grid.
+The minimum water thickness (in meters) allowed for a cell partially filled with bathymetry at level $jk$ is
+the minimum of \np{rn\_e3zps\_min} (thickness in meters, usually $20~m$) or $e_{3t}(jk)*$\np{rn\_e3zps\_rat}
+(a fraction, usually 10\%, of the default thickness $e_{3t}(jk)$).
\gmcomment{ \colorbox{yellow}{Add a figure here of pstep especially at last ocean level } }
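The rule can be sketched for a single column (illustrative only; 0-based indices and hypothetical names; NEMO's actual bookkeeping in \mdl{domzgr} is more involved):

```python
def zps_bottom(bathy, gdepw_1d, e3t_1d, rn_e3zps_min=20.0, rn_e3zps_rat=0.1):
    """Sketch of the partial-step rule: the bottom cell thickness is the part
    of the reference level actually filled with water, but never thinner than
    min(rn_e3zps_min, e3t_1d[k] * rn_e3zps_rat); a cell thinner than that is
    dropped and the level above becomes the bottom one.
    gdepw_1d holds w-depths (positive, metres); returns (level, thickness)."""
    k = max(i for i in range(len(gdepw_1d)) if gdepw_1d[i] < bathy)
    e3_bot = bathy - gdepw_1d[k]
    zmin = min(rn_e3zps_min, e3t_1d[k] * rn_e3zps_rat)
    if e3_bot < zmin:          # partial cell too thin: raise the sea floor
        k -= 1
        e3_bot = e3t_1d[k]     # level above kept at its reference thickness
    return k, e3_bot
```

For a 100 m reference layer the threshold is min(20 m, 10 m) = 10 m, so a 5 m sliver of water is discarded rather than kept as an unstably thin cell.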
@@ -727,7 +736,6 @@
%
Options are defined in \ngn{namzgr\_sco}.
In $s$coordinate (\np{ln\_sco}\forcode{ = .true.}), the depth and thickness of the model
levels are defined from the product of a depth field and either a stretching
function or its derivative, respectively:
+In $s$-coordinate (\np{ln\_sco}\forcode{ = .true.}), the depth and thickness of the model levels are defined from
+the product of a depth field and either a stretching function or its derivative, respectively:
\begin{equation} \label{eq:DOM_sco_ana}
@@ -738,19 +746,19 @@
\end{equation}
where $h$ is the depth of the last $w$level ($z_0(k)$) defined at the $t$point
location in the horizontal and $z_0(k)$ is a function which varies from $0$ at the sea
surface to $1$ at the ocean bottom. The depth field $h$ is not necessary the ocean
depth, since a mixed steplike and bottomfollowing representation of the
topography can be used (\autoref{fig:z_zps_s_sps}de) or an envelop bathymetry can be defined (\autoref{fig:z_zps_s_sps}f).
The namelist parameter \np{rn\_rmax} determines the slope at which the terrainfollowing coordinate intersects
the sea bed and becomes a pseudo zcoordinate.
The coordinate can also be hybridised by specifying \np{rn\_sbot\_min} and \np{rn\_sbot\_max}
as the minimum and maximum depths at which the terrainfollowing vertical coordinate is calculated.

Options for stretching the coordinate are provided as examples, but care must be taken to ensure
that the vertical stretch used is appropriate for the application.

The original default NEMO scoordinate stretching is available if neither of the other options
are specified as true (\np{ln\_s\_SH94}\forcode{ = .false.} and \np{ln\_s\_SF12}\forcode{ = .false.}).
+where $h$ is the depth of the last $w$-level ($z_0(k)$) defined at the $t$-point location in the horizontal and
+$z_0(k)$ is a function which varies from $0$ at the sea surface to $1$ at the ocean bottom.
+The depth field $h$ is not necessarily the ocean depth,
+since a mixed step-like and bottom-following representation of the topography can be used
+(\autoref{fig:z_zps_s_sps}d-e) or an envelope bathymetry can be defined (\autoref{fig:z_zps_s_sps}f).
+The namelist parameter \np{rn\_rmax} determines the slope at which
+the terrain-following coordinate intersects the sea bed and becomes a pseudo $z$-coordinate.
+The coordinate can also be hybridised by specifying \np{rn\_sbot\_min} and \np{rn\_sbot\_max} as
+the minimum and maximum depths at which the terrain-following vertical coordinate is calculated.
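The product form can be sketched with the simplest possible stretching, a uniform $\sigma$ (illustrative only, hypothetical names; the stretching options actually available in NEMO are discussed below):

```python
def sco_levels(h, jpk):
    """Sketch of the s-coordinate construction: depths and thicknesses are
    the product of the local depth field h and a stretching function, here a
    plain uniform sigma C(k) = (k - 1)/(jpk - 1), going from 0 at the sea
    surface to 1 at the ocean bottom."""
    gdepw = [h * (k - 1) / (jpk - 1) for k in range(1, jpk + 1)]
    e3w = [h / (jpk - 1)] * jpk        # h times the derivative of C(k)
    return gdepw, e3w
```

Every column then has the same number of levels whatever its depth, the hallmark of a terrain-following coordinate.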
+
+Options for stretching the coordinate are provided as examples,
+but care must be taken to ensure that the vertical stretch used is appropriate for the application.
+
+The original default NEMO $s$-coordinate stretching is available if neither of the other options is specified as true
+(\np{ln\_s\_SH94}\forcode{ = .false.} and \np{ln\_s\_SF12}\forcode{ = .false.}).
This uses a depth-independent $\tanh$ function for the stretching \citep{Madec_al_JPO96}:
@@ -760,6 +768,6 @@
\end{equation}
where $s_{min}$ is the depth at which the $s$coordinate stretching starts and
allows a $z$coordinate to placed on top of the stretched coordinate,
+where $s_{min}$ is the depth at which the $s$-coordinate stretching starts and
+allows a $z$-coordinate to be placed on top of the stretched coordinate,
and $z$ is the depth (negative down from the sea surface).
@@ -777,6 +785,7 @@
\end{equation}
A stretching function, modified from the commonly used \citet{Song_Haidvogel_JCP94}
stretching (\np{ln\_s\_SH94}\forcode{ = .true.}), is also available and is more commonly used for shelf seas modelling:
+A stretching function,
+modified from the commonly used \citet{Song_Haidvogel_JCP94} stretching (\np{ln\_s\_SH94}\forcode{ = .true.}),
+is also available and is more commonly used for shelf seas modelling:
\begin{equation}
@@ -789,18 +798,19 @@
\begin{figure}[!ht] \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_sco_function}
\caption{ \protect\label{fig:sco_function}
Examples of the stretching function applied to a seamount; from left to right:
surface, surface and bottom, and bottom intensified resolutions}
+\caption{ \protect\label{fig:sco_function}
+ Examples of the stretching function applied to a seamount;
+ from left to right: surface, surface and bottom, and bottom intensified resolutions}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
where $H_c$ is the critical depth (\np{rn\_hc}) at which the coordinate transitions from
pure $\sigma$ to the stretched coordinate, and $\theta$ (\np{rn\_theta}) and $b$ (\np{rn\_bb})
are the surface and bottom control parameters such that $0\leqslant \theta \leqslant 20$, and
$0\leqslant b\leqslant 1$. $b$ has been designed to allow surface and/or bottom
increase of the vertical resolution (\autoref{fig:sco_function}).

Another example has been provided at version 3.5 (\np{ln\_s\_SF12}) that allows
a fixed surface resolution in an analytical terrainfollowing stretching \citet{Siddorn_Furner_OM12}.
+where $H_c$ is the critical depth (\np{rn\_hc}) at which
+the coordinate transitions from pure $\sigma$ to the stretched coordinate,
+and $\theta$ (\np{rn\_theta}) and $b$ (\np{rn\_bb}) are the surface and bottom control parameters such that
+$0\leqslant \theta \leqslant 20$, and $0\leqslant b\leqslant 1$.
+$b$ has been designed to allow surface and/or bottom increase of the vertical resolution
+(\autoref{fig:sco_function}).
+
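As an illustration, the classic \citet{Song_Haidvogel_JCP94} stretching function can be sketched in Python; the modified form used by NEMO under \np{ln\_s\_SH94} may differ in detail, and the function name and default parameter values below are illustrative only:

```python
import math

def sh94_stretch(s, theta=6.0, b=0.6):
    """Song & Haidvogel (1994) stretching C(s) for s in [-1, 0].
    theta: surface control parameter (0 < theta <= 20),
    b:     bottom control parameter (0 <= b <= 1).
    C(0) = 0 at the surface and C(-1) = -1 at the bottom."""
    if theta == 0.0:
        return s  # no stretching: pure sigma
    return ((1.0 - b) * math.sinh(theta * s) / math.sinh(theta)
            + b * (math.tanh(theta * (s + 0.5)) - math.tanh(0.5 * theta))
                / (2.0 * math.tanh(0.5 * theta)))
```

Increasing $\theta$ concentrates levels near the surface, while $b \to 1$ adds bottom intensification, consistent with the behaviour described above.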
+Another example has been provided at version 3.5 (\np{ln\_s\_SF12}) that allows a fixed surface resolution in
+an analytical terrain-following stretching \citep{Siddorn_Furner_OM12}.
In this case a stretching function $\gamma$ is defined such that:
@@ -821,8 +831,9 @@
\end{equation}
This gives an analytical stretching of $\sigma$ that is solvable in $A$ and $B$ as a function of
the user prescribed stretching parameter $\alpha$ (\np{rn\_alpha}) that stretches towards
the surface ($\alpha > 1.0$) or the bottom ($\alpha < 1.0$) and user prescribed surface (\np{rn\_zs})
and bottom depths. The bottom cell depth in this example is given as a function of water depth:
+This gives an analytical stretching of $\sigma$ that is solvable in $A$ and $B$ as a function of
+the user-prescribed stretching parameter $\alpha$ (\np{rn\_alpha}) that stretches towards
+the surface ($\alpha > 1.0$) or the bottom ($\alpha < 1.0$) and
+user-prescribed surface (\np{rn\_zs}) and bottom depths.
+The bottom cell depth in this example is given as a function of water depth:
\begin{equation} \label{eq:DOM_zb}
@@ -834,15 +845,34 @@
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]
 \includegraphics[width=1.0\textwidth]{FIG_DOM_compare_coordinates_surface}
 \caption{A comparison of the \citet{Song_Haidvogel_JCP94} $S$coordinate (solid lines), a 50 level $Z$coordinate (contoured surfaces) and the \citet{Siddorn_Furner_OM12} $S$coordinate (dashed lines) in the surface 100m for a idealised bathymetry that goes from 50m to 5500m depth. For clarity every third coordinate surface is shown.}
+ \includegraphics[width=1.0\textwidth]{Fig_DOM_compare_coordinates_surface}
+ \caption{
+ A comparison of the \citet{Song_Haidvogel_JCP94} $S$-coordinate (solid lines),
+ a 50 level $Z$-coordinate (contoured surfaces) and
+ the \citet{Siddorn_Furner_OM12} $S$-coordinate (dashed lines) in
+ the surface 100~m for an idealised bathymetry that goes from 50~m to 5500~m depth.
+ For clarity every third coordinate surface is shown.}
\label{fig:fig_compare_coordinates_surface}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
This gives a smooth analytical stretching in computational space that is constrained to given specified surface and bottom grid cell thicknesses in real space. This is not to be confused with the hybrid schemes that superimpose geopotential coordinates on terrain following coordinates thus creating a nonanalytical vertical coordinate that therefore may suffer from large gradients in the vertical resolutions. This stretching is less straightforward to implement than the \citet{Song_Haidvogel_JCP94} stretching, but has the advantage of resolving diurnal processes in deep water and has generally flatter slopes.

As with the \citet{Song_Haidvogel_JCP94} stretching the stretch is only applied at depths greater than the critical depth $h_c$. In this example two options are available in depths shallower than $h_c$, with pure sigma being applied if the \np{ln\_sigcrit} is true and pure zcoordinates if it is false (the zcoordinate being equal to the depths of the stretched coordinate at $h_c$.

Minimising the horizontal slope of the vertical coordinate is important in terrainfollowing systems as large slopes lead to hydrostatic consistency. A hydrostatic consistency parameter diagnostic following \citet{Haney1991} has been implemented, and is output as part of the model mesh file at the start of the run.
+This gives a smooth analytical stretching in computational space that is constrained to
+specified surface and bottom grid cell thicknesses in real space.
+This is not to be confused with the hybrid schemes that
+superimpose geopotential coordinates on terrain-following coordinates,
+thus creating a non-analytical vertical coordinate that
+may therefore suffer from large gradients in the vertical resolution.
+This stretching is less straightforward to implement than the \citet{Song_Haidvogel_JCP94} stretching,
+but has the advantage of resolving diurnal processes in deep water and has generally flatter slopes.
+
+As with the \citet{Song_Haidvogel_JCP94} stretching, the stretching is only applied at depths greater than
+the critical depth $h_c$.
+In this example two options are available in depths shallower than $h_c$,
+with pure sigma being applied if \np{ln\_sigcrit} is true and pure z-coordinates if it is false
+(the z-coordinate being equal to the depths of the stretched coordinate at $h_c$).
+
+Minimising the horizontal slope of the vertical coordinate is important in terrain-following systems as
+large slopes lead to hydrostatic inconsistency.
+A hydrostatic consistency parameter diagnostic following \citet{Haney1991} has been implemented,
+and is output as part of the model mesh file at the start of the run.
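A common form of this diagnostic is the Haney number $rx1$, built from the heights of the $w$-level interfaces of adjacent columns. The sketch below assumes the classic $rx1$ definition; the exact diagnostic written to the NEMO mesh file may differ in detail:

```python
import numpy as np

def haney_rx1(zw):
    """Haney (1991) hydrostatic consistency number for a 2D (i, k) slice.
    zw[i, k]: height (negative downwards) of the w-level interfaces at column i,
    with k increasing downwards.  Returns the maximum rx1 over the slice.
    Classic definition; NEMO's mesh-file diagnostic may differ in detail."""
    zw = np.asarray(zw, dtype=float)
    num = np.abs(zw[1:, 1:] - zw[:-1, 1:] + zw[1:, :-1] - zw[:-1, :-1])
    den = np.abs(zw[1:, 1:] + zw[:-1, 1:] - zw[1:, :-1] - zw[:-1, :-1])
    return float(np.max(num / den))
```

Flat interfaces give $rx1 = 0$ (perfect hydrostatic consistency); steeply sloping coordinate surfaces relative to the layer thickness push $rx1$ towards and beyond 1.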
% 
@@ -862,24 +892,27 @@
\label{subsec:DOM_msk}
Whatever the vertical coordinate used, the model offers the possibility of
representing the bottom topography with steps that follow the face of the
model cells (step like topography) \citep{Madec_al_JPO96}. The distribution of
the steps in the horizontal is defined in a 2D integer array, mbathy, which
gives the number of ocean levels ($i.e.$ those that are not masked) at each
$t$point. mbathy is computed from the meter bathymetry using the definiton of
gdept as the number of $t$points which gdept $\leq$ bathy.

Modifications of the model bathymetry are performed in the \textit{bat\_ctl}
routine (see \mdl{domzgr} module) after mbathy is computed. Isolated grid points
that do not communicate with another ocean point at the same level are eliminated.

As for the representation of bathymetry, a 2D integer array, misfdep, is created.
misfdep defines the level of the first wet $t$point. All the cells between $k=1$ and $misfdep(i,j)1$ are masked.
+Whatever the vertical coordinate used,
+the model offers the possibility of representing the bottom topography with steps that
+follow the face of the model cells (step-like topography) \citep{Madec_al_JPO96}.
+The distribution of the steps in the horizontal is defined in a 2D integer array, mbathy,
+which gives the number of ocean levels ($i.e.$ those that are not masked) at each $t$-point.
+mbathy is computed from the meter bathymetry using the definition of gdept as
+the number of $t$-points for which gdept $\leq$ bathy.
+
+Modifications of the model bathymetry are performed in the \textit{bat\_ctl} routine (see \mdl{domzgr} module) after
+mbathy is computed.
+Isolated grid points that do not communicate with another ocean point at the same level are eliminated.
+
+As for the representation of bathymetry, a 2D integer array, misfdep, is created.
+misfdep defines the level of the first wet $t$-point.
+All the cells between $k=1$ and $misfdep(i,j)-1$ are masked.
By default, misfdep(:,:)=1 and no cells are masked.
In case of ice shelf cavities, modifications of the model bathymetry and ice shelf draft into
the cavities are performed in the \textit{zgr\_isf} routine. The compatibility between ice shelf draft and bathymetry is checked.
+the cavities are performed in the \textit{zgr\_isf} routine.
+The compatibility between ice shelf draft and bathymetry is checked.
All the locations where the isf cavity is thinner than \np{rn\_isfhmin} meters are grounded ($i.e.$ masked).
If only one cell on the water column is opened at $t$, $u$ or $v$points, the bathymetry or the ice shelf draft is dug to fit this constrain.
+If only one cell in the water column is open at $t$-, $u$- or $v$-points,
+the bathymetry or the ice shelf draft is dug to fit this constraint.
If the incompatibility is too strong (more than one cell would need to be dug), the cell is masked.\\
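The rule used to fill mbathy can be sketched directly from the definition above. This is a minimal illustration of the counting rule only; the consistency checks of \textit{bat\_ctl} (eliminating isolated points, closing boundaries) are not reproduced:

```python
import numpy as np

def compute_mbathy(bathy, gdept):
    """Number of unmasked ocean levels at each t-point:
    the count of t-levels whose depth gdept(k) satisfies gdept <= bathy.
    bathy: (j, i) metre bathymetry, 0 over land; gdept: (k,) t-level depths,
    both positive downwards.  Sketch only; domzgr's bat_ctl checks are omitted."""
    bathy = np.asarray(bathy, dtype=float)
    gdept = np.asarray(gdept, dtype=float)
    return (gdept[:, None, None] <= bathy[None, :, :]).sum(axis=0)
```

For instance, with $t$-levels at 5, 15 and 30~m, a 10~m column gets one wet level, a 20~m column two, and land (0~m) none.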
@@ -896,13 +929,13 @@
\end{align*}
Note that, without ice shelves cavities, masks at $t$ and $w$points are identical with
the numerical indexing used (\autoref{subsec:DOM_Num_Index}). Nevertheless, $wmask$ are required
with oceean cavities to deal with the top boundary (ice shelf/ocean interface)
+Note that, without ice shelf cavities,
+masks at $t$- and $w$-points are identical with the numerical indexing used (\autoref{subsec:DOM_Num_Index}).
+Nevertheless, $wmask$ is required with ocean cavities to deal with the top boundary (ice shelf/ocean interface)
exactly in the same way as for the bottom boundary.
The specification of closed lateral boundaries requires that at least the first and last
rows and columns of the \textit{mbathy} array are set to zero. In the particular
case of an eastwest cyclical boundary condition, \textit{mbathy} has its last
column equal to the second one and its first column equal to the last but one
+The specification of closed lateral boundaries requires that at least
+the first and last rows and columns of the \textit{mbathy} array are set to zero.
+In the particular case of an east-west cyclical boundary condition,
+\textit{mbathy} has its last column equal to the second one and its first column equal to the last but one
(and so too the mask arrays) (see \autoref{fig:LBC_jperio}).
@@ -919,14 +952,16 @@
Options are defined in \ngn{namtsd}.
By default, the ocean start from rest (the velocity field is set to zero) and the initialization of
temperature and salinity fields is controlled through the \np{ln\_tsd\_ini} namelist parameter.
+By default, the ocean starts from rest (the velocity field is set to zero) and
+the initialization of temperature and salinity fields is controlled through the \np{ln\_tsd\_ini} namelist parameter.
\begin{description}
\item[\np{ln\_tsd\_init}\forcode{ = .true.}] use a T and S input files that can be given on the model grid itself or
on their native input data grid. In the latter case, the data will be interpolated onthefly both in the
horizontal and the vertical to the model grid (see \autoref{subsec:SBC_iof}). The information relative to the
input files are given in the \np{sn\_tem} and \np{sn\_sal} structures.
The computation is done in the \mdl{dtatsd} module.
\item[\np{ln\_tsd\_init}\forcode{ = .false.}] use constant salinity value of 35.5 psu and an analytical profile of temperature
(typical of the tropical ocean), see \rou{istate\_t\_s} subroutine called from \mdl{istate} module.
+\item[\np{ln\_tsd\_init}\forcode{ = .true.}]
+ use T and S input files that can be given on the model grid itself or on their native input data grid.
+ In the latter case,
+ the data will be interpolated on-the-fly both in the horizontal and the vertical to the model grid
+ (see \autoref{subsec:SBC_iof}).
+ The information relative to the input files is given in the \np{sn\_tem} and \np{sn\_sal} structures.
+ The computation is done in the \mdl{dtatsd} module.
+\item[\np{ln\_tsd\_init}\forcode{ = .false.}]
+ use a constant salinity value of 35.5~psu and an analytical profile of temperature (typical of the tropical ocean),
+ see the \rou{istate\_t\_s} subroutine called from the \mdl{istate} module.
\end{description}
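The two branches above can be sketched as follows. The analytical temperature profile shown is purely illustrative (an exponential decay with depth), not the actual profile of \rou{istate\_t\_s}, and `read_ts` is a hypothetical stand-in for the on-the-fly interpolation of the \np{sn\_tem}/\np{sn\_sal} input files:

```python
import numpy as np

def initial_tsd(ln_tsd_init, jpk, read_ts=None):
    """Sketch of the T/S initialisation branches controlled by ln_tsd_init.
    read_ts: hypothetical callable returning (T, S) from input files."""
    if ln_tsd_init:
        # T and S read from input files (sn_tem / sn_sal), possibly interpolated
        tem, sal = read_ts()
    else:
        depth = np.linspace(0.0, 5000.0, jpk)
        # illustrative tropical-like profile only; NEMO's analytical profile differs
        tem = 2.0 + 25.0 * np.exp(-depth / 1000.0)
        sal = np.full(jpk, 35.5)   # constant salinity of 35.5 psu
    return tem, sal
```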
\end{document}
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DYN.tex
===================================================================
--- NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DYN.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_DYN.tex (revision 10368)
@@ -11,10 +11,10 @@
$\ $\newline %force an empty line
Using the representation described in \autoref{chap:DOM}, several semidiscrete
space forms of the dynamical equations are available depending on the vertical
coordinate used and on the conservation properties of the vorticity term. In all
the equations presented here, the masking has been omitted for simplicity.
One must be aware that all the quantities are masked fields and that each time an
average or difference operator is used, the resulting field is multiplied by a mask.
+Using the representation described in \autoref{chap:DOM},
+several semi-discrete space forms of the dynamical equations are available depending on
+the vertical coordinate used and on the conservation properties of the vorticity term.
+In all the equations presented here, the masking has been omitted for simplicity.
+One must be aware that all the quantities are masked fields and
+that each time an average or difference operator is used, the resulting field is multiplied by a mask.
The prognostic ocean dynamics equation can be summarized as follows:
@@ -24,34 +24,33 @@
+ \text{HPG} + \text{SPG} + \text{LDF} + \text{ZDF}
\end{equation*}
NXT stands for next, referring to the timestepping. The first group of terms on
the rhs of this equation corresponds to the Coriolis and advection
terms that are decomposed into either a vorticity part (VOR), a kinetic energy part (KEG)
and a vertical advection part (ZAD) in the vector invariant formulation, or a Coriolis
and advection part (COR+ADV) in the flux formulation. The terms following these
are the pressure gradient contributions (HPG, Hydrostatic Pressure Gradient,
and SPG, Surface Pressure Gradient); and contributions from lateral diffusion
(LDF) and vertical diffusion (ZDF), which are added to the rhs in the \mdl{dynldf}
and \mdl{dynzdf} modules. The vertical diffusion term includes the surface and
bottom stresses. The external forcings and parameterisations require complex
inputs (surface wind stress calculation using bulk formulae, estimation of mixing
coefficients) that are carried out in modules SBC, LDF and ZDF and are described
in \autoref{chap:SBC}, \autoref{chap:LDF} and \autoref{chap:ZDF}, respectively.

In the present chapter we also describe the diagnostic equations used to compute
the horizontal divergence, curl of the velocities (\emph{divcur} module) and
the vertical velocity (\emph{wzvmod} module).
+NXT stands for next, referring to the time-stepping.
+The first group of terms on the rhs of this equation corresponds to the Coriolis and advection terms that
+are decomposed into either a vorticity part (VOR), a kinetic energy part (KEG) and
+a vertical advection part (ZAD) in the vector invariant formulation,
+or a Coriolis and advection part (COR+ADV) in the flux formulation.
+The terms following these are the pressure gradient contributions
+(HPG, Hydrostatic Pressure Gradient, and SPG, Surface Pressure Gradient);
+and contributions from lateral diffusion (LDF) and vertical diffusion (ZDF),
+which are added to the rhs in the \mdl{dynldf} and \mdl{dynzdf} modules.
+The vertical diffusion term includes the surface and bottom stresses.
+The external forcings and parameterisations require complex inputs
+(surface wind stress calculation using bulk formulae, estimation of mixing coefficients)
+that are carried out in modules SBC, LDF and ZDF and are described in
+\autoref{chap:SBC}, \autoref{chap:LDF} and \autoref{chap:ZDF}, respectively.
+
+In the present chapter we also describe the diagnostic equations used to compute the horizontal divergence,
+curl of the velocities (\emph{divcur} module) and the vertical velocity (\emph{wzvmod} module).
The different options available to the user are managed by namelist variables.
For term \textit{ttt} in the momentum equations, the logical namelist variables are \textit{ln\_dynttt\_xxx},
where \textit{xxx} is a 3 or 4 letter acronym corresponding to each optional scheme.
If a CPP key is used for this term its name is \key{ttt}. The corresponding
code can be found in the \textit{dynttt\_xxx} module in the DYN directory, and it is
usually computed in the \textit{dyn\_ttt\_xxx} subroutine.

The user has the option of extracting and outputting each tendency term from the
3D momentum equations (\key{trddyn} defined), as described in
\autoref{chap:MISC}. Furthermore, the tendency terms associated with the 2D
barotropic vorticity balance (when \key{trdvor} is defined) can be derived from the
3D terms.
+where \textit{xxx} is a 3 or 4 letter acronym corresponding to each optional scheme.
+If a CPP key is used for this term its name is \key{ttt}.
+The corresponding code can be found in the \textit{dynttt\_xxx} module in the DYN directory,
+and it is usually computed in the \textit{dyn\_ttt\_xxx} subroutine.
+
+The user has the option of extracting and outputting each tendency term from the 3D momentum equations
+(\key{trddyn} defined), as described in \autoref{chap:MISC}.
+Furthermore, the tendency terms associated with the 2D barotropic vorticity balance (when \key{trdvor} is defined)
+can be derived from the 3D terms.
%%%
\gmcomment{STEVEN: not quite sure I've got the sense of the last sentence. does
@@ -78,5 +77,6 @@
\end{equation}
The horizontal divergence is defined at a $T$point. It is given by:
+The horizontal divergence is defined at a $T$-point.
+It is given by:
\begin{equation} \label{eq:divcur_div}
\chi =\frac{1}{e_{1t}\,e_{2t}\,e_{3t} }
@@ -85,16 +85,17 @@
\end{equation}
Note that although the vorticity has the same discrete expression in $z$
and $s$coordinates, its physical meaning is not identical. $\zeta$ is a pseudo
vorticity along $s$surfaces (only pseudo because $(u,v)$ are still defined along
geopotential surfaces, but are not necessarily defined at the same depth).

The vorticity and divergence at the \textit{before} step are used in the computation
of the horizontal diffusion of momentum. Note that because they have been
calculated prior to the Asselin filtering of the \textit{before} velocities, the
\textit{before} vorticity and divergence arrays must be included in the restart file
to ensure perfect restartability. The vorticity and divergence at the \textit{now}
time step are used for the computation of the nonlinear advection and of the
vertical velocity respectively.
+Note that although the vorticity has the same discrete expression in $z$- and $s$-coordinates,
+its physical meaning is not identical.
+$\zeta$ is a pseudo vorticity along $s$-surfaces
+(only pseudo because $(u,v)$ are still defined along geopotential surfaces,
+but are not necessarily defined at the same depth).
+
+The vorticity and divergence at the \textit{before} step are used in the computation of
+the horizontal diffusion of momentum.
+Note that because they have been calculated prior to the Asselin filtering of the \textit{before} velocities,
+the \textit{before} vorticity and divergence arrays must be included in the restart file to
+ensure perfect restartability.
+The vorticity and divergence at the \textit{now} time step are used for the computation of
+the nonlinear advection and of the vertical velocity respectively.
%
@@ -104,5 +105,5 @@
\label{subsec:DYN_sshwzv}
The sea surface height is given by :
+The sea surface height is given by:
\begin{equation} \label{eq:dynspg_ssh}
\begin{aligned}
@@ -115,18 +116,19 @@
\end{equation}
where \textit{emp} is the surface freshwater budget (evaporation minus precipitation),
expressed in Kg/m$^2$/s (which is equal to mm/s), and $\rho _w$=1,035~Kg/m$^3$
is the reference density of sea water (Boussinesq approximation). If river runoff is
expressed as a surface freshwater flux (see \autoref{chap:SBC}) then \textit{emp} can be
written as the evaporation minus precipitation, minus the river runoff.
The seasurface height is evaluated using exactly the same time stepping scheme
as the tracer equation \autoref{eq:tra_nxt}:
a leapfrog scheme in combination with an Asselin time filter, $i.e.$ the velocity appearing
in \autoref{eq:dynspg_ssh} is centred in time (\textit{now} velocity).
This is of paramount importance. Replacing $T$ by the number $1$ in the tracer equation and summing
over the water column must lead to the sea surface height equation otherwise tracer content
will not be conserved \citep{Griffies_al_MWR01, Leclair_Madec_OM09}.

The vertical velocity is computed by an upward integration of the horizontal
divergence starting at the bottom, taking into account the change of the thickness of the levels :
+expressed in kg/m$^2$/s (which is equal to mm/s),
+and $\rho _w$=1,035~kg/m$^3$ is the reference density of sea water (Boussinesq approximation).
+If river runoff is expressed as a surface freshwater flux (see \autoref{chap:SBC}) then
+\textit{emp} can be written as the evaporation minus precipitation, minus the river runoff.
+The sea-surface height is evaluated using exactly the same time stepping scheme as
+the tracer equation \autoref{eq:tra_nxt}:
+a leapfrog scheme in combination with an Asselin time filter,
+$i.e.$ the velocity appearing in \autoref{eq:dynspg_ssh} is centred in time (\textit{now} velocity).
+This is of paramount importance.
+Replacing $T$ by the number $1$ in the tracer equation and summing over the water column must lead to
+the sea surface height equation, otherwise tracer content will not be conserved
+\citep{Griffies_al_MWR01, Leclair_Madec_OM09}.
+
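The leapfrog/Asselin combination described above can be sketched generically for any prognostic field. The filter coefficient `gamma` is illustrative (NEMO's Asselin parameter is set in the namelist; the value 0.1 below is an assumption, not the model default):

```python
def leapfrog_asselin(x_b, x_n, rhs_n, dt, gamma=0.1):
    """One leapfrog step with Asselin time filtering.
    x_b, x_n: 'before' and 'now' fields; rhs_n: tendency evaluated at 'now'
    (centred in time).  Returns (filtered 'now', 'after') for the next step."""
    x_a = x_b + 2.0 * dt * rhs_n                    # leapfrog: centred tendency
    x_n_f = x_n + gamma * (x_b - 2.0 * x_n + x_a)   # Asselin filter damps the
    return x_n_f, x_a                               # leapfrog computational mode
```

With a constant tendency the solution grows linearly and the filter leaves it unchanged, as expected; the filter only acts on the oscillating computational mode.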
+The vertical velocity is computed by an upward integration of the horizontal divergence starting at the bottom,
+taking into account the change of the thickness of the levels:
\begin{equation} \label{eq:wzv}
\left\{ \begin{aligned}
@@ -138,17 +140,17 @@
In the case of a nonlinear free surface (\key{vvl}), the top vertical velocity is $\textit{emp}/\rho_w$,
as changes in the divergence of the barotropic transport are absorbed into the change
of the level thicknesses, reorientated downward.
+as changes in the divergence of the barotropic transport are absorbed into the change of the level thicknesses,
+reorientated downward.
\gmcomment{not sure of this... to be modified with the change in emp setting}
In the case of a linear free surface, the time derivative in \autoref{eq:wzv} disappears.
The upper boundary condition applies at a fixed level $z=0$. The top vertical velocity
is thus equal to the divergence of the barotropic transport ($i.e.$ the first term in the
righthandside of \autoref{eq:dynspg_ssh}).

Note also that whereas the vertical velocity has the same discrete
expression in $z$ and $s$coordinates, its physical meaning is not the same:
in the second case, $w$ is the velocity normal to the $s$surfaces.
Note also that the $k$axis is reorientated downwards in the \textsc{fortran} code compared
to the indexing used in the semidiscrete equations such as \autoref{eq:wzv}
+The upper boundary condition applies at a fixed level $z=0$.
+The top vertical velocity is thus equal to the divergence of the barotropic transport
+($i.e.$ the first term in the right-hand side of \autoref{eq:dynspg_ssh}).
+
+Note also that whereas the vertical velocity has the same discrete expression in $z$- and $s$-coordinates,
+its physical meaning is not the same:
+in the second case, $w$ is the velocity normal to the $s$-surfaces.
+Note also that the $k$-axis is reorientated downwards in the \textsc{fortran} code compared to
+the indexing used in the semi-discrete equations such as \autoref{eq:wzv}
(see \autoref{subsec:DOM_Num_Index_vertical}).
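The upward integration of the horizontal divergence can be sketched for a single water column. This is a steady-state illustration only: the level-thickness time variations entering \autoref{eq:wzv} in the non-linear free surface case are omitted:

```python
import numpy as np

def vertical_velocity(hdiv, e3t):
    """Vertical velocity at cell interfaces by upward integration of the
    horizontal divergence, starting from w = 0 at the bottom.
    hdiv[k], e3t[k]: divergence and thickness of t-cells, k = 0 at the surface
    (levels ordered downwards, as in the code).  Sketch: the time variation of
    the level thicknesses (vvl term) is omitted."""
    hdiv = np.asarray(hdiv, dtype=float)
    e3t = np.asarray(e3t, dtype=float)
    w = np.zeros(hdiv.size + 1)                 # w at the cell interfaces
    for k in range(hdiv.size - 1, -1, -1):      # from the bottom cell upwards
        w[k] = w[k + 1] - e3t[k] * hdiv[k]      # continuity: dw/dz = -div
    return w
```

A column whose divergence integrates to zero recovers $w = 0$ at the surface, consistent with the rigid-lid limit of the linear free surface discussion above.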
@@ -164,13 +166,12 @@
%
The vector invariant form of the momentum equations is the one most
often used in applications of the \NEMO ocean model. The flux form option
(see next section) has been present since version $2$. Options are defined
through the \ngn{namdyn\_adv} namelist variables
Coriolis and momentum advection terms are evaluated using a leapfrog
scheme, $i.e.$ the velocity appearing in these expressions is centred in
time (\textit{now} velocity).
At the lateral boundaries either free slip, no slip or partial slip boundary
conditions are applied following \autoref{chap:LBC}.
+The vector invariant form of the momentum equations is the one most often used in
+applications of the \NEMO ocean model.
+The flux form option (see next section) has been present since version $2$.
+Options are defined through the \ngn{namdyn\_adv} namelist variables.
+Coriolis and momentum advection terms are evaluated using a leapfrog scheme,
+$i.e.$ the velocity appearing in these expressions is centred in time (\textit{now} velocity).
+At the lateral boundaries either free slip, no slip or partial slip boundary conditions are applied following
+\autoref{chap:LBC}.
% 
@@ -185,14 +186,14 @@
Options are defined through the \ngn{namdyn\_vor} namelist variables.
Four discretisations of the vorticity term (\np{ln\_dynvor\_xxx}\forcode{ = .true.}) are available:
conserving potential enstrophy of horizontally nondivergent flow (ENS scheme) ;
conserving horizontal kinetic energy (ENE scheme) ; conserving potential enstrophy for
the relative vorticity term and horizontal kinetic energy for the planetary vorticity
term (MIX scheme) ; or conserving both the potential enstrophy of horizontally nondivergent
flow and horizontal kinetic energy (EEN scheme) (see \autoref{subsec:C_vorEEN}). In the
case of ENS, ENE or MIX schemes the land sea mask may be slightly modified to ensure the
consistency of vorticity term with analytical equations (\np{ln\_dynvor\_con}\forcode{ = .true.}).
The vorticity terms are all computed in dedicated routines that can be found in
the \mdl{dynvor} module.
+Four discretisations of the vorticity term (\np{ln\_dynvor\_xxx}\forcode{ = .true.}) are available:
+conserving potential enstrophy of horizontally non-divergent flow (ENS scheme);
+conserving horizontal kinetic energy (ENE scheme);
+conserving potential enstrophy for the relative vorticity term and
+horizontal kinetic energy for the planetary vorticity term (MIX scheme);
+or conserving both the potential enstrophy of horizontally non-divergent flow and horizontal kinetic energy
+(EEN scheme) (see \autoref{subsec:C_vorEEN}).
+In the case of the ENS, ENE or MIX schemes, the land-sea mask may be slightly modified to ensure the consistency of
+the vorticity term with the analytical equations (\np{ln\_dynvor\_con}\forcode{ = .true.}).
+The vorticity terms are all computed in dedicated routines that can be found in the \mdl{dynvor} module.
%
@@ -202,8 +203,9 @@
\label{subsec:DYN_vor_ens}
In the enstrophy conserving case (ENS scheme), the discrete formulation of the
vorticity term provides a global conservation of the enstrophy
($ [ (\zeta +f ) / e_{3f} ]^2 $ in $s$coordinates) for a horizontally nondivergent
flow ($i.e.$ $\chi$=$0$), but does not conserve the total kinetic energy. It is given by:
+In the enstrophy conserving case (ENS scheme),
+the discrete formulation of the vorticity term provides a global conservation of the enstrophy
+($ [ (\zeta +f ) / e_{3f} ]^2 $ in $s$-coordinates) for a horizontally non-divergent flow ($i.e.$ $\chi$=$0$),
+but does not conserve the total kinetic energy.
+It is given by:
\begin{equation} \label{eq:dynvor_ens}
\left\{
@@ -223,6 +225,6 @@
\label{subsec:DYN_vor_ene}
The kinetic energy conserving scheme (ENE scheme) conserves the global
kinetic energy but not the global enstrophy. It is given by:
+The kinetic energy conserving scheme (ENE scheme) conserves the global kinetic energy but not the global enstrophy.
+It is given by:
\begin{equation} \label{eq:dynvor_ene}
\left\{ \begin{aligned}
@@ -240,8 +242,7 @@
\label{subsec:DYN_vor_mix}
For the mixed energy/enstrophy conserving scheme (MIX scheme), a mixture of the
two previous schemes is used. It consists of the ENS scheme (\autoref{eq:dynvor_ens})
for the relative vorticity term, and of the ENE scheme (\autoref{eq:dynvor_ene}) applied
to the planetary vorticity term.
+For the mixed energy/enstrophy conserving scheme (MIX scheme), a mixture of the two previous schemes is used.
+It consists of the ENS scheme (\autoref{eq:dynvor_ens}) for the relative vorticity term,
+and of the ENE scheme (\autoref{eq:dynvor_ene}) applied to the planetary vorticity term.
\begin{equation} \label{eq:dynvor_mix}
\left\{ { \begin{aligned}
@@ -263,19 +264,19 @@
\label{subsec:DYN_vor_een}
In both the ENS and ENE schemes, it is apparent that the combination of $i$ and $j$
averages of the velocity allows for the presence of grid point oscillation structures
that will be invisible to the operator. These structures are \textit{computational modes}
that will be at least partly damped by the momentum diffusion operator ($i.e.$ the
subgridscale advection), but not by the resolved advection term. The ENS and ENE schemes
therefore do not contribute to dump any grid point noise in the horizontal velocity field.
Such noise would result in more noise in the vertical velocity field, an undesirable feature.
This is a wellknown characteristic of $C$grid discretization where $u$ and $v$ are located
at different grid points, a price worth paying to avoid a double averaging in the pressure
gradient term as in the $B$grid.
+In both the ENS and ENE schemes,
+it is apparent that the combination of $i$ and $j$ averages of the velocity allows for
+the presence of grid point oscillation structures that will be invisible to the operator.
+These structures are \textit{computational modes} that will be at least partly damped by
+the momentum diffusion operator ($i.e.$ the subgridscale advection), but not by the resolved advection term.
+The ENS and ENE schemes therefore do not contribute to damp any grid point noise in the horizontal velocity field.
+Such noise would result in more noise in the vertical velocity field, an undesirable feature.
+This is a wellknown characteristic of $C$grid discretization where
+$u$ and $v$ are located at different grid points,
+a price worth paying to avoid a double averaging in the pressure gradient term as in the $B$grid.
\gmcomment{ To circumvent this, Adcroft (ADD REF HERE)
Nevertheless, this technique strongly distort the phase and group velocity of Rossby waves....}
A very nice solution to the problem of double averaging was proposed by \citet{Arakawa_Hsu_MWR90}.
The idea is to get rid of the double averaging by considering triad combinations of vorticity.
+A very nice solution to the problem of double averaging was proposed by \citet{Arakawa_Hsu_MWR90}.
+The idea is to get rid of the double averaging by considering triad combinations of vorticity.
It is noteworthy that this solution is conceptually quite similar to the one proposed by
\citet{Griffies_al_JPO98} for the discretization of the iso-neutral diffusion operator (see \autoref{apdx:C}).
q = \frac{\zeta +f} {e_{3f} }
\end{equation}
where the relative vorticity is defined by (\autoref{eq:divcur_cur}),
the Coriolis parameter is given by $f=2 \,\Omega \;\sin \varphi _f $ and the layer thickness at $f$-points is:
\begin{equation} \label{eq:een_e3f}
e_{3f} = \overline{\overline {e_{3t} }} ^{\,i+1/2,j+1/2}
\end{equation}
\begin{figure}[!ht] \begin{center}
\includegraphics[width=0.70\textwidth]{Fig_DYN_een_triad}
\caption{ \protect\label{fig:DYN_een_triad}
  Triads used in the energy and enstrophy conserving scheme (een) for
  $u$-component (upper panel) and $v$-component (lower panel).}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
A key point in \autoref{eq:een_e3f} is how the averaging in the \textbf{i} and \textbf{j} directions is made.
It uses the sum of masked t-point vertical scale factors divided either by the sum of the four t-point masks
(\np{nn\_een\_e3f}\forcode{ = 1}), or just by $4$ (\np{nn\_een\_e3f}\forcode{ = 0}).
The latter case preserves the continuity of $e_{3f}$ when one or more of the neighbouring $e_{3t}$ tends to zero and
extends by continuity the value of $e_{3f}$ into the land areas.
This case introduces a sub-grid-scale topography at f-points
(with a systematic reduction of $e_{3f}$ when a model level intercepts the bathymetry)
that tends to reinforce the topostrophy of the flow
($i.e.$ the tendency of the flow to follow the isobaths) \citep{Penduff_al_OS07}.
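The two averaging choices for $e_{3f}$ can be sketched as follows. This is an illustrative scalar sketch, not the NEMO (Fortran) code; the function name and its flattened four-point stencil arguments are invented for the example:

```python
def e3f_at_f(e3t, tmask, nn_een_e3f=1):
    """Average the four t-point thicknesses surrounding an f-point.

    e3t, tmask -- the four neighbouring t-point scale factors and land/sea
                  masks (hypothetical flattened stencil; NEMO works on 3-D arrays).
    nn_een_e3f = 1 : divide the masked sum by the number of wet t-points.
    nn_een_e3f = 0 : divide by 4, which keeps e3f continuous when a
                     neighbouring e3t tends to zero (and reduces e3f where
                     a model level intercepts the bathymetry).
    """
    s = sum(e * m for e, m in zip(e3t, tmask))
    if nn_een_e3f == 1:
        n = sum(tmask)
        return s / n if n > 0 else 0.0
    return s / 4.0
```

With one land neighbour, the `nn_een_e3f = 0` choice systematically reduces $e_{3f}$ (the sub-grid-scale topography effect described above), while the masked average does not.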
Next, the vorticity triads, $ {^i_j}\mathbb{Q}^{i_p}_{j_p}$, can be defined at a $T$-point as
the following triad combinations of the neighbouring potential vorticities defined at f-points
(\autoref{fig:DYN_een_triad}):
\begin{equation} \label{eq:Q_triads}
\end{equation}
This EEN scheme in fact combines the conservation properties of the ENS and ENE schemes.
It conserves both total energy and potential enstrophy in the limit of horizontally non-divergent flow
($i.e.$ $\chi$=$0$) (see \autoref{subsec:C_vorEEN}).
Applied to a realistic ocean configuration, it has been shown that it leads to a significant reduction of
the noise in the vertical velocity field \citep{Le_Sommer_al_OM09}.
Furthermore, used in combination with a partial steps representation of bottom topography,
it improves the interaction between current and topography,
leading to a larger topostrophy of the flow \citep{Barnier_al_OD06, Penduff_al_OS07}.
%
\label{subsec:DYN_keg}
As demonstrated in \autoref{apdx:C},
there is a single discrete formulation of the kinetic energy gradient term that,
together with the formulation chosen for the vertical advection (see below),
conserves the total kinetic energy:
\begin{equation} \label{eq:dynkeg}
\left\{ \begin{aligned}
\label{subsec:DYN_zad}
The discrete formulation of the vertical advection,
together with the formulation chosen for the gradient of kinetic energy (KE) term,
conserves the total kinetic energy.
Indeed, the change of KE due to the vertical advection is exactly balanced by
the change of KE due to the gradient of KE (see \autoref{apdx:C}).
\begin{equation} \label{eq:dynzad}
\left\{ \begin{aligned}
\end{aligned} \right.
\end{equation}
When \np{ln\_dynzad\_zts}\forcode{ = .true.},
a split-explicit time stepping with 5 sub-timesteps is used on the vertical advection term.
This option can be useful when the value of the timestep is limited by vertical advection \citep{Lemarie_OM2015}.
Note that in this case,
a similar split-explicit time stepping should be used on vertical advection of tracer to ensure better stability,
an option which is only available with a TVD scheme (see \np{ln\_traadv\_tvd\_zts} in \autoref{subsec:TRA_adv_tvd}).
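Why sub-stepping relaxes the stability limit can be seen on a scalar caricature. This sketch is an assumption for illustration only (a linear decay equation stepped with explicit Euler, not the NEMO vertical advection kernel): splitting the step into 5 sub-steps lets each sub-step see one fifth of the time step.

```python
def substep_decay(u0, k, dt, n_sub):
    """Advance du/dt = -k*u with explicit Euler, splitting dt into n_sub
    sub-steps of dt/n_sub each. Scalar caricature of split-explicit
    sub-stepping: the amplification factor per sub-step is (1 - k*dt/n_sub),
    so a dt that is unstable in one step can be stable in several."""
    u = u0
    for _ in range(n_sub):
        u = u - k * (dt / n_sub) * u
    return u
```

For `k*dt = 2.5` a single explicit step amplifies the solution (unstable), while five sub-steps damp it.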
Options are defined through the \ngn{namdyn\_adv} namelist variables.
In the flux form (as in the vector invariant form),
the Coriolis and momentum advection terms are evaluated using a leapfrog scheme,
$i.e.$ the velocity appearing in their expressions is centred in time (\textit{now} velocity).
At the lateral boundaries either free slip,
no slip or partial slip boundary conditions are applied following \autoref{chap:LBC}.
\label{subsec:DYN_cor_flux}
In flux form, the vorticity term reduces to a Coriolis term in which
the Coriolis parameter has been modified to account for the "metric" term.
This altered Coriolis parameter is thus discretised at $f$-points.
It is given by:
\begin{multline} \label{eq:dyncor_metric}
f+\frac{1}{e_1 e_2 }\left( {v\frac{\partial e_2 }{\partial i} - u\frac{\partial e_1 }{\partial j}} \right)
\end{multline}
Any of the (\autoref{eq:dynvor_ens}), (\autoref{eq:dynvor_ene}) and (\autoref{eq:dynvor_een}) schemes can be used to
compute the product of the Coriolis parameter and the vorticity.
However, the energy-conserving scheme (\autoref{eq:dynvor_een}) has exclusively been used to date.
This term is evaluated using a leapfrog scheme, $i.e.$ the velocity is centred in time (\textit{now} velocity).
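The metric-modified Coriolis parameter can be sketched pointwise as follows. This is a scalar sketch under stated assumptions (the discrete averaging of $u$ and $v$ onto $f$-points and the finite differencing of the scale factors are omitted; the function name is invented):

```python
from math import sin, pi

OMEGA = 7.292115e-5  # Earth's rotation rate (rad/s)

def metric_coriolis(f_lat_deg, u, v, e1, e2, de2_di, de1_dj):
    """Coriolis parameter at an f-point modified by the "metric" term
    f + (1/(e1*e2)) * (v * d(e2)/di - u * d(e1)/dj).
    All inputs are scalars evaluated at the f-point (illustrative only)."""
    f = 2.0 * OMEGA * sin(pi * f_lat_deg / 180.0)
    return f + (v * de2_di - u * de1_dj) / (e1 * e2)
```

On a grid with constant scale factors the metric derivatives vanish and the plain Coriolis parameter $f = 2\Omega\sin\varphi_f$ is recovered.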
%
\label{subsec:DYN_adv_flux}
The discrete expression of the advection term is given by:
\begin{equation} \label{eq:dynadv}
\left\{
\right.
\end{equation}
Two advection schemes are available:
a $2^{nd}$ order centered finite difference scheme, CEN2,
or a $3^{rd}$ order upstream biased scheme, UBS.
The latter is described in \citet{Shchepetkin_McWilliams_OM05}.
The schemes are selected using the namelist logicals \np{ln\_dynadv\_cen2} and \np{ln\_dynadv\_ubs}.
In flux form, the schemes differ by the choice of a space and time interpolation to define the value of
$u$ and $v$ at the centre of each face of $u$- and $v$-cells, $i.e.$ at the $T$-, $f$-,
and $uw$-points for $u$ and at the $f$-, $T$- and $vw$-points for $v$.
%
\label{subsec:DYN_adv_cen2}
In the centered $2^{nd}$ order formulation, the velocity is evaluated as the mean of the two neighbouring points:
\begin{equation} \label{eq:dynadv_cen2}
\left\{ \begin{aligned}
\end{aligned} \right.
\end{equation}
The scheme is non-diffusive ($i.e.$ it conserves the kinetic energy) but dispersive ($i.e.$ it may create false extrema).
It is therefore notoriously noisy and must be used in conjunction with an explicit diffusion operator to
produce a sensible solution.
The associated time-stepping is performed using a leapfrog scheme in conjunction with an Asselin time-filter,
so $u$ and $v$ are the \emph{now} velocities.
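The creation of false extrema by a centred scheme can be demonstrated on a one-dimensional caricature. This sketch is an assumption for illustration (a single forward step of linear advection with a CEN2-like centred space difference, not the leapfrog/Asselin stepping of the model; all names are invented):

```python
def cen2_advect_step(q, c):
    """One explicit step of 1-D advection dq/dt = -u dq/dx using a centred
    (CEN2-like) space difference on a periodic grid; c is the Courant number.
    Illustrative caricature only."""
    n = len(q)
    return [q[i] - 0.5 * c * (q[(i + 1) % n] - q[(i - 1) % n]) for i in range(n)]

# Advect a step profile: the centred difference undershoots below the initial
# minimum (0) and overshoots above the initial maximum (1) -- false extrema.
step = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 0.0, 0.0]
after = cen2_advect_step(step, 0.5)
```

Without an explicit diffusion operator these oscillations persist and accumulate, which is why the text calls the scheme notoriously noisy.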
%
\label{subsec:DYN_adv_ubs}
The UBS advection scheme is an upstream biased third order scheme based on
an upstream-biased parabolic interpolation.
For example, the evaluation of $u_T^{ubs}$ is done as follows:
\begin{equation} \label{eq:dynadv_ubs}
u_T^{ubs} =\overline u ^i - \frac{1}{6} \begin{cases}
   u"_{i-1/2} & \text{if}\ \overline{e_{2u}\,e_{3u}\,u}^{\,i} \geq 0 \\
   u"_{i+1/2} & \text{if}\ \overline{e_{2u}\,e_{3u}\,u}^{\,i} < 0
\end{cases}
\end{equation}
where $u"_{i+1/2} =\delta _{i+1/2} \left[ {\delta _i \left[ u \right]} \right]$.
This results in a dissipatively dominant ($i.e.$ hyper-diffusive) truncation error
\citep{Shchepetkin_McWilliams_OM05}.
The overall performance of the advection scheme is similar to that reported in \citet{Farrow1995}.
It is a relatively good compromise between accuracy and smoothness.
It is not a \emph{positive} scheme, meaning that false extrema are permitted,
but their amplitudes are significantly reduced over those in the centred second order method.
As the scheme already includes a diffusion component, it can be used without explicit lateral diffusion on momentum
($i.e.$ \np{ln\_dynldf\_lap}\forcode{ = }\np{ln\_dynldf\_bilap}\forcode{ = .false.}),
and it is recommended to do so.

The UBS scheme is not used in all directions.
In the vertical, the centred $2^{nd}$ order evaluation of the advection is preferred, $i.e.$ $u_{uw}^{ubs}$ and
$u_{vw}^{ubs}$ in \autoref{eq:dynadv_cen2} are used.
UBS is diffusive and is associated with vertical mixing of momentum. \gmcomment{ gm pursue the
sentence:Since vertical mixing of momentum is a source term of the TKE equation... }
For stability reasons, the first term in (\autoref{eq:dynadv_ubs}),
which corresponds to a second order centred scheme, is evaluated using the \textit{now} velocity (centred in time),
while the second term, which is the diffusion part of the scheme,
is evaluated using the \textit{before} velocity (forward in time).
This is discussed by \citet{Webb_al_JAOT98} in the context of the QUICK advection scheme.

Note that the UBS and QUICK (Quadratic Upstream Interpolation for Convective Kinematics) schemes only differ by
one coefficient.
Replacing $1/6$ by $1/8$ in (\autoref{eq:dynadv_ubs}) leads to the QUICK advection scheme \citep{Webb_al_JAOT98}.
This option is not available through a namelist parameter, since the $1/6$ coefficient is hard coded.
Nevertheless it is quite easy to make the substitution in the \mdl{dynadv\_ubs} module and obtain a QUICK scheme.

Note also that in the current version of \mdl{dynadv\_ubs},
there is also the possibility of using a $4^{th}$ order evaluation of the advective velocity as in ROMS.
This is an error and should be suppressed soon.
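The upstream-biased face value and the UBS/QUICK coefficient swap can be sketched as follows. This is an illustrative sketch, not the \mdl{dynadv\_ubs} code: the indexing convention and function names are invented, and the scale-factor-weighted transport of the model is reduced to a plain sign test.

```python
def u_face_ubs(u, i, transport, coef=1.0 / 6.0):
    """Face value between cells i and i+1: centred mean minus coef times the
    second difference taken on the upstream side (sign of `transport`).
    coef = 1/6 gives the UBS scheme; replacing it by 1/8 gives QUICK."""
    mean = 0.5 * (u[i] + u[i + 1])
    if transport >= 0.0:
        upp = u[i + 1] - 2.0 * u[i] + u[i - 1]   # upstream second difference
    else:
        upp = u[i + 2] - 2.0 * u[i + 1] + u[i]
    return mean - coef * upp
```

On a linear velocity field the second difference vanishes and both UBS and QUICK reduce to the centred mean; on a curved field they differ only through the hard-coded coefficient, which is exactly the substitution described above.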
%%%
\gmcomment{action : this have to be done}
Options are defined through the \ngn{namdyn\_hpg} namelist variables.
The key distinction between the different algorithms used for
the hydrostatic pressure gradient is the vertical coordinate used,
since HPG is a \emph{horizontal} pressure gradient, $i.e.$ computed along geopotential surfaces.
As a result, any tilt of the surface of the computational levels will require a specific treatment to
compute the hydrostatic pressure gradient.
The hydrostatic pressure gradient term is evaluated either using a leapfrog scheme,
$i.e.$ the density appearing in its expression is centred in time (\emph{now} $\rho$),
or a semi-implicit scheme.
At the lateral boundaries either free slip, no slip or partial slip boundary conditions are applied.
%
\label{subsec:DYN_hpg_zco}
The hydrostatic pressure can be obtained by integrating the hydrostatic equation vertically from the surface.
However, the pressure is large at great depth while its horizontal gradient is several orders of magnitude smaller.
This may lead to large truncation errors in the pressure gradient terms.
Thus, the two horizontal components of the hydrostatic pressure gradient are computed directly as follows:
for $k=km$ (surface layer, $jk=1$ in the code)
\end{equation}
Note that the $1/2$ factor in (\autoref{eq:dynhpg_zco_surf}) is adequate because of the definition of $e_{3w}$ as
the vertical derivative of the scale factor at the surface level ($z=0$).
Note also that in the case of variable volume levels (\key{vvl} defined),
the surface pressure gradient is included in \autoref{eq:dynhpg_zco_surf} and
\autoref{eq:dynhpg_zco} through the space and time variations of the vertical scale factor $e_{3w}$.
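The downward integration of the hydrostatic relation can be sketched level by level. This is a simplified sketch under stated assumptions (a plain rectangle rule over $w$-level scale factors; the $1/2$ surface factor and the density anomaly formulation of the model are omitted, and the names are invented):

```python
G = 9.80665  # gravitational acceleration (m/s^2)

def hydrostatic_pressure(rho, e3w, g=G):
    """Integrate dp/dz = rho*g downward from the surface, one level at a
    time; rho and e3w are per-level lists (density and w-point thickness).
    Returns the hydrostatic pressure at each level."""
    p, acc = [], 0.0
    for r, h in zip(rho, e3w):
        acc += g * r * h
        p.append(acc)
    return p
```

Because the pressure grows large at depth while its horizontal gradient stays small, the model differences the two integrals component-wise rather than the accumulated pressures themselves, limiting truncation error.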
%
\label{subsec:DYN_hpg_zps}
With partial bottom cells, tracers in horizontally adjacent cells generally live at different depths.
Before taking horizontal gradients between these tracer points,
a linear interpolation is used to approximate the deeper tracer as if
it actually lived at the depth of the shallower tracer point.

Apart from this modification,
the horizontal hydrostatic pressure gradient evaluated in the $z$-coordinate with partial step is exactly as in
the pure $z$-coordinate case.
As explained in detail in section \autoref{sec:TRA_zpshde},
the nonlinearity of pressure effects in the equation of state is such that
it is better to interpolate temperature and salinity vertically before computing the density.
Horizontal gradients of temperature and salinity are needed for the TRA modules,
which is the reason why the horizontal gradients of density at the deepest model level are computed in
module \mdl{zpshde} located in the TRA directory and described in \autoref{sec:TRA_zpshde}.
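The vertical interpolation step can be sketched in isolation. This is an illustrative helper, not the \mdl{zpshde} code: names are invented, depths are taken positive downward, and the two points bracketing the target depth are assumed given.

```python
def interp_to_depth(t_k, z_k, t_km1, z_km1, z_target):
    """Linearly interpolate a tracer between two vertically adjacent points
    of the deeper column, (z_km1, t_km1) above and (z_k, t_k) below, to the
    depth z_target of the shallower neighbour's tracer point, before a
    horizontal gradient is taken."""
    w = (z_target - z_km1) / (z_k - z_km1)
    return t_km1 + w * (t_k - t_km1)
```

The model applies this to temperature and salinity separately and only then evaluates the density, because of the nonlinearity of the equation of state noted above.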
%
\label{subsec:DYN_hpg_sco}
Pressure gradient formulations in an $s$-coordinate have been the subject of a vast number of papers
($e.g.$, \citet{Song1998, Shchepetkin_McWilliams_OM05}).
A number of different pressure gradient options are coded but the ROMS-like,
density Jacobian with cubic polynomial method is currently disabled whilst known bugs are under investigation.
$\bullet$ Traditional coding (see for example \citet{Madec_al_JPO96}): (\np{ln\_dynhpg\_sco}\forcode{ = .true.})
\end{equation}
where the first term is the pressure gradient along coordinates,
computed as in \autoref{eq:dynhpg_zco_surf} - \autoref{eq:dynhpg_zco},
and $z_T$ is the depth of the $T$-point evaluated from the sum of the vertical scale factors at the $w$-point
($e_{3w}$).
(\np{ln\_dynhpg\_djc}\forcode{ = .true.}) (currently disabled; under development)
Note that expression \autoref{eq:dynhpg_sco} is commonly used when the variable volume formulation is activated
(\key{vvl}) because in that case, even with a flat bottom,
the coordinate surfaces are not horizontal but follow the free surface \citep{Levier2007}.
The pressure Jacobian scheme (\np{ln\_dynhpg\_prj}\forcode{ = .true.}) is available as
an improved option to \np{ln\_dynhpg\_sco}\forcode{ = .true.} when \key{vvl} is active.
The pressure Jacobian scheme uses a constrained cubic spline to
reconstruct the density profile across the water column.
This method maintains the monotonicity between the density nodes.
The pressure can be calculated by analytical integration of the density profile and
a pressure Jacobian method is used to solve the horizontal pressure gradient.
This method can provide a more accurate calculation of the horizontal pressure gradient than the standard scheme.
\subsection{Ice shelf cavity}
\label{subsec:DYN_hpg_isf}
Beneath an ice shelf, the total pressure gradient is the sum of the pressure gradient due to the ice shelf load and
the pressure gradient due to the ocean load.
If the cavity is opened (\np{ln\_isfcav}\forcode{ = .true.}) these two terms can be calculated by
setting \np{ln\_dynhpg\_isf}\forcode{ = .true.}.
No other scheme works with the ice shelf.\\
$\bullet$ The main hypothesis to compute the ice shelf load is that the ice shelf is in isostatic equilibrium.
The top pressure is computed by integrating a reference density profile
(prescribed as the density of water at 34.4 PSU and -1.9\degC) from the surface to the base of the ice shelf,
and corresponds to the water replaced by the ice shelf.
This top pressure is constant over time.
A detailed description of this method is given in \citet{Losch2008}.\\

$\bullet$ The ocean load is computed using the expression \autoref{eq:dynhpg_sco} described in
\autoref{subsec:DYN_hpg_sco}.
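The isostatic top-pressure integral can be sketched as a discrete hydrostatic sum. This is a minimal sketch, assuming per-level lists for the reference density profile and the level thicknesses down to the ice shelf base; names are invented and the NEMO implementation details are omitted.

```python
G = 9.80665  # gravitational acceleration (m/s^2)

def ice_shelf_load(rho_ref, dz):
    """Top pressure under an ice shelf: hydrostatic integral of a reference
    density profile (e.g. water at 34.4 PSU and -1.9 degC) from the surface
    to the ice shelf base. rho_ref and dz are per-level lists; the result is
    constant in time under the isostatic-equilibrium hypothesis."""
    return G * sum(r * h for r, h in zip(rho_ref, dz))
```

This constant load is then added to the time-varying ocean load computed from \autoref{eq:dynhpg_sco}.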
%
\label{subsec:DYN_hpg_imp}
The default time differencing scheme used for the horizontal pressure gradient is a leapfrog scheme and
therefore the density used in all discrete expressions given above is the \textit{now} density,
computed from the \textit{now} temperature and salinity.
In some specific cases
(usually high resolution simulations over an ocean domain which includes weakly stratified regions)
the physical phenomenon that controls the time-step is internal gravity waves (IGWs).
A semi-implicit scheme for doubling the stability limit associated with IGWs can be used
\citep{Brown_Campana_MWR78, Maltrud1998}.
It involves the evaluation of the hydrostatic pressure gradient as
an average over the three time levels $t-\rdt$, $t$, and $t+\rdt$
($i.e.$ \textit{before}, \textit{now} and \textit{after} time-steps),
rather than at the central time level $t$ only, as in the standard leapfrog scheme.
$\bullet$ leapfrog scheme (\np{ln\_dynhpg\_imp}\forcode{ = .false.}):
\end{equation}
The semi-implicit time scheme \autoref{eq:dynhpg_imp} is made possible without
significant additional computation since the density can be updated to time level $t+\rdt$ before
computing the horizontal hydrostatic pressure gradient.
It can be easily shown that the stability limit associated with the hydrostatic pressure gradient doubles using
\autoref{eq:dynhpg_imp} compared to that using the standard leapfrog scheme \autoref{eq:dynhpg_lf}.
Note that \autoref{eq:dynhpg_imp} is equivalent to applying a time filter to the pressure gradient to
eliminate high frequency IGWs.
Obviously, when using \autoref{eq:dynhpg_imp},
the doubling of the time-step is achievable only if no other factors control the time-step,
such as the stability limits associated with advection or diffusion.

In practice, the semi-implicit scheme is used when \np{ln\_dynhpg\_imp}\forcode{ = .true.}.
In this case, we choose to apply the time filter to temperature and salinity used in the equation of state,
instead of applying it to the hydrostatic pressure or to the density,
so that no additional storage array has to be defined.
The density used to compute the hydrostatic pressure gradient (whatever the formulation) is evaluated as follows:
\begin{equation} \label{eq:rho_flt}
\rho^t = \rho( \widetilde{T},\widetilde {S},z_t)
\end{equation}
Note that in the semi-implicit case, it is necessary to save the filtered density,
an extra three-dimensional field, in the restart file to restart the model with exact reproducibility.
This option is controlled by \np{nn\_dynhpg\_rst}, a namelist parameter.
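The three-time-level average behind \autoref{eq:rho_flt} can be sketched as follows. This is an illustrative sketch in which the $(1/4,\,1/2,\,1/4)$ weights are an assumption (the classical \citet{Brown_Campana_MWR78} choice; the exact weights of the elided definition of $\widetilde{T}$ and $\widetilde{S}$ should be checked against the code), applied to $T$ and $S$ before the equation of state is called:

```python
def time_filtered(x_before, x_now, x_after, w=(0.25, 0.5, 0.25)):
    """Three-time-level average over (t - dt, t, t + dt), used on temperature
    and salinity so that no extra storage array is needed for pressure or
    density. Weights are the assumed Brown-Campana (1/4, 1/2, 1/4) choice."""
    return w[0] * x_before + w[1] * x_now + w[2] * x_after
```

A field that varies linearly in time is left unchanged by the filter, while the fastest resolvable oscillation (sign-alternating between time levels) is strongly damped, which is how the high-frequency IGWs are eliminated.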
% ================================================================
Options are defined through the \ngn{namdyn\_spg} namelist variables.
+The surface pressure gradient term is related to the representation of the free surface (\autoref{sec:PE_hor_pg}).
+The main distinction is between the fixed volume case (linear free surface) and
+the variable volume case (non-linear free surface, \key{vvl} is defined).
+In the linear free surface case (\autoref{subsec:PE_free_surface})
+the vertical scale factors $e_{3}$ are fixed in time,
+while they are time-dependent in the non-linear case (\autoref{subsec:PE_free_surface}).
With both linear and non-linear free surface, external gravity waves are allowed in the equations,
+which imposes a very small time step when an explicit time stepping is used.
Two methods are proposed to allow a longer time step for the threedimensional equations:
the filtered free surface, which is a modification of the continuous equations (see \autoref{eq:PE_flt}),
+and the split-explicit free surface described below.
+The form of the surface pressure gradient term depends on how the user wants to
+handle the fast external gravity waves that are a solution of the analytical equation (\autoref{sec:PE_hor_pg}).
Three formulations are available, all controlled by a CPP key (ln\_dynspg\_xxx):
+an explicit formulation which requires a small time step;
+a filtered free surface formulation which allows a larger time step by
+adding a filtering term into the momentum equation;
and a split-explicit free surface formulation, described below, which also allows a larger time step.
+The extra term introduced in the filtered method is calculated implicitly, so that a solver is used to compute it.
+As a consequence the update of the $next$ velocities is done in module \mdl{dynspg\_flt} and not in \mdl{dynnxt}.
@@ -768,6 +776,7 @@
\label{subsec:DYN_spg_exp}
+In the explicit free surface formulation (\key{dynspg\_exp} defined),
+the model time step is chosen to be small enough to resolve the external gravity waves
+(typically a few tens of seconds).
The surface pressure gradient, evaluated using a leap-frog scheme ($i.e.$ centered in time),
is thus simply given by:
@@ -779,7 +788,8 @@
\end{equation}
+Note that in the non-linear free surface case ($i.e.$ \key{vvl} defined),
+the surface pressure gradient is already included in the momentum tendency through
+the level thickness variation allowed in the computation of the hydrostatic pressure gradient.
+Thus, nothing is done in the \mdl{dynspg\_exp} module.
%
@@ -794,13 +804,12 @@
The split-explicit free surface formulation used in \NEMO (\key{dynspg\_ts} defined),
+also called the time-splitting formulation, follows the one proposed by \citet{Shchepetkin_McWilliams_OM05}.
+The general idea is to solve the free surface equation and the associated barotropic velocity equations with
+a smaller time step than $\rdt$, the time step used for the three dimensional prognostic variables
+(\autoref{fig:DYN_dynspg_ts}).
+The size of the small time step, $\rdt_e$ (the external mode or barotropic time step) is provided through
+the \np{nn\_baro} namelist parameter as: $\rdt_e = \rdt / nn\_baro$.
+This parameter can be optionally defined automatically (\np{ln\_bt\_nn\_auto}\forcode{ = .true.}) considering that
+the stability of the barotropic system is essentially controlled by external wave propagation.
The maximum Courant number is in that case time independent, and easily computed online from the input bathymetry.
Therefore, $\rdt_e$ is adjusted so that the maximum Courant number remains smaller than \np{rn\_bt\_cmax}.
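As a concrete illustration of this automatic setting, the sketch below is illustrative only (not NEMO code): the function name is hypothetical and the Courant threshold of 0.8 and $g = 9.81$ are assumptions, the idea being simply to keep the external-wave Courant number $\sqrt{gH}\,\rdt_e/\Delta x$ below \np{rn\_bt\_cmax}.

```python
import math

def barotropic_substeps(dt, dx_min, h_max, cmax=0.8, g=9.81):
    """Illustrative sketch of an ln_bt_nn_auto-like choice of nn_baro:
    keep the external-wave Courant number sqrt(g*H) * dt_e / dx below cmax."""
    c_ext = math.sqrt(g * h_max)             # fastest external gravity wave speed
    dt_e_max = cmax * dx_min / c_ext         # largest stable barotropic time step
    return max(1, math.ceil(dt / dt_e_max))  # nn_baro such that dt_e = dt / nn_baro

# e.g. dt = 3600 s, dx = 100 km, H = 5000 m  ->  nn_baro = 10
```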
@@ -819,5 +828,13 @@
\end{equation}
\end{subequations}
+where $\rm {\overline{\bf G}}$ is a forcing term held constant, containing coupling terms between modes,
+surface atmospheric forcing as well as slowly varying barotropic terms not explicitly computed to gain efficiency.
+The third term on the right hand side of \autoref{eq:BT_dyn} represents the bottom stress
+(see section \autoref{sec:ZDF_bfr}), explicitly accounted for at each barotropic iteration.
+Temporal discretization of the system above follows a three-time step Generalized Forward Backward algorithm
+detailed in \citet{Shchepetkin_McWilliams_OM05}.
+AB3-AM4 coefficients used in \NEMO follow the second-order accurate,
+"multi-purpose" stability compromise as defined in \citet{Shchepetkin_McWilliams_Bk08}
+(see their figure 12, lower left).
%> > > > > > > > > > > > > > > > > > > > > > > > > > > >
@@ -825,47 +842,69 @@
\includegraphics[width=0.7\textwidth]{Fig_DYN_dynspg_ts}
\caption{ \protect\label{fig:DYN_dynspg_ts}
+ Schematic of the split-explicit time stepping scheme for the external and internal modes.
+ Time increases to the right. In this particular example,
+ a boxcar averaging window over $nn\_baro$ barotropic time steps is used ($nn\_bt\_flt=1$) and $nn\_baro=5$.
+ Internal mode time steps (which are also the model time steps) are denoted by $t-\rdt$, $t$ and $t+\rdt$.
+ Variables with $k$ superscript refer to instantaneous barotropic variables,
+ $< >$ and $<< >>$ operators refer to time filtered variables using respectively primary (red vertical bars) and
+ secondary weights (blue vertical bars).
+ The former are used to obtain time filtered quantities at $t+\rdt$ while
+ the latter are used to obtain time averaged transports to advect tracers.
+ a) Forward time integration: \protect\np{ln\_bt\_fw}\forcode{ = .true.},
+ \protect\np{ln\_bt\_av}\forcode{ = .true.}.
+ b) Centred time integration: \protect\np{ln\_bt\_fw}\forcode{ = .false.},
+ \protect\np{ln\_bt\_av}\forcode{ = .true.}.
+ c) Forward time integration with no time filtering (POM-like scheme):
+ \protect\np{ln\_bt\_fw}\forcode{ = .true.}, \protect\np{ln\_bt\_av}\forcode{ = .false.}. }
\end{center} \end{figure}
%> > > > > > > > > > > > > > > > > > > > > > > > > > > >
+In the default case (\np{ln\_bt\_fw}\forcode{ = .true.}),
+the external mode is integrated between \textit{now} and \textit{after} baroclinic time steps
+(\autoref{fig:DYN_dynspg_ts}a).
+To avoid aliasing of fast barotropic motions into the three-dimensional equations,
+time filtering can optionally be applied to barotropic quantities (\np{ln\_bt\_av}\forcode{ = .true.}).
+In that case, the integration is extended slightly beyond the \textit{after} time step to
+provide time filtered quantities.
+These are used for the subsequent initialization of the barotropic mode in the following baroclinic step.
Since external mode equations written at baroclinic time steps finally follow a forward time stepping scheme,
+Asselin filtering is not applied to barotropic quantities.\\
+Alternatively, one can choose to integrate the barotropic equations starting from the \textit{before} time step
+(\np{ln\_bt\_fw}\forcode{ = .false.}).
+Although more computationally expensive (\np{nn\_baro} additional iterations are indeed necessary),
+the baroclinic to barotropic forcing term given at the \textit{now} time step becomes centred in
+the middle of the integration window.
+It can easily be shown that this property removes part of the splitting errors between modes,
+which increases the overall numerical robustness.
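The forward integration with time averaging (\np{ln\_bt\_fw}\forcode{ = .true.}, \np{ln\_bt\_av}\forcode{ = .true.}) can be sketched on a scalar toy problem. This is a hedged illustration only: a plain boxcar average is used here, whereas the actual scheme uses the weighted filters discussed above, and the function and argument names are hypothetical.

```python
def substep_and_average(eta, rhs, nn_baro, dt_e):
    """Toy forward sub-stepping of d(eta)/dt = rhs(eta) over one baroclinic
    step, returning the boxcar time average used to initialise the barotropic
    mode of the next baroclinic step (cf. the < > operator of the figure)."""
    samples = []
    for _ in range(nn_baro):
        eta = eta + dt_e * rhs(eta)   # one forward barotropic sub-step
        samples.append(eta)
    return sum(samples) / len(samples)
```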
%references to Patrick Marsaleix' work here. Also work done by SHOM group.
%%%
+As far as tracer conservation is concerned,
+barotropic velocities used to advect tracers must also be updated at the \textit{now} time step.
+This implies changing the traditional order of computations in \NEMO:
+most of the momentum trends (including the barotropic mode calculation) are updated first, and tracers afterwards.
+This \textit{de facto} makes the semi-implicit hydrostatic pressure gradient
+(see section \autoref{subsec:DYN_hpg_imp})
+incompatible with time splitting.
+Advective barotropic velocities are obtained by using a secondary set of filtering weights,
+uniquely defined from the filter coefficients used for the time averaging \citep{Shchepetkin_McWilliams_OM05}.
+Consistency between the time averaged continuity equation and the time stepping of tracers is here the key to
+obtain exact conservation.
%%%
+One can also choose to feed back instantaneous values by not using any time filter
+(\np{ln\_bt\_av}\forcode{ = .false.}).
+In that case, external mode equations are continuous in time,
+$i.e.$ they are not reinitialized when starting a new substepping sequence.
+This is the method used so far in the POM model, the stability being maintained by
+refreshing the advection and horizontal diffusion terms at (almost) each barotropic time step.
+Since the latter terms have not been added in \NEMO for computational efficiency,
+removing time filtering is not recommended except for debugging purposes.
+This may be used for instance to assess the damping effect of the standard formulation on
+external gravity waves in idealized or weakly non-linear cases.
+Although the damping is lower than for the filtered free surface,
+it is still significant as shown by \citet{Levier2007} in the case of an analytical barotropic Kelvin wave.
%>>>>>===============
@@ -874,7 +913,8 @@
\textbf{title: Time stepping the barotropic system }
+Assume knowledge of the full velocity and tracer fields at baroclinic time $\tau$.
+Hence, we can update the surface height and vertically integrated velocity with a leap-frog scheme using
+the small barotropic time step $\rdt$.
+We have
\begin{equation} \label{eq:DYN_spg_ts_eta}
@@ -889,11 +929,18 @@
\
+In these equations, a raised (b) denotes values of surface height and vertically integrated velocity updated with
+the barotropic time steps.
+The $\tau$ time label on $\eta^{(b)}$ and $U^{(b)}$ denotes the baroclinic time at which
+the vertically integrated forcing $\textbf{M}(\tau)$
+(note that this forcing includes the surface freshwater forcing),
+the tracer fields, the freshwater flux $\text{EMP}_w(\tau)$,
+and total depth of the ocean $H(\tau)$ are held for the duration of the barotropic time stepping over
+a single cycle.
+This is also the time that sets the barotropic time steps via
\begin{equation} \label{eq:DYN_spg_ts_t}
t_n=\tau+n\rdt
\end{equation}
+with $n$ an integer.
+The density scaled surface pressure is evaluated via
\begin{equation} \label{eq:DYN_spg_ts_ps}
p_s^{(b)}(\tau,t_{n}) = \begin{cases}
@@ -914,5 +961,6 @@
\overline{\eta^{(b)}(\tau)} = \frac{1}{N+1} \sum\limits_{n=0}^N \eta^{(b)}(\tau-\rdt,t_{n})
\end{equation}
+the time averaged surface height taken from the previous barotropic cycle.
+Likewise,
\begin{equation} \label{eq:DYN_spg_ts_u}
\textbf{U}^{(b)}(\tau,t_{n=0}) = \overline{\textbf{U}^{(b)}(\tau)} \\
@@ -925,12 +973,16 @@
= \frac{1}{N+1} \sum\limits_{n=0}^N\textbf{U}^{(b)}(\tau-\rdt,t_{n})
\end{equation}
+the time averaged vertically integrated transport.
+Notably, there is no Robert-Asselin time filter used in the barotropic portion of the integration.
+
+Upon reaching $t_{n=N} = \tau + 2\rdt$,
+the vertically integrated velocity is time averaged to produce the updated vertically integrated velocity at
+baroclinic time $\tau + \rdt$
\begin{equation} \label{eq:DYN_spg_ts_u}
\textbf{U}(\tau+\rdt) = \overline{\textbf{U}^{(b)}(\tau+\rdt)}
= \frac{1}{N+1} \sum\limits_{n=0}^N\textbf{U}^{(b)}(\tau,t_{n})
\end{equation}
The surface height on the new baroclinic time step is then determined via a baroclinic leapfrog using the following form
+The surface height on the new baroclinic time step is then determined via a baroclinic leap-frog using
+the following form
\begin{equation} \label{eq:DYN_spg_ts_ssh}
@@ -938,11 +990,13 @@
\end{equation}
+The use of this "big leap-frog" scheme for the surface height ensures compatibility between
+the mass/volume budgets and the tracer budgets.
+More discussion of this point is provided in Chapter 10 (see in particular Section 10.2).
+In general, some form of time filter is needed to maintain integrity of the surface height field due to
+the leap-frog splitting mode in \autoref{eq:DYN_spg_ts_ssh}.
+We have tried various forms of such filtering,
+with the following method discussed in \cite{Griffies_al_MWR01} chosen due to
+its stability and reasonably good maintenance of tracer conservation properties (see ??).
\begin{equation} \label{eq:DYN_spg_ts_sshf}
@@ -957,7 +1011,9 @@
\end{equation}
+which is useful since it isolates all the time filtering aspects into the term multiplied by $\alpha$.
+This isolation allows for an easy check that tracer conservation is exact when
+eliminating tracer and surface height time filtering (see ?? for more complete discussion).
+However, in the general case with a non-zero $\alpha$,
+the filter \autoref{eq:DYN_spg_ts_sshf} was found to be more conservative, and so is recommended.
} %%end gm comment (copy of griffies book)
@@ -984,11 +1040,12 @@
\end{equation}
where $T_c$ is a parameter with dimensions of time which characterizes the force,
+$\widetilde{\rho} = \rho / \rho_o$ is the dimensionless density,
+and $\rm {\bf M}$ represents the collected contributions of the Coriolis, hydrostatic pressure gradient,
non-linear and viscous terms in \autoref{eq:PE_dyn}.
} %end gmcomment
+Note that in the linear free surface formulation (\key{vvl} not defined),
+the ocean depth is time-independent and so is the matrix to be inverted.
+It is computed once and for all and applies to all ocean time steps.
% ================================================================
@@ -1003,28 +1060,27 @@
Options are defined through the \ngn{namdyn\_ldf} namelist variables.
+The options available for lateral diffusion are to use either laplacian (rotated or not) or biharmonic operators.
+The coefficients may be constant or spatially variable;
+the description of the coefficients is found in the chapter on lateral physics (\autoref{chap:LDF}).
+The lateral diffusion of momentum is evaluated using a forward scheme,
+$i.e.$ the velocity appearing in its expression is the \textit{before} velocity in time,
+except for the pure vertical component that appears when a tensor of rotation is used.
+This latter term is solved implicitly together with the vertical diffusion term (see \autoref{chap:STP}).
+
+At the lateral boundaries either free slip,
+no slip or partial slip boundary conditions are applied according to the user's choice (see \autoref{chap:LBC}).
\gmcomment{
+ Hyperviscous operators are frequently used in the simulation of turbulent flows to
+ control the dissipation of unresolved small scale features.
+ Their primary role is to provide strong dissipation at the smallest scale supported by
+ the grid while minimizing the impact on the larger scale features.
+ Hyperviscous operators are thus designed to be more scale selective than the traditional,
+ physically motivated Laplace operator.
+ In finite difference methods,
+ the biharmonic operator is frequently the method of choice to achieve this scale selective dissipation since
+ its damping time ($i.e.$ its spin down time) scales like $\lambda^{4}$ for disturbances of wavelength $\lambda$
+ (so that short waves are damped more rapidly than long ones),
+ whereas the Laplace operator damping time scales only like $\lambda^{2}$.
}
@@ -1047,7 +1103,7 @@
\end{equation}
+As explained in \autoref{subsec:PE_ldf},
+this formulation (as the gradient of a divergence and curl of the vorticity) preserves symmetry and
+ensures a complete separation between the vorticity and divergence parts of the momentum diffusion.
%
@@ -1058,14 +1114,14 @@
\label{subsec:DYN_ldf_iso}
+A rotation of the lateral momentum diffusion operator is needed in several cases:
+for iso-neutral diffusion in the $z$-coordinate (\np{ln\_dynldf\_iso}\forcode{ = .true.}) and
+for either iso-neutral (\np{ln\_dynldf\_iso}\forcode{ = .true.}) or
+geopotential (\np{ln\_dynldf\_hor}\forcode{ = .true.}) diffusion in the $s$-coordinate.
+In the partial step case, coordinates are horizontal except at the deepest level and
+no rotation is performed when \np{ln\_dynldf\_hor}\forcode{ = .true.}.
+The diffusion operator is defined simply as the divergence of down gradient momentum fluxes on
+each momentum component.
+It must be emphasized that this formulation ignores constraints on the stress tensor such as symmetry.
+The resulting discrete representation is:
\begin{equation} \label{eq:dyn_ldf_iso}
\begin{split}
@@ -1115,8 +1171,7 @@
\end{split}
\end{equation}
+where $r_1$ and $r_2$ are the slopes between the surface along which the diffusion operator acts and
+the surface of computation ($z$- or $s$-surfaces).
+The way these slopes are evaluated is given in the lateral physics chapter (\autoref{chap:LDF}).
%
@@ -1127,9 +1182,8 @@
\label{subsec:DYN_ldf_bilap}
+The lateral fourth order operator formulation on momentum is obtained by applying \autoref{eq:dynldf_lap} twice.
+It requires an additional assumption on boundary conditions:
+the first derivative term normal to the coast depends on the free or no-slip lateral boundary conditions chosen,
+while the third derivative terms normal to the coast are set to zero (see \autoref{chap:LBC}).
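The idea of obtaining the fourth order operator by applying the Laplacian twice can be sketched in one dimension. This is an illustrative discrete example, not taken from the NEMO sources; periodic boundaries are assumed here to sidestep the coastal boundary conditions discussed above.

```python
def laplacian(u, dx):
    """Periodic 1-D discrete Laplacian (second order centred differences)."""
    n = len(u)
    return [(u[(i + 1) % n] - 2.0 * u[i] + u[(i - 1) % n]) / dx**2
            for i in range(n)]

def biharmonic(u, dx):
    """Fourth order operator obtained by applying the Laplacian twice."""
    return laplacian(laplacian(u, dx), dx)
```

For a Fourier mode of wavenumber $k$ the discrete Laplacian eigenvalue is $(2\cos(k\,\Delta x)-2)/\Delta x^2 \approx -k^2$, so the biharmonic eigenvalue scales like $k^4$, i.e. damping times scale like $\lambda^4$, which is what makes the operator scale selective.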
%%%
\gmcomment{add a remark on the change in the position of the coefficient}
@@ -1147,16 +1201,14 @@
Options are defined through the \ngn{namzdf} namelist variables.
+The large vertical diffusion coefficient found in the surface mixed layer together with high vertical resolution
+implies that in the case of explicit time stepping,
+there would be too restrictive a constraint on the time step.
+Two time stepping schemes can be used for the vertical diffusion term:
+$(a)$ a forward time differencing scheme
+(\np{ln\_zdfexp}\forcode{ = .true.}) using a time splitting technique (\np{nn\_zdfexp} $>$ 1) or
+$(b)$ a backward (or implicit) time differencing scheme (\np{ln\_zdfexp}\forcode{ = .false.})
+(see \autoref{chap:STP}).
+Note that namelist variables \np{ln\_zdfexp} and \np{nn\_zdfexp} apply to both tracers and dynamics.
+
+The formulation of the vertical subgrid scale physics is the same whatever the vertical coordinate is.
+The vertical diffusion operators given by \autoref{eq:PE_zdf} take the following semidiscrete space form:
\begin{equation} \label{eq:dynzdf}
\left\{ \begin{aligned}
@@ -1168,25 +1220,24 @@
\end{aligned} \right.
\end{equation}
+where $A_{uw}^{vm} $ and $A_{vw}^{vm} $ are the vertical eddy viscosity and diffusivity coefficients.
+The way these coefficients are evaluated depends on the vertical physics used (see \autoref{chap:ZDF}).
+
+The surface boundary condition on momentum is the stress exerted by the wind.
+At the surface, the momentum fluxes are prescribed as the boundary condition on
+the vertical turbulent momentum fluxes,
\begin{equation} \label{eq:dynzdf_sbc}
\left.{\left( {\frac{A^{vm} }{e_3 }\ \frac{\partial \textbf{U}_h}{\partial k}} \right)} \right|_{z=1}
= \frac{1}{\rho _o} \binom{\tau _u}{\tau _v }
\end{equation}
where $\left( \tau _u ,\tau _v \right)$ are the two components of the wind stress
vector in the (\textbf{i},\textbf{j}) coordinate system. The high mixing coefficients
in the surface mixed layer ensure that the surface wind stress is distributed in
the vertical over the mixed layer depth. If the vertical mixing coefficient
is small (when no mixed layer scheme is used) the surface stress enters only
the top model level, as a body force. The surface wind stress is calculated
in the surface module routines (SBC, see \autoref{chap:SBC})

The turbulent flux of momentum at the bottom of the ocean is specified through
a bottom friction parameterisation (see \autoref{sec:ZDF_bfr})
+where $\left( \tau _u ,\tau _v \right)$ are the two components of the wind stress vector in
+the (\textbf{i},\textbf{j}) coordinate system.
+The high mixing coefficients in the surface mixed layer ensure that the surface wind stress is distributed in
+the vertical over the mixed layer depth.
+If the vertical mixing coefficient is small (when no mixed layer scheme is used)
+the surface stress enters only the top model level, as a body force.
+The surface wind stress is calculated in the surface module routines (SBC, see \autoref{chap:SBC}).
+
+The turbulent flux of momentum at the bottom of the ocean is specified through a bottom friction parameterisation
+(see \autoref{sec:ZDF_bfr}).
% ================================================================
@@ -1196,15 +1247,16 @@
\label{sec:DYN_forcing}
Besides the surface and bottom stresses (see the above section) which are
introduced as boundary conditions on the vertical mixing, three other forcings
may enter the dynamical equations by affecting the surface pressure gradient.

(1) When \np{ln\_apr\_dyn}\forcode{ = .true.} (see \autoref{sec:SBC_apr}), the atmospheric pressure is taken
into account when computing the surface pressure gradient.

(2) When \np{ln\_tide\_pot}\forcode{ = .true.} and \np{ln\_tide}\forcode{ = .true.} (see \autoref{sec:SBC_tide}),
+Besides the surface and bottom stresses (see the above section)
+which are introduced as boundary conditions on the vertical mixing,
+three other forcings may enter the dynamical equations by affecting the surface pressure gradient.
+
+(1) When \np{ln\_apr\_dyn}\forcode{ = .true.} (see \autoref{sec:SBC_apr}),
+the atmospheric pressure is taken into account when computing the surface pressure gradient.
+
+(2) When \np{ln\_tide\_pot}\forcode{ = .true.} and \np{ln\_tide}\forcode{ = .true.} (see \autoref{sec:SBC_tide}),
the tidal potential is taken into account when computing the surface pressure gradient.
(3) When \np{nn\_ice\_embd}\forcode{ = 2} and LIM or CICE is used ($i.e.$ when the seaice is embedded in the ocean),
+(3) When \np{nn\_ice\_embd}\forcode{ = 2} and LIM or CICE is used
+($i.e.$ when the sea-ice is embedded in the ocean),
the snowice mass is taken into account when computing the surface pressure gradient.
@@ -1225,12 +1277,13 @@
Options are defined through the \ngn{namdom} namelist variables.
The general framework for dynamics time stepping is a leapfrog scheme,
$i.e.$ a three level centred time scheme associated with an Asselin time filter
(cf. \autoref{chap:STP}). The scheme is applied to the velocity, except when using
the flux form of momentum advection (cf. \autoref{sec:DYN_adv_cor_flux}) in the variable
volume case (\key{vvl} defined), where it has to be applied to the thickness
weighted velocity (see \autoref{sec:A_momentum})

$\bullet$ vector invariant form or linear free surface (\np{ln\_dynhpg\_vec}\forcode{ = .true.} ; \key{vvl} not defined):
+The general framework for dynamics time stepping is a leapfrog scheme,
+$i.e.$ a three-level centred time scheme associated with an Asselin time filter (cf. \autoref{chap:STP}).
+The scheme is applied to the velocity, except when
+using the flux form of momentum advection (cf. \autoref{sec:DYN_adv_cor_flux})
+in the variable volume case (\key{vvl} defined),
+where it has to be applied to the thickness-weighted velocity (see \autoref{sec:A_momentum}).
+
+$\bullet$ vector invariant form or linear free surface
+(\np{ln\_dynhpg\_vec}\forcode{ = .true.} ; \key{vvl} not defined):
\begin{equation} \label{eq:dynnxt_vec}
\left\{ \begin{aligned}
@@ -1240,5 +1293,6 @@
\end{equation}
$\bullet$ flux form and nonlinear free surface (\np{ln\_dynhpg\_vec}\forcode{ = .false.} ; \key{vvl} defined):
+$\bullet$ flux form and nonlinear free surface
+(\np{ln\_dynhpg\_vec}\forcode{ = .false.} ; \key{vvl} defined):
\begin{equation} \label{eq:dynnxt_flux}
\left\{ \begin{aligned}
@@ -1248,13 +1302,14 @@
\end{aligned} \right.
\end{equation}
where RHS is the right hand side of the momentum equation, the subscript $f$
denotes filtered values and $\gamma$ is the Asselin coefficient. $\gamma$ is
initialized as \np{nn\_atfp} (namelist parameter). Its default value is \np{nn\_atfp}\forcode{ = 10.e3}.
In both cases, the modified Asselin filter is not applied since perfect conservation
is not an issue for the momentum equations.

Note that with the filtered free surface, the update of the \textit{after} velocities
is done in the \mdl{dynsp\_flt} module, and only array swapping
and Asselin filtering is done in \mdl{dynnxt}.
+where RHS is the right-hand side of the momentum equation,
+the subscript $f$ denotes filtered values and $\gamma$ is the Asselin coefficient.
+$\gamma$ is initialized as \np{nn\_atfp} (namelist parameter).
+Its default value is \np{nn\_atfp}\forcode{ = 10.e-3}.
+In both cases, the modified Asselin filter is not applied since perfect conservation is not an issue for
+the momentum equations.
+
+Note that with the filtered free surface,
+the update of the \textit{after} velocities is done in the \mdl{dynsp\_flt} module,
+and only array swapping and Asselin filtering is done in \mdl{dynnxt}.
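The leapfrog update and Asselin filter sequence described above can be sketched as follows (illustrative Python on a toy oscillatory right-hand side, not the NEMO code; `gamma` plays the role of \np{nn\_atfp}):

```python
import numpy as np

def leapfrog_asselin(rhs, u0, dt, nstep, gamma=10.e-3):
    """Three-level centred (leapfrog) stepping with an Asselin time filter:
    u(t+dt) = u_f(t-dt) + 2 dt RHS(t), then
    u_f(t)  = u(t) + gamma * [u_f(t-dt) - 2 u(t) + u(t+dt)]."""
    ub = u0.copy()               # filtered "before" field u_f(t-dt)
    un = u0 + dt * rhs(u0)       # start the three-level scheme with a forward step
    for _ in range(nstep):
        ua = ub + 2.0 * dt * rhs(un)              # leapfrog update
        ub = un + gamma * (ub - 2.0 * un + ua)    # Asselin filter on the "now" field
        un = ua
    return un

# toy problem: uniform rotation du/dt = (omega*v, -omega*u); |u| should stay near 1,
# the weak gamma damping only removing the computational mode
omega = 0.1
u = leapfrog_asselin(lambda w: omega * np.array([w[1], -w[0]]),
                     np.array([1.0, 0.0]), dt=0.1, nstep=100)
```

The filter weakly damps the leapfrog computational mode while leaving the physical oscillation almost untouched, which is the reason for the small default value of $\gamma$.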
% ================================================================
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_LBC.tex
===================================================================
--- NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_LBC.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_LBC.tex (revision 10368)
@@ -29,19 +29,18 @@
Options are defined through the \ngn{namlbc} namelist variables.
The discrete representation of a domain with complex boundaries (coastlines and
bottom topography) leads to arrays that include large portions where a computation
is not required as the model variables remain at zero. Nevertheless, vectorial
supercomputers are far more efficient when computing over a whole array, and the
readability of a code is greatly improved when boundary conditions are applied in
an automatic way rather than by a specific computation before or after each
computational loop. An efficient way to work over the whole domain while specifying
the boundary conditions, is to use multiplication by mask arrays in the computation.
A mask array is a matrix whose elements are $1$ in the ocean domain and $0$
elsewhere. A simple multiplication of a variable by its own mask ensures that it will
remain zero over land areas. Since most of the boundary conditions consist of a
zero flux across the solid boundaries, they can be simply applied by multiplying
variables by the correct mask arrays, $i.e.$ the mask array of the grid point where
the flux is evaluated. For example, the heat flux in the \textbf{i}direction is evaluated
at $u$points. Evaluating this quantity as,
+The discrete representation of a domain with complex boundaries (coastlines and bottom topography) leads to
+arrays that include large portions where a computation is not required as the model variables remain at zero.
+Nevertheless, vectorial supercomputers are far more efficient when computing over a whole array,
+and the readability of a code is greatly improved when boundary conditions are applied in
+an automatic way rather than by a specific computation before or after each computational loop.
+An efficient way to work over the whole domain while specifying the boundary conditions,
+is to use multiplication by mask arrays in the computation.
+A mask array is a matrix whose elements are $1$ in the ocean domain and $0$ elsewhere.
+A simple multiplication of a variable by its own mask ensures that it will remain zero over land areas.
+Since most of the boundary conditions consist of a zero flux across the solid boundaries,
+they can be simply applied by multiplying variables by the correct mask arrays,
+$i.e.$ the mask array of the grid point where the flux is evaluated.
+For example, the heat flux in the \textbf{i}-direction is evaluated at $u$-points.
+Evaluating this quantity as,
\begin{equation} \label{eq:lbc_aaaa}
@@ -49,8 +48,7 @@
}{e_{1u} } \; \delta _{i+1 / 2} \left[ T \right]\;\;mask_u
\end{equation}
(where mask$_{u}$ is the mask array at a $u$point) ensures that the heat flux is
zero inside land and at the boundaries, since mask$_{u}$ is zero at solid boundaries
which in this case are defined at $u$points (normal velocity $u$ remains zero at
the coast) (\autoref{fig:LBC_uv}).
+(where mask$_{u}$ is the mask array at a $u$-point) ensures that the heat flux is zero inside land and
+at the boundaries, since mask$_{u}$ is zero at solid boundaries which in this case are defined at $u$-points
+(normal velocity $u$ remains zero at the coast) (\autoref{fig:LBC_uv}).
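The masking technique can be illustrated with a short sketch (hypothetical Python, not model code; the one-dimensional row and tracer values are invented):

```python
import numpy as np

# one model row: 1 = ocean, 0 = land (tmask at T-points)
tmask = np.array([0., 1., 1., 1., 0., 0., 1., 1., 0.])
T     = np.arange(tmask.size, dtype=float)      # arbitrary tracer values

# a u-point lies between two T-points, so mask_u = tmask(i) * tmask(i+1)
umask = tmask[:-1] * tmask[1:]

# evaluate delta_{i+1/2}[T] over the whole row, then multiply by mask_u:
flux = np.diff(T) * umask

# the flux vanishes at every u-point touching land, with no special-case code
```

The whole-array computation followed by a single multiplication is exactly the pattern that avoids boundary-specific loops in the code.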
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -58,22 +56,24 @@
\includegraphics[width=0.90\textwidth]{Fig_LBC_uv}
\caption{ \protect\label{fig:LBC_uv}
Lateral boundary (thick line) at Tlevel. The velocity normal to the boundary is set to zero.}
+ Lateral boundary (thick line) at T-level.
+ The velocity normal to the boundary is set to zero.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
For momentum the situation is a bit more complex as two boundary conditions
must be provided along the coast (one each for the normal and tangential velocities).
The boundary of the ocean in the Cgrid is defined by the velocityfaces.
For example, at a given $T$level, the lateral boundary (a coastline or an intersection
with the bottom topography) is made of segments joining $f$points, and normal
velocity points are located between two $f$points (\autoref{fig:LBC_uv}).
The boundary condition on the normal velocity (no flux through solid boundaries)
can thus be easily implemented using the mask system. The boundary condition
on the tangential velocity requires a more specific treatment. This boundary
condition influences the relative vorticity and momentum diffusive trends, and is
required in order to compute the vorticity at the coast. Four different types of
lateral boundary condition are available, controlled by the value of the \np{rn\_shlat}
namelist parameter. (The value of the mask$_{f}$ array along the coastline is set
equal to this parameter.) These are:
+For momentum the situation is a bit more complex as two boundary conditions must be provided along the coast
+(one each for the normal and tangential velocities).
+The boundary of the ocean in the C-grid is defined by the velocity-faces.
+For example, at a given $T$-level,
+the lateral boundary (a coastline or an intersection with the bottom topography) is made of
+segments joining $f$-points, and normal velocity points are located between two $f$-points (\autoref{fig:LBC_uv}).
+The boundary condition on the normal velocity (no flux through solid boundaries)
+can thus be easily implemented using the mask system.
+The boundary condition on the tangential velocity requires a more specific treatment.
+This boundary condition influences the relative vorticity and momentum diffusive trends,
+and is required in order to compute the vorticity at the coast.
+Four different types of lateral boundary condition are available,
+controlled by the value of the \np{rn\_shlat} namelist parameter
+(The value of the mask$_{f}$ array along the coastline is set equal to this parameter).
+These are:
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -81,7 +81,10 @@
\includegraphics[width=0.90\textwidth]{Fig_LBC_shlat}
\caption{ \protect\label{fig:LBC_shlat}
lateral boundary condition (a) free-slip ($rn\_shlat=0$); (b) no-slip ($rn\_shlat=2$);
(c) "partial" free-slip ($0<rn\_shlat<2$) and (d) "strong" no-slip ($2<rn\_shlat$)}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -89,22 +92,24 @@
\begin{description}
\item[freeslip boundary condition (\np{rn\_shlat}\forcode{ = 0}): ] the tangential velocity at the
coastline is equal to the offshore velocity, $i.e.$ the normal derivative of the
tangential velocity is zero at the coast, so the vorticity: mask$_{f}$ array is set
to zero inside the land and just at the coast (\autoref{fig:LBC_shlat}a).

\item[noslip boundary condition (\np{rn\_shlat}\forcode{ = 2}): ] the tangential velocity vanishes
at the coastline. Assuming that the tangential velocity decreases linearly from
the closest ocean velocity grid point to the coastline, the normal derivative is
evaluated as if the velocities at the closest land velocity gridpoint and the closest
ocean velocity gridpoint were of the same magnitude but in the opposite direction
(\autoref{fig:LBC_shlat}b). Therefore, the vorticity along the coastlines is given by:
+\item[free-slip boundary condition (\np{rn\_shlat}\forcode{ = 0}):] the tangential velocity at
+ the coastline is equal to the offshore velocity,
+ $i.e.$ the normal derivative of the tangential velocity is zero at the coast,
+ and so is the vorticity: the mask$_{f}$ array is set to zero inside the land and just at the coast
+ (\autoref{fig:LBC_shlat}a).
+
+\item[no-slip boundary condition (\np{rn\_shlat}\forcode{ = 2}):] the tangential velocity vanishes at the coastline.
+ Assuming that the tangential velocity decreases linearly from
+ the closest ocean velocity grid point to the coastline,
+ the normal derivative is evaluated as if the velocities at the closest land velocity grid-point and
+ the closest ocean velocity grid-point were of the same magnitude but in the opposite direction
+ (\autoref{fig:LBC_shlat}b).
+ Therefore, the vorticity along the coastlines is given by:
\begin{equation*}
\zeta \equiv 2 \left(\delta_{i+1/2} \left[e_{2v} v \right]  \delta_{j+1/2} \left[e_{1u} u \right] \right) / \left(e_{1f} e_{2f} \right) \ ,
\end{equation*}
where $u$ and $v$ are masked fields. Setting the mask$_{f}$ array to $2$ along
the coastline provides a vorticity field computed with the noslip boundary condition,
simply by multiplying it by the mask$_{f}$ :
+where $u$ and $v$ are masked fields.
+Setting the mask$_{f}$ array to $2$ along the coastline provides a vorticity field computed with
+the no-slip boundary condition, simply by multiplying it by the mask$_{f}$:
\begin{equation} \label{eq:lbc_bbbb}
\zeta \equiv \frac{1}{e_{1f} {\kern 1pt}e_{2f} }\left( {\delta _{i+1/2}
@@ -113,20 +118,18 @@
\end{equation}
\item["partial" freeslip boundary condition (0$<$\np{rn\_shlat}$<$2): ] the tangential
velocity at the coastline is smaller than the offshore velocity, $i.e.$ there is a lateral
friction but not strong enough to make the tangential velocity at the coast vanish
(\autoref{fig:LBC_shlat}c). This can be selected by providing a value of mask$_{f}$
strictly inbetween $0$ and $2$.

\item["strong" noslip boundary condition (2$<$\np{rn\_shlat}): ] the viscous boundary
layer is assumed to be smaller than half the grid size (\autoref{fig:LBC_shlat}d).
The friction is thus larger than in the noslip case.
+\item["partial" free-slip boundary condition (0$<$\np{rn\_shlat}$<$2):] the tangential velocity at
+ the coastline is smaller than the offshore velocity, $i.e.$ there is a lateral friction but
+ not strong enough to make the tangential velocity at the coast vanish (\autoref{fig:LBC_shlat}c).
+ This can be selected by providing a value of mask$_{f}$ strictly in-between $0$ and $2$.
+
+\item["strong" no-slip boundary condition (2$<$\np{rn\_shlat}):] the viscous boundary layer is assumed to
+ be smaller than half the grid size (\autoref{fig:LBC_shlat}d).
+ The friction is thus larger than in the noslip case.
\end{description}
Note that when the bottom topography is entirely represented by the $s$coordinates
(pure $s$coordinate), the lateral boundary condition on tangential velocity is of much
less importance as it is only applied next to the coast where the minimum water depth
can be quite shallow.
+Note that when the bottom topography is entirely represented by the $s$-coordinates (pure $s$-coordinate),
+the lateral boundary condition on tangential velocity is of much less importance as
+it is only applied next to the coast where the minimum water depth can be quite shallow.
@@ -137,7 +140,8 @@
\label{sec:LBC_jperio}
At the model domain boundaries several choices are offered: closed, cyclic eastwest,
cyclic northsouth, a northfold, and combination closednorth fold
or bicyclic eastwest and northfold. The northfold boundary condition is associated with the 3pole ORCA mesh.
+At the model domain boundaries several choices are offered:
+closed, cyclic east-west, cyclic north-south, a north-fold, and combination closed-north fold or
+bi-cyclic east-west and north-fold.
+The north-fold boundary condition is associated with the 3-pole ORCA mesh.
% -------------------------------------------------------------------------------------------------------------
@@ -147,27 +151,29 @@
\label{subsec:LBC_jperio012}
The choice of closed or cyclic model domain boundary condition is made
by setting \np{jperio} to 0, 1, 2 or 7 in namelist \ngn{namcfg}. Each time such a boundary
condition is needed, it is set by a call to routine \mdl{lbclnk}. The computation of
momentum and tracer trends proceeds from $i=2$ to $i=jpi1$ and from $j=2$ to
$j=jpj1$, $i.e.$ in the model interior. To choose a lateral model boundary condition
is to specify the first and last rows and columns of the model variables.
+The choice of closed or cyclic model domain boundary condition is made by
+setting \np{jperio} to 0, 1, 2 or 7 in namelist \ngn{namcfg}.
+Each time such a boundary condition is needed, it is set by a call to routine \mdl{lbclnk}.
+The computation of momentum and tracer trends proceeds from $i=2$ to $i=jpi-1$ and from $j=2$ to $j=jpj-1$,
+$i.e.$ in the model interior.
+To choose a lateral model boundary condition is to specify the first and last rows and columns of
+the model variables.
\begin{description}
\item[For closed boundary (\np{jperio}\forcode{ = 0})], solid walls are imposed at all model
boundaries: first and last rows and columns are set to zero.

\item[For cyclic eastwest boundary (\np{jperio}\forcode{ = 1})], first and last rows are set
to zero (closed) whilst the first column is set to the value of the lastbutone column
and the last column to the value of the second one (\autoref{fig:LBC_jperio}a).
Whatever flows out of the eastern (western) end of the basin enters the western
(eastern) end.

\item[For cyclic northsouth boundary (\np{jperio}\forcode{ = 2})], first and last columns are set
to zero (closed) whilst the first row is set to the value of the lastbutone row
and the last row to the value of the second one (\autoref{fig:LBC_jperio}a).
Whatever flows out of the northern (southern) end of the basin enters the southern
(northern) end.
+\item[For closed boundary (\np{jperio}\forcode{ = 0})],
+ solid walls are imposed at all model boundaries:
+ first and last rows and columns are set to zero.
+
+\item[For cyclic east-west boundary (\np{jperio}\forcode{ = 1})],
+ first and last rows are set to zero (closed) whilst the first column is set to
+ the value of the last-but-one column and the last column to the value of the second one
+ (\autoref{fig:LBC_jperio}a).
+ Whatever flows out of the eastern (western) end of the basin enters the western (eastern) end.
+
+\item[For cyclic north-south boundary (\np{jperio}\forcode{ = 2})],
+ first and last columns are set to zero (closed) whilst the first row is set to
+ the value of the last-but-one row and the last row to the value of the second one
+ (\autoref{fig:LBC_jperio}a).
+ Whatever flows out of the northern (southern) end of the basin enters the southern (northern) end.
\item[Bi-cyclic east-west and north-south boundary (\np{jperio}\forcode{ = 7})] combines cases 1 and 2.
@@ -179,5 +185,5 @@
\includegraphics[width=1.0\textwidth]{Fig_LBC_jperio}
\caption{ \protect\label{fig:LBC_jperio}
setting of (a) eastwest cyclic (b) symmetric across the equator boundary conditions.}
+ setting of (a) east-west cyclic (b) symmetric across the equator boundary conditions.
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
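The column copies for the cyclic east-west case can be sketched as follows (illustrative Python on a 0-based array `a(j, i)`, not the model code; in NEMO the exchange is performed by \rou{lbc\_lnk} in the \mdl{lbclnk} module):

```python
import numpy as np

def lbc_cyclic_ew(a):
    """jperio = 1: the first column receives the last-but-one column and
    the last column receives the second one (closing of north/south rows not shown)."""
    a[:, 0]  = a[:, -2]      # western halo <- easternmost interior column
    a[:, -1] = a[:, 1]       # eastern halo <- westernmost interior column
    return a

a = np.arange(20.0).reshape(4, 5)   # a small (j, i) field with made-up values
lbc_cyclic_ew(a)
```

After the call, whatever "flows out" of the eastern column re-enters at the western one, and vice versa.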
@@ -189,7 +195,8 @@
\label{subsec:LBC_north_fold}
The north fold boundary condition has been introduced in order to handle the north
boundary of a threepolar ORCA grid. Such a grid has two poles in the northern hemisphere
(\autoref{fig:MISC_ORCA_msh}, and thus requires a specific treatment illustrated in \autoref{fig:North_Fold_T}.
+The north fold boundary condition has been introduced in order to handle the north boundary of
+a three-polar ORCA grid.
+Such a grid has two poles in the northern hemisphere (\autoref{fig:MISC_ORCA_msh}),
+and thus requires a specific treatment illustrated in \autoref{fig:North_Fold_T}.
Further information can be found in \mdl{lbcnfd} module which applies the north fold boundary condition.
@@ -197,8 +204,8 @@
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_North_Fold_T}
\caption{ \protect\label{fig:North_Fold_T}
North fold boundary with a $T$point pivot and cyclic eastwest boundary condition
($jperio=4$), as used in ORCA 2, 1/4, and 1/12. Pink shaded area corresponds
to the inner domain mask (see text). }
+\caption{ \protect\label{fig:North_Fold_T}
+ North fold boundary with a $T$-point pivot and cyclic east-west boundary condition ($jperio=4$),
+ as used in ORCA 2, 1/4, and 1/12.
+ Pink shaded area corresponds to the inner domain mask (see text). }
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -210,48 +217,44 @@
\label{sec:LBC_mpp}
For massively parallel processing (mpp), a domain decomposition method is used.
The basic idea of the method is to split the large computation domain of a numerical
experiment into several smaller domains and solve the set of equations by addressing
independent local problems. Each processor has its own local memory and computes
the model equation over a subdomain of the whole model domain. The subdomain
boundary conditions are specified through communications between processors
which are organized by explicit statements (message passing method).

A big advantage is that the method does not need many modifications of the initial
FORTRAN code. From the modeller's point of view, each sub domain running on
a processor is identical to the "monodomain" code. In addition, the programmer
manages the communications between subdomains, and the code is faster when
the number of processors is increased. The porting of OPA code on an iPSC860
was achieved during Guyon's PhD [Guyon et al. 1994, 1995] in collaboration with
CETIIS and ONERA. The implementation in the operational context and the studies
of performance on a T3D and T3E Cray computers have been made in collaboration
with IDRIS and CNRS. The present implementation is largely inspired by Guyon's
work [Guyon 1995].

The parallelization strategy is defined by the physical characteristics of the
ocean model. Second order finite difference schemes lead to local discrete
operators that depend at the very most on one neighbouring point. The only
nonlocal computations concern the vertical physics (implicit diffusion,
turbulent closure scheme, ...) (delocalization over the whole water column),
and the solving of the elliptic equation associated with the surface pressure
gradient computation (delocalization over the whole horizontal domain).
Therefore, a pencil strategy is used for the data substructuration
: the 3D initial domain is laid out on local processor
memories following a 2D horizontal topological splitting. Each subdomain
computes its own surface and bottom boundary conditions and has a side
wall overlapping interface which defines the lateral boundary conditions for
computations in the inner subdomain. The overlapping area consists of the
two rows at each edge of the subdomain. After a computation, a communication
phase starts: each processor sends to its neighbouring processors the update
values of the points corresponding to the interior overlapping area to its
neighbouring subdomain ($i.e.$ the innermost of the two overlapping rows).
The communication is done through the Message Passing Interface (MPI).
The data exchanges between processors are required at the very
place where lateral domain boundary conditions are set in the monodomain
computation : the \rou{lbc\_lnk} routine (found in \mdl{lbclnk} module)
which manages such conditions is interfaced with routines found in \mdl{lib\_mpp} module
when running on an MPP computer ($i.e.$ when \key{mpp\_mpi} defined).
It has to be pointed out that when using the MPP version of the model,
the eastwest cyclic boundary condition is done implicitly,
+For massively parallel processing (mpp), a domain decomposition method is used.
+The basic idea of the method is to split the large computation domain of a numerical experiment into
+several smaller domains and solve the set of equations by addressing independent local problems.
+Each processor has its own local memory and computes the model equation over a subdomain of the whole model domain.
+The subdomain boundary conditions are specified through communications between processors which
+are organized by explicit statements (message passing method).
+
+A big advantage is that the method does not need many modifications of the initial FORTRAN code.
+From the modeller's point of view, each sub domain running on a processor is identical to the "mono-domain" code.
+In addition, the programmer manages the communications between subdomains,
+and the code is faster when the number of processors is increased.
+The porting of the OPA code to an iPSC-860 was achieved during Guyon's PhD [Guyon et al. 1994, 1995]
+in collaboration with CETIIS and ONERA.
+The implementation in the operational context and the studies of performance on
+T3D and T3E Cray computers have been made in collaboration with IDRIS and CNRS.
+The present implementation is largely inspired by Guyon's work [Guyon 1995].
+
+The parallelization strategy is defined by the physical characteristics of the ocean model.
+Second order finite difference schemes lead to local discrete operators that
+depend at the very most on one neighbouring point.
+The only nonlocal computations concern the vertical physics
+(implicit diffusion, turbulent closure scheme, ...) (delocalization over the whole water column),
+and the solving of the elliptic equation associated with the surface pressure gradient computation
+(delocalization over the whole horizontal domain).
+Therefore, a pencil strategy is used for the data substructuration:
+the 3D initial domain is laid out on local processor memories following a 2D horizontal topological splitting.
+Each subdomain computes its own surface and bottom boundary conditions and
+has a side wall overlapping interface which defines the lateral boundary conditions for
+computations in the inner subdomain.
+The overlapping area consists of the two rows at each edge of the subdomain.
+After a computation, a communication phase starts:
+each processor sends to its neighbouring processors the updated values of the points corresponding to
+the interior overlapping area of its neighbouring subdomain ($i.e.$ the innermost of the two overlapping rows).
+The communication is done through the Message Passing Interface (MPI).
+The data exchanges between processors are required at the very place where
+lateral domain boundary conditions are set in the mono-domain computation:
+the \rou{lbc\_lnk} routine (found in the \mdl{lbclnk} module), which manages such conditions,
+is interfaced with routines found in the \mdl{lib\_mpp} module when running on an MPP computer
+($i.e.$ when \key{mpp\_mpi} is defined).
+It has to be pointed out that when using the MPP version of the model,
+the east-west cyclic boundary condition is done implicitly,
whilst the south-symmetric boundary condition option is not available.
@@ -259,20 +262,21 @@
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_mpp}
\caption{ \protect\label{fig:mpp}
Positioning of a subdomain when massively parallel processing is used. }
+\caption{ \protect\label{fig:mpp}
+ Positioning of a subdomain when massively parallel processing is used. }
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
In the standard version of \NEMO, the splitting is regular and arithmetic.
The iaxis is divided by \jp{jpni} and the jaxis by \jp{jpnj} for a number of processors
\jp{jpnij} most often equal to $jpni \times jpnj$ (parameters set in
 \ngn{nammpp} namelist). Each processor is independent and without message passing
 or synchronous process, programs run alone and access just its own local memory.
 For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil)
 that are named \jp{jpi}, \jp{jpj}, \jp{jpk}. These dimensions include the internal
 domain and the overlapping rows. The number of rows to exchange (known as
 the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}). The whole domain
 dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}. The relationship between
 the whole domain and a subdomain is:
+The i-axis is divided by \jp{jpni} and
+the j-axis by \jp{jpnj} for a number of processors \jp{jpnij} most often equal to $jpni \times jpnj$
+(parameters set in \ngn{nammpp} namelist).
+Each processor is independent: without message passing or synchronous processes,
+each program runs alone and accesses just its own local memory.
+For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil) that
+are named \jp{jpi}, \jp{jpj}, \jp{jpk}.
+These dimensions include the internal domain and the overlapping rows.
+The number of rows to exchange (known as the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}).
+The whole domain dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}.
+The relationship between the whole domain and a subdomain is:
\begin{eqnarray}
jpi & = & ( jpiglo - 2*jpreci + (jpni-1) ) / jpni + 2*jpreci \nonumber \\
@@ -283,5 +287,5 @@
One also defines variables nldi and nlei which correspond to the internal domain bounds,
and the variables nimpp and njmpp which are the position of the (1,1) gridpoint in the global domain.
An element of $T_{l}$, a local array (subdomain) corresponds to an element of $T_{g}$,
+An element of $T_{l}$, a local array (subdomain) corresponds to an element of $T_{g}$,
a global array (whole domain) by the relationship:
\begin{equation} \label{eq:lbc_nimpp}
@@ -290,8 +294,8 @@
with $1 \leq i \leq jpi$, $1 \leq j \leq jpj $ , and $1 \leq k \leq jpk$.
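These relationships can be checked with a short sketch (illustrative Python; the global dimensions, the processor coordinates and the nimpp/njmpp expressions for a perfectly regular splitting are assumptions for the example, the model sets the actual values during the mpp initialisation):

```python
import numpy as np

jpiglo, jpjglo = 14, 8        # made-up global dimensions
jpni, jpnj     = 4, 2         # processor grid
jpreci = jprecj = 1           # halo width (number of overlapping rows)

# subdomain dimensions, as in the relationship above
jpi = (jpiglo - 2*jpreci + (jpni - 1)) // jpni + 2*jpreci
jpj = (jpjglo - 2*jprecj + (jpnj - 1)) // jpnj + 2*jprecj

Tg = np.arange(jpjglo * jpiglo, dtype=float).reshape(jpjglo, jpiglo)

# position of the subdomain's (1,1) grid-point in the global domain for
# processor (pi, pj), assuming a regular splitting with full overlap rows
pi, pj = 2, 1
nimpp = 1 + pi * (jpi - 2*jpreci)
njmpp = 1 + pj * (jpj - 2*jprecj)

# T_l(i,j) corresponds to T_g(i + nimpp - 1, j + njmpp - 1)
# (0-based slicing below stands in for the 1-based Fortran indices)
Tl = Tg[njmpp - 1 : njmpp - 1 + jpj, nimpp - 1 : nimpp - 1 + jpi]
```

The same offsets nimpp and njmpp are all that is needed to convert between a local (subdomain) index and the corresponding global index.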
Processors are numbered from 0 to $jpnij1$, the number is saved in the variable
nproc. In the standard version, a processor has no more than four neighbouring
processors named nono (for north), noea (east), noso (south) and nowe (west)
and two variables, nbondi and nbondj, indicate the relative position of the processor :
+Processors are numbered from 0 to $jpnij-1$; this number is saved in the variable nproc.
+In the standard version, a processor has no more than
+four neighbouring processors named nono (for north), noea (east), noso (south) and nowe (west) and
+two variables, nbondi and nbondj, indicate the relative position of the processor:
\begin{itemize}
\item nbondi = -1 an east neighbour, no west processor,
@@ -300,33 +304,33 @@
\item nbondi = 2 no splitting following the i-axis.
\end{itemize}
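The convention can be sketched in a few lines (illustrative Python, not NEMO code; the values $-1$ and $2$ follow the text, while the meanings assumed for $0$ and $1$ are not shown in this excerpt and are labeled as assumptions):

```python
def nbondi(ip, jpni):
    """Illustrative relative-position flag along the i-axis for a
    processor in column ip (0-based) of jpni columns."""
    if jpni == 1:
        return 2    # no splitting following the i-axis
    if ip == 0:
        return -1   # an east neighbour, no west processor
    if ip == jpni - 1:
        return 1    # assumed: a west neighbour, no east processor
    return 0        # assumed: neighbours on both sides
```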
During the simulation, processors exchange data with their neighbours.
If there is effectively a neighbour, the processor receives variables from this
processor on its overlapping row, and sends the data issued from internal
domain corresponding to the overlapping row of the other processor.


The \NEMO model computes equation terms with the help of mask arrays (0 on land
points and 1 on sea points). It is easily readable and very efficient in the context of
a computer with vectorial architecture. However, in the case of a scalar processor,
computations over the land regions become more expensive in terms of CPU time.
It is worse when we use a complex configuration with a realistic bathymetry like the
global ocean where more than 50 \% of points are land points. For this reason, a
preprocessing tool can be used to choose the mpp domain decomposition with a
maximum number of only land points processors, which can then be eliminated (\autoref{fig:mppini2})
(For example, the mpp\_optimiz tools, available from the DRAKKAR web site).
This optimisation is dependent on the specific bathymetry employed. The user
then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with
$jpnij < jpni \times jpnj$, leading to the elimination of $jpni \times jpnj - jpnij$
land processors. When those parameters are specified in \ngn{nammpp} namelist,
the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbound,
nono, noea,...) so that the landonly processors are not taken into account.
+During the simulation, processors exchange data with their neighbours.
+If a neighbour exists, the processor receives variables from it on its overlapping row,
+and sends the data from its internal domain corresponding to the overlapping row of the other processor.
+
+
+The \NEMO model computes equation terms with the help of mask arrays (0 on land points and 1 on sea points).
+This approach is easily readable and very efficient on computers with a vector architecture.
+However, on a scalar processor, computations over the land regions become more expensive in
+terms of CPU time.
+The cost is worse for a complex configuration with a realistic bathymetry, like the global ocean,
+where more than 50 \% of points are land points.
+For this reason, a preprocessing tool can be used to choose an mpp domain decomposition with a maximum number of
+land-only processors, which can then be eliminated (\autoref{fig:mppini2})
+(for example, the mpp\_optimiz tool, available from the DRAKKAR web site).
+This optimisation is dependent on the specific bathymetry employed.
+The user then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with $jpnij < jpni \times jpnj$,
+leading to the elimination of $jpni \times jpnj - jpnij$ land processors.
+When those parameters are specified in the \ngn{nammpp} namelist,
+the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbound, nono, noea, ...) so that
+the land-only processors are not taken into account.
\gmcomment{Note that the inimpp2 routine is general so that the original inimpp
routine should be suppressed from the code.}
When land processors are eliminated, the value corresponding to these locations in
the model output files is undefined. Note that this is a problem for the meshmask file
which requires to be defined over the whole domain. Therefore, user should not eliminate
land processors when creating a meshmask file ($i.e.$ when setting a nonzero value to \np{nn\_msh}).
+When land processors are eliminated,
+the value corresponding to these locations in the model output files is undefined.
+Note that this is a problem for the mesh-mask file, which needs to be defined over the whole domain.
+Therefore, the user should not eliminate land processors when creating a mesh-mask file
+($i.e.$ when setting a non-zero value to \np{nn\_msh}).
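The land-only elimination can be mimicked with a small sketch (plain Python, not the actual mpp\_optimiz tool): split a 0/1 land-sea mask into $jpni \times jpnj$ rectangles and count those containing at least one sea point, which is the value a user would then take for jpnij.

```python
def count_sea_subdomains(mask, jpni, jpnj):
    """Count subdomains containing at least one sea point.
    mask: list of rows, 0 on land points and 1 on sea points."""
    nj, ni = len(mask), len(mask[0])
    def edges(n, p):  # p roughly equal slices of range(n)
        return [n * k // p for k in range(p + 1)]
    ie, je = edges(ni, jpni), edges(nj, jpnj)
    jpnij = 0
    for j0, j1 in zip(je[:-1], je[1:]):
        for i0, i1 in zip(ie[:-1], ie[1:]):
            if any(mask[j][i] for j in range(j0, j1) for i in range(i0, i1)):
                jpnij += 1
    return jpnij
```

With a mask whose western half is all land and a $2 \times 2$ split, only the two eastern subdomains are kept.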
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -334,10 +338,10 @@
\includegraphics[width=0.90\textwidth]{Fig_mppini2}
\caption { \protect\label{fig:mppini2}
Example of Atlantic domain defined for the CLIPPER projet. Initial grid is
composed of 773 x 1236 horizontal points.
(a) the domain is split onto 9 \time 20 subdomains (jpni=9, jpnj=20).
52 subdomains are land areas.
(b) 52 subdomains are eliminated (white rectangles) and the resulting number
of processors really used during the computation is jpnij=128.}
+ Example of Atlantic domain defined for the CLIPPER project.
+ Initial grid is composed of $773 \times 1236$ horizontal points.
+ (a) the domain is split into $9 \times 20$ subdomains (jpni=9, jpnj=20).
+ 52 subdomains are land areas.
+ (b) 52 subdomains are eliminated (white rectangles) and
+ the resulting number of processors actually used during the computation is jpnij=128.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -354,33 +358,21 @@
\nlst{nambdy}
%
%nambdy_index
%
%\nlst{nambdy_index}
%
%nambdy_dta
\nlst{nambdy_dta}
%
%nambdy_dta2
%
%\nlst{nambdy_dta2}
%

Options are defined through the \ngn{nambdy} \ngn{nambdy\_index}
\ngn{nambdy\_dta} \ngn{nambdy\_dta2} namelist variables.
The BDY module is the core implementation of open boundary
conditions for regional configurations. It implements the Flow
Relaxation Scheme algorithm for temperature, salinity, velocities and
ice fields, and the Flather radiation condition for the depthmean
transports. The specification of the location of the open boundary is
completely flexible and allows for example the open boundary to follow
an isobath or other irregular contour.

The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many
features and a similar coding structure \citep{Chanut2005}.

Boundary data files used with earlier versions of NEMO may need
to be reordered to work with this version. See the
section on the Input Boundary Data Files for details.
+
+Options are defined through the \ngn{nambdy} and \ngn{nambdy\_dta} namelist variables.
+The BDY module is the core implementation of open boundary conditions for regional configurations.
+It implements the Flow Relaxation Scheme algorithm for temperature, salinity, velocities and ice fields, and
+the Flather radiation condition for the depthmean transports.
+The specification of the location of the open boundary is completely flexible and
+allows for example the open boundary to follow an isobath or other irregular contour.
+
+The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many features and
+a similar coding structure \citep{Chanut2005}.
+
+Boundary data files used with earlier versions of NEMO may need to be reordered to work with this version.
+See the section on the Input Boundary Data Files for details.
%
@@ -389,28 +381,20 @@
The BDY module is activated by setting \np{ln\_bdy} to true.
It is possible to define more than one boundary ``set'' and apply
different boundary conditions to each set. The number of boundary
sets is defined by \np{nb\_bdy}. Each boundary set may be defined
as a set of straight line segments in a namelist
(\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file
(\np{ln\_coords\_file}\forcode{ = .true.}). If the set is defined in a namelist,
then the namelists nambdy\_index must be included separately, one for
each set. If the set is defined by a file, then a
``\ifile{coordinates.bdy}'' file must be provided. The coordinates.bdy file
is analagous to the usual NEMO ``\ifile{coordinates}'' file. In the example
above, there are two boundary sets, the first of which is defined via
a file and the second is defined in a namelist. For more details of
the definition of the boundary geometry see section
\autoref{subsec:BDY_geometry}.

For each boundary set a boundary
condition has to be chosen for the barotropic solution (``u2d'':
seasurface height and barotropic velocities), for the baroclinic
velocities (``u3d''), and for the active tracers\footnote{The BDY
 module does not deal with passive tracers at this version}
(``tra''). For each set of variables there is a choice of algorithm
and a choice for the data, eg. for the active tracers the algorithm is
set by \np{nn\_tra} and the choice of data is set by
\np{nn\_tra\_dta}.
+It is possible to define more than one boundary ``set'' and apply different boundary conditions to each set.
+The number of boundary sets is defined by \np{nb\_bdy}.
+Each boundary set may be defined as a set of straight line segments in a namelist
+(\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file (\np{ln\_coords\_file}\forcode{ = .true.}).
+If the set is defined in a namelist, then the namelist nambdy\_index must be included separately, one for each set.
+If the set is defined by a file, then a ``\ifile{coordinates.bdy}'' file must be provided.
+The coordinates.bdy file is analogous to the usual NEMO ``\ifile{coordinates}'' file.
+In the example above, there are two boundary sets, the first of which is defined via a file and
+the second is defined in a namelist.
+For more details of the definition of the boundary geometry see section \autoref{subsec:BDY_geometry}.
+
+For each boundary set a boundary condition has to be chosen for the barotropic solution
+(``u2d'': sea-surface height and barotropic velocities), for the baroclinic velocities (``u3d''), and
+for the active tracers\footnote{The BDY module does not deal with passive tracers at this version} (``tra'').
+For each set of variables there is a choice of algorithm and a choice for the data,
+e.g. for the active tracers the algorithm is set by \np{nn\_tra} and the choice of data is set by \np{nn\_tra\_dta}.
The choice of algorithm is currently as follows:
@@ -419,9 +403,9 @@
\begin{itemize}
\item[0.] No boundary condition applied. So the solution will ``see''
 the land points around the edge of the edge of the domain.
\item[1.] Flow Relaxation Scheme (FRS) available for all variables.
\item[2.] Flather radiation scheme for the barotropic variables. The
 Flather scheme is not compatible with the filtered free surface
+\item[0.] No boundary condition applied.
+ So the solution will ``see'' the land points around the edge of the domain.
+\item[1.] Flow Relaxation Scheme (FRS) available for all variables.
+\item[2.] Flather radiation scheme for the barotropic variables.
+ The Flather scheme is not compatible with the filtered free surface
({\it dynspg\_ts}).
\end{itemize}
@@ -429,31 +413,27 @@
\mbox{}
The main choice for the boundary data is
to use initial conditions as boundary data (\np{nn\_tra\_dta}\forcode{ = 0}) or to
use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}). For the
barotropic solution there is also the option to use tidal
harmonic forcing either by itself or in addition to other external
data.

If external boundary data is required then the nambdy\_dta namelist
must be defined. One nambdy\_dta namelist is required for each boundary
set in the order in which the boundary sets are defined in nambdy. In
the example given, two boundary sets have been defined and so there
are two nambdy\_dta namelists. The boundary data is read in using the
fldread module, so the nambdy\_dta namelist is in the format required
for fldread. For each variable required, the filename, the frequency
of the files and the frequency of the data in the files is given. Also
whether or not timeinterpolation is required and whether the data is
climatological (timecyclic) data. Note that onthefly spatial
interpolation of boundary data is not available at this version.

In the example namelists given, two boundary sets are defined. The
first set is defined via a file and applies FRS conditions to
temperature and salinity and Flather conditions to the barotropic
variables. External data is provided in daily files (from a
largescale model). Tidal harmonic forcing is also used. The second
set is defined in a namelist. FRS conditions are applied on
temperature and salinity and climatological data is read from external
files.
+The main choice for the boundary data is to use initial conditions as boundary data
+(\np{nn\_tra\_dta}\forcode{ = 0}) or to use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}).
+For the barotropic solution there is also the option to use tidal harmonic forcing either by
+itself or in addition to other external data.
+
+If external boundary data is required then the nambdy\_dta namelist must be defined.
+One nambdy\_dta namelist is required for each boundary set in the order in which
+the boundary sets are defined in nambdy.
+In the example given, two boundary sets have been defined and so there are two nambdy\_dta namelists.
+The boundary data is read in using the fldread module,
+so the nambdy\_dta namelist is in the format required for fldread.
+For each variable required, the filename, the frequency of the files and
+the frequency of the data in the files is given.
+The namelist also specifies whether or not time-interpolation is required and
+whether the data is climatological (time-cyclic).
+Note that on-the-fly spatial interpolation of boundary data is not available at this version.
+
+In the example namelists given, two boundary sets are defined.
+The first set is defined via a file and applies FRS conditions to temperature and salinity and
+Flather conditions to the barotropic variables.
+External data is provided in daily files (from a large-scale model).
+Tidal harmonic forcing is also used.
+The second set is defined in a namelist.
+FRS conditions are applied on temperature and salinity and climatological data is read from external files.
%
@@ -462,29 +442,26 @@
The Flow Relaxation Scheme (FRS) \citep{Davies_QJRMS76,Engerdahl_Tel95},
applies a simple relaxation of the model fields to
externallyspecified values over a zone next to the edge of the model
domain. Given a model prognostic variable $\Phi$
+applies a simple relaxation of the model fields to externally-specified values over
+a zone next to the edge of the model domain.
+Given a model prognostic variable $\Phi$
\begin{equation} \label{eq:bdy_frs1}
\Phi(d) = \alpha(d)\Phi_{e}(d) + (1-\alpha(d))\Phi_{m}(d)\;\;\;\;\; d=1,N
\end{equation}
where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified
external field, $d$ gives the discrete distance from the model
boundary and $\alpha$ is a parameter that varies from $1$ at $d=1$ to
a small value at $d=N$. It can be shown that this scheme is equivalent
to adding a relaxation term to the prognostic equation for $\Phi$ of
the form:
+where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified external field,
+$d$ gives the discrete distance from the model boundary and
+$\alpha$ is a parameter that varies from $1$ at $d=1$ to a small value at $d=N$.
+It can be shown that this scheme is equivalent to adding a relaxation term to
+the prognostic equation for $\Phi$ of the form:
\begin{equation} \label{eq:bdy_frs2}
\frac{-1}{\tau}\left(\Phi - \Phi_{e}\right)
\end{equation}
where the relaxation time scale $\tau$ is given by a function of
$\alpha$ and the model time step $\Delta t$:
+where the relaxation time scale $\tau$ is given by a function of $\alpha$ and the model time step $\Delta t$:
\begin{equation} \label{eq:bdy_frs3}
\tau = \frac{1-\alpha}{\alpha} \,\rdt
\end{equation}
Thus the model solution is completely prescribed by the external
conditions at the edge of the model domain and is relaxed towards the
external conditions over the rest of the FRS zone. The application of
a relaxation zone helps to prevent spurious reflection of outgoing
signals from the model boundary.
+Thus the model solution is completely prescribed by the external conditions at the edge of the model domain and
+is relaxed towards the external conditions over the rest of the FRS zone.
+The application of a relaxation zone helps to prevent spurious reflection of
+outgoing signals from the model boundary.
The function $\alpha$ is specified as a $tanh$ function:
@@ -492,6 +469,6 @@
\alpha(d) = 1 - \tanh\left(\frac{d-1}{2}\right), \quad d=1,N
\end{equation}
The width of the FRS zone is specified in the namelist as
\np{nn\_rimwidth}. This is typically set to a value between 8 and 10.
+The width of the FRS zone is specified in the namelist as \np{nn\_rimwidth}.
+This is typically set to a value between 8 and 10.
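A quick numerical check of the relaxation profile (plain Python sketch, not NEMO code; the time-step value is arbitrary) confirms that $\alpha = 1$, hence $\tau = 0$ and full prescription, at the boundary edge, with relaxation weakening as $d$ grows:

```python
import math

def frs_alpha(d):
    # alpha(d) = 1 - tanh((d - 1) / 2)
    return 1.0 - math.tanh((d - 1) / 2.0)

def frs_tau(d, dt):
    # tau = (1 - alpha) / alpha * dt  (equivalent relaxation time scale)
    a = frs_alpha(d)
    return (1.0 - a) / a * dt

# alpha(1) = 1 exactly, so tau(1) = 0: fully prescribed at the edge.
profile = [(d, frs_alpha(d)) for d in range(1, 9)]
```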
%
@@ -499,22 +476,20 @@
\label{subsec:BDY_flather_scheme}
The \citet{Flather_JPO94} scheme is a radiation condition on the normal, depthmean
transport across the open boundary. It takes the form
+The \citet{Flather_JPO94} scheme is a radiation condition on the normal,
+depth-mean transport across the open boundary.
+It takes the form
\begin{equation} \label{eq:bdy_fla1}
U = U_{e} + \frac{c}{h}\left(\eta - \eta_{e}\right),
\end{equation}
where $U$ is the depthmean velocity normal to the boundary and $\eta$
is the sea surface height, both from the model. The subscript $e$
indicates the same fields from external sources. The speed of external
gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the
water column. The depthmean normal velocity along the edge of the
model domain is set equal to the
external depthmean normal velocity, plus a correction term that
allows gravity waves generated internally to exit the model boundary.
Note that the seasurface height gradient in \autoref{eq:bdy_fla1}
is a spatial gradient across the model boundary, so that $\eta_{e}$ is
defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the
$T$ points with $nbr=2$. $U$ and $U_{e}$ are defined on the $U$ or
$V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points.
+where $U$ is the depth-mean velocity normal to the boundary and $\eta$ is the sea surface height,
+both from the model.
+The subscript $e$ indicates the same fields from external sources.
+The speed of external gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the water column.
+The depth-mean normal velocity along the edge of the model domain is set equal to
+the external depth-mean normal velocity,
+plus a correction term that allows gravity waves generated internally to exit the model boundary.
+Note that the sea-surface height gradient in \autoref{eq:bdy_fla1} is a spatial gradient across the model boundary,
+so that $\eta_{e}$ is defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the $T$ points with $nbr=2$.
+$U$ and $U_{e}$ are defined on the $U$ or $V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points.
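As a minimal sketch of \autoref{eq:bdy_fla1} (illustrative Python, not NEMO code; $g = 9.81$ m s$^{-2}$ is an assumed value), the Flather update of the boundary-normal depth-mean velocity reads:

```python
import math

G = 9.81  # gravitational acceleration (m s-2), assumed value

def flather(u_ext, eta_model, eta_ext, h):
    # U = U_e + (c / h) * (eta - eta_e),  with c = sqrt(g * h)
    c = math.sqrt(G * h)
    return u_ext + (c / h) * (eta_model - eta_ext)
```

When the model elevation exceeds the external one, the correction term is outward, which is what lets internally generated gravity waves exit through the boundary.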
%
@@ -522,46 +497,37 @@
\label{subsec:BDY_geometry}
Each open boundary set is defined as a list of points. The information
is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$
structure. The $nbi$ and $nbj$ arrays
define the local $(i,j)$ indices of each point in the boundary zone
and the $nbr$ array defines the discrete distance from the boundary
with $nbr=1$ meaning that the point is next to the edge of the
model domain and $nbr>1$ showing that the point is increasingly
further away from the edge of the model domain. A set of $nbi$, $nbj$,
and $nbr$ arrays is defined for each of the $T$, $U$ and $V$
grids. Figure \autoref{fig:LBC_bdy_geom} shows an example of an irregular
boundary.

The boundary geometry for each set may be defined in a namelist
nambdy\_index or by reading in a ``\ifile{coordinates.bdy}'' file. The
nambdy\_index namelist defines a series of straightline segments for
north, east, south and west boundaries. For the northern boundary,
\np{nbdysegn} gives the number of segments, \np{jpjnob} gives the $j$
index for each segment and \np{jpindt} and \np{jpinft} give the start
and end $i$ indices for each segment with similar for the other
boundaries. These segments define a list of $T$ grid points along the
outermost row of the boundary ($nbr\,=\, 1$). The code deduces the $U$ and
$V$ points and also the points for $nbr\,>\, 1$ if
$nn\_rimwidth\,>\,1$.

The boundary geometry may also be defined from a
``\ifile{coordinates.bdy}'' file. Figure \autoref{fig:LBC_nc_header}
gives an example of the header information from such a file. The file
should contain the index arrays for each of the $T$, $U$ and $V$
grids. The arrays must be in order of increasing $nbr$. Note that the
$nbi$, $nbj$ values in the file are global values and are converted to
local values in the code. Typically this file will be used to generate
external boundary data via interpolation and so will also contain the
latitudes and longitudes of each point as shown. However, this is not
necessary to run the model.

For some choices of irregular boundary the model domain may contain
areas of ocean which are not part of the computational domain. For
example if an open boundary is defined along an isobath, say at the
shelf break, then the areas of ocean outside of this boundary will
need to be masked out. This can be done by reading a mask file defined
as \np{cn\_mask\_file} in the nam\_bdy namelist. Only one mask file is
used even if multiple boundary sets are defined.
+Each open boundary set is defined as a list of points.
+The information is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$ structure.
+The $nbi$ and $nbj$ arrays define the local $(i,j)$ indices of each point in the boundary zone and
+the $nbr$ array defines the discrete distance from the boundary with $nbr=1$ meaning that
+the point is next to the edge of the model domain and $nbr>1$ showing that
+the point is increasingly further away from the edge of the model domain.
+A set of $nbi$, $nbj$, and $nbr$ arrays is defined for each of the $T$, $U$ and $V$ grids.
+Figure \autoref{fig:LBC_bdy_geom} shows an example of an irregular boundary.
+
+The boundary geometry for each set may be defined in a namelist nambdy\_index or
+by reading in a ``\ifile{coordinates.bdy}'' file.
+The nambdy\_index namelist defines a series of straight-line segments for north, east, south and west boundaries.
+For the northern boundary, \np{nbdysegn} gives the number of segments,
+\np{jpjnob} gives the $j$ index for each segment and \np{jpindt} and
+\np{jpinft} give the start and end $i$ indices for each segment, with similar arrangements for the other boundaries.
+These segments define a list of $T$ grid points along the outermost row of the boundary ($nbr\,=\, 1$).
+The code deduces the $U$ and $V$ points and also the points for $nbr\,>\, 1$ if $nn\_rimwidth\,>\,1$.
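A hypothetical helper (illustrative Python; the function name and return layout are invented for this sketch) shows how one straight-line northern segment from nambdy\_index expands into rim-1 $T$-point indices:

```python
def northern_segment_tpoints(jpjnob, jpindt, jpinft):
    # One northern segment: row j = jpjnob, from i = jpindt to
    # i = jpinft, every point at rim distance nbr = 1.
    return [(i, jpjnob, 1) for i in range(jpindt, jpinft + 1)]
```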
+
+The boundary geometry may also be defined from a ``\ifile{coordinates.bdy}'' file.
+Figure \autoref{fig:LBC_nc_header} gives an example of the header information from such a file.
+The file should contain the index arrays for each of the $T$, $U$ and $V$ grids.
+The arrays must be in order of increasing $nbr$.
+Note that the $nbi$, $nbj$ values in the file are global values and are converted to local values in the code.
+Typically this file will be used to generate external boundary data via interpolation and so
+will also contain the latitudes and longitudes of each point as shown.
+However, this is not necessary to run the model.
+
+For some choices of irregular boundary the model domain may contain areas of ocean which
+are not part of the computational domain.
+For example, if an open boundary is defined along an isobath, say at the shelf break,
+then the areas of ocean outside of this boundary will need to be masked out.
+This can be done by reading a mask file defined as \np{cn\_mask\_file} in the nam\_bdy namelist.
+Only one mask file is used even if multiple boundary sets are defined.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -569,5 +535,5 @@
\includegraphics[width=1.0\textwidth]{Fig_LBC_bdy_geom}
\caption { \protect\label{fig:LBC_bdy_geom}
Example of geometry of unstructured open boundary}
+ Example of the geometry of an unstructured open boundary}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -577,38 +543,36 @@
\label{subsec:BDY_data}
The data files contain the data arrays
in the order in which the points are defined in the $nbi$ and $nbj$
arrays. The data arrays are dimensioned on: a time dimension;
+The data files contain the data arrays in the order in which the points are defined in the $nbi$ and $nbj$ arrays.
+The data arrays are dimensioned on:
+a time dimension;
$xb$ which is the index of the boundary data point in the horizontal;
and $yb$ which is a degenerate dimension of 1 to enable the file to be
read by the standard NEMO I/O routines. The 3D fields also have a
depth dimension.

At Version 3.4 there are new restrictions on the order in which the
boundary points are defined (and therefore restrictions on the order
of the data in the file). In particular:
+and $yb$ which is a degenerate dimension of 1 to enable the file to be read by the standard NEMO I/O routines.
+The 3D fields also have a depth dimension.
+
+At Version 3.4 there are new restrictions on the order in which the boundary points are defined
+(and therefore restrictions on the order of the data in the file).
+In particular:
\mbox{}
\begin{enumerate}
\item The data points must be in order of increasing $nbr$, ie. all
 the $nbr=1$ points, then all the $nbr=2$ points etc.
\item All the data for a particular boundary set must be in the same
 order. (Prior to 3.4 it was possible to define barotropic data in a
 different order to the data for tracers and baroclinic velocities).
+\item The data points must be in order of increasing $nbr$,
+ i.e. all the $nbr=1$ points, then all the $nbr=2$ points, etc.
+\item All the data for a particular boundary set must be in the same order.
+ (Prior to 3.4 it was possible to define barotropic data in a different order to
+ the data for tracers and baroclinic velocities).
\end{enumerate}
\mbox{}
These restrictions mean that data files used with previous versions of
the model may not work with version 3.4. A fortran utility
{\it bdy\_reorder} exists in the TOOLS directory which will reorder the
data in old BDY data files.
+These restrictions mean that data files used with previous versions of the model may not work with version 3.4.
+A Fortran utility {\it bdy\_reorder} exists in the TOOLS directory which
+will reorder the data in old BDY data files.
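The reordering requirement can be sketched as follows (illustrative Python, not the actual bdy\_reorder tool): a stable sort on $nbr$ puts all $nbr=1$ points first while preserving the original relative order within each rim.

```python
def reorder_by_nbr(points, data):
    # points: list of (nbi, nbj, nbr); data: matching list of values.
    # Python's sorted() is stable, so ties keep their original order.
    order = sorted(range(len(points)), key=lambda k: points[k][2])
    return [points[k] for k in order], [data[k] for k in order]
```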
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_nc_header}
\caption { \protect\label{fig:LBC_nc_header}
Example of the header for a \protect\ifile{coordinates.bdy} file}
+\caption { \protect\label{fig:LBC_nc_header}
+ Example of the header for a \protect\ifile{coordinates.bdy} file}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -618,12 +582,13 @@
\label{subsec:BDY_vol_corr}
There is an option to force the total volume in the regional model to be constant,
similar to the option in the OBC module. This is controlled by the \np{nn\_volctl}
parameter in the namelist. A value of \np{nn\_volctl}\forcode{ = 0} indicates that this option is not used.
If \np{nn\_volctl}\forcode{ = 1} then a correction is applied to the normal velocities
around the boundary at each timestep to ensure that the integrated volume flow
through the boundary is zero. If \np{nn\_volctl}\forcode{ = 2} then the calculation of
the volume change on the timestep includes the change due to the freshwater
flux across the surface and the correction velocity corrects for this as well.
+There is an option to force the total volume in the regional model to be constant,
+similar to the option in the OBC module.
+This is controlled by the \np{nn\_volctl} parameter in the namelist.
+A value of \np{nn\_volctl}\forcode{ = 0} indicates that this option is not used.
+If \np{nn\_volctl}\forcode{ = 1} then a correction is applied to the normal velocities around the boundary at
+each time-step to ensure that the integrated volume flow through the boundary is zero.
+If \np{nn\_volctl}\forcode{ = 2} then the calculation of the volume change on
+the time-step includes the change due to the freshwater flux across the surface and
+the correction velocity corrects for this as well.
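The \np{nn\_volctl}\forcode{ = 1} case can be illustrated with a small sketch (plain Python; face areas and the sign convention are simplified, and the freshwater-flux term of the \forcode{ = 2} case is omitted):

```python
def correct_boundary_velocities(u_normal, face_area):
    # Subtract a uniform correction so that the area-integrated
    # volume flux through the open boundary becomes zero.
    flux = sum(u * a for u, a in zip(u_normal, face_area))
    corr = flux / sum(face_area)
    return [u - corr for u in u_normal]
```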
If more than one boundary set is used then volume correction is
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_LDF.tex
===================================================================
--- NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_LDF.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_LDF.tex (revision 10368)
@@ -14,16 +14,17 @@
The lateral physics terms in the momentum and tracer equations have been
described in \autoref{eq:PE_zdf} and their discrete formulation in \autoref{sec:TRA_ldf}
and \autoref{sec:DYN_ldf}). In this section we further discuss each lateral physics option.
Choosing one lateral physics scheme means for the user defining,
(1) the type of operator used (laplacian or bilaplacian operators, or no lateral mixing term) ;
(2) the direction along which the lateral diffusive fluxes are evaluated (model level, geopotential or isopycnal surfaces) ; and
(3) the space and time variations of the eddy coefficients.
These three aspects of the lateral diffusion are set through namelist parameters
(see the \textit{\ngn{nam\_traldf}} and \textit{\ngn{nam\_dynldf}} below).
Note that this chapter describes the standard implementation of isoneutral
tracer mixing, and Griffies's implementation, which is used if
\np{traldf\_grif}\forcode{ = .true.}, is described in Appdx\autoref{apdx:triad}
+The lateral physics terms in the momentum and tracer equations have been described in \autoref{eq:PE_zdf} and
+their discrete formulation in \autoref{sec:TRA_ldf} and \autoref{sec:DYN_ldf}.
+In this section we further discuss each lateral physics option.
+Choosing one lateral physics scheme requires the user to define:
+(1) the type of operator used (laplacian or bilaplacian operators, or no lateral mixing term);
+(2) the direction along which the lateral diffusive fluxes are evaluated
+(model level, geopotential or isopycnal surfaces); and
+(3) the space and time variations of the eddy coefficients.
+These three aspects of the lateral diffusion are set through namelist parameters
+(see the \textit{\ngn{nam\_traldf}} and \textit{\ngn{nam\_dynldf}} below).
+Note that this chapter describes the standard implementation of iso-neutral tracer mixing,
+and Griffies's implementation, which is used if \np{traldf\_grif}\forcode{ = .true.},
+is described in Appdx \autoref{apdx:triad}.
%nam_traldf  nam_dynldf
@@ -45,14 +46,14 @@
Better work can be achieved by using \citet{Griffies_al_JPO98, Griffies_Bk04} iso-neutral scheme. }
A direction for lateral mixing has to be defined when the desired operator does
not act along the model levels. This occurs when $(a)$ horizontal mixing is
required on tracer or momentum (\np{ln\_traldf\_hor} or \np{ln\_dynldf\_hor})
in $s$ or mixed $s$$z$ coordinates, and $(b)$ isoneutral mixing is required
whatever the vertical coordinate is. This direction of mixing is defined by its
slopes in the \textbf{i} and \textbf{j}directions at the face of the cell of the
quantity to be diffused. For a tracer, this leads to the following four slopes :
$r_{1u}$, $r_{1w}$, $r_{2v}$, $r_{2w}$ (see \autoref{eq:tra_ldf_iso}), while
for momentum the slopes are $r_{1t}$, $r_{1uw}$, $r_{2f}$, $r_{2uw}$ for
$u$ and $r_{1f}$, $r_{1vw}$, $r_{2t}$, $r_{2vw}$ for $v$.
+A direction for lateral mixing has to be defined when the desired operator does not act along the model levels.
+This occurs when $(a)$ horizontal mixing is required on tracer or momentum
+(\np{ln\_traldf\_hor} or \np{ln\_dynldf\_hor}) in $s$- or mixed $s$-$z$ coordinates,
+and $(b)$ isoneutral mixing is required whatever the vertical coordinate is.
+This direction of mixing is defined by its slopes in the \textbf{i}- and \textbf{j}-directions at the face of
+the cell of the quantity to be diffused.
+For a tracer, this leads to the following four slopes:
+$r_{1u}$, $r_{1w}$, $r_{2v}$, $r_{2w}$ (see \autoref{eq:tra_ldf_iso}),
+while for momentum the slopes are $r_{1t}$, $r_{1uw}$, $r_{2f}$, $r_{2uw}$ for $u$ and
+$r_{1f}$, $r_{1vw}$, $r_{2t}$, $r_{2vw}$ for $v$.
%gm% add here afigure of the slope in idirection
@@ 60,10 +61,9 @@
\subsection{Slopes for tracer geopotential mixing in the $s$-coordinate}
In $s$coordinates, geopotential mixing ($i.e.$ horizontal mixing) $r_1$ and
$r_2$ are the slopes between the geopotential and computational surfaces.
Their discrete formulation is found by locally solving \autoref{eq:tra_ldf_iso}
when the diffusive fluxes in the three directions are set to zero and $T$ is
assumed to be horizontally uniform, $i.e.$ a linear function of $z_T$, the
depth of a $T$point.
+In $s$-coordinates, geopotential mixing ($i.e.$ horizontal mixing) $r_1$ and $r_2$ are the slopes between
+the geopotential and computational surfaces.
+Their discrete formulation is found by locally solving \autoref{eq:tra_ldf_iso} when
+the diffusive fluxes in the three directions are set to zero and $T$ is assumed to be horizontally uniform,
+$i.e.$ a linear function of $z_T$, the depth of a $T$-point.
%gm { Steven : My version is obviously wrong since I'm left with an arbitrary constant which is the local vertical temperature gradient}
@@ 89,16 +89,15 @@
%gm% caution I'm not sure the simplification was a good idea!
These slopes are computed once in \rou{ldfslp\_init} when \np{ln\_sco}\forcode{ = .true.}rue,
and either \np{ln\_traldf\_hor}\forcode{ = .true.}rue or \np{ln\_dynldf\_hor}\forcode{ = .true.}rue.
+These slopes are computed once in \rou{ldfslp\_init} when \np{ln\_sco}\forcode{ = .true.},
+and either \np{ln\_traldf\_hor}\forcode{ = .true.} or \np{ln\_dynldf\_hor}\forcode{ = .true.}.
\subsection{Slopes for tracer iso-neutral mixing}
\label{subsec:LDF_slp_iso}
In isoneutral mixing $r_1$ and $r_2$ are the slopes between the isoneutral
and computational surfaces. Their formulation does not depend on the vertical
coordinate used. Their discrete formulation is found using the fact that the
diffusive fluxes of locally referenced potential density ($i.e.$ $in situ$ density)
vanish. So, substituting $T$ by $\rho$ in \autoref{eq:tra_ldf_iso} and setting the
diffusive fluxes in the three directions to zero leads to the following definition for
the neutral slopes:
+In iso-neutral mixing $r_1$ and $r_2$ are the slopes between the iso-neutral and computational surfaces.
+Their formulation does not depend on the vertical coordinate used.
+Their discrete formulation is found using the fact that the diffusive fluxes of
+locally referenced potential density ($i.e.$ $in situ$ density) vanish.
+So, substituting $T$ by $\rho$ in \autoref{eq:tra_ldf_iso} and setting the diffusive fluxes in
+the three directions to zero leads to the following definition for the neutral slopes:
\begin{equation} \label{eq:ldfslp_iso}
@@ 128,39 +127,38 @@
%In the $z$coordinate, the derivative of the \autoref{eq:ldfslp_iso} numerator is evaluated at the same depth \nocite{as what?} ($T$level, which is the same as the $u$ and $v$levels), so the $in situ$ density can be used for its evaluation.
As the mixing is performed along neutral surfaces, the gradient of $\rho$ in
\autoref{eq:ldfslp_iso} has to be evaluated at the same local pressure (which,
in decibars, is approximated by the depth in meters in the model). Therefore
\autoref{eq:ldfslp_iso} cannot be used as such, but further transformation is
needed depending on the vertical coordinate used:
+As the mixing is performed along neutral surfaces, the gradient of $\rho$ in \autoref{eq:ldfslp_iso} has to
+be evaluated at the same local pressure
+(which, in decibars, is approximated by the depth in meters in the model).
+Therefore \autoref{eq:ldfslp_iso} cannot be used as such,
+but further transformation is needed depending on the vertical coordinate used:
\begin{description}
\item[$z$coordinate with full step : ] in \autoref{eq:ldfslp_iso} the densities
appearing in the $i$ and $j$ derivatives are taken at the same depth, thus
the $in situ$ density can be used. This is not the case for the vertical
derivatives: $\delta_{k+1/2}[\rho]$ is replaced by $\rho N^2/g$, where $N^2$
is the local BruntVais\"{a}l\"{a} frequency evaluated following
\citet{McDougall1987} (see \autoref{subsec:TRA_bn2}).

\item[$z$coordinate with partial step : ] this case is identical to the full step
case except that at partial step level, the \emph{horizontal} density gradient
is evaluated as described in \autoref{sec:TRA_zpshde}.

\item[$s$ or hybrid $s$$z$ coordinate : ] in the current release of \NEMO,
isoneutral mixing is only employed for $s$coordinates if the
Griffies scheme is used (\np{traldf\_grif}\forcode{ = .true.}; see Appdx \autoref{apdx:triad}).
In other words, isoneutral mixing will only be accurately represented with a
linear equation of state (\np{nn\_eos}\forcode{ = 1..2}). In the case of a "true" equation
of state, the evaluation of $i$ and $j$ derivatives in \autoref{eq:ldfslp_iso}
will include a pressure dependent part, leading to the wrong evaluation of
the neutral slopes.
+\item[$z$-coordinate with full step: ]
+  in \autoref{eq:ldfslp_iso} the densities appearing in the $i$ and $j$ derivatives are taken at the same depth,
+  thus the $in situ$ density can be used.
+  This is not the case for the vertical derivatives: $\delta_{k+1/2}[\rho]$ is replaced by $-\rho N^2/g$,
+  where $N^2$ is the local Brunt-V\"{a}is\"{a}l\"{a} frequency evaluated following \citet{McDougall1987}
+  (see \autoref{subsec:TRA_bn2}).
+
+\item[$z$-coordinate with partial step: ]
+  this case is identical to the full step case except that at partial step levels,
+  the \emph{horizontal} density gradient is evaluated as described in \autoref{sec:TRA_zpshde}.
+
+\item[$s$- or hybrid $s$-$z$-coordinate: ]
+  in the current release of \NEMO, iso-neutral mixing is only employed for $s$-coordinates if
+  the Griffies scheme is used (\np{traldf\_grif}\forcode{ = .true.};
+  see Appdx \autoref{apdx:triad}).
+  In other words, iso-neutral mixing will only be accurately represented with a linear equation of state
+  (\np{nn\_eos}\forcode{ = 1..2}).
+  In the case of a ``true'' equation of state, the evaluation of $i$ and $j$ derivatives in \autoref{eq:ldfslp_iso}
+  will include a pressure dependent part, leading to a wrong evaluation of the neutral slopes.
%gm%
Note: The solution for $s$coordinate passes trough the use of different
(and better) expression for the constraint on isoneutral fluxes. Following
\citet{Griffies_Bk04}, instead of specifying directly that there is a zero neutral
diffusive flux of locally referenced potential density, we stay in the $T$$S$
plane and consider the balance between the neutral direction diffusive fluxes
of potential temperature and salinity:
+ Note: The solution for the $s$-coordinate goes through the use of a different (and better) expression for
+ the constraint on iso-neutral fluxes.
+ Following \citet{Griffies_Bk04}, instead of specifying directly that there is a zero neutral diffusive flux of
+ locally referenced potential density, we stay in the $T$-$S$ plane and consider the balance between
+ the neutral direction diffusive fluxes of potential temperature and salinity:
\begin{equation}
\alpha \ \textbf{F}(T) = \beta \ \textbf{F}(S)
@@ 194,38 +192,31 @@
\end{split}
\end{equation}
where $\alpha$ and $\beta$, the thermal expansion and saline contraction
coefficients introduced in \autoref{subsec:TRA_bn2}, have to be evaluated at the three
velocity points. In order to save computation time, they should be approximated
by the mean of their values at $T$points (for example in the case of $\alpha$:
$\alpha_u=\overline{\alpha_T}^{i+1/2}$, $\alpha_v=\overline{\alpha_T}^{j+1/2}$
and $\alpha_w=\overline{\alpha_T}^{k+1/2}$).

Note that such a formulation could be also used in the $z$coordinate and
$z$coordinate with partial steps cases.
+where $\alpha$ and $\beta$, the thermal expansion and saline contraction coefficients introduced in
+\autoref{subsec:TRA_bn2}, have to be evaluated at the three velocity points.
+In order to save computation time, they should be approximated by the mean of their values at $T$-points
+(for example in the case of $\alpha$:
+$\alpha_u=\overline{\alpha_T}^{i+1/2}$, $\alpha_v=\overline{\alpha_T}^{j+1/2}$ and
+$\alpha_w=\overline{\alpha_T}^{k+1/2}$).
+
+Note that such a formulation could also be used in the $z$-coordinate and $z$-coordinate with partial steps cases.
\end{description}
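The velocity-point averaging of $\alpha$ and $\beta$ described above can be sketched in a few lines. This is a minimal NumPy illustration, not the model code; the function name and the omission of boundary points are assumptions of this sketch:

```python
import numpy as np

def mean_to_u(alpha_t):
    """Two-point mean of a T-point field onto u-points along i.

    Sketch of alpha_u = overline(alpha_T)^{i+1/2}; one fewer point is
    returned in i because boundary points are omitted here (assumption).
    """
    return 0.5 * (alpha_t[..., :-1] + alpha_t[..., 1:])

# The j- and k-direction means (alpha_v, alpha_w) are analogous,
# averaging along the corresponding axis instead of the last one.
```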
This implementation is a rather old one. It is similar to the one
proposed by Cox [1987], except for the background horizontal
diffusion. Indeed, the Cox implementation of isopycnal diffusion in
GFDLtype models requires a minimum background horizontal diffusion
for numerical stability reasons. To overcome this problem, several
techniques have been proposed in which the numerical schemes of the
ocean model are modified \citep{Weaver_Eby_JPO97,
 Griffies_al_JPO98}. Griffies's scheme is now available in \NEMO if
\np{traldf\_grif\_iso} is set true; see Appdx \autoref{apdx:triad}. Here,
another strategy is presented \citep{Lazar_PhD97}: a local
filtering of the isoneutral slopes (made on 9 gridpoints) prevents
the development of grid point noise generated by the isoneutral
diffusion operator (\autoref{fig:LDF_ZDF1}). This allows an
isoneutral diffusion scheme without additional background horizontal
mixing. This technique can be viewed as a diffusion operator that acts
along largescale (2~$\Delta$x) \gmcomment{2deltax doesnt seem very
 large scale} isoneutral surfaces. The diapycnal diffusion required
for numerical stability is thus minimized and its net effect on the
flow is quite small when compared to the effect of an horizontal
background mixing.

Nevertheless, this isoneutral operator does not ensure that variance cannot increase,
+This implementation is a rather old one.
+It is similar to the one proposed by \citet{Cox1987}, except for the background horizontal diffusion.
+Indeed, the Cox implementation of isopycnal diffusion in GFDL-type models requires
+a minimum background horizontal diffusion for numerical stability reasons.
+To overcome this problem, several techniques have been proposed in which the numerical schemes of
+the ocean model are modified \citep{Weaver_Eby_JPO97, Griffies_al_JPO98}.
+Griffies's scheme is now available in \NEMO if \np{traldf\_grif\_iso} is set true; see Appdx \autoref{apdx:triad}.
+Here, another strategy is presented \citep{Lazar_PhD97}:
+a local filtering of the iso-neutral slopes (made on 9 grid-points) prevents the development of
+grid point noise generated by the iso-neutral diffusion operator (\autoref{fig:LDF_ZDF1}).
+This allows an iso-neutral diffusion scheme without additional background horizontal mixing.
+This technique can be viewed as a diffusion operator that acts along large-scale
+(2~$\Delta$x) \gmcomment{2deltax doesnt seem very large scale} iso-neutral surfaces.
+The diapycnal diffusion required for numerical stability is thus minimized and
+its net effect on the flow is quite small when compared to the effect of a horizontal background mixing.
+
+Nevertheless, this iso-neutral operator does not ensure that variance cannot increase,
contrary to the \citet{Griffies_al_JPO98} operator which has that property.
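The 9-point local filtering of the slopes can be illustrated schematically as follows. This is a plain 3x3 box mean with nearest-neighbour edge padding; the actual stencil weights and boundary treatment used in the model are assumptions here:

```python
import numpy as np

def filter9(r):
    """Average each point of the slope field r with its 8 neighbours.

    Schematic 9-grid-point filtering of iso-neutral slopes; edges are
    handled by nearest-neighbour padding (an assumption of this sketch).
    """
    rp = np.pad(r, 1, mode="edge")
    ni, nj = r.shape
    out = np.zeros_like(r)
    for di in range(3):          # sum the 3x3 neighbourhood
        for dj in range(3):
            out += rp[di:di + ni, dj:dj + nj]
    return out / 9.0
```

A uniform slope field passes through unchanged, while a single-point spike is spread over its neighbourhood, which is the intended damping of grid-point noise.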
@@ 234,5 +225,5 @@
\includegraphics[width=0.70\textwidth]{Fig_LDF_ZDF1}
\caption { \protect\label{fig:LDF_ZDF1}
averaging procedure for isopycnal slope computation.}
+ averaging procedure for isopycnal slope computation.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 252,40 +243,43 @@
% surface motivates this flattening of isopycnals near the surface).
For numerical stability reasons \citep{Cox1987, Griffies_Bk04}, the slopes must also
be bounded by $1/100$ everywhere. This constraint is applied in a piecewise linear
fashion, increasing from zero at the surface to $1/100$ at $70$ metres and thereafter
decreasing to zero at the bottom of the ocean. (the fact that the eddies "feel" the
surface motivates this flattening of isopycnals near the surface).
+For numerical stability reasons \citep{Cox1987, Griffies_Bk04}, the slopes must also be bounded by
+$1/100$ everywhere.
+This constraint is applied in a piecewise linear fashion, increasing from zero at the surface to
+$1/100$ at $70$ metres and thereafter decreasing to zero at the bottom of the ocean
+(the fact that the eddies ``feel'' the surface motivates this flattening of isopycnals near the surface).
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht] \begin{center}
\includegraphics[width=0.70\textwidth]{Fig_eiv_slp}
\caption { \protect\label{fig:eiv_slp}
Vertical profile of the slope used for lateral mixing in the mixed layer :
\textit{(a)} in the real ocean the slope is the isoneutral slope in the ocean interior,
which has to be adjusted at the surface boundary (i.e. it must tend to zero at the
surface since there is no mixing across the airsea interface: wall boundary
condition). Nevertheless, the profile between the surface zero value and the interior
isoneutral one is unknown, and especially the value at the base of the mixed layer ;
\textit{(b)} profile of slope using a linear tapering of the slope near the surface and
imposing a maximum slope of 1/100 ; \textit{(c)} profile of slope actually used in
\NEMO: a linear decrease of the slope from zero at the surface to its ocean interior
value computed just below the mixed layer. Note the huge change in the slope at the
base of the mixed layer between \textit{(b)} and \textit{(c)}.}
\end{center} \end{figure}
+\begin{figure}[!ht]
+ \begin{center}
+ \includegraphics[width=0.70\textwidth]{Fig_eiv_slp}
+ \caption { \protect\label{fig:eiv_slp}
+ Vertical profile of the slope used for lateral mixing in the mixed layer:
+ \textit{(a)} in the real ocean the slope is the iso-neutral slope in the ocean interior,
+ which has to be adjusted at the surface boundary
+ ($i.e.$ it must tend to zero at the surface since there is no mixing across the air-sea interface:
+ wall boundary condition).
+ Nevertheless, the profile between the surface zero value and the interior iso-neutral one is unknown,
+ and especially the value at the base of the mixed layer;
+ \textit{(b)} profile of slope using a linear tapering of the slope near the surface and
+ imposing a maximum slope of 1/100;
+ \textit{(c)} profile of slope actually used in \NEMO: a linear decrease of the slope from
+ zero at the surface to its ocean interior value computed just below the mixed layer.
+ Note the huge change in the slope at the base of the mixed layer between \textit{(b)} and \textit{(c)}.}
+ \end{center}
+\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\colorbox{yellow}{add here a discussion about the flattening of the slopes, vs tapering the coefficient.}
+\colorbox{yellow}{add here a discussion about the flattening of the slopes, vs tapering the coefficient.}
\subsection{Slopes for momentum iso-neutral mixing}
The isoneutral diffusion operator on momentum is the same as the one used on
tracers but applied to each component of the velocity separately (see
\autoref{eq:dyn_ldf_iso} in section~\autoref{subsec:DYN_ldf_iso}). The slopes between the
surface along which the diffusion operator acts and the surface of computation
($z$ or $s$surfaces) are defined at $T$, $f$, and \textit{uw} points for the
$u$component, and $T$, $f$ and \textit{vw} points for the $v$component.
They are computed from the slopes used for tracer diffusion, $i.e.$
\autoref{eq:ldfslp_geo} and \autoref{eq:ldfslp_iso} :
+The iso-neutral diffusion operator on momentum is the same as the one used on tracers but
+applied to each component of the velocity separately
+(see \autoref{eq:dyn_ldf_iso} in section~\autoref{subsec:DYN_ldf_iso}).
+The slopes between the surface along which the diffusion operator acts and the surface of computation
+($z$- or $s$-surfaces) are defined at $T$-, $f$-, and \textit{uw}-points for the $u$-component, and $T$-, $f$- and
+\textit{vw}-points for the $v$-component.
+They are computed from the slopes used for tracer diffusion,
+$i.e.$ \autoref{eq:ldfslp_geo} and \autoref{eq:ldfslp_iso}:
\begin{equation} \label{eq:ldfslp_dyn}
@@ 298,9 +292,8 @@
\end{equation}
The major issue remaining is in the specification of the boundary conditions.
The same boundary conditions are chosen as those used for lateral
diffusion along model level surfaces, i.e. using the shear computed along
the model levels and with no additional friction at the ocean bottom (see
\autoref{sec:LBC_coast}).
+The major issue remaining is in the specification of the boundary conditions.
+The same boundary conditions are chosen as those used for lateral diffusion along model level surfaces,
+$i.e.$ using the shear computed along the model levels and with no additional friction at the ocean bottom
+(see \autoref{sec:LBC_coast}).
@@ 319,49 +312,45 @@
\label{sec:LDF_coef}
Introducing a space variation in the lateral eddy mixing coefficients changes
the model core memory requirement, adding up to four extra threedimensional
arrays for the geopotential or isopycnal second order operator applied to
momentum. Six CPP keys control the space variation of eddy coefficients:
three for momentum and three for tracer. The three choices allow:
a space variation in the three space directions (\key{traldf\_c3d}, \key{dynldf\_c3d}),
in the horizontal plane (\key{traldf\_c2d}, \key{dynldf\_c2d}),
or in the vertical only (\key{traldf\_c1d}, \key{dynldf\_c1d}).
+Introducing a space variation in the lateral eddy mixing coefficients changes the model core memory requirement,
+adding up to four extra threedimensional arrays for the geopotential or isopycnal second order operator applied to
+momentum.
+Six CPP keys control the space variation of eddy coefficients: three for momentum and three for tracer.
+The three choices allow:
+a space variation in the three space directions (\key{traldf\_c3d}, \key{dynldf\_c3d}),
+in the horizontal plane (\key{traldf\_c2d}, \key{dynldf\_c2d}),
+or in the vertical only (\key{traldf\_c1d}, \key{dynldf\_c1d}).
The default option is a constant value over the whole ocean on both momentum and tracers.
The number of additional arrays that have to be defined and the gridpoint
position at which they are defined depend on both the space variation chosen
and the type of operator used. The resulting eddy viscosity and diffusivity
coefficients can be a function of more than one variable. Changes in the
computer code when switching from one option to another have been
minimized by introducing the eddy coefficients as statement functions
(include file \hf{ldftra\_substitute} and \hf{ldfdyn\_substitute}). The functions
are replaced by their actual meaning during the preprocessing step (CPP).
The specification of the space variation of the coefficient is made in
\mdl{ldftra} and \mdl{ldfdyn}, or more precisely in include files
\hf{traldf\_cNd} and \hf{dynldf\_cNd}, with N=1, 2 or 3.
The user can modify these include files as he/she wishes. The way the
mixing coefficient are set in the reference version can be briefly described
as follows:
+The number of additional arrays that have to be defined and the grid-point position at which
+they are defined depend on both the space variation chosen and the type of operator used.
+The resulting eddy viscosity and diffusivity coefficients can be a function of more than one variable.
+Changes in the computer code when switching from one option to another have been minimized by
+introducing the eddy coefficients as statement functions
+(include file \hf{ldftra\_substitute} and \hf{ldfdyn\_substitute}).
+The functions are replaced by their actual meaning during the preprocessing step (CPP).
+The specification of the space variation of the coefficient is made in \mdl{ldftra} and \mdl{ldfdyn},
+or more precisely in include files \hf{traldf\_cNd} and \hf{dynldf\_cNd}, with N=1, 2 or 3.
+The user can modify these include files as he/she wishes.
+The way the mixing coefficients are set in the reference version can be briefly described as follows:
\subsubsection{Constant mixing coefficients (default option)}
When none of the \key{dynldf\_...} and \key{traldf\_...} keys are
defined, a constant value is used over the whole ocean for momentum and
tracers, which is specified through the \np{rn\_ahm0} and \np{rn\_aht0} namelist
parameters.
+When none of the \key{dynldf\_...} and \key{traldf\_...} keys are defined,
+a constant value is used over the whole ocean for momentum and tracers,
+which is specified through the \np{rn\_ahm0} and \np{rn\_aht0} namelist parameters.
\subsubsection{Vertically varying mixing coefficients (\protect\key{traldf\_c1d} and \protect\key{dynldf\_c1d})}
The 1D option is only available when using the $z$coordinate with full step.
Indeed in all the other types of vertical coordinate, the depth is a 3D function
of (\textbf{i},\textbf{j},\textbf{k}) and therefore, introducing depthdependent
mixing coefficients will require 3D arrays. In the 1D option, a hyperbolic variation
of the lateral mixing coefficient is introduced in which the surface value is
\np{rn\_aht0} (\np{rn\_ahm0}), the bottom value is 1/4 of the surface value,
and the transition takes place around z=300~m with a width of 300~m
($i.e.$ both the depth and the width of the inflection point are set to 300~m).
+The 1D option is only available when using the $z$-coordinate with full step.
+Indeed, in all the other types of vertical coordinate,
+the depth is a 3D function of (\textbf{i},\textbf{j},\textbf{k}) and therefore,
+introducing depth-dependent mixing coefficients will require 3D arrays.
+In the 1D option, a hyperbolic variation of the lateral mixing coefficient is introduced in which
+the surface value is \np{rn\_aht0} (\np{rn\_ahm0}), the bottom value is 1/4 of the surface value,
+and the transition takes place around $z=300$~m with a width of 300~m
+($i.e.$ both the depth and the width of the inflection point are set to 300~m).
This profile is hard coded in file \hf{traldf\_c1d}, but can be easily modified by users.
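A hyperbolic tangent profile matching this description can be sketched as follows. The exact expression hard coded in \hf{traldf\_c1d} may differ, in particular in how the surface value is normalised; this is an illustrative sketch only:

```python
import math

def aht_1d(z, aht0, z0=300.0, width=300.0):
    """Vertical profile of the lateral mixing coefficient:
    close to aht0 near the surface, aht0/4 at depth,
    with an inflection at z0 = 300 m and a transition width of 300 m."""
    a_bot = 0.25 * aht0
    return a_bot + (aht0 - a_bot) * 0.5 * (1.0 - math.tanh((z - z0) / width))
```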
\subsubsection{Horizontally varying mixing coefficients (\protect\key{traldf\_c2d} and \protect\key{dynldf\_c2d})}
By default the horizontal variation of the eddy coefficient depends on the local mesh
size and the type of operator used:
+By default the horizontal variation of the eddy coefficient depends on the local mesh size and
+the type of operator used:
\begin{equation} \label{eq:title}
A_l = \left\{
@@ 371,39 +360,36 @@
\end{aligned} \right.
\end{equation}
where $e_{max}$ is the maximum of $e_1$ and $e_2$ taken over the whole masked
ocean domain, and $A_o^l$ is the \np{rn\_ahm0} (momentum) or \np{rn\_aht0} (tracer)
namelist parameter. This variation is intended to reflect the lesser need for subgrid
scale eddy mixing where the grid size is smaller in the domain. It was introduced in
the context of the DYNAMO modelling project \citep{Willebrand_al_PO01}.
Note that such a grid scale dependance of mixing coefficients significantly increase
the range of stability of model configurations presenting large changes in grid pacing
such as global ocean models. Indeed, in such a case, a constant mixing coefficient
can lead to a blow up of the model due to large coefficient compare to the smallest
grid size (see \autoref{sec:STP_forward_imp}), especially when using a bilaplacian operator.

Other formulations can be introduced by the user for a given configuration.
For example, in the ORCA2 global ocean model (see Configurations), the laplacian
viscosity operator uses \np{rn\_ahm0}~= 4.10$^4$ m$^2$/s poleward of 20$^{\circ}$
north and south and decreases linearly to \np{rn\_aht0}~= 2.10$^3$ m$^2$/s
at the equator \citep{Madec_al_JPO96, Delecluse_Madec_Bk00}. This modification
can be found in routine \rou{ldf\_dyn\_c2d\_orca} defined in \mdl{ldfdyn\_c2d}.
Similar modified horizontal variations can be found with the Antarctic or Arctic
subdomain options of ORCA2 and ORCA05 (see \&namcfg namelist).
+where $e_{max}$ is the maximum of $e_1$ and $e_2$ taken over the whole masked ocean domain,
+and $A_o^l$ is the \np{rn\_ahm0} (momentum) or \np{rn\_aht0} (tracer) namelist parameter.
+This variation is intended to reflect the lesser need for subgrid scale eddy mixing where
+the grid size is smaller in the domain.
+It was introduced in the context of the DYNAMO modelling project \citep{Willebrand_al_PO01}.
+Note that such a grid scale dependence of mixing coefficients significantly increases the range of stability of
+model configurations presenting large changes in grid spacing such as global ocean models.
+Indeed, in such a case, a constant mixing coefficient can lead to a blow up of the model due to
+a large coefficient compared to the smallest grid size (see \autoref{sec:STP_forward_imp}),
+especially when using a bilaplacian operator.
+
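As an illustration of this grid-size dependence, the laplacian case can be sketched as below. The linear scaling with the local grid size is assumed from the description of \autoref{eq:title}; the bilaplacian case typically uses a higher power of the same ratio:

```python
import numpy as np

def ahm_2d(e1, e2, ahm0):
    """Laplacian eddy coefficient scaled by the local grid size relative
    to the largest grid size of the masked domain (assumed linear law:
    A_l = A0 * max(e1, e2) / e_max)."""
    e_loc = np.maximum(e1, e2)
    return ahm0 * e_loc / e_loc.max()
```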
+Other formulations can be introduced by the user for a given configuration.
+For example, in the ORCA2 global ocean model (see Configurations),
+the laplacian viscosity operator uses \np{rn\_ahm0}~= $4\times10^{4}$ m$^2$/s poleward of 20$^{\circ}$ north and
+south and decreases linearly to \np{rn\_aht0}~= $2\times10^{3}$ m$^2$/s at the equator
+\citep{Madec_al_JPO96, Delecluse_Madec_Bk00}.
+This modification can be found in routine \rou{ldf\_dyn\_c2d\_orca} defined in \mdl{ldfdyn\_c2d}.
+Similar modified horizontal variations can be found with the Antarctic or Arctic subdomain options of
+ORCA2 and ORCA05 (see \&namcfg namelist).
\subsubsection{Space varying mixing coefficients (\protect\key{traldf\_c3d} and \protect\key{dynldf\_c3d})}
The 3D space variation of the mixing coefficient is simply the combination of the
1D and 2D cases, $i.e.$ a hyperbolic tangent variation with depth associated with
a grid size dependence of the magnitude of the coefficient.
+The 3D space variation of the mixing coefficient is simply the combination of the 1D and 2D cases,
+$i.e.$ a hyperbolic tangent variation with depth associated with a grid size dependence of
+the magnitude of the coefficient.
\subsubsection{Space and time varying mixing coefficients}
There is no default specification of space and time varying mixing coefficient.
The only case available is specific to the ORCA2 and ORCA05 global ocean
configurations. It provides only a tracer
mixing coefficient for eddy induced velocity (ORCA2) or both isoneutral and
eddy induced velocity (ORCA05) that depends on the local growth rate of
baroclinic instability. This specification is actually used when an ORCA key
and both \key{traldf\_eiv} and \key{traldf\_c2d} are defined.
+The only case available is specific to the ORCA2 and ORCA05 global ocean configurations.
+It provides only a tracer mixing coefficient for eddy induced velocity (ORCA2) or both isoneutral and
+eddy induced velocity (ORCA05) that depends on the local growth rate of baroclinic instability.
+This specification is actually used when an ORCA key and both \key{traldf\_eiv} and \key{traldf\_c2d} are defined.
$\ $\newline % force a new ligne
@@ 411,26 +397,24 @@
The following points are relevant when the eddy coefficient varies spatially:
(1) the momentum diffusion operator acting along model level surfaces is
written in terms of curl and divergent components of the horizontal current
(see \autoref{subsec:PE_ldf}). Although the eddy coefficient could be set to different values
in these two terms, this option is not currently available.

(2) with an horizontally varying viscosity, the quadratic integral constraints
on enstrophy and on the square of the horizontal divergence for operators
acting along modelsurfaces are no longer satisfied
+(1) the momentum diffusion operator acting along model level surfaces is written in terms of curl and
+divergent components of the horizontal current (see \autoref{subsec:PE_ldf}).
+Although the eddy coefficient could be set to different values in these two terms,
+this option is not currently available.
+
+(2) with a horizontally varying viscosity, the quadratic integral constraints on enstrophy and on the square of
+the horizontal divergence for operators acting along model-surfaces are no longer satisfied
(\autoref{sec:dynldf_properties}).
(3) for isopycnal diffusion on momentum or tracers, an additional purely
horizontal background diffusion with uniform coefficient can be added by
setting a non zero value of \np{rn\_ahmb0} or \np{rn\_ahtb0}, a background horizontal
eddy viscosity or diffusivity coefficient (namelist parameters whose default
values are $0$). However, the technique used to compute the isopycnal
slopes is intended to get rid of such a background diffusion, since it introduces
spurious diapycnal diffusion (see \autoref{sec:LDF_slp}).

(4) when an eddy induced advection term is used (\key{traldf\_eiv}), $A^{eiv}$,
the eddy induced coefficient has to be defined. Its space variations are controlled
by the same CPP variable as for the eddy diffusivity coefficient ($i.e.$
\key{traldf\_cNd}).
+(3) for isopycnal diffusion on momentum or tracers, an additional purely horizontal background diffusion with
+uniform coefficient can be added by setting a non-zero value of \np{rn\_ahmb0} or \np{rn\_ahtb0},
+a background horizontal eddy viscosity or diffusivity coefficient
+(namelist parameters whose default values are $0$).
+However, the technique used to compute the isopycnal slopes is intended to get rid of such a background diffusion,
+since it introduces spurious diapycnal diffusion (see \autoref{sec:LDF_slp}).
+
+(4) when an eddy induced advection term is used (\key{traldf\_eiv}),
+$A^{eiv}$, the eddy induced coefficient has to be defined.
+Its space variations are controlled by the same CPP variable as for the eddy diffusivity coefficient
+($i.e.$ \key{traldf\_cNd}).
(5) the eddy coefficient associated with a biharmonic operator must be set to a \emph{negative} value.
@@ 438,8 +422,8 @@
(6) it is possible to use both the laplacian and biharmonic operators concurrently.
(7) it is possible to run without explicit lateral diffusion on momentum (\np{ln\_dynldf\_lap}\forcode{ =
}\np{ln\_dynldf\_bilap}\forcode{ = .false.}). This is recommended when using the UBS advection
scheme on momentum (\np{ln\_dynadv\_ubs}\forcode{ = .true.}, see \autoref{subsec:DYN_adv_ubs})
and can be useful for testing purposes.
+(7) it is possible to run without explicit lateral diffusion on momentum
+(\np{ln\_dynldf\_lap}\forcode{ = }\np{ln\_dynldf\_bilap}\forcode{ = .false.}).
+This is recommended when using the UBS advection scheme on momentum (\np{ln\_dynadv\_ubs}\forcode{ = .true.},
+see \autoref{subsec:DYN_adv_ubs}) and can be useful for testing purposes.
% ================================================================
@@ 451,29 +435,27 @@
%%gm from Triad appendix : to be incorporated....
\gmcomment{
Values of isoneutral diffusivity and GM coefficient are set as
described in \autoref{sec:LDF_coef}. If none of the keys \key{traldf\_cNd},
N=1,2,3 is set (the default), spatially constant isoneutral $A_l$ and
GM diffusivity $A_e$ are directly set by \np{rn\_aeih\_0} and
\np{rn\_aeiv\_0}. If 2Dvarying coefficients are set with
\key{traldf\_c2d} then $A_l$ is reduced in proportion with horizontal
scale factor according to \autoref{eq:title} \footnote{Except in global ORCA
 $0.5^{\circ}$ runs with \key{traldf\_eiv}, where
 $A_l$ is set like $A_e$ but with a minimum vale of
 $100\;\mathrm{m}^2\;\mathrm{s}^{1}$}. In idealised setups with
\key{traldf\_c2d}, $A_e$ is reduced similarly, but if \key{traldf\_eiv}
is set in the global configurations with \key{traldf\_c2d}, a horizontally varying $A_e$ is
instead set from the HeldLarichev parameterisation\footnote{In this
 case, $A_e$ at low latitudes $\theta<20^{\circ}$ is further
 reduced by a factor $f/f_{20}$, where $f_{20}$ is the value of $f$
 at $20^{\circ}$~N} (\mdl{ldfeiv}) and \np{rn\_aeiv\_0} is ignored
unless it is zero.
+ Values of isoneutral diffusivity and GM coefficient are set as described in \autoref{sec:LDF_coef}.
+ If none of the keys \key{traldf\_cNd}, N=1,2,3 is set (the default), spatially constant isoneutral $A_l$ and
+ GM diffusivity $A_e$ are directly set by \np{rn\_aeih\_0} and \np{rn\_aeiv\_0}.
+ If 2D-varying coefficients are set with \key{traldf\_c2d} then $A_l$ is reduced in proportion to the horizontal
+ scale factor according to \autoref{eq:title} \footnote{
+ Except in global ORCA $0.5^{\circ}$ runs with \key{traldf\_eiv},
+ where $A_l$ is set like $A_e$ but with a minimum value of $100\;\mathrm{m}^2\;\mathrm{s}^{-1}$
+ }.
+ In idealised setups with \key{traldf\_c2d}, $A_e$ is reduced similarly, but if \key{traldf\_eiv} is set in
+ the global configurations with \key{traldf\_c2d}, a horizontally varying $A_e$ is instead set from
+ the Held-Larichev parameterisation \footnote{
+ In this case, $A_e$ at low latitudes $\theta<20^{\circ}$ is further reduced by a factor $f/f_{20}$,
+ where $f_{20}$ is the value of $f$ at $20^{\circ}$~N
+ } (\mdl{ldfeiv}) and \np{rn\_aeiv\_0} is ignored unless it is zero.
}
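The coefficient scalings sketched in the commented-out passage above can be illustrated as follows. This is a hypothetical Python sketch (the actual code is Fortran, in \mdl{ldfeiv}); the function name, the single scale-factor argument, and the argument units are illustrative assumptions, but the proportional reduction, the $f/f_{20}$ low-latitude factor, and the minimum value from the footnote follow the text:

```python
import math

OMEGA = 7.292e-5  # Earth's rotation rate (s^-1)

def scaled_eddy_coeff(a0, e1, e1_max, lat_deg, floor=100.0):
    """Hypothetical sketch: reduce a background eddy coefficient a0 (m^2/s)
    in proportion to the local horizontal scale factor e1, apply the f/f20
    reduction equatorward of 20 degrees, and enforce the minimum value
    mentioned in the footnote."""
    coeff = a0 * e1 / e1_max                      # proportional to grid scale
    if abs(lat_deg) < 20.0:
        f = 2.0 * OMEGA * math.sin(math.radians(abs(lat_deg)))
        f20 = 2.0 * OMEGA * math.sin(math.radians(20.0))
        coeff *= f / f20                          # low-latitude reduction
    return max(coeff, floor)
```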
When Gent and McWilliams [1990] diffusion is used (\key{traldf\_eiv} defined),
an eddy induced tracer advection term is added, the formulation of which
depends on the slopes of isoneutral surfaces. Contrary to the case of isoneutral
mixing, the slopes used here are referenced to the geopotential surfaces, $i.e.$
\autoref{eq:ldfslp_geo} is used in $z$coordinates, and the sum \autoref{eq:ldfslp_geo}
+ \autoref{eq:ldfslp_iso} in $s$coordinates. The eddy induced velocity is given by:
+When Gent and McWilliams [1990] diffusion is used (\key{traldf\_eiv} defined),
+an eddy induced tracer advection term is added,
+the formulation of which depends on the slopes of isoneutral surfaces.
+Contrary to the case of isoneutral mixing, the slopes used here are referenced to the geopotential surfaces,
+$i.e.$ \autoref{eq:ldfslp_geo} is used in $z$coordinates,
+and the sum \autoref{eq:ldfslp_geo} + \autoref{eq:ldfslp_iso} in $s$coordinates.
+The eddy induced velocity is given by:
\begin{equation} \label{eq:ldfeiv}
\begin{split}
@@ 483,16 +465,16 @@
\end{split}
\end{equation}
where $A^{eiv}$ is the eddy induced velocity coefficient whose value is set
through \np{rn\_aeiv}, a \textit{nam\_traldf} namelist parameter.
The three components of the eddy induced velocity are computed and add
to the eulerian velocity in \mdl{traadv\_eiv}. This has been preferred to a
separate computation of the advective trends associated with the eiv velocity,
since it allows us to take advantage of all the advection schemes offered for
the tracers (see \autoref{sec:TRA_adv}) and not just the $2^{nd}$ order advection
scheme as in previous releases of OPA \citep{Madec1998}. This is particularly
useful for passive tracers where \emph{positivity} of the advection scheme is
of paramount importance.

At the surface, lateral and bottom boundaries, the eddy induced velocity,
+where $A^{eiv}$ is the eddy induced velocity coefficient whose value is set through \np{rn\_aeiv},
+a \textit{nam\_traldf} namelist parameter.
+The three components of the eddy induced velocity are computed and
+added to the Eulerian velocity in \mdl{traadv\_eiv}.
+This has been preferred to a separate computation of the advective trends associated with the eiv velocity,
+since it allows us to take advantage of all the advection schemes offered for the tracers
+(see \autoref{sec:TRA_adv}) and not just the $2^{nd}$ order advection scheme as in
+previous releases of OPA \citep{Madec1998}.
+This is particularly useful for passive tracers where \emph{positivity} of the advection scheme is of
+paramount importance.
+
+At the surface, lateral and bottom boundaries, the eddy induced velocity,
and thus the advective eddy fluxes of heat and salt, are set to zero.
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_OBS.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_OBS.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_OBS.tex (revision 10368)
@@ 15,39 +15,39 @@
$\ $\newline % force a new line
The observation and model comparison code (OBS) reads in observation files (profile
temperature and salinity, sea surface temperature, sea level anomaly, sea ice concentration,
and velocity) and calculates an interpolated model equivalent value at the observation
location and nearest model timestep. The resulting data are saved in a ``feedback'' file (or
files). The code was originally developed for use with the NEMOVAR data assimilation code, but
can be used for validation or verification of the model or with any other data assimilation system.

The OBS code is called from \mdl{nemogcm} for model initialisation and to calculate the model
equivalent values for observations on the 0th timestep. The code is then called again after
each timestep from \mdl{step}. The code is only activated if the namelist logical \np{ln\_diaobs}
is set to true.

For all data types a 2D horizontal interpolator or averager is needed to interpolate/average the model fields to
the observation location. For {\em in situ} profiles, a 1D vertical interpolator is needed in
addition to provide model fields at the observation depths. This now works in a generalised vertical
coordinate system.
+The observation and model comparison code (OBS) reads in observation files
+(profile temperature and salinity, sea surface temperature, sea level anomaly, sea ice concentration, and velocity) and calculates an interpolated model equivalent value at the observation location and nearest model timestep.
+The resulting data are saved in a ``feedback'' file (or files).
+The code was originally developed for use with the NEMOVAR data assimilation code,
+but can be used for validation or verification of the model or with any other data assimilation system.
+
+The OBS code is called from \mdl{nemogcm} for model initialisation and to calculate the model equivalent values for observations on the 0th timestep.
+The code is then called again after each timestep from \mdl{step}.
+The code is only activated if the namelist logical \np{ln\_diaobs} is set to true.
+
+For all data types a 2D horizontal interpolator or averager is needed to
+interpolate/average the model fields to the observation location.
+For {\em in situ} profiles, a 1D vertical interpolator is needed in addition to
+provide model fields at the observation depths.
+This now works in a generalised vertical coordinate system.
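As a rough sketch of the 1D vertical step described above, a model profile can be linearly interpolated to an observation depth. The function name and the piecewise-linear choice are illustrative assumptions, not the actual OBS Fortran implementation, which also handles generalised vertical coordinates:

```python
def model_at_obs(profile_depths, profile_vals, obs_depth):
    """Hypothetical sketch of 1D vertical interpolation: linearly
    interpolate a model profile (depths increasing downward) to an
    observation depth, clamping outside the profile range."""
    if obs_depth <= profile_depths[0]:
        return profile_vals[0]
    for k in range(1, len(profile_depths)):
        if obs_depth <= profile_depths[k]:
            z0, z1 = profile_depths[k - 1], profile_depths[k]
            w = (obs_depth - z0) / (z1 - z0)   # linear weight between levels
            return (1 - w) * profile_vals[k - 1] + w * profile_vals[k]
    return profile_vals[-1]
```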
Some profile observation types (e.g. tropical moored buoys) are made available as daily averaged quantities.
The observation operator code can be setup to calculate the equivalent daily average model temperature fields
using the \np{nn\_profdavtypes} namelist array. Some SST observations are equivalent to a nighttime
average value and the observation operator code can calculate equivalent nighttime average model SST fields by
setting the namelist value \np{ln\_sstnight} to true. Otherwise the model value from the nearest timestep to the
observation time is used.

The code is controlled by the namelist \textit{namobs}. See the following sections for more
details on setting up the namelist.
+The observation operator code can be set up to calculate the equivalent daily average model temperature fields using
+the \np{nn\_profdavtypes} namelist array.
+Some SST observations are equivalent to a nighttime average value and
+the observation operator code can calculate equivalent nighttime average model SST fields by
+setting the namelist value \np{ln\_sstnight} to true.
+Otherwise the model value from the nearest timestep to the observation time is used.
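The night-time averaging behaviour enabled by \np{ln\_sstnight} can be sketched as below; the helper name is hypothetical and the determination of which timesteps count as local night (from solar time) is omitted:

```python
def nighttime_average(sst_series, is_night):
    """Hypothetical sketch: average model SST over the timesteps flagged
    as night-time, giving a model counterpart for night-time average SST
    observations."""
    vals = [s for s, night in zip(sst_series, is_night) if night]
    return sum(vals) / len(vals)
```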
+
+The code is controlled by the namelist \textit{namobs}.
+See the following sections for more details on setting up the namelist.
\autoref{sec:OBS_example} introduces a test example of the observation operator code including
where to obtain data and how to setup the namelist. \autoref{sec:OBS_details} introduces some
more technical details of the different observation types used and also shows a more complete
namelist. \autoref{sec:OBS_theory} introduces some of the theoretical aspects of the observation
operator including interpolation methods and running on multiple processors.
+where to obtain data and how to setup the namelist.
+\autoref{sec:OBS_details} introduces some more technical details of the different observation types used and
+also shows a more complete namelist.
+\autoref{sec:OBS_theory} introduces some of the theoretical aspects of the observation operator including
+interpolation methods and running on multiple processors.
\autoref{sec:OBS_ooo} describes the offline observation operator code.
\autoref{sec:OBS_obsutils} introduces some utilities to help working with the files
produced by the OBS code.
+\autoref{sec:OBS_obsutils} introduces some utilities to help working with the files produced by the OBS code.
% ================================================================
@@ 58,14 +58,13 @@
This section describes an example of running the observation operator code using
profile data which can be freely downloaded. It shows how to adapt an
existing run and build of NEMO to run the observation operator.
+profile data which can be freely downloaded.
+It shows how to adapt an existing run and build of NEMO to run the observation operator.
\begin{enumerate}
\item Compile NEMO.
\item Download some EN4 data from
\href{http://www.metoffice.gov.uk/hadobs}{www.metoffice.gov.uk/hadobs}. Choose observations which are
valid for the period of your test run because the observation operator compares
the model and observations for a matching date and time.
+\item Download some EN4 data from \href{http://www.metoffice.gov.uk/hadobs}{www.metoffice.gov.uk/hadobs}.
+ Choose observations which are valid for the period of your test run because
+ the observation operator compares the model and observations for a matching date and time.
\item Compile the OBSTOOLS code using:
@@ 79,38 +78,31 @@
\end{cmds}
\item Include the following in the NEMO namelist to run the observation
operator on this data:
+\item Include the following in the NEMO namelist to run the observation operator on this data:
\end{enumerate}
%namobs_example
%
%\nlst{namobs_example}
%

Options are defined through the \ngn{namobs} namelist variables.
The options \np{ln\_t3d} and \np{ln\_s3d} switch on the temperature and salinity
profile observation operator code. The filename or array of filenames are
specified using the \np{cn\_profbfiles} variable. The model grid points for a
particular observation latitude and longitude are found using the grid
searching part of the code. This can be expensive, particularly for large
numbers of observations, setting \np{ln\_grid\_search\_lookup} allows the use of
a lookup table which is saved into an ``xypos`` file (or files). This will need
to be generated the first time if it does not exist in the run directory.
+Options are defined through the \ngn{namobs} namelist variables.
+The options \np{ln\_t3d} and \np{ln\_s3d} switch on the temperature and salinity profile observation operator code.
+The filename (or array of filenames) is specified using the \np{cn\_profbfiles} variable.
+The model grid points for a particular observation latitude and longitude are found using
+the grid searching part of the code.
+This can be expensive, particularly for large numbers of observations;
+setting \np{ln\_grid\_search\_lookup} allows the use of a lookup table which
+is saved into an ``xypos'' file (or files).
+This will need to be generated the first time if it does not exist in the run directory.
However, once produced it will significantly speed up future grid searches.
Setting \np{ln\_grid\_global} means that the code distributes the observations
evenly between processors. Alternatively each processor will work with
observations located within the model subdomain (see section~\autoref{subsec:OBS_parallel}).

A number of utilities are now provided to plot the feedback files, convert and
recombine the files. These are explained in more detail in section~\autoref{sec:OBS_obsutils}.
Utilites to convert other input data formats into the feedback format are also
described in section~\autoref{sec:OBS_obsutils}.
+Setting \np{ln\_grid\_global} means that the code distributes the observations evenly between processors.
+Alternatively each processor will work with observations located within the model subdomain
+(see section~\autoref{subsec:OBS_parallel}).
+
+A number of utilities are now provided to plot the feedback files, convert and recombine the files.
+These are explained in more detail in section~\autoref{sec:OBS_obsutils}.
+Utilities to convert other input data formats into the feedback format are also described in
+section~\autoref{sec:OBS_obsutils}.
\section{Technical details (feedback type observation file headers)}
\label{sec:OBS_details}
Here we show a more complete example namelist \ngn{namobs} and also show the NetCDF headers
of the observation
files that may be used with the observation operator
+Here we show a more complete example namelist \ngn{namobs} and also show the NetCDF headers of
+the observation files that may be used with the observation operator.
%namobs
@@ 119,10 +111,8 @@
%
The observation operator code uses the "feedback" observation file format for
all data types. All the
observation files must be in NetCDF format. Some example headers (produced using
\mbox{\textit{ncdump~h}}) for profile
data, sea level anomaly and sea surface temperature are in the following
subsections.
+The observation operator code uses the ``feedback'' observation file format for all data types.
+All the observation files must be in NetCDF format.
+Some example headers (produced using \mbox{\textit{ncdump~h}}) for profile data, sea level anomaly and
+sea surface temperature are in the following subsections.
\subsection{Profile feedback}
@@ 406,9 +396,8 @@
\end{clines}
The mean dynamic
topography (MDT) must be provided in a separate file defined on the model grid
 called \ifile{slaReferenceLevel}. The MDT is required in
order to produce the model equivalent sea level anomaly from the model sea
surface height. Below is an example header for this file (on the ORCA025 grid).
+The mean dynamic topography (MDT) must be provided in a separate file defined on
+the model grid called \ifile{slaReferenceLevel}.
+The MDT is required in order to produce the model equivalent sea level anomaly from the model sea surface height.
+Below is an example header for this file (on the ORCA025 grid).
\begin{clines}
@@ 551,12 +540,16 @@
\subsection{Horizontal interpolation and averaging methods}
For most observation types, the horizontal extent of the observation is small compared to the model grid size
and so the model equivalent of the observation is calculated by interpolating from the four surrounding grid
points to the observation location. Some satellite observations (e.g. microwave satellite SST data, or satellite SSS data)
have a footprint which is similar in size or larger than the model grid size (particularly when the grid size is small).
In those cases the model counterpart should be calculated by averaging the model grid points over the same size as the footprint.
NEMO therefore has the capability to specify either an interpolation or an averaging (for surface observation types only).

The main namelist option associated with the interpolation/averaging is \np{nn\_2dint}. This default option can be set to values from 0 to 6.
+For most observation types, the horizontal extent of the observation is small compared to the model grid size and so
+the model equivalent of the observation is calculated by interpolating from
+the four surrounding grid points to the observation location.
+Some satellite observations (e.g. microwave satellite SST data, or satellite SSS data) have a footprint which
+is similar in size or larger than the model grid size (particularly when the grid size is small).
+In those cases the model counterpart should be calculated by averaging the model grid points over
+the same size as the footprint.
+NEMO therefore has the capability to specify either an interpolation or an averaging
+(for surface observation types only).
+
+The main namelist option associated with the interpolation/averaging is \np{nn\_2dint}.
+This default option can be set to values from 0 to 6.
Values from 0 to 4 are associated with interpolation while values 5 or 6 are associated with averaging.
\begin{itemize}
@@ 566,22 +559,23 @@
\item \np{nn\_2dint}\forcode{ = 3}: Bilinear remapping interpolation (general grid)
\item \np{nn\_2dint}\forcode{ = 4}: Polynomial interpolation
\item \np{nn\_2dint}\forcode{ = 5}: Radial footprint averaging with diameter specified in the namelist as \np{rn\_???\_avglamscl} in degrees or metres (set using \np{ln\_???\_fp\_indegs})
\item \np{nn\_2dint}\forcode{ = 6}: Rectangular footprint averaging with E/W and N/S size specified in the namelist as \np{rn\_???\_avglamscl} and \np{rn\_???\_avgphiscl} in degrees or metres (set using \np{ln\_???\_fp\_indegs})
+\item \np{nn\_2dint}\forcode{ = 5}: Radial footprint averaging with diameter specified in the namelist as
+ \np{rn\_???\_avglamscl} in degrees or metres (set using \np{ln\_???\_fp\_indegs})
+\item \np{nn\_2dint}\forcode{ = 6}: Rectangular footprint averaging with E/W and N/S size specified in
+ the namelist as \np{rn\_???\_avglamscl} and \np{rn\_???\_avgphiscl} in degrees or metres
+ (set using \np{ln\_???\_fp\_indegs})
\end{itemize}
The ??? in the last two options indicate these options should be specified for each observation type for which the averaging is to be performed (see namelist example above).
The \np{nn\_2dint} default option can be overridden for surface observation types using namelist values \np{nn\_2dint\_???} where ??? is one of sla,sst,sss,sic.
+The ??? in the last two options indicates that these options should be specified for each observation type for
+which the averaging is to be performed (see namelist example above).
+The \np{nn\_2dint} default option can be overridden for surface observation types using
+namelist values \np{nn\_2dint\_???} where ??? is one of sla, sst, sss, sic.
Below is some more detail on the various options for interpolation and averaging available in NEMO.
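As a compact summary of the option values, a dispatch table can be sketched. This is not NEMO code; the labels for values 0 to 2 are inferred from the four weight-computation methods described in this section and should be checked against the source, and `is_averaging` is a hypothetical helper:

```python
# Assumed mapping of nn_2dint values to methods; entries 0-2 are inferred.
INTERP_METHODS = {
    0: "great-circle distance-weighted",
    1: "distance-weighted (small-angle approximation)",
    2: "bilinear (regular grid)",
    3: "bilinear remapping (general grid)",
    4: "polynomial",
    5: "radial footprint averaging",
    6: "rectangular footprint averaging",
}

def is_averaging(nn_2dint):
    """Values 0-4 interpolate; values 5 and 6 average over a footprint."""
    if nn_2dint not in INTERP_METHODS:
        raise ValueError("nn_2dint must be between 0 and 6")
    return nn_2dint >= 5
```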
\subsubsection{Horizontal interpolation}
Consider an observation point ${\rm P}$ with
with longitude and latitude $({\lambda_{}}_{\rm P}, \phi_{\rm P})$ and the
four nearest neighbouring model grid points ${\rm A}$, ${\rm B}$, ${\rm C}$
and ${\rm D}$ with longitude and latitude ($\lambda_{\rm A}$, $\phi_{\rm A}$),
($\lambda_{\rm B}$, $\phi_{\rm B}$) etc.
All horizontal interpolation methods implemented in NEMO
estimate the value of a model variable $x$ at point $P$ as
a weighted linear combination of the values of the model
variables at the grid points ${\rm A}$, ${\rm B}$ etc.:
+Consider an observation point ${\rm P}$ with longitude and latitude $({\lambda_{}}_{\rm P}, \phi_{\rm P})$ and
+the four nearest neighbouring model grid points ${\rm A}$, ${\rm B}$, ${\rm C}$ and ${\rm D}$ with
+longitude and latitude ($\lambda_{\rm A}$, $\phi_{\rm A}$), ($\lambda_{\rm B}$, $\phi_{\rm B}$) etc.
+All horizontal interpolation methods implemented in NEMO estimate the value of a model variable $x$ at point $P$ as
+a weighted linear combination of the values of the model variables at the grid points ${\rm A}$, ${\rm B}$ etc.:
\begin{eqnarray}
{x_{}}_{\rm P} & \hspace{2mm} = \hspace{2mm} &
@@ 591,7 +585,6 @@
{w_{}}_{\rm D} {x_{}}_{\rm D} \right)
\end{eqnarray}
where ${w_{}}_{\rm A}$, ${w_{}}_{\rm B}$ etc. are the respective weights for the
model field at points ${\rm A}$, ${\rm B}$ etc., and
$w = {w_{}}_{\rm A} + {w_{}}_{\rm B} + {w_{}}_{\rm C} + {w_{}}_{\rm D}$.
+where ${w_{}}_{\rm A}$, ${w_{}}_{\rm B}$ etc. are the respective weights for the model field at
+points ${\rm A}$, ${\rm B}$ etc., and $w = {w_{}}_{\rm A} + {w_{}}_{\rm B} + {w_{}}_{\rm C} + {w_{}}_{\rm D}$.
Four different possibilities are available for computing the weights.
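The weighted linear combination in the equation above can be sketched generically, independently of how the four weights are computed (the function name is illustrative):

```python
def weighted_estimate(values, weights):
    """Weighted linear combination x_P = (1/w) * sum(w_i * x_i), where
    w is the sum of the weights, as in the equation above."""
    w = sum(weights)
    return sum(wi * xi for wi, xi in zip(weights, values)) / w
```

With equal weights this reduces to the plain average of the four surrounding values.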
@@ 599,9 +592,9 @@
\begin{enumerate}
\item[1.] {\bf GreatCircle distanceweighted interpolation.} The weights
 are computed as a function of the greatcircle distance $s(P, \cdot)$
 between $P$ and the model grid points $A$, $B$ etc. For example,
 the weight given to the field ${x_{}}_{\rm A}$ is specified as the
 product of the distances from ${\rm P}$ to the other points:
+\item[1.] {\bf Great-circle distance-weighted interpolation.}
+ The weights are computed as a function of the great-circle distance $s(P, \cdot)$ between $P$ and
+ the model grid points $A$, $B$ etc.
+ For example, the weight given to the field ${x_{}}_{\rm A}$ is specified as the product of the distances
+ from ${\rm P}$ to the other points:
\begin{eqnarray}
{w_{}}_{\rm A} = s({\rm P}, {\rm B}) \, s({\rm P}, {\rm C}) \, s({\rm P}, {\rm D})
@@ 619,7 +612,6 @@
\end{eqnarray}
and $M$ corresponds to $B$, $C$ or $D$.
 A more stable form of the greatcircle distance formula for
 small distances ($x$ near 1) involves the arcsine function
 ($e.g.$ see p.~101 of \citet{Daley_Barker_Bk01}:
+ A more stable form of the great-circle distance formula for small distances ($x$ near 1)
+ involves the arcsine function ($e.g.$ see p.~101 of \citet{Daley_Barker_Bk01}):
\begin{eqnarray}
s\left( {\rm P}, {\rm M} \right)
@@ 651,7 +643,6 @@
\end{eqnarray}
\item[2.] {\bf GreatCircle distanceweighted interpolation with small angle
 approximation.} Similar to the previous interpolation but with the
 distance $s$ computed as
+\item[2.] {\bf Great-circle distance-weighted interpolation with small-angle approximation.}
+ Similar to the previous interpolation but with the distance $s$ computed as
\begin{eqnarray}
s\left( {\rm P}, {\rm M} \right)
@@ 663,12 +654,11 @@
where $M$ corresponds to $A$, $B$, $C$ or $D$.
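The stable arcsine form of the great-circle distance mentioned above can be sketched with the haversine formula; whether this matches the exact expression used in the OBS Fortran code is an assumption here:

```python
import math

def great_circle(lam_p, phi_p, lam_m, phi_m):
    """Great-circle angular distance (radians) between two points given
    in degrees, using the arcsine (haversine) form, which stays accurate
    for small separations."""
    lam_p, phi_p, lam_m, phi_m = map(math.radians, (lam_p, phi_p, lam_m, phi_m))
    a = (math.sin((phi_p - phi_m) / 2.0) ** 2
         + math.cos(phi_p) * math.cos(phi_m) * math.sin((lam_p - lam_m) / 2.0) ** 2)
    return 2.0 * math.asin(math.sqrt(a))
```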
\item[3.] {\bf Bilinear interpolation for a regular spaced grid.} The
 interpolation is split into two 1D interpolations in the longitude
 and latitude directions, respectively.

\item[4.] {\bf Bilinear remapping interpolation for a general grid.} An
 iterative scheme that involves first mapping a quadrilateral cell
 into a cell with coordinates (0,0), (1,0), (0,1) and (1,1). This
 method is based on the SCRIP interpolation package \citep{Jones_1998}.
+\item[3.] {\bf Bilinear interpolation for a regularly spaced grid.}
+ The interpolation is split into two 1D interpolations in the longitude and latitude directions, respectively.
+
+\item[4.] {\bf Bilinear remapping interpolation for a general grid.}
+ An iterative scheme that involves first mapping a quadrilateral cell into
+ a cell with coordinates (0,0), (1,0), (0,1) and (1,1).
+ This method is based on the SCRIP interpolation package \citep{Jones_1998}.
\end{enumerate}
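Method 3 above, the split into two 1D interpolations on a regularly spaced grid, can be sketched as follows (a planar illustration in longitude/latitude; the function name and argument layout are assumptions):

```python
def bilinear_regular(lam, phi, lam0, lam1, phi0, phi1, f00, f10, f01, f11):
    """Bilinear interpolation on a regularly spaced grid as two 1D linear
    interpolations: first along longitude on the bottom and top cell edges,
    then along latitude between the two results."""
    t = (lam - lam0) / (lam1 - lam0)      # fractional position in longitude
    u = (phi - phi0) / (phi1 - phi0)      # fractional position in latitude
    bottom = (1 - t) * f00 + t * f10      # 1D interpolation along bottom edge
    top = (1 - t) * f01 + t * f11         # 1D interpolation along top edge
    return (1 - u) * bottom + u * top     # 1D interpolation in latitude
```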
@@ 678,12 +668,22 @@
For each surface observation type:
\begin{itemize}
\item The standard gridsearching code is used to find the nearest model grid point to the observation location (see next subsection).
\item The maximum number of grid points is calculated in the local grid domain for which the averaging is likely need to cover.
\item The lats/longs of the grid points surrounding the nearest model grid box are extracted using existing mpi routines.
\item The weights for each grid point associated with each observation are calculated, either for radial or rectangular footprints. For grid points completely within the footprint, the weight is one; for grid points completely outside the footprint, the weight is zero. For grid points which are partly within the footprint the ratio between the area of the footprint within the grid box and the total area of the grid box is used as the weight.
\item The weighted average of the model grid points associated with each observation is calculated, and this is then given as the model counterpart of the observation.
+\item The standard gridsearching code is used to find the nearest model grid point to the observation location
+ (see next subsection).
+\item The maximum number of grid points that the averaging is likely to need to cover is calculated in
+ the local grid domain.
+\item The lats/longs of the grid points surrounding the nearest model grid box are extracted using
+ existing mpi routines.
+\item The weights for each grid point associated with each observation are calculated,
+ either for radial or rectangular footprints.
+ For grid points completely within the footprint, the weight is one;
+ for grid points completely outside the footprint, the weight is zero.
+ For grid points which are partly within the footprint the ratio between the area of the footprint within
+ the grid box and the total area of the grid box is used as the weight.
+\item The weighted average of the model grid points associated with each observation is calculated,
+ and this is then given as the model counterpart of the observation.
\end{itemize}
Examples of the weights calculated for an observation with rectangular and radial footprints are shown in Figs.~\autoref{fig:obsavgrec} and~\autoref{fig:obsavgrad}.
+Examples of the weights calculated for an observation with rectangular and radial footprints are shown in
+Figs.~\autoref{fig:obsavgrec} and~\autoref{fig:obsavgrad}.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 691,5 +691,6 @@
\includegraphics[width=0.90\textwidth]{Fig_OBS_avg_rec}
\caption{ \protect\label{fig:obsavgrec}
Weights associated with each model grid box (blue lines and numbers) for an observation at 170.5E, 56.0N with a rectangular footprint of 1\deg x 1\deg.}
+ Weights associated with each model grid box (blue lines and numbers)
+ for an observation at 170.5E, 56.0N with a rectangular footprint of 1\deg $\times$ 1\deg.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 699,5 +700,6 @@
\includegraphics[width=0.90\textwidth]{Fig_OBS_avg_rad}
\caption{ \protect\label{fig:obsavgrad}
Weights associated with each model grid box (blue lines and numbers) for an observation at 170.5E, 56.0N with a radial footprint with diameter 1\deg.}
+ Weights associated with each model grid box (blue lines and numbers)
+ for an observation at 170.5E, 56.0N with a radial footprint with diameter 1\deg.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 706,25 +708,15 @@
\subsection{Grid search}
For many grids used by the NEMO model, such as the ORCA family,
the horizontal grid coordinates $i$ and $j$ are not simple functions
of latitude and longitude. Therefore, it is not always straightforward
to determine the grid points surrounding any given observational position.
Before the interpolation can be performed, a search
algorithm is then required to determine the corner points of
+For many grids used by the NEMO model, such as the ORCA family, the horizontal grid coordinates $i$ and $j$ are not simple functions of latitude and longitude.
+Therefore, it is not always straightforward to determine the grid points surrounding any given observational position.
+Before the interpolation can be performed, a search algorithm is then required to determine the corner points of
the quadrilateral cell in which the observation is located.
This is the most difficult and time consuming part of the
2D interpolation procedure.
A robust test for determining if an observation falls
within a given quadrilateral cell is as follows. Let
${\rm P}({\lambda_{}}_{\rm P} ,{\phi_{}}_{\rm P} )$ denote the observation point,
and let ${\rm A}({\lambda_{}}_{\rm A} ,{\phi_{}}_{\rm A} )$,
${\rm B}({\lambda_{}}_{\rm B} ,{\phi_{}}_{\rm B} )$,
${\rm C}({\lambda_{}}_{\rm C} ,{\phi_{}}_{\rm C} )$
and
${\rm D}({\lambda_{}}_{\rm D} ,{\phi_{}}_{\rm D} )$ denote
the bottom left, bottom right, top left and top right
corner points of the cell, respectively.
To determine if P is inside
the cell, we verify that the crossproducts
+This is the most difficult and time-consuming part of the 2D interpolation procedure.
+A robust test for determining if an observation falls within a given quadrilateral cell is as follows.
+Let ${\rm P}({\lambda_{}}_{\rm P} ,{\phi_{}}_{\rm P} )$ denote the observation point,
+and let ${\rm A}({\lambda_{}}_{\rm A} ,{\phi_{}}_{\rm A} )$, ${\rm B}({\lambda_{}}_{\rm B} ,{\phi_{}}_{\rm B} )$,
+${\rm C}({\lambda_{}}_{\rm C} ,{\phi_{}}_{\rm C} )$ and ${\rm D}({\lambda_{}}_{\rm D} ,{\phi_{}}_{\rm D} )$
+denote the bottom left, bottom right, top left and top right corner points of the cell, respectively.
+To determine if P is inside the cell, we verify that the crossproducts
\begin{eqnarray}
\begin{array}{lllll}
@@ 752,35 +744,29 @@
\label{eq:cross}
\end{eqnarray}
point in the opposite direction to the unit normal
$\widehat{\bf k}$ (i.e., that the coefficients of
$\widehat{\bf k}$ are negative),
where ${{\bf r}_{}}_{\rm PA}$, ${{\bf r}_{}}_{\rm PB}$,
etc. correspond to the vectors between points P and A,
P and B, etc.. The method used is
similar to the method used in
the SCRIP interpolation package \citep{Jones_1998}.

In order to speed up the grid search, there is the possibility to construct
a lookup table for a user specified resolution. This lookup
table contains the lower and upper bounds on the $i$ and $j$ indices
to be searched for on a regular grid. For each observation position,
the closest point on the regular grid of this position is computed and
the $i$ and $j$ ranges of this point searched to determine the precise
four points surrounding the observation.
+point in the opposite direction to the unit normal $\widehat{\bf k}$
+(i.e., that the coefficients of $\widehat{\bf k}$ are negative),
+where ${{\bf r}_{}}_{\rm PA}$, ${{\bf r}_{}}_{\rm PB}$, etc. correspond to
+the vectors between points P and A, P and B, etc.
+The method used is similar to the method used in the SCRIP interpolation package \citep{Jones_1998}.
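The cross-product containment test above can be sketched as follows; the sign convention (all cross-products on the same side rather than specifically negative) is an assumption of this planar illustration, and the function is not the actual SCRIP-style Fortran routine:

```python
def inside_cell(p, a, b, c, d):
    """Test whether observation point P lies inside the quadrilateral with
    corners A (bottom left), B (bottom right), C (top left), D (top right),
    by checking that P is on the same side of every edge.  Points are
    (lon, lat) tuples in a planar sketch."""
    def cross_z(o, u, v):
        # z-component of (u - o) x (v - o)
        return (u[0] - o[0]) * (v[1] - o[1]) - (u[1] - o[1]) * (v[0] - o[0])
    corners = [a, b, d, c]  # walk the boundary in order
    signs = [cross_z(corners[i], corners[(i + 1) % 4], p) for i in range(4)]
    return all(s >= 0 for s in signs) or all(s <= 0 for s in signs)
```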
+
+In order to speed up the grid search, it is possible to construct a lookup table for a user-specified resolution.
+This lookup table contains the lower and upper bounds on the $i$ and $j$ indices to
+be searched for on a regular grid.
+For each observation position, the closest point on the regular grid is computed, and
+the $i$ and $j$ ranges of this point are searched to determine the precise four points surrounding the observation.
\subsection{Parallel aspects of horizontal interpolation}
\label{subsec:OBS_parallel}
For horizontal interpolation, there is the basic problem that the
observations are unevenly distributed on the globe. In numerical
models, it is common to divide the model grid into subgrids (or
domains) where each subgrid is executed on a single processing element
with explicit message passing for exchange of information along the
domain boundaries when running on a massively parallel processor (MPP)
system. This approach is used by \NEMO.

For observations there is no natural distribution since the
observations are not equally distributed on the globe.
Two options have been made available: 1) geographical distribution;
+For horizontal interpolation, there is the basic problem that
+the observations are unevenly distributed on the globe.
+In numerical models, it is common to divide the model grid into subgrids (or domains) where
+each subgrid is executed on a single processing element with explicit message passing for
+exchange of information along the domain boundaries when running on a massively parallel processor (MPP) system.
+This approach is used by \NEMO.
+
+For observations there is no natural distribution since the observations are not equally distributed on the globe.
+Two options have been made available:
+1) geographical distribution;
and 2) round-robin.
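The second option can be sketched in one line: observations are dealt out to processors in turn, which evens out the per-processor counts regardless of where the observations lie. The function name is illustrative:

```python
def round_robin(n_obs, n_proc):
    """Round-robin assignment: observation i goes to processor i mod n_proc,
    so per-processor counts differ by at most one."""
    return [i % n_proc for i in range(n_obs)]
```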
@@ 791,24 +777,22 @@
\includegraphics[width=10cm,height=12cm,angle=90.]{Fig_ASM_obsdist_local}
\caption{ \protect\label{fig:obslocal}
Example of the distribution of observations with the geographical distribution of observational data.}
+ Example of the distribution of observations with the geographical distribution of observational data.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
This is the simplest option in which the observations are distributed according
to the domain of the gridpoint parallelization. \autoref{fig:obslocal}
shows an example of the distribution of the {\em in situ} data on processors
with a different colour for each observation
on a given processor for a 4 $\times$ 2 decomposition with ORCA2.
+This is the simplest option in which the observations are distributed according to
+the domain of the gridpoint parallelization.
+\autoref{fig:obslocal} shows an example of the distribution of the {\em in situ} data on processors with
+a different colour for each observation on a given processor for a 4 $\times$ 2 decomposition with ORCA2.
The gridpoint domain decomposition is clearly visible on the plot.
The advantage of this approach is that all
information needed for horizontal interpolation is available without
any MPP communication. Of course, this is under the assumption that
we are only using a $2 \times 2$ gridpoint stencil for the interpolation
(e.g., bilinear interpolation). For higher order interpolation schemes this
is no longer valid. A disadvantage with the above scheme is that the number of
observations on each processor can be very different. If the cost of
the actual interpolation is expensive relative to the communication of
data needed for interpolation, this could lead to load imbalance.
+The advantage of this approach is that all information needed for horizontal interpolation is available without
+any MPP communication.
+Of course, this is under the assumption that we are only using a $2 \times 2$ gridpoint stencil for
+the interpolation (e.g., bilinear interpolation).
+For higher order interpolation schemes this is no longer valid.
+A disadvantage with the above scheme is that the number of observations on each processor can be very different.
+If the cost of the actual interpolation is expensive relative to the communication of data needed for interpolation,
+this could lead to load imbalance.
\subsubsection{Roundrobin distribution of observations among processors}
@@ 818,27 +802,23 @@
\includegraphics[width=10cm,height=12cm,angle=90.]{Fig_ASM_obsdist_global}
\caption{ \protect\label{fig:obsglobal}
Example of the distribution of observations with the roundrobin distribution of observational data.}
+ Example of the distribution of observations with the round-robin distribution of observational data.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
An alternative approach is to distribute the observations equally
among processors and use message passing in order to retrieve
the stencil for interpolation. The simplest distribution of the observations
is to distribute them using a roundrobin scheme. \autoref{fig:obsglobal}
shows the distribution of the {\em in situ} data on processors for the
roundrobin distribution of observations with a different colour for
each observation on a given processor for a 4 $\times$ 2 decomposition
with ORCA2 for the same input data as in \autoref{fig:obslocal}.
+An alternative approach is to distribute the observations equally among processors and
+use message passing in order to retrieve the stencil for interpolation.
+The simplest distribution of the observations is to distribute them using a round-robin scheme.
+\autoref{fig:obsglobal} shows the distribution of the {\em in situ} data on processors for
+the round-robin distribution of observations with a different colour for each observation on a given processor for
+a 4 $\times$ 2 decomposition with ORCA2 for the same input data as in \autoref{fig:obslocal}.
The observations are now clearly randomly distributed on the globe.
In order to be able to perform horizontal interpolation in this case,
a subroutine has been developed that retrieves any grid points in the
global space.
+In order to be able to perform horizontal interpolation in this case,
+a subroutine has been developed that retrieves any grid points in the global space.
\subsection{Vertical interpolation operator}
Vertical interpolation is achieved using either a cubic spline or
linear interpolation. For the cubic spline, the top and
bottom boundary conditions for the second derivative of the
interpolating polynomial in the spline are set to zero.
+Vertical interpolation is achieved using either a cubic spline or linear interpolation.
+For the cubic spline, the top and bottom boundary conditions for the second derivative of
+the interpolating polynomial in the spline are set to zero.
At the bottom boundary, this is done using the land-ocean mask.
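The natural cubic spline described above can be sketched as follows. This is an illustrative implementation, not NEMO source code: the zero second-derivative boundary condition at top and bottom corresponds to the "natural" spline, and the function and variable names are invented for the example.

```python
# Illustrative sketch (not NEMO source) of vertical interpolation with a
# natural cubic spline: the second derivative of the interpolating
# polynomial is set to zero at the top and bottom boundaries.

def spline_second_derivs(depths, values):
    """Solve the tridiagonal system for the spline second derivatives,
    with natural boundary conditions (second derivative zero at both ends)."""
    n = len(depths)
    d2 = [0.0] * n           # d2[0] = 0.0 is the natural top boundary
    u = [0.0] * n
    for i in range(1, n - 1):
        sig = (depths[i] - depths[i - 1]) / (depths[i + 1] - depths[i - 1])
        p = sig * d2[i - 1] + 2.0
        d2[i] = (sig - 1.0) / p
        u[i] = ((values[i + 1] - values[i]) / (depths[i + 1] - depths[i])
                - (values[i] - values[i - 1]) / (depths[i] - depths[i - 1]))
        u[i] = (6.0 * u[i] / (depths[i + 1] - depths[i - 1]) - sig * u[i - 1]) / p
    d2[n - 1] = 0.0          # natural bottom boundary
    for i in range(n - 2, 0, -1):
        d2[i] = d2[i] * d2[i + 1] + u[i]
    return d2

def spline_eval(depths, values, d2, z):
    """Evaluate the cubic spline at depth z (bisection locates the interval)."""
    lo, hi = 0, len(depths) - 1
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if depths[mid] > z:
            hi = mid
        else:
            lo = mid
    h = depths[hi] - depths[lo]
    a = (depths[hi] - z) / h
    b = (z - depths[lo]) / h
    return (a * values[lo] + b * values[hi]
            + ((a ** 3 - a) * d2[lo] + (b ** 3 - b) * d2[hi]) * h * h / 6.0)
```

For a profile that is already linear in depth, all second derivatives vanish and the spline reduces exactly to linear interpolation, which makes the boundary condition easy to check.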
@@ 856,22 +836,26 @@
\subsection{Concept}
The obs oper maps model variables to observation space. It is possible to apply this mapping
without running the model. The software which performs this functionality is known as the
\textbf{offline obs oper}. The obs oper is divided into three stages. An initialisation phase,
an interpolation phase and an output phase. The implementation of which is outlined in the
previous sections. During the interpolation phase the offline obs oper populates the model
arrays by reading saved model fields from disk.

There are two ways of exploiting this offline capacity. The first is to mimic the behaviour of
the online system by supplying model fields at regular intervals between the start and the end
of the run. This approach results in a single model counterpart per observation. This kind of
usage produces feedback files the same file format as the online obs oper.
The second is to take advantage of the offline setting in which multiple model counterparts can
be calculated per observation. In this case it is possible to consider all forecasts verifying
at the same time. By forecast, I mean any method which produces an estimate of physical reality
which is not an observed value. In the case of class 4 files this means forecasts, analyses, persisted
analyses and climatological values verifying at the same time. Although the class 4 file format
doesn't account for multiple ensemble members or multiple experiments per observation, it is possible
to include these components in the same or multiple files.
+The obs oper maps model variables to observation space.
+It is possible to apply this mapping without running the model.
+The software which performs this functionality is known as the \textbf{offline obs oper}.
+The obs oper is divided into three stages:
+an initialisation phase, an interpolation phase and an output phase.
+The implementation of these is outlined in the previous sections.
+During the interpolation phase the offline obs oper populates the model arrays by
+reading saved model fields from disk.
+
+There are two ways of exploiting this offline capacity.
+The first is to mimic the behaviour of the online system by supplying model fields at
+regular intervals between the start and the end of the run.
+This approach results in a single model counterpart per observation.
+This kind of usage produces feedback files in the same file format as the online obs oper.
+The second is to take advantage of the offline setting in which
+multiple model counterparts can be calculated per observation.
+In this case it is possible to consider all forecasts verifying at the same time.
+By forecast, we mean any method which produces an estimate of physical reality which is not an observed value.
+In the case of class 4 files this means forecasts, analyses, persisted analyses and
+climatological values verifying at the same time.
+Although the class 4 file format doesn't account for multiple ensemble members or
+multiple experiments per observation, it is possible to include these components in the same or multiple files.
%
@@ 883,8 +867,8 @@
\subsubsection{Building}
In addition to \emph{OPA\_SRC} the offline obs oper requires the inclusion
of the \emph{OOO\_SRC} directory. \emph{OOO\_SRC} contains a replacement \mdl{nemo} and
\mdl{nemogcm} which overwrites the resultant \textbf{nemo.exe}. This is the approach taken
by \emph{SAS\_SRC} and \emph{OFF\_SRC}.
+In addition to \emph{OPA\_SRC} the offline obs oper requires the inclusion of the \emph{OOO\_SRC} directory.
+\emph{OOO\_SRC} contains a replacement \mdl{nemo} and \mdl{nemogcm} which
+overwrites the resultant \textbf{nemo.exe}.
+This is the approach taken by \emph{SAS\_SRC} and \emph{OFF\_SRC}.
%
@@ 898,7 +882,7 @@
\subsubsection{Quick script}
A useful Python utility to control the namelist options can be found in \textbf{OBSTOOLS/OOO}. The
functions which locate model fields and observation files can be manually specified. The package
can be installed by appropriate use of the included setup.py script.
+A useful Python utility to control the namelist options can be found in \textbf{OBSTOOLS/OOO}.
+The functions which locate model fields and observation files can be manually specified.
+The package can be installed by appropriate use of the included setup.py script.
Documentation can be autogenerated by Sphinx by running \emph{make html} in the \textbf{doc} directory.
@@ 908,8 +892,8 @@
%
\subsection{Configuring the offline observation operator}
The observation files and settings understood by \textbf{namobs} have been outlined in the online
obs oper section. In addition there are two further namelists wich control the operation of the offline
obs oper. \textbf{namooo} which controls the input model fields and \textbf{namcl4} which controls the
production of class 4 files.
+The observation files and settings understood by \textbf{namobs} have been outlined in the online obs oper section.
+In addition there are two further namelists which control the operation of the offline obs oper:
+\textbf{namooo}, which controls the input model fields, and \textbf{namcl4}, which
+controls the production of class 4 files.
\subsubsection{Single field}
@@ 917,11 +901,11 @@
In offline mode model arrays are populated at appropriate time steps via input files.
At present, \textbf{tsn} and \textbf{sshn} are populated by the default read routines.
These routines will be expanded upon in future versions to allow the specification of any
model variable. As such, input files must be global versions of the model domain with
+These routines will be expanded upon in future versions to allow the specification of any model variable.
+As such, input files must be global versions of the model domain with
\textbf{votemper}, \textbf{vosaline} and optionally \textbf{sshn} present.
For each field read there must be an entry in the \textbf{namooo} namelist specifying the
name of the file to read and the index along the \emph{time\_counter}. For example, to
read the second time counter from a single file the namelist would be.
+For each field read there must be an entry in the \textbf{namooo} namelist specifying
+the name of the file to read and the index along the \emph{time\_counter}.
+For example, to read the second time counter from a single file the namelist would be:
\begin{forlines}
@@ 939,7 +923,7 @@
\subsubsection{Multiple fields per run}
Model field iteration is controlled via \textbf{nn\_ooo\_freq} which specifies
the number of model steps at which the next field gets read. For example, if
12 hourly fields are to be interpolated in a setup where 288 steps equals 24 hours.
+Model field iteration is controlled via \textbf{nn\_ooo\_freq} which
+specifies the number of model steps at which the next field gets read.
+For example, if 12 hourly fields are to be interpolated in a setup where 288 steps equal 24 hours:
\begin{forlines}
@@ 957,6 +941,6 @@
\end{forlines}
The above namelist will result in feedback files whose first 12 hours contain
the first field of foo.nc and the second 12 hours contain the second field.
+The above namelist will result in feedback files whose first 12 hours contain the first field of foo.nc and
+the second 12 hours contain the second field.
%\begin{framed}
@@ 964,13 +948,12 @@
%\end{framed}
It is easy to see how a collection of fields taken fron a number of files
at different indices can be combined at a particular frequency in time to
generate a pseudo model evolution. As long as all that is needed is a single
model counterpart at a regular interval then namooo is all that needs to
be edited. However, a far more interesting approach can be taken in which
multiple forecasts, analyses, persisted analyses and climatologies are
considered against the same set of observations. For this a slightly more
complicated approach is needed. It is referred to as \emph{Class 4} since
it is the fourth metric defined by the GODAE intercomparison project.
+It is easy to see how a collection of fields taken from a number of files at different indices can be combined at
+a particular frequency in time to generate a pseudo model evolution.
+As long as all that is needed is a single model counterpart at a regular interval,
+\textbf{namooo} is all that needs to be edited.
+However, a far more interesting approach can be taken in which multiple forecasts, analyses, persisted analyses and
+climatologies are considered against the same set of observations.
+For this a slightly more complicated approach is needed.
+It is referred to as \emph{Class 4} since it is the fourth metric defined by the GODAE intercomparison project.
%
@@ 979,17 +962,15 @@
\subsubsection{Multiple model counterparts per observation a.k.a Class 4}
A generalisation of feedback files to allow multiple model components per observation. For a single
observation, as well as previous forecasts verifying at the same time there are also analyses, persisted
analyses and climatologies.


The above namelist performs two basic functions. It organises the fields
given in \textbf{namooo} into groups so that observations can be matched
up multiple times. It also controls the metadata and the output variable
of the class 4 file when a write routine is called.
+Class 4 files are a generalisation of feedback files that allow multiple model counterparts per observation.
+For a single observation, as well as previous forecasts verifying at the same time
+there are also analyses, persisted analyses and climatologies.
+
+
+The above namelist performs two basic functions.
+It organises the fields given in \textbf{namooo} into groups so that observations can be matched up multiple times.
+It also controls the metadata and the output variable of the class 4 file when a write routine is called.
%\begin{framed}
\textbf{Note: ln\_cl4} must be set to \forcode{.true.} in \textbf{namobs}
to use class 4 outputs.
+\textbf{Note: ln\_cl4} must be set to \forcode{.true.} in \textbf{namobs} to use class 4 outputs.
%\end{framed}
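As an illustration, a minimal \textbf{namobs} fragment enabling class 4 output might read as follows; only the class 4 switch is shown and all other \textbf{namobs} entries are omitted.
\begin{forlines}
&namobs
   ln_cl4 = .true.
/
\end{forlines}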
@@ 1004,62 +985,58 @@
\noindent
\linebreak
Much of the namelist is devoted to specifying this convention. The
following namelist settings control the elements of the output
file names. Each should be specified as a single string of character data.
+Much of the namelist is devoted to specifying this convention.
+The following namelist settings control the elements of the output file names.
+Each should be specified as a single string of character data.
\begin{description}
\item[cl4\_prefix]
Prefix for class 4 files e.g. class4
+ Prefix for class 4 files e.g. class4
\item[cl4\_date]
YYYYMMDD validity date
+ YYYYMMDD validity date
\item[cl4\_sys]
The name of the class 4 model system e.g. FOAM
+ The name of the class 4 model system e.g. FOAM
\item[cl4\_cfg]
The name of the class 4 model configuration e.g. orca025
+ The name of the class 4 model configuration e.g. orca025
\item[cl4\_vn]
The name of the class 4 model version e.g. 12.0
+ The name of the class 4 model version e.g. 12.0
\end{description}
\noindent
The kind is specified by the observation type internally to the obs oper. The processor
number is specified internally in NEMO.
+The kind is specified by the observation type internally to the obs oper.
+The processor number is specified internally in NEMO.
\subsubsection{Class 4 file global attributes}
Global attributes necessary to fulfill the class 4 file definition. These
are also useful pieces of information when collaborating with external
partners.
+Global attributes necessary to fulfill the class 4 file definition.
+These are also useful pieces of information when collaborating with external partners.
\begin{description}
\item[cl4\_contact]
Contact email for class 4 files.
+ Contact email for class 4 files.
\item[cl4\_inst]
The name of the producers institution.
+ The name of the producer's institution.
\item[cl4\_cfg]
The name of the class 4 model configuration e.g. orca025
+ The name of the class 4 model configuration e.g. orca025
\item[cl4\_vn]
The name of the class 4 model version e.g. 12.0
+ The name of the class 4 model version e.g. 12.0
\end{description}
\noindent
The obs\_type,
creation date and validity time are specified internally to the obs oper.
+The obs\_type, creation date and validity time are specified internally to the obs oper.
\subsubsection{Class 4 model counterpart configuration}
As seen previously it is possible to perform a single sweep of the
obs oper and specify a collection of model fields equally spaced
along that sweep. In the class 4 case the single sweep is replaced
with multiple sweeps and a certain ammount of book keeping is
needed to ensure each model counterpart makes its way to the
correct piece of memory in the output files.
+As seen previously it is possible to perform a single sweep of the obs oper and
+specify a collection of model fields equally spaced along that sweep.
+In the class 4 case the single sweep is replaced with multiple sweeps and
+a certain amount of bookkeeping is needed to ensure each model counterpart makes its way to
+the correct piece of memory in the output files.
\noindent
\linebreak
In terms of book keeping, the offline obs oper needs to know how many
full sweeps need to be performed. This is specified via the
\textbf{cl4\_match\_len} variable and is the total number of model
counterparts per observation. For example, a 3 forecasts plus 3 persistence
fields plus an analysis field would be 7 counterparts per observation.
+In terms of bookkeeping, the offline obs oper needs to know how many full sweeps need to be performed.
+This is specified via the \textbf{cl4\_match\_len} variable and
+is the total number of model counterparts per observation.
+For example, 3 forecasts plus 3 persistence fields plus an analysis field would be 7 counterparts per observation.
\begin{forlines}
@@ 1067,6 +1044,6 @@
\end{forlines}
Then to correctly allocate a class 4 file the forecast axis must be defined. This
is controlled via \textbf{cl4\_fcst\_len}, which in out above example would be 3.
+Then to correctly allocate a class 4 file the forecast axis must be defined.
+This is controlled via \textbf{cl4\_fcst\_len}, which in our above example would be 3.
\begin{forlines}
@@ 1074,8 +1051,7 @@
\end{forlines}
Then for each model field it is necessary to designate what class 4 variable and
index along the forecast dimension the model counterpart should be stored in the
output file. As well as a value for that lead time in hours, this will be useful
when interpreting the data afterwards.
+Then for each model field it is necessary to designate the class 4 variable and the index along
+the forecast dimension in which the model counterpart should be stored in the output file,
+as well as a value for that lead time in hours, which will be useful when interpreting the data afterwards.
\begin{forlines}
@@ 1086,7 +1062,7 @@
\end{forlines}
In terms of files and indices of fields inside each file the class 4 approach
makes use of the \textbf{namooo} namelist. If our fields are in separate files
with a single field per file our example inputs will be specified.
+In terms of files and indices of fields inside each file the class 4 approach makes use of
+the \textbf{namooo} namelist.
+If our fields are in separate files with a single field per file, our example inputs will be specified as follows:
\begin{forlines}
@@ 1095,6 +1071,5 @@
\end{forlines}
When we combine all of the naming conventions, global attributes and i/o instructions
the class 4 namelist becomes.
+When we combine all of the naming conventions, global attributes and i/o instructions, the class 4 namelist becomes:
\begin{forlines}
@@ 1150,22 +1125,22 @@
\subsubsection{Climatology interpolation}
The climatological counterpart is generated at the start of the run by restarting
the model from climatology through appropriate use of \textbf{namtsd}. To override
the offline observation operator read routine and to take advantage of the restart
settings, specify the first entry in \textbf{cl4\_vars} as "climatology". This will then
pipe the restart from climatology into the output class 4 file. As in every other
class 4 matchup the input file, input index and output index must be specified.
These can be replaced with dummy data since they are not used but they must be
present to cycle through the matchups correctly.
+The climatological counterpart is generated at the start of the run by
+restarting the model from climatology through appropriate use of \textbf{namtsd}.
+To override the offline observation operator read routine and to take advantage of the restart settings,
+specify the first entry in \textbf{cl4\_vars} as ``climatology''.
+This will then pipe the restart from climatology into the output class 4 file.
+As in every other class 4 matchup the input file, input index and output index must be specified.
+These can be replaced with dummy data since they are not used but
+they must be present to cycle through the match-ups correctly.
\subsection{Advanced usage}
In certain cases it may be desirable to include both multiple model fields per
observation window with multiple match ups per observation. This can be achieved
by specifying \textbf{nn\_ooo\_freq} as well as the class 4 settings. Care must
be taken in generating the ooo\_files list such that the files are arranged into
consecutive blocks of single match ups. For example, 2 forecast fields
of 12 hourly data would result in 4 separate read operations but only 2 write
operations, 1 per forecast.
+In certain cases it may be desirable to include both multiple model fields per observation window and
+multiple match-ups per observation.
+This can be achieved by specifying \textbf{nn\_ooo\_freq} as well as the class 4 settings.
+Care must be taken in generating the ooo\_files list such that the files are arranged into
+consecutive blocks of single match-ups.
+For example, 2 forecast fields of 12 hourly data would result in 4 separate read operations but
+only 2 write operations, 1 per forecast.
\begin{forlines}
@@ 1175,7 +1150,6 @@
\end{forlines}
The above notation reveals the internal split between match up iterators and file
iterators. This technique has not been used before so experimentation is needed
before results can be trusted.
+The above notation reveals the internal split between match-up iterators and file iterators.
+This technique has not been used before, so experimentation is needed before results can be trusted.
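The bookkeeping in the example above (2 forecasts of 12-hourly fields giving 4 reads but only 2 writes) can be sketched in a couple of lines. This is an illustrative model of the counting only, not NEMO code.

```python
# Illustrative sketch (not NEMO source): when multiple model fields per
# observation window are combined with class 4 match-ups, every field
# interval triggers a read, but each match-up is written only once.

def count_operations(n_matchups, n_fields_per_matchup):
    reads = n_matchups * n_fields_per_matchup  # one read per field per sweep
    writes = n_matchups                        # one write per forecast match-up
    return reads, writes
```

For the example in the text, 2 forecasts with 2 fields each give 4 read operations and 2 write operations.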
@@ 1187,21 +1161,24 @@
\label{sec:OBS_obsutils}
Some tools for viewing and processing of observation and feedback files are provided in the
NEMO repository for convenience. These include OBSTOOLS which are a collection of Fortran
programs which are helpful to deal with feedback files. They do such tasks as observation file
conversion, printing of file contents, some basic statistical analysis of feedback files. The
other tool is an IDL program called dataplot which uses a graphical interface to visualise
observations and feedback files. OBSTOOLS and dataplot are described in more detail below.
+Some tools for viewing and processing of observation and feedback files are provided in
+the NEMO repository for convenience.
+These include OBSTOOLS, a collection of Fortran programs which are helpful for dealing with feedback files.
+They perform such tasks as observation file conversion, printing of file contents and
+some basic statistical analysis of feedback files.
+The other tool is an IDL program called dataplot which uses a graphical interface to
+visualise observations and feedback files.
+OBSTOOLS and dataplot are described in more detail below.
\subsection{Obstools}
A series of Fortran utilities is provided with NEMO called OBSTOOLS. This are helpful in
handling observation files and the feedback file output from the NEMO observation operator.
+A series of Fortran utilities is provided with NEMO called OBSTOOLS.
+These are helpful in handling observation files and the feedback file output from the NEMO observation operator.
The utilities are as follows
\subsubsection{c4comb}
The program c4comb combines multiple class 4 files produced by individual processors in an
MPI run of NEMO offline obs\_oper into a single class 4 file. The program is called in the following way:
+The program c4comb combines multiple class 4 files produced by individual processors in
+an MPI run of NEMO offline obs\_oper into a single class 4 file.
+The program is called in the following way:
@@ 1213,6 +1190,6 @@
\subsubsection{corio2fb}
The program corio2fb converts profile observation files from the Coriolis format to the
standard feedback format. The program is called in the following way:
+The program corio2fb converts profile observation files from the Coriolis format to the standard feedback format.
+The program is called in the following way:
\footnotesize
@@ 1223,6 +1200,6 @@
\subsubsection{enact2fb}
The program enact2fb converts profile observation files from the ENACT format to the standard
feedback format. The program is called in the following way:
+The program enact2fb converts profile observation files from the ENACT format to the standard feedback format.
+The program is called in the following way:
\footnotesize
@@ 1233,6 +1210,7 @@
\subsubsection{fbcomb}
The program fbcomb combines multiple feedback files produced by individual processors in an
MPI run of NEMO into a single feedback file. The program is called in the following way:
+The program fbcomb combines multiple feedback files produced by individual processors in
+an MPI run of NEMO into a single feedback file.
+The program is called in the following way:
\footnotesize
@@ 1243,6 +1221,6 @@
\subsubsection{fbmatchup}
The program fbmatchup will match observations from two feedback files. The program is called
in the following way:
+The program fbmatchup will match observations from two feedback files.
+The program is called in the following way:
\footnotesize
@@ 1254,6 +1232,6 @@
The program fbprint will print the contents of a feedback file or files to standard output.
Selected information can be output using optional arguments. The program is called in the
following way:
+Selected information can be output using optional arguments.
+The program is called in the following way:
\footnotesize
@@ 1282,6 +1260,6 @@
\subsubsection{fbsel}
The program fbsel will select or subsample observations. The program is called in the
following way:
+The program fbsel will select or subsample observations.
+The program is called in the following way:
\footnotesize
@@ 1292,6 +1270,6 @@
\subsubsection{fbstat}
The program fbstat will output summary statistics in different global areas into a number of
files. The program is called in the following way:
+The program fbstat will output summary statistics in different global areas into a number of files.
+The program is called in the following way:
\footnotesize
@@ 1302,6 +1280,7 @@
\subsubsection{fbthin}
The program fbthin will thin the data to 1 degree resolution. The code could easily be
modified to thin to a different resolution. The program is called in the following way:
+The program fbthin will thin the data to 1 degree resolution.
+The code could easily be modified to thin to a different resolution.
+The program is called in the following way:
\footnotesize
@@ 1312,6 +1291,6 @@
\subsubsection{sla2fb}
The program sla2fb will convert an AVISO SLA format file to feedback format. The program is
called in the following way:
+The program sla2fb will convert an AVISO SLA format file to feedback format.
+The program is called in the following way:
\footnotesize
@@ 1325,6 +1304,6 @@
\subsubsection{vel2fb}
The program vel2fb will convert TAO/PIRATA/RAMA currents files to feedback format. The program
is called in the following way:
+The program vel2fb will convert TAO/PIRATA/RAMA currents files to feedback format.
+The program is called in the following way:
\footnotesize
@@ 1339,7 +1318,8 @@
\subsection{Dataplot}
An IDL program called dataplot is included which uses a graphical interface to visualise
observations and feedback files. It is possible to zoom in, plot individual profiles and
calculate some basic statistics. To plot some data run IDL and then:
+An IDL program called dataplot is included which uses a graphical interface to
+visualise observations and feedback files.
+It is possible to zoom in, plot individual profiles and calculate some basic statistics.
+To plot some data run IDL and then:
\footnotesize
\begin{minted}{idl}
@@ 1347,7 +1327,7 @@
\end{minted}
To read multiple files into dataplot, for example multiple feedback files from different
processors or from different days, the easiest method is to use the spawn command to generate
a list of files which can then be passed to dataplot.
+To read multiple files into dataplot,
+for example multiple feedback files from different processors or from different days,
+the easiest method is to use the spawn command to generate a list of files which can then be passed to dataplot.
\footnotesize
\begin{minted}{idl}
@@ 1357,27 +1337,32 @@
\autoref{fig:obsdataplotmain} shows the main window which is launched when dataplot starts.
This is split into three parts. At the top there is a menu bar which contains a variety of
drop down menus. Areas  zooms into prespecified regions; plot  plots the data as a
timeseries or a TS diagram if appropriate; Find  allows data to be searched; Config  sets
various configuration options.

The middle part is a plot of the geographical location of the observations. This will plot the
observation value, the model background value or observation minus background value depending
on the option selected in the radio button at the bottom of the window. The plotting colour
range can be changed by clicking on the colour bar. The title of the plot gives some basic
information about the date range and depth range shown, the extreme values, and the mean and
rms values. It is possible to zoom in using a dragbox. You may also zoom in or out using the
mouse wheel.

The bottom part of the window controls what is visible in the plot above. There are two bars
which select the level range plotted (for profile data). The other bars below select the date
range shown. The bottom of the figure allows the option to plot the mean, root mean square,
standard deviation or mean square values. As mentioned above you can choose to plot the
observation value, the model background value or observation minus background value. The next
group of radio buttons selects the map projection. This can either be regular latitude
longitude grid, or north or south polar stereographic. The next group of radio buttons will
plot bad observations, switch to salinity and plot density for profile observations. The
rightmost group of buttons will print the plot window as a postscript, save it as png, or exit
from dataplot.
+This is split into three parts.
+At the top there is a menu bar which contains a variety of drop-down menus.
+Areas: zooms into pre-specified regions;
+Plot: plots the data as a time series or a T-S diagram if appropriate;
+Find: allows data to be searched;
+Config: sets various configuration options.
+
+The middle part is a plot of the geographical location of the observations.
+This will plot the observation value, the model background value or observation minus background value depending on
+the option selected in the radio button at the bottom of the window.
+The plotting colour range can be changed by clicking on the colour bar.
+The title of the plot gives some basic information about the date range and depth range shown,
+the extreme values, and the mean and rms values.
+It is possible to zoom in using a dragbox.
+You may also zoom in or out using the mouse wheel.
+
+The bottom part of the window controls what is visible in the plot above.
+There are two bars which select the level range plotted (for profile data).
+The other bars below select the date range shown.
+The bottom of the figure allows the option to plot the mean, root mean square, standard deviation or
+mean square values.
+As mentioned above you can choose to plot the observation value, the model background value or
+observation minus background value.
+The next group of radio buttons selects the map projection.
+This can either be a regular latitude-longitude grid, or north or south polar stereographic.
+The next group of radio buttons will plot bad observations, switch to salinity and
+plot density for profile observations.
+The rightmost group of buttons will print the plot window as a PostScript file, save it as a PNG image, or
+exit from dataplot.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 1386,10 +1371,10 @@
\includegraphics[width=9cm,angle=90.]{Fig_OBS_dataplot_main}
\caption{ \protect\label{fig:obsdataplotmain}
Main window of dataplot.}
+ Main window of dataplot.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
If a profile point is clicked with the mouse button a plot of the observation and background
values as a function of depth (\autoref{fig:obsdataplotprofile}).
+If a profile point is clicked with the mouse button, a plot of the observation and background values as
+a function of depth is displayed (\autoref{fig:obsdataplotprofile}).
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ 1398,5 +1383,5 @@
\includegraphics[width=7cm,angle=90.]{Fig_OBS_dataplot_prof}
\caption{ \protect\label{fig:obsdataplotprofile}
Profile plot from dataplot produced by right clicking on a point in the main window.}
+ Profile plot from dataplot produced by right clicking on a point in the main window.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_SBC.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_SBC.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_SBC.tex (revision 10368)
@@ 18,8 +18,12 @@
The ocean needs six fields as surface boundary condition:
\begin{itemize}
 \item the two components of the surface ocean stress $\left( {\tau _u \;,\;\tau _v} \right)$
 \item the incoming solar and non solar heat fluxes $\left( {Q_{ns} \;,\;Q_{sr} } \right)$
 \item the surface freshwater budget $\left( {\textit{emp}} \right)$
 \item the surface salt flux associated with freezing/melting of seawater $\left( {\textit{sfx}} \right)$
+\item
+ the two components of the surface ocean stress $\left( {\tau _u \;,\;\tau _v} \right)$
+\item
+ the incoming solar and non-solar heat fluxes $\left( {Q_{ns} \;,\;Q_{sr} } \right)$
+\item
+ the surface freshwater budget $\left( {\textit{emp}} \right)$
+\item
+ the surface salt flux associated with freezing/melting of seawater $\left( {\textit{sfx}} \right)$
\end{itemize}
plus an optional field:
@@ 28,43 +32,59 @@
\end{itemize}
Five different ways to provide the first six fields to the ocean are available which
are controlled by namelist \ngn{namsbc} variables: an analytical formulation (\np{ln\_ana}\forcode{ = .true.}),
a flux formulation (\np{ln\_flx}\forcode{ = .true.}), a bulk formulae formulation (CORE
(\np{ln\_blk\_core}\forcode{ = .true.}), CLIO (\np{ln\_blk\_clio}\forcode{ = .true.}) or MFS
\footnote { Note that MFS bulk formulae compute fluxes only for the ocean component}
(\np{ln\_blk\_mfs}\forcode{ = .true.}) bulk formulae) and a coupled or mixed forced/coupled formulation
(exchanges with a atmospheric model via the OASIS coupler) (\np{ln\_cpl} or \np{ln\_mixcpl}\forcode{ = .true.}).
When used ($i.e.$ \np{ln\_apr\_dyn}\forcode{ = .true.}), the atmospheric pressure forces both ocean and ice dynamics.

The frequency at which the forcing fields have to be updated is given by the \np{nn\_fsbc} namelist parameter.
When the fields are supplied from data files (flux and bulk formulations), the input fields
need not be supplied on the model grid. Instead a file of coordinates and weights can
be supplied which maps the data from the supplied grid to the model points
(so called "Interpolation on the Fly", see \autoref{subsec:SBC_iof}).
+Five different ways to provide the first six fields to the ocean are available which are controlled by
+namelist \ngn{namsbc} variables:
+an analytical formulation (\np{ln\_ana}\forcode{ = .true.}),
+a flux formulation (\np{ln\_flx}\forcode{ = .true.}),
+a bulk formulae formulation (CORE (\np{ln\_blk\_core}\forcode{ = .true.}),
+CLIO (\np{ln\_blk\_clio}\forcode{ = .true.}) or
+MFS \footnote { Note that MFS bulk formulae compute fluxes only for the ocean component}
+(\np{ln\_blk\_mfs}\forcode{ = .true.}) bulk formulae) and
+a coupled or mixed forced/coupled formulation (exchanges with an atmospheric model via the OASIS coupler)
+(\np{ln\_cpl} or \np{ln\_mixcpl}\forcode{ = .true.}).
+When used ($i.e.$ \np{ln\_apr\_dyn}\forcode{ = .true.}),
+the atmospheric pressure forces both ocean and ice dynamics.
+
+The frequency at which the forcing fields have to be updated is given by the \np{nn\_fsbc} namelist parameter.
+When the fields are supplied from data files (flux and bulk formulations),
+the input fields need not be supplied on the model grid.
+Instead a file of coordinates and weights can be supplied which maps the data from the supplied grid to
+the model points (so called "Interpolation on the Fly", see \autoref{subsec:SBC_iof}).
If the Interpolation on the Fly option is used, input data belonging to land points (in the native grid),
can be masked to avoid spurious results in proximity of the coasts as large sealand gradients characterize
most of the atmospheric variables.

In addition, the resulting fields can be further modified using several namelist options.
+can be masked to avoid spurious results in proximity of the coasts as
+large sea-land gradients characterize most of the atmospheric variables.
+
+In addition, the resulting fields can be further modified using several namelist options.
These options control
\begin{itemize}
\item the rotation of vector components supplied relative to an eastnorth
coordinate system onto the local grid directions in the model ;
\item the addition of a surface restoring term to observed SST and/or SSS (\np{ln\_ssr}\forcode{ = .true.}) ;
\item the modification of fluxes below icecovered areas (using observed icecover or a seaice model) (\np{nn\_ice}\forcode{ = 0..3}) ;
\item the addition of river runoffs as surface freshwater fluxes or lateral inflow (\np{ln\_rnf}\forcode{ = .true.}) ;
\item the addition of isf melting as lateral inflow (parameterisation) or as fluxes applied at the landice ocean interface (\np{ln\_isf}) ;
\item the addition of a freshwater flux adjustment in order to avoid a mean sealevel drift (\np{nn\_fwb}\forcode{ = 0..2}) ;
\item the transformation of the solar radiation (if provided as daily mean) into a diurnal cycle (\np{ln\_dm2dc}\forcode{ = .true.}) ;
and a neutral drag coefficient can be read from an external wave model (\np{ln\_cdgw}\forcode{ = .true.}).
+\item
+ the rotation of vector components supplied relative to an east-north coordinate system onto
+ the local grid directions in the model;
+\item
+ the addition of a surface restoring term to observed SST and/or SSS (\np{ln\_ssr}\forcode{ = .true.});
+\item
+ the modification of fluxes below ice-covered areas (using observed ice-cover or a sea-ice model)
+ (\np{nn\_ice}\forcode{ = 0..3});
+\item
+ the addition of river runoffs as surface freshwater fluxes or lateral inflow (\np{ln\_rnf}\forcode{ = .true.});
+\item
+ the addition of ice shelf (isf) melting as lateral inflow (parameterisation) or
+ as fluxes applied at the land ice-ocean interface (\np{ln\_isf});
+\item
+ the addition of a freshwater flux adjustment in order to avoid a mean sealevel drift
+ (\np{nn\_fwb}\forcode{ = 0..2});
+\item
+ the transformation of the solar radiation (if provided as daily mean) into a diurnal cycle
+ (\np{ln\_dm2dc}\forcode{ = .true.});
+\item
+ the reading of a neutral drag coefficient from an external wave model (\np{ln\_cdgw}\forcode{ = .true.}).
\end{itemize}
The latter option is possible only if the CORE or MFS bulk formulae are selected.
In this chapter, we first discuss where the surface boundary condition appears in the
model equations. Then we present the five ways of providing the surface boundary condition,
+In this chapter, we first discuss where the surface boundary condition appears in the model equations.
+Then we present the five ways of providing the surface boundary condition,
followed by the description of the atmospheric pressure and the river runoff.
Next the scheme for interpolation on the fly is described.
Finally, the different options that further modify the fluxes applied to the ocean are discussed.
One of these is modification by icebergs (see \autoref{sec:ICB_icebergs}), which act as drifting sources of fresh water.
+One of these is modification by icebergs (see \autoref{sec:ICB_icebergs}),
+which act as drifting sources of fresh water.
Another example of modification is that due to the ice shelf melting/freezing (see \autoref{sec:SBC_isf}),
which provides additional sources of fresh water.
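The switches enumerated above can be sketched as a \ngn{namsbc} namelist extract. This is an illustrative fragment only: the variable names follow the text, but the values shown are assumptions chosen for the example, not recommended defaults.

```fortran
!-----------------------------------------------------------------------
&namsbc        !   Surface Boundary Condition (illustrative extract)
!-----------------------------------------------------------------------
   nn_fsbc     = 5         ! frequency of SBC computation (in time steps)
   ln_ana      = .false.   ! analytical formulation
   ln_flx      = .false.   ! flux formulation
   ln_blk_core = .true.    ! CORE bulk formulae
   ln_blk_clio = .false.   ! CLIO bulk formulae
   ln_blk_mfs  = .false.   ! MFS bulk formulae (ocean fluxes only)
   ln_cpl      = .false.   ! coupled formulation (via the OASIS coupler)
   ln_mixcpl   = .false.   ! mixed forced/coupled formulation
   ln_apr_dyn  = .false.   ! atmospheric pressure forcing of ocean and ice dynamics
   ln_ssr      = .true.    ! surface restoring to observed SST and/or SSS
   nn_ice      = 2         ! treatment of ice-covered areas (0 to 3)
   ln_rnf      = .true.    ! river runoffs as surface freshwater fluxes
   ln_isf      = .false.   ! ice shelf melting/freezing
   nn_fwb      = 0         ! freshwater budget adjustment (0 to 2)
   ln_dm2dc    = .false.   ! diurnal cycle from daily mean solar radiation
   ln_cdgw     = .false.   ! neutral drag coefficient from an external wave model
/
```

Only one of the formulation switches (\np{ln\_ana}, \np{ln\_flx}, one of the bulk options, or the coupled options) is expected to be true in a given run.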
@@ 77,31 +97,32 @@
\label{sec:SBC_general}
The surface ocean stress is the stress exerted by the wind and the seaice
on the ocean. It is applied in \mdl{dynzdf} module as a surface boundary condition of the
computation of the momentum vertical mixing trend (see \autoref{eq:dynzdf_sbc} in \autoref{sec:DYN_zdf}).
As such, it has to be provided as a 2D vector interpolated
onto the horizontal velocity ocean mesh, $i.e.$ resolved onto the model
(\textbf{i},\textbf{j}) direction at $u$ and $v$points.

The surface heat flux is decomposed into two parts, a non solar and a solar heat
flux, $Q_{ns}$ and $Q_{sr}$, respectively. The former is the non penetrative part
of the heat flux ($i.e.$ the sum of sensible, latent and long wave heat fluxes
plus the heat content of the mass exchange with the atmosphere and seaice).
It is applied in \mdl{trasbc} module as a surface boundary condition trend of
the first level temperature time evolution equation (see \autoref{eq:tra_sbc}
and \autoref{eq:tra_sbc_lin} in \autoref{subsec:TRA_sbc}).
The latter is the penetrative part of the heat flux. It is applied as a 3D
trends of the temperature equation (\mdl{traqsr} module) when \np{ln\_traqsr}\forcode{ = .true.}.
The way the light penetrates inside the water column is generally a sum of decreasing
exponentials (see \autoref{subsec:TRA_qsr}).
+The surface ocean stress is the stress exerted by the wind and the sea-ice on the ocean.
+It is applied in the \mdl{dynzdf} module as a surface boundary condition of the computation of
+the momentum vertical mixing trend (see \autoref{eq:dynzdf_sbc} in \autoref{sec:DYN_zdf}).
+As such, it has to be provided as a 2D vector interpolated onto the horizontal velocity ocean mesh,
+$i.e.$ resolved onto the model (\textbf{i},\textbf{j}) directions at $u$- and $v$-points.
+
+The surface heat flux is decomposed into two parts, a non-solar and a solar heat flux,
+$Q_{ns}$ and $Q_{sr}$, respectively.
+The former is the non-penetrative part of the heat flux
+($i.e.$ the sum of sensible, latent and long wave heat fluxes plus
+the heat content of the mass exchange with the atmosphere and sea-ice).
+It is applied in the \mdl{trasbc} module as a surface boundary condition trend of
+the first level temperature time evolution equation
+(see \autoref{eq:tra_sbc} and \autoref{eq:tra_sbc_lin} in \autoref{subsec:TRA_sbc}).
+The latter is the penetrative part of the heat flux.
+It is applied as a 3D trend of the temperature equation (\mdl{traqsr} module) when
+\np{ln\_traqsr}\forcode{ = .true.}.
+The way the light penetrates inside the water column is generally a sum of decreasing exponentials
+(see \autoref{subsec:TRA_qsr}).
The surface freshwater budget is provided by the \textit{emp} field.
It represents the mass flux exchanged with the atmosphere (evaporation minus precipitation)
and possibly with the seaice and ice shelves (freezing minus melting of ice).
It affects both the ocean in two different ways:
$(i)$ it changes the volume of the ocean and therefore appears in the sea surface height
equation as a volume flux, and
$(ii)$ it changes the surface temperature and salinity through the heat and salt contents
of the mass exchanged with the atmosphere, the seaice and the ice shelves.
+It represents the mass flux exchanged with the atmosphere (evaporation minus precipitation) and
+possibly with the sea-ice and ice shelves (freezing minus melting of ice).
+It affects the ocean in two different ways:
+$(i)$ it changes the volume of the ocean and therefore appears in the sea surface height equation as
+a volume flux, and
+$(ii)$ it changes the surface temperature and salinity through the heat and salt contents of
+the mass exchanged with the atmosphere, the sea-ice and the ice shelves.
@@ 129,9 +150,8 @@
%\colorbox{yellow}{End Miss }
The ocean model provides, at each time step, to the surface module (\mdl{sbcmod})
+The ocean model provides, at each time step, to the surface module (\mdl{sbcmod})
the surface currents, temperature and salinity.
These variables are averaged over \np{nn\_fsbc} timestep (\autoref{tab:ssm}),
and it is these averaged fields which are used to computes the surface fluxes
at a frequency of \np{nn\_fsbc} timestep.
+These variables are averaged over \np{nn\_fsbc} time steps (\autoref{tab:ssm}), and
+it is these averaged fields which are used to compute the surface fluxes at a frequency of \np{nn\_fsbc} time steps.
@@ 145,8 +165,8 @@
Sea surface salinity & sss\_m & $psu$ & T \\ \hline
\end{tabular}
\caption{ \protect\label{tab:ssm}
Ocean variables provided by the ocean to the surface module (SBC).
The variable are averaged over nn{\_}fsbc time step,
$i.e.$ the frequency of computation of surface fluxes.}
+\caption{ \protect\label{tab:ssm}
+ Ocean variables provided by the ocean to the surface module (SBC).
+ The variables are averaged over nn{\_}fsbc time steps,
+ $i.e.$ the frequency of computation of surface fluxes.}
\end{center} \end{table}
%
@@ 161,32 +181,36 @@
\label{sec:SBC_input}
A generic interface has been introduced to manage the way input data (2D or 3D fields,
like surface forcing or ocean T and S) are specify in \NEMO. This task is archieved by \mdl{fldread}.
+A generic interface has been introduced to manage the way input data
+(2D or 3D fields, like surface forcing or ocean T and S) are specified in \NEMO.
+This task is achieved by \mdl{fldread}.
The module was designed with four main objectives in mind:
\begin{enumerate}
\item optionally provide a time interpolation of the input data at model timestep,
whatever their input frequency is, and according to the different calendars available in the model.
\item optionally provide an onthefly space interpolation from the native input data grid to the model grid.
\item make the run duration independent from the period cover by the input files.
\item provide a simple user interface and a rather simple developer interface by limiting the
 number of prerequisite information.
+\begin{enumerate}
+\item
+ optionally provide a time interpolation of the input data at the model time step, whatever their input frequency is,
+ and according to the different calendars available in the model.
+\item
+ optionally provide an onthefly space interpolation from the native input data grid to the model grid.
+\item
+ make the run duration independent of the period covered by the input files.
+\item
+ provide a simple user interface and a rather simple developer interface by
+ limiting the amount of prerequisite information.
\end{enumerate}
As a results the user have only to fill in for each variable a structure in the namelist file
to defined the input data file and variable names, the frequency of the data (in hours or months),
whether its is climatological data or not, the period covered by the input file (one year, month, week or day),
and three additional parameters for onthefly interpolation. When adding a new input variable,
the developer has to add the associated structure in the namelist, read this information
by mirroring the namelist read in \rou{sbc\_blk\_init} for example, and simply call \rou{fld\_read}
to obtain the desired input field at the model timestep and grid points.
+As a result, the user only has to fill in, for each variable, a structure in the namelist file to
+define the input data file and variable names, the frequency of the data (in hours or months),
+whether it is climatological data or not, the period covered by the input file (one year, month, week or day),
+and three additional parameters for on-the-fly interpolation.
+When adding a new input variable, the developer has to add the associated structure in the namelist,
+read this information by mirroring the namelist read in \rou{sbc\_blk\_init} for example,
+and simply call \rou{fld\_read} to obtain the desired input field at the model timestep and grid points.
The only constraints are that the input file is a NetCDF file, the file name follows a nomenclature
(see \autoref{subsec:SBC_fldread}), the period it cover is one year, month, week or day, and, if onthefly
interpolation is used, a file of weights must be supplied (see \autoref{subsec:SBC_iof}).

Note that when an input data is archived on a disc which is accessible directly
from the workspace where the code is executed, then the use can set the \np{cn\_dir}
to the pathway leading to the data. By default, the data are assumed to have been
copied so that cn\_dir='./'.
+(see \autoref{subsec:SBC_fldread}), the period it covers is one year, month, week or day, and,
+if on-the-fly interpolation is used, a file of weights must be supplied (see \autoref{subsec:SBC_iof}).
+
+Note that when the input data are archived on a disc which is accessible directly from the workspace where
+the code is executed, the user can set \np{cn\_dir} to the pathway leading to the data.
+By default, the data are assumed to have been copied so that cn\_dir='./'.
% 
@@ 203,9 +227,10 @@
where
\begin{description}
\item[File name]: the stem name of the NetCDF file to be open.
This stem will be completed automatically by the model, with the addition of a '.nc' at its end
and by date information and possibly a prefix (when using AGRIF).
Tab.\autoref{tab:fldread} provides the resulting file name in all possible cases according to whether
it is a climatological file or not, and to the open/close frequency (see below for definition).
+\item[File name]:
+ the stem name of the NetCDF file to be opened.
+ This stem will be completed automatically by the model, with the addition of a '.nc' at its end and
+ by date information and possibly a prefix (when using AGRIF).
+ Tab.\autoref{tab:fldread} provides the resulting file name in all possible cases according to
+ whether it is a climatological file or not, and to the open/close frequency (see below for definition).
%TABLE
@@ 219,81 +244,92 @@
\end{tabular}
\end{center}
\caption{ \protect\label{tab:fldread} naming nomenclature for climatological or interannual input file,
as a function of the Open/close frequency. The stem name is assumed to be 'fn'.
For weekly files, the 'LLL' corresponds to the first three letters of the first day of the week ($i.e.$ 'sun','sat','fri','thu','wed','tue','mon'). The 'YYYY', 'MM' and 'DD' should be replaced by the
actual year/month/day, always coded with 4 or 2 digits. Note that (1) in mpp, if the file is split
over each subdomain, the suffix '.nc' is replaced by '\_PPPP.nc', where 'PPPP' is the
process number coded with 4 digits; (2) when using AGRIF, the prefix
'\_N' is added to files,
where 'N' is the child grid number.}
+\caption{ \protect\label{tab:fldread}
+ naming nomenclature for climatological or interannual input file, as a function of the Open/close frequency.
+ The stem name is assumed to be 'fn'.
+ For weekly files, the 'LLL' corresponds to the first three letters of the first day of the week
+ ($i.e.$ 'sun','sat','fri','thu','wed','tue','mon').
+ The 'YYYY', 'MM' and 'DD' should be replaced by the actual year/month/day, always coded with 4 or 2 digits.
+ Note that (1) in mpp, if the file is split over each subdomain, the suffix '.nc' is replaced by '\_PPPP.nc',
+ where 'PPPP' is the process number coded with 4 digits;
+ (2) when using AGRIF, the prefix '\_N' is added to files, where 'N' is the child grid number.}
\end{table}
%
\item[Record frequency]: the frequency of the records contained in the input file.
Its unit is in hours if it is positive (for example 24 for daily forcing) or in months if negative
(for example 1 for monthly forcing or 12 for annual forcing).
Note that this frequency must really be an integer and not a real.
On some computers, seting it to '24.' can be interpreted as 240!

\item[Variable name]: the name of the variable to be read in the input NetCDF file.

\item[Time interpolation]: a logical to activate, or not, the time interpolation. If set to 'false',
the forcing will have a steplike shape remaining constant during each forcing period.
For example, when using a daily forcing without time interpolation, the forcing remaining
constant from 00h00'00'' to 23h59'59". If set to 'true', the forcing will have a broken line shape.
Records are assumed to be dated the middle of the forcing period.
For example, when using a daily forcing with time interpolation, linear interpolation will
be performed between midday of two consecutive days.

\item[Climatological forcing]: a logical to specify if a input file contains climatological forcing
which can be cycle in time, or an interannual forcing which will requires additional files
if the period covered by the simulation exceed the one of the file. See the above the file
naming strategy which impacts the expected name of the file to be opened.

\item[Open/close frequency]: the frequency at which forcing files must be opened/closed.
Four cases are coded: 'daily', 'weekLLL' (with 'LLL' the first 3 letters of the first day of the week),
'monthly' and 'yearly' which means the forcing files will contain data for one day, one week,
one month or one year. Files are assumed to contain data from the beginning of the open/close period.
For example, the first record of a yearly file containing daily data is Jan 1st even if the experiment
is not starting at the beginning of the year.

\item[Others]: 'weights filename', 'pairing rotation' and 'land/sea mask' are associted with onthefly interpolation
which is described in \autoref{subsec:SBC_iof}.
+\item[Record frequency]:
+ the frequency of the records contained in the input file.
+ Its unit is in hours if it is positive (for example 24 for daily forcing) or in months if negative
+ (for example 1 for monthly forcing or 12 for annual forcing).
+ Note that this frequency must really be an integer and not a real.
+ On some computers, setting it to '24.' can be interpreted as 240!
+
+\item[Variable name]:
+ the name of the variable to be read in the input NetCDF file.
+
+\item[Time interpolation]:
+ a logical to activate, or not, the time interpolation.
+ If set to 'false', the forcing will have a steplike shape remaining constant during each forcing period.
+ For example, when using a daily forcing without time interpolation, the forcing remains constant from
+ 00h00'00'' to 23h59'59''.
+ If set to 'true', the forcing will have a broken line shape.
+ Records are assumed to be dated at the middle of the forcing period.
+ For example, when using a daily forcing with time interpolation,
+ linear interpolation will be performed between midday of two consecutive days.
+
+\item[Climatological forcing]:
+ a logical to specify whether an input file contains climatological forcing which can be cycled in time,
+ or an interannual forcing which will require additional files if
+ the period covered by the simulation exceeds that of the file.
+ See above the file naming strategy, which impacts the expected name of the file to be opened.
+
+\item[Open/close frequency]:
+ the frequency at which forcing files must be opened/closed.
+ Four cases are coded:
+ 'daily', 'weekLLL' (with 'LLL' the first 3 letters of the first day of the week), 'monthly' and 'yearly' which
+ means the forcing files will contain data for one day, one week, one month or one year.
+ Files are assumed to contain data from the beginning of the open/close period.
+ For example, the first record of a yearly file containing daily data is Jan 1st even if
+ the experiment is not starting at the beginning of the year.
+
+\item[Others]:
+ 'weights filename', 'pairing rotation' and 'land/sea mask' are associated with
+ onthefly interpolation which is described in \autoref{subsec:SBC_iof}.
\end{description}
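Putting the fields above together, a single entry of the input-data structure in the namelist might look as follows. This is an illustrative sketch: the structure name (sn\_qsr), the file stem and the variable name are assumptions chosen for the example; the field order follows the description above.

```fortran
!          ! file name   ! frequency (hours) ! variable ! time   ! clim   ! 'yearly'/ ! weights  ! rotation ! land/sea
!          !             ! (if <0  months)   ! name     ! interp ! (T/F)  ! 'monthly' ! filename ! pairing  ! mask
   sn_qsr = 'qsw_daily',          24,          'qsw',    .true.,  .false., 'yearly',     '',        '',       ''
   cn_dir = './'        ! root directory for the location of the input files
```

Here the model would open files named qsw\_daily\_YYYY.nc, read variable 'qsw' as daily records, and interpolate them linearly in time; the last three entries stay empty because no on-the-fly interpolation is requested.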
Additional remarks:\\
(1) The time interpolation is a simple linear interpolation between two consecutive records of
the input data. The only tricky point is therefore to specify the date at which we need to do
the interpolation and the date of the records read in the input files.
Following \citet{Leclair_Madec_OM09}, the date of a time step is set at the middle of the
time step. For example, for an experiment starting at 0h00'00" with a one hour timestep,
+(1) The time interpolation is a simple linear interpolation between two consecutive records of the input data.
+The only tricky point is therefore to specify the date at which we need to do the interpolation and
+the date of the records read in the input files.
+Following \citet{Leclair_Madec_OM09}, the date of a time step is set at the middle of the time step.
+For example, for an experiment starting at 0h00'00" with a one hour timestep,
a time interpolation will be performed at the following time: 0h30'00", 1h30'00", 2h30'00", etc.
However, for forcing data related to the surface module, values are not needed at every
timestep but at every \np{nn\_fsbc} timestep. For example with \np{nn\_fsbc}\forcode{ = 3},
the surface module will be called at timesteps 1, 4, 7, etc. The date used for the time interpolation
is thus redefined to be at the middle of \np{nn\_fsbc} timestep period. In the previous example,
this leads to: 1h30'00", 4h30'00", 7h30'00", etc. \\
(2) For code readablility and maintenance issues, we don't take into account the NetCDF input file
calendar. The calendar associated with the forcing field is build according to the information
provided by user in the record frequency, the open/close frequency and the type of temporal interpolation.
For example, the first record of a yearly file containing daily data that will be interpolated in time
is assumed to be start Jan 1st at 12h00'00" and end Dec 31st at 12h00'00". \\
(3) If a time interpolation is requested, the code will pick up the needed data in the previous (next) file
when interpolating data with the first (last) record of the open/close period.
+However, for forcing data related to the surface module,
+values are not needed at every timestep but at every \np{nn\_fsbc} timestep.
+For example with \np{nn\_fsbc}\forcode{ = 3}, the surface module will be called at timesteps 1, 4, 7, etc.
+The date used for the time interpolation is thus redefined to be at the middle of \np{nn\_fsbc} timestep period.
+In the previous example, this leads to: 1h30'00", 4h30'00", 7h30'00", etc. \\
+(2) For code readability and maintenance issues, we don't take into account the NetCDF input file calendar.
+The calendar associated with the forcing field is built according to the information provided by
+the user in the record frequency, the open/close frequency and the type of temporal interpolation.
+For example, the first record of a yearly file containing daily data that will be interpolated in time is assumed to
+start on Jan 1st at 12h00'00" and end on Dec 31st at 12h00'00". \\
+(3) If a time interpolation is requested, the code will pick up the needed data in the previous (next) file when
+interpolating data with the first (last) record of the open/close period.
For example, if the input file specifications are ''yearly, containing daily data to be interpolated in time'',
the values given by the code between 00h00'00" and 11h59'59" on Jan 1st will be interpolated values
between Dec 31st 12h00'00" and Jan 1st 12h00'00". If the forcing is climatological, Dec and Jan will
be keepup from the same year. However, if the forcing is not climatological, at the end of the
open/close period the code will automatically close the current file and open the next one.
Note that, if the experiment is starting (ending) at the beginning (end) of an open/close period
we do accept that the previous (next) file is not existing. In this case, the time interpolation
will be performed between two identical values. For example, when starting an experiment on
Jan 1st of year Y with yearly files and daily data to be interpolated, we do accept that the file
related to year Y1 is not existing. The value of Jan 1st will be used as the missing one for
Dec 31st of year Y1. If the file of year Y1 exists, the code will read its last record.
Therefore, this file can contain only one record corresponding to Dec 31st, a useful feature for
user considering that it is too heavy to manipulate the complete file for year Y1.
+the values given by the code between 00h00'00" and 11h59'59" on Jan 1st will be interpolated values between
+Dec 31st 12h00'00" and Jan 1st 12h00'00".
+If the forcing is climatological, Dec and Jan will be taken from the same year.
+However, if the forcing is not climatological, at the end of
+the open/close period the code will automatically close the current file and open the next one.
+Note that, if the experiment starts (ends) at the beginning (end) of
+an open/close period, we do accept that the previous (next) file does not exist.
+In this case, the time interpolation will be performed between two identical values.
+For example, when starting an experiment on Jan 1st of year Y with yearly files and daily data to be interpolated,
+we do accept that the file related to year Y-1 does not exist.
+The value of Jan 1st will be used as the missing one for Dec 31st of year Y-1.
+If the file of year Y-1 exists, the code will read its last record.
+Therefore, this file can contain only one record corresponding to Dec 31st,
+a useful feature for users who consider it too heavy to manipulate the complete file for year Y-1.
@@ 304,23 +340,21 @@
\label{subsec:SBC_iof}
Interpolation on the Fly allows the user to supply input files required
for the surface forcing on grids other than the model grid.
To do this he or she must supply, in addition to the source data file,
a file of weights to be used to interpolate from the data grid to the model grid.
The original development of this code used the SCRIP package (freely available
\href{http://climate.lanl.gov/Software/SCRIP}{here} under a copyright agreement).
In principle, any package can be used to generate the weights, but the
variables in the input weights file must have the same names and meanings as
assumed by the model.
+Interpolation on the Fly allows the user to supply input files required for the surface forcing on
+grids other than the model grid.
+To do this he or she must supply, in addition to the source data file, a file of weights to be used to
+interpolate from the data grid to the model grid.
+The original development of this code used the SCRIP package
+(freely available \href{http://climate.lanl.gov/Software/SCRIP}{here} under a copyright agreement).
+In principle, any package can be used to generate the weights, but the variables in
+the input weights file must have the same names and meanings as assumed by the model.
Two methods are currently available: bilinear and bicubic interpolation.
Prior to the interpolation, providing a land/sea mask file, the user can decide to
 remove land points from the input file and substitute the corresponding values
with the average of the 8 neighbouring points in the native external grid.
 Only "sea points" are considered for the averaging. The land/sea mask file must
be provided in the structure associated with the input variable.
 The netcdf land/sea mask variable name must be 'LSM' it must have the same
horizontal and vertical dimensions of the associated variable and should
be equal to 1 over land and 0 elsewhere.
The procedure can be recursively applied setting nn\_lsm > 1 in namsbc namelist.
+Prior to the interpolation, if a land/sea mask file is provided, the user can decide to remove land points from
+the input file and substitute the corresponding values with the average of the 8 neighbouring points in
+the native external grid.
+Only "sea points" are considered for the averaging.
+The land/sea mask file must be provided in the structure associated with the input variable.
+The netcdf land/sea mask variable name must be 'LSM'; it must have the same horizontal and vertical dimensions as
+the associated variable and should be equal to 1 over land and 0 elsewhere.
+The procedure can be applied recursively by setting nn\_lsm > 1 in the namsbc namelist.
Note that nn\_lsm=0 forces the code to not apply the procedure even if a file for land/sea mask is supplied.
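The 8-neighbour averaging described above can be sketched as follows. This is a minimal Python illustration, not the NEMO Fortran code; the function name and array layout are assumptions made for the example:

```python
import numpy as np

def fill_land_points(field, lsm):
    """Replace land values with the average of the 8 surrounding sea points.

    Illustrative sketch of the procedure described in the text (not NEMO code):
    `field` is a 2-D data array on the native external grid and `lsm` is the
    land/sea mask (1 over land, 0 elsewhere). Only sea points enter the
    average; land points with no sea neighbour are left unchanged (a further
    pass, as with nn_lsm > 1, would eventually fill them).
    """
    out = field.copy()
    nj, ni = field.shape
    for j in range(nj):
        for i in range(ni):
            if lsm[j, i] == 1:                      # land point to be filled
                vals = []
                for dj in (-1, 0, 1):
                    for di in (-1, 0, 1):
                        if dj == di == 0:
                            continue
                        jj, ii = j + dj, i + di
                        if 0 <= jj < nj and 0 <= ii < ni and lsm[jj, ii] == 0:
                            vals.append(field[jj, ii])
                if vals:                            # only sea neighbours count
                    out[j, i] = np.mean(vals)
    return out
```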
@@ 328,17 +362,16 @@
\label{subsec:SBC_iof_bilinear}
The input weights file in this case has two sets of variables: src01, src02,
src03, src04 and wgt01, wgt02, wgt03, wgt04.
The "src" variables correspond to the point in the input grid to which the weight
"wgt" is to be applied. Each src value is an integer corresponding to the index of a
point in the input grid when written as a one dimensional array. For example, for an input grid
of size 5x10, point (3,2) is referenced as point 8, since (21)*5+3=8.
+The input weights file in this case has two sets of variables:
+src01, src02, src03, src04 and wgt01, wgt02, wgt03, wgt04.
+The "src" variables correspond to the point in the input grid to which the weight "wgt" is to be applied.
+Each src value is an integer corresponding to the index of a point in the input grid when
+written as a one dimensional array.
+For example, for an input grid of size 5x10, point (3,2) is referenced as point 8, since (2-1)*5+3=8.
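The index calculation in this example can be expressed as the following Python helper (illustrative only, not part of NEMO; indices are 1-based as in the text):

```python
def src_index(i, j, ni):
    """1-based index of point (i, j) in the input grid written as a 1-D array.

    `ni` is the first (fastest-varying) dimension of the input grid.
    Illustrative helper, not part of the NEMO code.
    """
    return (j - 1) * ni + i
```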
There are four of each variable because bilinear interpolation uses the four points defining
the grid box containing the point to be interpolated.
All of these arrays are on the model grid, so that values src01(i,j) and
wgt01(i,j) are used to generate a value for point (i,j) in the model.
+All of these arrays are on the model grid, so that values src01(i,j) and wgt01(i,j) are used to
+generate a value for point (i,j) in the model.
Symbolically, the algorithm used is:

\begin{equation}
f_{m}(i,j) = f_{m}(i,j) + \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))}
@@ 361,15 +394,17 @@
\end{split}
\end{equation*}
The gradients here are taken with respect to the horizontal indices and not distances since the spatial dependency has been absorbed into the weights.
+The gradients here are taken with respect to the horizontal indices and not distances since
+the spatial dependency has been absorbed into the weights.
\subsubsection{Implementation}
\label{subsec:SBC_iof_imp}
To activate this option, a nonempty string should be supplied in the weights filename column
of the relevant namelist; if this is left as an empty string no action is taken.
In the model, weights files are read in and stored in a structured type (WGT) in the fldread
module, as and when they are first required.
This initialisation procedure determines whether the input data grid should be treated
as cyclical or not by inspecting a global attribute stored in the weights input file.
+To activate this option, a non-empty string should be supplied in
+the weights filename column of the relevant namelist;
+if this is left as an empty string no action is taken.
+In the model, weights files are read in and stored in a structured type (WGT) in the fldread module,
+as and when they are first required.
+This initialisation procedure determines whether the input data grid should be treated as cyclical or not by
+inspecting a global attribute stored in the weights input file.
This attribute must be called "ew\_wrap" and be of integer type.
If it is negative, the input non-model grid is assumed not to be cyclic.
@@ 378,23 +413,22 @@
if longitudes are 0.5, 2.5, .... , 358.5, 360.5, 362.5, ew\_wrap should be 2.
If the model does not find attribute ew\_wrap, then a value of -999 is assumed.
In this case the \rou{fld\_read} routine defaults ew\_wrap to value 0 and therefore the grid
is assumed to be cyclic with no overlapping columns.
+In this case the \rou{fld\_read} routine defaults ew\_wrap to value 0 and
+therefore the grid is assumed to be cyclic with no overlapping columns.
(In fact this only matters when bicubic interpolation is required.)
Note that no testing is done to check the validity in the model, since there is no way
of knowing the name used for the longitude variable,
+Note that no testing is done to check the validity in the model,
+since there is no way of knowing the name used for the longitude variable,
so it is up to the user to make sure his or her data is correctly represented.
Next the routine reads in the weights.
Bicubic interpolation is assumed if it finds a variable with name "src05", otherwise
bilinear interpolation is used. The WGT structure includes dynamic arrays both for
the storage of the weights (on the model grid), and when required, for reading in
the variable to be interpolated (on the input data grid).
The size of the input data array is determined by examining the values in the "src"
arrays to find the minimum and maximum i and j values required.
Since bicubic interpolation requires the calculation of gradients at each point on the grid,
+Bicubic interpolation is assumed if it finds a variable with name "src05", otherwise bilinear interpolation is used.
+The WGT structure includes dynamic arrays both for the storage of the weights (on the model grid),
+and when required, for reading in the variable to be interpolated (on the input data grid).
+The size of the input data array is determined by examining the values in the "src" arrays to
+find the minimum and maximum i and j values required.
+Since bicubic interpolation requires the calculation of gradients at each point on the grid,
the corresponding arrays are dimensioned with a halo of width one grid point all the way around.
When the array of points from the data file is adjacent to an edge of the data grid,
the halo is either a copy of the row/column next to it (noncyclical case), or is a copy
of one from the first few columns on the opposite side of the grid (cyclical case).
+When the array of points from the data file is adjacent to an edge of the data grid,
+the halo is either a copy of the row/column next to it (noncyclical case),
+or is a copy of one from the first few columns on the opposite side of the grid (cyclical case).
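The halo construction described above can be sketched as follows. This is a minimal Python illustration under the stated assumptions; the function name and the `overlap` parameter (cf. ew\_wrap) are simplifications of the actual WGT handling in NEMO:

```python
import numpy as np

def add_halo(data, cyclic, overlap=0):
    """Pad a 2-D input-grid array with a one-point halo.

    Illustrative sketch (not the NEMO code): for a cyclic grid the left/right
    halo columns are copies of columns from the opposite side, offset by the
    number of overlapping columns (`overlap`); for a non-cyclic grid each halo
    column simply duplicates its neighbour. Top and bottom rows are always
    duplicated, since the cyclic condition only applies left and right.
    """
    nj, ni = data.shape
    out = np.empty((nj + 2, ni + 2), dtype=data.dtype)
    out[1:-1, 1:-1] = data
    if cyclic:
        out[1:-1, 0] = data[:, ni - 1 - overlap]   # left halo from right side
        out[1:-1, -1] = data[:, overlap]           # right halo from left side
    else:
        out[1:-1, 0] = data[:, 0]                  # duplicate edge columns
        out[1:-1, -1] = data[:, -1]
    out[0, :] = out[1, :]                          # duplicate top/bottom rows
    out[-1, :] = out[-2, :]
    return out
```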
\subsubsection{Limitations}
@@ 402,12 +436,17 @@
\begin{enumerate}
\item The case where input data grids are not logically rectangular has not been tested.
\item This code is not guaranteed to produce positive definite answers from positive definite inputs
 when a bicubic interpolation method is used.
\item The cyclic condition is only applied on left and right columns, and not to top and bottom rows.
\item The gradients across the ends of a cyclical grid assume that the grid spacing between
 the two columns involved are consistent with the weights used.
\item Neither interpolation scheme is conservative. (There is a conservative scheme available
 in SCRIP, but this has not been implemented.)
+\item
+ The case where input data grids are not logically rectangular has not been tested.
+\item
+ This code is not guaranteed to produce positive definite answers from positive definite inputs when
+ a bicubic interpolation method is used.
+\item
+ The cyclic condition is only applied on left and right columns, and not to top and bottom rows.
+\item
+ The gradients across the ends of a cyclical grid assume that the grid spacing between
+ the two columns involved is consistent with the weights used.
+\item
+ Neither interpolation scheme is conservative. (There is a conservative scheme available in SCRIP,
+ but this has not been implemented.)
\end{enumerate}
@@ 416,5 +455,5 @@
% to be completed
A set of utilities to create a weights file for a rectilinear input grid is available
+A set of utilities to create a weights file for a rectilinear input grid is available
(see the directory NEMOGCM/TOOLS/WEIGHTS).
@@ 430,15 +469,20 @@
%
In some circumstances it may be useful to avoid calculating the 3D temperature, salinity and velocity fields
and simply read them in from a previous run or receive them from OASIS.
+In some circumstances it may be useful to avoid calculating the 3D temperature,
+salinity and velocity fields and simply read them in from a previous run or receive them from OASIS.
For example:
\begin{itemize}
\item Multiple runs of the model are required in code development to see the effect of different algorithms in
 the bulk formulae.
\item The effect of different parameter sets in the ice model is to be examined.
\item Development of seaice algorithms or parameterizations.
\item spinup of the iceberg floats
\item ocean/seaice simulation with both media running in parallel (\np{ln\_mixcpl}\forcode{ = .true.})
+\item
+ Multiple runs of the model are required in code development to
+ see the effect of different algorithms in the bulk formulae.
+\item
+ The effect of different parameter sets in the ice model is to be examined.
+\item
+ Development of sea-ice algorithms or parameterizations.
+\item
+ Spin-up of the iceberg floats.
+\item
+ Ocean/sea-ice simulation with both media running in parallel (\np{ln\_mixcpl}\forcode{ = .true.})
\end{itemize}
@@ 446,20 +490,35 @@
Its options are defined through the \ngn{namsbc\_sas} namelist variables.
A new copy of the model has to be compiled with a configuration based on ORCA2\_SAS\_LIM.
However no namelist parameters need be changed from the settings of the previous run (except perhaps nn{\_}date0)
+However, no namelist parameters need be changed from the settings of the previous run (except perhaps nn{\_}date0).
In this configuration, a few routines in the standard model are overriden by new versions.
Routines replaced are:
\begin{itemize}
\item \mdl{nemogcm} : This routine initialises the rest of the model and repeatedly calls the stp time stepping routine (\mdl{step})
 Since the ocean state is not calculated all associated initialisations have been removed.
\item \mdl{step} : The main time stepping routine now only needs to call the sbc routine (and a few utility functions).
\item \mdl{sbcmod} : This has been cut down and now only calculates surface forcing and the ice model required. New surface modules
 that can function when only the surface level of the ocean state is defined can also be added (e.g. icebergs).
\item \mdl{daymod} : No ocean restarts are read or written (though the ice model restarts are retained), so calls to restart functions
 have been removed. This also means that the calendar cannot be controlled by time in a restart file, so the user
 must make sure that nn{\_}date0 in the model namelist is correct for his or her purposes.
\item \mdl{stpctl} : Since there is no free surface solver, references to it have been removed from \rou{stp\_ctl} module.
\item \mdl{diawri} : All 3D data have been removed from the output. The surface temperature, salinity and velocity components (which
 have been read in) are written along with relevant forcing and ice data.
+\item
+ \mdl{nemogcm}:
+ This routine initialises the rest of the model and repeatedly calls the stp time stepping routine (\mdl{step}).
+ Since the ocean state is not calculated, all associated initialisations have been removed.
+\item
+ \mdl{step}:
+ The main time stepping routine now only needs to call the sbc routine (and a few utility functions).
+\item
+ \mdl{sbcmod}:
+ This has been cut down and now only calculates the surface forcing and, where required, the ice model.
+ New surface modules that can function when only the surface level of the ocean state is defined can also be added
+ (e.g. icebergs).
+\item
+ \mdl{daymod}:
+ No ocean restarts are read or written (though the ice model restarts are retained),
+ so calls to restart functions have been removed.
+ This also means that the calendar cannot be controlled by time in a restart file,
+ so the user must make sure that nn{\_}date0 in the model namelist is correct for his or her purposes.
+\item
+ \mdl{stpctl}:
+ Since there is no free surface solver, references to it have been removed from \rou{stp\_ctl} module.
+\item
+ \mdl{diawri}:
+ All 3D data have been removed from the output.
+ The surface temperature, salinity and velocity components (which have been read in) are written along with
+ relevant forcing and ice data.
\end{itemize}
@@ 467,8 +526,13 @@
\begin{itemize}
\item \mdl{sbcsas} : This module initialises the input files needed for reading temperature, salinity and velocity arrays at the surface.
 These filenames are supplied in namelist namsbc{\_}sas. Unfortunately because of limitations with the \mdl{iom} module,
 the full 3D fields from the mean files have to be read in and interpolated in time, before using just the top level.
 Since fldread is used to read in the data, Interpolation on the Fly may be used to change input data resolution.
+\item
+ \mdl{sbcsas}:
+ This module initialises the input files needed for reading temperature, salinity and
+ velocity arrays at the surface.
+ These filenames are supplied in namelist namsbc{\_}sas.
+ Unfortunately, because of limitations with the \mdl{iom} module,
+ the full 3D fields from the mean files have to be read in and interpolated in time,
+ before using just the top level.
+ Since fldread is used to read in the data, Interpolation on the Fly may be used to change input data resolution.
\end{itemize}
@@ 492,15 +556,15 @@
The analytical formulation of the surface boundary condition is the default scheme.
In this case, all the six fluxes needed by the ocean are assumed to
be uniform in space. They take constant values given in the namelist
\ngn{namsbc{\_}ana} by the variables \np{rn\_utau0}, \np{rn\_vtau0}, \np{rn\_qns0},
\np{rn\_qsr0}, and \np{rn\_emp0} ($\textit{emp}=\textit{emp}_S$). The runoff is set to zero.
In addition, the wind is allowed to reach its nominal value within a given number
of time steps (\np{nn\_tau000}).

If a user wants to apply a different analytical forcing, the \mdl{sbcana}
module can be modified to use another scheme. As an example,
the \mdl{sbc\_ana\_gyre} routine provides the analytical forcing for the
GYRE configuration (see GYRE configuration manual, in preparation).
+In this case, all six fluxes needed by the ocean are assumed to be uniform in space.
+They take constant values given in the namelist \ngn{namsbc{\_}ana} by
+the variables \np{rn\_utau0}, \np{rn\_vtau0}, \np{rn\_qns0}, \np{rn\_qsr0}, and \np{rn\_emp0}
+($\textit{emp}=\textit{emp}_S$).
+The runoff is set to zero.
+In addition, the wind is allowed to reach its nominal value within a given number of time steps (\np{nn\_tau000}).
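If a linear ramp is assumed (the actual shape of the spin-up used in \mdl{sbcana} is not specified here, so this is only an illustrative guess), the gradual increase of the wind stress over \np{nn\_tau000} time steps can be sketched as:

```python
def ramped_wind_stress(kt, rn_utau0, nn_tau000):
    """Wind stress spun up over nn_tau000 time steps.

    Assumed linear ramp from 0 at the first time step to the nominal value
    rn_utau0 once kt >= nn_tau000; illustrative only, not the NEMO code.
    """
    return rn_utau0 * min(1.0, kt / nn_tau000)
```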
+
+If a user wants to apply a different analytical forcing,
+the \mdl{sbcana} module can be modified to use another scheme.
+As an example, the \mdl{sbc\_ana\_gyre} routine provides the analytical forcing for the GYRE configuration
+(see GYRE configuration manual, in preparation).
@@ 515,14 +579,13 @@
%
In the flux formulation (\np{ln\_flx}\forcode{ = .true.}), the surface boundary
condition fields are directly read from input files. The user has to define
in the namelist \ngn{namsbc{\_}flx} the name of the file, the name of the variable
read in the file, the time frequency at which it is given (in hours), and a logical
setting whether a time interpolation to the model time step is required
for this field. See \autoref{subsec:SBC_fldread} for a more detailed description of the parameters.

Note that in general, a flux formulation is used in associated with a
restoring term to observed SST and/or SSS. See \autoref{subsec:SBC_ssr} for its
specification.
+In the flux formulation (\np{ln\_flx}\forcode{ = .true.}),
+the surface boundary condition fields are directly read from input files.
+The user has to define in the namelist \ngn{namsbc{\_}flx} the name of the file,
+the name of the variable read in the file, the time frequency at which it is given (in hours),
+and a logical indicating whether a time interpolation to the model time step is required for this field.
+See \autoref{subsec:SBC_fldread} for a more detailed description of the parameters.
+
+Note that in general, a flux formulation is used in association with a restoring term to observed SST and/or SSS.
+See \autoref{subsec:SBC_ssr} for its specification.
@@ 534,14 +597,16 @@
\label{sec:SBC_blk}
In the bulk formulation, the surface boundary condition fields are computed
using bulk formulae and atmospheric fields and ocean (and ice) variables.

The atmospheric fields used depend on the bulk formulae used. Three bulk formulations
are available : the CORE, the CLIO and the MFS bulk formulea. The choice is made by setting to true
one of the following namelist variable : \np{ln\_core} ; \np{ln\_clio} or \np{ln\_mfs}.

Note : in forced mode, when a seaice model is used, a bulk formulation (CLIO or CORE) have to be used.
Therefore the two bulk (CLIO and CORE) formulea include the computation of the fluxes over both
an ocean and an ice surface.
+In the bulk formulation, the surface boundary condition fields are computed using bulk formulae,
+atmospheric fields, and ocean (and ice) variables.
+
+The atmospheric fields used depend on the bulk formulae used.
+Three bulk formulations are available:
+the CORE, the CLIO and the MFS bulk formulae.
+The choice is made by setting to true one of the following namelist variables:
+\np{ln\_core}, \np{ln\_clio} or \np{ln\_mfs}.
+
+Note:
+in forced mode, when a sea-ice model is used, a bulk formulation (CLIO or CORE) has to be used.
+Therefore the two bulk (CLIO and CORE) formulae include the computation of the fluxes over
+both an ocean and an ice surface.
% 
@@ 555,15 +620,13 @@
%
The CORE bulk formulae have been developed by \citet{Large_Yeager_Rep04}.
They have been designed to handle the CORE forcing, a mixture of NCEP
reanalysis and satellite data. They use an inertial dissipative method to compute
the turbulent transfer coefficients (momentum, sensible heat and evaporation)
from the 10 metre wind speed, air temperature and specific humidity.
This \citet{Large_Yeager_Rep04} dataset is available through the
\href{http://nomads.gfdl.noaa.gov/nomads/forms/mom4/CORE.html}{GFDL web site}.

Note that substituting ERA40 to NCEP reanalysis fields
does not require changes in the bulk formulea themself.
This is the socalled DRAKKAR Forcing Set (DFS) \citep{Brodeau_al_OM09}.
+The CORE bulk formulae have been developed by \citet{Large_Yeager_Rep04}.
+They have been designed to handle the CORE forcing, a mixture of NCEP reanalysis and satellite data.
+They use an inertial dissipative method to compute the turbulent transfer coefficients
+(momentum, sensible heat and evaporation) from the 10 metre wind speed, air temperature and specific humidity.
+This \citet{Large_Yeager_Rep04} dataset is available through
+the \href{http://nomads.gfdl.noaa.gov/nomads/forms/mom4/CORE.html}{GFDL web site}.
+
+Note that substituting ERA40 for the NCEP reanalysis fields does not require changes in the bulk formulae themselves.
+This is the socalled DRAKKAR Forcing Set (DFS) \citep{Brodeau_al_OM09}.
Options are defined through the \ngn{namsbc\_core} namelist variables.
@@ 589,12 +652,11 @@
%
Note that the air velocity is provided at a tracer ocean point, not at a velocity ocean
point ($u$ and $v$points). It is simpler and faster (less fields to be read),
but it is not the recommended method when the ocean grid size is the same
or larger than the one of the input atmospheric fields.

The \np{sn\_wndi}, \np{sn\_wndj}, \np{sn\_qsr}, \np{sn\_qlw}, \np{sn\_tair}, \np{sn\_humi},
\np{sn\_prec}, \np{sn\_snow}, \np{sn\_tdif} parameters describe the fields
and the way they have to be used (spatial and temporal interpolations).
+Note that the air velocity is provided at a tracer ocean point, not at a velocity ocean point ($u$- and $v$-points).
+It is simpler and faster (fewer fields to be read), but it is not the recommended method when
+the ocean grid size is the same as or larger than that of the input atmospheric fields.
+
+The \np{sn\_wndi}, \np{sn\_wndj}, \np{sn\_qsr}, \np{sn\_qlw}, \np{sn\_tair}, \np{sn\_humi}, \np{sn\_prec},
+\np{sn\_snow}, \np{sn\_tdif} parameters describe the fields and the way they have to be used
+(spatial and temporal interpolations).
\np{cn\_dir} is the directory location of the bulk files
@@ 603,10 +665,10 @@
\np{rn\_zu} is the height of the wind measurements (m)
Three multiplicative factors are availables :
\np{rn\_pfac} and \np{rn\_efac} allows to adjust (if necessary) the global freshwater budget
by increasing/reducing the precipitations (total and snow) and or evaporation, respectively.
The third one,\np{rn\_vfac}, control to which extend the ice/ocean velocities are taken into account
in the calculation of surface wind stress. Its range should be between zero and one,
and it is recommended to set it to 0.
+Three multiplicative factors are available:
+\np{rn\_pfac} and \np{rn\_efac} allow the global freshwater budget to be adjusted (if necessary) by
+increasing/reducing the precipitation (total and snow) and/or the evaporation, respectively.
+The third one, \np{rn\_vfac}, controls the extent to which the ice/ocean velocities are taken into account in
+the calculation of the surface wind stress.
+Its range should be between zero and one, and it is recommended to set it to 0.
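The effect of the three factors can be sketched as follows. This is an assumed, simplified Python form, not a transcription of the NEMO bulk routines; in particular, the relative-wind weighting shown for \np{rn\_vfac} is the commonly described behaviour, stated here as an assumption:

```python
def apply_bulk_factors(precip, snow, evap, u_atm, u_oce,
                       rn_pfac=1.0, rn_efac=1.0, rn_vfac=0.0):
    """Sketch of the three multiplicative bulk factors (assumed form).

    rn_pfac scales the precipitation (total and snow), rn_efac scales the
    evaporation, and rn_vfac weights the ocean/ice velocity entering the
    relative wind used for the stress computation (0 = absolute wind).
    """
    precip_adj = rn_pfac * precip
    snow_adj = rn_pfac * snow
    evap_adj = rn_efac * evap
    u_rel = u_atm - rn_vfac * u_oce   # relative wind for the stress
    return precip_adj, snow_adj, evap_adj, u_rel
```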
% 
@@ 620,8 +682,8 @@
%
The CLIO bulk formulae were developed several years ago for the
Louvainlaneuve coupled iceocean model (CLIO, \cite{Goosse_al_JGR99}).
They are simpler bulk formulae. They assume the stress to be known and
compute the radiative fluxes from a climatological cloud cover.
+The CLIO bulk formulae were developed several years ago for the Louvain-la-Neuve coupled ice-ocean model
+(CLIO, \cite{Goosse_al_JGR99}).
+They are simpler bulk formulae.
+They assume the stress to be known and compute the radiative fluxes from a climatological cloud cover.
Options are defined through the \ngn{namsbc\_clio} namelist variables.
@@ 647,7 +709,6 @@
%
As for the flux formulation, information about the input data required by the
model is provided in the namsbc\_blk\_core or namsbc\_blk\_clio
namelist (see \autoref{subsec:SBC_fldread}).
+As for the flux formulation, information about the input data required by the model is provided in
+the namsbc\_blk\_core or namsbc\_blk\_clio namelist (see \autoref{subsec:SBC_fldread}).
% 
@@ 661,23 +722,21 @@
%
The MFS (Mediterranean Forecasting System) bulk formulae have been developed by
 \citet{Castellari_al_JMS1998}.
They have been designed to handle the ECMWF operational data and are currently
in use in the MFS operational system \citep{Tonani_al_OS08}, \citep{Oddo_al_OS09}.
+The MFS (Mediterranean Forecasting System) bulk formulae have been developed by \citet{Castellari_al_JMS1998}.
+They have been designed to handle the ECMWF operational data and are currently in use in
+the MFS operational system \citep{Tonani_al_OS08}, \citep{Oddo_al_OS09}.
The wind stress computation uses a drag coefficient computed according to \citet{Hellerman_Rosenstein_JPO83}.
The surface boundary condition for temperature involves the balance between surface solar radiation,
net longwave radiation, the latent and sensible heat fluxes.
Solar radiation is dependent on cloud cover and is computed by means of
an astronomical formula \citep{Reed_JPO77}. Albedo monthly values are from \citet{Payne_JAS72}
as means of the values at $40^{o}N$ and $30^{o}N$ for the Atlantic Ocean (hence the same latitudinal
band of the Mediterranean Sea). The net longwave radiation flux
\citep{Bignami_al_JGR95} is a function of
+The surface boundary condition for temperature involves the balance between
+surface solar radiation, net longwave radiation, the latent and sensible heat fluxes.
+Solar radiation is dependent on cloud cover and is computed by means of an astronomical formula \citep{Reed_JPO77}.
+Albedo monthly values are from \citet{Payne_JAS72} as means of the values at $40^{o}N$ and $30^{o}N$ for
+the Atlantic Ocean (hence the same latitudinal band of the Mediterranean Sea).
+The net longwave radiation flux \citep{Bignami_al_JGR95} is a function of
air temperature, sea-surface temperature, cloud cover and relative humidity.
Sensible heat and latent heat fluxes are computed by classical
bulk formulae parameterised according to \citet{Kondo1975}.
+Sensible heat and latent heat fluxes are computed by classical bulk formulae parameterised according to
+\citet{Kondo1975}.
Details on the bulk formulae used can be found in \citet{Maggiore_al_PCE98} and \citet{Castellari_al_JMS1998}.
Options are defined through the \ngn{namsbc\_mfs} namelist variables.
The required 7 input fields must be provided on the model GridT and are:
+Options are defined through the \ngn{namsbc\_mfs} namelist variables.
+The required 7 input fields must be provided on the model Grid-T and are:
\begin{itemize}
\item Zonal Component of the 10m wind ($m s^{-1}$) (\np{sn\_windi})
@@ 700,31 +759,33 @@
%
In the coupled formulation of the surface boundary condition, the fluxes are
provided by the OASIS coupler at a frequency which is defined in the OASIS coupler,
while sea and ice surface temperature, ocean and ice albedo, and ocean currents
are sent to the atmospheric component.

A generalised coupled interface has been developed.
It is currently interfaced with OASIS3MCT (\key{oasis3}).
It has been successfully used to interface \NEMO to most of the European atmospheric
GCM (ARPEGE, ECHAM, ECMWF, HadAM, HadGAM, LMDz),
as well as to \href{http://wrfmodel.org/}{WRF} (Weather Research and Forecasting Model).

Note that in addition to the setting of \np{ln\_cpl} to true, the \key{coupled} have to be defined.
The CPP key is mainly used in seaice to ensure that the atmospheric fluxes are
actually recieved by the iceocean system (no calculation of ice sublimation in coupled mode).
+In the coupled formulation of the surface boundary condition,
+the fluxes are provided by the OASIS coupler at a frequency which is defined in the OASIS coupler,
+while sea and ice surface temperature, ocean and ice albedo, and ocean currents are sent to
+the atmospheric component.
+
+A generalised coupled interface has been developed.
+It is currently interfaced with OASIS3-MCT (\key{oasis3}).
+It has been successfully used to interface \NEMO to most of the European atmospheric GCMs
+(ARPEGE, ECHAM, ECMWF, HadAM, HadGAM, LMDz), as well as to \href{http://wrf-model.org/}{WRF}
+(Weather Research and Forecasting Model).
+
+Note that in addition to setting \np{ln\_cpl} to true, \key{coupled} has to be defined.
+The CPP key is mainly used in sea-ice to ensure that the atmospheric fluxes are actually received by
+the ice-ocean system (no calculation of ice sublimation in coupled mode).
When PISCES biogeochemical model (\key{top} and \key{pisces}) is also used in the coupled system,
the whole carbon cycle is computed by defining \key{cpl\_carbon\_cycle}. In this case,
CO$_2$ fluxes will be exchanged between the atmosphere and the iceocean system (and need to be activated in \ngn{namsbc{\_}cpl} ).

The namelist above allows control of various aspects of the coupling fields (particularly for
vectors) and now allows for any coupling fields to have multiple sea ice categories (as required by LIM3
and CICE). When indicating a multicategory coupling field in namsbc{\_}cpl the number of categories will be
determined by the number used in the sea ice model. In some limited cases it may be possible to specify
single category coupling fields even when the sea ice model is running with multiple categories  in this
case the user should examine the code to be sure the assumptions made are satisfactory. In cases where
this is definitely not possible the model should abort with an error message. The new code has been tested using
ECHAM with LIM2, and HadGAM3 with CICE but although it will compile with \key{lim3} additional minor code changes
may be required to run using LIM3.
+the whole carbon cycle is computed by defining \key{cpl\_carbon\_cycle}.
+In this case, CO$_2$ fluxes will be exchanged between the atmosphere and the ice-ocean system
+(and need to be activated in \ngn{namsbc{\_}cpl}).
+
+The namelist above allows control of various aspects of the coupling fields (particularly for vectors) and
+now allows for any coupling fields to have multiple sea ice categories (as required by LIM3 and CICE).
+When indicating a multicategory coupling field in namsbc{\_}cpl the number of categories will be determined by
+the number used in the sea ice model.
+In some limited cases it may be possible to specify single category coupling fields even when
+the sea ice model is running with multiple categories;
+in this case the user should examine the code to be sure the assumptions made are satisfactory.
+In cases where this is definitely not possible the model should abort with an error message.
+The new code has been tested using ECHAM with LIM2, and HadGAM3 with CICE;
+although it will compile with \key{lim3}, additional minor code changes may be required to run using LIM3.
@@ 739,27 +800,27 @@
%
The optional atmospheric pressure can be used to force ocean and ice dynamics
(\np{ln\_apr\_dyn}\forcode{ = .true.}, \textit{\ngn{namsbc}} namelist ).
The input atmospheric forcing defined via \np{sn\_apr} structure (\textit{namsbc\_apr} namelist)
can be interpolated in time to the model time step, and even in space when the
interpolation onthefly is used. When used to force the dynamics, the atmospheric
pressure is further transformed into an equivalent inverse barometer sea surface height,
$\eta_{ib}$, using:
+The optional atmospheric pressure can be used to force ocean and ice dynamics
+(\np{ln\_apr\_dyn}\forcode{ = .true.}, \textit{\ngn{namsbc}} namelist).
+The input atmospheric forcing defined via \np{sn\_apr} structure (\textit{namsbc\_apr} namelist)
+can be interpolated in time to the model time step, and even in space when the interpolation on-the-fly is used.
+When used to force the dynamics, the atmospheric pressure is further transformed into
+an equivalent inverse barometer sea surface height, $\eta_{ib}$, using:
\begin{equation} \label{eq:SBC_ssh_ib}
\eta_{ib} = - \frac{1}{g\,\rho_o} \left( P_{atm} - P_o \right)
\end{equation}
where $P_{atm}$ is the atmospheric pressure and $P_o$ a reference atmospheric pressure.
A value of $101,000~N/m^2$ is used unless \np{ln\_ref\_apr} is set to true. In this case $P_o$
is set to the value of $P_{atm}$ averaged over the ocean domain, $i.e.$ the mean value of
$\eta_{ib}$ is kept to zero at all time step.

The gradient of $\eta_{ib}$ is added to the RHS of the ocean momentum equation
(see \mdl{dynspg} for the ocean). For seaice, the sea surface height, $\eta_m$,
which is provided to the sea ice model is set to $\eta  \eta_{ib}$ (see \mdl{sbcssr} module).
$\eta_{ib}$ can be set in the output. This can simplify altimetry data and model comparison
as inverse barometer sea surface height is usually removed from these date prior to their distribution.

When using timesplitting and BDY package for open boundaries conditions, the equivalent
inverse barometer sea surface height $\eta_{ib}$ can be added to BDY ssh data:
+A value of $101,000~N/m^2$ is used unless \np{ln\_ref\_apr} is set to true.
+In this case $P_o$ is set to the value of $P_{atm}$ averaged over the ocean domain,
+$i.e.$ the mean value of $\eta_{ib}$ is kept at zero at all time steps.
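The inverse barometer conversion can be checked numerically with a short Python sketch; the values of $g$ and $\rho_o$ below are typical illustrative constants, not necessarily those used in NEMO:

```python
def inverse_barometer_ssh(p_atm, p_o=101000.0, g=9.81, rho_o=1026.0):
    """Equivalent inverse barometer sea surface height.

    eta_ib = -(p_atm - p_o) / (g * rho_o), with p_atm and p_o in N/m^2.
    g and rho_o are illustrative default values (assumptions for the sketch).
    """
    return -(p_atm - p_o) / (g * rho_o)
```

With the reference pressure of $101,000~N/m^2$, a high-pressure anomaly lowers the equivalent sea surface height (roughly 1 cm per hPa).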
+
+The gradient of $\eta_{ib}$ is added to the RHS of the ocean momentum equation (see \mdl{dynspg} for the ocean).
+For sea-ice, the sea surface height, $\eta_m$, which is provided to the sea-ice model, is set to $\eta - \eta_{ib}$
+(see \mdl{sbcssr} module).
+$\eta_{ib}$ can be written to the output.
+This can simplify the comparison of altimetry data with the model, as
+the inverse barometer sea surface height is usually removed from these data prior to their distribution.
+
+When using time-splitting and the BDY package for open boundary conditions,
+the equivalent inverse barometer sea surface height $\eta_{ib}$ can be added to the BDY ssh data:
\np{ln\_apr\_obc} might then be set to true.
@@ 775,5 +836,7 @@
%
The tidal forcing, generated by the gravity forces of the EarthMoon and EarthSun sytems, is activated if \np{ln\_tide} and \np{ln\_tide\_pot} are both set to \np{.true.} in \ngn{nam\_tide}. This translates as an additional barotropic force in the momentum equations \ref{eq:PE_dyn} such that:
+The tidal forcing, generated by the gravitational forces of the Earth-Moon and Earth-Sun systems,
+is activated if \np{ln\_tide} and \np{ln\_tide\_pot} are both set to \np{.true.} in \ngn{nam\_tide}.
+This translates into an additional barotropic force in the momentum equations \autoref{eq:PE_dyn} such that:
\begin{equation} \label{eq:PE_dyn_tides}
\frac{\partial {\rm {\bf U}}_h }{\partial t}= ...
@@ 782,5 +845,7 @@
where $\Pi_{eq}$ stands for the equilibrium tidal forcing and $\Pi_{sal}$ for a self-attraction and loading term (SAL).
The equilibrium tidal forcing is expressed as a sum over the chosen constituents $l$ in \ngn{nam\_tide}. The constituents are defined such that \np{clname(1) = 'M2', clname(2)='S2', etc...}. For the three types of tidal frequencies it reads : \\
+The equilibrium tidal forcing is expressed as a sum over the chosen constituents $l$ in \ngn{nam\_tide}.
+The constituents are defined such that \np{clname(1) = 'M2', clname(2)='S2', etc...}.
+For the three types of tidal frequencies it reads: \\
Long period tides:
\begin{equation}
@@ 795,11 +860,20 @@
\Pi_{eq}(l) = A_{l}\,(1+k-h)\,\cos^{2}\phi\,\cos(\omega_{l}t + 2\lambda + V_{0l})
\end{equation}
Here $A_{l}$ is the amplitude, $\omega_{l}$ is the frequency, $\phi$ the latitude, $\lambda$ the longitude, $V_{0l}$ a phase shift with respect to Greenwich meridian and $t$ the time. The Love number factor $(1+kh)$ is here taken as a constant (0.7).

The SAL term should in principle be computed online as it depends on the model tidal prediction itself (see \citet{Arbic2004} for a discussion about the practical implementation of this term). Nevertheless, the complex calculations involved would make this computationally too expensive. Here, practical solutions are whether to read complex estimates $\Pi_{sal}(l)$ from an external model (\np{ln\_read\_load=.true.}) or use a ``scalar approximation'' (\np{ln\_scal\_load=.true.}). In the latter case, it reads:\\
+Here $A_{l}$ is the amplitude, $\omega_{l}$ is the frequency, $\phi$ the latitude, $\lambda$ the longitude,
+$V_{0l}$ a phase shift with respect to the Greenwich meridian and $t$ the time.
+The Love number factor $(1+k-h)$ is here taken as a constant (0.7).
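The semidiurnal expression above can be evaluated with a short Python sketch (illustrative only; the function name and the constituent values are hypothetical, not taken from the NEMO tidal module):

```python
import math

LOVE = 0.7  # the Love number factor (1 + k - h), taken as constant in the text

def pi_eq_semidiurnal(A, omega, phi, lam, V, t):
    """Equilibrium tidal forcing of one semidiurnal constituent.
    A: amplitude [m], omega: frequency [rad/s], phi: latitude [rad],
    lam: longitude [rad], V: Greenwich phase shift [rad], t: time [s].
    Hypothetical sketch of the formula above, not NEMO code."""
    return A * LOVE * math.cos(phi) ** 2 * math.cos(omega * t + 2.0 * lam + V)

# Rough M2-like values, for illustration only:
A_M2 = 0.242334          # equilibrium amplitude [m]
OMEGA_M2 = 1.405189e-4   # frequency [rad/s]
print(round(pi_eq_semidiurnal(A_M2, OMEGA_M2, math.radians(45.0), 0.0, 0.0, 0.0), 4))
```

The $\cos^{2}\phi$ factor makes the semidiurnal forcing largest at the equator and zero at the poles, while the $2\lambda$ term gives its wavenumber-two structure in longitude.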
+
+The SAL term should in principle be computed online as it depends on the model tidal prediction itself
+(see \citet{Arbic2004} for a discussion about the practical implementation of this term).
+Nevertheless, the complex calculations involved would make this computationally too expensive.
+Here, practical solutions are either to read complex estimates $\Pi_{sal}(l)$ from an external model
+(\np{ln\_read\_load=.true.}) or to use a ``scalar approximation'' (\np{ln\_scal\_load=.true.}).
+In the latter case, it reads:\\
\begin{equation}
\Pi_{sal} = \beta \eta
\end{equation}
where $\beta$ (\np{rn\_scal\_load}, $\approx0.09$) is a spatially constant scalar, often chosen to minimize tidal prediction errors. Setting both \np{ln\_read\_load} and \np{ln\_scal\_load} to false removes the SAL contribution.
+where $\beta$ (\np{rn\_scal\_load}, $\approx0.09$) is a spatially constant scalar,
+often chosen to minimize tidal prediction errors.
+Setting both \np{ln\_read\_load} and \np{ln\_scal\_load} to false removes the SAL contribution.
% ================================================================
@@ 834,67 +908,76 @@
River runoff generally enters the ocean at a non-zero depth rather than through the surface.
Many models, however, have traditionally inserted river runoff into the top model cell.
This was the case in \NEMO prior to the version 3.3, and was combined with an option
to increase vertical mixing near the river mouth.

However, with this method numerical and physical problems arise when the top grid cells are
of the order of one meter. This situation is common in coastal modelling and is becoming
more common in open ocean and climate modelling
\footnote{At least a top cells thickness of 1~meter and a 3 hours forcing frequency are
required to properly represent the diurnal cycle \citep{Bernie_al_JC05}. see also \autoref{fig:SBC_dcy}.}.

As such from V~3.3 onwards it is possible to add river runoff through a nonzero depth, and for the
temperature and salinity of the river to effect the surrounding ocean.
The user is able to specify, in a NetCDF input file, the temperature and salinity of the river, along with the
depth (in metres) which the river should be added to.

Namelist variables in \ngn{namsbc\_rnf}, \np{ln\_rnf\_depth}, \np{ln\_rnf\_sal} and \np{ln\_rnf\_temp} control whether
the river attributes (depth, salinity and temperature) are read in and used. If these are set
as false the river is added to the surface box only, assumed to be fresh (0~psu), and/or
taken as surface temperature respectively.
+This was the case in \NEMO prior to version 3.3,
+and was combined with an option to increase vertical mixing near the river mouth.
+
+However, with this method numerical and physical problems arise when the top grid cells are of the order of one meter.
+This situation is common in coastal modelling and is becoming more common in open ocean and climate modelling
+\footnote{
+  A top cell thickness of at least 1~meter and a forcing frequency of at least 3~hours are required to
+  properly represent the diurnal cycle \citep{Bernie_al_JC05}.
+  See also \autoref{fig:SBC_dcy}.}.
+
+As such, from V~3.3 onwards it is possible to add river runoff through a non-zero depth,
+and for the temperature and salinity of the river to affect the surrounding ocean.
+The user is able to specify, in a NetCDF input file, the temperature and salinity of the river,
+along with the depth (in metres) at which the river should be added.
+
+Namelist variables in \ngn{namsbc\_rnf}, \np{ln\_rnf\_depth}, \np{ln\_rnf\_sal} and
+\np{ln\_rnf\_temp} control whether the river attributes (depth, salinity and temperature) are read in and used.
+If these are set to false, the river is added to the surface box only, assumed to be fresh (0~psu),
+and/or taken to be at the surface temperature, respectively.
The runoff value and attributes are read in \mdl{sbcrnf}.
For temperature 999 is taken as missing data and the river temperature is taken to be the
surface temperatue at the river point.
+For temperature, $-999$ is taken as missing data, and the river temperature is then taken to
+be the surface temperature at the river point.
For the depth parameter, a value of $-1$ means the river is added to the surface box only,
and a value of $-999$ means the river is added through the entire water column.
After being read in the temperature and salinity variables are multiplied by the amount of runoff (converted into m/s)
to give the heat and salt content of the river runoff.
After the user specified depth is read ini, the number of grid boxes this corresponds to is
calculated and stored in the variable \np{nz\_rnf}.
The variable \textit{h\_dep} is then calculated to be the depth (in metres) of the bottom of the
lowest box the river water is being added to (i.e. the total depth that river water is being added to in the model).

The mass/volume addition due to the river runoff is, at each relevant depth level, added to the horizontal divergence
(\textit{hdivn}) in the subroutine \rou{sbc\_rnf\_div} (called from \mdl{divcur}).
+After being read in the temperature and salinity variables are multiplied by the amount of runoff
+(converted into m/s) to give the heat and salt content of the river runoff.
+After the user-specified depth is read in,
+the number of grid boxes this corresponds to is calculated and stored in the variable \np{nz\_rnf}.
+The variable \textit{h\_dep} is then calculated to be the depth (in metres) of
+the bottom of the lowest box the river water is being added to
+(i.e. the total depth that river water is being added to in the model).
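The bookkeeping described above (deriving \np{nz\_rnf} and \textit{h\_dep} from the vertical grid) can be sketched as follows in Python (a hypothetical helper mirroring the description, including the $-1$ and $-999$ special values; not the NEMO implementation):

```python
def runoff_levels(e3t, rnf_depth):
    """Return (nz_rnf, h_dep): the number of grid boxes the runoff is
    spread over, and the depth [m] of the bottom of the lowest one.
    e3t: list of cell thicknesses [m], top to bottom.
    rnf_depth = -1   -> surface box only
    rnf_depth = -999 -> whole water column
    Hypothetical sketch, not NEMO code."""
    if rnf_depth == -1:
        return 1, e3t[0]
    if rnf_depth == -999:
        return len(e3t), sum(e3t)
    nz_rnf, h_dep = 0, 0.0
    for dz in e3t:
        nz_rnf += 1
        h_dep += dz
        if h_dep >= rnf_depth:     # stop at the first box whose bottom
            break                  # reaches the requested depth
    return nz_rnf, h_dep

# Thin top cells thickening with depth; a 5 m river mouth spans 4 boxes
# whose combined thickness (h_dep) is 5.5 m:
print(runoff_levels([1.0, 1.0, 1.5, 2.0, 3.0], 5.0))
```

Note that \textit{h\_dep} is the bottom depth of the lowest box reached, so it is in general slightly deeper than the requested river depth.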
+
+The mass/volume addition due to the river runoff is, at each relevant depth level, added to
+the horizontal divergence (\textit{hdivn}) in the subroutine \rou{sbc\_rnf\_div} (called from \mdl{divcur}).
This increases the diffusion term in the vicinity of the river, thereby simulating a momentum flux.
The sea surface height is calculated using the sum of the horizontal divergence terms, and so the
river runoff indirectly forces an increase in sea surface height.
+The sea surface height is calculated using the sum of the horizontal divergence terms,
+and so the river runoff indirectly forces an increase in sea surface height.
The \textit{hdivn} terms are used in the tracer advection modules to force vertical velocities.
This causes a mass of water, equal to the amount of runoff, to be moved into the box above.
The heat and salt content of the river runoff is not included in this step, and so the tracer
concentrations are diluted as water of ocean temperature and salinity is moved upward out of the box
and replaced by the same volume of river water with no corresponding heat and salt addition.

For the linear free surface case, at the surface box the tracer advection causes a flux of water
(of equal volume to the runoff) through the sea surface out of the domain, which causes a salt and heat flux out of the model.
+This causes a mass of water, equal to the amount of runoff, to be moved into the box above.
+The heat and salt content of the river runoff is not included in this step,
+and so the tracer concentrations are diluted as water of ocean temperature and salinity is moved upward out of
+the box and replaced by the same volume of river water with no corresponding heat and salt addition.
+
+For the linear free surface case, at the surface box the tracer advection causes a flux of water
+(of equal volume to the runoff) through the sea surface out of the domain,
+which causes a salt and heat flux out of the model.
As such the volume of water does not change, but the water is diluted.
For the non-linear free surface case (\key{vvl}), no flux is allowed through the surface.
Instead in the surface box (as well as water moving up from the boxes below) a volume of runoff water
is added with no corresponding heat and salt addition and so as happens in the lower boxes there is a dilution effect.
(The runoff addition to the top box along with the water being moved up through boxes below means the surface box has a large
increase in volume, whilst all other boxes remain the same size)
+Instead, in the surface box (as well as water moving up from the boxes below) a volume of runoff water is added with
+no corresponding heat and salt addition, and so, as happens in the lower boxes, there is a dilution effect.
+(The runoff addition to the top box, along with the water being moved up through
+the boxes below, means the surface box has a large increase in volume, whilst all other boxes remain the same size.)
In \mdl{trasbc} the heat and salt due to the river runoff are added.
This is done in the same way for both vvl and non-vvl.
The temperature and salinity are increased through the specified depth according to the heat and salt content of the river.

In the nonlinear free surface case (vvl), near the end of the time step the change in sea surface height is redistrubuted
through the grid boxes, so that the original ratios of grid box heights are restored.
In doing this water is moved into boxes below, throughout the water column, so the large volume addition to the surface box is spread between all the grid boxes.

It is also possible for runnoff to be specified as a negative value for modelling flow through straits, i.e. modelling the Baltic flow in and out of the North Sea.
When the flow is out of the domain there is no change in temperature and salinity, regardless of the namelist options used, as the ocean water leaving the domain removes heat and salt (at the same concentration) with it.
+The temperature and salinity are increased through the specified depth according to
+the heat and salt content of the river.
+
+In the non-linear free surface case (vvl),
+near the end of the time step the change in sea surface height is redistributed through the grid boxes,
+so that the original ratios of grid box heights are restored.
+In doing this water is moved into boxes below, throughout the water column,
+so the large volume addition to the surface box is spread between all the grid boxes.
+
+It is also possible for runoff to be specified as a negative value for modelling flow through straits,
+$e.g.$ the Baltic flow in and out of the North Sea.
+When the flow is out of the domain there is no change in temperature and salinity,
+regardless of the namelist options used,
+as the ocean water leaving the domain removes heat and salt (at the same concentration) with it.
@@ 931,75 +1014,77 @@
\begin{description}
\item[\np{nn\_isf}\forcode{ = 1}]
The ice shelf cavity is represented (\np{ln\_isfcav}\forcode{ = .true.} needed). The fwf and heat flux are computed.
Two different bulk formula are available:
+ The ice shelf cavity is represented (\np{ln\_isfcav}\forcode{ = .true.} needed).
+ The fwf and heat flux are computed.
+ Two different bulk formulae are available:
\begin{description}
\item[\np{nn\_isfblk}\forcode{ = 1}]
 The bulk formula used to compute the melt is based the one described in \citet{Hunter2006}.
 This formulation is based on a balance between the upward ocean heat flux and the latent heat flux at the ice shelf base.

 \item[\np{nn\_isfblk}\forcode{ = 2}]
 The bulk formula used to compute the melt is based the one described in \citet{Jenkins1991}.
 This formulation is based on a 3 equations formulation (a heat flux budget, a salt flux budget
 and a linearised freezing point temperature equation).
+ The bulk formula used to compute the melt is based on the one described in \citet{Hunter2006}.
+ This formulation is based on a balance between the upward ocean heat flux and
+ the latent heat flux at the ice shelf base.
+ \item[\np{nn\_isfblk}\forcode{ = 2}]
+ The bulk formula used to compute the melt is based on the one described in \citet{Jenkins1991}.
+ This formulation is based on a three-equation formulation
+ (a heat flux budget, a salt flux budget and a linearised freezing point temperature equation).
\end{description}

For this 2 bulk formulations, there are 3 different ways to compute the exchange coeficient:
+ For these two bulk formulations, there are three different ways to compute the exchange coefficient:
\begin{description}
 \item[\np{nn\_gammablk}\forcode{ = 0}]
 The salt and heat exchange coefficients are constant and defined by \np{rn\_gammas0} and \np{rn\_gammat0}

+ \item[\np{nn\_gammablk}\forcode{ = 0}]
+ The salt and heat exchange coefficients are constant and defined by \np{rn\_gammas0} and \np{rn\_gammat0}.
\item[\np{nn\_gammablk}\forcode{ = 1}]
 The salt and heat exchange coefficients are velocity dependent and defined as \np{rn\_gammas0}$ \times u_{*}$ and \np{rn\_gammat0}$ \times u_{*}$
 where $u_{*}$ is the friction velocity in the top boundary layer (ie first \np{rn\_hisf\_tbl} meters).
 See \citet{Jenkins2010} for all the details on this formulation.

+ The salt and heat exchange coefficients are velocity dependent and defined as
+ \np{rn\_gammas0}$ \times u_{*}$ and \np{rn\_gammat0}$ \times u_{*}$, where
+ $u_{*}$ is the friction velocity in the top boundary layer ($i.e.$ the first \np{rn\_hisf\_tbl} meters).
+ See \citet{Jenkins2010} for all the details on this formulation.
\item[\np{nn\_gammablk}\forcode{ = 2}]
 The salt and heat exchange coefficients are velocity and stability dependent and defined as
 $\gamma_{T,S} = \frac{u_{*}}{\Gamma_{Turb} + \Gamma^{T,S}_{Mole}}$
 where $u_{*}$ is the friction velocity in the top boundary layer (ie first \np{rn\_hisf\_tbl} meters),
 $\Gamma_{Turb}$ the contribution of the ocean stability and
 $\Gamma^{T,S}_{Mole}$ the contribution of the molecular diffusion.
 See \citet{Holland1999} for all the details on this formulation.
 \end{description}

\item[\np{nn\_isf}\forcode{ = 2}]
A parameterisation of isf is used. The ice shelf cavity is not represented.
The fwf is distributed along the ice shelf edge between the depth of the average grounding line (GL)
(\np{sn\_depmax\_isf}) and the base of the ice shelf along the calving front (\np{sn\_depmin\_isf}) as in (\np{nn\_isf}\forcode{ = 3}).
Furthermore the fwf and heat flux are computed using the \citet{Beckmann2003} parameterisation of isf melting.
The effective melting length (\np{sn\_Leff\_isf}) is read from a file.

\item[\np{nn\_isf}\forcode{ = 3}]
A simple parameterisation of isf is used. The ice shelf cavity is not represented.
The fwf (\np{sn\_rnfisf}) is prescribed and distributed along the ice shelf edge between the depth of the average grounding line (GL)
(\np{sn\_depmax\_isf}) and the base of the ice shelf along the calving front (\np{sn\_depmin\_isf}).
The heat flux ($Q_h$) is computed as $Q_h = fwf \times L_f$.

\item[\np{nn\_isf}\forcode{ = 4}]
The ice shelf cavity is opened (\np{ln\_isfcav}\forcode{ = .true.} needed). However, the fwf is not computed but specified from file \np{sn\_fwfisf}).
The heat flux ($Q_h$) is computed as $Q_h = fwf \times L_f$.\\
+ The salt and heat exchange coefficients are velocity and stability dependent and defined as
+ $\gamma_{T,S} = \frac{u_{*}}{\Gamma_{Turb} + \Gamma^{T,S}_{Mole}}$, where
+ $u_{*}$ is the friction velocity in the top boundary layer ($i.e.$ the first \np{rn\_hisf\_tbl} meters),
+ $\Gamma_{Turb}$ the contribution of the ocean stability and
+ $\Gamma^{T,S}_{Mole}$ the contribution of the molecular diffusion.
+ See \citet{Holland1999} for all the details on this formulation.
+ \end{description}
+ \item[\np{nn\_isf}\forcode{ = 2}]
+ A parameterisation of isf is used. The ice shelf cavity is not represented.
+ The fwf is distributed along the ice shelf edge between the depth of the average grounding line (GL)
+ (\np{sn\_depmax\_isf}) and the base of the ice shelf along the calving front
+ (\np{sn\_depmin\_isf}), as in \np{nn\_isf}\forcode{ = 3}.
+ Furthermore the fwf and heat flux are computed using the \citet{Beckmann2003} parameterisation of isf melting.
+ The effective melting length (\np{sn\_Leff\_isf}) is read from a file.
+ \item[\np{nn\_isf}\forcode{ = 3}]
+ A simple parameterisation of isf is used. The ice shelf cavity is not represented.
+ The fwf (\np{sn\_rnfisf}) is prescribed and distributed along the ice shelf edge between
+ the depth of the average grounding line (GL) (\np{sn\_depmax\_isf}) and
+ the base of the ice shelf along the calving front (\np{sn\_depmin\_isf}).
+ The heat flux ($Q_h$) is computed as $Q_h = fwf \times L_f$.
+ \item[\np{nn\_isf}\forcode{ = 4}]
+ The ice shelf cavity is opened (\np{ln\_isfcav}\forcode{ = .true.} needed).
+ However, the fwf is not computed but is specified in the file \np{sn\_fwfisf}.
+ The heat flux ($Q_h$) is computed as $Q_h = fwf \times L_f$.\\
\end{description}
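The exchange-coefficient options and the prescribed-melt heat flux $Q_h = fwf \times L_f$ can be summarised in a small Python sketch (function names, default values and call signature are hypothetical, not NEMO code):

```python
def gamma_ts(nn_gammablk, gamma0=0.0, u_star=0.0,
             big_gamma_turb=0.0, big_gamma_mole=0.0):
    """Exchange coefficient following the three options described above.
    Hypothetical sketch, not the NEMO implementation."""
    if nn_gammablk == 0:            # constant coefficient
        return gamma0
    if nn_gammablk == 1:            # velocity dependent: gamma0 * u_star
        return gamma0 * u_star
    # velocity and stability dependent:
    # gamma = u_star / (Gamma_Turb + Gamma_Mole)
    return u_star / (big_gamma_turb + big_gamma_mole)

L_F = 3.34e5  # latent heat of fusion [J/kg] (typical value, assumed here)

def heat_flux(fwf):
    """Q_h = fwf * L_f, as used for the prescribed-melt cases
    (nn_isf = 3 and 4). fwf in kg/m2/s gives Q_h in W/m2."""
    return fwf * L_F

print(gamma_ts(1, gamma0=1.0e-2, u_star=5.0e-3))
print(heat_flux(1.0e-4))
```

The sketch only illustrates the functional forms; in NEMO the stability-dependent contributions $\Gamma_{Turb}$ and $\Gamma^{T,S}_{Mole}$ are themselves computed from the boundary layer state, following \citet{Holland1999}.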

$\bullet$ \np{nn\_isf}\forcode{ = 1} and \np{nn\_isf}\forcode{ = 2} compute a melt rate based on the water mass properties, ocean velocities and depth.
 This flux is thus highly dependent of the model resolution (horizontal and vertical), realism of the water masses onto the shelf ...\\


$\bullet$ \np{nn\_isf}\forcode{ = 3} and \np{nn\_isf}\forcode{ = 4} read the melt rate from a file. You have total control of the fwf forcing.
This can be usefull if the water masses on the shelf are not realistic or the resolution (horizontal/vertical) are too
coarse to have realistic melting or for studies where you need to control your heat and fw input.\\

A namelist parameters control over how many meters the heat and fw fluxes are spread.
\np{rn\_hisf\_tbl}] is the top boundary layer thickness as defined in \citet{Losch2008}.
This parameter is only used if \np{nn\_isf}\forcode{ = 1} or \np{nn\_isf}\forcode{ = 4}
+$\bullet$ \np{nn\_isf}\forcode{ = 1} and \np{nn\_isf}\forcode{ = 2} compute a melt rate based on
+the water mass properties, ocean velocities and depth.
+This flux is thus highly dependent on the model resolution (horizontal and vertical) and
+on the realism of the water masses on the shelf.\\
+
+$\bullet$ \np{nn\_isf}\forcode{ = 3} and \np{nn\_isf}\forcode{ = 4} read the melt rate from a file.
+You have total control of the fwf forcing.
+This can be useful if the water masses on the shelf are not realistic, or
+the resolution (horizontal/vertical) is too coarse to have realistic melting, or
+for studies where you need to control your heat and fw input.\\
+
+A namelist parameter controls over how many metres the heat and fw fluxes are spread:
+\np{rn\_hisf\_tbl} is the top boundary layer thickness as defined in \citet{Losch2008}.
+This parameter is only used if \np{nn\_isf}\forcode{ = 1} or \np{nn\_isf}\forcode{ = 4}.
If \np{rn\_hisf\_tbl}\forcode{ = 0.}, the fluxes are put in the top level whatever its thickness.
If \np{rn\_hisf\_tbl} $>$ 0., the fluxes are spread over the first \np{rn\_hisf\_tbl} m (ie over one or several cells).\\
+If \np{rn\_hisf\_tbl} $>$ 0., the fluxes are spread over the first \np{rn\_hisf\_tbl}~m
+($i.e.$ over one or several cells).\\
The ice shelf melt is implemented as a volume flux in the same way as for the runoff.
The fw addition due to the ice shelf melting is, at each relevant depth level, added to the horizontal divergence
(\textit{hdivn}) in the subroutine \rou{sbc\_isf\_div}, called from \mdl{divcur}.
See the runoff section \autoref{sec:SBC_rnf} for all the details about the divergence correction.
+The fw addition due to the ice shelf melting is, at each relevant depth level, added to
+the horizontal divergence (\textit{hdivn}) in the subroutine \rou{sbc\_isf\_div}, called from \mdl{divcur}.
+See the runoff section \autoref{sec:SBC_rnf} for all the details about the divergence correction.
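The spreading of the fluxes over the top boundary layer can be sketched as follows (a hypothetical Python helper illustrating the \np{rn\_hisf\_tbl} behaviour described above, not the NEMO implementation):

```python
def spread_flux(flux, e3t, rn_hisf_tbl):
    """Distribute a surface flux over the top boundary layer.
    rn_hisf_tbl = 0: everything goes into the top cell, whatever its
    thickness; rn_hisf_tbl > 0: spread over the first rn_hisf_tbl metres,
    each cell receiving the fraction of the layer it covers.
    e3t: cell thicknesses [m], top to bottom. Hypothetical sketch."""
    if rn_hisf_tbl <= 0.0:
        return [flux] + [0.0] * (len(e3t) - 1)
    out, top = [], 0.0
    for dz in e3t:
        # thickness of the part of this cell lying inside the layer
        overlap = max(0.0, min(top + dz, rn_hisf_tbl) - top)
        out.append(flux * overlap / rn_hisf_tbl)
        top += dz
    return out

# A 30 m layer over three 20 m cells: the first cell gets 2/3 of the
# flux, the second 1/3, the third nothing.
print(spread_flux(90.0, [20.0, 20.0, 20.0], 30.0))
```

Spreading the flux over one or several cells in this way avoids concentrating the whole melt forcing in a very thin top cell.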
@@ 1010,33 +1095,43 @@
\nlst{namsbc_iscpl}
%
Ice sheet/ocean coupling is done through file exchange at the restart step. NEMO, at each restart step,
read the bathymetry and ice shelf draft variable in a netcdf file.
If \np{ln\_iscpl}\forcode{ = .true.}, the isf draft is assume to be different at each restart step
with potentially some new wet/dry cells due to the ice sheet dynamics/thermodynamics.
+Ice sheet/ocean coupling is done through file exchange at the restart step.
+At each restart step, NEMO reads the bathymetry and ice shelf draft variables from a NetCDF file.
+If \np{ln\_iscpl}\forcode{ = .true.}, the isf draft is assumed to be different at each restart step, with
+potentially some new wet/dry cells due to the ice sheet dynamics/thermodynamics.
The wetting and drying scheme applied on the restart is very simple and is described below for the six different cases:
\begin{description}
\item[Thin a cell down:]
 T/S/ssh are unchanged and U/V in the top cell are corrected to keep the barotropic transport (bt) constant ($bt_b=bt_n$).
+ T/S/ssh are unchanged and U/V in the top cell are corrected to keep the barotropic transport (bt) constant
+ ($bt_b=bt_n$).
\item[Enlarge a cell:]
 See case "Thin a cell down"
+ See case "Thin a cell down".
\item[Dry a cell:]
 mask, T/S, U/V and ssh are set to 0. Furthermore, U/V into the water column are modified to satisfy ($bt_b=bt_n$).
+ mask, T/S, U/V and ssh are set to 0.
+ Furthermore, U/V into the water column are modified to satisfy ($bt_b=bt_n$).
\item[Wet a cell:]
 mask is set to 1, T/S is extrapolated from neighbours, $ssh_n = ssh_b$ and U/V set to 0. If no neighbours along i,j and k, T/S/U/V and mask are set to 0.
+ mask is set to 1, T/S is extrapolated from neighbours, $ssh_n = ssh_b$ and U/V set to 0.
+ If no neighbours along i,j and k, T/S/U/V and mask are set to 0.
\item[Dry a column:]
mask, T/S, U/V are set to 0 everywhere in the column and ssh set to 0.
\item[Wet a column:]
 set mask to 1, T/S is extrapolated from neighbours, ssh is extrapolated from neighbours and U/V set to 0. If no neighbour, T/S/U/V and mask set to 0.
+ set mask to 1, T/S is extrapolated from neighbours, ssh is extrapolated from neighbours and U/V set to 0.
+ If no neighbour, T/S/U/V and mask set to 0.
\end{description}
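The "Thin a cell down" and "Enlarge a cell" cases can be illustrated with a short Python sketch (hypothetical names; it simply rescales the top-cell velocity so that the barotropic transport $bt_b = bt_n$ is preserved):

```python
def correct_top_velocity(u, e3, e3_new_top):
    """After thinning or enlarging the top cell from e3[0] to e3_new_top,
    rescale the top-cell velocity so that the barotropic transport
    bt = sum(u_k * e3_k) is unchanged. Hypothetical sketch of the
    'thin a cell down' / 'enlarge a cell' cases, not NEMO code."""
    bt_before = sum(uk * dzk for uk, dzk in zip(u, e3))
    u_new = list(u)
    u_new[0] = u[0] * e3[0] / e3_new_top   # transport-preserving rescaling
    e3_new = [e3_new_top] + list(e3[1:])
    bt_after = sum(uk * dzk for uk, dzk in zip(u_new, e3_new))
    assert abs(bt_after - bt_before) < 1e-12  # bt_b = bt_n
    return u_new

# Halving a 10 m top cell doubles its velocity; the transport is unchanged:
print(correct_top_velocity([0.2, 0.1], [10.0, 20.0], 5.0))
```

The "Dry a cell" case modifies the velocities of the remaining wet cells of the column with the same transport-preserving logic.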
The extrapolation is call \np{nn\_drown} times. It means that if the grounding line retreat by more than \np{nn\_drown} cells between 2 coupling steps,
 the code will be unable to fill all the new wet cells properly. The default number is set up for the MISOMIP idealised experiments.\\
This coupling procedure is able to take into account grounding line and calving front migration. However, it is a nonconservative processe.
This could lead to a trend in heat/salt content and volume. In order to remove the trend and keep the conservation level as close to 0 as possible,
 a simple conservation scheme is available with \np{ln\_hsb}\forcode{ = .true.}. The heat/salt/vol. gain/loss is diagnose, as well as the location.
Based on what is done on sbcrnf to prescribed a source of heat/salt/vol., the heat/salt/vol. gain/loss is removed/added,
 over a period of \np{rn\_fiscpl} time step, into the system.
+The extrapolation is called \np{nn\_drown} times.
+This means that if the grounding line retreats by more than \np{nn\_drown} cells between two coupling steps,
+the code will be unable to fill all the new wet cells properly.
+The default number is set up for the MISOMIP idealised experiments.\\
+This coupling procedure is able to take into account grounding line and calving front migration.
+However, it is a non-conservative process.
+This could lead to a trend in heat/salt content and volume.
+In order to remove the trend and keep the conservation level as close to 0 as possible,
+a simple conservation scheme is available with \np{ln\_hsb}\forcode{ = .true.}.
+The heat/salt/vol. gain/loss is diagnosed, as well as its location.
+Based on what is done in \mdl{sbcrnf} to prescribe a source of heat/salt/vol.,
+the heat/salt/vol. gain/loss is removed/added back into the system over a period of \np{rn\_fiscpl} time steps.
So after \np{rn\_fiscpl} time steps, all the heat/salt/vol. gain/loss due to the extrapolation process is cancelled.\\
As the before and now fields are not compatible (modification of the geometry), the restart time step is prescribed to be an euler time step instead of a leap frog and $fields_b = fields_n$.
+As the before and now fields are not compatible (modification of the geometry),
+the restart time step is prescribed to be an Euler time step instead of a leapfrog one, and $fields_b = fields_n$.
%
% ================================================================
@@ 1053,9 +1148,9 @@
Their physical behaviour is controlled by equations as described in \citet{Martin_Adcroft_OM10} ).
(Note that the authors kindly provided a copy of their code to act as a basis for implementation in NEMO).
Icebergs are initially spawned into one of ten classes which have specific mass and thickness as described
in the \ngn{namberg} namelist:
\np{rn\_initial\_mass} and \np{rn\_initial\_thickness}.
Each class has an associated scaling (\np{rn\_mass\_scaling}), which is an integer representing how many icebergs
of this class are being described as one lagrangian point (this reduces the numerical problem of tracking every single iceberg).
+Icebergs are initially spawned into one of ten classes which have specific mass and thickness as
+described in the \ngn{namberg} namelist: \np{rn\_initial\_mass} and \np{rn\_initial\_thickness}.
+Each class has an associated scaling (\np{rn\_mass\_scaling}),
+which is an integer representing how many icebergs of this class are being described as one Lagrangian point
+(this reduces the numerical problem of tracking every single iceberg).
They are enabled by setting \np{ln\_icebergs}\forcode{ = .true.}.
@@ 1063,24 +1158,27 @@
\begin{description}
\item[\np{nn\_test\_icebergs}~$>$~0]
In this scheme, the value of \np{nn\_test\_icebergs} represents the class of iceberg to generate
(so between 1 and 10), and \np{nn\_test\_icebergs} provides a lon/lat box in the domain at each
grid point of which an iceberg is generated at the beginning of the run.
(Note that this happens each time the timestep equals \np{nn\_nit000}.)
\np{nn\_test\_icebergs} is defined by four numbers in \np{nn\_test\_box} representing the corners
of the geographical box: lonmin,lonmax,latmin,latmax
+ In this scheme, the value of \np{nn\_test\_icebergs} represents the class of iceberg to generate
+ (so between 1 and 10), and \np{nn\_test\_box} provides a lon/lat box in the domain,
+ at each grid point of which an iceberg is generated at the beginning of the run.
+ (Note that this happens each time the timestep equals \np{nn\_nit000}.)
+ \np{nn\_test\_box} is defined by four numbers representing the corners of
+ the geographical box: lonmin, lonmax, latmin, latmax.
\item[\np{nn\_test\_icebergs}\forcode{ = -1}]
In this scheme the model reads a calving file supplied in the \np{sn\_icb} parameter.
This should be a file with a field on the configuration grid (typically ORCA) representing ice accumulation rate at each model point.
These should be ocean points adjacent to land where icebergs are known to calve.
Most points in this input grid are going to have value zero.
When the model runs, ice is accumulated at each grid point which has a nonzero source term.
At each time step, a test is performed to see if there is enough ice mass to calve an iceberg of each class in order (1 to 10).
Note that this is the initial mass multiplied by the number each particle represents ($i.e.$ the scaling).
If there is enough ice, a new iceberg is spawned and the total available ice reduced accordingly.
+ In this scheme the model reads a calving file supplied in the \np{sn\_icb} parameter.
+ This should be a file with a field on the configuration grid (typically ORCA)
+ representing ice accumulation rate at each model point.
+ These should be ocean points adjacent to land where icebergs are known to calve.
+ Most points in this input grid are going to have value zero.
+ When the model runs, ice is accumulated at each grid point which has a non-zero source term.
+ At each time step, a test is performed to see if there is enough ice mass to
+ calve an iceberg of each class in order (1 to 10).
+ Note that this is the initial mass multiplied by the number each particle represents ($i.e.$ the scaling).
+ If there is enough ice, a new iceberg is spawned and the total available ice reduced accordingly.
\end{description}
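The calving test described above can be sketched in Python (a hypothetical, simplified version of the bookkeeping at one grid point; not the NEMO iceberg code):

```python
def calve(stored_ice, initial_mass, mass_scaling):
    """Spawn icebergs at one grid point while enough ice has accumulated,
    testing each class in order (class 1 first).
    The mass needed per spawn is the class initial mass multiplied by its
    scaling (how many real bergs one Lagrangian particle represents).
    Returns (list of spawned class numbers, remaining stored ice).
    Hypothetical sketch, not NEMO code."""
    spawned = []
    for iclass in range(len(initial_mass)):
        need = initial_mass[iclass] * mass_scaling[iclass]
        while stored_ice >= need:
            spawned.append(iclass + 1)     # spawn one berg of this class
            stored_ice -= need             # reduce the available ice
    return spawned, stored_ice

# Two illustrative classes: 1e8 kg bergs with scaling 10, 1e9 kg with
# scaling 1. With 2.5e9 kg stored, two class-1 particles are spawned
# and 5e8 kg remains (not enough for a further spawn).
print(calve(2.5e9, [1.0e8, 1.0e9], [10, 1]))
```

Because the scaling multiplies the mass required per spawn, a single Lagrangian particle removes the ice of all the real icebergs it represents.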
Icebergs are influenced by wind, waves and currents, bottom melt and erosion.
The latter act to disintegrate the iceberg. This is either all melted freshwater, or
(if \np{rn\_bits\_erosion\_fraction}~$>$~0) into melt and additionally small ice bits
+The latter act to disintegrate the iceberg.
+This is either all melted freshwater,
+or (if \np{rn\_bits\_erosion\_fraction}~$>$~0) into melt and additionally small ice bits
which are assumed to propagate with their larger parent and thus delay fluxing into the ocean.
Melt water (and other variables on the configuration grid) are written into the main NEMO model output files.
@@ 1091,7 +1189,7 @@
The amount of information is controlled by two integer parameters:
\begin{description}
\item[\np{nn\_verbose\_level}] takes a value between one and four and represents
an increasing number of points in the code at which variables are written, and an
increasing level of obscurity.
+\item[\np{nn\_verbose\_level}] takes a value between one and four and
+ represents an increasing number of points in the code at which variables are written,
+ and an increasing level of obscurity.
\item[\np{nn\_verbose\_write}] is the number of timesteps between writes
\end{description}
@@ 1102,6 +1200,6 @@
When \key{mpp\_mpi} is defined, each output file contains only those icebergs in the corresponding processor.
Trajectory points are written out in the order of their parent iceberg in the model's "linked list" of icebergs.
So care is needed to recreate data for individual icebergs, since its trajectory data may be spread across
multiple files.
+So care is needed to recreate data for an individual iceberg,
+since its trajectory data may be spread across multiple files.
@@ 1125,48 +1223,48 @@
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.8\textwidth]{Fig_SBC_diurnal}
\caption{ \protect\label{fig:SBC_diurnal}
Example of recontruction of the diurnal cycle variation of short wave flux
from daily mean values. The reconstructed diurnal cycle (black line) is chosen
as the mean value of the analytical cycle (blue line) over a time step, not
as the mid time step value of the analytically cycle (red square). From \citet{Bernie_al_CD07}.}
+\caption{ \protect\label{fig:SBC_diurnal}
+ Example of reconstruction of the diurnal cycle variation of short wave flux from daily mean values.
+ The reconstructed diurnal cycle (black line) is chosen as
+ the mean value of the analytical cycle (blue line) over a time step,
+ not as the mid time step value of the analytical cycle (red square).
+ From \citet{Bernie_al_CD07}.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\cite{Bernie_al_JC05} have shown that to capture 90$\%$ of the diurnal variability of
SST requires a vertical resolution in upper ocean of 1~m or better and a temporal resolution
of the surface fluxes of 3~h or less. Unfortunately high frequency forcing fields are rare,
not to say inexistent. Nevertheless, it is possible to obtain a reasonable diurnal cycle
of the SST knowning only short wave flux (SWF) at high frequency \citep{Bernie_al_CD07}.
Furthermore, only the knowledge of daily mean value of SWF is needed,
as higher frequency variations can be reconstructed from them, assuming that
the diurnal cycle of SWF is a scaling of the top of the atmosphere diurnal cycle
of incident SWF. The \cite{Bernie_al_CD07} reconstruction algorithm is available
in \NEMO by setting \np{ln\_dm2dc}\forcode{ = .true.} (a \textit{\ngn{namsbc}} namelist variable) when using
CORE bulk formulea (\np{ln\_blk\_core}\forcode{ = .true.}) or the flux formulation (\np{ln\_flx}\forcode{ = .true.}).
The reconstruction is performed in the \mdl{sbcdcy} module. The detail of the algoritm used
can be found in the appendix~A of \cite{Bernie_al_CD07}. The algorithm preserve the daily
mean incomming SWF as the reconstructed SWF at a given time step is the mean value
of the analytical cycle over this time step (\autoref{fig:SBC_diurnal}).
The use of diurnal cycle reconstruction requires the input SWF to be daily
+\cite{Bernie_al_JC05} have shown that to capture 90$\%$ of the diurnal variability of SST requires a vertical resolution in the upper ocean of 1~m or better and a temporal resolution of the surface fluxes of 3~h or less.
+Unfortunately high frequency forcing fields are rare, not to say nonexistent.
+Nevertheless, it is possible to obtain a reasonable diurnal cycle of the SST knowing only the short wave flux (SWF) at
+high frequency \citep{Bernie_al_CD07}.
+Furthermore, only the daily mean value of the SWF is needed,
+as higher frequency variations can be reconstructed from it,
+assuming that the diurnal cycle of SWF is a scaling of the top of the atmosphere diurnal cycle of incident SWF.
+The \cite{Bernie_al_CD07} reconstruction algorithm is available in \NEMO by
+setting \np{ln\_dm2dc}\forcode{ = .true.} (a \textit{\ngn{namsbc}} namelist variable) when
+using the CORE bulk formulae (\np{ln\_blk\_core}\forcode{ = .true.}) or
+the flux formulation (\np{ln\_flx}\forcode{ = .true.}).
+The reconstruction is performed in the \mdl{sbcdcy} module.
+The details of the algorithm used can be found in appendix~A of \cite{Bernie_al_CD07}.
+The algorithm preserves the daily mean incoming SWF, as the reconstructed SWF at
+a given time step is the mean value of the analytical cycle over this time step (\autoref{fig:SBC_diurnal}).
+The use of diurnal cycle reconstruction requires the input SWF to be daily
($i.e.$ a frequency of 24 and a time interpolation set to true in \np{sn\_qsr} namelist parameter).
Furthermore, it is recommended to have at least 8 surface module time steps per day,
that is $\rdt \ nn\_fsbc < 10,800~s = 3~h$. An example of recontructed SWF
is given in \autoref{fig:SBC_dcy} for a 12 reconstructed diurnal cycle, one every 2~hours
(from 1am to 11pm).
+that is $\rdt \times nn\_fsbc < 10,800~s = 3~h$.
+An example of reconstructed SWF is given in \autoref{fig:SBC_dcy} for a diurnal cycle reconstructed at
+12 instants, one every 2~hours (from 1am to 11pm).
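The principle of the reconstruction, preserving the daily mean by taking the step-mean of an analytical cycle, can be sketched in Python as below. Note that this is only a sketch: the real sbcdcy algorithm of appendix A of \cite{Bernie_al_CD07} derives the cycle from the solar zenith angle, whereas here a truncated cosine stands in for the top of the atmosphere shape.

```python
import math

# Simplified sketch of the diurnal-cycle reconstruction idea: rescale an
# assumed clear-sky shape so that its daily mean matches the daily mean
# SWF, and return the mean of the cycle over each time step (not the
# mid-step value), so the daily mean is conserved exactly.
def toa_shape(hour):
    """Idealised diurnal shape: positive half of a cosine, peak at noon."""
    return max(0.0, math.cos(math.pi * (hour - 12.0) / 12.0))

def reconstruct_swf(daily_mean_swf, dt_hours):
    """Return one SWF value per time step whose mean is daily_mean_swf."""
    n = int(round(24.0 / dt_hours))
    nsub = 20                                   # sub-sampling of each step
    means = []
    for i in range(n):
        t0 = i * dt_hours
        s = sum(toa_shape(t0 + (j + 0.5) * dt_hours / nsub)
                for j in range(nsub))
        means.append(s / nsub)                  # step-mean of the shape
    shape_daily_mean = sum(means) / n
    scale = daily_mean_swf / shape_daily_mean   # rescale to the daily mean
    return [scale * m for m in means]

swf = reconstruct_swf(200.0, 2.0)               # 12 values, one every 2 h
```

By construction the mean of the 12 reconstructed values equals the prescribed daily mean.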
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.7\textwidth]{Fig_SBC_dcy}
\caption{ \protect\label{fig:SBC_dcy}
Example of recontruction of the diurnal cycle variation of short wave flux
from daily mean values on an ORCA2 grid with a time sampling of 2~hours (from 1am to 11pm).
The display is on (i,j) plane. }
+\caption{ \protect\label{fig:SBC_dcy}
+ Example of reconstruction of the diurnal cycle variation of short wave flux from
+ daily mean values on an ORCA2 grid with a time sampling of 2~hours (from 1am to 11pm).
+ The display is on the (i,j) plane. }
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Note also that the setting a diurnal cycle in SWF is highly recommended when
the top layer thickness approach 1~m or less, otherwise large error in SST can
appear due to an inconsistency between the scale of the vertical resolution
and the forcing acting on that scale.
+Note also that setting a diurnal cycle in SWF is highly recommended when
+the top layer thickness approaches 1~m or less, otherwise large errors in SST can appear due to
+an inconsistency between the scale of the vertical resolution and the forcing acting on that scale.
% 
@@ 1176,14 +1274,15 @@
\label{subsec:SBC_rotation}
When using a flux (\np{ln\_flx}\forcode{ = .true.}) or bulk (\np{ln\_clio}\forcode{ = .true.} or \np{ln\_core}\forcode{ = .true.}) formulation,
pairs of vector components can be rotated from eastnorth directions onto the local grid directions.
This is particularly useful when interpolation on the fly is used since here any vectors are likely to be defined
relative to a rectilinear grid.
+When using a flux (\np{ln\_flx}\forcode{ = .true.}) or
+bulk (\np{ln\_clio}\forcode{ = .true.} or \np{ln\_core}\forcode{ = .true.}) formulation,
+pairs of vector components can be rotated from east-north directions onto the local grid directions.
+This is particularly useful when interpolation on the fly is used, since in this case the vectors are likely to
+be defined relative to a rectilinear grid.
To activate this option a nonempty string is supplied in the rotation pair column of the relevant namelist.
The eastward component must start with "U" and the northward component with "V".
The remaining characters in the strings are used to identify which pair of components go together.
So for example, strings "U1" and "V1" next to "utau" and "vtau" would pair the wind stress components together
and rotate them on to the model grid directions; "U2" and "V2" could be used against a second pair of components,
and so on.
+So for example, strings "U1" and "V1" next to "utau" and "vtau" would pair the wind stress components together and
+rotate them onto the model grid directions;
+"U2" and "V2" could be used against a second pair of components, and so on.
The extra characters used in the strings are arbitrary.
The rot\_rep routine from the \mdl{geo2ocean} module is used to perform the rotation.
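The pairing convention can be sketched as follows; this is an illustrative Python sketch of the matching rule only (leading "U"/"V", arbitrary shared suffix), with hypothetical function and field names, not the actual Fortran implementation:

```python
# Illustrative sketch of how rotation-pair strings are matched.
def pair_rotation_components(pairs):
    """pairs: dict mapping field name -> rotation-pair string ('' if none).

    The eastward component starts with 'U', the northward with 'V';
    the remaining (arbitrary) characters identify matching components.
    """
    east, north = {}, {}
    for field, tag in pairs.items():
        if tag.startswith("U"):
            east[tag[1:]] = field
        elif tag.startswith("V"):
            north[tag[1:]] = field
    # keep only suffixes that have both a 'U' and a 'V' entry
    return [(east[k], north[k]) for k in sorted(east) if k in north]

matched = pair_rotation_components(
    {"utau": "U1", "vtau": "V1", "uwnd": "U2", "vwnd": "V2", "qsr": ""})
```

Here "utau"/"vtau" and "uwnd"/"vwnd" are paired, while the scalar field with an empty string is left unrotated.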
@@ 1199,19 +1298,18 @@
%
IOptions are defined through the \ngn{namsbc\_ssr} namelist variables.
n forced mode using a flux formulation (\np{ln\_flx}\forcode{ = .true.}), a
feedback term \emph{must} be added to the surface heat flux $Q_{ns}^o$:
+Options are defined through the \ngn{namsbc\_ssr} namelist variables.
+In forced mode using a flux formulation (\np{ln\_flx}\forcode{ = .true.}),
+a feedback term \emph{must} be added to the surface heat flux $Q_{ns}^o$:
\begin{equation} \label{eq:sbc_dmp_q}
Q_{ns} = Q_{ns}^o + \frac{dQ}{dT} \left( \left. T \right|_{k=1} - SST_{Obs} \right)
\end{equation}
where SST is a sea surface temperature field (observed or climatological), $T$ is
the model surface layer temperature and $\frac{dQ}{dT}$ is a negative feedback
coefficient usually taken equal to $40~W/m^2/K$. For a $50~m$
mixedlayer depth, this value corresponds to a relaxation time scale of two months.
This term ensures that if $T$ perfectly matches the supplied SST, then $Q$ is
equal to $Q_o$.

In the fresh water budget, a feedback term can also be added. Converted into an
equivalent freshwater flux, it takes the following expression :
+where SST is a sea surface temperature field (observed or climatological),
+$T$ is the model surface layer temperature and
+$\frac{dQ}{dT}$ is a negative feedback coefficient usually taken equal to $-40~W/m^2/K$.
+For a $50~m$ mixedlayer depth, this value corresponds to a relaxation time scale of two months.
+This term ensures that if $T$ perfectly matches the supplied SST, then $Q$ is equal to $Q_o$.
+
+In the fresh water budget, a feedback term can also be added.
+Converted into an equivalent freshwater flux, it takes the following expression:
\begin{equation} \label{eq:sbc_dmp_emp}
@@ 1220,13 +1318,14 @@
\end{equation}
where $\textit{emp}_{o }$ is a net surface fresh water flux (observed, climatological or an
atmospheric model product), \textit{SSS}$_{Obs}$ is a sea surface salinity (usually a time
interpolation of the monthly mean Polar Hydrographic Climatology \citep{Steele2001}),
$\left.S\right_{k=1}$ is the model surface layer salinity and $\gamma_s$ is a negative
feedback coefficient which is provided as a namelist parameter. Unlike heat flux, there is no
physical justification for the feedback term in \autoref{eq:sbc_dmp_emp} as the atmosphere
does not care about ocean surface salinity \citep{Madec1997}. The SSS restoring
term should be viewed as a flux correction on freshwater fluxes to reduce the
uncertainties we have on the observed freshwater budget.
+where $\textit{emp}_{o}$ is a net surface fresh water flux
+(observed, climatological or an atmospheric model product),
+\textit{SSS}$_{Obs}$ is a sea surface salinity
+(usually a time interpolation of the monthly mean Polar Hydrographic Climatology \citep{Steele2001}),
+$\left.S\right|_{k=1}$ is the model surface layer salinity and
+$\gamma_s$ is a negative feedback coefficient which is provided as a namelist parameter.
+Unlike heat flux, there is no physical justification for the feedback term in \autoref{eq:sbc_dmp_emp} as
+the atmosphere does not care about ocean surface salinity \citep{Madec1997}.
+The SSS restoring term should be viewed as a flux correction on freshwater fluxes to
+reduce the uncertainties we have on the observed freshwater budget.
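As a numerical illustration of the heat flux feedback term and of the two-month time scale quoted above, the following Python sketch uses the $-40~W/m^2/K$ coefficient and $50~m$ mixed-layer depth from the text; the density and heat capacity values are typical, illustrative seawater values:

```python
# Numerical sketch of the SST restoring term; dQ/dT = -40 W/m^2/K and
# the 50 m mixed-layer depth are the values quoted in the text, while
# rho0 and cp are typical (illustrative) seawater values.
def restored_heat_flux(q_ns_o, t_model, sst_obs, dqdt=-40.0):
    """Qns = Qns^o + dQ/dT * (T|_{k=1} - SST_obs), in W/m^2."""
    return q_ns_o + dqdt * (t_model - sst_obs)

# a model surface 0.5 K warmer than the observed SST loses an extra 20 W/m^2
q = restored_heat_flux(q_ns_o=10.0, t_model=18.5, sst_obs=18.0)

# relaxation time scale implied by |dQ/dT| over a 50 m mixed layer:
# tau = rho0 * cp * h / |dQ/dT|, roughly two months
rho0, cp, h = 1026.0, 3990.0, 50.0
tau_days = rho0 * cp * h / 40.0 / 86400.0
```

The computed time scale is about 59 days, consistent with the "two months" quoted for a $50~m$ mixed layer.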
% 
@@ 1236,26 +1335,33 @@
\label{subsec:SBC_icecover}
The presence at the sea surface of an ice covered area modifies all the fluxes
transmitted to the ocean. There are several way to handle seaice in the system
depending on the value of the \np{nn\_ice} namelist parameter found in \ngn{namsbc} namelist.
+The presence at the sea surface of an ice covered area modifies all the fluxes transmitted to the ocean.
+There are several ways to handle sea-ice in the system depending on
+the value of the \np{nn\_ice} namelist parameter found in the \ngn{namsbc} namelist.
\begin{description}
\item[nn{\_}ice = 0] there will never be seaice in the computational domain.
This is a typical namelist value used for tropical ocean domain. The surface fluxes
are simply specified for an icefree ocean. No specific things is done for seaice.
\item[nn{\_}ice = 1] seaice can exist in the computational domain, but no seaice model
is used. An observed ice covered area is read in a file. Below this area, the SST is
restored to the freezing point and the heat fluxes are set to $4~W/m^2$ ($2~W/m^2$)
in the northern (southern) hemisphere. The associated modification of the freshwater
fluxes are done in such a way that the change in buoyancy fluxes remains zero.
This prevents deep convection to occur when trying to reach the freezing point
(and so ice covered area condition) while the SSS is too large. This manner of
managing seaice area, just by using si IF case, is usually referred as the \textit{iceif}
model. It can be found in the \mdl{sbcice{\_}if} module.
\item[nn{\_}ice = 2 or more] A full sea ice model is used. This model computes the
iceocean fluxes, that are combined with the airsea fluxes using the ice fraction of
each model cell to provide the surface ocean fluxes. Note that the activation of a
seaice model is is done by defining a CPP key (\key{lim3} or \key{cice}).
The activation automatically overwrites the read value of nn{\_}ice to its appropriate
value ($i.e.$ $2$ for LIM3 or $3$ for CICE).
+\item[nn{\_}ice = 0]
+ there will never be sea-ice in the computational domain.
+ This is a typical namelist value used for a tropical ocean domain.
+ The surface fluxes are simply specified for an ice-free ocean.
+ Nothing specific is done for sea-ice.
+\item[nn{\_}ice = 1]
+ sea-ice can exist in the computational domain, but no sea-ice model is used.
+ An observed ice covered area is read in a file.
+ Below this area, the SST is restored to the freezing point and
+ the heat fluxes are set to $-4~W/m^2$ ($-2~W/m^2$) in the northern (southern) hemisphere.
+ The associated modification of the freshwater fluxes is done in such a way that
+ the change in buoyancy fluxes remains zero.
+ This prevents deep convection from occurring when trying to reach the freezing point
+ (and so the ice covered area condition) while the SSS is too large.
+ This manner of managing the sea-ice area, just by using an IF test,
+ is usually referred to as the \textit{ice-if} model.
+ It can be found in the \mdl{sbcice{\_}if} module.
+\item[nn{\_}ice = 2 or more]
+ A full sea-ice model is used.
+ This model computes the ice-ocean fluxes,
+ which are combined with the air-sea fluxes using the ice fraction of each model cell to
+ provide the surface ocean fluxes.
+ Note that the activation of a sea-ice model is done by defining a CPP key (\key{lim3} or \key{cice}).
+ The activation automatically overwrites the read value of nn{\_}ice to its appropriate value
+ ($i.e.$ $2$ for LIM3 or $3$ for CICE).
\end{description}
@@ 1265,24 +1371,30 @@
\label{subsec:SBC_cice}
It is now possible to couple a regional or global NEMO configuration (without AGRIF) to the CICE seaice
model by using \key{cice}. The CICE code can be obtained from
\href{http://oceans11.lanl.gov/trac/CICE/}{LANL} and the additional 'hadgem3' drivers will be required,
even with the latest code release. Input grid files consistent with those used in NEMO will also be needed,
and CICE CPP keys \textbf{ORCA\_GRID}, \textbf{CICE\_IN\_NEMO} and \textbf{coupled} should be used (seek advice from UKMO
if necessary). Currently the code is only designed to work when using the CORE forcing option for NEMO (with
\textit{calc\_strair}\forcode{ = .true.} and \textit{calc\_Tsfc}\forcode{ = .true.} in the CICE namelist), or alternatively when NEMO
is coupled to the HadGAM3 atmosphere model (with \textit{calc\_strair}\forcode{ = .false.} and \textit{calc\_Tsfc}\forcode{ = false}).
The code is intended to be used with \np{nn\_fsbc} set to 1 (although coupling ocean and ice less frequently
should work, it is possible the calculation of some of the oceanice fluxes needs to be modified slightly  the
user should check that results are not significantly different to the standard case).

There are two options for the technical coupling between NEMO and CICE. The standard version allows
complete flexibility for the domain decompositions in the individual models, but this is at the expense of global
gather and scatter operations in the coupling which become very expensive on larger numbers of processors. The
alternative option (using \key{nemocice\_decomp} for both NEMO and CICE) ensures that the domain decomposition is
identical in both models (provided domain parameters are set appropriately, and
\textit{processor\_shape~=~squareice} and \textit{distribution\_wght~=~block} in the CICE namelist) and allows
much more efficient direct coupling on individual processors. This solution scales much better although it is at
the expense of having more idle CICE processors in areas where there is no sea ice.
+It is now possible to couple a regional or global NEMO configuration (without AGRIF)
+to the CICE sea-ice model by using \key{cice}.
+The CICE code can be obtained from \href{http://oceans11.lanl.gov/trac/CICE/}{LANL} and
+the additional 'hadgem3' drivers will be required, even with the latest code release.
+Input grid files consistent with those used in NEMO will also be needed,
+and CICE CPP keys \textbf{ORCA\_GRID}, \textbf{CICE\_IN\_NEMO} and \textbf{coupled} should be used
+(seek advice from UKMO if necessary).
+Currently the code is only designed to work when using the CORE forcing option for NEMO
+(with \textit{calc\_strair}\forcode{ = .true.} and \textit{calc\_Tsfc}\forcode{ = .true.} in the CICE namelist),
+or alternatively when NEMO is coupled to the HadGAM3 atmosphere model
+(with \textit{calc\_strair}\forcode{ = .false.} and \textit{calc\_Tsfc}\forcode{ = .false.}).
+The code is intended to be used with \np{nn\_fsbc} set to 1
+(although coupling ocean and ice less frequently should work,
+it is possible the calculation of some of the oceanice fluxes needs to be modified slightly 
+the user should check that results are not significantly different to the standard case).
+
+There are two options for the technical coupling between NEMO and CICE.
+The standard version allows complete flexibility for the domain decompositions in the individual models,
+but this is at the expense of global gather and scatter operations in the coupling which
+become very expensive on larger numbers of processors.
+The alternative option (using \key{nemocice\_decomp} for both NEMO and CICE) ensures that
+the domain decomposition is identical in both models (provided domain parameters are set appropriately,
+and \textit{processor\_shape~=~squareice} and \textit{distribution\_wght~=~block} in the CICE namelist) and
+allows much more efficient direct coupling on individual processors.
+This solution scales much better although it is at the expense of having more idle CICE processors in areas where
+there is no sea ice.
% 
@@ 1292,17 +1404,19 @@
\label{subsec:SBC_fwb}
For global ocean simulation it can be useful to introduce a control of the mean sea
level in order to prevent unrealistic drift of the sea surface height due to inaccuracy
in the freshwater fluxes. In \NEMO, two way of controlling the the freshwater budget.
+For global ocean simulations it can be useful to introduce a control of the mean sea level in order to
+prevent unrealistic drift of the sea surface height due to inaccuracy in the freshwater fluxes.
+In \NEMO, there are two ways of controlling the freshwater budget.
\begin{description}
\item[\np{nn\_fwb}\forcode{ = 0}] no control at all. The mean sea level is free to drift, and will
certainly do so.
\item[\np{nn\_fwb}\forcode{ = 1}] global mean \textit{emp} set to zero at each model time step.
+\item[\np{nn\_fwb}\forcode{ = 0}]
+ no control at all.
+ The mean sea level is free to drift, and will certainly do so.
+\item[\np{nn\_fwb}\forcode{ = 1}]
+ global mean \textit{emp} set to zero at each model time step.
%Note that with a seaice model, this technique only control the mean sea level with linear free surface (\key{vvl} not defined) and no mass flux between ocean and ice (as it is implemented in the current iceocean coupling).
\item[\np{nn\_fwb}\forcode{ = 2}] freshwater budget is adjusted from the previous year annual
mean budget which is read in the \textit{EMPave\_old.dat} file. As the model uses the
Boussinesq approximation, the annual mean fresh water budget is simply evaluated
from the change in the mean sea level at January the first and saved in the
\textit{EMPav.dat} file.
+\item[\np{nn\_fwb}\forcode{ = 2}]
+ freshwater budget is adjusted from the previous year annual mean budget which
+ is read in the \textit{EMPave\_old.dat} file.
+ As the model uses the Boussinesq approximation, the annual mean fresh water budget is simply evaluated from
+ the change in the mean sea level on January 1st and saved in the \textit{EMPav.dat} file.
\end{description}
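The \np{nn\_fwb}\forcode{ = 1} option amounts to removing the area-weighted global mean of \textit{emp} at every time step, which can be sketched in Python as below (the grid areas and flux values are purely illustrative):

```python
# Sketch of the nn_fwb = 1 option: remove the area-weighted global mean
# of emp at each time step so the net freshwater budget is zero.
def zero_global_mean_emp(emp, cell_area):
    """Return emp adjusted so its area-weighted global mean is zero.

    emp       : evaporation-minus-precipitation per cell (kg/m^2/s)
    cell_area : horizontal area of each cell (m^2)
    """
    total_area = sum(cell_area)
    mean = sum(e * a for e, a in zip(emp, cell_area)) / total_area
    return [e - mean for e in emp]

# three-cell toy grid: the adjusted field integrates to zero
adjusted = zero_global_mean_emp([1.0e-5, -2.0e-5, 4.0e-5], [1.0, 2.0, 1.0])
```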
@@ 1318,12 +1432,13 @@
%
In order to read a neutral drag coefficient, from an external data source ($i.e.$ a wave model), the
logical variable \np{ln\_cdgw} in \ngn{namsbc} namelist must be set to \forcode{.true.}.
The \mdl{sbcwave} module containing the routine \np{sbc\_wave} reads the
namelist \ngn{namsbc\_wave} (for external data names, locations, frequency, interpolation and all
the miscellanous options allowed by Input Data generic Interface see \autoref{sec:SBC_input})
and a 2D field of neutral drag coefficient.
+In order to read a neutral drag coefficient from an external data source ($i.e.$ a wave model),
+the logical variable \np{ln\_cdgw} in the \ngn{namsbc} namelist must be set to \forcode{.true.}.
+The \mdl{sbcwave} module containing the routine \np{sbc\_wave} reads the namelist \ngn{namsbc\_wave}
+(for external data names, locations, frequency, interpolation and all the miscellaneous options allowed by
+the Input Data generic Interface, see \autoref{sec:SBC_input}) and
+a 2D field of neutral drag coefficient.
Then using the routine TURB\_CORE\_1Z or TURB\_CORE\_2Z, and starting from the neutral drag coefficient provided,
the drag coefficient is computed according to stable/unstable conditions of the airsea interface following \citet{Large_Yeager_Rep04}.
+the drag coefficient is computed according to stable/unstable conditions of the air-sea interface following
+\citet{Large_Yeager_Rep04}.
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_STO.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_STO.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_STO.tex (revision 10368)
@@ 14,10 +14,10 @@
The stochastic parametrization module aims to explicitly simulate uncertainties in the model.
More particularly, \cite{Brankart_OM2013} has shown that,
because of the nonlinearity of the seawater equation of state, unresolved scales represent
a major source of uncertainties in the computation of the large scale horizontal density gradient
(from T/S large scale fields), and that the impact of these uncertainties can be simulated
by random processes representing unresolved T/S fluctuations.
+The stochastic parametrization module aims to explicitly simulate uncertainties in the model.
+More particularly, \cite{Brankart_OM2013} has shown that,
+because of the nonlinearity of the seawater equation of state, unresolved scales represent a major source of
+uncertainties in the computation of the large scale horizontal density gradient (from T/S large scale fields),
+and that the impact of these uncertainties can be simulated by
+random processes representing unresolved T/S fluctuations.
The stochastic formulation of the equation of state can be written as:
@@ 26,13 +26,13 @@
\rho = \frac{1}{2m} \sum_{i=1}^m\{ \rho[T+\Delta T_i,S+\Delta S_i,p_o(z)] + \rho[T-\Delta T_i,S-\Delta S_i,p_o(z)] \}
\end{equation}
where $p_o(z)$ is the reference pressure depending on the depth and,
$\Delta T_i$ and $\Delta S_i$ are a set of T/S perturbations defined as the scalar product
of the respective local T/S gradients with random walks $\mathbf{\xi}$:
+where $p_o(z)$ is the reference pressure depending on the depth, and
+$\Delta T_i$ and $\Delta S_i$ are a set of T/S perturbations defined as
+the scalar product of the respective local T/S gradients with random walks $\mathbf{\xi}$:
\begin{equation}
\label{eq:sto_pert}
\Delta T_i = \mathbf{\xi}_i \cdot \nabla T \qquad \hbox{and} \qquad \Delta S_i = \mathbf{\xi}_i \cdot \nabla S
\end{equation}
$\mathbf{\xi}_i$ are produced by a firstorder autoregressive processes (AR1) with
a parametrized decorrelation time scale, and horizontal and vertical standard deviations $\sigma_s$.
+$\mathbf{\xi}_i$ are produced by first-order autoregressive processes (AR1) with
+a parametrized decorrelation time scale, and horizontal and vertical standard deviations $\sigma_s$.
$\mathbf{\xi}$ are uncorrelated over the horizontal and fully correlated along the vertical.
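The effect of the perturbation pairs can be illustrated numerically as below; this is a minimal Python sketch in which a toy quadratic equation of state stands in for the real seawater EOS, so the function and coefficients are purely illustrative:

```python
# Toy illustration of the stochastic equation of state: density is
# averaged over m pairs of +/- T/S perturbations.  rho_toy is a made-up
# quadratic EOS standing in for the real seawater equation of state.
def rho_toy(t, s):
    """Hypothetical nonlinear EOS: quadratic in T, linear in S."""
    return 1026.0 - 0.2 * t - 0.005 * t * t + 0.8 * s

def rho_stochastic(t, s, dts):
    """dts: list of (dT_i, dS_i) perturbation pairs (xi_i . grad T/S)."""
    m = len(dts)
    return sum(rho_toy(t + dt, s + ds) + rho_toy(t - dt, s - ds)
               for dt, ds in dts) / (2.0 * m)

rho = rho_stochastic(10.0, 35.0, [(0.5, 0.1), (1.0, 0.2)])
```

Because the toy EOS is concave in $T$, the averaged density differs slightly from the unperturbed one, which is precisely the systematic effect of unresolved fluctuations the parametrization aims to capture.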
@@ 41,16 +41,14 @@
\label{sec:STO_the_details}
The starting point of our implementation of stochastic parameterizations
in NEMO is to observe that many existing parameterizations are based
on autoregressive processes, which are used as a basic source of randomness
to transform a deterministic model into a probabilistic model.
+The starting point of our implementation of stochastic parameterizations in NEMO is to observe that
+many existing parameterizations are based on autoregressive processes,
+which are used as a basic source of randomness to transform a deterministic model into a probabilistic model.
A generic approach is thus to add one single new module in NEMO,
generating processes with appropriate statistics
to simulate each kind of uncertainty in the model
+generating processes with appropriate statistics to simulate each kind of uncertainty in the model
(see \cite{Brankart_al_GMD2015} for more details).
In practice, at every model grid point, independent Gaussian autoregressive
processes~$\xi^{(i)},\,i=1,\ldots,m$ are first generated
using the same basic equation:
+In practice, at every model grid point,
+independent Gaussian autoregressive processes~$\xi^{(i)},\,i=1,\ldots,m$ are first generated using
+the same basic equation:
\begin{equation}
@@ 60,13 +58,12 @@
\noindent
where $k$ is the index of the model timestep; and
$a^{(i)}$, $b^{(i)}$, $c^{(i)}$ are parameters defining
the mean ($\mu^{(i)}$) standard deviation ($\sigma^{(i)}$)
and correlation timescale ($\tau^{(i)}$) of each process:
+where $k$ is the index of the model timestep and
+$a^{(i)}$, $b^{(i)}$, $c^{(i)}$ are parameters defining the mean ($\mu^{(i)}$), standard deviation ($\sigma^{(i)}$) and
+correlation timescale ($\tau^{(i)}$) of each process:
\begin{itemize}
\item for order~1 processes, $w^{(i)}$ is a Gaussian white noise,
with zero mean and standard deviation equal to~1, and the parameters
$a^{(i)}$, $b^{(i)}$, $c^{(i)}$ are given by:
+\item
+ for order~1 processes, $w^{(i)}$ is a Gaussian white noise, with zero mean and standard deviation equal to~1,
+ and the parameters $a^{(i)}$, $b^{(i)}$, $c^{(i)}$ are given by:
\begin{equation}
@@ 83,7 +80,9 @@
\end{equation}
\item for order~$n>1$ processes, $w^{(i)}$ is an order~$n1$ autoregressive process,
with zero mean, standard deviation equal to~$\sigma^{(i)}$; correlation timescale
equal to~$\tau^{(i)}$; and the parameters $a^{(i)}$, $b^{(i)}$, $c^{(i)}$ are given by:
+\item
+ for order~$n>1$ processes, $w^{(i)}$ is an order~$n-1$ autoregressive process, with zero mean,
+ standard deviation equal to~$\sigma^{(i)}$;
+ correlation timescale equal to~$\tau^{(i)}$;
+ and the parameters $a^{(i)}$, $b^{(i)}$, $c^{(i)}$ are given by:
\begin{equation}
@@ 103,20 +102,15 @@
\noindent
In this way, higher order processes can be easily generated recursively using
the same piece of code implementing (\autoref{eq:autoreg}),
and using succesively processes from order $0$ to~$n1$ as~$w^{(i)}$.
The parameters in (\autoref{eq:ord2}) are computed so that this recursive application
of (\autoref{eq:autoreg}) leads to processes with the required standard deviation
and correlation timescale, with the additional condition that
the $n1$ first derivatives of the autocorrelation function
are equal to zero at~$t=0$, so that the resulting processes
become smoother and smoother as $n$ is increased.
+In this way, higher order processes can be easily generated recursively using the same piece of code implementing
+(\autoref{eq:autoreg}), and using successively processes from order $0$ to~$n-1$ as~$w^{(i)}$.
+The parameters in (\autoref{eq:ord2}) are computed so that this recursive application of
+(\autoref{eq:autoreg}) leads to processes with the required standard deviation and correlation timescale,
+with the additional condition that the $n-1$ first derivatives of the autocorrelation function are equal to
+zero at~$t=0$, so that the resulting processes become smoother and smoother as $n$ is increased.
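The recursive construction can be sketched in Python as follows; the $a$, $b$, $c$ used here are the standard AR(1) choices for a given mean, standard deviation and correlation time, shown only to illustrate the recursion, not NEMO's exact stopar coefficients:

```python
import math
import random

# Illustrative sketch of the recursive AR generation: the same update
# xi_k = a*xi_{k-1} + b*w_k + c is applied at every order, feeding the
# order n-1 process back in as the noise term w.  The a, b, c below are
# standard AR(1) choices, not necessarily NEMO's stopar coefficients.
def ar_process(order, nsteps, mu=0.0, sigma=1.0, tau=10.0, seed=1):
    rng = random.Random(seed)
    phi = math.exp(-1.0 / tau)               # autocorrelation per step
    a = phi
    b = sigma * math.sqrt(1.0 - phi * phi)
    c = mu * (1.0 - phi)
    if order == 1:
        # order 1: w is Gaussian white noise (zero mean, unit std dev)
        w = [rng.gauss(0.0, 1.0) for _ in range(nsteps)]
    else:
        # order n > 1: w is an order n-1 process, built with the same code
        w = ar_process(order - 1, nsteps, mu=0.0, sigma=1.0, tau=tau,
                       seed=seed + 1)
    xi, x = [], mu
    for wk in w:
        x = a * x + b * wk + c               # xi_k = a xi_{k-1} + b w_k + c
        xi.append(x)
    return xi

series = ar_process(order=2, nsteps=200)
```

Increasing the order reuses the identical update, so the same routine generates the whole family of smoother and smoother processes.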
Overall, this method provides quite a simple and generic way of generating
a wide class of stochastic processes.
However, this also means that new model parameters are needed to specify each of
these stochastic processes. As in any parameterization of lacking physics,
a very important issues then to tune these new parameters using either first principles,
model simulations, or realworld observations.
+Overall, this method provides quite a simple and generic way of generating a wide class of stochastic processes.
+However, this also means that new model parameters are needed to specify each of these stochastic processes.
+As in any parameterization of lacking physics, a very important issue is then to tune these new parameters using
+either first principles, model simulations, or real-world observations.
\section{Implementation details}
@@ 131,10 +125,14 @@
It involves three modules :
\begin{description}
\item[\mdl{stopar}] : define the Stochastic parameters and their time evolution.
\item[\mdl{storng}] : a random number generator based on (and includes) the 64bit KISS
 (Keep It Simple Stupid) random number generator distributed by George Marsaglia
 (see \href{https://groups.google.com/forum/#!searchin/comp.lang.fortran/64bit$20KISS$20RNGs}{here})
\item[\mdl{stopts}] : stochastic parametrisation associated with the nonlinearity of the equation of seawater,
 implementing \autoref{eq:sto_pert} and specific piece of code in the equation of state implementing \autoref{eq:eos_sto}.
+\item[\mdl{stopar}:]
+ defines the stochastic parameters and their time evolution.
+\item[\mdl{storng}:]
+ a random number generator based on (and including) the 64-bit KISS (Keep It Simple Stupid) random number generator
+ distributed by George Marsaglia
+ (see \href{https://groups.google.com/forum/#!searchin/comp.lang.fortran/64bit$20KISS$20RNGs}{here})
+\item[\mdl{stopts}:]
+ stochastic parametrisation associated with the non-linearity of the seawater equation of state,
+ implementing \autoref{eq:sto_pert} and a specific piece of code in
+ the equation of state implementing \autoref{eq:eos_sto}.
\end{description}
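The flavour of a KISS-style generator can be sketched in Python as below. The constants and seeds are one commonly circulated variant of Marsaglia's 64-bit KISS and are shown for illustration only; this is not a verbatim transcription of the \mdl{storng} code:

```python
# KISS-style pseudo-random generator in the spirit of storng: combine a
# multiply-with-carry, a xorshift and a linear congruential step on
# 64-bit integers.  Constants/seeds are one circulated 64-bit KISS
# variant, used here purely for illustration.
M = (1 << 64) - 1  # 64-bit mask

def kiss64(x=1234567890987654321, y=362436362436362436,
           z=1066149217761810, c=123456123456123456):
    while True:
        t = ((z << 58) + c) & M                  # multiply-with-carry step
        c = z >> 6
        z = (z + t) & M
        c = c + (1 if z < t else 0)
        y ^= (y << 13) & M                       # xorshift step
        y ^= y >> 17
        y ^= (y << 43) & M
        x = (6906969069 * x + 1234567) & M       # linear congruential step
        yield (x + y + z) & M

gen = kiss64()
sample = [next(gen) for _ in range(5)]
```

Combining three independent sub-generators gives a very long period while each step stays trivially simple, which is the point of the KISS design.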
@@ 142,37 +140,36 @@
The first routine (\rou{sto\_par}) is a direct implementation of (\autoref{eq:autoreg}),
applied at each model grid point (in 2D or 3D),
and called at each model time step ($k$) to update
every autoregressive process ($i=1,\ldots,m$).
+applied at each model grid point (in 2D or 3D), and called at each model time step ($k$) to
+update every autoregressive process ($i=1,\ldots,m$).
This routine also includes a filtering operator, applied to $w^{(i)}$,
to introduce a spatial correlation between the stochastic processes.
The second routine (\rou{sto\_par\_init}) is an initialization routine mainly dedicated
to the computation of parameters $a^{(i)}, b^{(i)}, c^{(i)}$
for each autoregressive process, as a function of the statistical properties
required by the model user (mean, standard deviation, time correlation,
order of the process,\ldots).
+The second routine (\rou{sto\_par\_init}) is an initialization routine mainly dedicated to
+the computation of parameters $a^{(i)}, b^{(i)}, c^{(i)}$ for each autoregressive process,
+as a function of the statistical properties required by the model user
+(mean, standard deviation, time correlation, order of the process,\ldots).
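The coefficient computation in \rou{sto\_par\_init} and the update in \rou{sto\_par} can be sketched for a first-order process. The following Python sketch is illustrative only (hypothetical function names; the coefficients follow the standard textbook AR(1) relations for a process of given mean, standard deviation and time correlation, and may differ in detail from the NEMO code):

```python
import math
import random

def ar1_params(mean, std, tcor):
    # Coefficients a, b, c of x(k+1) = a*x(k) + b*w(k) + c with Gaussian
    # white noise w, chosen so that the stationary process has the requested
    # mean, standard deviation and time correlation (in time steps).
    a = math.exp(-1.0 / tcor)           # one-step autocorrelation
    b = std * math.sqrt(1.0 - a * a)    # keeps the variance equal to std**2
    c = mean * (1.0 - a)                # keeps the requested mean
    return a, b, c

def ar1_update(x, a, b, c):
    # One sto_par-like update of a single autoregressive process.
    return a * x + b * random.gauss(0.0, 1.0) + c
```

Higher-order processes can then be built by using the output of one such process as the forcing of the next.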
Parameters for the processes can be specified through the following \ngn{namsto} namelist parameters:
\begin{description}
 \item[\np{nn\_sto\_eos}] : number of independent random walks
 \item[\np{rn\_eos\_stdxy}] : random walk horz. standard deviation (in grid points)
 \item[\np{rn\_eos\_stdz}] : random walk vert. standard deviation (in grid points)
 \item[\np{rn\_eos\_tcor}] : random walk time correlation (in timesteps)
 \item[\np{nn\_eos\_ord}] : order of autoregressive processes
 \item[\np{nn\_eos\_flt}] : passes of Laplacian filter
 \item[\np{rn\_eos\_lim}] : limitation factor (default = 3.0)
+\item[\np{nn\_sto\_eos}:] number of independent random walks
+\item[\np{rn\_eos\_stdxy}:] random walk horz. standard deviation (in grid points)
+\item[\np{rn\_eos\_stdz}:] random walk vert. standard deviation (in grid points)
+\item[\np{rn\_eos\_tcor}:] random walk time correlation (in timesteps)
+\item[\np{nn\_eos\_ord}:] order of autoregressive processes
+\item[\np{nn\_eos\_flt}:] passes of Laplacian filter
+\item[\np{rn\_eos\_lim}:] limitation factor (default = 3.0)
\end{description}
This routine also includes the initialization (seeding) of the random number generator.
The third routine (\rou{sto\_rst\_write}) writes a restart file (which suffix name is
given by \np{cn\_storst\_out} namelist parameter) containing the current value of
+The third routine (\rou{sto\_rst\_write}) writes a restart file
+(whose suffix is given by the \np{cn\_storst\_out} namelist parameter) containing the current value of
all autoregressive processes to allow restarting a simulation from where it has been interrupted.
This file also contains the current state of the random number generator.
When \np{ln\_rststo} is set to \forcode{.true.}), the restart file (which suffix name is
given by \np{cn\_storst\_in} namelist parameter) is read by the initialization routine
(\rou{sto\_par\_init}). The simulation will continue exactly as if it was not interrupted
only when \np{ln\_rstseed} is set to \forcode{.true.}, $i.e.$ when the state of
the random number generator is read in the restart file.
+When \np{ln\_rststo} is set to \forcode{.true.},
+the restart file (whose suffix is given by the \np{cn\_storst\_in} namelist parameter) is read by
+the initialization routine (\rou{sto\_par\_init}).
+The simulation will continue exactly as if it had not been interrupted only
+when \np{ln\_rstseed} is set to \forcode{.true.},
+$i.e.$ when the state of the random number generator is read from the restart file.
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_TRA.tex
===================================================================
--- NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_TRA.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_TRA.tex (revision 10368)
@@ -17,13 +17,14 @@
%$\ $\newline % force a new line
Using the representation described in \autoref{chap:DOM}, several semi-discrete
space forms of the tracer equations are available depending on the vertical
coordinate used and on the physics used. In all the equations presented
here, the masking has been omitted for simplicity. One must be aware that
all the quantities are masked fields and that each time a mean or difference
operator is used, the resulting field is multiplied by a mask.

The two active tracers are potential temperature and salinity. Their prognostic
equations can be summarized as follows:
+Using the representation described in \autoref{chap:DOM},
+several semi-discrete space forms of the tracer equations are available depending on
+the vertical coordinate used and on the physics used.
+In all the equations presented here, the masking has been omitted for simplicity.
+One must be aware that all the quantities are masked fields and
+that each time a mean or difference operator is used,
+the resulting field is multiplied by a mask.
+
+The two active tracers are potential temperature and salinity.
+Their prognostic equations can be summarized as follows:
\begin{equation*}
\text{NXT} = \text{ADV}+\text{LDF}+\text{ZDF}+\text{SBC}
@@ -31,31 +32,30 @@
\end{equation*}
NXT stands for next, referring to the timestepping. From left to right, the terms
on the rhs of the tracer equations are the advection (ADV), the lateral diffusion
(LDF), the vertical diffusion (ZDF), the contributions from the external forcings
(SBC: Surface Boundary Condition, QSR: penetrative Solar Radiation, and BBC:
Bottom Boundary Condition), the contribution from the bottom boundary Layer
(BBL) parametrisation, and an internal damping (DMP) term. The terms QSR,
BBC, BBL and DMP are optional. The external forcings and parameterisations
require complex inputs and complex calculations ($e.g.$ bulk formulae, estimation
of mixing coefficients) that are carried out in the SBC, LDF and ZDF modules and
described in \autoref{chap:SBC}, \autoref{chap:LDF} and \autoref{chap:ZDF}, respectively.
Note that \mdl{tranpc}, the non-penetrative convection module, although
located in the NEMO/OPA/TRA directory as it directly modifies the tracer fields,
is described with the model vertical physics (ZDF) together with other available
parameterization of convection.

In the present chapter we also describe the diagnostic equations used to compute
the seawater properties (density, Brunt-V\"{a}is\"{a}l\"{a} frequency, specific heat and
freezing point with associated modules \mdl{eosbn2} and \mdl{phycst}).

The different options available to the user are managed by namelist logicals or CPP keys.
For each equation term \textit{TTT}, the namelist logicals are \textit{ln\_traTTT\_xxx},
where \textit{xxx} is a 3 or 4 letter acronym corresponding to each optional scheme.
The CPP key (when it exists) is \key{traTTT}. The equivalent code can be
found in the \textit{traTTT} or \textit{traTTT\_xxx} module, in the NEMO/OPA/TRA directory.

The user has the option of extracting each tendency term on the RHS of the tracer
equation for output (\np{ln\_tra\_trd} or \np{ln\_tra\_mxl}\forcode{ = .true.}), as described in \autoref{chap:DIA}.
+NXT stands for next, referring to the time-stepping.
+From left to right, the terms on the rhs of the tracer equations are the advection (ADV),
+the lateral diffusion (LDF), the vertical diffusion (ZDF), the contributions from the external forcings
+(SBC: Surface Boundary Condition, QSR: penetrative Solar Radiation, and BBC: Bottom Boundary Condition),
+the contribution from the Bottom Boundary Layer (BBL) parametrisation, and an internal damping (DMP) term.
+The terms QSR, BBC, BBL and DMP are optional.
+The external forcings and parameterisations require complex inputs and complex calculations
+($e.g.$ bulk formulae, estimation of mixing coefficients) that are carried out in the SBC, LDF and ZDF modules and
+described in \autoref{chap:SBC}, \autoref{chap:LDF} and \autoref{chap:ZDF}, respectively.
+Note that \mdl{tranpc}, the non-penetrative convection module, although located in the NEMO/OPA/TRA directory as
+it directly modifies the tracer fields, is described with the model vertical physics (ZDF) together with
+other available parameterizations of convection.
+
+In the present chapter we also describe the diagnostic equations used to compute the seawater properties
+(density, Brunt-V\"{a}is\"{a}l\"{a} frequency, specific heat and freezing point with
+associated modules \mdl{eosbn2} and \mdl{phycst}).
+
+The different options available to the user are managed by namelist logicals or CPP keys.
+For each equation term \textit{TTT}, the namelist logicals are \textit{ln\_traTTT\_xxx},
+where \textit{xxx} is a 3 or 4 letter acronym corresponding to each optional scheme.
+The CPP key (when it exists) is \key{traTTT}.
+The equivalent code can be found in the \textit{traTTT} or \textit{traTTT\_xxx} module,
+in the NEMO/OPA/TRA directory.
+
+The user has the option of extracting each tendency term on the RHS of the tracer equation for output
+(\np{ln\_tra\_trd} or \np{ln\_tra\_mxl}\forcode{ = .true.}), as described in \autoref{chap:DIA}.
$\ $\newline % force a new line
@@ -70,7 +70,8 @@
%
When considered ($i.e.$ when \np{ln\_traadv\_NONE} is not set to \forcode{.true.}),
the advection tendency of a tracer is expressed in flux form,
$i.e.$ as the divergence of the advective fluxes. Its discrete expression is given by :
+When considered ($i.e.$ when \np{ln\_traadv\_NONE} is not set to \forcode{.true.}),
+the advection tendency of a tracer is expressed in flux form,
+$i.e.$ as the divergence of the advective fluxes.
+Its discrete expression is given by:
\begin{equation} \label{eq:tra_adv}
ADV_\tau = -\frac{1}{b_t} \left(
@@ -79,95 +80,100 @@
-\frac{1}{e_{3t}} \;\delta _k \left[ w\; \tau _w \right]
\end{equation}
where $\tau$ is either T or S, and $b_t= e_{1t}\,e_{2t}\,e_{3t}$ is the volume of $T$-cells.
The flux form in \autoref{eq:tra_adv}
implicitly requires the use of the continuity equation. Indeed, it is obtained
by using the following equality : $\nabla \cdot \left( \vect{U}\,T \right)=\vect{U} \cdot \nabla T$
which results from the use of the continuity equation, $\partial _t e_3 + e_3\;\nabla \cdot \vect{U}=0$
(which reduces to $\nabla \cdot \vect{U}=0$ in linear free surface, $i.e.$ \np{ln\_linssh}\forcode{ = .true.}).
Therefore it is of paramount importance to design the discrete analogue of the
advection tendency so that it is consistent with the continuity equation in order to
enforce the conservation properties of the continuous equations. In other words,
by setting $\tau = 1$ in (\autoref{eq:tra_adv}) we recover the discrete form of
+where $\tau$ is either T or S, and $b_t= e_{1t}\,e_{2t}\,e_{3t}$ is the volume of $T$-cells.
+The flux form in \autoref{eq:tra_adv} implicitly requires the use of the continuity equation.
+Indeed, it is obtained by using the following equality:
+$\nabla \cdot \left( \vect{U}\,T \right)=\vect{U} \cdot \nabla T$ which
+results from the use of the continuity equation, $\partial _t e_3 + e_3\;\nabla \cdot \vect{U}=0$
+(which reduces to $\nabla \cdot \vect{U}=0$ in linear free surface, $i.e.$ \np{ln\_linssh}\forcode{ = .true.}).
+Therefore it is of paramount importance to design the discrete analogue of the advection tendency so that
+it is consistent with the continuity equation in order to enforce the conservation properties of
+the continuous equations.
+In other words, by setting $\tau = 1$ in (\autoref{eq:tra_adv}) we recover the discrete form of
the continuity equation which is used to calculate the vertical velocity.
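This consistency is easy to check on a toy configuration. The Python sketch below is illustrative only (not NEMO code; one dimension, uniform cells, periodic boundary, and the sign convention of the tendency ignored): with $\tau = 1$ the divergence of the centred advective fluxes returns the discrete velocity divergence, and the result always sums to zero over the domain, which is the local conservation property.

```python
def flux_div(u_face, tau, dx):
    # Flux-form advection on a periodic 1-D grid: u_face[j] is the velocity
    # at the right face of cell j, tau[j] the cell tracer value.  The face
    # tracer is the 2nd order centred mean of the two neighbouring cells.
    n = len(tau)
    flux = [u_face[j] * 0.5 * (tau[j] + tau[(j + 1) % n]) for j in range(n)]
    # Divergence of the fluxes (flux difference across each cell).
    return [(flux[j] - flux[j - 1]) / dx for j in range(n)]
```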
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.9\textwidth]{Fig_adv_scheme}
\caption{ \protect\label{fig:adv_scheme}
Schematic representation of some ways used to evaluate the tracer value
at $u$-point and the amount of tracer exchanged between two neighbouring grid
points. Upsteam biased scheme (ups): the upstream value is used and the black
area is exchanged. Piecewise parabolic method (ppm): a parabolic interpolation
is used and the black and dark grey areas are exchanged. Monotonic upstream
scheme for conservative laws (muscl): a parabolic interpolation is used and black,
dark grey and grey areas are exchanged. Second order scheme (cen2): the mean
value is used and black, dark grey, grey and light grey areas are exchanged. Note
that this illustration does not include the flux limiter used in ppm and muscl schemes.}
\end{center} \end{figure}
+\begin{figure}[!t]
+ \begin{center}
+ \includegraphics[width=0.9\textwidth]{Fig_adv_scheme}
+ \caption{ \protect\label{fig:adv_scheme}
+  Schematic representation of some ways used to evaluate the tracer value at $u$-point and
+ the amount of tracer exchanged between two neighbouring grid points.
+  Upstream biased scheme (ups):
+ the upstream value is used and the black area is exchanged.
+ Piecewise parabolic method (ppm):
+ a parabolic interpolation is used and the black and dark grey areas are exchanged.
+ Monotonic upstream scheme for conservative laws (muscl):
+ a parabolic interpolation is used and black, dark grey and grey areas are exchanged.
+ Second order scheme (cen2):
+ the mean value is used and black, dark grey, grey and light grey areas are exchanged.
+ Note that this illustration does not include the flux limiter used in ppm and muscl schemes.
+ }
+ \end{center}
+\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The key difference between the advection schemes available in \NEMO is the choice
made in space and time interpolation to define the value of the tracer at the
velocity points (\autoref{fig:adv_scheme}).

Along solid lateral and bottom boundaries a zero tracer flux is automatically
specified, since the normal velocity is zero there. At the sea surface the
boundary condition depends on the type of sea surface chosen:
+The key difference between the advection schemes available in \NEMO is the choice made in space and
+time interpolation to define the value of the tracer at the velocity points
+(\autoref{fig:adv_scheme}).
+
+Along solid lateral and bottom boundaries a zero tracer flux is automatically specified,
+since the normal velocity is zero there.
+At the sea surface the boundary condition depends on the type of sea surface chosen:
\begin{description}
\item [linear free surface:] (\np{ln\_linssh}\forcode{ = .true.}) the first level thickness is constant in time:
the vertical boundary condition is applied at the fixed surface $z=0$
rather than on the moving surface $z=\eta$. There is a non-zero advective
flux which is set for all advection schemes as
$\left. {\tau _w } \right|_{k=1/2} =T_{k=1} $, $i.e.$
the product of surface velocity (at $z=0$) by the first level tracer value.
\item [non-linear free surface:] (\np{ln\_linssh}\forcode{ = .false.})
convergence/divergence in the first ocean level moves the free surface
up/down. There is no tracer advection through it so that the advective
fluxes through the surface are also zero
+\item[linear free surface:]
+ (\np{ln\_linssh}\forcode{ = .true.})
+ the first level thickness is constant in time:
+ the vertical boundary condition is applied at the fixed surface $z=0$ rather than on the moving surface $z=\eta$.
+  There is a non-zero advective flux which is set for all advection schemes as
+  $\left. {\tau _w } \right|_{k=1/2} = T_{k=1}$,
+  $i.e.$ the product of the surface velocity (at $z=0$) and the first level tracer value.
+\item[non-linear free surface:]
+  (\np{ln\_linssh}\forcode{ = .false.})
+ convergence/divergence in the first ocean level moves the free surface up/down.
+ There is no tracer advection through it so that the advective fluxes through the surface are also zero.
\end{description}
In all cases, this boundary condition retains local conservation of tracer.
Global conservation is obtained in non-linear free surface case,
but \textit{not} in the linear free surface case. Nevertheless, in the latter case,
it is achieved to a good approximation since the non-conservative
term is the product of the time derivative of the tracer and the free surface
height, two quantities that are not correlated \citep{Roullet_Madec_JGR00,Griffies_al_MWR01,Campin2004}.

The velocity field that appears in (\autoref{eq:tra_adv}) and (\autoref{eq:tra_adv_zco})
is the centred (\textit{now}) \textit{effective} ocean velocity, $i.e.$ the \textit{eulerian} velocity
(see \autoref{chap:DYN}) plus the eddy induced velocity (\textit{eiv})
and/or the mixed layer eddy induced velocity (\textit{eiv})
when those parameterisations are used (see \autoref{chap:LDF}).

Several tracer advection scheme are proposed, namely
a $2^{nd}$ or $4^{th}$ order centred schemes (CEN),
+In all cases, this boundary condition retains local conservation of tracer.
+Global conservation is obtained in the non-linear free surface case, but \textit{not} in the linear free surface case.
+Nevertheless, in the latter case, it is achieved to a good approximation since
+the non-conservative term is the product of the time derivative of the tracer and the free surface height,
+two quantities that are not correlated \citep{Roullet_Madec_JGR00,Griffies_al_MWR01,Campin2004}.
+
+The velocity field that appears in (\autoref{eq:tra_adv} and \autoref{eq:tra_adv_zco})
+is the centred (\textit{now}) \textit{effective} ocean velocity,
+$i.e.$ the \textit{eulerian} velocity (see \autoref{chap:DYN}) plus
+the eddy induced velocity (\textit{eiv}) and/or
+the mixed layer eddy induced velocity (\textit{eiv}) when
+those parameterisations are used (see \autoref{chap:LDF}).
+
+Several tracer advection schemes are proposed, namely a $2^{nd}$ or $4^{th}$ order centred scheme (CEN),
a $2^{nd}$ or $4^{th}$ order Flux Corrected Transport scheme (FCT),
a Monotone Upstream Scheme for Conservative Laws scheme (MUSCL),
a $3^{rd}$ Upstream Biased Scheme (UBS, also often called UP3), and
a Quadratic Upstream Interpolation for Convective Kinematics with
+a $3^{rd}$ Upstream Biased Scheme (UBS, also often called UP3),
+and a Quadratic Upstream Interpolation for Convective Kinematics with
Estimated Streaming Terms scheme (QUICKEST).
The choice is made in the \textit{\ngn{namtra\_adv}} namelist, by
setting to \forcode{.true.} one of the logicals \textit{ln\_traadv\_xxx}.
The corresponding code can be found in the \mdl{traadv\_xxx} module,
where \textit{xxx} is a 3 or 4 letter acronym corresponding to each scheme.
By default ($i.e.$ in the reference namelist, \ngn{namelist\_ref}), all the logicals
are set to \forcode{.false.}. If the user does not select an advection scheme
in the configuration namelist (\ngn{namelist\_cfg}), the tracers will \textit{not} be advected !

Details of the advection schemes are given below. The choosing an advection scheme
is a complex matter which depends on the model physics, model resolution,
+The choice is made in the \textit{\ngn{namtra\_adv}} namelist,
+by setting to \forcode{.true.} one of the logicals \textit{ln\_traadv\_xxx}.
+The corresponding code can be found in the \mdl{traadv\_xxx} module,
+where \textit{xxx} is a 3 or 4 letter acronym corresponding to each scheme.
+By default ($i.e.$ in the reference namelist, \ngn{namelist\_ref}), all the logicals are set to \forcode{.false.}.
+If the user does not select an advection scheme in the configuration namelist (\ngn{namelist\_cfg}),
+the tracers will \textit{not} be advected!
+
+Details of the advection schemes are given below.
+Choosing an advection scheme is a complex matter which depends on the model physics, model resolution,
type of tracer, as well as the issue of numerical cost. In particular, we note that
(1) CEN and FCT schemes require an explicit diffusion operator
while the other schemes are diffusive enough so that they do not necessarily need additional diffusion ;
+(1) CEN and FCT schemes require an explicit diffusion operator while the other schemes are diffusive enough so that
+they do not necessarily need additional diffusion;
(2) CEN and UBS are not \textit{positive} schemes
\footnote{negative values can appear in an initially strictly positive tracer field
which is advected}
, implying that false extrema are permitted. Their use is not recommended on passive tracers ;
(3) It is recommended that the same advection-diffusion scheme is
used on both active and passive tracers. Indeed, if a source or sink of a
passive tracer depends on an active one, the difference of treatment of
active and passive tracers can create very nice-looking frontal structures
that are pure numerical artefacts. Nevertheless, most of our users set a different
treatment on passive and active tracers, that's the reason why this possibility
is offered. We strongly suggest them to perform a sensitivity experiment
using a same treatment to assess the robustness of their results.
+\footnote{negative values can appear in an initially strictly positive tracer field which is advected},
+implying that false extrema are permitted.
+Their use is not recommended on passive tracers;
+(3) It is recommended that the same advection-diffusion scheme is used on both active and passive tracers.
+Indeed, if a source or sink of a passive tracer depends on an active one,
+the difference of treatment of active and passive tracers can create very nice-looking frontal structures that
+are pure numerical artefacts.
+Nevertheless, most of our users set a different treatment on passive and active tracers,
+which is why this possibility is offered.
+We strongly suggest that they perform a sensitivity experiment using the same treatment to
+assess the robustness of their results.
% 
@@ -179,11 +185,11 @@
% 2nd order centred scheme
The centred advection scheme (CEN) is used when \np{ln\_traadv\_cen}\forcode{ = .true.}.
Its order ($2^{nd}$ or $4^{th}$) can be chosen independently on horizontal (iso-level)
and vertical direction by setting \np{nn\_cen\_h} and \np{nn\_cen\_v} to $2$ or $4$.
+The centred advection scheme (CEN) is used when \np{ln\_traadv\_cen}\forcode{ = .true.}.
+Its order ($2^{nd}$ or $4^{th}$) can be chosen independently in the horizontal (iso-level) and vertical directions by
+setting \np{nn\_cen\_h} and \np{nn\_cen\_v} to $2$ or $4$.
CEN implementation can be found in the \mdl{traadv\_cen} module.
In the $2^{nd}$ order centred formulation (CEN2), the tracer at velocity points
is evaluated as the mean of the two neighbouring $T$-point values.
+In the $2^{nd}$ order centred formulation (CEN2), the tracer at velocity points is evaluated as the mean of
+the two neighbouring $T$-point values.
For example, in the $i$-direction:
\begin{equation} \label{eq:tra_adv_cen2}
@@ -191,19 +197,18 @@
\end{equation}
CEN2 is non diffusive ($i.e.$ it conserves the tracer variance, $\tau^2)$
but dispersive ($i.e.$ it may create false extrema). It is therefore notoriously
noisy and must be used in conjunction with an explicit diffusion operator to
produce a sensible solution. The associated timestepping is performed using
a leapfrog scheme in conjunction with an Asselin time-filter, so $T$ in
(\autoref{eq:tra_adv_cen2}) is the \textit{now} tracer value.

Note that using the CEN2, the overall tracer advection is of second
order accuracy since both (\autoref{eq:tra_adv}) and (\autoref{eq:tra_adv_cen2})
have this order of accuracy.
+CEN2 is non-diffusive ($i.e.$ it conserves the tracer variance, $\tau^2$) but dispersive
+($i.e.$ it may create false extrema).
+It is therefore notoriously noisy and must be used in conjunction with an explicit diffusion operator to
+produce a sensible solution.
+The associated time-stepping is performed using a leapfrog scheme in conjunction with an Asselin time-filter,
+so $T$ in (\autoref{eq:tra_adv_cen2}) is the \textit{now} tracer value.
+
+Note that using CEN2, the overall tracer advection is of second order accuracy since
+both (\autoref{eq:tra_adv}) and (\autoref{eq:tra_adv_cen2}) have this order of accuracy.
% 4nd order centred scheme
In the $4^{th}$ order formulation (CEN4), tracer values are evaluated at u and vpoints as
a $4^{th}$ order interpolation, and thus depend on the four neighbouring $T$-points.
+In the $4^{th}$ order formulation (CEN4), tracer values are evaluated at u and vpoints as
+a $4^{th}$ order interpolation, and thus depend on the four neighbouring $T$-points.
For example, in the $i$-direction:
\begin{equation} \label{eq:tra_adv_cen4}
@@ -211,34 +216,31 @@
=\overline{ T - \frac{1}{6}\,\delta _i \left[ \delta_{i+1/2}[T] \,\right] }^{\,i+1/2}
\end{equation}
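On a uniform grid, this interpolation reduces to the classic $(-T_{i-1} + 7\,T_i + 7\,T_{i+1} - T_{i+2})/12$ face value. A short illustrative Python sketch (hypothetical helper name, uniform spacing assumed; not NEMO code):

```python
def cen4_face(tm1, t0, tp1, tp2):
    # CEN4-style face value between the cells holding t0 and tp1:
    # the mean of (T - (1/6) * second difference) evaluated on each side,
    # i.e. the 4th order interpolation restricted to a uniform grid.
    left = t0 - (tp1 - 2.0 * t0 + tm1) / 6.0
    right = tp1 - (tp2 - 2.0 * tp1 + t0) / 6.0
    return 0.5 * (left + right)
```

Expanding the two corrections recovers the $(-1,\,7,\,7,\,-1)/12$ stencil, and the value is exact for a linear field.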
In the vertical direction (\np{nn\_cen\_v}\forcode{ = 4}), a $4^{th}$ COMPACT interpolation
has been prefered \citep{Demange_PhD2014}.
In the COMPACT scheme, both the field and its derivative are interpolated,
which leads, after a matrix inversion, spectral characteristics
similar to schemes of higher order \citep{Lele_JCP1992}.
+In the vertical direction (\np{nn\_cen\_v}\forcode{ = 4}),
+a $4^{th}$ order COMPACT interpolation has been preferred \citep{Demange_PhD2014}.
+In the COMPACT scheme, both the field and its derivative are interpolated, which leads, after a matrix inversion,
+to spectral characteristics similar to schemes of higher order \citep{Lele_JCP1992}.
Strictly speaking, the CEN4 scheme is not a $4^{th}$ order advection scheme
but a $4^{th}$ order evaluation of advective fluxes, since the divergence of
advective fluxes \autoref{eq:tra_adv} is kept at $2^{nd}$ order.
The expression \textit{$4^{th}$ order scheme} used in oceanographic literature
is usually associated with the scheme presented here.
Introducing a \forcode{.true.} $4^{th}$ order advection scheme is feasible but,
for consistency reasons, it requires changes in the discretisation of the tracer
advection together with changes in the continuity equation,
and the momentum advection and pressure terms.

A direct consequence of the pseudo-fourth order nature of the scheme is that
it is not non-diffusive, $i.e.$ the global variance of a tracer is not preserved using CEN4.
Furthermore, it must be used in conjunction with an explicit diffusion operator
to produce a sensible solution.
As in CEN2 case, the timestepping is performed using a leapfrog scheme in conjunction
with an Asselin timefilter, so $T$ in (\autoref{eq:tra_adv_cen4}) is the \textit{now} tracer.

At a $T$-grid cell adjacent to a boundary (coastline, bottom and surface),
an additional hypothesis must be made to evaluate $\tau _u^{cen4}$.
This hypothesis usually reduces the order of the scheme.
Here we choose to set the gradient of $T$ across the boundary to zero.
Alternative conditions can be specified, such as a reduction to a second order scheme
for these near boundary grid points.
+Strictly speaking, the CEN4 scheme is not a $4^{th}$ order advection scheme but
+a $4^{th}$ order evaluation of advective fluxes,
+since the divergence of advective fluxes (\autoref{eq:tra_adv}) is kept at $2^{nd}$ order.
+The expression \textit{$4^{th}$ order scheme} used in oceanographic literature is usually associated with
+the scheme presented here.
+Introducing a true $4^{th}$ order advection scheme is feasible but, for consistency reasons,
+it requires changes in the discretisation of the tracer advection together with changes in the continuity equation,
+and the momentum advection and pressure terms.
+
+A direct consequence of the pseudo-fourth order nature of the scheme is that it is not non-diffusive,
+$i.e.$ the global variance of a tracer is not preserved using CEN4.
+Furthermore, it must be used in conjunction with an explicit diffusion operator to produce a sensible solution.
+As in the CEN2 case, the time-stepping is performed using a leapfrog scheme in conjunction with an Asselin time-filter,
+so $T$ in (\autoref{eq:tra_adv_cen4}) is the \textit{now} tracer.
+
+At a $T$-grid cell adjacent to a boundary (coastline, bottom and surface),
+an additional hypothesis must be made to evaluate $\tau _u^{cen4}$.
+This hypothesis usually reduces the order of the scheme.
+Here we choose to set the gradient of $T$ across the boundary to zero.
+Alternative conditions can be specified, such as a reduction to a second order scheme for
+these near boundary grid points.
% 
@@ -248,11 +250,12 @@
\label{subsec:TRA_adv_tvd}
The Flux Corrected Transport schemes (FCT) is used when \np{ln\_traadv\_fct}\forcode{ = .true.}.
Its order ($2^{nd}$ or $4^{th}$) can be chosen independently on horizontal (iso-level)
and vertical direction by setting \np{nn\_fct\_h} and \np{nn\_fct\_v} to $2$ or $4$.
+The Flux Corrected Transport scheme (FCT) is used when \np{ln\_traadv\_fct}\forcode{ = .true.}.
+Its order ($2^{nd}$ or $4^{th}$) can be chosen independently in the horizontal (iso-level) and vertical directions by
+setting \np{nn\_fct\_h} and \np{nn\_fct\_v} to $2$ or $4$.
FCT implementation can be found in the \mdl{traadv\_fct} module.
In FCT formulation, the tracer at velocity points is evaluated using a combination of
an upstream and a centred scheme. For example, in the $i$-direction :
+In the FCT formulation, the tracer at velocity points is evaluated using a combination of an upstream and
+a centred scheme.
+For example, in the $i$-direction:
\begin{equation} \label{eq:tra_adv_fct}
\begin{split}
@@ -265,25 +268,26 @@
\end{split}
\end{equation}
where $c_u$ is a flux limiter function taking values between 0 and 1.
The FCT order is the one of the centred scheme used ($i.e.$ it depends on the setting of
\np{nn\_fct\_h} and \np{nn\_fct\_v}.
There exist many ways to define $c_u$, each corresponding to a different
FCT scheme. The one chosen in \NEMO is described in \citet{Zalesak_JCP79}.
$c_u$ only departs from $1$ when the advective term produces a local extremum in the tracer field.
The resulting scheme is quite expensive but \emph{positive}.
It can be used on both active and passive tracers.
A comparison of FCT2 with MUSCL and a MPDATA scheme can be found in \citet{Levy_al_GRL01}.

An additional option has been added controlled by \np{nn\_fct\_zts}. By setting this integer to
a value larger than zero, a $2^{nd}$ order FCT scheme is used on both horizontal and vertical direction,
but on the latter, a splitexplicit time stepping is used, with a number of subtimestep equals
to \np{nn\_fct\_zts}. This option can be useful when the size of the timestep is limited
by vertical advection \citep{Lemarie_OM2015}. Note that in this case, a similar splitexplicit
time stepping should be used on vertical advection of momentum to insure a better stability
(see \autoref{subsec:DYN_zad}).

For stability reasons (see \autoref{chap:STP}), $\tau _u^{cen}$ is evaluated in (\autoref{eq:tra_adv_fct})
using the \textit{now} tracer while $\tau _u^{ups}$ is evaluated using the \textit{before} tracer. In other words,
the advective part of the scheme is time stepped with a leapfrog scheme
+where $c_u$ is a flux limiter function taking values between 0 and 1.
+The FCT order is that of the centred scheme used
+($i.e.$ it depends on the setting of \np{nn\_fct\_h} and \np{nn\_fct\_v}).
+There exist many ways to define $c_u$, each corresponding to a different FCT scheme.
+The one chosen in \NEMO is described in \citet{Zalesak_JCP79}.
+$c_u$ only departs from $1$ when the advective term produces a local extremum in the tracer field.
+The resulting scheme is quite expensive but \emph{positive}.
+It can be used on both active and passive tracers.
+A comparison of FCT2 with MUSCL and a MPDATA scheme can be found in \citet{Levy_al_GRL01}.
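The blend of \autoref{eq:tra_adv_fct} can be sketched in a few lines of Python (illustrative only, not NEMO code; the limiter value $c_u$ is taken as an input here, whereas NEMO computes it with the \citet{Zalesak_JCP79} algorithm):

```python
def fct_face(t_left, t_right, u, c_u):
    # FCT face value between two cells: the upstream value plus a limited
    # fraction c_u in [0, 1] of the (centred - upstream) difference.
    tau_ups = t_left if u >= 0.0 else t_right  # upstream choice
    tau_cen = 0.5 * (t_left + t_right)         # 2nd order centred value
    return tau_ups + c_u * (tau_cen - tau_ups)
```

With $c_u = 0$ the scheme is pure (positive but diffusive) upstream, and with $c_u = 1$ it is pure centred; the limiter departs from $1$ only where the centred flux would create a local extremum.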
+
+An additional option, controlled by \np{nn\_fct\_zts}, has been added.
+By setting this integer to a value larger than zero,
+a $2^{nd}$ order FCT scheme is used in both horizontal and vertical directions, but in the latter,
+a split-explicit time stepping is used, with a number of sub-timesteps equal to \np{nn\_fct\_zts}.
+This option can be useful when the size of the timestep is limited by vertical advection \citep{Lemarie_OM2015}.
+Note that in this case, a similar split-explicit time stepping should be used on vertical advection of momentum to
+ensure better stability (see \autoref{subsec:DYN_zad}).
+
+For stability reasons (see \autoref{chap:STP}),
+$\tau _u^{cen}$ is evaluated in (\autoref{eq:tra_adv_fct}) using the \textit{now} tracer while
+$\tau _u^{ups}$ is evaluated using the \textit{before} tracer.
+In other words, the advective part of the scheme is time stepped with a leapfrog scheme
while a forward scheme is used for the diffusive part.
@@ -294,10 +298,11 @@
\label{subsec:TRA_adv_mus}
The Monotone Upstream Scheme for Conservative Laws (MUSCL) is used when \np{ln\_traadv\_mus}\forcode{ = .true.}.
+The Monotone Upstream Scheme for Conservative Laws (MUSCL) is used when \np{ln\_traadv\_mus}\forcode{ = .true.}.
MUSCL implementation can be found in the \mdl{traadv\_mus} module.
MUSCL has been first implemented in \NEMO by \citet{Levy_al_GRL01}. In its formulation, the tracer at velocity points
is evaluated assuming a linear tracer variation between two $T$-points
(\autoref{fig:adv_scheme}). For example, in the $i$-direction :
+MUSCL was first implemented in \NEMO by \citet{Levy_al_GRL01}.
+In its formulation, the tracer at velocity points is evaluated assuming a linear tracer variation between
+two $T$-points (\autoref{fig:adv_scheme}).
+For example, in the $i$-direction:
\begin{equation} \label{eq:tra_adv_mus}
\tau _u^{mus} = \left\{ \begin{aligned}
@@ -308,15 +313,15 @@
\end{aligned} \right.
\end{equation}
-where $\widetilde{\partial _i \tau}$ is the slope of the tracer on which a limitation
-is imposed to ensure the \textit{positive} character of the scheme.
-
-The time stepping is performed using a forward scheme, that is the \textit{before}
-tracer field is used to evaluate $\tau _u^{mus}$.
-
-For an ocean grid point adjacent to land and where the ocean velocity is
-directed toward land, an upstream flux is used. This choice ensure
-the \textit{positive} character of the scheme.
-In addition, fluxes round a gridpoint where a runoff is applied can optionally be
-computed using upstream fluxes (\np{ln\_mus\_ups}\forcode{ = .true.}).
+where $\widetilde{\partial _i \tau}$ is the slope of the tracer on which a limitation is imposed to
+ensure the \textit{positive} character of the scheme.
+
+The time stepping is performed using a forward scheme,
+that is the \textit{before} tracer field is used to evaluate $\tau _u^{mus}$.
+
+For an ocean grid point adjacent to land and where the ocean velocity is directed toward land,
+an upstream flux is used.
+This choice ensures the \textit{positive} character of the scheme.
+In addition, fluxes around a grid point where a runoff is applied can optionally be computed using upstream fluxes
+(\np{ln\_mus\_ups}\forcode{ = .true.}).
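As an illustration of the limited linear reconstruction in \autoref{eq:tra_adv_mus}, the following sketch evaluates the tracer at a velocity point from the upstream cell plus a slope-limited correction. The function names and the use of a minmod limiter are illustrative assumptions, not the exact NEMO limiter:

```python
def minmod(a, b):
    # Slope limiter: returns zero at a local extremum, which is what
    # enforces the positive (non-oscillatory) character of the scheme.
    if a * b <= 0.0:
        return 0.0
    return a if abs(a) < abs(b) else b

def muscl_face_value(t_mm, t_m, t_p, t_pp, u, dt, dx):
    """Tracer estimate at the velocity point between cells t_m and t_p,
    using a limited linear reconstruction in the upstream cell."""
    if u >= 0.0:                           # upstream cell is t_m
        slope = minmod(t_m - t_mm, t_p - t_m)
        return t_m + 0.5 * (1.0 - u * dt / dx) * slope
    else:                                  # upstream cell is t_p
        slope = minmod(t_p - t_m, t_pp - t_p)
        return t_p - 0.5 * (1.0 + u * dt / dx) * slope
```

At a local extremum the limited slope vanishes and the scheme reverts to a plain upstream value, which is how positivity is retained.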
% 
@@ -326,11 +331,11 @@
\label{subsec:TRA_adv_ubs}
-The UpstreamBiased Scheme (UBS) is used when \np{ln\_traadv\_ubs}\forcode{ = .true.}.
+The UpstreamBiased Scheme (UBS) is used when \np{ln\_traadv\_ubs}\forcode{ = .true.}.
UBS implementation can be found in the \mdl{traadv\_ubs} module.
-The UBS scheme, often called UP3, is also known as the Cell Averaged QUICK scheme
-(Quadratic Upstream Interpolation for Convective Kinematics). It is an upstreambiased
-third order scheme based on an upstreambiased parabolic interpolation.
-For example, in the $i$direction :
+The UBS scheme, often called UP3, is also known as the Cell Averaged QUICK scheme
+(Quadratic Upstream Interpolation for Convective Kinematics).
+It is an upstream-biased third order scheme based on an upstream-biased parabolic interpolation.
+For example, in the $i$-direction:
\begin{equation} \label{eq:tra_adv_ubs}
\tau _u^{ubs} =\overline T ^{i+1/2}\;\frac{1}{6} \left\{
@@ -342,30 +347,28 @@
where $\tau "_i =\delta _i \left[ {\delta _{i+1/2} \left[ \tau \right]} \right]$.
-This results in a dissipatively dominant (i.e. hyperdiffusive) truncation
-error \citep{Shchepetkin_McWilliams_OM05}. The overall performance of
-the advection scheme is similar to that reported in \cite{Farrow1995}.
-It is a relatively good compromise between accuracy and smoothness.
-Nevertheless the scheme is not \emph{positive}, meaning that false extrema are permitted,
-but the amplitude of such are significantly reduced over the centred second
-or fourth order method. therefore it is not recommended that it should be
-applied to a passive tracer that requires positivity.
-
-The intrinsic diffusion of UBS makes its use risky in the vertical direction
-where the control of artificial diapycnal fluxes is of paramount importance \citep{Shchepetkin_McWilliams_OM05, Demange_PhD2014}.
-Therefore the vertical flux is evaluated using either a $2^nd$ order FCT scheme
-or a $4^th$ order COMPACT scheme (\np{nn\_cen\_v}\forcode{ = 2 or 4}).
-
-For stability reasons (see \autoref{chap:STP}),
-the first term in \autoref{eq:tra_adv_ubs} (which corresponds to a second order
-centred scheme) is evaluated using the \textit{now} tracer (centred in time)
-while the second term (which is the diffusive part of the scheme), is
-evaluated using the \textit{before} tracer (forward in time).
-This choice is discussed by \citet{Webb_al_JAOT98} in the context of the
-QUICK advection scheme. UBS and QUICK schemes only differ
-by one coefficient. Replacing 1/6 with 1/8 in \autoref{eq:tra_adv_ubs}
-leads to the QUICK advection scheme \citep{Webb_al_JAOT98}.
-This option is not available through a namelist parameter, since the
-1/6 coefficient is hard coded. Nevertheless it is quite easy to make the
-substitution in the \mdl{traadv\_ubs} module and obtain a QUICK scheme.
+This results in a dissipatively dominant (i.e. hyperdiffusive) truncation error
+\citep{Shchepetkin_McWilliams_OM05}.
+The overall performance of the advection scheme is similar to that reported in \cite{Farrow1995}.
+It is a relatively good compromise between accuracy and smoothness.
+Nevertheless the scheme is not \emph{positive}, meaning that false extrema are permitted,
+but their amplitude is significantly reduced compared with the centred second or fourth order methods.
+It is therefore not recommended to apply it to a passive tracer that requires positivity.
+
+The intrinsic diffusion of UBS makes its use risky in the vertical direction where
+the control of artificial diapycnal fluxes is of paramount importance
+\citep{Shchepetkin_McWilliams_OM05, Demange_PhD2014}.
+Therefore the vertical flux is evaluated using either a $2^{nd}$ order FCT scheme or a $4^{th}$ order COMPACT scheme
+(\np{nn\_cen\_v}\forcode{ = 2 or 4}).
+
+For stability reasons (see \autoref{chap:STP}), the first term in \autoref{eq:tra_adv_ubs}
+(which corresponds to a second order centred scheme)
+is evaluated using the \textit{now} tracer (centred in time) while the second term
+(which is the diffusive part of the scheme)
+is evaluated using the \textit{before} tracer (forward in time).
+This choice is discussed by \citet{Webb_al_JAOT98} in the context of the QUICK advection scheme.
+UBS and QUICK schemes only differ by one coefficient.
+Replacing 1/6 with 1/8 in \autoref{eq:tra_adv_ubs} leads to the QUICK advection scheme \citep{Webb_al_JAOT98}.
+This option is not available through a namelist parameter, since the 1/6 coefficient is hard coded.
+Nevertheless it is quite easy to make the substitution in the \mdl{traadv\_ubs} module and obtain a QUICK scheme.
Note that it is straightforward to rewrite \autoref{eq:tra_adv_ubs} as follows:
@@ -384,13 +387,13 @@
\end{equation}
-\autoref{eq:traadv_ubs2} has several advantages. Firstly, it clearly reveals
-that the UBS scheme is based on the fourth order scheme to which an
-upstreambiased diffusion term is added. Secondly, this emphasises that the
-$4^{th}$ order part (as well as the $2^{nd}$ order part as stated above) has
-to be evaluated at the \emph{now} time step using \autoref{eq:tra_adv_ubs}.
-Thirdly, the diffusion term is in fact a biharmonic operator with an eddy
-coefficient which is simply proportional to the velocity:
-$A_u^{lm}= \frac{1}{12}\,{e_{1u}}^3\,u$. Note the current version of NEMO uses
-the computationally more efficient formulation \autoref{eq:tra_adv_ubs}.
+\autoref{eq:traadv_ubs2} has several advantages.
+Firstly, it clearly reveals that the UBS scheme is based on the fourth order scheme to which
+an upstreambiased diffusion term is added.
+Secondly, this emphasises that the $4^{th}$ order part (as well as the $2^{nd}$ order part as stated above) has to
+be evaluated at the \emph{now} time step using \autoref{eq:tra_adv_ubs}.
+Thirdly, the diffusion term is in fact a biharmonic operator with an eddy coefficient which
+is simply proportional to the velocity:
+$A_u^{lm}= \frac{1}{12}\,{e_{1u}}^3\,u$.
+Note that the current version of NEMO uses the computationally more efficient formulation \autoref{eq:tra_adv_ubs}.
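The structure of \autoref{eq:tra_adv_ubs} can be sketched as follows: a centred face value minus a fraction of the upstream second difference. A minimal illustrative sketch (names are not NEMO's), where passing `coef = 1/8` instead of the hard-coded `1/6` yields the QUICK scheme discussed above:

```python
def ubs_face_value(t_mm, t_m, t_p, t_pp, u, coef=1.0 / 6.0):
    """Upstream-biased third order tracer estimate at the u-point between
    cells t_m and t_p.  coef = 1/6 gives UBS; coef = 1/8 gives QUICK."""
    centred = 0.5 * (t_m + t_p)            # second order centred part
    if u >= 0.0:
        curv = t_p - 2.0 * t_m + t_mm      # upstream second difference
    else:
        curv = t_pp - 2.0 * t_p + t_m
    return centred - coef * curv
```

For a linear tracer field the second difference vanishes, so the scheme reduces to the centred estimate; the upstream-biased curvature term is what supplies the hyperdiffusive truncation error.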
% 
@@ -400,20 +403,18 @@
\label{subsec:TRA_adv_qck}
-The Quadratic Upstream Interpolation for Convective Kinematics with
-Estimated Streaming Terms (QUICKEST) scheme proposed by \citet{Leonard1979}
-is used when \np{ln\_traadv\_qck}\forcode{ = .true.}.
+The Quadratic Upstream Interpolation for Convective Kinematics with Estimated Streaming Terms (QUICKEST) scheme
+proposed by \citet{Leonard1979} is used when \np{ln\_traadv\_qck}\forcode{ = .true.}.
QUICKEST implementation can be found in the \mdl{traadv\_qck} module.
-QUICKEST is the third order Godunov scheme which is associated with the ULTIMATE QUICKEST
-limiter \citep{Leonard1991}. It has been implemented in NEMO by G. Reffray
-(MERCATORocean) and can be found in the \mdl{traadv\_qck} module.
-The resulting scheme is quite expensive but \emph{positive}.
-It can be used on both active and passive tracers.
-However, the intrinsic diffusion of QCK makes its use risky in the vertical
-direction where the control of artificial diapycnal fluxes is of paramount importance.
-Therefore the vertical flux is evaluated using the CEN2 scheme.
-This no longer guarantees the positivity of the scheme.
-The use of FCT in the vertical direction (as for the UBS case) should be implemented
-to restore this property.
+QUICKEST is the third order Godunov scheme which is associated with the ULTIMATE QUICKEST limiter
+\citep{Leonard1991}.
+It has been implemented in NEMO by G. Reffray (MERCATOR-ocean) and can be found in the \mdl{traadv\_qck} module.
+The resulting scheme is quite expensive but \emph{positive}.
+It can be used on both active and passive tracers.
+However, the intrinsic diffusion of QCK makes its use risky in the vertical direction where
+the control of artificial diapycnal fluxes is of paramount importance.
+Therefore the vertical flux is evaluated using the CEN2 scheme.
+This no longer guarantees the positivity of the scheme.
+The use of FCT in the vertical direction (as for the UBS case) should be implemented to restore this property.
%%%gmcomment : Cross term are missing in the current implementation....
@@ -432,17 +433,18 @@
Options are defined through the \ngn{namtra\_ldf} namelist variables.
They are regrouped in four items, allowing to specify
-$(i)$ the type of operator used (none, laplacian, bilaplacian),
-$(ii)$ the direction along which the operator acts (isolevel, horizontal, isoneutral),
-$(iii)$ some specific options related to the rotated operators ($i.e.$ nonisolevel operator), and
+$(i)$ the type of operator used (none, laplacian, bilaplacian),
+$(ii)$ the direction along which the operator acts (iso-level, horizontal, iso-neutral),
+$(iii)$ some specific options related to the rotated operators ($i.e.$ non-iso-level operators), and
$(iv)$ the specification of eddy diffusivity coefficient (either constant or variable in space and time).
-Item $(iv)$ will be described in \autoref{chap:LDF} .
-The direction along which the operators act is defined through the slope between this direction and the isolevel surfaces.
-The slope is computed in the \mdl{ldfslp} module and will also be described in \autoref{chap:LDF}.
-
-The lateral diffusion of tracers is evaluated using a forward scheme,
-$i.e.$ the tracers appearing in its expression are the \textit{before} tracers in time,
-except for the pure vertical component that appears when a rotation tensor is used.
-This latter component is solved implicitly together with the vertical diffusion term (see \autoref{chap:STP}).
-When \np{ln\_traldf\_msc}\forcode{ = .true.}, a Method of Stabilizing Correction is used in which
+Item $(iv)$ will be described in \autoref{chap:LDF}.
+The direction along which the operators act is defined through the slope between
+this direction and the iso-level surfaces.
+The slope is computed in the \mdl{ldfslp} module and will also be described in \autoref{chap:LDF}.
+
+The lateral diffusion of tracers is evaluated using a forward scheme,
+$i.e.$ the tracers appearing in its expression are the \textit{before} tracers in time,
+except for the pure vertical component that appears when a rotation tensor is used.
+This latter component is solved implicitly together with the vertical diffusion term (see \autoref{chap:STP}).
+When \np{ln\_traldf\_msc}\forcode{ = .true.}, a Method of Stabilizing Correction is used in which
the pure vertical component is split into an explicit and an implicit part \citep{Lemarie_OM2012}.
@@ -456,25 +458,27 @@
Three operator options are proposed and, one and only one of them must be selected:
\begin{description}
-\item [\np{ln\_traldf\_NONE}\forcode{ = .true.}]: no operator selected, the lateral diffusive tendency will not be
-applied to the tracer equation. This option can be used when the selected advection scheme
-is diffusive enough (MUSCL scheme for example).
-\item [\np{ln\_traldf\_lap}\forcode{ = .true.}]: a laplacian operator is selected. This harmonic operator
-takes the following expression: $\mathpzc{L}(T)=\nabla \cdot A_{ht}\;\nabla T $,
-where the gradient operates along the selected direction (see \autoref{subsec:TRA_ldf_dir}),
-and $A_{ht}$ is the eddy diffusivity coefficient expressed in $m^2/s$ (see \autoref{chap:LDF}).
-\item [\np{ln\_traldf\_blp}\forcode{ = .true.}]: a bilaplacian operator is selected. This biharmonic operator
-takes the following expression:
-$\mathpzc{B}= \mathpzc{L}\left(\mathpzc{L}(T) \right) = \nabla \cdot b\nabla \left( {\nabla \cdot b\nabla T} \right)$
-where the gradient operats along the selected direction,
-and $b^2=B_{ht}$ is the eddy diffusivity coefficient expressed in $m^4/s$ (see \autoref{chap:LDF}).
-In the code, the bilaplacian operator is obtained by calling the laplacian twice.
+\item[\np{ln\_traldf\_NONE}\forcode{ = .true.}:]
+ no operator selected, the lateral diffusive tendency will not be applied to the tracer equation.
+ This option can be used when the selected advection scheme is diffusive enough (MUSCL scheme for example).
+\item[\np{ln\_traldf\_lap}\forcode{ = .true.}:]
+ a laplacian operator is selected.
+ This harmonic operator takes the following expression: $\mathpzc{L}(T)=\nabla \cdot A_{ht}\;\nabla T $,
+ where the gradient operates along the selected direction (see \autoref{subsec:TRA_ldf_dir}),
+ and $A_{ht}$ is the eddy diffusivity coefficient expressed in $m^2/s$ (see \autoref{chap:LDF}).
+\item[\np{ln\_traldf\_blp}\forcode{ = .true.}:]
+ a bilaplacian operator is selected.
+ This biharmonic operator takes the following expression:
+ $\mathpzc{B}= \mathpzc{L}\left(\mathpzc{L}(T) \right) = \nabla \cdot b\nabla \left( {\nabla \cdot b\nabla T} \right)$,
+ where the gradient operates along the selected direction,
+ and $b^2=B_{ht}$ is the eddy diffusivity coefficient expressed in $m^4/s$ (see \autoref{chap:LDF}).
+ In the code, the bilaplacian operator is obtained by calling the laplacian twice.
\end{description}
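The "laplacian called twice" construction of the bilaplacian can be sketched in one dimension. This is an illustrative toy (uniform grid, zero-flux walls, invented names), not the \rou{traldf\_blp} code:

```python
import numpy as np

def laplacian_1d(t, a_ht, dx):
    # Iso-level harmonic operator L(T) = d/dx( A dT/dx ) in flux form,
    # with zero diffusive flux across the solid boundaries.
    grad = np.zeros(t.size + 1)
    grad[1:-1] = a_ht * np.diff(t) / dx    # interface fluxes, 0 at the walls
    return np.diff(grad) / dx              # flux divergence

def bilaplacian_1d(t, b_ht, dx):
    # The biharmonic operator B = L(L(T)) is obtained by applying the
    # laplacian twice, each pass using b = sqrt(B_ht).
    b = np.sqrt(b_ht)
    return laplacian_1d(laplacian_1d(t, b, dx), b, dx)
```

Because each pass is in flux form with zero boundary fluxes, both operators integrate to zero over the domain, i.e. they redistribute rather than create tracer.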
-Both laplacian and bilaplacian operators ensure the total tracer variance decrease.
-Their primary role is to provide strong dissipation at the smallest scale supported
-by the grid while minimizing the impact on the larger scale features.
-The main difference between the two operators is the scale selectiveness.
-The bilaplacian damping time ($i.e.$ its spin down time) scales like $\lambda^{4}$
-for disturbances of wavelength $\lambda$ (so that short waves damped more rapidelly than long ones),
+Both laplacian and bilaplacian operators ensure the total tracer variance decrease.
+Their primary role is to provide strong dissipation at the smallest scale supported by the grid while
+minimizing the impact on the larger scale features.
+The main difference between the two operators is the scale selectiveness.
+The bilaplacian damping time ($i.e.$ its spin down time) scales like $\lambda^{4}$ for
+disturbances of wavelength $\lambda$ (so that short waves are damped more rapidly than long ones),
whereas the laplacian damping time scales only like $\lambda^{2}$.
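The scale selectiveness above can be checked with a back-of-the-envelope helper (an illustrative sketch, not NEMO code): for a Fourier mode of wavenumber $k = 2\pi/\lambda$, the laplacian damps at rate $A\,k^2$ and the bilaplacian at rate $B\,k^4$, so the e-folding time scales like $\lambda^2$ and $\lambda^4$ respectively.

```python
import math

def damping_time(lam, coef, order):
    """E-folding time of a Fourier mode of wavelength lam under a diffusion
    operator with damping rate coef * k**order
    (order 2: laplacian, order 4: bilaplacian)."""
    k = 2.0 * math.pi / lam
    return 1.0 / (coef * k ** order)
```

A wave ten times longer is thus damped 100 times more slowly by the laplacian, but 10\,000 times more slowly by the bilaplacian, which is why the latter leaves large-scale features comparatively untouched.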
@@ -487,22 +491,23 @@
\label{subsec:TRA_ldf_dir}
-The choice of a direction of action determines the form of operator used.
-The operator is a simple (reentrant) laplacian acting in the (\textbf{i},\textbf{j}) plane
-when isolevel option is used (\np{ln\_traldf\_lev}\forcode{ = .true.})
-or when a horizontal ($i.e.$ geopotential) operator is demanded in \textit{z}coordinate
-(\np{ln\_traldf\_hor} and \np{ln\_zco} equal \forcode{.true.}).
+The choice of a direction of action determines the form of operator used.
+The operator is a simple (re-entrant) laplacian acting in the (\textbf{i},\textbf{j}) plane when
+the iso-level option is used (\np{ln\_traldf\_lev}\forcode{ = .true.}) or
+when a horizontal ($i.e.$ geopotential) operator is demanded in \textit{z}-coordinate
+(\np{ln\_traldf\_hor} and \np{ln\_zco} equal \forcode{.true.}).
The associated code can be found in the \mdl{traldf\_lap\_blp} module.
-The operator is a rotated (reentrant) laplacian when the direction along which it acts
-does not coincide with the isolevel surfaces,
-that is when standard or triad isoneutral option is used (\np{ln\_traldf\_iso} or
-\np{ln\_traldf\_triad} equals \forcode{.true.}, see \mdl{traldf\_iso} or \mdl{traldf\_triad} module, resp.),
-or when a horizontal ($i.e.$ geopotential) operator is demanded in \textit{s}coordinate
+The operator is a rotated (re-entrant) laplacian when
+the direction along which it acts does not coincide with the iso-level surfaces,
+that is when the standard or triad iso-neutral option is used
+(\np{ln\_traldf\_iso} or \np{ln\_traldf\_triad} equals \forcode{.true.},
+see \mdl{traldf\_iso} or \mdl{traldf\_triad} module, resp.), or
+when a horizontal ($i.e.$ geopotential) operator is demanded in \textit{s}-coordinate
(\np{ln\_traldf\_hor} and \np{ln\_sco} equal \forcode{.true.})
-\footnote{In this case, the standard isoneutral operator will be automatically selected}.
-In that case, a rotation is applied to the gradient(s) that appears in the operator
-so that diffusive fluxes acts on the three spatial direction.
-
-The resulting discret form of the three operators (one isolevel and two rotated one)
-is given in the next two subsections.
+\footnote{In this case, the standard iso-neutral operator will be automatically selected}.
+In that case, a rotation is applied to the gradient(s) that appear in the operator so that
+diffusive fluxes act along the three spatial directions.
+
+The resulting discrete form of the three operators (one iso-level and two rotated ones) is given in
+the next two subsections.
@@ -519,23 +524,22 @@
+ \delta _{j}\left[ A_v^{lT} \; \frac{e_{1v}\,e_{3v}}{e_{2v}} \;\delta _{j+1/2} [T] \right] \;\right)
\end{equation}
-where $b_t$=$e_{1t}\,e_{2t}\,e_{3t}$ is the volume of $T$cells
-and where zero diffusive fluxes is assumed across solid boundaries,
-first (and third in bilaplacian case) horizontal tracer derivative are masked.
-It is implemented in the \rou{traldf\_lap} subroutine found in the \mdl{traldf\_lap} module.
-The module also contains \rou{traldf\_blp}, the subroutine calling twice \rou{traldf\_lap}
-in order to compute the isolevel bilaplacian operator.
-
-It is a \emph{horizontal} operator ($i.e.$ acting along geopotential surfaces) in the $z$coordinate
-with or without partial steps, but is simply an isolevel operator in the $s$coordinate.
-It is thus used when, in addition to \np{ln\_traldf\_lap} or \np{ln\_traldf\_blp}\forcode{ = .true.},
-we have \np{ln\_traldf\_lev}\forcode{ = .true.} or \np{ln\_traldf\_hor}~=~\np{ln\_zco}\forcode{ = .true.}.
-In both cases, it significantly contributes to diapycnal mixing.
+where $b_t$=$e_{1t}\,e_{2t}\,e_{3t}$ is the volume of $T$-cells and
+where zero diffusive fluxes are assumed across solid boundaries
+(the first, and in the bilaplacian case third, horizontal tracer derivatives are masked).
+It is implemented in the \rou{traldf\_lap} subroutine found in the \mdl{traldf\_lap} module.
+The module also contains \rou{traldf\_blp}, the subroutine calling \rou{traldf\_lap} twice in order to
+compute the iso-level bilaplacian operator.
+
+It is a \emph{horizontal} operator ($i.e.$ acting along geopotential surfaces) in
+the $z$-coordinate with or without partial steps, but is simply an iso-level operator in the $s$-coordinate.
+It is thus used when, in addition to \np{ln\_traldf\_lap} or \np{ln\_traldf\_blp}\forcode{ = .true.},
+we have \np{ln\_traldf\_lev}\forcode{ = .true.} or \np{ln\_traldf\_hor}~=~\np{ln\_zco}\forcode{ = .true.}.
+In both cases, it significantly contributes to diapycnal mixing.
It is therefore never recommended, even when using it in the bilaplacian case.
-Note that in the partial step $z$coordinate (\np{ln\_zps}\forcode{ = .true.}), tracers in horizontally
-adjacent cells are located at different depths in the vicinity of the bottom.
-In this case, horizontal derivatives in (\autoref{eq:tra_ldf_lap}) at the bottom level
-require a specific treatment. They are calculated in the \mdl{zpshde} module,
-described in \autoref{sec:TRA_zpshde}.
+Note that in the partial step $z$-coordinate (\np{ln\_zps}\forcode{ = .true.}),
+tracers in horizontally adjacent cells are located at different depths in the vicinity of the bottom.
+In this case, horizontal derivatives in (\autoref{eq:tra_ldf_lap}) at the bottom level require a specific treatment.
+They are calculated in the \mdl{zpshde} module, described in \autoref{sec:TRA_zpshde}.
@@ -550,6 +554,6 @@
\subsubsection{Standard rotated (bi)laplacian operator (\protect\mdl{traldf\_iso})}
\label{subsec:TRA_ldf_iso}
-The general form of the second order lateral tracer subgrid scale physics
-(\autoref{eq:PE_zdf}) takes the following semidiscrete space form in $z$ and $s$coordinates:
+The general form of the second order lateral tracer subgrid scale physics (\autoref{eq:PE_zdf})
+takes the following semidiscrete space form in $z$ and $s$coordinates:
\begin{equation} \label{eq:tra_ldf_iso}
\begin{split}
@@ -569,32 +573,31 @@
& \left. {\left. { \qquad \qquad \ \ \ \left. {
+\;\frac{e_{1w}\,e_{2w}}{e_{3w}} \,\left( r_{1w}^2 + r_{2w}^2 \right)
- \,\delta_{k+1/2} [T] } \right) } \right] \quad } \right\}
- \end{split}
- \end{equation}
-where $b_t$=$e_{1t}\,e_{2t}\,e_{3t}$ is the volume of $T$cells,
-$r_1$ and $r_2$ are the slopes between the surface of computation
-($z$ or $s$surfaces) and the surface along which the diffusion operator
-acts ($i.e.$ horizontal or isoneutral surfaces). It is thus used when,
-in addition to \np{ln\_traldf\_lap}\forcode{ = .true.}, we have \np{ln\_traldf\_iso}\forcode{ = .true.},
-or both \np{ln\_traldf\_hor}\forcode{ = .true.} and \np{ln\_zco}\forcode{ = .true.}. The way these
-slopes are evaluated is given in \autoref{sec:LDF_slp}. At the surface, bottom
-and lateral boundaries, the turbulent fluxes of heat and salt are set to zero
-using the mask technique (see \autoref{sec:LBC_coast}).
-
-The operator in \autoref{eq:tra_ldf_iso} involves both lateral and vertical
-derivatives. For numerical stability, the vertical second derivative must
-be solved using the same implicit time scheme as that used in the vertical
-physics (see \autoref{sec:TRA_zdf}). For computer efficiency reasons, this term
-is not computed in the \mdl{traldf\_iso} module, but in the \mdl{trazdf} module
-where, if isoneutral mixing is used, the vertical mixing coefficient is simply
-increased by $\frac{e_{1w}\,e_{2w} }{e_{3w} }\ \left( {r_{1w} ^2+r_{2w} ^2} \right)$.
-
-This formulation conserves the tracer but does not ensure the decrease
-of the tracer variance. Nevertheless the treatment performed on the slopes
-(see \autoref{chap:LDF}) allows the model to run safely without any additional
-background horizontal diffusion \citep{Guilyardi_al_CD01}.
-
-Note that in the partial step $z$coordinate (\np{ln\_zps}\forcode{ = .true.}), the horizontal derivatives
-at the bottom level in \autoref{eq:tra_ldf_iso} require a specific treatment.
+ \,\delta_{k+1/2} [T] } \right) } \right] \quad } \right\}
+\end{split}
+\end{equation}
+where $b_t$=$e_{1t}\,e_{2t}\,e_{3t}$ is the volume of $T$-cells,
+$r_1$ and $r_2$ are the slopes between the surface of computation ($z$- or $s$-surfaces) and
+the surface along which the diffusion operator acts ($i.e.$ horizontal or iso-neutral surfaces).
+It is thus used when, in addition to \np{ln\_traldf\_lap}\forcode{ = .true.},
+we have \np{ln\_traldf\_iso}\forcode{ = .true.},
+or both \np{ln\_traldf\_hor}\forcode{ = .true.} and \np{ln\_zco}\forcode{ = .true.}.
+The way these slopes are evaluated is given in \autoref{sec:LDF_slp}.
+At the surface, bottom and lateral boundaries, the turbulent fluxes of heat and salt are set to zero using
+the mask technique (see \autoref{sec:LBC_coast}).
+
+The operator in \autoref{eq:tra_ldf_iso} involves both lateral and vertical derivatives.
+For numerical stability, the vertical second derivative must be solved using the same implicit time scheme as that
+used in the vertical physics (see \autoref{sec:TRA_zdf}).
+For computer efficiency reasons, this term is not computed in the \mdl{traldf\_iso} module,
+but in the \mdl{trazdf} module where, if iso-neutral mixing is used,
+the vertical mixing coefficient is simply increased by
+$\frac{e_{1w}\,e_{2w} }{e_{3w} }\ \left( {r_{1w} ^2+r_{2w} ^2} \right)$.
+
+This formulation conserves the tracer but does not ensure the decrease of the tracer variance.
+Nevertheless the treatment performed on the slopes (see \autoref{chap:LDF}) allows the model to run safely without
+any additional background horizontal diffusion \citep{Guilyardi_al_CD01}.
+
+Note that in the partial step $z$-coordinate (\np{ln\_zps}\forcode{ = .true.}),
+the horizontal derivatives at the bottom level in \autoref{eq:tra_ldf_iso} require a specific treatment.
They are calculated in module zpshde, described in \autoref{sec:TRA_zpshde}.
@@ -604,19 +607,18 @@
\label{subsec:TRA_ldf_triad}
-If the Griffies triad scheme is employed (\np{ln\_traldf\_triad}\forcode{ = .true.} ; see \autoref{apdx:triad})
-
-An alternative scheme developed by \cite{Griffies_al_JPO98} which ensures tracer variance decreases
-is also available in \NEMO (\np{ln\_traldf\_grif}\forcode{ = .true.}). A complete description of
-the algorithm is given in \autoref{apdx:triad}.
-
-The lateral fourth order bilaplacian operator on tracers is obtained by
-applying (\autoref{eq:tra_ldf_lap}) twice. The operator requires an additional assumption
-on boundary conditions: both first and third derivative terms normal to the
-coast are set to zero.
-
-The lateral fourth order operator formulation on tracers is obtained by
-applying (\autoref{eq:tra_ldf_iso}) twice. It requires an additional assumption
-on boundary conditions: first and third derivative terms normal to the
-coast, normal to the bottom and normal to the surface are set to zero.
+The Griffies triad scheme is employed when \np{ln\_traldf\_triad}\forcode{ = .true.} (see \autoref{apdx:triad}).
+
+An alternative scheme developed by \cite{Griffies_al_JPO98} which ensures tracer variance decreases
+is also available in \NEMO (\np{ln\_traldf\_grif}\forcode{ = .true.}).
+A complete description of the algorithm is given in \autoref{apdx:triad}.
+
+The lateral fourth order bilaplacian operator on tracers is obtained by applying (\autoref{eq:tra_ldf_lap}) twice.
+The operator requires an additional assumption on boundary conditions:
+both first and third derivative terms normal to the coast are set to zero.
+
+The lateral fourth order operator formulation on tracers is obtained by applying (\autoref{eq:tra_ldf_iso}) twice.
+It requires an additional assumption on boundary conditions:
+first and third derivative terms normal to the coast,
+normal to the bottom and normal to the surface are set to zero.
%&& Option for the rotated operators
@@ -631,5 +633,6 @@
\np{ln\_triad\_iso} = pure horizontal mixing in ML (triad only)
-\np{rn\_sw\_triad} =1 switching triad ; =0 all 4 triads used (triad only)
+\np{rn\_sw\_triad} =1 switching triad;
+ =0 all 4 triads used (triad only)
\np{ln\_botmix\_triad} = lateral mixing on bottom (triad only)
@@ -646,8 +649,7 @@
Options are defined through the \ngn{namzdf} namelist variables.
-The formulation of the vertical subgrid scale tracer physics is the same
-for all the vertical coordinates, and is based on a laplacian operator.
-The vertical diffusion operator given by (\autoref{eq:PE_zdf}) takes the
-following semidiscrete space form:
+The formulation of the vertical subgrid scale tracer physics is the same for all the vertical coordinates,
+and is based on a laplacian operator.
+The vertical diffusion operator given by (\autoref{eq:PE_zdf}) takes the following semidiscrete space form:
\begin{equation} \label{eq:tra_zdf}
\begin{split}
@@ -657,29 +659,25 @@
\end{split}
\end{equation}
-where $A_w^{vT}$ and $A_w^{vS}$ are the vertical eddy diffusivity
-coefficients on temperature and salinity, respectively. Generally,
-$A_w^{vT}=A_w^{vS}$ except when double diffusive mixing is
-parameterised ($i.e.$ \key{zdfddm} is defined). The way these coefficients
-are evaluated is given in \autoref{chap:ZDF} (ZDF). Furthermore, when
-isoneutral mixing is used, both mixing coefficients are increased
-by $\frac{e_{1w}\,e_{2w} }{e_{3w} }\ \left( {r_{1w} ^2+r_{2w} ^2} \right)$
-to account for the vertical second derivative of \autoref{eq:tra_ldf_iso}.
-
-At the surface and bottom boundaries, the turbulent fluxes of
-heat and salt must be specified. At the surface they are prescribed
-from the surface forcing and added in a dedicated routine (see \autoref{subsec:TRA_sbc}),
-whilst at the bottom they are set to zero for heat and salt unless
-a geothermal flux forcing is prescribed as a bottom boundary
-condition (see \autoref{subsec:TRA_bbc}).
-
-The large eddy coefficient found in the mixed layer together with high
-vertical resolution implies that in the case of explicit time stepping
-(\np{ln\_zdfexp}\forcode{ = .true.}) there would be too restrictive a constraint on
-the time step. Therefore, the default implicit time stepping is preferred
-for the vertical diffusion since it overcomes the stability constraint.
-A forward time differencing scheme (\np{ln\_zdfexp}\forcode{ = .true.}) using a time
-splitting technique (\np{nn\_zdfexp} $> 1$) is provided as an alternative.
-Namelist variables \np{ln\_zdfexp} and \np{nn\_zdfexp} apply to both
-tracers and dynamics.
+where $A_w^{vT}$ and $A_w^{vS}$ are the vertical eddy diffusivity coefficients on temperature and salinity,
+respectively.
+Generally, $A_w^{vT}=A_w^{vS}$ except when double diffusive mixing is parameterised ($i.e.$ \key{zdfddm} is defined).
+The way these coefficients are evaluated is given in \autoref{chap:ZDF} (ZDF).
+Furthermore, when isoneutral mixing is used, both mixing coefficients are increased by
+$\frac{e_{1w}\,e_{2w} }{e_{3w} }\ \left( {r_{1w} ^2+r_{2w} ^2} \right)$ to account for
+the vertical second derivative of \autoref{eq:tra_ldf_iso}.
+
+At the surface and bottom boundaries, the turbulent fluxes of heat and salt must be specified.
+At the surface they are prescribed from the surface forcing and added in a dedicated routine
+(see \autoref{subsec:TRA_sbc}), whilst at the bottom they are set to zero for heat and salt unless
+a geothermal flux forcing is prescribed as a bottom boundary condition (see \autoref{subsec:TRA_bbc}).
+
+The large eddy coefficient found in the mixed layer together with high vertical resolution implies that
+in the case of explicit time stepping (\np{ln\_zdfexp}\forcode{ = .true.})
+there would be too restrictive a constraint on the time step.
+Therefore, the default implicit time stepping is preferred for the vertical diffusion since
+it overcomes the stability constraint.
+A forward time differencing scheme (\np{ln\_zdfexp}\forcode{ = .true.}) using
+a time splitting technique (\np{nn\_zdfexp} $> 1$) is provided as an alternative.
+Namelist variables \np{ln\_zdfexp} and \np{nn\_zdfexp} apply to both tracers and dynamics.
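The implicit step described above amounts to solving a tridiagonal system per water column. A 1-D sketch with the Thomas algorithm (uniform cell thickness and invented names, i.e. an assumption-laden illustration rather than the \mdl{trazdf} code):

```python
import numpy as np

def implicit_vdiff(t, a_v, e3, dt):
    """One backward-Euler vertical diffusion step for a tracer column.
    a_v holds interface diffusivities (a_v[0] and a_v[-1] unused: no-flux
    surface and bottom boundaries); e3 is a uniform cell thickness."""
    nk = t.size
    c = np.zeros(nk + 1)                     # interface couplings,
    c[1:-1] = dt * a_v[1:-1] / (e3 * e3)     # zero at surface and bottom
    diag = 1.0 + c[:-1] + c[1:]              # main diagonal of (I - dt*D)
    off = -c[1:-1]                           # symmetric off-diagonal
    d = diag.copy()
    r = t.astype(float).copy()
    for k in range(1, nk):                   # Thomas: forward elimination
        m = off[k - 1] / d[k - 1]
        d[k] -= m * off[k - 1]
        r[k] -= m * r[k - 1]
    out = np.empty(nk)                       # back substitution
    out[-1] = r[-1] / d[-1]
    for k in range(nk - 2, -1, -1):
        out[k] = (r[k] - off[k] * out[k + 1]) / d[k]
    return out
```

Unlike the explicit scheme, the solve remains stable and conservative for time steps far beyond the explicit diffusion limit, which is why the implicit formulation is the default.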
% ================================================================
@@ -695,35 +693,34 @@
\label{subsec:TRA_sbc}
-The surface boundary condition for tracers is implemented in a separate
-module (\mdl{trasbc}) instead of entering as a boundary condition on the vertical
-diffusion operator (as in the case of momentum). This has been found to
-enhance readability of the code. The two formulations are completely
-equivalent; the forcing terms in trasbc are the surface fluxes divided by
-the thickness of the top model layer.
-
-Due to interactions and mass exchange of water ($F_{mass}$) with other Earth system components
-($i.e.$ atmosphere, seaice, land), the change in the heat and salt content of the surface layer
-of the ocean is due both to the heat and salt fluxes crossing the sea surface (not linked with $F_{mass}$)
-and to the heat and salt content of the mass exchange. They are both included directly in $Q_{ns}$,
-the surface heat flux, and $F_{salt}$, the surface salt flux (see \autoref{chap:SBC} for further details).
+The surface boundary condition for tracers is implemented in a separate module (\mdl{trasbc}) instead of
+entering as a boundary condition on the vertical diffusion operator (as in the case of momentum).
+This has been found to enhance readability of the code.
+The two formulations are completely equivalent;
+the forcing terms in trasbc are the surface fluxes divided by the thickness of the top model layer.
+
+Due to interactions and mass exchange of water ($F_{mass}$) with other Earth system components
+($i.e.$ atmosphere, seaice, land), the change in the heat and salt content of the surface layer of the ocean is due
+both to the heat and salt fluxes crossing the sea surface (not linked with $F_{mass}$) and
+to the heat and salt content of the mass exchange.
+They are both included directly in $Q_{ns}$, the surface heat flux,
+and $F_{salt}$, the surface salt flux (see \autoref{chap:SBC} for further details).
By doing this, the forcing formulation is the same for any tracer (including temperature and salinity).
+The surface module (\mdl{sbcmod}, see \autoref{chap:SBC}) provides the following forcing fields (used on tracers):
+
+$\bullet$ $Q_{ns}$, the non-solar part of the net surface heat flux that crosses the sea surface
+($i.e.$ the difference between the total surface heat flux and the fraction of the short wave flux that
+penetrates into the water column, see \autoref{subsec:TRA_qsr})
+plus the heat content associated with the mass exchange with the atmosphere and land.
$\bullet$ $\textit{sfx}$, the salt flux resulting from ice-ocean mass exchange (freezing, melting, ridging...)
+$\bullet$ \textit{emp}, the mass flux exchanged with the atmosphere (evaporation minus precipitation) and
+possibly with the sea-ice and ice shelves.
+
+$\bullet$ \textit{rnf}, the mass flux associated with runoff
(see \autoref{sec:SBC_rnf} for further detail of how it acts on temperature and salinity tendencies)
+
+$\bullet$ \textit{fwfisf}, the mass flux associated with ice shelf melt
(see \autoref{sec:SBC_isf} for further details on how the ice shelf melt is computed and applied).
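The statement that the \mdl{trasbc} forcing terms are "the surface fluxes divided by the thickness of the top model layer" can be sketched numerically (illustrative Python, not NEMO's Fortran; the reference density and heat capacity are assumed round values):

```python
# Sketch (not NEMO code) of the trasbc idea: instead of imposing the surface
# flux as a boundary condition on the vertical diffusion operator, add it as
# a source term equal to the flux divided by the top-layer thickness e3t.

RHO0 = 1026.0   # reference seawater density [kg/m3] (assumed value)
CP   = 3991.0   # seawater specific heat capacity [J/kg/K] (assumed value)

def sst_tendency(qns_wm2: float, e3t_top_m: float) -> float:
    """Temperature tendency [K/s] of the top layer from the non-solar flux."""
    return qns_wm2 / (RHO0 * CP * e3t_top_m)

# A 200 W/m2 surface cooling acting on a 1 m thick top level:
dTdt = sst_tendency(-200.0, 1.0)   # ~ -4.9e-5 K/s, i.e. roughly -4.2 K/day
```

The same tendency applied over a thicker top level is proportionally weaker, which is exactly why the flux is divided by $e_{3t}$.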
@@ -735,13 +732,12 @@
\end{aligned}
\end{equation}
+where $\overline{x}^t$ means that $x$ is averaged over two consecutive time steps ($t-\rdt/2$ and $t+\rdt/2$).
+Such time averaging prevents the divergence of odd and even time steps (see \autoref{chap:STP}).
+
+In the linear free surface case (\np{ln\_linssh}\forcode{ = .true.}), an additional term has to be added to
+both temperature and salinity.
+On temperature, this term removes the heat content associated with mass exchange that has been added to $Q_{ns}$.
+On salinity, this term mimics the concentration/dilution effect that would have resulted from a change in
+the volume of the first level.
The resulting surface boundary condition is applied as follows:
\begin{equation} \label{eq:tra_sbc_lin}
@@ -754,7 +750,7 @@
\end{aligned}
\end{equation}
+Note that an exact conservation of heat and salt content is only achieved with the non-linear free surface.
+In the linear free surface case, there is a small imbalance.
+The imbalance is larger than the one associated with the Asselin time filter \citep{Leclair_Madec_OM09}.
This is the reason why the modified filter is not applied in the linear free surface case (see \autoref{chap:STP}).
@@ -770,10 +766,9 @@
Options are defined through the \ngn{namtra\_qsr} namelist variables.
+When the penetrative solar radiation option is used (\np{ln\_flxqsr}\forcode{ = .true.}),
+the solar radiation penetrates the top few tens of meters of the ocean.
+If it is not used (\np{ln\_flxqsr}\forcode{ = .false.}), all the heat flux is absorbed in the first ocean level.
+Thus, in the former case a term is added to the time evolution equation of temperature \autoref{eq:PE_tra_T} and
+the surface boundary condition is modified to take into account only the non-penetrative part of the surface
heat flux:
\begin{equation} \label{eq:PE_qsr}
@@ -783,6 +778,6 @@
\end{split}
\end{equation}
+where $Q_{sr}$ is the penetrative part of the surface heat flux ($i.e.$ the shortwave radiation) and
+$I$ is the downward irradiance ($\left. I \right|_{z=\eta}=Q_{sr}$).
The additional term in \autoref{eq:PE_qsr} is discretized as follows:
\begin{equation} \label{eq:tra_qsr}
@@ -790,82 +785,87 @@
\end{equation}
+The shortwave radiation, $Q_{sr}$, consists of energy distributed across a wide spectral range.
+The ocean is strongly absorbing for wavelengths longer than 700~nm and these wavelengths contribute to
+heating the upper few tens of centimetres.
+The fraction of $Q_{sr}$ that resides in these almost non-penetrative wavebands, $R$, is $\sim 58\%$
+(specified through the namelist parameter \np{rn\_abs}).
+It is assumed to penetrate the ocean with a decreasing exponential profile, with an e-folding depth scale, $\xi_0$,
of a few tens of centimetres (typically $\xi_0=0.35~m$ set as \np{rn\_si0} in the \ngn{namtra\_qsr} namelist).
+For shorter wavelengths (400-700~nm), the ocean is more transparent, and solar energy propagates to
+larger depths where it contributes to local heating.
+The way this second part of the solar energy penetrates into the ocean depends on which formulation is chosen.
+In the simple 2-waveband light penetration scheme (\np{ln\_qsr\_2bd}\forcode{ = .true.}),
+a chlorophyll-independent monochromatic formulation is chosen for the shorter wavelengths,
leading to the following expression \citep{Paulson1977}:
\begin{equation} \label{eq:traqsr_iradiance}
I(z) = Q_{sr} \left[ R\,e^{-z / \xi_0} + \left( 1-R \right) e^{-z / \xi_1} \right]
\end{equation}
+where $\xi_1$ is the second extinction length scale associated with the shorter wavelengths.
+It is usually chosen to be 23~m by setting the \np{rn\_si1} namelist parameter.
+The set of default values ($\xi_0$, $\xi_1$, $R$) corresponds to a Type I water in Jerlov's (1968) classification
+(oligotrophic waters).
+
+Such assumptions have been shown to provide a very crude and simplistic representation of
+observed light penetration profiles (\cite{Morel_JGR88}, see also \autoref{fig:traqsr_irradiance}).
+Light absorption in the ocean depends on particle concentration and is spectrally selective.
+\cite{Morel_JGR88} has shown that an accurate representation of light penetration can be provided by
+a 61 waveband formulation.
+Unfortunately, such a model is very computationally expensive.
+Thus, \cite{Lengaigne_al_CD07} have constructed a simplified version of this formulation in which
+visible light is split into three wavebands: blue (400-500~nm), green (500-600~nm) and red (600-700~nm).
+For each waveband, the chlorophyll-dependent attenuation coefficient is fitted to the coefficients computed from
+the full spectral model of \cite{Morel_JGR88} (as modified by \cite{Morel_Maritorena_JGR01}),
+assuming the same power-law relationship.
+As shown in \autoref{fig:traqsr_irradiance}, this formulation, called RGB (Red-Green-Blue),
+reproduces quite closely the light penetration profiles predicted by the full spectral model,
+but with much greater computational efficiency.
+The 2-waveband formulation does not reproduce the full model very well.
+
+The RGB formulation is used when \np{ln\_qsr\_rgb}\forcode{ = .true.}.
+The RGB attenuation coefficients ($i.e.$ the inverses of the extinction length scales) are tabulated over
+61 non-uniform chlorophyll classes ranging from 0.01 to 10 g.Chl/L
+(see the routine \rou{trc\_oce\_rgb} in the \mdl{trc\_oce} module).
+Four types of chlorophyll can be chosen in the RGB formulation:
\begin{description}
+\item[\np{nn\_chldta}\forcode{ = 0}]
+  a constant 0.05 g.Chl/L value everywhere;
+\item[\np{nn\_chldta}\forcode{ = 1}]
+  an observed time-varying chlorophyll deduced from satellite surface ocean colour measurements spread uniformly in
+  the vertical direction;
+\item[\np{nn\_chldta}\forcode{ = 2}]
+  same as the previous case except that a vertical profile of chlorophyll is used.
+  Following \cite{Morel_Berthon_LO89}, the profile is computed from the local surface chlorophyll value;
+\item[\np{ln\_qsr\_bio}\forcode{ = .true.}]
+  simulated time-varying chlorophyll by the TOP biogeochemical model.
+  In this case, the RGB formulation is used to calculate both the phytoplankton light limitation in
+  PISCES or LOBSTER and the oceanic heating rate.
\end{description}
+The trend in \autoref{eq:tra_qsr} associated with the penetration of the solar radiation is added to
+the temperature trend, and the surface heat flux is modified in routine \mdl{traqsr}.
+
+When the $z$-coordinate is preferred to the $s$-coordinate,
+the depth of $w$-levels does not significantly vary with location.
+The level at which the light has been totally absorbed
+($i.e.$ it is less than the computer precision) is computed once,
+and the trend associated with the penetration of the solar radiation is only added down to that level.
+Finally, note that when the ocean is shallow ($<$ 200~m), part of the solar radiation can reach the ocean floor.
+In this case, we have chosen that all remaining radiation is absorbed in the last ocean level
+($i.e.$ $I$ is masked).
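The simple 2-waveband profile \autoref{eq:traqsr_iradiance} can be sketched numerically (illustrative Python, not NEMO's Fortran; here $z$ is taken as depth, positive downward, for simplicity, and the Jerlov Type I defaults from the text are assumed):

```python
# Sketch (not NEMO code) of the Paulson and Simpson (1977) two-waveband
# irradiance profile: I(z) = Qsr * (R*exp(-z/xi0) + (1-R)*exp(-z/xi1)),
# with z the depth in metres, positive downward (sign convention assumed here).
import math

R_ABS = 0.58   # rn_abs: fraction in the almost non-penetrative (red/IR) band
XI0   = 0.35   # rn_si0: e-folding depth of the red/IR band [m]
XI1   = 23.0   # e-folding depth of the penetrative band [m] (Jerlov Type I)

def irradiance(qsr: float, z: float) -> float:
    """Downward irradiance [W/m2] at depth z for a surface solar flux qsr."""
    return qsr * (R_ABS * math.exp(-z / XI0) + (1.0 - R_ABS) * math.exp(-z / XI1))

# At the surface the full Qsr penetrates; below a few metres the red/IR band
# is exhausted and only the blue/green fraction (1-R) remains.
frac_10m = irradiance(200.0, 10.0) / 200.0   # ~ (1-R)*exp(-10/23) ~ 0.27
```

This illustrates why $\sim 58\%$ of the flux heats only the top tens of centimetres while the remainder reaches tens of metres.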
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+\begin{figure}[!t]
+ \begin{center}
+ \includegraphics[width=1.0\textwidth]{Fig_TRA_Irradiance}
+ \caption{ \protect\label{fig:traqsr_irradiance}
+ Penetration profile of the downward solar irradiance calculated by four models.
+ Two-waveband chlorophyll-independent formulation (blue),
+ a chlorophyll-dependent monochromatic formulation (green),
+ 4 waveband RGB formulation (red),
+ 61 waveband Morel (1988) formulation (black) for a chlorophyll concentration of
+ (a) Chl=0.05 mg/m$^3$ and (b) Chl=0.5 mg/m$^3$.
+ From \citet{Lengaigne_al_CD07}.
+ }
+ \end{center}
+\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -880,32 +880,30 @@
%
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+\begin{figure}[!t]
+ \begin{center}
+ \includegraphics[width=1.0\textwidth]{Fig_TRA_geoth}
+ \caption{ \protect\label{fig:geothermal}
+ Geothermal heat flux (in $mW.m^{-2}$) used by \cite{EmileGeay_Madec_OS09}.
+ It is inferred from the age of the sea floor and the formulae of \citet{Stein_Stein_Nat92}.
+ }
+ \end{center}
+\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+Usually it is assumed that there is no exchange of heat or salt through the ocean bottom,
+$i.e.$ a no flux boundary condition is applied on active tracers at the bottom.
+This is the default option in \NEMO, and it is implemented using the masking technique.
+However, there is a non-zero heat flux across the seafloor that is associated with solid earth cooling.
+This flux is weak compared to surface fluxes (a mean global value of $\sim0.1\;W/m^2$ \citep{Stein_Stein_Nat92}),
+but it systematically warms the ocean and acts on the densest water masses.
+Taking this flux into account in a global ocean model increases the deepest overturning cell
+($i.e.$ the one associated with the Antarctic Bottom Water) by a few Sverdrups \citep{EmileGeay_Madec_OS09}.
Options are defined through the \ngn{namtra\_bbc} namelist variables.
+The presence of geothermal heating is controlled by setting the namelist parameter \np{ln\_trabbc} to true.
+Then, when \np{nn\_geoflx} is set to 1, a constant geothermal heating is introduced whose value is given by
+\np{nn\_geoflx\_cst}, which is also a namelist parameter.
+When \np{nn\_geoflx} is set to 2, a spatially varying geothermal heat flux is introduced, which is provided in
+the \ifile{geothermal\_heating} NetCDF file (\autoref{fig:geothermal}) \citep{EmileGeay_Madec_OS09}.
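The magnitude of the effect can be illustrated with a back-of-the-envelope sketch (illustrative Python, not NEMO's Fortran; the heat capacity and bottom-cell thickness are assumed values): like the surface forcing, the bottom flux divided by the thickness of the deepest wet cell gives a temperature tendency.

```python
# Sketch (not NEMO code): a geothermal heat flux warms the deepest wet cell,
# analogous in spirit to ln_trabbc with a constant flux (nn_geoflx=1).

RHO0_CP = 1026.0 * 3991.0   # volumetric heat capacity [J/m3/K] (assumed values)

def bottom_temp_tendency(geo_flux_wm2: float, e3t_bot_m: float) -> float:
    """Temperature tendency [K/s] of the bottom cell from geothermal heating."""
    return geo_flux_wm2 / (RHO0_CP * e3t_bot_m)

# The mean flux of ~0.1 W/m2 into an assumed 200 m thick bottom level warms
# it by roughly 0.4 K per century: weak compared to surface fluxes, but
# systematic, and it acts directly on the densest water masses.
per_century = bottom_temp_tendency(0.1, 200.0) * 3600 * 24 * 365.25 * 100
```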
% ================================================================
@@ -920,28 +918,24 @@
Options are defined through the \ngn{nambbl} namelist variables.
+In a $z$-coordinate configuration, the bottom topography is represented by a series of discrete steps.
+This is not adequate to represent gravity-driven downslope flows.
+Such flows arise either downstream of sills such as the Strait of Gibraltar or Denmark Strait,
+where dense water formed in marginal seas flows into a basin filled with less dense water,
+or along the continental slope when dense water masses are formed on a continental shelf.
+The amount of entrainment that occurs in these gravity plumes is critical in determining the density and
+volume flux of the densest waters of the ocean, such as Antarctic Bottom Water, or North Atlantic Deep Water.
+$z$-coordinate models tend to overestimate the entrainment,
+because the gravity flow is mixed vertically by convection as it goes ``downstairs'' following the step topography,
+sometimes over a thickness much larger than the thickness of the observed gravity plume.
+A similar problem occurs in the $s$-coordinate when the thickness of the bottom level varies rapidly downstream of
+a sill \citep{Willebrand_al_PO01}, and the thickness of the plume is not resolved.
+
+The idea of the bottom boundary layer (BBL) parameterisation, first introduced by \citet{Beckmann_Doscher1997},
+is to allow a direct communication between two adjacent bottom cells at different levels,
+whenever the densest water is located above the less dense water.
+The communication can be by a diffusive flux (diffusive BBL), an advective flux (advective BBL), or both.
+In the current implementation of the BBL, only the tracers are modified, not the velocities.
+Furthermore, it only connects ocean bottom cells, and therefore does not include all the improvements introduced by
+\citet{Campin_Goosse_Tel99}.
% 
@@ -951,12 +945,13 @@
\label{subsec:TRA_bbl_diff}
+When applying sigma-diffusion (\key{trabbl} defined and \np{nn\_bbl\_ldf} set to 1),
the diffusive flux between two adjacent cells at the ocean floor is given by
\begin{equation} \label{eq:tra_bbl_diff}
{\rm {\bf F}}_\sigma = -A_l^\sigma \; \nabla_\sigma T
\end{equation}
+with $\nabla_\sigma$ the lateral gradient operator taken between bottom cells,
+and $A_l^\sigma$ the lateral diffusivity in the BBL.
+Following \citet{Beckmann_Doscher1997}, the latter is prescribed with a spatial dependence,
+$i.e.$ in the conditional form
\begin{equation} \label{eq:tra_bbl_coef}
A_l^\sigma (i,j,t)=\left\{ {\begin{array}{l}
@@ -966,17 +961,16 @@
\end{array}} \right.
\end{equation}
+where $A_{bbl}$ is the BBL diffusivity coefficient, given by the namelist parameter \np{rn\_ahtbbl} and
+usually set to a value much larger than the one used for lateral mixing in the open ocean.
+The constraint in \autoref{eq:tra_bbl_coef} implies that sigmalike diffusion only occurs when
+the density above the sea floor, at the top of the slope, is larger than in the deeper ocean
+(see green arrow in \autoref{fig:bbl}).
+In practice, this constraint is applied separately in the two horizontal directions,
and the density gradient in \autoref{eq:tra_bbl_coef} is evaluated with the log gradient formulation:
\begin{equation} \label{eq:tra_bbl_Drho}
\nabla_\sigma \rho / \rho = -\alpha \,\nabla_\sigma T + \beta \,\nabla_\sigma S
\end{equation}
+where $\rho$, $\alpha$ and $\beta$ are functions of $\overline{T}^\sigma$,
+$\overline{S}^\sigma$ and $\overline{H}^\sigma$, the along-bottom mean temperature, salinity and depth, respectively.
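The conditional form of the BBL diffusivity can be sketched as follows (illustrative Python, not NEMO's Fortran; the diffusivity value is an assumed placeholder for \np{rn\_ahtbbl}): sigma-diffusion is switched on only where the along-slope density gradient is destabilising, $i.e.$ $\nabla_\sigma \rho \cdot \nabla H < 0$.

```python
# Sketch (not NEMO code) of the conditional BBL diffusivity of
# Beckmann and Doscher (1997): sigma-diffusion acts only where denser water
# at the top of the slope overlies lighter water in the deeper ocean.

A_BBL = 1000.0  # rn_ahtbbl: BBL lateral diffusivity [m2/s] (assumed value)

def bbl_diffusivity(grad_sigma_rho: float, grad_H: float) -> float:
    """A_l^sigma: equal to A_BBL where the arrangement is destabilising
    (grad_sigma_rho * grad_H < 0), and zero otherwise."""
    return A_BBL if grad_sigma_rho * grad_H < 0.0 else 0.0

# Depth increasing with i (grad_H > 0) and density decreasing with i
# (dense water up-slope) activates the BBL; the stable case does not.
active   = bbl_diffusivity(-0.01, 5.0)   # 1000.0
inactive = bbl_diffusivity(+0.01, 5.0)   # 0.0
```

In the model this test is applied separately in the two horizontal directions, as the text notes.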
% 
@@ -990,16 +984,17 @@
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
+\begin{figure}[!t]
+ \begin{center}
+ \includegraphics[width=0.7\textwidth]{Fig_BBL_adv}
+ \caption{ \protect\label{fig:bbl}
+ Advective/diffusive Bottom Boundary Layer.
+ The BBL parameterisation is activated when $\rho^i_{kup}$ is larger than $\rho^{i+1}_{kdwn}$.
+ Red arrows indicate the additional overturning circulation due to the advective BBL.
+ The transport of the downslope flow is defined either as the transport of the bottom ocean cell (black arrow),
+ or as a function of the along slope density gradient.
+ The green arrow indicates the diffusive BBL flux directly connecting $kup$ and $kdwn$ ocean bottom cells.
+ }
+ \end{center}
+\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -1011,44 +1006,41 @@
%%%gmcomment : this section has to be really written
+When applying an advective BBL (\np{nn\_bbl\_adv}\forcode{ = 1..2}), an overturning circulation is added which
+connects two adjacent bottom gridpoints only if dense water overlies less dense water on the slope.
+The density difference causes dense water to move down the slope.
+
+\np{nn\_bbl\_adv}\forcode{ = 1}:
+the downslope velocity is chosen to be the Eulerian ocean velocity just above the topographic step
+(see black arrow in \autoref{fig:bbl}) \citep{Beckmann_Doscher1997}.
+It is a \textit{conditional advection}, that is, advection is allowed only
+if dense water overlies less dense water on the slope ($i.e.$ $\nabla_\sigma \rho \cdot \nabla H<0$) and
+if the velocity is directed towards greater depth ($i.e.$ $\vect{U} \cdot \nabla H>0$).
+
+\np{nn\_bbl\_adv}\forcode{ = 2}:
+the downslope velocity is chosen to be proportional to $\Delta \rho$,
the density difference between the higher cell and lower cell densities \citep{Campin_Goosse_Tel99}.
+The advection is allowed only if dense water overlies less dense water on the slope
+($i.e.$ $\nabla_\sigma \rho \cdot \nabla H<0$).
+For example, the resulting transport of the downslope flow, here in the $i$direction (\autoref{fig:bbl}),
+is simply given by the following expression:
\begin{equation} \label{eq:bbl_Utr}
u^{tr}_{bbl} = \gamma \, g \frac{\Delta \rho}{\rho_o} \, e_{1u} \; \min \left( {e_{3u}}_{kup},{e_{3u}}_{kdwn} \right)
\end{equation}
+where $\gamma$, expressed in seconds, is the coefficient of proportionality provided as \np{rn\_gambbl},
+a namelist parameter, and \textit{kup} and \textit{kdwn} are the vertical indices of the higher and lower cells,
+respectively.
+The parameter $\gamma$ should take a different value for each bathymetric step, but for simplicity,
+and because no direct estimation of this parameter is available, a uniform value has been assumed.
+The possible values for $\gamma$ range between 1 and $10~s$ \citep{Campin_Goosse_Tel99}.
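A minimal numerical illustration of \autoref{eq:bbl_Utr} in Python (all values below are hypothetical, chosen only to show the orders of magnitude involved; they do not come from a real configuration):

```python
# Sketch of the downslope BBL transport, eq. (bbl_Utr):
#   u_bbl = gamma * g * (drho / rho0) * e1u * min(e3u_kup, e3u_kdwn)
g     = 9.81          # gravity [m/s2]
rho0  = 1026.0        # reference density [kg/m3]
gamma = 10.0          # proportionality coefficient rn_gambbl [s]
drho  = 0.1           # density contrast between kup and kdwn cells [kg/m3]
e1u   = 1.0e5         # horizontal scale factor at the U-point [m]
e3u_kup, e3u_kdwn = 10.0, 120.0   # vertical scale factors [m]

# transport is limited by the thinner of the two adjacent cells
u_tr_bbl = gamma * g * (drho / rho0) * e1u * min(e3u_kup, e3u_kdwn)
print(u_tr_bbl)  # volume transport [m3/s]
```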

Scalar properties are advected by this additional transport $( u^{tr}_{bbl}, v^{tr}_{bbl} )$ using the upwind scheme.
Such a diffusive advective scheme has been chosen to mimic the entrainment between the downslope plume and
the surrounding water at intermediate depths.
The entrainment is replaced by the vertical mixing implicit in the advection scheme.
Let us consider as an example the case displayed in \autoref{fig:bbl} where
the density at level $(i,kup)$ is larger than the one at level $(i,kdwn)$.
The advective BBL scheme modifies the tracer time tendency of the ocean cells near the topographic step by
the downslope flow \autoref{eq:bbl_dw}, the horizontal \autoref{eq:bbl_hor} and
the upward \autoref{eq:bbl_up} return flows as follows:
\begin{align}
\partial_t T^{do}_{kdw} &\equiv \partial_t T^{do}_{kdw}
where $b_t$ is the $T$-cell volume.
Note that the BBL transport, $( u^{tr}_{bbl}, v^{tr}_{bbl} )$, is available in the model outputs.
It has to be used to compute the effective velocity as well as the effective overturning circulation.
% ================================================================
%
In some applications it can be useful to add a Newtonian damping term into the temperature and salinity equations:
\begin{equation} \label{eq:tra_dmp}
\begin{split}
\frac{\partial T}{\partial t}=\;\cdots \;-\gamma \,\left( {T-T_o } \right) \\
\frac{\partial S}{\partial t}=\;\cdots \;-\gamma \,\left( {S-S_o } \right)
\end{split}
\end{equation}
where $\gamma$ is the inverse of a time scale, and $T_o$ and $S_o$ are given temperature and salinity fields
(usually a climatology).
Options are defined through the \ngn{namtra\_dmp} namelist variables.
The restoring term is added when the namelist parameter \np{ln\_tradmp} is set to true.
It also requires that both \np{ln\_tsd\_init} and \np{ln\_tsd\_tradmp} are set to true in
the \textit{namtsd} namelist, and that the \np{sn\_tem} and \np{sn\_sal} structures are correctly set
($i.e.$ that $T_o$ and $S_o$ are provided in input files and read using \mdl{fldread},
see \autoref{subsec:SBC_fldread}).
The restoring coefficient $\gamma$ is a three-dimensional array read in during the \rou{tra\_dmp\_init} routine.
The file name is specified by the namelist variable \np{cn\_resto}.
The DMP\_TOOLS tool is provided to allow users to generate the netcdf file.

The two main cases in which \autoref{eq:tra_dmp} is used are
\textit{(a)} the specification of the boundary conditions along artificial walls of a limited domain basin and
\textit{(b)} the computation of the velocity field associated with a given $T$-$S$ field
(for example to build the initial state of a prognostic simulation,
or to use the resulting velocity field for a passive tracer study).
The first case applies to regional models that have artificial walls instead of open boundaries.
In the vicinity of these walls, $\gamma$ takes large values (equivalent to a time scale of a few days) whereas
it is zero in the interior of the model domain.
The second case corresponds to the use of the robust diagnostic method \citep{Sarmiento1982}.
It allows us to find the velocity field consistent with the model dynamics whilst
having a $T$, $S$ field close to a given climatological field ($T_o$, $S_o$).

The robust diagnostic method is very efficient in preventing temperature drift in intermediate waters but
it produces artificial sources of heat and salt within the ocean.
It also has undesirable effects on the ocean convection.
It tends to prevent deep convection and subsequent deep-water formation, by stabilising the water column too much.

The namelist parameter \np{nn\_zdmp} sets whether the damping should be applied in the whole water column or
only below the mixed layer (defined either on a density or $S_o$ criterion).
It is common to set the damping to zero in the mixed layer as the adjustment time scale is short here
\citep{Madec_al_JPO96}.
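A minimal numerical illustration of \autoref{eq:tra_dmp} for temperature, using a simple explicit update (the time step, time scale and temperatures below are hypothetical):

```python
# One forward-Euler step of Newtonian damping toward a climatology,
# eq. (tra_dmp): dT/dt = ... - gamma * (T - T_o).  Illustrative values only.
dt     = 3600.0                  # time step [s]
gamma  = 1.0 / (5.0 * 86400.0)   # inverse restoring time scale (5 days) [1/s]
T, T_o = 20.0, 18.0              # model and climatological temperature [degC]

dTdt  = -gamma * (T - T_o)       # damping tendency, pulls T toward T_o
T_new = T + dt * dTdt
print(T_new)                     # slightly closer to T_o than T was
```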
\subsection{Generating \ifile{resto} using DMP\_TOOLS}
DMP\_TOOLS can be used to generate a netcdf file containing the restoration coefficient $\gamma$.
Note that in order to maintain bit comparability with previous NEMO versions, DMP\_TOOLS must be compiled and
run on the same machine as the NEMO model.
A \ifile{mesh\_mask} file for the model configuration is required as an input.
This can be generated by carrying out a short model run with the namelist parameter \np{nn\_msh} set to 1.
The namelist parameter \np{ln\_tradmp} will also need to be set to .false. for this to work.
The \nl{nam\_dmp\_create} namelist in the DMP\_TOOLS directory is used to specify options for
the restoration coefficient.
%nam_dmp_create
%
\np{cp\_cfg}, \np{cp\_cpz}, \np{jp\_cfg} and \np{jperio} specify the model configuration being used and
should be the same as specified in \nl{namcfg}.
The variable \nl{lzoom} is used to specify that the damping is being used as in case \textit{a} above to
provide boundary conditions to a zoom configuration.
In the case of the Arctic or Antarctic zoom configurations this includes some specific treatment.
Otherwise damping is applied to the 6 grid points along the ocean boundaries.
The open boundaries are specified by the variables \np{lzoom\_n}, \np{lzoom\_e}, \np{lzoom\_s}, \np{lzoom\_w} in
the \nl{nam\_zoom\_dmp} namelist.

The remaining switch namelist variables determine the spatial variation of the restoration coefficient in
non-zoom configurations.
\np{ln\_full\_field} specifies that Newtonian damping should be applied to the whole model domain.
\np{ln\_med\_red\_seas} specifies grid-specific restoration coefficients in the Mediterranean Sea for
the ORCA4, ORCA2 and ORCA05 configurations.
If \np{ln\_old\_31\_lev\_code} is set then the depth variation of the coefficients will be specified as
a function of the model level number.
This option is included to allow backwards compatibility of the ORCA2 reference configurations with
previous model versions.
\np{ln\_coast} specifies that the restoration coefficient should be reduced near coastlines.
This option only has an effect if \np{ln\_full\_field} is true.
\np{ln\_zero\_top\_layer} specifies that the restoration coefficient should be zero in the surface layer.
Finally \np{ln\_custom} specifies that the custom module will be called.
This module is contained in the file \mdl{custom} and can be edited by users.
For example damping could be applied in a specific region.

The restoration coefficient can be set to zero in equatorial regions by
specifying a positive value of \np{nn\_hdmp}.
Equatorward of this latitude the restoration coefficient will be zero with a smooth transition to
the full values over a 10\deg latitude band.
This is often used because of the short adjustment time scale in the equatorial region
\citep{Reverdin1991, Fujio1991, Marti_PhD92}.
The time scale associated with the damping depends on the depth as a hyperbolic tangent,
with \np{rn\_surf} as surface value, \np{rn\_bot} as bottom value and a transition depth of \np{rn\_dep}.
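A sketch of such a depth profile in Python (the exact expression used by DMP\_TOOLS may differ; the transition width and the parameter values below are purely illustrative):

```python
import math

# Illustrative hyperbolic-tangent transition of the restoring time scale
# between rn_surf (surface) and rn_bot (bottom) around the depth rn_dep.
rn_surf = 50.0    # surface restoring time scale [days]
rn_bot  = 360.0   # bottom restoring time scale [days]
rn_dep  = 800.0   # transition depth [m]

def resto_tscale(z, width=200.0):
    """Restoring time scale [days] at depth z [m]; `width` is illustrative."""
    w = 0.5 * (1.0 + math.tanh((z - rn_dep) / width))
    return rn_surf + (rn_bot - rn_surf) * w

print(resto_tscale(0.0), resto_tscale(800.0), resto_tscale(5000.0))
```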
% ================================================================
Options are defined through the \ngn{namdom} namelist variables.
The general framework for tracer time stepping is a modified leapfrog scheme \citep{Leclair_Madec_OM09},
$i.e.$ a three level centred time scheme associated with an Asselin time filter (cf. \autoref{sec:STP_mLF}):
\begin{equation} \label{eq:tra_nxt}
\begin{aligned}
(e_{3t} T)^{t+\rdt} &= (e_{3t} T)_f^{\,t-\rdt} + 2\,\rdt \; e_{3t}^t \; \text{RHS}^t \\
(e_{3t} T)_f^{\,t} &= (e_{3t} T)^t
+ \gamma \,\left[ (e_{3t} T)_f^{\,t-\rdt} - 2\,(e_{3t} T)^t + (e_{3t} T)^{t+\rdt} \right] \\
&\quad - \gamma \,\rdt \,\left[ S^{t+\rdt/2} - S^{t-\rdt/2} \right]
\end{aligned}
\end{equation}
where RHS is the right hand side of the temperature equation, the subscript $f$ denotes filtered values,
$\gamma$ is the Asselin coefficient, and $S$ is the total forcing applied on $T$
($i.e.$ fluxes plus content in mass exchanges).
$\gamma$ is initialized as \np{rn\_atfp} (\textbf{namelist} parameter).
Its default value is \np{rn\_atfp}\forcode{ = 10.e-3}.
Note that the forcing correction term in the filter is not applied in linear free surface
(\jp{lk\_vvl}\forcode{ = .false.}) (see \autoref{subsec:TRA_sbc}).
Note also that in the constant volume case, the time stepping is performed on $T$, not on its content, $e_{3t}T$.

When the vertical mixing is solved implicitly,
the update of the \textit{next} tracer fields is done in module \mdl{trazdf}.
In this case only the swapping of arrays and the Asselin filtering are done in the \mdl{tranxt} module.

In order to prepare for the computation of the \textit{next} time step, a swap of tracer arrays is performed:
$T^{t-\rdt} = T^t$ and $T^t = T_f$.
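The scheme can be sketched in Python for a single tracer value, ignoring the volume factors $e_{3t}$ and the forcing correction term (names and values are illustrative; the actual implementation lives in \mdl{tranxt} and \mdl{trazdf}):

```python
# One modified-leapfrog step with Asselin filtering for a scalar tracer:
#   centred step:  T(t+dt) = T(t-dt) + 2*dt*RHS(t)
#   Asselin:       T_f(t)  = T(t) + gamma*(T(t-dt) - 2*T(t) + T(t+dt))
def leapfrog_asselin(T_before, T_now, rhs, dt, gamma=1.0e-3):
    """Return (T_now_filtered, T_after) for one step; gamma mimics rn_atfp."""
    T_after = T_before + 2.0 * dt * rhs
    T_f = T_now + gamma * (T_before - 2.0 * T_now + T_after)
    return T_f, T_after

# Swap for the next step would then be: T_before <- T_f, T_now <- T_after.
T_f, T_after = leapfrog_asselin(10.0, 10.0, 1.0e-4, 3600.0)
print(T_f, T_after)
```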
% ================================================================
\label{subsec:TRA_eos}
The Equation Of Seawater (EOS) is an empirical nonlinear thermodynamic relationship linking seawater density,
$\rho$, to a number of state variables, most typically temperature, salinity and pressure.
Because density gradients control the pressure gradient force through the hydrostatic balance,
the equation of state provides a fundamental bridge between the distribution of active tracers and
the fluid dynamics.
Nonlinearities of the EOS are of major importance, in particular influencing the circulation through
determination of the static stability below the mixed layer,
thus controlling rates of exchange between the atmosphere and the ocean interior \citep{Roquet_JPO2015}.
Therefore an accurate EOS based on either the 1980 equation of state (EOS80, \cite{UNESCO1983}) or
TEOS10 \citep{TEOS10} standards should be used anytime a simulation of the real ocean circulation is attempted
\citep{Roquet_JPO2015}.
The use of TEOS10 is highly recommended because
\textit{(i)} it is the new official EOS,
\textit{(ii)} it is more accurate, being based on an updated database of laboratory measurements, and
\textit{(iii)} it uses Conservative Temperature and Absolute Salinity
(instead of potential temperature and practical salinity for EOS80), both variables being more suitable for
use as model variables \citep{TEOS10, Graham_McDougall_JPO13}.
EOS80 is an obsolescent feature of the NEMO system, kept only for backward compatibility.
For process studies, it is often convenient to use an approximation of the EOS.
To that purpose, a simplified EOS (SEOS) inspired by \citet{Vallis06} is also available.

In the computer code, a density anomaly, $d_a= \rho / \rho_o - 1$, is computed, with $\rho_o$ a reference density.
Called \textit{rau0} in the code, $\rho_o$ is set in \mdl{phycst} to a value of $1,026~Kg/m^3$.
This is a sensible choice for the reference density used in a Boussinesq ocean climate model, as,
with the exception of only a small percentage of the ocean,
density in the World Ocean varies by no more than 2$\%$ from that value \citep{Gill1982}.
Options are defined through the \ngn{nameos} namelist variables, and in particular \np{nn\_eos} which
controls the EOS used (\forcode{= -1} for TEOS10 ; \forcode{= 0} for EOS80 ; \forcode{= 1} for SEOS).
\begin{description}

\item[\np{nn\_eos}\forcode{ = -1}]
  the polyTEOS10-bsq equation of seawater \citep{Roquet_OM2015} is used.
  The accuracy of this approximation is comparable to the TEOS10 rational function approximation,
  but it is optimized for a Boussinesq fluid and the polynomial expressions have simpler and
  more computationally efficient expressions for their derived quantities which make them more adapted for
  use in ocean models.
  Note that a slightly higher precision polynomial form is now used in replacement of
  the TEOS10 rational function approximation for hydrographic data analysis \citep{TEOS10}.
  A key point is that conservative state variables are used:
  Absolute Salinity (unit: g/kg, notation: $S_A$) and Conservative Temperature (unit: \degC, notation: $\Theta$).
  The pressure in decibars is approximated by the depth in meters.
  With TEOS10, the specific heat capacity of sea water, $C_p$, is a constant.
  It is set to $C_p=3991.86795711963~J\,Kg^{-1}\,^{\circ}K^{-1}$, according to \citet{TEOS10}.

  Choosing polyTEOS10-bsq implies that the state variables used by the model are $\Theta$ and $S_A$.
  In particular, the initial state defined by the user has to be given as \textit{Conservative} Temperature and
  \textit{Absolute} Salinity.
  In addition, setting \np{ln\_useCT} to \forcode{.true.} converts the Conservative SST to potential SST prior to
  either computing the air-sea and ice-sea fluxes (forced mode) or
  sending the SST field to the atmosphere (coupled mode).

\item[\np{nn\_eos}\forcode{ = 0}]
  the polyEOS80-bsq equation of seawater is used.
  It takes the same polynomial form as the polyTEOS10, but the coefficients have been optimized to
  accurately fit EOS80 (Roquet, personal comm.).
  The state variables used in both the EOS80 and the ocean model are:
  the Practical Salinity (unit: psu, notation: $S_p$) and
  Potential Temperature (unit: $^{\circ}C$, notation: $\theta$).
  The pressure in decibars is approximated by the depth in meters.
  With this EOS, the specific heat capacity of sea water, $C_p$, is a function of temperature, salinity and
  pressure \citep{UNESCO1983}.
  Nevertheless, a severe assumption is made in order to have a heat content ($C_p T_p$) which
  is conserved by the model: $C_p$ is set to a constant value, the TEOS10 value.
\item[\np{nn\_eos}\forcode{ = 1}]
  a simplified EOS (SEOS) inspired by \citet{Vallis06} is chosen,
  the coefficients of which have been optimized to fit the behavior of TEOS10
  (Roquet, personal comm.) (see also \citet{Roquet_JPO2015}).
  It provides a simplistic linear representation of both cabbeling and thermobaricity effects which
  is enough for a proper treatment of the EOS in theoretical studies \citep{Roquet_JPO2015}.
  With such an equation of state there is no longer a distinction between
  \textit{conservative} and \textit{potential} temperature,
  as well as between \textit{absolute} and \textit{practical} salinity.
  SEOS takes the following expression:
  \begin{equation} \label{eq:tra_SEOS}
    \begin{split}
      d_a(T,S,z) = ( & - a_0 \; ( 1 + 0.5 \; \lambda_1 \; T_a + \mu_1 \; z ) * T_a \\
                     & + b_0 \; ( 1 - 0.5 \; \lambda_2 \; S_a - \mu_2 \; z ) * S_a \\
                     & - \nu \; T_a \; S_a \; ) \; / \; \rho_o \\
      with \ \ T_a = T - 10 \; ; & \; S_a = S - 35 \; ; \; \rho_o = 1026~Kg/m^3
    \end{split}
  \end{equation}
  where the computer name of the coefficients as well as their standard value are given in \autoref{tab:SEOS}.
  In fact, when choosing SEOS, various approximations of the EOS can be specified simply by
  changing the associated coefficients.
  Setting to zero the two thermobaric coefficients ($\mu_1$, $\mu_2$) removes the thermobaric effect from SEOS.
  Setting to zero the three cabbeling coefficients ($\lambda_1$, $\lambda_2$, $\nu$) removes
  the cabbeling effect from SEOS.
  Keeping non-zero values for $a_0$ and $b_0$ provides a linear EOS as a function of $T$ and $S$.
\end{description}
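As an illustration, \autoref{eq:tra_SEOS} can be transcribed directly into Python; the coefficient values below are those commonly quoted as SEOS defaults, but they should be checked against \autoref{tab:SEOS} and the \ngn{nameos} reference namelist:

```python
# Direct transcription of the simplified EOS (eq. tra_SEOS) for the density
# anomaly d_a = rho/rho0 - 1.  Coefficient values are assumed defaults.
rho0       = 1026.0
a0, b0     = 1.6550e-1, 7.6554e-1   # thermal / haline expansion
lam1, lam2 = 5.9520e-2, 5.4914e-4   # cabbeling in T^2 and S^2
mu1, mu2   = 1.4970e-4, 1.1090e-5   # thermobaric coefficients
nu         = 2.4341e-3              # cabbeling in T*S

def seos_da(T, S, z):
    """Density anomaly d_a(T, S, z) of the simplified EOS."""
    Ta, Sa = T - 10.0, S - 35.0
    return (-a0 * (1.0 + 0.5 * lam1 * Ta + mu1 * z) * Ta
            + b0 * (1.0 - 0.5 * lam2 * Sa - mu2 * z) * Sa
            - nu * Ta * Sa) / rho0

print(seos_da(10.0, 35.0, 0.0))  # anomaly vanishes at (T, S) = (10, 35), z = 0
```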
\end{tabular}
\caption{ \protect\label{tab:SEOS}
  Standard value of SEOS coefficients.
}
\end{center}
\end{table}
\label{subsec:TRA_bn2}
An accurate computation of the ocean stability (i.e. of $N$, the Brunt-V\"{a}is\"{a}l\"{a} frequency) is of
paramount importance as it determines the ocean stratification and is used in several ocean parameterisations
(namely TKE, GLS, Richardson number dependent vertical diffusion, enhanced vertical diffusion,
non-penetrative convection, tidal mixing parameterisation, isoneutral diffusion).
In particular, $N^2$ has to be computed at the local pressure
(pressure in decibar being approximated by the depth in meters).
The expression for $N^2$ is given by:
\begin{equation} \label{eq:tra_bn2}
N^2 = \frac{g}{e_{3w}} \left( \beta \;\delta_{k+1/2}[S] - \alpha \;\delta_{k+1/2}[T] \right)
\end{equation}
where $(T,S) = (\Theta, S_A)$ for TEOS10, $= (\theta, S_p)$ for EOS80, or $= (T,S)$ for SEOS,
and $\alpha$ and $\beta$ are the thermal and haline expansion coefficients.
The coefficients are a polynomial function of temperature, salinity and depth whose
expression depends on the chosen EOS.
They are computed through \textit{eos\_rab}, a \textsc{Fortran} function that can be found in \mdl{eosbn2}.
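With illustrative values for the expansion coefficients and the vertical differences, \autoref{eq:tra_bn2} can be evaluated as follows (all numbers are hypothetical; in the model, $\alpha$ and $\beta$ come from \textit{eos\_rab}):

```python
# Sketch of the discrete Brunt-Vaisala frequency, eq. (tra_bn2), at one w-point.
g           = 9.81      # gravity [m/s2]
e3w         = 10.0      # vertical scale factor at the w-point [m]
alpha, beta = 2.0e-4, 7.5e-4   # thermal / haline expansion coefficients
dT          = -0.5      # delta_{k+1/2}[T], illustrative vertical difference
dS          = 0.1       # delta_{k+1/2}[S], illustrative vertical difference

N2 = (g / e3w) * (beta * dS - alpha * dT)
print(N2)  # N2 > 0 with these differences: stable stratification
```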
% 
\end{equation}
\autoref{eq:tra_eos_fzp} is only used to compute the potential freezing point of sea water
($i.e.$ referenced to the surface $p=0$),
thus the pressure dependent terms in \autoref{eq:tra_eos_fzp} (last term) have been dropped.
The freezing point is computed through \textit{eos\_fzp},
a \textsc{Fortran} function that can be found in \mdl{eosbn2}.
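For illustration only, here is a Python sketch of a surface freezing-point polynomial of the classic EOS80 (UNESCO 1983) form in practical salinity; the actual expression evaluated by \textit{eos\_fzp} depends on the EOS selected by \np{nn\_eos}:

```python
# Surface (p = 0) freezing point of sea water, classic EOS80-style polynomial
# in practical salinity S [psu].  Shown as an assumed illustrative form only.
def fzp_surface(S):
    """Freezing point [degC] at the surface for practical salinity S."""
    return -0.0575 * S + 1.710523e-3 * S**1.5 - 2.154996e-4 * S**2

print(fzp_surface(35.0))  # close to -1.92 degC for typical open-ocean salinity
```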
\gmcomment{STEVEN: to be consistent with earlier discussion of differencing and averaging operators,
 I've changed "derivative" to "difference" and "mean" to "average"}

With partial cells (\np{ln\_zps}\forcode{ = .true.}) at bottom and top (\np{ln\_isfcav}\forcode{ = .true.}),
in general, tracers in horizontally adjacent cells live at different depths.
Horizontal gradients of tracers are needed for horizontal diffusion (\mdl{traldf} module) and
the hydrostatic pressure gradient calculations (\mdl{dynhpg} module).
The partial cell properties at the top (\np{ln\_isfcav}\forcode{ = .true.}) are computed in the same way as
for the bottom.
So, only the bottom interpolation is explained below.
Before taking horizontal gradients between the tracers next to the bottom, a linear
interpolation in the vertical is used to approximate the deeper tracer as if it actually
lived at the depth of the shallower tracer point (\autoref{fig:Partial_step_scheme}).
For example, for temperature in the $i$direction the needed interpolated
temperature, $\widetilde{T}$, is:
+Before taking horizontal gradients between the tracers next to the bottom,
+a linear interpolation in the vertical is used to approximate the deeper tracer as if
+it actually lived at the depth of the shallower tracer point (\autoref{fig:Partial_step_scheme}).
+For example, for temperature in the $i$-direction, the needed interpolated temperature, $\widetilde{T}$, is:
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!p] \begin{center}
\includegraphics[width=0.9\textwidth]{Partial_step_scheme}
\caption{ \protect\label{fig:Partial_step_scheme}
Discretisation of the horizontal difference and average of tracers in the $z$partial
step coordinate (\protect\np{ln\_zps}\forcode{ = .true.}) in the case $( e3w_k^{i+1}  e3w_k^i )>0$.
A linear interpolation is used to estimate $\widetilde{T}_k^{i+1}$, the tracer value
at the depth of the shallower tracer point of the two adjacent bottom $T$points.
The horizontal difference is then given by: $\delta _{i+1/2} T_k= \widetilde{T}_k^{\,i+1} T_k^{\,i}$
and the average by: $\overline{T}_k^{\,i+1/2}= ( \widetilde{T}_k^{\,i+1/2}  T_k^{\,i} ) / 2$. }
\end{center} \end{figure}
+\begin{figure}[!p]
+ \begin{center}
+ \includegraphics[width=0.9\textwidth]{Fig_partial_step_scheme}
+ \caption{ \protect\label{fig:Partial_step_scheme}
+ Discretisation of the horizontal difference and average of tracers in the $z$-partial step coordinate
+ (\protect\np{ln\_zps}\forcode{ = .true.}) in the case $( e3w_k^{i+1} - e3w_k^i )>0$.
+ A linear interpolation is used to estimate $\widetilde{T}_k^{i+1}$,
+ the tracer value at the depth of the shallower tracer point of the two adjacent bottom $T$-points.
+ The horizontal difference is then given by: $\delta _{i+1/2} T_k= \widetilde{T}_k^{\,i+1} -T_k^{\,i}$ and
+ the average by: $\overline{T}_k^{\,i+1/2}= ( \widetilde{T}_k^{\,i+1} + T_k^{\,i} ) / 2$.
+ }
+ \end{center}
+\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{equation*}
@@ 1416,6 +1425,5 @@
\end{aligned} \right.
\end{equation*}
and the resulting forms for the horizontal difference and the horizontal average
value of $T$ at a $U$point are:
+and the resulting forms for the horizontal difference and the horizontal average value of $T$ at a $U$-point are:
\begin{equation} \label{eq:zps_hde}
\begin{aligned}
@@ 1434,12 +1442,11 @@
\end{equation}
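The vertical interpolation step behind this expression can be sketched as follows (a minimal, hedged illustration with hypothetical names; the actual discrete form lives in \mdl{zpshde}):

```python
# Sketch of the idea behind the partial-step correction: linearly
# interpolate the deeper of two adjacent bottom tracers to the depth of
# the shallower tracer point.  Names (T_k, z_k, ...) are illustrative.
def interp_to_depth(T_k, T_km1, z_k, z_km1, z_target):
    """Linear-in-depth estimate of T at z_target from levels k-1 and k."""
    w = (z_target - z_km1) / (z_k - z_km1)
    return T_km1 + w * (T_k - T_km1)
```

Horizontal differences and averages are then formed with the interpolated value in place of the deeper tracer.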
The computation of horizontal derivative of tracers as well as of density is
performed once for all at each time step in \mdl{zpshde} module and stored
in shared arrays to be used when needed. It has to be emphasized that the
procedure used to compute the interpolated density, $\widetilde{\rho}$, is not
the same as that used for $T$ and $S$. Instead of forming a linear approximation
of density, we compute $\widetilde{\rho }$ from the interpolated values of $T$
and $S$, and the pressure at a $u$point (in the equation of state pressure is
approximated by depth, see \autoref{subsec:TRA_eos} ) :
+The computation of horizontal derivatives of tracers as well as of density is performed once for all at
+each time step in the \mdl{zpshde} module and stored in shared arrays to be used when needed.
+It has to be emphasized that the procedure used to compute the interpolated density, $\widetilde{\rho}$,
+is not the same as that used for $T$ and $S$.
+Instead of forming a linear approximation of density, we compute $\widetilde{\rho}$ from the interpolated values of
+$T$ and $S$, and the pressure at a $u$-point
+(in the equation of state pressure is approximated by depth, see \autoref{subsec:TRA_eos}):
\begin{equation} \label{eq:zps_hde_rho}
\widetilde{\rho } = \rho ( {\widetilde{T},\widetilde {S},z_u })
@@ 1447,18 +1454,18 @@
\end{equation}
This is a much better approximation as the variation of $\rho$ with depth (and
thus pressure) is highly nonlinear with a true equation of state and thus is badly
approximated with a linear interpolation. This approximation is used to compute
both the horizontal pressure gradient (\autoref{sec:DYN_hpg}) and the slopes of neutral
surfaces (\autoref{sec:LDF_slp})

Note that in almost all the advection schemes presented in this Chapter, both
averaging and differencing operators appear. Yet \autoref{eq:zps_hde} has not
been used in these schemes: in contrast to diffusion and pressure gradient
computations, no correction for partial steps is applied for advection. The main
motivation is to preserve the domain averaged mean variance of the advected
field when using the $2^{nd}$ order centred scheme. Sensitivity of the advection
schemes to the way horizontal averages are performed in the vicinity of partial
cells should be further investigated in the near future.
+This is a much better approximation as the variation of $\rho$ with depth (and thus pressure)
+is highly nonlinear with a true equation of state and thus is badly approximated with a linear interpolation.
+This approximation is used to compute both the horizontal pressure gradient (\autoref{sec:DYN_hpg}) and
+the slopes of neutral surfaces (\autoref{sec:LDF_slp}).
+
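The distinction drawn above can be made concrete with a short sketch; `toy_eos` below is a made-up nonlinear equation of state for illustration only, not any of the EOS options offered in \mdl{eosbn2}:

```python
# Sketch of the rho-tilde procedure: evaluate the equation of state at the
# interpolated T and S and the u-point depth, rather than interpolating
# density itself.  `toy_eos` is an invented nonlinear EOS for demonstration.
def toy_eos(T, S, z):
    return 1026.0 - 0.2 * T + 0.8 * S + 4.5e-3 * z - 1e-3 * T**2

def rho_tilde(T_i, S_i, z_u):
    # T_i, S_i: tracer values already interpolated to the common depth
    return toy_eos(T_i, S_i, z_u)
```

Because $\rho$ varies nonlinearly with $T$, $S$ and depth, evaluating the EOS at interpolated arguments differs from (and outperforms) a linear interpolation of density itself.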
+Note that in almost all the advection schemes presented in this Chapter,
+both averaging and differencing operators appear.
+Yet \autoref{eq:zps_hde} has not been used in these schemes:
+in contrast to diffusion and pressure gradient computations,
+no correction for partial steps is applied for advection.
+The main motivation is to preserve the domain averaged mean variance of the advected field when
+using the $2^{nd}$ order centred scheme.
+Sensitivity of the advection schemes to the way horizontal averages are performed in the vicinity of
+partial cells should be further investigated in the near future.
%%%
\gmcomment{gm : this last remark has to be done}
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_ZDF.tex
===================================================================
 NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_ZDF.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_ZDF.tex (revision 10368)
@@ 21,28 +21,26 @@
\label{sec:ZDF_zdf}
The discrete form of the ocean subgrid scale physics has been presented in
\autoref{sec:TRA_zdf} and \autoref{sec:DYN_zdf}. At the surface and bottom boundaries,
the turbulent fluxes of momentum, heat and salt have to be defined. At the
surface they are prescribed from the surface forcing (see \autoref{chap:SBC}),
while at the bottom they are set to zero for heat and salt, unless a geothermal
flux forcing is prescribed as a bottom boundary condition ($i.e.$ \key{trabbl}
defined, see \autoref{subsec:TRA_bbc}), and specified through a bottom friction
parameterisation for momentum (see \autoref{sec:ZDF_bfr}).

In this section we briefly discuss the various choices offered to compute
the vertical eddy viscosity and diffusivity coefficients, $A_u^{vm}$ ,
$A_v^{vm}$ and $A^{vT}$ ($A^{vS}$), defined at $uw$, $vw$ and $w$
points, respectively (see \autoref{sec:TRA_zdf} and \autoref{sec:DYN_zdf}). These
coefficients can be assumed to be either constant, or a function of the local
Richardson number, or computed from a turbulent closure model (either
TKE or GLS formulation). The computation of these coefficients is initialized
in the \mdl{zdfini} module and performed in the \mdl{zdfric}, \mdl{zdftke} or
\mdl{zdfgls} modules. The trends due to the vertical momentum and tracer
diffusion, including the surface forcing, are computed and added to the
general trend in the \mdl{dynzdf} and \mdl{trazdf} modules, respectively.
These trends can be computed using either a forward time stepping scheme
(namelist parameter \np{ln\_zdfexp}\forcode{ = .true.}) or a backward time stepping
scheme (\np{ln\_zdfexp}\forcode{ = .false.}) depending on the magnitude of the mixing
coefficients, and thus of the formulation used (see \autoref{chap:STP}).
+The discrete form of the ocean subgrid scale physics has been presented in
+\autoref{sec:TRA_zdf} and \autoref{sec:DYN_zdf}.
+At the surface and bottom boundaries, the turbulent fluxes of momentum, heat and salt have to be defined.
+At the surface they are prescribed from the surface forcing (see \autoref{chap:SBC}),
+while at the bottom they are set to zero for heat and salt,
+unless a geothermal flux forcing is prescribed as a bottom boundary condition ($i.e.$ \key{trabbl} defined,
+see \autoref{subsec:TRA_bbc}), and specified through a bottom friction parameterisation for momentum
+(see \autoref{sec:ZDF_bfr}).
+
+In this section we briefly discuss the various choices offered to compute the vertical eddy viscosity and
+diffusivity coefficients, $A_u^{vm}$, $A_v^{vm}$ and $A^{vT}$ ($A^{vS}$), defined at $uw$-, $vw$- and $w$-points,
+respectively (see \autoref{sec:TRA_zdf} and \autoref{sec:DYN_zdf}).
+These coefficients can be assumed to be either constant, or a function of the local Richardson number,
+or computed from a turbulent closure model (either TKE or GLS formulation).
+The computation of these coefficients is initialized in the \mdl{zdfini} module and performed in
+the \mdl{zdfric}, \mdl{zdftke} or \mdl{zdfgls} modules.
+The trends due to the vertical momentum and tracer diffusion, including the surface forcing,
+are computed and added to the general trend in the \mdl{dynzdf} and \mdl{trazdf} modules, respectively.
+These trends can be computed using either a forward time stepping scheme
+(namelist parameter \np{ln\_zdfexp}\forcode{ = .true.}) or a backward time stepping scheme
+(\np{ln\_zdfexp}\forcode{ = .false.}) depending on the magnitude of the mixing coefficients,
+and thus of the formulation used (see \autoref{chap:STP}).
% 
@@ 56,9 +54,10 @@
%
Options are defined through the \ngn{namzdf} namelist variables.
When \key{zdfcst} is defined, the momentum and tracer vertical eddy coefficients
are set to constant values over the whole ocean. This is the crudest way to define
the vertical ocean physics. It is recommended that this option is only used in
process studies, not in basin scale simulations. Typical values used in this case are:
+Options are defined through the \ngn{namzdf} namelist variables.
+When \key{zdfcst} is defined, the momentum and tracer vertical eddy coefficients are set to
+constant values over the whole ocean.
+This is the crudest way to define the vertical ocean physics.
+It is recommended that this option is only used in process studies, not in basin scale simulations.
+Typical values used in this case are:
\begin{align*}
A_u^{vm} = A_v^{vm} &= 1.2\ 10^{-4}~m^2.s^{-1} \\
@@ 67,7 +66,7 @@
These values are set through the \np{rn\_avm0} and \np{rn\_avt0} namelist parameters.
In all cases, do not use values smaller that those associated with the molecular
viscosity and diffusivity, that is $\sim10^{6}~m^2.s^{1}$ for momentum,
$\sim10^{7}~m^2.s^{1}$ for temperature and $\sim10^{9}~m^2.s^{1}$ for salinity.
+In all cases, do not use values smaller than those associated with the molecular viscosity and diffusivity,
+that is $\sim10^{-6}~m^2.s^{-1}$ for momentum, $\sim10^{-7}~m^2.s^{-1}$ for temperature and
+$\sim10^{-9}~m^2.s^{-1}$ for salinity.
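The constant-coefficient choice with molecular floors can be sketched as below (a hedged illustration; the default value used for the tracer coefficient here is an assumption, set it from \np{rn\_avt0} in practice):

```python
# Sketch of the key_zdfcst option: one constant value per coefficient over
# the whole ocean, floored at the molecular values quoted in the text.
MOLECULAR = {"momentum": 1e-6, "temperature": 1e-7}  # m2/s

def constant_coefficients(rn_avm0=1.2e-4, rn_avt0=1.2e-5):
    """Return (avm, avt), never below the molecular values."""
    avm = max(rn_avm0, MOLECULAR["momentum"])
    avt = max(rn_avt0, MOLECULAR["temperature"])
    return avm, avt
```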
@@ 83,14 +82,12 @@
%
When \key{zdfric} is defined, a local Richardson number dependent formulation
for the vertical momentum and tracer eddy coefficients is set through the \ngn{namzdf\_ric}
namelist variables.The vertical mixing
coefficients are diagnosed from the large scale variables computed by the model.
\textit{In situ} measurements have been used to link vertical turbulent activity to
large scale ocean structures. The hypothesis of a mixing mainly maintained by the
growth of KelvinHelmholtz like instabilities leads to a dependency between the
vertical eddy coefficients and the local Richardson number ($i.e.$ the
ratio of stratification to vertical shear). Following \citet{Pacanowski_Philander_JPO81}, the following
formulation has been implemented:
+When \key{zdfric} is defined, a local Richardson number dependent formulation for the vertical momentum and
+tracer eddy coefficients is set through the \ngn{namzdf\_ric} namelist variables.
+The vertical mixing coefficients are diagnosed from the large scale variables computed by the model.
+\textit{In situ} measurements have been used to link vertical turbulent activity to large scale ocean structures.
+The hypothesis of a mixing mainly maintained by the growth of Kelvin-Helmholtz-like instabilities leads to
+a dependency between the vertical eddy coefficients and the local Richardson number
+($i.e.$ the ratio of stratification to vertical shear).
+Following \citet{Pacanowski_Philander_JPO81}, the following formulation has been implemented:
\begin{equation} \label{eq:zdfric}
\left\{ \begin{aligned}
@@ 99,34 +96,31 @@
\end{aligned} \right.
\end{equation}
where $Ri = N^2 / \left(\partial_z \textbf{U}_h \right)^2$ is the local Richardson
number, $N$ is the local BruntVais\"{a}l\"{a} frequency (see \autoref{subsec:TRA_bn2}),
$A_b^{vT} $ and $A_b^{vm}$ are the constant background values set as in the
constant case (see \autoref{subsec:ZDF_cst}), and $A_{ric}^{vT} = 10^{4}~m^2.s^{1}$
is the maximum value that can be reached by the coefficient when $Ri\leq 0$,
$a=5$ and $n=2$. The last three values can be modified by setting the
\np{rn\_avmri}, \np{rn\_alp} and \np{nn\_ric} namelist parameters, respectively.

A simple mixinglayer model to transfer and dissipate the atmospheric
 forcings (windstress and buoyancy fluxes) can be activated setting
the \np{ln\_mldw}\forcode{ = .true.} in the namelist.

In this case, the local depth of turbulent windmixing or "Ekman depth"
 $h_{e}(x,y,t)$ is evaluated and the vertical eddy coefficients prescribed within this layer.
+where $Ri = N^2 / \left(\partial_z \textbf{U}_h \right)^2$ is the local Richardson number,
+$N$ is the local Brunt-Vais\"{a}l\"{a} frequency (see \autoref{subsec:TRA_bn2}),
+$A_b^{vT}$ and $A_b^{vm}$ are the constant background values set as in the constant case
+(see \autoref{subsec:ZDF_cst}), and $A_{ric}^{vT} = 10^{-4}~m^2.s^{-1}$ is the maximum value that
+can be reached by the coefficient when $Ri\leq 0$, $a=5$ and $n=2$.
+The last three values can be modified by setting the \np{rn\_avmri}, \np{rn\_alp} and
+\np{nn\_ric} namelist parameters, respectively.
+
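The Richardson-number dependency can be sketched schematically as follows; this is a hedged illustration of a common form of the \citet{Pacanowski_Philander_JPO81} scheme, not the exact discrete expression in \mdl{zdfric} (the background defaults here are assumptions):

```python
# Schematic Pacanowski & Philander (1981)-type dependency: coefficients
# decay with the local Richardson number Ri and saturate for Ri <= 0.
def pp81_coeffs(Ri, avmri=1e-4, a=5.0, n=2, avm_b=1.2e-4, avt_b=1.2e-5):
    Ri = max(Ri, 0.0)                    # maximum mixing reached for Ri <= 0
    avm = avmri / (1.0 + a * Ri) ** n + avm_b
    avt = avm / (1.0 + a * Ri) + avt_b   # tracers mixed less at high Ri
    return avm, avt
```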
+A simple mixing-layer model to transfer and dissipate the atmospheric forcings
+(wind stress and buoyancy fluxes) can be activated by setting \np{ln\_mldw}\forcode{ = .true.} in the namelist.
+
+In this case, the local depth of turbulent wind-mixing or ``Ekman depth'' $h_{e}(x,y,t)$ is evaluated and
+the vertical eddy coefficients prescribed within this layer.
This depth is assumed proportional to the ``depth of frictional influence'' that is limited by rotation:
\begin{equation}
 h_{e} = Ek \frac {u^{*}} {f_{0}} \\
\end{equation}
where, $Ek$ is an empirical parameter, $u^{*}$ is the friction velocity and $f_{0}$ is the Coriolis
parameter.
+h_{e} = Ek \frac {u^{*}} {f_{0}}
+\end{equation}
+where $Ek$ is an empirical parameter, $u^{*}$ is the friction velocity and $f_{0}$ is the Coriolis parameter.
In this similarity height relationship, the turbulent friction velocity:
\begin{equation}
 u^{*} = \sqrt \frac {\tau} {\rho_o} \\
\end{equation}

+u^{*} = \sqrt{\frac {\tau} {\rho_o}}
+\end{equation}
is computed from the wind stress vector $\tau$ and the reference density $\rho_o$.
The final $h_{e}$ is further constrained by the adjustable bounds \np{rn\_mldmin} and \np{rn\_mldmax}.
Once $h_{e}$ is computed, the vertical eddy coefficients within $h_{e}$ are set to
+Once $h_{e}$ is computed, the vertical eddy coefficients within $h_{e}$ are set to
the empirical values \np{rn\_wtmix} and \np{rn\_wvmix} \citep{Lermusiaux2001}.
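The two relations above combine into a short sketch (illustrative only; the value of the empirical parameter `Ek` and the bounds are placeholders for the namelist settings):

```python
import math

# Sketch of the simple wind-mixing layer: u* = sqrt(|tau|/rho0) and
# h_e = Ek u*/f0, clipped between rn_mldmin and rn_mldmax.
# Ek and the bounds below are illustrative, not NEMO defaults.
def ekman_depth(tau, f0, rho0=1026.0, Ek=6.6, mldmin=1.0, mldmax=1000.0):
    ustar = math.sqrt(abs(tau) / rho0)   # turbulent friction velocity
    h_e = Ek * ustar / abs(f0)           # similarity height relationship
    return min(max(h_e, mldmin), mldmax)
```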
@@ 142,15 +136,14 @@
%
The vertical eddy viscosity and diffusivity coefficients are computed from a TKE
turbulent closure model based on a prognostic equation for $\bar{e}$, the turbulent
kinetic energy, and a closure assumption for the turbulent length scales. This
turbulent closure model has been developed by \citet{Bougeault1989} in the
atmospheric case, adapted by \citet{Gaspar1990} for the oceanic case, and
embedded in OPA, the ancestor of NEMO, by \citet{Blanke1993} for equatorial Atlantic
simulations. Since then, significant modifications have been introduced by
\citet{Madec1998} in both the implementation and the formulation of the mixing
length scale. The time evolution of $\bar{e}$ is the result of the production of
$\bar{e}$ through vertical shear, its destruction through stratification, its vertical
diffusion, and its dissipation of \citet{Kolmogorov1942} type:
+The vertical eddy viscosity and diffusivity coefficients are computed from a TKE turbulent closure model based on
+a prognostic equation for $\bar{e}$, the turbulent kinetic energy,
+and a closure assumption for the turbulent length scales.
+This turbulent closure model has been developed by \citet{Bougeault1989} in the atmospheric case,
+adapted by \citet{Gaspar1990} for the oceanic case, and embedded in OPA, the ancestor of NEMO,
+by \citet{Blanke1993} for equatorial Atlantic simulations.
+Since then, significant modifications have been introduced by \citet{Madec1998} in both the implementation and
+the formulation of the mixing length scale.
+The time evolution of $\bar{e}$ is the result of the production of $\bar{e}$ through vertical shear,
+its destruction through stratification, its vertical diffusion, and its dissipation of \citet{Kolmogorov1942} type:
\begin{equation} \label{eq:zdftke_e}
\frac{\partial \bar{e}}{\partial t} =
@@ 170,10 +163,9 @@
where $N$ is the local BruntVais\"{a}l\"{a} frequency (see \autoref{subsec:TRA_bn2}),
$l_{\epsilon }$ and $l_{\kappa }$ are the dissipation and mixing length scales,
$P_{rt}$ is the Prandtl number, $K_m$ and $K_\rho$ are the vertical eddy viscosity
and diffusivity coefficients. The constants $C_k = 0.1$ and $C_\epsilon = \sqrt {2} /2$
$\approx 0.7$ are designed to deal with vertical mixing at any depth \citep{Gaspar1990}.
They are set through namelist parameters \np{nn\_ediff} and \np{nn\_ediss}.
$P_{rt}$ can be set to unity or, following \citet{Blanke1993}, be a function
of the local Richardson number, $R_i$:
+$P_{rt}$ is the Prandtl number, $K_m$ and $K_\rho$ are the vertical eddy viscosity and diffusivity coefficients.
+The constants $C_k = 0.1$ and $C_\epsilon = \sqrt{2}/2 \approx 0.7$ are designed to deal with
+vertical mixing at any depth \citep{Gaspar1990}.
+They are set through namelist parameters \np{nn\_ediff} and \np{nn\_ediss}.
+$P_{rt}$ can be set to unity or, following \citet{Blanke1993}, be a function of the local Richardson number, $R_i$:
\begin{align*} \label{eq:prt}
P_{rt} = \begin{cases}
@@ 186,55 +178,52 @@
The choice of $P_{rt}$ is controlled by the \np{nn\_pdl} namelist variable.
At the sea surface, the value of $\bar{e}$ is prescribed from the wind
stress field as $\bar{e}_o = e_{bb} \tau / \rho_o$, with $e_{bb}$ the \np{rn\_ebb}
namelist parameter. The default value of $e_{bb}$ is 3.75. \citep{Gaspar1990}),
however a much larger value can be used when taking into account the
surface wave breaking (see below Eq. \autoref{eq:ZDF_Esbc}).
The bottom value of TKE is assumed to be equal to the value of the level just above.
The time integration of the $\bar{e}$ equation may formally lead to negative values
because the numerical scheme does not ensure its positivity. To overcome this
problem, a cutoff in the minimum value of $\bar{e}$ is used (\np{rn\_emin}
namelist parameter). Following \citet{Gaspar1990}, the cutoff value is set
to $\sqrt{2}/2~10^{6}~m^2.s^{2}$. This allows the subsequent formulations
to match that of \citet{Gargett1984} for the diffusion in the thermocline and
deep ocean : $K_\rho = 10^{3} / N$.
In addition, a cutoff is applied on $K_m$ and $K_\rho$ to avoid numerical
instabilities associated with too weak vertical diffusion. They must be
specified at least larger than the molecular values, and are set through
\np{rn\_avm0} and \np{rn\_avt0} (namzdf namelist, see \autoref{subsec:ZDF_cst}).
+At the sea surface, the value of $\bar{e}$ is prescribed from the wind stress field as
+$\bar{e}_o = e_{bb} \tau / \rho_o$, with $e_{bb}$ the \np{rn\_ebb} namelist parameter.
+The default value of $e_{bb}$ is 3.75 \citep{Gaspar1990}; however, a much larger value can be used when
+taking into account the surface wave breaking (see below Eq. \autoref{eq:ZDF_Esbc}).
+The bottom value of TKE is assumed to be equal to the value of the level just above.
+The time integration of the $\bar{e}$ equation may formally lead to negative values because
+the numerical scheme does not ensure its positivity.
+To overcome this problem, a cutoff in the minimum value of $\bar{e}$ is used (\np{rn\_emin} namelist parameter).
+Following \citet{Gaspar1990}, the cutoff value is set to $\sqrt{2}/2~10^{-6}~m^2.s^{-2}$.
+This allows the subsequent formulations to match that of \citet{Gargett1984} for the diffusion in
+the thermocline and deep ocean: $K_\rho = 10^{-3} / N$.
+In addition, a cutoff is applied on $K_m$ and $K_\rho$ to avoid numerical instabilities associated with
+too weak vertical diffusion.
+They must be specified to be at least as large as the molecular values, and are set through \np{rn\_avm0} and
+\np{rn\_avt0} (\ngn{namzdf} namelist, see \autoref{subsec:ZDF_cst}).
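The positivity cutoff described above amounts to a simple floor on $\bar{e}$, sketched here (illustrative helper, not the \mdl{zdftke} code):

```python
import math

# Sketch of the e-bar cutoff: the time-stepped TKE may formally go
# negative, so it is clipped at the rn_emin value quoted in the text.
def clip_tke(e_bar, rn_emin=math.sqrt(2.0) / 2.0 * 1e-6):
    return max(e_bar, rn_emin)
```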
\subsubsection{Turbulent length scale}
For computational efficiency, the original formulation of the turbulent length
scales proposed by \citet{Gaspar1990} has been simplified. Four formulations
are proposed, the choice of which is controlled by the \np{nn\_mxl} namelist
parameter. The first two are based on the following first order approximation
\citep{Blanke1993}:
+For computational efficiency, the original formulation of the turbulent length scales proposed by
+\citet{Gaspar1990} has been simplified.
+Four formulations are proposed, the choice of which is controlled by the \np{nn\_mxl} namelist parameter.
+The first two are based on the following first order approximation \citep{Blanke1993}:
\begin{equation} \label{eq:tke_mxl0_1}
l_k = l_\epsilon = \sqrt {2 \bar{e}\; } / N
\end{equation}
which is valid in a stable stratified region with constant values of the Brunt
Vais\"{a}l\"{a} frequency. The resulting length scale is bounded by the distance
to the surface or to the bottom (\np{nn\_mxl}\forcode{ = 0}) or by the local vertical scale factor
(\np{nn\_mxl}\forcode{ = 1}). \citet{Blanke1993} notice that this simplification has two major
drawbacks: it makes no sense for locally unstable stratification and the
computation no longer uses all the information contained in the vertical density
profile. To overcome these drawbacks, \citet{Madec1998} introduces the
\np{nn\_mxl}\forcode{ = 2..3} cases, which add an extra assumption concerning the vertical
gradient of the computed length scale. So, the length scales are first evaluated
as in \autoref{eq:tke_mxl0_1} and then bounded such that:
+which is valid in a stably stratified region with constant values of the Brunt-Vais\"{a}l\"{a} frequency.
+The resulting length scale is bounded by the distance to the surface or to the bottom
+(\np{nn\_mxl}\forcode{ = 0}) or by the local vertical scale factor (\np{nn\_mxl}\forcode{ = 1}).
+\citet{Blanke1993} notice that this simplification has two major drawbacks:
+it makes no sense for locally unstable stratification and the computation no longer uses all
+the information contained in the vertical density profile.
+To overcome these drawbacks, \citet{Madec1998} introduces the \np{nn\_mxl}\forcode{ = 2..3} cases,
+which add an extra assumption concerning the vertical gradient of the computed length scale.
+So, the length scales are first evaluated as in \autoref{eq:tke_mxl0_1} and then bounded such that:
\begin{equation} \label{eq:tke_mxl_constraint}
\frac{1}{e_3 }\left| {\frac{\partial l}{\partial k}} \right| \leq 1
\qquad \text{with }\ l = l_k = l_\epsilon
\end{equation}
\autoref{eq:tke_mxl_constraint} means that the vertical variations of the length
scale cannot be larger than the variations of depth. It provides a better
approximation of the \citet{Gaspar1990} formulation while being much less
time consuming. In particular, it allows the length scale to be limited not only
by the distance to the surface or to the ocean bottom but also by the distance
to a strongly stratified portion of the water column such as the thermocline
(\autoref{fig:mixing_length}). In order to impose the \autoref{eq:tke_mxl_constraint}
constraint, we introduce two additional length scales: $l_{up}$ and $l_{dwn}$,
the upward and downward length scales, and evaluate the dissipation and
mixing length scales as (and note that here we use numerical indexing):
+\autoref{eq:tke_mxl_constraint} means that the vertical variations of the length scale cannot be larger than
+the variations of depth.
+It provides a better approximation of the \citet{Gaspar1990} formulation while being much less
+time-consuming.
+In particular, it allows the length scale to be limited not only by the distance to the surface or
+to the ocean bottom but also by the distance to a strongly stratified portion of the water column such as
+the thermocline (\autoref{fig:mixing_length}).
+In order to impose the \autoref{eq:tke_mxl_constraint} constraint, we introduce two additional length scales:
+$l_{up}$ and $l_{dwn}$, the upward and downward length scales, and
+evaluate the dissipation and mixing length scales as
+(and note that here we use numerical indexing):
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
@@ 253,11 +242,9 @@
\end{aligned}
\end{equation}
where $l^{(k)}$ is computed using \autoref{eq:tke_mxl0_1},
$i.e.$ $l^{(k)} = \sqrt {2 {\bar e}^{(k)} / {N^2}^{(k)} }$.

In the \np{nn\_mxl}\forcode{ = 2} case, the dissipation and mixing length scales take the same
value: $ l_k= l_\epsilon = \min \left(\ l_{up} \;,\; l_{dwn}\ \right)$, while in the
\np{nn\_mxl}\forcode{ = 3} case, the dissipation and mixing turbulent length scales are give
as in \citet{Gaspar1990}:
+where $l^{(k)}$ is computed using \autoref{eq:tke_mxl0_1}, $i.e.$ $l^{(k)} = \sqrt {2 {\bar e}^{(k)} / {N^2}^{(k)} }$.
+
+In the \np{nn\_mxl}\forcode{ = 2} case, the dissipation and mixing length scales take the same value:
+$ l_k= l_\epsilon = \min \left(\ l_{up} \;,\; l_{dwn}\ \right)$, while in the \np{nn\_mxl}\forcode{ = 3} case,
+the dissipation and mixing turbulent length scales are given as in \citet{Gaspar1990}:
\begin{equation} \label{eq:tke_mxl_gaspar}
\begin{aligned}
@@ 267,19 +254,19 @@
\end{equation}
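The $l_{up}$/$l_{dwn}$ bounding amounts to two sweeps through the water column; the sketch below is a schematic illustration with zero-based indexing ($k=0$ at the surface), not the \mdl{zdftke} implementation:

```python
# Two-sweep sketch of the length-scale bound: the vertical growth of l is
# limited to the local cell thickness e3, giving upward (l_up) and
# downward (l_dwn) scales; the nn_mxl = 2 choice takes their minimum.
def bounded_length_scales(l, e3):
    n = len(l)
    l_up, l_dwn = list(l), list(l)
    for k in range(1, n):                     # downward sweep from surface
        l_up[k] = min(l_up[k], l_up[k - 1] + e3[k])
    for k in range(n - 2, -1, -1):            # upward sweep from bottom
        l_dwn[k] = min(l_dwn[k], l_dwn[k + 1] + e3[k])
    return [min(u, d) for u, d in zip(l_up, l_dwn)]
```

A spike in $l$ next to a strongly stratified layer is thus clipped to grow no faster than depth itself, as \autoref{eq:tke_mxl_constraint} requires.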
At the ocean surface, a non zero length scale is set through the \np{rn\_mxl0} namelist
parameter. Usually the surface scale is given by $l_o = \kappa \,z_o$
where $\kappa = 0.4$ is von Karman's constant and $z_o$ the roughness
parameter of the surface. Assuming $z_o=0.1$~m \citep{Craig_Banner_JPO94}
leads to a 0.04~m, the default value of \np{rn\_mxl0}. In the ocean interior
a minimum length scale is set to recover the molecular viscosity when $\bar{e}$
reach its minimum value ($1.10^{6}= C_k\, l_{min} \,\sqrt{\bar{e}_{min}}$ ).
+At the ocean surface, a non zero length scale is set through the \np{rn\_mxl0} namelist parameter.
+Usually the surface scale is given by $l_o = \kappa \,z_o$ where $\kappa = 0.4$ is von Karman's constant and
+$z_o$ the roughness parameter of the surface.
+Assuming $z_o=0.1$~m \citep{Craig_Banner_JPO94} leads to a value of 0.04~m, the default value of \np{rn\_mxl0}.
+In the ocean interior a minimum length scale is set to recover the molecular viscosity when
+$\bar{e}$ reaches its minimum value ($1.10^{-6} = C_k\, l_{min}\,\sqrt{\bar{e}_{min}}$).
\subsubsection{Surface wave breaking parameterization}
%%
Following \citet{Mellor_Blumberg_JPO04}, the TKE turbulence closure model has been modified
to include the effect of surface wave breaking energetics. This results in a reduction of summertime
surface temperature when the mixed layer is relatively shallow. The \citet{Mellor_Blumberg_JPO04}
modifications acts on surface length scale and TKE values and airsea drag coefficient.
+Following \citet{Mellor_Blumberg_JPO04}, the TKE turbulence closure model has been modified to
+include the effect of surface wave breaking energetics.
+This results in a reduction of summertime surface temperature when the mixed layer is relatively shallow.
+The \citet{Mellor_Blumberg_JPO04} modifications act on the surface length scale, TKE values and
+the air-sea drag coefficient.
The latter concerns the bulk formulae and is not discussed here.
@@ 288,7 +275,6 @@
\bar{e}_o = \frac{1}{2}\,\left( 15.8\,\alpha_{CB} \right)^{2/3} \,\frac{\tau}{\rho_o}
\end{equation}
where $\alpha_{CB}$ is the \citet{Craig_Banner_JPO94} constant of proportionality
which depends on the ''wave age'', ranging from 57 for mature waves to 146 for
younger waves \citep{Mellor_Blumberg_JPO04}.
+where $\alpha_{CB}$ is the \citet{Craig_Banner_JPO94} constant of proportionality which depends on the ``wave age'',
+ranging from 57 for mature waves to 146 for younger waves \citep{Mellor_Blumberg_JPO04}.
The boundary condition on the turbulent length scale follows the Charnock's relation:
\begin{equation} \label{eq:ZDF_Lsbc}
@@ 296,32 +282,35 @@
\end{equation}
where $\kappa=0.40$ is the von Karman constant, and $\beta$ is the Charnock's constant.
\citet{Mellor_Blumberg_JPO04} suggest $\beta = 2.10^{5}$ the value chosen by \citet{Stacey_JPO99}
citing observation evidence, and $\alpha_{CB} = 100$ the Craig and Banner's value.
As the surface boundary condition on TKE is prescribed through $\bar{e}_o = e_{bb} \tau / \rho_o$,
+\citet{Mellor_Blumberg_JPO04} suggest $\beta = 2.10^{5}$, the value chosen by
+\citet{Stacey_JPO99} citing observational evidence, and
+$\alpha_{CB} = 100$, the Craig and Banner value.
+As the surface boundary condition on TKE is prescribed through $\bar{e}_o = e_{bb} \tau / \rho_o$,
with $e_{bb}$ the \np{rn\_ebb} namelist parameter, setting \np{rn\_ebb}\forcode{ = 67.83} corresponds
to $\alpha_{CB} = 100$. Further setting \np{ln\_mxl0} to true applies \autoref{eq:ZDF_Lsbc}
as surface boundary condition on length scale, with $\beta$ hard coded to the Stacey's value.
Note that a minimal threshold of \np{rn\_emin0}$=10^{4}~m^2.s^{2}$ (namelist parameters)
is applied on surface $\bar{e}$ value.
+to $\alpha_{CB} = 100$.
+Further setting \np{ln\_mxl0} to true applies \autoref{eq:ZDF_Lsbc} as the surface boundary condition on the length scale,
+with $\beta$ hard-coded to Stacey's value.
+Note that a minimal threshold of \np{rn\_emin0}$=10^{-4}~m^2.s^{-2}$ (namelist parameter) is applied to
+the surface $\bar{e}$ value.
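The wave-breaking surface TKE condition quoted above can be checked with a short sketch; note how $\alpha_{CB} = 100$ reproduces the \np{rn\_ebb}\forcode{ = 67.83} setting mentioned in the text:

```python
# Sketch of the wave-breaking surface TKE condition:
# e_o = 0.5 (15.8 alpha_CB)^(2/3) tau / rho0.
def surface_tke_wave_breaking(tau, rho0, alpha_cb=100.0):
    return 0.5 * (15.8 * alpha_cb) ** (2.0 / 3.0) * tau / rho0

# With tau/rho0 factored out, alpha_CB = 100 gives a prefactor ~67.8,
# i.e. the e_bb value quoted in the text.
```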
\subsubsection{Langmuir cells}
%%
Langmuir circulations (LC) can be described as ordered largescale vertical motions
in the surface layer of the oceans. Although LC have nothing to do with convection,
the circulation pattern is rather similar to socalled convective rolls in the atmospheric
boundary layer. The detailed physics behind LC is described in, for example,
\citet{Craik_Leibovich_JFM76}. The prevailing explanation is that LC arise from
a nonlinear interaction between the Stokes drift and wind drift currents.

Here we introduced in the TKE turbulent closure the simple parameterization of
Langmuir circulations proposed by \citep{Axell_JGR02} for a $k\epsilon$ turbulent closure.
The parameterization, tuned against largeeddy simulation, includes the whole effect
of LC in an extra source terms of TKE, $P_{LC}$.
The presence of $P_{LC}$ in \autoref{eq:zdftke_e}, the TKE equation, is controlled
by setting \np{ln\_lc} to \forcode{.true.} in the namtke namelist.
+Langmuir circulations (LC) can be described as ordered large-scale vertical motions in
+the surface layer of the oceans.
+Although LC have nothing to do with convection, the circulation pattern is rather similar to
+so-called convective rolls in the atmospheric boundary layer.
+The detailed physics behind LC is described in, for example, \citet{Craik_Leibovich_JFM76}.
+The prevailing explanation is that LC arise from a nonlinear interaction between the Stokes drift and
+wind drift currents.
+
+Here we introduced in the TKE turbulent closure the simple parameterization of Langmuir circulations proposed by
+\citep{Axell_JGR02} for a $k\epsilon$ turbulent closure.
+The parameterization, tuned against largeeddy simulation, includes the whole effect of LC in
+an extra source terms of TKE, $P_{LC}$.
+The presence of $P_{LC}$ in \autoref{eq:zdftke_e}, the TKE equation, is controlled by setting \np{ln\_lc} to
+\forcode{.true.} in the namtke namelist.
By making an analogy with the characteristic convective velocity scale
($e.g.$, \citet{D'Alessio_al_JPO98}), $P_{LC}$ is assumed to be :
+By making an analogy with the characteristic convective velocity scale ($e.g.$, \citet{D'Alessio_al_JPO98}),
+$P_{LC}$ is assumed to be:
\begin{equation}
P_{LC}(z) = \frac{w_{LC}^3(z)}{H_{LC}}
@@ -330,11 +319,11 @@
With no information about the wave field, $w_{LC}$ is assumed to be proportional to
the Stokes drift $u_s = 0.377\,\,\tau^{1/2}$, where $\tau$ is the magnitude of the surface wind stress
\footnote{Following \citet{Li_Garrett_JMR93}, the surface Stoke drift velocity
may be expressed as $u_s = 0.016 \,U_{10m}$. Assuming an air density of
$\rho_a=1.22 \,Kg/m^3$ and a drag coefficient of $1.5~10^{3}$ give the expression
used of $u_s$ as a function of the module of surface stress}.
For the vertical variation, $w_{LC}$ is assumed to be zero at the surface as well as
at a finite depth $H_{LC}$ (which is often close to the mixed layer depth), and simply
varies as a sine function in between (a firstorder profile for the Langmuir cell structures).
+\footnote{Following \citet{Li_Garrett_JMR93}, the surface Stokes drift velocity may be expressed as
+ $u_s = 0.016 \,U_{10m}$.
+ Assuming an air density of $\rho_a=1.22 \,kg/m^3$ and a drag coefficient of
+ $1.5~10^{-3}$ gives the expression used for $u_s$ as a function of the magnitude of the surface stress}.
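The footnote's numbers can be cross-checked directly (a sketch using only the values quoted above):

```python
# Sketch: combine u_s = 0.016 * U10 with tau = rho_a * Cd * U10**2 to recover
# the u_s ~ 0.377 * tau**0.5 proportionality used in the text.
rho_a = 1.22  # air density (kg/m^3)
c_d = 1.5e-3  # drag coefficient
coef = 0.016 / (rho_a * c_d) ** 0.5  # u_s = coef * tau**0.5
print(round(coef, 3))  # close to the 0.377 factor quoted above
```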
+For the vertical variation, $w_{LC}$ is assumed to be zero at the surface as well as at
+a finite depth $H_{LC}$ (which is often close to the mixed layer depth),
+and simply varies as a sine function in between (a first-order profile for the Langmuir cell structures).
The resulting expression for $w_{LC}$ is:
\begin{equation}
@@ -344,12 +333,12 @@
\end{cases}
\end{equation}
where $c_{LC} = 0.15$ has been chosen by \citep{Axell_JGR02} as a good compromise
to fit LES data. The chosen value yields maximum vertical velocities $w_{LC}$ of the order
of a few centimeters per second. The value of $c_{LC}$ is set through the \np{rn\_lc}
namelist parameter, having in mind that it should stay between 0.15 and 0.54 \citep{Axell_JGR02}.
+where $c_{LC} = 0.15$ has been chosen by \citet{Axell_JGR02} as a good compromise to fit LES data.
+The chosen value yields maximum vertical velocities $w_{LC}$ of the order of a few centimeters per second.
+The value of $c_{LC}$ is set through the \np{rn\_lc} namelist parameter,
+bearing in mind that it should stay between 0.15 and 0.54 \citep{Axell_JGR02}.
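The profile described above can be sketched as follows (illustrative only, not the NEMO code; $H_{LC}$ and the wind stress are hypothetical values):

```python
import math

def w_lc(z, h_lc, u_s, c_lc=0.15):
    """Langmuir vertical velocity scale: sine profile between z = 0 and z = -H_LC."""
    if -h_lc <= z <= 0.0:
        return c_lc * u_s * math.sin(-math.pi * z / h_lc)
    return 0.0

def p_lc(z, h_lc, u_s, c_lc=0.15):
    """Extra TKE source term due to Langmuir cells, P_LC = w_LC**3 / H_LC."""
    return w_lc(z, h_lc, u_s, c_lc) ** 3 / h_lc

u_s = 0.377 * 0.1 ** 0.5  # Stokes drift for an assumed 0.1 N/m^2 wind stress
h_lc = 50.0               # assumed Langmuir cell depth (m)
print(w_lc(-0.5 * h_lc, h_lc, u_s))  # maximum w_LC, of the order of cm/s
```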
The $H_{LC}$ is estimated in a similar way as the turbulent length scale of TKE equations:
$H_{LC}$ is depth to which a water parcel with kinetic energy due to Stoke drift
can reach on its own by converting its kinetic energy to potential energy, according to
+$H_{LC}$ is the depth to which a water parcel with kinetic energy due to Stokes drift can reach on its own by
+converting its kinetic energy to potential energy, according to
\begin{equation}
 - \int_{-H_{LC}}^0 { N^2\;z \;dz} = \frac{1}{2} u_s^2
@@ -360,34 +349,32 @@
%%
Vertical mixing parameterizations commonly used in ocean general circulation models
tend to produce mixedlayer depths that are too shallow during summer months and windy conditions.
This bias is particularly acute over the Southern Ocean.
To overcome this systematic bias, an ad hoc parameterization is introduced into the TKE scheme \cite{Rodgers_2014}.
The parameterization is an empirical one, $i.e.$ not derived from theoretical considerations,
+Vertical mixing parameterizations commonly used in ocean general circulation models tend to
+produce mixed-layer depths that are too shallow during summer months and windy conditions.
+This bias is particularly acute over the Southern Ocean.
+To overcome this systematic bias, an ad hoc parameterization is introduced into the TKE scheme \citep{Rodgers_2014}.
+The parameterization is an empirical one, $i.e.$ not derived from theoretical considerations,
but rather is meant to account for observed processes that affect the density structure of
the ocean's planetary boundary layer that are not explicitly captured by default in the TKE scheme
($i.e.$ near-inertial oscillations and ocean swells and waves).
When using this parameterization ($i.e.$ when \np{nn\_etau}\forcode{ = 1}), the TKE input to the ocean ($S$)
imposed by the winds in the form of nearinertial oscillations, swell and waves is parameterized
by \autoref{eq:ZDF_Esbc} the standard TKE surface boundary condition, plus a depth depend one given by:
+When using this parameterization ($i.e.$ when \np{nn\_etau}\forcode{ = 1}),
+the TKE input to the ocean ($S$) imposed by the winds in the form of near-inertial oscillations,
+swell and waves is parameterized by \autoref{eq:ZDF_Esbc}, the standard TKE surface boundary condition,
+plus a depth-dependent one given by:
\begin{equation} \label{eq:ZDF_Ehtau}
S = (1-f_i) \; f_r \; e_s \; e^{z / h_\tau}
\end{equation}
where
$z$ is the depth,
$e_s$ is TKE surface boundary condition,
$f_r$ is the fraction of the surface TKE that penetrate in the ocean,
$h_\tau$ is a vertical mixing length scale that controls exponential shape of the penetration,
and $f_i$ is the ice concentration (no penetration if $f_i=1$, that is if the ocean is entirely
covered by seaice).
The value of $f_r$, usually a few percents, is specified through \np{rn\_efr} namelist parameter.
The vertical mixing length scale, $h_\tau$, can be set as a 10~m uniform value (\np{nn\_etau}\forcode{ = 0})
or a latitude dependent value (varying from 0.5~m at the Equator to a maximum value of 30~m
at high latitudes (\np{nn\_etau}\forcode{ = 1}).

Note that two other option existe, \np{nn\_etau}\forcode{ = 2..3}. They correspond to applying
\autoref{eq:ZDF_Ehtau} only at the base of the mixed layer, or to using the high frequency part
of the stress to evaluate the fraction of TKE that penetrate the ocean.
+where $z$ is the depth, $e_s$ is the TKE surface boundary condition, $f_r$ is the fraction of the surface TKE that
+penetrates into the ocean, $h_\tau$ is a vertical mixing length scale that controls the exponential shape of
+the penetration, and $f_i$ is the ice concentration
+(no penetration if $f_i=1$, that is if the ocean is entirely covered by sea-ice).
+The value of $f_r$, usually a few percent, is specified through the \np{rn\_efr} namelist parameter.
+The vertical mixing length scale, $h_\tau$, can be set as a 10~m uniform value (\np{nn\_etau}\forcode{ = 0}) or
+a latitude-dependent value, varying from 0.5~m at the Equator to a maximum value of 30~m at high latitudes
+(\np{nn\_etau}\forcode{ = 1}).
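A sketch of \autoref{eq:ZDF_Ehtau} (the $e_s$, $f_r$ and $h_\tau$ values below are illustrative assumptions, not namelist defaults):

```python
import math

def tke_penetration(z, e_s, f_r=0.05, h_tau=10.0, f_i=0.0):
    """S = (1 - f_i) * f_r * e_s * exp(z / h_tau), with z <= 0 the depth."""
    return (1.0 - f_i) * f_r * e_s * math.exp(z / h_tau)

e_s = 1.0e-3  # assumed surface TKE boundary condition (m^2/s^2)
print(tke_penetration(0.0, e_s))    # f_r * e_s at the surface
print(tke_penetration(-30.0, e_s))  # decays over the scale h_tau
```

Under full ice cover ($f_i = 1$) the source vanishes, as stated above.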
+
+Note that two other options exist, \np{nn\_etau}\forcode{ = 2..3}.
+They correspond to applying \autoref{eq:ZDF_Ehtau} only at the base of the mixed layer,
+or to using the high-frequency part of the stress to evaluate the fraction of TKE that penetrates the ocean.
Those two options are obsolescent features introduced for test purposes.
They will be removed in the next release.
@@ -420,15 +407,16 @@
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The production of turbulence by vertical shear (the first term of the right hand side
of \autoref{eq:zdftke_e}) should balance the loss of kinetic energy associated with
the vertical momentum diffusion (first line in \autoref{eq:PE_zdf}). To do so a special care
have to be taken for both the time and space discretization of the TKE equation
\citep{Burchard_OM02,Marsaleix_al_OM08}.

Let us first address the time stepping issue. \autoref{fig:TKE_time_scheme} shows
how the twolevel LeapFrog time stepping of the momentum and tracer equations interplays
with the onelevel forward time stepping of TKE equation. With this framework, the total loss
of kinetic energy (in 1D for the demonstration) due to the vertical momentum diffusion is
obtained by multiplying this quantity by $u^t$ and summing the result vertically:
+The production of turbulence by vertical shear (the first term of the right hand side of
+\autoref{eq:zdftke_e}) should balance the loss of kinetic energy associated with the vertical momentum diffusion
+(first line in \autoref{eq:PE_zdf}).
+To do so, special care has to be taken with both the time and space discretization of
+the TKE equation \citep{Burchard_OM02,Marsaleix_al_OM08}.
+
+Let us first address the time stepping issue. \autoref{fig:TKE_time_scheme} shows how
+the two-level Leap-Frog time stepping of the momentum and tracer equations interplays with
+the one-level forward time stepping of the TKE equation.
+With this framework, the total loss of kinetic energy (in 1D for the demonstration) due to
+the vertical momentum diffusion is obtained by multiplying this quantity by $u^t$ and
+summing the result vertically:
\begin{equation} \label{eq:energ1}
\begin{split}
@@ -438,22 +426,21 @@
\end{split}
\end{equation}
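The cancellation at stake rests on a discrete integration by parts; the identity can be illustrated numerically (a pure algebraic check, independent of NEMO, with zero surface and bottom fluxes so as to isolate the interior production term):

```python
import random

# sum_k u_k (F_{k+1/2} - F_{k-1/2}) = -sum_k F_{k+1/2} (u_{k+1} - u_k)
# when the boundary fluxes vanish: the kinetic energy loss seen by the
# momentum equation is exactly (minus) the sum of interface production terms.
random.seed(0)
n = 10
u = [random.random() for _ in range(n)]
F = [random.random() for _ in range(n + 1)]  # interface fluxes
F[0] = F[n] = 0.0                            # no flux through bottom and surface

lhs = sum(u[k] * (F[k + 1] - F[k]) for k in range(n))
rhs = -sum(F[k + 1] * (u[k + 1] - u[k]) for k in range(n - 1))
print(abs(lhs - rhs) < 1e-12)  # True: the discrete identity is exact
```

With $F_{k+1/2} = K_m\,\delta u / e_3$, each interface term becomes the discrete shear production, which is why the production rate must be built from the same discrete shears as the diffusion operator.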
Here, the vertical diffusion of momentum is discretized backward in time
with a coefficient, $K_m$, known at time $t$ (\autoref{fig:TKE_time_scheme}),
as it is required when using the TKE scheme (see \autoref{sec:STP_forward_imp}).
The first term of the right hand side of \autoref{eq:energ1} represents the kinetic energy
transfer at the surface (atmospheric forcing) and at the bottom (friction effect).
The second term is always negative. It is the dissipation rate of kinetic energy,
and thus minus the shear production rate of $\bar{e}$. \autoref{eq:energ1}
implies that, to be energetically consistent, the production rate of $\bar{e}$
used to compute $(\bar{e})^t$ (and thus ${K_m}^t$) should be expressed as
${K_m}^{t\rdt}\,(\partial_z u)^{t\rdt} \,(\partial_z u)^t$ (and not by the more straightforward
$K_m \left( \partial_z u \right)^2$ expression taken at time $t$ or $t\rdt$).

A similar consideration applies on the destruction rate of $\bar{e}$ due to stratification
(second term of the right hand side of \autoref{eq:zdftke_e}). This term
must balance the input of potential energy resulting from vertical mixing.
The rate of change of potential energy (in 1D for the demonstration) due vertical
mixing is obtained by multiplying vertical density diffusion
tendency by $g\,z$ and and summing the result vertically:
+Here, the vertical diffusion of momentum is discretized backward in time with a coefficient, $K_m$,
+known at time $t$ (\autoref{fig:TKE_time_scheme}), as it is required when using the TKE scheme
+(see \autoref{sec:STP_forward_imp}).
+The first term of the right hand side of \autoref{eq:energ1} represents the kinetic energy transfer at
+the surface (atmospheric forcing) and at the bottom (friction effect).
+The second term is always negative.
+It is the dissipation rate of kinetic energy, and thus minus the shear production rate of $\bar{e}$.
+\autoref{eq:energ1} implies that, to be energetically consistent,
+the production rate of $\bar{e}$ used to compute $(\bar{e})^t$ (and thus ${K_m}^t$) should be expressed as
+${K_m}^{t\rdt}\,(\partial_z u)^{t\rdt} \,(\partial_z u)^t$
+(and not by the more straightforward $K_m \left( \partial_z u \right)^2$ expression taken at time $t$ or $t\rdt$).
+
+A similar consideration applies to the destruction rate of $\bar{e}$ due to stratification
+(second term of the right hand side of \autoref{eq:zdftke_e}).
+This term must balance the input of potential energy resulting from vertical mixing.
+The rate of change of potential energy (in 1D for the demonstration) due to vertical mixing is obtained by
+multiplying the vertical density diffusion tendency by $g\,z$ and summing the result vertically:
\begin{equation} \label{eq:energ2}
\begin{split}
@@ -466,21 +453,19 @@
\end{equation}
where we use $N^2 = -g \,\partial_k \rho / (e_3 \rho)$.
The first term of the right hand side of \autoref{eq:energ2} is always zero
because there is no diffusive flux through the ocean surface and bottom).
The second term is minus the destruction rate of $\bar{e}$ due to stratification.
Therefore \autoref{eq:energ1} implies that, to be energetically consistent, the product
${K_\rho}^{t\rdt}\,(N^2)^t$ should be used in \autoref{eq:zdftke_e}, the TKE equation.

Let us now address the space discretization issue.
The vertical eddy coefficients are defined at $w$point whereas the horizontal velocity
components are in the centre of the side faces of a $t$box in staggered Cgrid
(\autoref{fig:cell}). A space averaging is thus required to obtain the shear TKE production term.
By redoing the \autoref{eq:energ1} in the 3D case, it can be shown that the product of
eddy coefficient by the shear at $t$ and $t\rdt$ must be performed prior to the averaging.
Furthermore, the possible time variation of $e_3$ (\key{vvl} case) have to be taken into
account.

The above energetic considerations leads to
the following final discrete form for the TKE equation:
+The first term of the right hand side of \autoref{eq:energ2} is always zero because
+there is no diffusive flux through the ocean surface and bottom.
+The second term is minus the destruction rate of $\bar{e}$ due to stratification.
+Therefore \autoref{eq:energ2} implies that, to be energetically consistent,
+the product ${K_\rho}^{t-\rdt}\,(N^2)^t$ should be used in \autoref{eq:zdftke_e}, the TKE equation.
+
+Let us now address the space discretization issue.
+The vertical eddy coefficients are defined at $w$-points whereas the horizontal velocity components are in
+the centre of the side faces of a $t$-box on the staggered C-grid (\autoref{fig:cell}).
+A space averaging is thus required to obtain the shear TKE production term.
+By redoing \autoref{eq:energ1} in the 3D case, it can be shown that the product of the eddy coefficient by
+the shear at $t$ and $t-\rdt$ must be performed prior to the averaging.
+Furthermore, the possible time variation of $e_3$ (\key{vvl} case) has to be taken into account.
+
+The above energetic considerations lead to the following final discrete form for the TKE equation:
\begin{equation} \label{eq:zdftke_ene}
\begin{split}
@@ -500,11 +485,10 @@
\end{split}
\end{equation}
where the last two terms in \autoref{eq:zdftke_ene} (vertical diffusion and Kolmogorov dissipation)
are time stepped using a backward scheme (see\autoref{sec:STP_forward_imp}).
Note that the Kolmogorov term has been linearized in time in order to render
the implicit computation possible. The restart of the TKE scheme
requires the storage of $\bar {e}$, $K_m$, $K_\rho$ and $l_\epsilon$ as they all appear in
the right hand side of \autoref{eq:zdftke_ene}. For the latter, it is in fact
the ratio $\sqrt{\bar{e}}/l_\epsilon$ which is stored.
+where the last two terms in \autoref{eq:zdftke_ene} (vertical diffusion and Kolmogorov dissipation)
+are time stepped using a backward scheme (see \autoref{sec:STP_forward_imp}).
+Note that the Kolmogorov term has been linearized in time in order to render the implicit computation possible.
+The restart of the TKE scheme requires the storage of $\bar {e}$, $K_m$, $K_\rho$ and $l_\epsilon$ as
+they all appear in the right hand side of \autoref{eq:zdftke_ene}.
+For the latter, it is in fact the ratio $\sqrt{\bar{e}}/l_\epsilon$ which is stored.
% 
@@ -519,12 +503,11 @@
%
The Generic Length Scale (GLS) scheme is a turbulent closure scheme based on
two prognostic equations: one for the turbulent kinetic energy $\bar {e}$, and another
for the generic length scale, $\psi$ \citep{Umlauf_Burchard_JMS03, Umlauf_Burchard_CSR05}.
This later variable is defined as : $\psi = {C_{0\mu}}^{p} \ {\bar{e}}^{m} \ l^{n}$,
where the triplet $(p, m, n)$ value given in Tab.\autoref{tab:GLS} allows to recover
a number of wellknown turbulent closures ($k$$kl$ \citep{Mellor_Yamada_1982},
$k$$\epsilon$ \citep{Rodi_1987}, $k$$\omega$ \citep{Wilcox_1988}
among others \citep{Umlauf_Burchard_JMS03,Kantha_Carniel_CSR05}).
+The Generic Length Scale (GLS) scheme is a turbulent closure scheme based on two prognostic equations:
+one for the turbulent kinetic energy $\bar {e}$, and another for the generic length scale,
+$\psi$ \citep{Umlauf_Burchard_JMS03, Umlauf_Burchard_CSR05}.
+This latter variable is defined as: $\psi = {C_{0\mu}}^{p} \ {\bar{e}}^{m} \ l^{n}$,
+where the values of the triplet $(p, m, n)$ given in \autoref{tab:GLS} allow one to recover a number of
+well-known turbulent closures ($k$-$kl$ \citep{Mellor_Yamada_1982}, $k$-$\epsilon$ \citep{Rodi_1987},
+$k$-$\omega$ \citep{Wilcox_1988} among others \citep{Umlauf_Burchard_JMS03,Kantha_Carniel_CSR05}).
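The definition of $\psi$ can be made concrete in a few lines; the $(p, m, n)$ triplets below are the values commonly quoted in the literature for the closures named above (an assumption here; the authoritative values are those of \autoref{tab:GLS}):

```python
# Commonly quoted (p, m, n) triplets (assumed from the literature, not from Tab. GLS)
TRIPLETS = {
    "k-kl":      (0.0, 1.0, 1.0),    # Mellor and Yamada (1982)
    "k-epsilon": (3.0, 1.5, -1.0),   # Rodi (1987)
    "k-omega":   (-1.0, 0.5, -1.0),  # Wilcox (1988)
}

def psi(e, l, closure, c0mu=0.5477):
    """Generic length scale psi = C0mu**p * e**m * l**n."""
    p, m, n = TRIPLETS[closure]
    return c0mu ** p * e ** m * l ** n

# For k-epsilon, psi is, up to the constant C0mu**3, the dissipation
# rate e**1.5 / l of the equation below.
print(psi(1.0e-3, 10.0, "k-epsilon"))
```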
The GLS scheme is given by the following set of equations:
\begin{equation} \label{eq:zdfgls_e}
@@ -558,9 +541,10 @@
{\epsilon} = C_{0\mu} \,\frac{\bar {e}^{3/2}}{l} \;
\end{equation}
where $N$ is the local BruntVais\"{a}l\"{a} frequency (see \autoref{subsec:TRA_bn2})
and $\epsilon$ the dissipation rate.
The constants $C_1$, $C_2$, $C_3$, ${\sigma_e}$, ${\sigma_{\psi}}$ and the wall function ($Fw$)
depends of the choice of the turbulence model. Four different turbulent models are predefined
(Tab.\autoref{tab:GLS}). They are made available through the \np{nn\_clo} namelist parameter.
+where $N$ is the local Brunt-V\"{a}is\"{a}l\"{a} frequency (see \autoref{subsec:TRA_bn2}) and
+$\epsilon$ the dissipation rate.
+The constants $C_1$, $C_2$, $C_3$, ${\sigma_e}$, ${\sigma_{\psi}}$ and the wall function ($Fw$) depend on
+the choice of the turbulence model.
+Four different turbulent models are predefined (\autoref{tab:GLS}).
+They are made available through the \np{nn\_clos} namelist parameter.
%TABLE
@@ -584,34 +568,36 @@
\end{tabular}
\caption{ \protect\label{tab:GLS}
Set of predefined GLS parameters, or equivalently predefined turbulence models available
with \protect\key{zdfgls} and controlled by the \protect\np{nn\_clos} namelist variable in \protect\ngn{namzdf\_gls} .}
+ Set of predefined GLS parameters, or equivalently predefined turbulence models available with
+ \protect\key{zdfgls} and controlled by the \protect\np{nn\_clos} namelist variable in \protect\ngn{namzdf\_gls}.}
\end{center} \end{table}
%
In the MellorYamada model, the negativity of $n$ allows to use a wall function to force
the convergence of the mixing length towards $K z_b$ ($K$: Kappa and $z_b$: rugosity length)
value near physical boundaries (logarithmic boundary layer law). $C_{\mu}$ and $C_{\mu'}$
are calculated from stability function proposed by \citet{Galperin_al_JAS88}, or by \citet{Kantha_Clayson_1994}
or one of the two functions suggested by \citet{Canuto_2001} (\np{nn\_stab\_func}\forcode{ = 0..3}, resp.).
+In the Mellor-Yamada model, the negativity of $n$ allows the use of a wall function to force the convergence of
+the mixing length towards the $K z_b$ value ($K$: the von K\'arm\'an constant and $z_b$: the rugosity length)
+near physical boundaries (logarithmic boundary layer law).
+$C_{\mu}$ and $C_{\mu'}$ are calculated from the stability functions proposed by \citet{Galperin_al_JAS88},
+or by \citet{Kantha_Clayson_1994}, or one of the two functions suggested by \citet{Canuto_2001}
+(\np{nn\_stab\_func}\forcode{ = 0..3}, resp.).
The value of $C_{0\mu}$ depends on the choice of the stability function.
The surface and bottom boundary condition on both $\bar{e}$ and $\psi$ can be calculated
thanks to Dirichlet or Neumann condition through \np{nn\_tkebc\_surf} and \np{nn\_tkebc\_bot}, resp.
As for TKE closure , the wave effect on the mixing is considered when \np{ln\_crban}\forcode{ = .true.}
\citep{Craig_Banner_JPO94, Mellor_Blumberg_JPO04}. The \np{rn\_crban} namelist parameter
is $\alpha_{CB}$ in \autoref{eq:ZDF_Esbc} and \np{rn\_charn} provides the value of $\beta$ in \autoref{eq:ZDF_Lsbc}.

The $\psi$ equation is known to fail in stably stratified flows, and for this reason
almost all authors apply a clipping of the length scale as an \textit{ad hoc} remedy.
With this clipping, the maximum permissible length scale is determined by
$l_{max} = c_{lim} \sqrt{2\bar{e}}/ N$. A value of $c_{lim} = 0.53$ is often used
\citep{Galperin_al_JAS88}. \cite{Umlauf_Burchard_CSR05} show that the value of
the clipping factor is of crucial importance for the entrainment depth predicted in
stably stratified situations, and that its value has to be chosen in accordance
with the algebraic model for the turbulent fluxes. The clipping is only activated
if \np{ln\_length\_lim}\forcode{ = .true.}, and the $c_{lim}$ is set to the \np{rn\_clim\_galp} value.

The time and space discretization of the GLS equations follows the same energetic
consideration as for the TKE case described in \autoref{subsec:ZDF_tke_ene} \citep{Burchard_OM02}.
+The surface and bottom boundary conditions on both $\bar{e}$ and $\psi$ can be calculated using a Dirichlet or
+Neumann condition through \np{nn\_tkebc\_surf} and \np{nn\_tkebc\_bot}, respectively.
+As for TKE closure, the wave effect on the mixing is considered when
+\np{ln\_crban}\forcode{ = .true.} \citep{Craig_Banner_JPO94, Mellor_Blumberg_JPO04}.
+The \np{rn\_crban} namelist parameter is $\alpha_{CB}$ in \autoref{eq:ZDF_Esbc} and
+\np{rn\_charn} provides the value of $\beta$ in \autoref{eq:ZDF_Lsbc}.
+
+The $\psi$ equation is known to fail in stably stratified flows, and for this reason
+almost all authors apply a clipping of the length scale as an \textit{ad hoc} remedy.
+With this clipping, the maximum permissible length scale is determined by $l_{max} = c_{lim} \sqrt{2\bar{e}}/ N$.
+A value of $c_{lim} = 0.53$ is often used \citep{Galperin_al_JAS88}.
+\cite{Umlauf_Burchard_CSR05} show that the value of the clipping factor is of crucial importance for
+the entrainment depth predicted in stably stratified situations,
+and that its value has to be chosen in accordance with the algebraic model for the turbulent fluxes.
+The clipping is only activated if \np{ln\_length\_lim}\forcode{ = .true.},
+and $c_{lim}$ is set to the \np{rn\_clim\_galp} value.
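A sketch of the clipping rule (hypothetical $\bar{e}$ and $N^2$ values):

```python
import math

def clip_length(l, e, n2, c_lim=0.53):
    """Limit the length scale to c_lim * sqrt(2 e) / N in stable stratification."""
    if n2 <= 0.0:
        return l  # no clipping in neutral or unstable conditions
    return min(l, c_lim * math.sqrt(2.0 * e) / math.sqrt(n2))

print(clip_length(50.0, e=1.0e-4, n2=1.0e-5))   # strongly stratified: clipped
print(clip_length(50.0, e=1.0e-4, n2=-1.0e-6))  # unstable: unchanged
```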
+
+The time and space discretization of the GLS equations follows the same energetic consideration as for
+the TKE case described in \autoref{subsec:ZDF_tke_ene} \citep{Burchard_OM02}.
Examples of the performance of the four turbulent closure schemes can be found in \citet{Warner_al_OM05}.
@@ -640,12 +626,10 @@
%
Static instabilities (i.e. light potential densities under heavy ones) may
occur at particular ocean grid points. In nature, convective processes
quickly reestablish the static stability of the water column. These
processes have been removed from the model via the hydrostatic
assumption so they must be parameterized. Three parameterisations
are available to deal with convective processes: a nonpenetrative
convective adjustment or an enhanced vertical diffusion, or/and the
use of a turbulent closure scheme.
+Static instabilities ($i.e.$ light potential densities under heavy ones) may occur at particular ocean grid points.
+In nature, convective processes quickly re-establish the static stability of the water column.
+These processes have been removed from the model via the hydrostatic assumption so they must be parameterized.
+Three parameterisations are available to deal with convective processes:
+a non-penetrative convective adjustment, an enhanced vertical diffusion,
+and/or the use of a turbulent closure scheme.
% 
@@ -665,55 +649,55 @@
\includegraphics[width=0.90\textwidth]{Fig_npc}
\caption{ \protect\label{fig:npc}
Example of an unstable density profile treated by the non penetrative
convective adjustment algorithm. $1^{st}$ step: the initial profile is checked from
the surface to the bottom. It is found to be unstable between levels 3 and 4.
They are mixed. The resulting $\rho$ is still larger than $\rho$(5): levels 3 to 5
are mixed. The resulting $\rho$ is still larger than $\rho$(6): levels 3 to 6 are
mixed. The $1^{st}$ step ends since the density profile is then stable below
the level 3. $2^{nd}$ step: the new $\rho$ profile is checked following the same
procedure as in $1^{st}$ step: levels 2 to 5 are mixed. The new density profile
is checked. It is found stable: end of algorithm.}
+ Example of an unstable density profile treated by the non-penetrative convective adjustment algorithm.
+ $1^{st}$ step: the initial profile is checked from the surface to the bottom.
+ It is found to be unstable between levels 3 and 4.
+ They are mixed.
+ The resulting $\rho$ is still larger than $\rho$(5): levels 3 to 5 are mixed.
+ The resulting $\rho$ is still larger than $\rho$(6): levels 3 to 6 are mixed.
+ The $1^{st}$ step ends since the density profile is then stable below the level 3.
+ $2^{nd}$ step: the new $\rho$ profile is checked following the same procedure as in $1^{st}$ step:
+ levels 2 to 5 are mixed.
+ The new density profile is checked.
+ It is found stable: end of algorithm.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
Options are defined through the \ngn{namzdf} namelist variables.
The nonpenetrative convective adjustment is used when \np{ln\_zdfnpc}\forcode{ = .true.}.
It is applied at each \np{nn\_npc} time step and mixes downwards instantaneously
the statically unstable portion of the water column, but only until the density
structure becomes neutrally stable ($i.e.$ until the mixed portion of the water
column has \textit{exactly} the density of the water just below) \citep{Madec_al_JPO91}.
The associated algorithm is an iterative process used in the following way
(\autoref{fig:npc}): starting from the top of the ocean, the first instability is
found. Assume in the following that the instability is located between levels
$k$ and $k+1$. The temperature and salinity in the two levels are
vertically mixed, conserving the heat and salt contents of the water column.
The new density is then computed by a linear approximation. If the new
density profile is still unstable between levels $k+1$ and $k+2$, levels $k$,
$k+1$ and $k+2$ are then mixed. This process is repeated until stability is
established below the level $k$ (the mixing process can go down to the
ocean bottom). The algorithm is repeated to check if the density profile
between level $k1$ and $k$ is unstable and/or if there is no deeper instability.

This algorithm is significantly different from mixing statically unstable levels
two by two. The latter procedure cannot converge with a finite number
of iterations for some vertical profiles while the algorithm used in \NEMO
converges for any profile in a number of iterations which is less than the
number of vertical levels. This property is of paramount importance as
pointed out by \citet{Killworth1989}: it avoids the existence of permanent
and unrealistic static instabilities at the sea surface. This nonpenetrative
convective algorithm has been proved successful in studies of the deep
water formation in the northwestern Mediterranean Sea
\citep{Madec_al_JPO91, Madec_al_DAO91, Madec_Crepon_Bk91}.

The current implementation has been modified in order to deal with any non linear
equation of seawater (L. Brodeau, personnal communication).
Two main differences have been introduced compared to the original algorithm:
+Options are defined through the \ngn{namzdf} namelist variables.
+The nonpenetrative convective adjustment is used when \np{ln\_zdfnpc}\forcode{ = .true.}.
+It is applied at each \np{nn\_npc} time step and mixes downwards instantaneously the statically unstable portion of
+the water column, but only until the density structure becomes neutrally stable
+($i.e.$ until the mixed portion of the water column has \textit{exactly} the density of the water just below)
+\citep{Madec_al_JPO91}.
+The associated algorithm is an iterative process used in the following way (\autoref{fig:npc}):
+starting from the top of the ocean, the first instability is found.
+Assume in the following that the instability is located between levels $k$ and $k+1$.
+The temperature and salinity in the two levels are vertically mixed, conserving the heat and salt contents of
+the water column.
+The new density is then computed by a linear approximation.
+If the new density profile is still unstable between levels $k+1$ and $k+2$,
+levels $k$, $k+1$ and $k+2$ are then mixed.
+This process is repeated until stability is established below the level $k$
+(the mixing process can go down to the ocean bottom).
+The algorithm is repeated to check if the density profile between levels $k-1$ and $k$ is unstable and/or
+if there is no deeper instability.
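The iterative procedure can be sketched as follows (an illustrative density-only version with uniform level thicknesses; the NEMO routine mixes temperature and salinity, conserving heat and salt contents, and recomputes density):

```python
def npc_adjust(rho, max_iter=100):
    """Mix density downwards wherever rho[k] > rho[k+1] (k = 0 at the surface)."""
    rho = list(rho)
    for _ in range(max_iter):
        stable = True
        k = 0
        while k < len(rho) - 1:
            if rho[k] > rho[k + 1]:  # instability between levels k and k+1
                stable = False
                j = k + 1
                mixed = sum(rho[k:j + 1]) / (j + 1 - k)
                # extend the mixed patch downwards until it is stable below
                while j + 1 < len(rho) and mixed > rho[j + 1]:
                    j += 1
                    mixed = sum(rho[k:j + 1]) / (j + 1 - k)
                rho[k:j + 1] = [mixed] * (j + 1 - k)
                k = j
            else:
                k += 1
        if stable:  # whole profile checked without finding an instability
            break
    return rho

print(npc_adjust([25.0, 25.5, 26.2, 26.0, 26.1, 26.4]))
```

The outer loop restarts the check from the top after each pass, mimicking the $2^{nd}$ step of \autoref{fig:npc}; the result is monotonically non-decreasing with depth and conserves the column mean.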
+
+This algorithm is significantly different from mixing statically unstable levels two by two.
+The latter procedure cannot converge with a finite number of iterations for some vertical profiles while
+the algorithm used in \NEMO converges for any profile in a number of iterations which is less than
+the number of vertical levels.
+This property is of paramount importance as pointed out by \citet{Killworth1989}:
+it avoids the existence of permanent and unrealistic static instabilities at the sea surface.
+This nonpenetrative convective algorithm has been proved successful in studies of the deep water formation in
+the northwestern Mediterranean Sea \citep{Madec_al_JPO91, Madec_al_DAO91, Madec_Crepon_Bk91}.
+
+The current implementation has been modified in order to deal with any non-linear equation of state of seawater
+(L. Brodeau, personal communication).
+Two main differences have been introduced compared to the original algorithm:
$(i)$ the stability is now checked using the Brunt-V\"{a}is\"{a}l\"{a} frequency
(not the the difference in potential density) ;
$(ii)$ when two levels are found unstable, their thermal and haline expansion coefficients
are vertically mixed in the same way their temperature and salinity has been mixed.
These two modifications allow the algorithm to perform properly and accurately
with TEOS10 or EOS80 without having to recompute the expansion coefficients at each
mixing iteration.
+(not the difference in potential density);
+$(ii)$ when two levels are found unstable, their thermal and haline expansion coefficients are vertically mixed in
+the same way their temperature and salinity have been mixed.
+These two modifications allow the algorithm to perform properly and accurately with TEOS-10 or EOS-80 without
+having to recompute the expansion coefficients at each mixing iteration.
% 
@@ -729,23 +713,24 @@
Options are defined through the \ngn{namzdf} namelist variables.
The enhanced vertical diffusion parameterisation is used when \np{ln\_zdfevd}\forcode{ = .true.}.
In this case, the vertical eddy mixing coefficients are assigned very large values
(a typical value is $10\;m^2s^{1})$ in regions where the stratification is unstable
($i.e.$ when $N^2$ the BruntVais\"{a}l\"{a} frequency is negative)
\citep{Lazar_PhD97, Lazar_al_JPO99}. This is done either on tracers only
(\np{nn\_evdm}\forcode{ = 0}) or on both momentum and tracers (\np{nn\_evdm}\forcode{ = 1}).

In practice, where $N^2\leq 10^{12}$, $A_T^{vT}$ and $A_T^{vS}$, and
if \np{nn\_evdm}\forcode{ = 1}, the four neighbouring $A_u^{vm} \;\mbox{and}\;A_v^{vm}$
values also, are set equal to the namelist parameter \np{rn\_avevd}. A typical value
for $rn\_avevd$ is between 1 and $100~m^2.s^{1}$. This parameterisation of
convective processes is less time consuming than the convective adjustment
algorithm presented above when mixing both tracers and momentum in the
case of static instabilities. It requires the use of an implicit time stepping on
vertical diffusion terms (i.e. \np{ln\_zdfexp}\forcode{ = .false.}).

Note that the stability test is performed on both \textit{before} and \textit{now}
values of $N^2$. This removes a potential source of divergence of odd and
even time step in a leapfrog environment \citep{Leclair_PhD2010} (see \autoref{sec:STP_mLF}).
+The enhanced vertical diffusion parameterisation is used when \np{ln\_zdfevd}\forcode{ = .true.}.
+In this case, the vertical eddy mixing coefficients are assigned very large values
+(a typical value is $10\;m^2s^{-1}$) in regions where the stratification is unstable
+($i.e.$ when $N^2$, the Brunt-Vais\"{a}l\"{a} frequency, is negative) \citep{Lazar_PhD97, Lazar_al_JPO99}.
+This is done either on tracers only (\np{nn\_evdm}\forcode{ = 0}) or
+on both momentum and tracers (\np{nn\_evdm}\forcode{ = 1}).
+
+In practice, wherever $N^2\leq 10^{-12}$, $A_T^{vT}$ and $A_T^{vS}$ (and, if \np{nn\_evdm}\forcode{ = 1},
+the four neighbouring $A_u^{vm} \;\mbox{and}\;A_v^{vm}$ values as well) are set equal to
+the namelist parameter \np{rn\_avevd}.
+A typical value for \np{rn\_avevd} is between 1 and $100~m^2.s^{-1}$.
+This parameterisation of convective processes is less time consuming than
+the convective adjustment algorithm presented above when mixing both tracers and
+momentum in the case of static instabilities.
+It requires the use of an implicit time stepping on vertical diffusion terms
+(i.e. \np{ln\_zdfexp}\forcode{ = .false.}).
+
+Note that the stability test is performed on both \textit{before} and \textit{now} values of $N^2$.
+This removes a potential source of divergence of odd and even time steps in
+a leapfrog environment \citep{Leclair_PhD2010} (see \autoref{sec:STP_mLF}).
% 
@@ -755,24 +740,23 @@
\label{subsec:ZDF_tcs}
The turbulent closure scheme presented in \autoref{subsec:ZDF_tke} and \autoref{subsec:ZDF_gls}
(\key{zdftke} or \key{zdftke} is defined) in theory solves the problem of statically
unstable density profiles. In such a case, the term corresponding to the
destruction of turbulent kinetic energy through stratification in \autoref{eq:zdftke_e}
or \autoref{eq:zdfgls_e} becomes a source term, since $N^2$ is negative.
It results in large values of $A_T^{vT}$ and $A_T^{vT}$, and also the four neighbouring
$A_u^{vm} {and}\;A_v^{vm}$ (up to $1\;m^2s^{1})$. These large values
restore the static stability of the water column in a way similar to that of the
enhanced vertical diffusion parameterisation (\autoref{subsec:ZDF_evd}). However,
in the vicinity of the sea surface (first ocean layer), the eddy coefficients
computed by the turbulent closure scheme do not usually exceed $10^{2}m.s^{1}$,
because the mixing length scale is bounded by the distance to the sea surface.
It can thus be useful to combine the enhanced vertical
diffusion with the turbulent closure scheme, $i.e.$ setting the \np{ln\_zdfnpc}
namelist parameter to true and defining the turbulent closure CPP key all together.

The KPP turbulent closure scheme already includes enhanced vertical diffusion
in the case of convection, as governed by the variables $bvsqcon$ and $difcon$
found in \mdl{zdfkpp}, therefore \np{ln\_zdfevd}\forcode{ = .false.} should be used with the KPP
scheme. %gm% + one word on non local flux with KPP scheme trakpp.F90 module...
+The turbulent closure scheme presented in \autoref{subsec:ZDF_tke} and \autoref{subsec:ZDF_gls}
+(\key{zdftke} or \key{zdfgls} is defined) in theory solves the problem of statically unstable density profiles.
+In such a case, the term corresponding to the destruction of turbulent kinetic energy through stratification in
+\autoref{eq:zdftke_e} or \autoref{eq:zdfgls_e} becomes a source term, since $N^2$ is negative.
+It results in large values of $A_T^{vT}$ and $A_T^{vS}$, and also the four neighbouring
+$A_u^{vm} \;\mbox{and}\;A_v^{vm}$ (up to $1\;m^2s^{-1}$).
+These large values restore the static stability of the water column in a way similar to that of
+the enhanced vertical diffusion parameterisation (\autoref{subsec:ZDF_evd}).
+However, in the vicinity of the sea surface (first ocean layer), the eddy coefficients computed by
+the turbulent closure scheme do not usually exceed $10^{-2}~m^2.s^{-1}$,
+because the mixing length scale is bounded by the distance to the sea surface.
+It can thus be useful to combine the enhanced vertical diffusion with the turbulent closure scheme,
+$i.e.$ setting the \np{ln\_zdfnpc} namelist parameter to true and
+defining the turbulent closure CPP key at the same time.
+
+The KPP turbulent closure scheme already includes enhanced vertical diffusion in the case of convection,
+as governed by the variables $bvsqcon$ and $difcon$ found in \mdl{zdfkpp},
+therefore \np{ln\_zdfevd}\forcode{ = .false.} should be used with the KPP scheme.
+% gm% + one word on non local flux with KPP scheme trakpp.F90 module...
% ================================================================
@@ -788,12 +772,11 @@
Options are defined through the \ngn{namzdf\_ddm} namelist variables.
Double diffusion occurs when relatively warm, salty water overlies cooler, fresher
water, or vice versa. The former condition leads to salt fingering and the latter
to diffusive convection. Doublediffusive phenomena contribute to diapycnal
mixing in extensive regions of the ocean. \citet{Merryfield1999} include a
parameterisation of such phenomena in a global ocean model and show that
it leads to relatively minor changes in circulation but exerts significant regional
influences on temperature and salinity. This parameterisation has been
introduced in \mdl{zdfddm} module and is controlled by the \key{zdfddm} CPP key.
+Double diffusion occurs when relatively warm, salty water overlies cooler, fresher water, or vice versa.
+The former condition leads to salt fingering and the latter to diffusive convection.
+Double-diffusive phenomena contribute to diapycnal mixing in extensive regions of the ocean.
+\citet{Merryfield1999} include a parameterisation of such phenomena in a global ocean model and show that
+it leads to relatively minor changes in circulation but exerts significant regional influences on
+temperature and salinity.
+This parameterisation has been introduced in the \mdl{zdfddm} module and is controlled by the \key{zdfddm} CPP key.
Diapycnal mixing of S and T are described by diapycnal diffusion coefficients
@@ -802,10 +785,11 @@
&A^{vS} = A_o^{vS}+A_f^{vS}+A_d^{vS}
\end{align*}
where subscript $f$ represents mixing by salt fingering, $d$ by diffusive convection,
and $o$ by processes other than double diffusion. The rates of doublediffusive
mixing depend on the buoyancy ratio $R_\rho = \alpha \partial_z T / \beta \partial_z S$,
where $\alpha$ and $\beta$ are coefficients of thermal expansion and saline
contraction (see \autoref{subsec:TRA_eos}). To represent mixing of $S$ and $T$ by salt
fingering, we adopt the diapycnal diffusivities suggested by Schmitt (1981):
+where subscript $f$ represents mixing by salt fingering, $d$ by diffusive convection,
+and $o$ by processes other than double diffusion.
+The rates of double-diffusive mixing depend on the buoyancy ratio
+$R_\rho = \alpha \partial_z T / \beta \partial_z S$, where $\alpha$ and $\beta$ are coefficients of
+thermal expansion and saline contraction (see \autoref{subsec:TRA_eos}).
+To represent mixing of $S$ and $T$ by salt fingering,
+we adopt the diapycnal diffusivities suggested by Schmitt (1981):
\begin{align} \label{eq:zdfddm_f}
A_f^{vS} &= \begin{cases}
@@ -821,20 +805,19 @@
\includegraphics[width=0.99\textwidth]{Fig_zdfddm}
\caption{ \protect\label{fig:zdfddm}
From \citet{Merryfield1999} : (a) Diapycnal diffusivities $A_f^{vT}$
and $A_f^{vS}$ for temperature and salt in regions of salt fingering. Heavy
curves denote $A^{\ast v} = 10^{3}~m^2.s^{1}$ and thin curves
$A^{\ast v} = 10^{4}~m^2.s^{1}$ ; (b) diapycnal diffusivities $A_d^{vT}$ and
$A_d^{vS}$ for temperature and salt in regions of diffusive convection. Heavy
curves denote the Federov parameterisation and thin curves the Kelley
parameterisation. The latter is not implemented in \NEMO. }
+ From \citet{Merryfield1999} :
+ (a) Diapycnal diffusivities $A_f^{vT}$ and $A_f^{vS}$ for temperature and salt in regions of salt fingering.
+ Heavy curves denote $A^{\ast v} = 10^{-3}~m^2.s^{-1}$ and thin curves $A^{\ast v} = 10^{-4}~m^2.s^{-1}$;
+ (b) diapycnal diffusivities $A_d^{vT}$ and $A_d^{vS}$ for temperature and salt in regions of diffusive convection.
+ Heavy curves denote the Fedorov parameterisation and thin curves the Kelley parameterisation.
+ The latter is not implemented in \NEMO. }
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
The factor 0.7 in \autoref{eq:zdfddm_f_T} reflects the measured ratio
$\alpha F_T /\beta F_S \approx 0.7$ of buoyancy flux of heat to buoyancy
flux of salt ($e.g.$, \citet{McDougall_Taylor_JMR84}). Following \citet{Merryfield1999},
we adopt $R_c = 1.6$, $n = 6$, and $A^{\ast v} = 10^{4}~m^2.s^{1}$.

To represent mixing of S and T by diffusive layering, the diapycnal diffusivities suggested by Federov (1988) is used:
+The factor 0.7 in \autoref{eq:zdfddm_f_T} reflects the measured ratio $\alpha F_T /\beta F_S \approx 0.7$ of
+buoyancy flux of heat to buoyancy flux of salt ($e.g.$, \citet{McDougall_Taylor_JMR84}).
+Following \citet{Merryfield1999}, we adopt $R_c = 1.6$, $n = 6$, and $A^{\ast v} = 10^{-4}~m^2.s^{-1}$.
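The salt-fingering branch can be sketched numerically. Since the piecewise equation body is truncated in this excerpt, the functional form below ($A_f^{vS} = A^{\ast v}/(1+(R_\rho/R_c)^n)$ in stably stratified, finger-favourable conditions, zero otherwise, with $A_f^{vT} = 0.7\,A_f^{vS}$) is a hedged reconstruction from the surrounding text and the quoted Schmitt (1981) parameters:

```python
def salt_finger_diffusivities(r_rho, n2, a_star=1.0e-4, r_c=1.6, n=6):
    # Hedged sketch of the Schmitt (1981) salt-fingering diffusivities with
    # the Merryfield et al. parameters R_c = 1.6, n = 6, A* = 1e-4 m2/s.
    # Fingering requires stable stratification (N^2 > 0) and R_rho > 1.
    if n2 > 0.0 and r_rho > 1.0:
        a_vs = a_star / (1.0 + (r_rho / r_c) ** n)
    else:
        a_vs = 0.0
    # The 0.7 factor is the measured buoyancy-flux ratio quoted in the text.
    return 0.7 * a_vs, a_vs  # (A_f^{vT}, A_f^{vS})
```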
+
+To represent mixing of S and T by diffusive layering, the diapycnal diffusivities suggested by
+Fedorov (1988) are used:
\begin{align} \label{eq:zdfddm_d}
A_d^{vT} &= \begin{cases}
@@ -853,9 +836,9 @@
\end{align}
The dependencies of \autoref{eq:zdfddm_f} to \autoref{eq:zdfddm_d_S} on $R_\rho$
are illustrated in \autoref{fig:zdfddm}. Implementing this requires computing
$R_\rho$ at each grid point on every time step. This is done in \mdl{eosbn2} at the
same time as $N^2$ is computed. This avoids duplication in the computation of
$\alpha$ and $\beta$ (which is usually quite expensive).
+The dependencies of \autoref{eq:zdfddm_f} to \autoref{eq:zdfddm_d_S} on $R_\rho$ are illustrated in
+\autoref{fig:zdfddm}.
+Implementing this requires computing $R_\rho$ at each grid point on every time step.
+This is done in \mdl{eosbn2} at the same time as $N^2$ is computed.
+This avoids duplication in the computation of $\alpha$ and $\beta$ (which is usually quite expensive).
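The shared computation of $R_\rho$ alongside $N^2$ can be illustrated with a one-level sketch. The linearised form of $N^2$ used here is an assumption for the example; only the definitions of $R_\rho$, $\alpha$ and $\beta$ come from the text:

```python
def buoyancy_ratio(alpha, beta, dTdz, dSdz, g=9.81):
    # Both quantities reuse the (expensive) alpha and beta evaluations,
    # which is the point of computing R_rho together with N^2 in eosbn2.
    n2 = g * (alpha * dTdz - beta * dSdz)          # linearised N^2 (assumed form)
    r_rho = (alpha * dTdz) / (beta * dSdz)         # buoyancy ratio from the text
    return r_rho, n2
```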
% ================================================================
@@ -870,51 +853,48 @@
%
Options to define the top and bottom friction are defined through the \ngn{nambfr} namelist variables.
The bottom friction represents the friction generated by the bathymetry.
The top friction represents the friction generated by the ice shelf/ocean interface.
As the friction processes at the top and bottom are treated in similar way,
+Options to define the top and bottom friction are defined through the \ngn{nambfr} namelist variables.
+The bottom friction represents the friction generated by the bathymetry.
+The top friction represents the friction generated by the ice shelf/ocean interface.
+As the friction processes at the top and bottom are treated in a similar way,
only the bottom friction is described in detail below.
Both the surface momentum flux (wind stress) and the bottom momentum
flux (bottom friction) enter the equations as a condition on the vertical
diffusive flux. For the bottom boundary layer, one has:
+Both the surface momentum flux (wind stress) and the bottom momentum flux (bottom friction) enter the equations as
+a condition on the vertical diffusive flux.
+For the bottom boundary layer, one has:
\begin{equation} \label{eq:zdfbfr_flux}
A^{vm} \left( \partial {\textbf U}_h / \partial z \right) = {{\cal F}}_h^{\textbf U}
\end{equation}
where ${\cal F}_h^{\textbf U}$ is represents the downward flux of horizontal momentum
outside the logarithmic turbulent boundary layer (thickness of the order of
1~m in the ocean). How ${\cal F}_h^{\textbf U}$ influences the interior depends on the
vertical resolution of the model near the bottom relative to the Ekman layer
depth. For example, in order to obtain an Ekman layer depth
$d = \sqrt{2\;A^{vm}} / f = 50$~m, one needs a vertical diffusion coefficient
$A^{vm} = 0.125$~m$^2$s$^{1}$ (for a Coriolis frequency
$f = 10^{4}$~m$^2$s$^{1}$). With a background diffusion coefficient
$A^{vm} = 10^{4}$~m$^2$s$^{1}$, the Ekman layer depth is only 1.4~m.
When the vertical mixing coefficient is this small, using a flux condition is
equivalent to entering the viscous forces (either wind stress or bottom friction)
as a body force over the depth of the top or bottom model layer. To illustrate
this, consider the equation for $u$ at $k$, the last ocean level:
+where ${\cal F}_h^{\textbf U}$ represents the downward flux of horizontal momentum outside
+the logarithmic turbulent boundary layer (thickness of the order of 1~m in the ocean).
+How ${\cal F}_h^{\textbf U}$ influences the interior depends on the vertical resolution of the model near
+the bottom relative to the Ekman layer depth.
+For example, in order to obtain an Ekman layer depth $d = \sqrt{2\;A^{vm}/f} = 50$~m,
+one needs a vertical diffusion coefficient $A^{vm} = 0.125$~m$^2$s$^{-1}$
+(for a Coriolis frequency $f = 10^{-4}$~s$^{-1}$).
+With a background diffusion coefficient $A^{vm} = 10^{-4}$~m$^2$s$^{-1}$, the Ekman layer depth is only 1.4~m.
+When the vertical mixing coefficient is this small, using a flux condition is equivalent to
+entering the viscous forces (either wind stress or bottom friction) as a body force over the depth of the top or
+bottom model layer.
+To illustrate this, consider the equation for $u$ at $k$, the last ocean level:
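The two worked Ekman depths above can be checked directly:

```python
import math

def ekman_depth(avm, f=1.0e-4):
    # d = sqrt(2 A^vm / f): 50 m for A^vm = 0.125 m2/s and f = 1e-4 s^-1,
    # about 1.4 m for the background value A^vm = 1e-4 m2/s.
    return math.sqrt(2.0 * avm / f)
```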
\begin{equation} \label{eq:zdfbfr_flux2}
\frac{\partial u_k}{\partial t} = \frac{1}{e_{3u}} \left[ \frac{A_{uw}^{vm}}{e_{3uw}} \delta_{k+1/2}\;[u] - {\cal F}^u_h \right] \approx - \frac{{\cal F}^u_{h}}{e_{3u}}
\end{equation}
If the bottom layer thickness is 200~m, the Ekman transport will
be distributed over that depth. On the other hand, if the vertical resolution
is high (1~m or less) and a turbulent closure model is used, the turbulent
Ekman layer will be represented explicitly by the model. However, the
logarithmic layer is never represented in current primitive equation model
applications: it is \emph{necessary} to parameterize the flux ${\cal F}^u_h $.
Two choices are available in \NEMO: a linear and a quadratic bottom friction.
Note that in both cases, the rotation between the interior velocity and the
bottom friction is neglected in the present release of \NEMO.

In the code, the bottom friction is imposed by adding the trend due to the bottom
friction to the general momentum trend in \mdl{dynbfr}. For the timesplit surface
pressure gradient algorithm, the momentum trend due to the barotropic component
needs to be handled separately. For this purpose it is convenient to compute and
store coefficients which can be simply combined with bottom velocities and geometric
values to provide the momentum trend due to bottom friction.
These coefficients are computed in \mdl{zdfbfr} and generally take the form
$c_b^{\textbf U}$ where:
+If the bottom layer thickness is 200~m, the Ekman transport will be distributed over that depth.
+On the other hand, if the vertical resolution is high (1~m or less) and a turbulent closure model is used,
+the turbulent Ekman layer will be represented explicitly by the model.
+However, the logarithmic layer is never represented in current primitive equation model applications:
+it is \emph{necessary} to parameterize the flux ${\cal F}^u_h $.
+Two choices are available in \NEMO: a linear and a quadratic bottom friction.
+Note that in both cases, the rotation between the interior velocity and the bottom friction is neglected in
+the present release of \NEMO.
+
+In the code, the bottom friction is imposed by adding the trend due to the bottom friction to
+the general momentum trend in \mdl{dynbfr}.
+For the time-split surface pressure gradient algorithm, the momentum trend due to
+the barotropic component needs to be handled separately.
+For this purpose it is convenient to compute and store coefficients which can be simply combined with
+bottom velocities and geometric values to provide the momentum trend due to bottom friction.
+These coefficients are computed in \mdl{zdfbfr} and generally take the form $c_b^{\textbf U}$ where:
\begin{equation} \label{eq:zdfbfr_bdef}
\frac{\partial {\textbf U_h}}{\partial t} =
@@ -929,27 +909,22 @@
\label{subsec:ZDF_bfr_linear}
The linear bottom friction parameterisation (including the special case
of a freeslip condition) assumes that the bottom friction
is proportional to the interior velocity (i.e. the velocity of the last
model level):
+The linear bottom friction parameterisation (including the special case of a free-slip condition) assumes that
+the bottom friction is proportional to the interior velocity (i.e. the velocity of the last model level):
\begin{equation} \label{eq:zdfbfr_linear}
{\cal F}_h^\textbf{U} = \frac{A^{vm}}{e_3} \; \frac{\partial \textbf{U}_h}{\partial k} = r \; \textbf{U}_h^b
\end{equation}
where $r$ is a friction coefficient expressed in ms$^{1}$.
This coefficient is generally estimated by setting a typical decay time
$\tau$ in the deep ocean,
and setting $r = H / \tau$, where $H$ is the ocean depth. Commonly accepted
values of $\tau$ are of the order of 100 to 200 days \citep{Weatherly_JMR84}.
A value $\tau^{1} = 10^{7}$~s$^{1}$ equivalent to 115 days, is usually used
in quasigeostrophic models. One may consider the linear friction as an
approximation of quadratic friction, $r \approx 2\;C_D\;U_{av}$ (\citet{Gill1982},
Eq. 9.6.6). For example, with a drag coefficient $C_D = 0.002$, a typical speed
of tidal currents of $U_{av} =0.1$~m\;s$^{1}$, and assuming an ocean depth
$H = 4000$~m, the resulting friction coefficient is $r = 4\;10^{4}$~m\;s$^{1}$.
This is the default value used in \NEMO. It corresponds to a decay time scale
of 115~days. It can be changed by specifying \np{rn\_bfri1} (namelist parameter).

For the linear friction case the coefficients defined in the general
expression \autoref{eq:zdfbfr_bdef} are:
+where $r$ is a friction coefficient expressed in m\;s$^{-1}$.
+This coefficient is generally estimated by setting a typical decay time $\tau$ in the deep ocean,
+and setting $r = H / \tau$, where $H$ is the ocean depth.
+Commonly accepted values of $\tau$ are of the order of 100 to 200 days \citep{Weatherly_JMR84}.
+A value $\tau^{-1} = 10^{-7}$~s$^{-1}$, equivalent to 115 days, is usually used in quasi-geostrophic models.
+One may consider the linear friction as an approximation of quadratic friction, $r \approx 2\;C_D\;U_{av}$
+(\citet{Gill1982}, Eq. 9.6.6).
+For example, with a drag coefficient $C_D = 0.002$, a typical speed of tidal currents of $U_{av} =0.1$~m\;s$^{-1}$,
+and assuming an ocean depth $H = 4000$~m, the resulting friction coefficient is $r = 4\;10^{-4}$~m\;s$^{-1}$.
+This is the default value used in \NEMO. It corresponds to a decay time scale of 115~days.
+It can be changed by specifying \np{rn\_bfri1} (namelist parameter).
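The worked example above (quadratic-friction approximation and the implied decay time) can be sketched as:

```python
def linear_friction_coeff(c_d, u_av):
    # r ~ 2 C_D U_av, the linear approximation of quadratic friction
    # (Gill 1982, Eq. 9.6.6) quoted in the text.
    return 2.0 * c_d * u_av

def decay_time(H, r):
    # tau = H / r, the spin-down time implied by depth H and coefficient r.
    return H / r
```

With $C_D = 0.002$ and $U_{av} = 0.1$~m\;s$^{-1}$ this gives the NEMO default $r = 4\,10^{-4}$~m\;s$^{-1}$, and with $H = 4000$~m a decay time of $10^7$~s, i.e. about 115 days.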
+
+For the linear friction case the coefficients defined in the general expression \autoref{eq:zdfbfr_bdef} are:
\begin{equation} \label{eq:zdfbfr_linbfr_b}
\begin{split}
@@ -958,12 +933,13 @@
\end{split}
\end{equation}
When \np{nn\_botfr}\forcode{ = 1}, the value of $r$ used is \np{rn\_bfri1}.
Setting \np{nn\_botfr}\forcode{ = 0} is equivalent to setting $r=0$ and leads to a freeslip
bottom boundary condition. These values are assigned in \mdl{zdfbfr}.
From v3.2 onwards there is support for local enhancement of these values
via an externally defined 2D mask array (\np{ln\_bfr2d}\forcode{ = .true.}) given
in the \ifile{bfr\_coef} input NetCDF file. The mask values should vary from 0 to 1.
Locations with a nonzero mask value will have the friction coefficient increased
by $mask\_value$*\np{rn\_bfrien}*\np{rn\_bfri1}.
+When \np{nn\_botfr}\forcode{ = 1}, the value of $r$ used is \np{rn\_bfri1}.
+Setting \np{nn\_botfr}\forcode{ = 0} is equivalent to setting $r=0$ and
+leads to a free-slip bottom boundary condition.
+These values are assigned in \mdl{zdfbfr}.
+From v3.2 onwards there is support for local enhancement of these values via an externally defined 2D mask array
+(\np{ln\_bfr2d}\forcode{ = .true.}) given in the \ifile{bfr\_coef} input NetCDF file.
+The mask values should vary from 0 to 1.
+Locations with a nonzero mask value will have the friction coefficient increased by
+$mask\_value$*\np{rn\_bfrien}*\np{rn\_bfri1}.
% 
@@ -973,24 +949,20 @@
\label{subsec:ZDF_bfr_nonlinear}
The nonlinear bottom friction parameterisation assumes that the bottom
friction is quadratic:
+The nonlinear bottom friction parameterisation assumes that the bottom friction is quadratic:
\begin{equation} \label{eq:zdfbfr_nonlinear}
{\cal F}_h^\textbf{U} = \frac{A^{vm}}{e_3 }\frac{\partial \textbf {U}_h
}{\partial k}=C_D \;\sqrt {u_b ^2+v_b ^2+e_b } \;\; \textbf {U}_h^b
\end{equation}
where $C_D$ is a drag coefficient, and $e_b $ a bottom turbulent kinetic energy
due to tides, internal waves breaking and other short time scale currents.
A typical value of the drag coefficient is $C_D = 10^{3} $. As an example,
the CME experiment \citep{Treguier_JGR92} uses $C_D = 10^{3}$ and
$e_b = 2.5\;10^{3}$m$^2$\;s$^{2}$, while the FRAM experiment \citep{Killworth1992}
uses $C_D = 1.4\;10^{3}$ and $e_b =2.5\;\;10^{3}$m$^2$\;s$^{2}$.
The CME choices have been set as default values (\np{rn\_bfri2} and \np{rn\_bfeb2}
namelist parameters).

As for the linear case, the bottom friction is imposed in the code by
adding the trend due to the bottom friction to the general momentum trend
in \mdl{dynbfr}.
For the nonlinear friction case the terms
computed in \mdl{zdfbfr} are:
+where $C_D$ is a drag coefficient, and $e_b $ a bottom turbulent kinetic energy due to tides,
+internal waves breaking and other short time scale currents.
+A typical value of the drag coefficient is $C_D = 10^{-3}$.
+As an example, the CME experiment \citep{Treguier_JGR92} uses $C_D = 10^{-3}$ and
+$e_b = 2.5\;10^{-3}$~m$^2$\;s$^{-2}$, while the FRAM experiment \citep{Killworth1992} uses $C_D = 1.4\;10^{-3}$ and
+$e_b = 2.5\;10^{-3}$~m$^2$\;s$^{-2}$.
+The CME choices have been set as default values (\np{rn\_bfri2} and \np{rn\_bfeb2} namelist parameters).
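The quadratic friction term ${\cal F}_h^\textbf{U} = C_D \sqrt{u_b^2+v_b^2+e_b}\;\textbf{U}_h^b$ can be sketched with the CME default values:

```python
import math

def quadratic_bottom_friction(u_b, v_b, c_d=1.0e-3, e_b=2.5e-3):
    # F = C_D * sqrt(u_b^2 + v_b^2 + e_b) * (u_b, v_b), where e_b is the
    # bottom turbulent kinetic energy floor (CME defaults from the text).
    speed = math.sqrt(u_b ** 2 + v_b ** 2 + e_b)
    return c_d * speed * u_b, c_d * speed * v_b
```

Note that the $e_b$ floor keeps the friction non-zero even when the resolved bottom velocity vanishes, which is why the text recommends a low or zero \np{rn\_bfeb2} when tides are explicit.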
+
+As for the linear case, the bottom friction is imposed in the code by adding the trend due to
+the bottom friction to the general momentum trend in \mdl{dynbfr}.
+For the nonlinear friction case the terms computed in \mdl{zdfbfr} are:
\begin{equation} \label{eq:zdfbfr_nonlinbfr}
\begin{split}
@@ -1000,10 +972,10 @@
\end{equation}
The coefficients that control the strength of the nonlinear bottom friction are
initialised as namelist parameters: $C_D$= \np{rn\_bfri2}, and $e_b$ =\np{rn\_bfeb2}.
Note for applications which treat tides explicitly a low or even zero value of
\np{rn\_bfeb2} is recommended. From v3.2 onwards a local enhancement of $C_D$ is possible
via an externally defined 2D mask array (\np{ln\_bfr2d}\forcode{ = .true.}). This works in the same way
as for the linear bottom friction case with nonzero masked locations increased by
+The coefficients that control the strength of the nonlinear bottom friction are initialised as namelist parameters:
+$C_D$= \np{rn\_bfri2}, and $e_b$ =\np{rn\_bfeb2}.
+Note for applications which treat tides explicitly a low or even zero value of \np{rn\_bfeb2} is recommended.
+From v3.2 onwards a local enhancement of $C_D$ is possible via an externally defined 2D mask array
+(\np{ln\_bfr2d}\forcode{ = .true.}).
+This works in the same way as for the linear bottom friction case with nonzero masked locations increased by
$mask\_value$*\np{rn\_bfrien}*\np{rn\_bfri2}.
@@ -1015,27 +987,25 @@
\label{subsec:ZDF_bfr_loglayer}
In the nonlinear bottom friction case, the drag coefficient, $C_D$, can be optionally
enhanced using a "law of the wall" scaling. If \np{ln\_loglayer} = .true., $C_D$ is no
longer constant but is related to the thickness of the last wet layer in each column by:

+In the nonlinear bottom friction case, the drag coefficient, $C_D$, can be optionally enhanced using
+a ``law of the wall'' scaling.
+If \np{ln\_loglayer}\forcode{ = .true.}, $C_D$ is no longer constant but is related to the thickness of
+the last wet layer in each column by:
\begin{equation}
C_D = \left ( {\kappa \over {\rm log}\left ( 0.5e_{3t}/rn\_bfrz0 \right ) } \right )^2
\end{equation}
\noindent where $\kappa$ is the vonKarman constant and \np{rn\_bfrz0} is a roughness
length provided via the namelist.
+\noindent where $\kappa$ is the von-Karman constant and \np{rn\_bfrz0} is a roughness length provided via
+the namelist.
For stability, the drag coefficient is bounded such that it is kept greater than or equal to
the base \np{rn\_bfri2} value and it is not allowed to exceed the value of an additional
namelist parameter: \np{rn\_bfri2\_max}, i.e.:

+the base \np{rn\_bfri2} value and it is not allowed to exceed the value of an additional namelist parameter:
+\np{rn\_bfri2\_max}, i.e.:
\begin{equation}
rn\_bfri2 \leq C_D \leq rn\_bfri2\_max
\end{equation}
\noindent Note also that a loglayer enhancement can also be applied to the top boundary
friction if under iceshelf cavities are in use (\np{ln\_isfcav}\forcode{ = .true.}). In this case, the
relevant namelist parameters are \np{rn\_tfrz0}, \np{rn\_tfri2}
and \np{rn\_tfri2\_max}.
+\noindent Note also that a loglayer enhancement can also be applied to the top boundary friction if
+under iceshelf cavities are in use (\np{ln\_isfcav}\forcode{ = .true.}).
+In this case, the relevant namelist parameters are \np{rn\_tfrz0}, \np{rn\_tfri2} and \np{rn\_tfri2\_max}.
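The bounded law-of-the-wall drag coefficient can be sketched as below. The value of $\kappa$, the default upper bound, and the sample roughness length are assumptions for illustration, not values from the text:

```python
import math

def loglayer_cd(e3t, rn_bfrz0, rn_bfri2=1.0e-3, rn_bfri2_max=0.1, kappa=0.4):
    # C_D = (kappa / log(0.5 * e3t / z0))^2, then clipped to the namelist
    # bounds [rn_bfri2, rn_bfri2_max] as described in the text.
    # kappa = 0.4 and the rn_bfri2_max default are illustrative assumptions.
    cd = (kappa / math.log(0.5 * e3t / rn_bfrz0)) ** 2
    return min(max(cd, rn_bfri2), rn_bfri2_max)
```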
% 
@@ -1045,8 +1015,7 @@
\label{subsec:ZDF_bfr_stability}
Some care needs to exercised over the choice of parameters to ensure that the
implementation of bottom friction does not induce numerical instability. For
the purposes of stability analysis, an approximation to \autoref{eq:zdfbfr_flux2}
is:
+Some care needs to be exercised over the choice of parameters to ensure that the implementation of
+bottom friction does not induce numerical instability.
+For the purposes of stability analysis, an approximation to \autoref{eq:zdfbfr_flux2} is:
\begin{equation} \label{eq:Eqn_bfrstab}
\begin{split}
@@ -1055,5 +1024,5 @@
\end{split}
\end{equation}
\noindent where linear bottom friction and a leapfrog timestep have been assumed.
+\noindent where linear bottom friction and a leapfrog timestep have been assumed.
To ensure that the bottom friction cannot reverse the direction of flow it is necessary to have:
\begin{equation}
@@ -1064,27 +1033,26 @@
r\frac{2\rdt}{e_{3u}} < 1 \qquad \Rightarrow \qquad r < \frac{e_{3u}}{2\rdt}\\
\end{equation}
This same inequality can also be derived in the nonlinear bottom friction case
if a velocity of 1 m.s$^{1}$ is assumed. Alternatively, this criterion can be
rearranged to suggest a minimum bottom box thickness to ensure stability:
+This same inequality can also be derived in the nonlinear bottom friction case if
+a velocity of 1~m.s$^{-1}$ is assumed.
+Alternatively, this criterion can be rearranged to suggest a minimum bottom box thickness to ensure stability:
\begin{equation}
e_{3u} > 2\;r\;\rdt
\end{equation}
\noindent which it may be necessary to impose if partial steps are being used.
For example, if $u = 1$ m.s$^{1}$, $rdt = 1800$ s, $r = 10^{3}$ then
$e_{3u}$ should be greater than 3.6 m. For most applications, with physically
sensible parameters these restrictions should not be of concern. But
caution may be necessary if attempts are made to locally enhance the bottom
friction parameters.
To ensure stability limits are imposed on the bottom friction coefficients both during
initialisation and at each time step. Checks at initialisation are made in \mdl{zdfbfr}
(assuming a 1 m.s$^{1}$ velocity in the nonlinear case).
The number of breaches of the stability criterion are reported as well as the minimum
and maximum values that have been set. The criterion is also checked at each time step,
using the actual velocity, in \mdl{dynbfr}. Values of the bottom friction coefficient are
reduced as necessary to ensure stability; these changes are not reported.
+\noindent which it may be necessary to impose if partial steps are being used.
+For example, if $u = 1$~m.s$^{-1}$, $rdt = 1800$~s and $r = 10^{-3}$, then $e_{3u}$ should be greater than 3.6~m.
+For most applications, with physically sensible parameters these restrictions should not be of concern.
+But caution may be necessary if attempts are made to locally enhance the bottom friction parameters.
+To ensure stability, limits are imposed on the bottom friction coefficients both
+during initialisation and at each time step.
+Checks at initialisation are made in \mdl{zdfbfr} (assuming a 1~m.s$^{-1}$ velocity in the nonlinear case).
+The number of breaches of the stability criterion is reported, as well as
+the minimum and maximum values that have been set.
+The criterion is also checked at each time step, using the actual velocity, in \mdl{dynbfr}.
+Values of the bottom friction coefficient are reduced as necessary to ensure stability;
+these changes are not reported.
Limits on the bottom friction coefficient are not imposed if the user has elected to
handle the bottom friction implicitly (see \autoref{subsec:ZDF_bfr_imp}). The number of potential
breaches of the explicit stability criterion are still reported for information purposes.
+handle the bottom friction implicitly (see \autoref{subsec:ZDF_bfr_imp}).
+The number of potential breaches of the explicit stability criterion is still reported for information purposes.
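The explicit stability criterion and the worked minimum-thickness example can be sketched as:

```python
def min_bottom_thickness(r, rdt):
    # The criterion r * 2*rdt / e3u < 1 rearranged as e3u > 2 r rdt.
    return 2.0 * r * rdt

def bfr_stable(r, rdt, e3u):
    # True when the explicit (leapfrog) bottom friction cannot reverse the flow.
    return e3u > min_bottom_thickness(r, rdt)
```

With $r = 10^{-3}$ and $rdt = 1800$~s this reproduces the 3.6~m threshold quoted in the text.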
% 
@@ -1094,12 +1062,11 @@
\label{subsec:ZDF_bfr_imp}
An optional implicit form of bottom friction has been implemented to improve
model stability. We recommend this option for shelf sea and coastal ocean applications, especially
for splitexplicit time splitting. This option can be invoked by setting \np{ln\_bfrimp}
to \forcode{.true.} in the \textit{nambfr} namelist. This option requires \np{ln\_zdfexp} to be \forcode{.false.}
in the \textit{namzdf} namelist.

This implementation is realised in \mdl{dynzdf\_imp} and \mdl{dynspg\_ts}. In \mdl{dynzdf\_imp}, the
bottom boundary condition is implemented implicitly.
+An optional implicit form of bottom friction has been implemented to improve model stability.
+We recommend this option for shelf sea and coastal ocean applications, especially for split-explicit time splitting.
+This option can be invoked by setting \np{ln\_bfrimp} to \forcode{.true.} in the \textit{nambfr} namelist.
+This option requires \np{ln\_zdfexp} to be \forcode{.false.} in the \textit{namzdf} namelist.
+
+This implementation is realised in \mdl{dynzdf\_imp} and \mdl{dynspg\_ts}. In \mdl{dynzdf\_imp},
+the bottom boundary condition is implemented implicitly.
\begin{equation} \label{eq:dynzdf_bfr}
@@ -1108,17 +1075,17 @@
\end{equation}
where $mbk$ is the layer number of the bottom wet layer. superscript $n+1$ means the velocity used in the
friction formula is to be calculated, so, it is implicit.

If splitexplicit time splitting is used, care must be taken to avoid the double counting of
the bottom friction in the 2D barotropic momentum equations. As NEMO only updates the barotropic
pressure gradient and Coriolis' forcing terms in the 2D barotropic calculation, we need to remove
the bottom friction induced by these two terms which has been included in the 3D momentum trend
and update it with the latest value. On the other hand, the bottom friction contributed by the
other terms (e.g. the advection term, viscosity term) has been included in the 3D momentum equations
and should not be added in the 2D barotropic mode.

The implementation of the implicit bottom friction in \mdl{dynspg\_ts} is done in two steps as the
following:
+where $mbk$ is the layer number of the bottom wet layer.
+Superscript $n+1$ indicates that the velocity used in the friction formula is the one being calculated, i.e. it is treated implicitly.
+
+If split-explicit time splitting is used, care must be taken to avoid the double counting of the bottom friction in
+the 2D barotropic momentum equations.
+As NEMO only updates the barotropic pressure gradient and Coriolis forcing terms in the 2D barotropic calculation,
+we need to remove the bottom friction induced by these two terms, which has been included in the 3D momentum trend,
+and update it with the latest value.
+On the other hand, the bottom friction contributed by the other terms
+(e.g. the advection term, viscosity term) has been included in the 3D momentum equations and
+should not be added in the 2D barotropic mode.
+
+The implementation of the implicit bottom friction in \mdl{dynspg\_ts} is done in two steps, as follows:
\begin{equation} \label{eq:dynspg_ts_bfr1}
@@ -1132,9 +1099,11 @@
\end{equation}
where $\textbf{T}$ is the vertical integrated 3D momentum trend. We assume the leapfrog timestepping
is used here. $\Delta t$ is the barotropic mode time step and $\Delta t_{bc}$ is the baroclinic mode time step.
 $c_{b}$ is the friction coefficient. $\eta$ is the sea surface level calculated in the barotropic loops
while $\eta^{'}$ is the sea surface level used in the 3D baroclinic mode. $\textbf{u}_{b}$ is the bottom
layer horizontal velocity.
+where $\textbf{T}$ is the vertically integrated 3D momentum trend.
+We assume here that leapfrog time stepping is used.
+$\Delta t$ is the barotropic mode time step and $\Delta t_{bc}$ is the baroclinic mode time step.
+$c_{b}$ is the friction coefficient.
+$\eta$ is the sea surface level calculated in the barotropic loops while $\eta^{'}$ is the sea surface level used in
+the 3D baroclinic mode.
+$\textbf{u}_{b}$ is the bottom layer horizontal velocity.
@@ -1148,42 +1117,41 @@
\label{subsec:ZDF_bfr_ts}
When calculating the momentum trend due to bottom friction in \mdl{dynbfr}, the
bottom velocity at the before time step is used. This velocity includes both the
baroclinic and barotropic components which is appropriate when using either the
explicit or filtered surface pressure gradient algorithms (\key{dynspg\_exp} or
\key{dynspg\_flt}). Extra attention is required, however, when using
splitexplicit time stepping (\key{dynspg\_ts}). In this case the free surface
equation is solved with a small time step \np{rn\_rdt}/\np{nn\_baro}, while the three
dimensional prognostic variables are solved with the longer time step
of \np{rn\_rdt} seconds. The trend in the barotropic momentum due to bottom
friction appropriate to this method is that given by the selected parameterisation
($i.e.$ linear or nonlinear bottom friction) computed with the evolving velocities
at each barotropic timestep.

In the case of nonlinear bottom friction, we have elected to partially linearise
the problem by keeping the coefficients fixed throughout the barotropic
timestepping to those computed in \mdl{zdfbfr} using the now timestep.
+When calculating the momentum trend due to bottom friction in \mdl{dynbfr},
+the bottom velocity at the before time step is used.
+This velocity includes both the baroclinic and barotropic components, which is appropriate when
+using either the explicit or filtered surface pressure gradient algorithms
+(\key{dynspg\_exp} or \key{dynspg\_flt}).
+Extra attention is required, however, when using split-explicit time stepping (\key{dynspg\_ts}).
+In this case the free surface equation is solved with a small time step \np{rn\_rdt}/\np{nn\_baro},
+while the three-dimensional prognostic variables are solved with the longer time step of \np{rn\_rdt} seconds.
+The trend in the barotropic momentum due to bottom friction appropriate to this method is that given by
+the selected parameterisation ($i.e.$ linear or non-linear bottom friction) computed with
+the evolving velocities at each barotropic timestep.
+
+In the case of non-linear bottom friction, we have elected to partially linearise the problem by
+keeping the coefficients fixed throughout the barotropic timestepping to those computed in
+\mdl{zdfbfr} using the now timestep.
This decision allows an efficient use of the $c_b^{\vect{U}}$ coefficients to:
\begin{enumerate}
\item On entry to \rou{dyn\_spg\_ts}, remove the contribution of the before
barotropic velocity to the bottom friction component of the vertically
integrated momentum trend. Note the same stability check that is carried out
on the bottom friction coefficient in \mdl{dynbfr} has to be applied here to
ensure that the trend removed matches that which was added in \mdl{dynbfr}.
\item At each barotropic step, compute the contribution of the current barotropic
velocity to the trend due to bottom friction. Add this contribution to the
vertically integrated momentum trend. This contribution is handled implicitly which
eliminates the need to impose a stability criteria on the values of the bottom friction
coefficient within the barotropic loop.
+\item On entry to \rou{dyn\_spg\_ts}, remove the contribution of the before barotropic velocity to
+ the bottom friction component of the vertically integrated momentum trend.
+ Note the same stability check that is carried out on the bottom friction coefficient in \mdl{dynbfr} has to
+ be applied here to ensure that the trend removed matches that which was added in \mdl{dynbfr}.
+\item At each barotropic step, compute the contribution of the current barotropic velocity to
+ the trend due to bottom friction.
+ Add this contribution to the vertically integrated momentum trend.
+ This contribution is handled implicitly, which eliminates the need to impose a stability criterion on
+ the values of the bottom friction coefficient within the barotropic loop.
\end{enumerate}
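The two steps above can be sketched as follows (hypothetical names and simplifications: the bottom velocity is approximated by the barotropic velocity, and the drag coefficient `cb_u` is assumed to be stored as a negative number acting as a linear drag):

```python
# Sketch only, not the dyn_spg_ts code.
def ts_bottom_friction(ubar, trend_with_fr, ubar_before, cb_u, H, dt_bt, n_baro):
    # Step 1: on entry, remove the friction contribution of the 'before'
    # barotropic velocity that was already folded into the 3D trend.
    trend = trend_with_fr - cb_u * ubar_before / H
    for _ in range(n_baro):
        # Step 2: at each barotropic step, add the friction of the evolving
        # barotropic velocity, treated implicitly (no stability restriction).
        ubar = (ubar + dt_bt * trend) / (1.0 - dt_bt * cb_u / H)
    return ubar

# With the trend consisting only of the 'before' friction, the loop reduces
# to a pure implicit decay of the barotropic velocity.
ub = ts_bottom_friction(0.5, trend_with_fr=-1.0e-3 * 0.5 / 50.0,
                        ubar_before=0.5, cb_u=-1.0e-3, H=50.0,
                        dt_bt=30.0, n_baro=40)
```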
Note that the use of an implicit formulation within the barotropic loop
for the bottom friction trend means that any limiting of the bottom friction coefficient
in \mdl{dynbfr} does not adversely affect the solution when using splitexplicit time
splitting. This is because the major contribution to bottom friction is likely to come from
the barotropic component which uses the unrestricted value of the coefficient. However, if the
limiting is thought to be having a major effect (a more likely prospect in coastal and shelf seas
applications) then the fully implicit form of the bottom friction should be used (see \autoref{subsec:ZDF_bfr_imp} )
+Note that the use of an implicit formulation within the barotropic loop for the bottom friction trend means that
+any limiting of the bottom friction coefficient in \mdl{dynbfr} does not adversely affect the solution when
+using split-explicit time splitting.
+This is because the major contribution to bottom friction is likely to come from the barotropic component which
+uses the unrestricted value of the coefficient.
+However, if the limiting is thought to be having a major effect
+(a more likely prospect in coastal and shelf-sea applications) then
+the fully implicit form of the bottom friction should be used (see \autoref{subsec:ZDF_bfr_imp}),
which can be selected by setting \np{ln\_bfrimp} $=$ \forcode{.true.}.
@@ -1193,6 +1161,7 @@
\end{equation}
where $\bar U$ is the barotropic velocity, $H_e$ is the full depth (including sea surface height),
$c_b^u$ is the bottom friction coefficient as calculated in \rou{zdf\_bfr} and $RHS$ represents
all the components to the vertically integrated momentum trend except for that due to bottom friction.
+$c_b^u$ is the bottom friction coefficient as calculated in \rou{zdf\_bfr} and
+$RHS$ represents all the contributions to the vertically integrated momentum trend except for
+that due to bottom friction.
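The implicit equation above can be rearranged for the new-time barotropic velocity. As a sketch (assuming a single implicit step of length $\Delta t$ and a drag coefficient $c_b^u \le 0$; the exact time indexing in \mdl{dynspg\_ts} may differ):

```latex
\begin{equation*}
  \bar{U}^{n+1} = \frac{ \bar{U}^{n} + \Delta t \; RHS }{ 1 - \Delta t \, c_b^u / H_e }
\end{equation*}
```

Since $c_b^u \le 0$, the denominator is always $\ge 1$, so no stability condition on the bottom friction coefficient arises.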
@@ -1218,27 +1187,26 @@
Options are defined through the \ngn{namzdf\_tmx} namelist variables.
The parameterization of tidal mixing follows the general formulation for
the vertical eddy diffusivity proposed by \citet{St_Laurent_al_GRL02} and
first introduced in an OGCM by \citep{Simmons_al_OM04}.
In this formulation an additional vertical diffusivity resulting from internal tide breaking,
$A^{vT}_{tides}$ is expressed as a function of $E(x,y)$, the energy transfer from barotropic
tides to baroclinic tides :
+The parameterization of tidal mixing follows the general formulation for the vertical eddy diffusivity proposed by
+\citet{St_Laurent_al_GRL02} and first introduced in an OGCM by \citet{Simmons_al_OM04}.
+In this formulation, an additional vertical diffusivity resulting from internal tide breaking,
+$A^{vT}_{tides}$, is expressed as a function of $E(x,y)$,
+the energy transfer from barotropic tides to baroclinic tides:
\begin{equation} \label{eq:Ktides}
A^{vT}_{tides} = q \,\Gamma \,\frac{ E(x,y) \, F(z) }{ \rho \, N^2 }
\end{equation}
where $\Gamma$ is the mixing efficiency, $N$ the BruntVais\"{a}l\"{a} frequency
(see \autoref{subsec:TRA_bn2}), $\rho$ the density, $q$ the tidal dissipation efficiency,
and $F(z)$ the vertical structure function.

The mixing efficiency of turbulence is set by $\Gamma$ (\np{rn\_me} namelist parameter)
and is usually taken to be the canonical value of $\Gamma = 0.2$ (Osborn 1980).
The tidal dissipation efficiency is given by the parameter $q$ (\np{rn\_tfe} namelist parameter)
represents the part of the internal wave energy flux $E(x, y)$ that is dissipated locally,
with the remaining $1q$ radiating away as low mode internal waves and
contributing to the background internal wave field. A value of $q=1/3$ is typically used
\citet{St_Laurent_al_GRL02}.
The vertical structure function $F(z)$ models the distribution of the turbulent mixing in the vertical.
It is implemented as a simple exponential decaying upward away from the bottom,
with a vertical scale of $h_o$ (\np{rn\_htmx} namelist parameter, with a typical value of $500\,m$) \citep{St_Laurent_Nash_DSR04},
+where $\Gamma$ is the mixing efficiency, $N$ the Brunt-V\"{a}is\"{a}l\"{a} frequency (see \autoref{subsec:TRA_bn2}),
+$\rho$ the density, $q$ the tidal dissipation efficiency, and $F(z)$ the vertical structure function.
+
+The mixing efficiency of turbulence is set by $\Gamma$ (\np{rn\_me} namelist parameter) and
+is usually taken to be the canonical value of $\Gamma = 0.2$ \citep{Osborn_JPO80}.
+The tidal dissipation efficiency, given by the parameter $q$ (\np{rn\_tfe} namelist parameter),
+represents the part of the internal wave energy flux $E(x, y)$ that is dissipated locally,
+with the remaining $1-q$ radiating away as low mode internal waves and
+contributing to the background internal wave field.
+A value of $q=1/3$ is typically used \citep{St_Laurent_al_GRL02}.
+The vertical structure function $F(z)$ models the distribution of the turbulent mixing in the vertical.
+It is implemented as a simple exponential, decaying upward away from the bottom,
+with a vertical scale of $h_o$ (\np{rn\_htmx} namelist parameter,
+with a typical value of $500\,m$) \citep{St_Laurent_Nash_DSR04},
\begin{equation} \label{eq:Fz}
F(i,j,k) = \frac{ e^{ -\frac{H+z}{h_o} } }{ h_o \left( 1 - e^{ -\frac{H}{h_o} } \right) }
@@ -1246,23 +1214,22 @@
and is normalized so that the vertical integral over the water column is unity.
The associated vertical viscosity is calculated from the vertical
diffusivity assuming a Prandtl number of 1, $i.e.$ $A^{vm}_{tides}=A^{vT}_{tides}$.
In the limit of $N \rightarrow 0$ (or becoming negative), the vertical diffusivity
is capped at $300\,cm^2/s$ and impose a lower limit on $N^2$ of \np{rn\_n2min}
usually set to $10^{8} s^{2}$. These bounds are usually rarely encountered.

The internal wave energy map, $E(x, y)$ in \autoref{eq:Ktides}, is derived
from a barotropic model of the tides utilizing a parameterization of the
conversion of barotropic tidal energy into internal waves.
The essential goal of the parameterization is to represent the momentum
exchange between the barotropic tides and the unrepresented internal waves
induced by the tidal flow over rough topography in a stratified ocean.
In the current version of \NEMO, the map is built from the output of
+The associated vertical viscosity is calculated from the vertical diffusivity assuming a Prandtl number of 1,
+$i.e.$ $A^{vm}_{tides}=A^{vT}_{tides}$.
+In the limit of $N \rightarrow 0$ (or becoming negative), the vertical diffusivity is capped at $300\,cm^2/s$ and
+a lower limit on $N^2$ of \np{rn\_n2min}, usually set to $10^{-8}\,s^{-2}$, is imposed.
+These bounds are rarely encountered.
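A sketch of this diffusivity, including the structure function, the cap and the floor (function and variable names are hypothetical; parameter values are the typical ones quoted above):

```python
import math

# Sketch only: tidal-mixing diffusivity of eq. (Ktides) with the exponential
# bottom-intensified F(z), the 300 cm^2/s cap and the rn_n2min floor on N^2.
def f_struct(z, H, h0=500.0):
    """Vertical structure F(z); z <= 0 downward, bottom at z = -H;
    normalised so that the column integral is unity."""
    return math.exp(-(H + z) / h0) / (h0 * (1.0 - math.exp(-H / h0)))

def a_vt_tides(E, z, H, N2, q=1.0 / 3.0, gamma=0.2, rho=1026.0,
               n2min=1.0e-8, avt_max=300.0e-4):
    N2 = max(N2, n2min)                   # floor on the stratification
    avt = q * gamma * E * f_struct(z, H) / (rho * N2)
    return min(avt, avt_max)              # cap at 300 cm^2/s = 3e-2 m^2/s
```

By construction the column integral of $F(z)$ is one, and the mixing is largest near the bottom.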
+
+The internal wave energy map, $E(x, y)$ in \autoref{eq:Ktides}, is derived from a barotropic model of
+the tides utilizing a parameterization of the conversion of barotropic tidal energy into internal waves.
+The essential goal of the parameterization is to represent the momentum exchange between the barotropic tides and
+the unrepresented internal waves induced by the tidal flow over rough topography in a stratified ocean.
+In the current version of \NEMO, the map is built from the output of
the barotropic global ocean tide model MOG2D-G \citep{Carrere_Lyard_GRL03}.
This model provides the dissipation associated with internal wave energy for the M2 and K1
tides component (\autoref{fig:ZDF_M2_K1_tmx}). The S2 dissipation is simply approximated
as being $1/4$ of the M2 one. The internal wave energy is thus : $E(x, y) = 1.25 E_{M2} + E_{K1}$.
Its global mean value is $1.1$ TW, in agreement with independent estimates
\citep{Egbert_Ray_Nat00, Egbert_Ray_JGR01}.
+This model provides the dissipation associated with internal wave energy for the M2 and K1 tidal components
+(\autoref{fig:ZDF_M2_K1_tmx}).
+The S2 dissipation is simply approximated as being $1/4$ of the M2 one.
+The internal wave energy is thus: $E(x, y) = 1.25 E_{M2} + E_{K1}$.
+Its global mean value is $1.1$ TW,
+in agreement with independent estimates \citep{Egbert_Ray_Nat00, Egbert_Ray_JGR01}.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
@@ -1281,27 +1248,23 @@
When the Indonesian Through Flow (ITF) area is included in the model domain,
a specific treatment of tidal induced mixing in this area can be used.
It is activated through the namelist logical \np{ln\_tmx\_itf}, and the user must provide
an input NetCDF file, \ifile{mask\_itf}, which contains a mask array defining the ITF area
where the specific treatment is applied.

When \np{ln\_tmx\_itf}\forcode{ = .true.}, the two key parameters $q$ and $F(z)$ are adjusted following
+a specific treatment of tidally induced mixing in this area can be used.
+It is activated through the namelist logical \np{ln\_tmx\_itf}, and the user must provide an input NetCDF file,
+\ifile{mask\_itf}, which contains a mask array defining the ITF area where the specific treatment is applied.
+
+When \np{ln\_tmx\_itf}\forcode{ = .true.}, the two key parameters $q$ and $F(z)$ are adjusted following
the parameterisation developed by \citet{KochLarrouy_al_GRL07}:
First, the Indonesian archipelago is a complex geographic region
with a series of large, deep, semienclosed basins connected via
numerous narrow straits. Once generated, internal tides remain
confined within this semienclosed area and hardly radiate away.
Therefore all the internal tides energy is consumed within this area.
+First, the Indonesian archipelago is a complex geographic region with a series of
+large, deep, semi-enclosed basins connected via numerous narrow straits.
+Once generated, internal tides remain confined within this semi-enclosed area and hardly radiate away.
+Therefore all the internal tide energy is consumed within this area.
So it is assumed that $q = 1$, $i.e.$ all the energy generated is available for mixing.
Note that for test purposed, the ITF tidal dissipation efficiency is a
namelist parameter (\np{rn\_tfe\_itf}). A value of $1$ or close to is
this recommended for this parameter.

Second, the vertical structure function, $F(z)$, is no more associated
with a bottom intensification of the mixing, but with a maximum of
energy available within the thermocline. \citet{KochLarrouy_al_GRL07}
have suggested that the vertical distribution of the energy dissipation
proportional to $N^2$ below the core of the thermocline and to $N$ above.
+Note that for testing purposes, the ITF tidal dissipation efficiency is a namelist parameter (\np{rn\_tfe\_itf}).
+A value of $1$, or close to it, is recommended for this parameter.
+
+Second, the vertical structure function, $F(z)$, is no longer associated with a bottom intensification of the mixing,
+but with a maximum of energy available within the thermocline.
+\citet{KochLarrouy_al_GRL07} suggested making the vertical distribution of
+the energy dissipation proportional to $N^2$ below the core of the thermocline and to $N$ above.
The resulting $F(z)$ is:
\begin{equation} \label{eq:Fz_itf}
@@ -1313,11 +1276,10 @@
Averaged over the ITF area, the resulting tidal mixing coefficient is $1.5\,cm^2/s$,
which agrees with the independent estimates inferred from observations.
Introduced in a regional OGCM, the parameterization improves the water mass
characteristics in the different Indonesian seas, suggesting that the horizontal
and vertical distributions of the mixing are adequately prescribed
\citep{KochLarrouy_al_GRL07, KochLarrouy_al_OD08a, KochLarrouy_al_OD08b}.
Note also that such a parameterisation has a significant impact on the behaviour
of global coupled GCMs \citep{KochLarrouy_al_CD10}.
+which agrees with the independent estimates inferred from observations.
+Introduced in a regional OGCM, the parameterization improves the water mass characteristics in
+the different Indonesian seas, suggesting that the horizontal and vertical distributions of
+the mixing are adequately prescribed \citep{KochLarrouy_al_GRL07, KochLarrouy_al_OD08a, KochLarrouy_al_OD08b}.
+Note also that such a parameterisation has a significant impact on the behaviour of
+global coupled GCMs \citep{KochLarrouy_al_CD10}.
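A heavily simplified sketch of such a structure function (hypothetical names and logic; the actual blending in NEMO differs in detail):

```python
import math

# Sketch only: ITF vertical structure in the spirit of eq. (Fz_itf) --
# dissipation proportional to N^2 below the thermocline core and to N above,
# normalised to a unit column integral.
def f_itf(N2, dz):
    """N2: N^2 per level from surface to bottom, dz: layer thickness (m)."""
    k_core = max(range(len(N2)), key=lambda k: N2[k])   # thermocline core
    w = [math.sqrt(n2) if k < k_core else n2 for k, n2 in enumerate(N2)]
    norm = sum(wk * dz for wk in w)
    return [wk / norm for wk in w]

F = f_itf([1.0e-5, 4.0e-5, 9.0e-5, 4.0e-5, 1.0e-6], dz=100.0)
```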
@@ -1333,28 +1295,30 @@
%
The parameterization of mixing induced by breaking internal waves is a generalization
of the approach originally proposed by \citet{St_Laurent_al_GRL02}.
A threedimensional field of internal wave energy dissipation $\epsilon(x,y,z)$ is first constructed,
+The parameterization of mixing induced by breaking internal waves is a generalization of
+the approach originally proposed by \citet{St_Laurent_al_GRL02}.
+A three-dimensional field of internal wave energy dissipation $\epsilon(x,y,z)$ is first constructed,
and the resulting diffusivity is obtained as
\begin{equation} \label{eq:Kwave}
A^{vT}_{wave} = R_f \,\frac{ \epsilon }{ \rho \, N^2 }
\end{equation}
where $R_f$ is the mixing efficiency and $\epsilon$ is a specified three dimensional distribution
of the energy available for mixing. If the \np{ln\_mevar} namelist parameter is set to false,
the mixing efficiency is taken as constant and equal to 1/6 \citep{Osborn_JPO80}.
In the opposite (recommended) case, $R_f$ is instead a function of the turbulence intensity parameter
$Re_b = \frac{ \epsilon}{\nu \, N^2}$, with $\nu$ the molecular viscosity of seawater,
following the model of \cite{Bouffard_Boegman_DAO2013}
and the implementation of \cite{de_lavergne_JPO2016_efficiency}.
Note that $A^{vT}_{wave}$ is bounded by $10^{2}\,m^2/s$, a limit that is often reached when the mixing efficiency is constant.
+where $R_f$ is the mixing efficiency and $\epsilon$ is a specified three-dimensional distribution of
+the energy available for mixing.
+If the \np{ln\_mevar} namelist parameter is set to false, the mixing efficiency is taken as constant and
+equal to 1/6 \citep{Osborn_JPO80}.
+In the opposite (recommended) case, $R_f$ is instead a function of
+the turbulence intensity parameter $Re_b = \frac{ \epsilon}{\nu \, N^2}$,
+with $\nu$ the molecular viscosity of seawater, following the model of \cite{Bouffard_Boegman_DAO2013} and
+the implementation of \cite{de_lavergne_JPO2016_efficiency}.
+Note that $A^{vT}_{wave}$ is bounded by $10^{-2}\,m^2/s$, a limit that is often reached when
+the mixing efficiency is constant.
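A sketch of the bounded diffusivity with the constant-efficiency option (hypothetical names; the bound of $10^{-2}\,m^2/s$ is the one quoted above):

```python
# Sketch only: eq. (Kwave) with the constant mixing efficiency option
# (ln_mevar = .false., R_f = 1/6) and the 1e-2 m^2/s bound.
def a_vt_wave(eps, rho, N2, Rf=1.0 / 6.0, avt_max=1.0e-2):
    return min(Rf * eps / (rho * N2), avt_max)

weak = a_vt_wave(1.0e-9, 1026.0, 1.0e-6)     # stays below the bound
strong = a_vt_wave(1.0e-5, 1026.0, 1.0e-8)   # hits the bound
```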
In addition to the mixing efficiency, the ratio of salt to heat diffusivities can be chosen to vary
as a function of $Re_b$ by setting the \np{ln\_tsdiff} parameter to true, a recommended choice).
This parameterization of differential mixing, due to \cite{Jackson_Rehmann_JPO2014},
+as a function of $Re_b$ by setting the \np{ln\_tsdiff} parameter to true (a recommended choice).
+This parameterization of differential mixing, due to \cite{Jackson_Rehmann_JPO2014},
is implemented as in \cite{de_lavergne_JPO2016_efficiency}.
The threedimensional distribution of the energy available for mixing, $\epsilon(i,j,k)$, is constructed
from three static maps of columnintegrated internal wave energy dissipation, $E_{cri}(i,j)$,
$E_{pyc}(i,j)$, and $E_{bot}(i,j)$, combined to three corresponding vertical structures
+The threedimensional distribution of the energy available for mixing, $\epsilon(i,j,k)$,
+is constructed from three static maps of columnintegrated internal wave energy dissipation,
+$E_{cri}(i,j)$, $E_{pyc}(i,j)$, and $E_{bot}(i,j)$, combined with three corresponding vertical structures
(de Lavergne et al., in prep):
\begin{align*}
@@ -1363,15 +1327,16 @@
F_{bot}(i,j,k) &\propto N^2 \, e^{ -h_{wkb} / h_{bot} }
\end{align*}
In the above formula, $h_{ab}$ denotes the height above bottom,
+In the above formula, $h_{ab}$ denotes the height above bottom,
$h_{wkb}$ denotes the WKBstretched height above bottom, defined by
\begin{equation*}
h_{wkb} = H \, \frac{ \int_{-H}^{z} N \, dz' } { \int_{-H}^{\eta} N \, dz' } \; ,
\end{equation*}
The $n_p$ parameter (given by \np{nn\_zpyc} in \ngn{namzdf\_tmx\_new} namelist) controls the stratificationdependence of the pycnoclineintensified dissipation.
+The $n_p$ parameter (given by \np{nn\_zpyc} in the \ngn{namzdf\_tmx\_new} namelist)
+controls the stratificationdependence of the pycnoclineintensified dissipation.
It can take values of 1 (recommended) or 2.
Finally, the vertical structures $F_{cri}$ and $F_{bot}$ require the specification of
the decay scales $h_{cri}(i,j)$ and $h_{bot}(i,j)$, which are defined by two additional input maps.
$h_{cri}$ is related to the largescale topography of the ocean (etopo2)
and $h_{bot}$ is a function of the energy flux $E_{bot}$, the characteristic horizontal scale of
+Finally, the vertical structures $F_{cri}$ and $F_{bot}$ require the specification of
+the decay scales $h_{cri}(i,j)$ and $h_{bot}(i,j)$, which are defined by two additional input maps.
+$h_{cri}$ is related to the large-scale topography of the ocean (etopo2) and
+$h_{bot}$ is a function of the energy flux $E_{bot}$, the characteristic horizontal scale of
the abyssal hill topography \citep{Goff_JGR2010} and the latitude.
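The WKB stretching can be sketched as follows (a hypothetical discretisation of $h_{wkb}$; layer values of $N$ are ordered bottom to surface):

```python
# Sketch only: WKB-stretched height above bottom,
# h_wkb(z) = H * int_{-H}^{z} N dz' / int_{-H}^{eta} N dz'.
def h_wkb_profile(N, dz):
    H = len(N) * dz
    total = sum(n * dz for n in N)     # full-column integral of N
    out, partial = [], 0.0
    for n in N:
        partial += n * dz              # integral from the bottom up to z
        out.append(H * partial / total)
    return out

hw = h_wkb_profile([2.0e-3, 1.0e-3, 5.0e-3, 8.0e-3], dz=1000.0)
```

By construction $h_{wkb}$ increases monotonically from the bottom and reaches the full depth $H$ at the surface.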
Index: NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_conservation.tex
===================================================================
--- NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_conservation.tex (revision 10165)
+++ NEMO/branches/2018/dev_r10164_HPC09_ESIWACE_PREP_MERGE/doc/latex/NEMO/subfiles/chap_conservation.tex (revision 10368)
@@ -9,38 +9,34 @@
\minitoc
The continuous equations of motion have many analytic properties. Many
quantities (total mass, energy, enstrophy, etc.) are strictly conserved in
the inviscid and unforced limit, while ocean physics conserve the total
quantities on which they act (momentum, temperature, salinity) but dissipate
their total variance (energy, enstrophy, etc.). Unfortunately, the finite
difference form of these equations is not guaranteed to retain all these
important properties. In constructing the finite differencing schemes, we
wish to ensure that certain integral constraints will be maintained. In
particular, it is desirable to construct the finite difference equations so
that horizontal kinetic energy and/or potential enstrophy of horizontally
nondivergent flow, and variance of temperature and salinity will be
conserved in the absence of dissipative effects and forcing. \citet{Arakawa1966}
has first pointed out the advantage of this approach. He showed that if
integral constraints on energy are maintained, the computation will be free
of the troublesome "non linear" instability originally pointed out by
\citet{Phillips1959}. A consistent formulation of the energetic properties is
also extremely important in carrying out longterm numerical simulations for
an oceanographic model. Such a formulation avoids systematic errors that
accumulate with time \citep{Bryan1997}.

The general philosophy of OPA which has led to the discrete formulation
presented in {\S}II.2 and II.3 is to choose second order nondiffusive
scheme for advective terms for both dynamical and tracer equations. At this
level of complexity, the resulting schemes are dispersive schemes.
Therefore, they require the addition of a diffusive operator to be stable.
The alternative is to use diffusive schemes such as upstream or flux
corrected schemes. This last option was rejected because we prefer a
complete handling of the model diffusion, i.e. of the model physics rather
than letting the advective scheme produces its own implicit diffusion
without controlling the space and time structure of this implicit diffusion.
Note that in some very specific cases as passive tracer studies, the
positivity of the advective scheme is required. In that case, and in that
case only, the advective scheme used for passive tracer is a flux correction
scheme \citep{Marti1992, Levy1996, Levy1998}.
+The continuous equations of motion have many analytic properties.
+Many quantities (total mass, energy, enstrophy, etc.) are strictly conserved in the inviscid and unforced limit,
+while ocean physics conserve the total quantities on which they act (momentum, temperature, salinity) but
+dissipate their total variance (energy, enstrophy, etc.).
+Unfortunately, the finite difference form of these equations is not guaranteed to
+retain all these important properties.
+In constructing the finite differencing schemes, we wish to ensure that
+certain integral constraints will be maintained.
+In particular, it is desirable to construct the finite difference equations so that
+horizontal kinetic energy and/or potential enstrophy of horizontally non-divergent flow,
+and variance of temperature and salinity will be conserved in the absence of dissipative effects and forcing.
+\citet{Arakawa1966} first pointed out the advantage of this approach.
+He showed that if integral constraints on energy are maintained,
+the computation will be free of the troublesome ``non-linear'' instability originally pointed out by
+\citet{Phillips1959}.
+A consistent formulation of the energetic properties is also extremely important in carrying out
+long-term numerical simulations for an oceanographic model.
+Such a formulation avoids systematic errors that accumulate with time \citep{Bryan1997}.
+
+The general philosophy of OPA which has led to the discrete formulation presented in {\S}II.2 and II.3 is to
+choose second-order non-diffusive schemes for the advective terms of both the dynamical and tracer equations.
+At this level of complexity, the resulting schemes are dispersive schemes.
+Therefore, they require the addition of a diffusive operator to be stable.
+The alternative is to use diffusive schemes such as upstream or flux corrected schemes.
+This last option was rejected because we prefer a complete handling of the model diffusion,
+i.e. of the model physics, rather than letting the advective scheme produce its own implicit diffusion without
+controlling the space and time structure of this implicit diffusion.
+Note that in some very specific cases, such as passive tracer studies, the positivity of the advective scheme is required.
+In that case, and in that case only, the advective scheme used for passive tracer is a flux correction scheme
+\citep{Marti1992, Levy1996, Levy1998}.
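The discrete property underlying this preference can be illustrated on a periodic 1-D grid (a sketch, not NEMO code): the centred difference operator is antisymmetric, so a centred advective term makes no net contribution to the tracer variance, while adding no implicit diffusion either.

```python
import math

# On a periodic grid, sum_i c_i (c_{i+1} - c_{i-1}) telescopes to zero:
# a second-order centred advective term conserves the tracer variance
# (semi-discretely) and is therefore non-diffusive -- but dispersive.
n = 16
c = [math.sin(2.0 * math.pi * i / n) + 0.3 * math.cos(4.0 * math.pi * i / n)
     for i in range(n)]
s = sum(c[i] * (c[(i + 1) % n] - c[(i - 1) % n]) for i in range(n))
```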
% 
@@ -50,22 +46,19 @@
\label{sec:Invariant_dyn}
The non linear term of the momentum equations has been split into a
vorticity term, a gradient of horizontal kinetic energy and a vertical
advection term. Three schemes are available for the former (see {\S}~II.2)
according to the CPP variable defined (default option\textbf{
}or \textbf{key{\_}vorenergy } or \textbf{key{\_}vorcombined
} defined). They differ in their conservative
properties (energy or enstrophy conserving scheme). The two latter terms
preserve the total kinetic energy: the large scale kinetic energy is also
preserved in practice. The remaining nondiffusive terms of the momentum
equation (namely the hydrostatic and surface pressure gradient terms) also
preserve the total kinetic energy and have no effect on the vorticity of the
flow.
+The non-linear term of the momentum equations has been split into a vorticity term,
+a gradient of horizontal kinetic energy and a vertical advection term.
+Three schemes are available for the former (see {\S}~II.2) according to the CPP variable defined
+(default option, or \textbf{key{\_}vorenergy} or \textbf{key{\_}vorcombined} defined).
+They differ in their conservative properties (energy or enstrophy conserving scheme).
+The two latter terms preserve the total kinetic energy:
+the large scale kinetic energy is also preserved in practice.
+The remaining nondiffusive terms of the momentum equation
+(namely the hydrostatic and surface pressure gradient terms) also preserve the total kinetic energy and
+have no effect on the vorticity of the flow.
\textbf{* relative, planetary and total vorticity term:}
Let us define as either the relative, planetary and total potential
vorticity, i.e. , , and , respectively. The continuous formulation of the
vorticity term satisfies following integral constraints:
+Let us define $\varsigma$ as either the relative, planetary or total potential vorticity.
+The continuous formulation of the vorticity term satisfies the following integral constraints:
\begin{equation} \label{eq:vor_vorticity}
\int_D {{\textbf {k}}\cdot \frac{1}{e_3 }\nabla \times \left( {\varsigma
@@ -82,28 +75,26 @@
\int_D {{\textbf{U}}_h \cdot \left( {\varsigma \;{\textbf{k}}\times {\textbf{U}}_h } \right)\;dv} =0
\end{equation}
where $dv = e_1\, e_2\, e_3\, di\, dj\, dk$ is the volume element.
(II.4.1a) means that $\varsigma $ is conserved. (II.4.1b) is obtained by an
integration by part. It means that $\varsigma^2$ is conserved for a horizontally
nondivergent flow.
(II.4.1c) is even satisfied locally since the vorticity term is orthogonal
to the horizontal velocity. It means that the vorticity term has no
contribution to the evolution of the total kinetic energy. (II.4.1a) is
obviously always satisfied, but (II.4.1b) and (II.4.1c) cannot be satisfied
simultaneously with a second order scheme. Using the symmetry or
antisymmetry properties of the operators (Eqs II.1.10 and 11), it can be
shown that the scheme (II.2.11) satisfies (II.4.1b) but not (II.4.1c), while
scheme (II.2.12) satisfies (II.4.1c) but not (II.4.1b) (see appendix C).
Note that the enstrophy conserving scheme on total vorticity has been chosen
as the standard discrete form of the vorticity term.
+where $dv = e_1\, e_2\, e_3\, di\, dj\, dk$ is the volume element.
+(II.4.1a) means that $\varsigma $ is conserved. (II.4.1b) is obtained by an integration by parts.
+It means that $\varsigma^2$ is conserved for a horizontally nondivergent flow.
+(II.4.1c) is even satisfied locally since the vorticity term is orthogonal to the horizontal velocity.
+It means that the vorticity term has no contribution to the evolution of the total kinetic energy.
+(II.4.1a) is obviously always satisfied, but (II.4.1b) and (II.4.1c) cannot be satisfied simultaneously with
+a second-order scheme.
+Using the symmetry or antisymmetry properties of the operators (Eqs II.1.10 and 11),
+it can be shown that the scheme (II.2.11) satisfies (II.4.1b) but not (II.4.1c),
+while scheme (II.2.12) satisfies (II.4.1c) but not (II.4.1b) (see appendix C).
+Note that the enstrophy conserving scheme on total vorticity has been chosen as the standard discrete form of
+the vorticity term.
\textbf{* Gradient of kinetic energy / vertical advection}
In continuous formulation, the gradient of horizontal kinetic energy has no contribution to the evolution of
the vorticity, as the curl of a gradient is zero.
This property is satisfied locally by the discrete forms we have chosen for both the gradient and
the curl operators (property (II.1.9)).
Another continuous property is that the change of horizontal kinetic energy due to
vertical advection is exactly balanced by the change of horizontal kinetic energy due to
the horizontal gradient of horizontal kinetic energy:
\begin{equation} \label{eq:keg_zad}
\int_D \mathbf{U}_h \cdot \nabla_h \left( \frac{{\mathbf{U}_h}^2}{2} \right) \, dv
\; = \;
- \int_D \mathbf{U}_h \cdot \left( w \, \frac{\partial \mathbf{U}_h}{\partial z} \right) \, dv
\end{equation}
Using the discrete form given in {\S}II.2a and the symmetry or antisymmetry properties of
the mean and difference operators, \autoref{eq:keg_zad} is demonstrated in Appendix C.
The main point here is that satisfying \autoref{eq:keg_zad} links the choice of the discrete forms of
the vertical advection and of the horizontal gradient of horizontal kinetic energy:
choosing one imposes the other.
The discrete form of the vertical advection given in {\S}II.2a is a direct consequence of
formulating the horizontal kinetic energy as $1/2 \left( \overline{u^2}^i + \overline{v^2}^j \right)$ in
the gradient term.
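The local identity $\nabla \times \nabla(\cdot) = 0$ carries over to the discrete operators because
the difference operators along $i$ and $j$ commute.
A minimal sketch, assuming a 2-D periodic grid with unit scale factors
(the names \texttt{di}, \texttt{dj} and \texttt{p} are hypothetical, not NEMO identifiers):

```python
import numpy as np

# p stands in for a scalar field such as the kinetic energy; illustrative only.
rng = np.random.default_rng(1)
p = rng.standard_normal((8, 8))

def di(a):
    """Forward difference along i on a periodic domain."""
    return np.roll(a, -1, axis=0) - a

def dj(a):
    """Forward difference along j on a periodic domain."""
    return np.roll(a, -1, axis=1) - a

gx, gy = di(p), dj(p)     # discrete gradient, naturally at u- and v-points
curl = di(gy) - dj(gx)    # z-component of the discrete curl, at f-points

# The discrete curl of the discrete gradient vanishes identically,
# because di(dj(p)) and dj(di(p)) are the same second difference.
assert np.allclose(curl, 0.0)
```

The cancellation is exact in the operator sense (up to floating-point rounding here),
which is why the property holds locally and not merely after integration over the domain.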
\textbf{* Hydrostatic pressure gradient term}
In continuous formulation, a pressure gradient has no contribution to the evolution of the vorticity, as
the curl of a gradient is zero.
This property is satisfied locally by the choice of discretisation we have made (property (II.1.9)).
In addition, when the equation of state is linear
(i.e. when an advective-diffusive equation for density can be derived from those of temperature and salinity),
the change of horizontal kinetic energy due to the work of pressure forces is balanced by the change of
potential energy due to buoyancy forces:
\begin{equation} \label{eq:hpg_pe}
- \int_D \frac{1}{\rho_o} \, \mathbf{U}_h \cdot \nabla_h p \; dv
\; = \;
- \int_D \frac{g}{\rho_o} \, \rho \, w \; dv
\end{equation}
Using the discrete form given in {\S}~II.2a and the symmetry or antisymmetry properties of
the mean and difference operators, (II.4.3) is demonstrated in Appendix C.
The main point here is that satisfying (II.4.3) strongly constrains the discrete expression of the depth of
$T$-points and of the term added to the pressure gradient in $s$-coordinates:
the depth of a $T$-point, $z_T$, is defined as the sum of the vertical scale factors at
$w$-points starting from the surface.
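Schematically, with $e_{3w}(l)$ the vertical scale factor at the $w$-point of level $l$,
this definition of the $T$-point depth reads (a sketch; the precise level indexing follows {\S}II.2):
\begin{equation*}
z_T(k) \; = \; \sum_{l=1}^{k} e_{3w}(l)
\end{equation*}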