% ================================================================
% Chapter: I/O & Diagnostics
% ================================================================
\chapter{Output and Diagnostics (IOM, DIA, TRD, FLO)}
\label{DIA}
\minitoc
\newpage
$\ $\newline % force a new line
% ================================================================
% Old Model Output
% ================================================================
\section{Old Model Output (default or \key{dimgout})}
\label{DIA_io_old}
The model outputs are of three types: the restart file, the output listing,
and the output file(s). The restart file is used internally by the code when
the user wants to start the model with initial conditions defined by a
previous simulation. It contains all the information necessary to ensure that
there is no change in the model results (even at computer precision) between
a run performed with several restarts and the same run performed in one step.
It should be noted that this requires the restart file to contain two
consecutive time steps for all the prognostic variables, and that it be saved
in the same binary format as the one used by the computer that is to read it
(in particular, the 32-bit IEEE binary format must not be used for this file).
The output listing and file(s) are predefined but should be checked and, if
necessary, adapted to the user's needs. The output listing is stored in the
$ocean.output$ file. The information is printed from within the code on the
logical unit $numout$. To locate these prints, use the UNIX command
``\textit{grep -i numout}'' in the source code directory.
By default, output files are written in NetCDF format, but an IEEE binary
output format, called DIMG, can be chosen by defining \key{dimgout}.

Since version 3.2, defining \key{iomput} activates an I/O server which
provides more flexibility in the choice of the fields to be output, as well
as in how the writing work is distributed over the processors in massively
parallel computing. The complete description of the use of this I/O server is
presented in the next section.

If neither \key{iomput} nor \key{dimgout} is defined, NEMO produces NetCDF
files with the old IOIPSL library, which has been kept for compatibility and
its easy installation, but which is quite inefficient on parallel machines.
In this case, output files are defined in the \mdl{diawri} module and contain
mean (or instantaneous, if \key{diainstant} is defined) values over a period
of \np{nn\_write} time-steps (namelist parameter).
%\gmcomment{ % start of gmcomment
% ================================================================
% Diagnostics
% ================================================================
\section{Standard model Output (IOM)}
\label{DIA_iom}
Since version 3.2, iomput is the NEMO output interface. It was designed to be
simple to use, flexible and efficient. The two main purposes of iomput are: \\
(1) the complete and flexible control of the output files through an external
XML file defined by the user; \\
(2) the achievement of high-performance output through the distribution (or
not) of all tasks related to output files over dedicated processes. \\
The first functionality allows the user to specify, without modifying
anything in the code, the way they want to output data: \\
- choice of output frequencies, which can differ from one file to another
(including real months and years) \\
- choice of file contents: decide which data will be written in which file
(the same data can be output in several files) \\
- possibility to split output files at a chosen frequency \\
- possibility to extract a vertical or a horizontal subdomain \\
- choice of the temporal operation to perform: average, accumulate,
instantaneous, min, max and once \\
- extremely large choice of available data \\
- possibility to redefine a variable's name and long\_name \\
In addition, iomput allows the user to output any variable (scalar, 2D or 3D) in the code in a very easy way. All details of the iomput functionalities are listed in the following subsections. Examples of the iodef.xml files that control the outputs can be found in NEMOGCM/CONFIG/ORCA2\_LIM/EXP00/iodef*.xml.
The second functionality targets output performance when running on a very large number of processes. First, iomput provides the possibility to dedicate N specific processes (in addition to the NEMO processes) to writing the outputs, where N is large enough (and defined by the user) to suppress the bottleneck associated with the writing of the output files. Since version 3.5, this interface depends on an external code called \href{http://forge.ipsl.jussieu.fr/ioserver}{XIOS}. This new I/O server takes advantage of the NetCDF4 functionality that allows files to be written in parallel, and therefore makes it possible to bypass the rebuilding phase. Note that writing in parallel into the same NetCDF file requires that your NetCDF4 library be linked to an HDF5 library that has been compiled with parallel support ($i.e.$ with the configure option $--$enable-parallel). Note also that the files created by iomput through XIOS are incompatible with NetCDF3: all post-processing and visualization tools must therefore be compatible with NetCDF4, and not only with NetCDF3.
\subsection{Basic knowledge}
\subsubsection{ XML basic rules}
XML tags begin with the less-than character (``$<$'') and end with the greater-than character (``$>$'').
You use tags to mark the start and end of elements, which are the logical units of information
in an XML document. In addition to marking the beginning of an element, XML start tags also
provide a place to specify attributes. An attribute specifies a single property of an element
using a name/value pair, for example: $<$a b="x" c="y"$>$ ... $<$/a$>$.
See \href{http://www.xmlnews.org/docs/xml-basics.html}{here} for more details.
\subsubsection{Structure of the xml file used in NEMO}
The XML file used by XIOS is structured around 7 families of tags: context, axis, domain, grid, field, file and variable. Each tag family (except context) has a hierarchy of three flavours:
\begin{description}
\item[root]: declaration of the root element, which can contain element groups or elements, for example: $<$file\_definition ...$/>$ \\
\item[group]: declaration of a group element, which can contain element groups or elements, for example: $<$file\_group ...$/>$ \\
\item[element]: declaration of an element, which can contain elements, for example: $<$file ...$/>$ \\
\end{description}
Each element may have several attributes. Some attributes are mandatory, others are optional but have a default value, and others are completely optional. ``id'' is a special attribute used to identify an element or a group of elements; it must be unique for each kind of element. It is optional, but no reference to the corresponding element can be made if it is not defined.
The XML file is split into context tags, which are used to isolate the I/O definitions of different codes or of different parts of a code. No interference is possible between 2 different contexts. Each context has its own calendar and an associated time step. In NEMO, we use the following contexts (which can be defined in any order):
\begin{description}
\item[context xios]: context containing the information for XIOS itself
\item[context nemo]: context containing the field and file definitions for
NEMO (``n\_nemo'' for the nth AGRIF zoom)
\end{description}
In NEMO, by default, the field and domain definitions are done in 2 separate files: \\
NEMOGCM/CONFIG/SHARED/field\_def.xml and \\
NEMOGCM/CONFIG/SHARED/domain\_def.xml, which are included in the main iodef.xml file through the following commands: \\
\verb?   <field_definition  src="./field_def.xml"  /> ? \\
\verb?   <domain_definition src="./domain_def.xml" /> ?
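Putting these elements together, the overall layout of an iodef.xml file can be sketched as follows (illustrative only: the actual variable, field and file definitions depend on the configuration):
\begin{alltt} {{\scriptsize
\begin{verbatim}
<?xml version="1.0"?>
<simulation>
   <context id="xios">
      ...
   </context>
   <context id="nemo">
      <field_definition  src="./field_def.xml"  />
      <domain_definition src="./domain_def.xml" />
      <file_definition>
         ...
      </file_definition>
   </context>
</simulation>
\end{verbatim}
}}\end{alltt}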
\subsubsection{Use of inheritance}
XML extensively uses the concept of inheritance. An XML document has a tree structure with a parent-child relation: all children inherit attributes from their parent, but an attribute defined in a child replaces the inherited attribute value. Note that the special attribute ``id'' is never inherited. \\
\\
example 1: Direct inheritance. \\
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field_definition operation="average" >
   <field id="sst" />                            <!-- inherits operation="average" -->
   <field id="sss" operation="instantaneous" />  <!-- overwrites the inherited value -->
</field_definition>
\end{verbatim}
}}\end{alltt}
The field ``sst'', which is part (or a child) of the field\_definition, will
inherit the value ``average'' of the attribute ``operation'' from its parent.
Note that a child can overwrite the attribute definition inherited from its
parents. In the example above, the field ``sss'' will for example output
instantaneous values instead of average values. \\
\\
example 2: Inheritance by reference. \\
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field id="sst" operation="average" />
...
<field field_ref="sst" operation="instantaneous" />
\end{verbatim}
}}\end{alltt}
Inherit (and overwrite, if needed) the attributes of the tag you are referring to.
\subsubsection{Use of Group}
Groups can be used for 2 purposes. \\
First, a group can be used to define common attributes to be shared by the elements of the group through inheritance. In the following example, we define a group of fields that will share a common grid ``grid\_T\_2D''. Note that for the field ``toce'', we overwrite the grid definition inherited from the group with ``grid\_T\_3D''.
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field_group id="grid_T_2D" grid_ref="grid_T_2D" >
   <field id="sst" />
   <field id="toce" grid_ref="grid_T_3D" />
...
\end{verbatim}
}}\end{alltt}
Second, a group can be used to replace a list of elements. Several examples of groups of fields are proposed at the end of the file \\
NEMOGCM/CONFIG/SHARED/field\_def.xml. For example, a short list of the usual variables related to the U grid:
\begin{alltt} {{\scriptsize
\begin{verbatim}
\end{verbatim}
}}\end{alltt}
that can be directly included in a file through the following syntax:
\begin{alltt} {{\scriptsize
\begin{verbatim}
\end{verbatim}
}}\end{alltt}
\subsection{Detailed functionalities }
The file NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xml provides several examples of the use of the new functionalities offered by the XML interface of XIOS.
\subsubsection{Define horizontal subdomains}
Horizontal subdomains are defined through the attributes zoom\_ibegin, zoom\_jbegin, zoom\_ni and zoom\_nj of the domain tag family. This must therefore be done in the domain part of the XML file. For example, in NEMOGCM/CONFIG/SHARED/domain\_def.xml, we provide the following example of the definition of a 5 by 5 box with the bottom left corner at point (10,10).
\begin{alltt} {{\scriptsize
\begin{verbatim}
<domain id="myzoom" zoom_ibegin="10" zoom_jbegin="10" zoom_ni="5" zoom_nj="5" />
\end{verbatim}
}}\end{alltt}
The use of this subdomain is done through the redefinition of the attribute domain\_ref of the field tag family. For example:
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field field_ref="toce" domain_ref="myzoom" />
\end{verbatim}
}}\end{alltt}
Moorings are seen as an extreme case corresponding to a 1 by 1 subdomain. The equatorial sections and the TAO, RAMA and PIRATA moorings are already registered in the code and can therefore be output without having to know their (i,j) position on the grid. These predefined domains can be activated through the use of specific values of domain\_ref: ``EqT'', ``EqU'' or ``EqW'' for the equatorial sections, and the mooring position followed by ``T'' for the TAO, RAMA and PIRATA moorings (for example: ``8s137eT'', ``1.5s80.5eT'' ...)
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field field_ref="toce" domain_ref="8s137eT" />
\end{verbatim}
}}\end{alltt}
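The mooring identifiers follow a simple latitude/longitude naming pattern. As an illustration, a small hypothetical helper (not part of NEMO, which stores the positions internally) can decode such an identifier:

```python
import re

def decode_mooring(ident):
    """Decode a mooring id such as '8s137eT' into (lat, lon).

    South and west give negative values; the trailing grid letter
    ('T', 'U', ...) is ignored. Purely illustrative of the naming
    convention described in the text.
    """
    m = re.match(r"([0-9.]+)([ns])([0-9.]+)([ew])", ident.lower())
    if m is None:
        raise ValueError("not a mooring identifier: %s" % ident)
    lat = float(m.group(1)) * (-1 if m.group(2) == "s" else 1)
    lon = float(m.group(3)) * (-1 if m.group(4) == "w" else 1)
    return lat, lon
```

For example, \verb?decode_mooring("1.5s80.5eT")? yields the position 1.5$^{\circ}$S, 80.5$^{\circ}$E.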
Note that if the domain decomposition used in XIOS cuts the subdomain into several parts and if you use the ``multiple\_file'' type for your output files, you will end up with several files that you will need to rebuild using tools that are not provided (such as ncpdq and ncrcat, \href{http://nco.sourceforge.net/nco.html#Concatenation}{see the nco manual}). We therefore advise using the ``one\_file'' type in this case.
\subsubsection{Define vertical zooms}
Vertical zooms are defined through the attributes zoom\_begin and zoom\_end of the axis tag family. This must therefore be done in the axis part of the XML file. For example, in NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xml, we provide the following example:
\begin{alltt} {{\scriptsize
\begin{verbatim}
<axis id="deptht_myzoom" zoom_begin="1" zoom_end="10" />
\end{verbatim}
}}\end{alltt}
The use of this vertical zoom is done through the redefinition of the attribute axis\_ref of the field tag family. For example:
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field field_ref="toce" axis_ref="deptht_myzoom" />
\end{verbatim}
}}\end{alltt}
\subsubsection{Control of the output file names}
The output file names are defined by the attributes ``name'' and ``name\_suffix'' of the file tag family. For example:
\begin{alltt} {{\scriptsize
\begin{verbatim}
...
...
\end{verbatim}
}}\end{alltt}
However, it is also often very convenient to define the file name using the name of the experiment, the output file frequency and the dates of the beginning and the end of the simulation (information stored either in the namelist or in the XML file). To do so, we added the following rule: if the id of the tag file is ``fileN'' (where N = 1 to 99) or one of the predefined sections or moorings (see next subsection), the following parts of the name and of the name\_suffix (which can be inherited) will be automatically replaced by: \\
\\
\begin{tabular}{|p{4cm}|p{8cm}|}
\hline
\centering part of the name to be automatically replaced &
by \\
\hline
\hline
\centering @expname@ &
the experiment name (from cn\_exp in the namelist) \\
\hline
\centering @freq@ &
output frequency (from attribute output\_freq) \\
\hline
\centering @startdate@ &
starting date of the simulation (from nn\_date0 in the restart or the namelist). \verb?yyyymmdd? format \\
\hline
\centering @startdatefull@ &
starting date of the simulation (from nn\_date0 in the restart or the namelist). \verb?yyyymmdd_hh:mm:ss? format \\
\hline
\centering @enddate@ &
ending date of the simulation (from nn\_date0 and nn\_itend in the namelist). \verb?yyyymmdd? format \\
\hline
\centering @enddatefull@ &
ending date of the simulation (from nn\_date0 and nn\_itend in the namelist). \verb?yyyymmdd_hh:mm:ss? format \\
\hline
\end{tabular}
\\
For example,
\begin{alltt} {{\scriptsize
\begin{verbatim}
<file id="file1" name="myfile_@expname@_@startdate@_freq@freq@" output_freq="1d" />
\end{verbatim}
}}\end{alltt}
With, in the namelist:
\begin{alltt} {{\scriptsize
\begin{verbatim}
cn_exp = "ORCA2"
nn_date0 = 19891231
ln_rstart = .false.
\end{verbatim}
}}\end{alltt}
will give the following file name root:
\begin{alltt} {{\scriptsize
\begin{verbatim}
myfile_ORCA2_19891231_freq1d
\end{verbatim}
}}\end{alltt}
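This substitution mechanism can be sketched in a few lines (an illustrative reimplementation, not the actual model code, which also handles @startdatefull@, @enddate@ and @enddatefull@):

```python
def expand_name(template, cn_exp, nn_date0, output_freq):
    """Expand the @...@ keywords of a file name (sketch only)."""
    substitutions = {
        "@expname@":   cn_exp,         # experiment name (cn_exp)
        "@startdate@": str(nn_date0),  # yyyymmdd format
        "@freq@":      output_freq,    # e.g. "1d"
    }
    for keyword, value in substitutions.items():
        template = template.replace(keyword, value)
    return template

# reproduces the file name shown above
name = expand_name("myfile_@expname@_@startdate@_freq@freq@",
                   cn_exp="ORCA2", nn_date0=19891231, output_freq="1d")
```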
\subsubsection{Other controls of the xml attributes from NEMO}
The values of some attributes are automatically defined by NEMO (and any definition given in the XML file is overwritten). By convention, these attributes are set to ``auto'' (for strings) or ``0000'' (for integers) in the XML file (but this is not mandatory).
Here is the list of these attributes: \\
\\
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{2}{|c|}{tag ids affected by automatic } & name & attribute value \\
\multicolumn{2}{|c|}{definition of some of their attributes } & attribute & \\
\hline
\hline
\multicolumn{2}{|c|}{field\_definition} & freq\_op & \np{rn\_rdt} \\
\hline
\multicolumn{2}{|c|}{SBC} & freq\_op & \np{rn\_rdt} $\times$ \np{nn\_fsbc} \\
\hline
\multicolumn{2}{|c|}{ptrc\_T} & freq\_op & \np{rn\_rdt} $\times$ \np{nn\_dttrc} \\
\hline
\multicolumn{2}{|c|}{diad\_T} & freq\_op & \np{rn\_rdt} $\times$ \np{nn\_dttrc} \\
\hline
\multicolumn{2}{|c|}{EqT, EqU, EqW} & jbegin, ni, & according to the grid \\
\multicolumn{2}{|c|}{ } & name\_suffix & \\
\hline
\multicolumn{2}{|c|}{TAO, RAMA and PIRATA moorings} & zoom\_ibegin, zoom\_jbegin, & according to the grid \\
\multicolumn{2}{|c|}{ } & name\_suffix & \\
\hline
\end{tabular}
\subsubsection{Tag list}
\begin{tabular}{|p{2cm}|p{2.5cm}|p{3.5cm}|p{2cm}|p{2cm}|}
\hline
tag name &
description &
accepted attribute &
child of &
parent of \\
\hline
\hline
simulation &
this tag is the root tag which encapsulates all the content of the xml file &
none &
none &
context \\
\hline
context &
encapsulates parts of the xml file dedicated to different codes or different parts of a code &
id (''xios'', ''nemo'' or ''n\_nemo'' for the nth AGRIF zoom), src, time\_origin &
simulation &
all root tags: ...\_definition \\
\hline
\hline
field\_definition &
encapsulates the definition of all the fields that can potentially be outputted &
axis\_ref, default\_value, domain\_ref, enabled, grid\_ref, level, operation, prec, src &
context &
field or field\_group \\
\hline
field\_group &
encapsulates a group of fields &
axis\_ref, default\_value, domain\_ref, enabled, group\_ref, grid\_ref, id, level, operation, prec, src &
field\_definition, field\_group, file &
field or field\_group \\
\hline
field &
define a specific field &
axis\_ref, default\_value, domain\_ref, enabled, field\_ref, grid\_ref, id, level, long\_name, name, operation, prec, standard\_name, unit &
field\_definition, field\_group, file &
none \\
\hline
\hline
file\_definition &
encapsulates the definition of all the files that will be outputted &
enabled, min\_digits, name, name\_suffix, output\_level, split\_format, split\_freq, sync\_freq, type, src &
context &
file or file\_group \\
\hline
file\_group &
encapsulates a group of files that will be outputted &
enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_format, split\_freq, sync\_freq, type, src &
file\_definition, file\_group &
file or file\_group \\
\hline
file &
define the content of a file to be outputted &
enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_format, split\_freq, sync\_freq, type, src &
file\_definition, file\_group &
field \\
\hline
\end{tabular}
\begin{tabular}{|p{2cm}|p{2.5cm}|p{3.5cm}|p{2cm}|p{2cm}|}
\hline
tag name &
description &
accepted attribute &
child of &
parent of \\
\hline
\hline
axis\_definition &
define all the vertical axes potentially used by the variables &
src &
context &
axis\_group, axis \\
\hline
axis\_group &
encapsulates a group of vertical axes &
id, long\_name, positive, src, standard\_name, unit, zoom\_begin, zoom\_end, zoom\_size &
axis\_definition, axis\_group &
axis\_group, axis \\
\hline
axis &
define a vertical axis &
id, long\_name, positive, src, standard\_name, unit, zoom\_begin, zoom\_end, zoom\_size &
axis\_definition, axis\_group &
none \\
\hline
\hline
domain\_definition &
define all the horizontal domains potentially used by the variables &
src &
context &
domain\_group, domain \\
\hline
domain\_group &
encapsulates a group of horizontal domains &
id, lon\_name, src, zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj &
domain\_definition, domain\_group &
domain\_group, domain \\
\hline
domain &
define a horizontal domain &
id, lon\_name, src, zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj &
domain\_definition, domain\_group &
none \\
\hline
\hline
grid\_definition &
define all the grids (association of a domain and/or an axis) potentially used by the variables &
src &
context &
grid\_group, grid \\
\hline
grid\_group &
encapsulates a group of grids &
id, domain\_ref, axis\_ref &
grid\_definition, grid\_group &
grid\_group, grid \\
\hline
grid &
define a grid &
id, domain\_ref, axis\_ref &
grid\_definition, grid\_group &
none \\
\hline
\end{tabular}
\subsubsection{Attributes list}
\begin{tabular}{|p{2cm}|p{4cm}|p{4cm}|p{2cm}|}
\hline
attribute name &
description &
example &
accepted by \\
\hline
\hline
axis\_ref &
refers to the id of a vertical axis &
axis\_ref="deptht" &
field, grid families \\
\hline
enabled &
switch on/off the output of a field or a file &
enabled=".TRUE." &
field, file families \\
\hline
default\_value &
missing\_value definition &
default\_value="1.e20" &
field family \\
\hline
description &
just for information, not used &
description="ocean T grid variables" &
all tags \\
\hline
domain\_ref &
refers to the id of a domain &
domain\_ref="grid\_T" &
field or grid families \\
\hline
field\_ref &
id of the field we want to add in a file &
field\_ref="toce" &
field \\
\hline
grid\_ref &
refers to the id of a grid &
grid\_ref="grid\_T\_2D" &
field family \\
\hline
group\_ref &
refer to a group of variables &
group\_ref="mooring" &
field\_group \\
\hline
id &
identifies a tag &
id="nemo" &
accepted by all tags except simulation \\
\hline
level &
output priority of a field: 0 (high) to 10 (low)&
level="1" &
field family \\
\hline
long\_name &
define the long\_name attribute in the NetCDF file &
long\_name="Vertical T levels" &
field \\
\hline
min\_digits &
specify the minimum number of digits used for the processor number in the name of the NetCDF file &
min\_digits="4" &
file family \\
\hline
name &
name of a variable or a file. If the name of a file is undefined, its id is used as a name &
name="tos" &
field or file families \\
\hline
name\_suffix &
suffix to be inserted after the name and before the cpu number and the ''.nc'' termination of a file &
name\_suffix="\_myzoom" &
file family \\
\hline
\end{tabular}
\begin{tabular}{|p{2cm}|p{4cm}|p{4cm}|p{2cm}|}
\hline
attribute name &
description &
example &
accepted by \\
\hline
\hline
operation &
type of temporal operation: average, accumulate, instantaneous, min, max and once &
operation="average" &
field family \\
\hline
output\_freq &
operation frequency. units can be ts (timestep), y, mo, d, h, mi, s. &
output\_freq="1d12h" &
field family \\
\hline
output\_level &
output priority of variables in a file: 0 (high) to 10 (low). All variables listed in the file with a level smaller or equal to output\_level will be output. Other variables won't be output even if they are listed in the file. &
output\_level="10"&
file family \\
\hline
positive &
convention used for the orientation of the vertical axis (positive downward in \NEMO). &
positive="down" &
axis family \\
\hline
prec &
output precision: real 4 or real 8 &
prec="4" &
field family \\
\hline
split\_format &
date format used in the name of split output files. It can be specified using the following syntax: \%y, \%mo, \%d, \%h, \%mi and \%s &
split\_format="\%yy\%mom\%dd" &
file family \\
\hline
split\_freq &
split output files frequency. units can be ts (timestep), y, mo, d, h, mi, s. &
split\_freq="1mo" &
file family \\
\hline
src &
allows the inclusion of a file &
src="./field\_def.xml" &
accepted by all tags except simulation \\
\hline
standard\_name &
define the standard\_name attribute in the NetCDF file &
standard\_name="Eastward\_Sea\_Ice\_Transport" &
field \\
\hline
sync\_freq &
NetCDF file synchronization frequency (update of the time\_counter). units can be ts (timestep), y, mo, d, h, mi, s. &
sync\_freq="10d" &
file family \\
\hline
\end{tabular}
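Frequency strings such as output\_freq, split\_freq and sync\_freq combine an integer with a unit. A short sketch (a hypothetical helper, not part of NEMO; ``ts'', ``y'' and ``mo'' are excluded because they depend on the time step and the calendar) shows how the fixed-length units map to seconds:

```python
import re

# fixed-length units only; "ts", "y" and "mo" depend on the
# timestep and the calendar, so they are left out of this sketch
_UNIT_SECONDS = {"d": 86400, "h": 3600, "mi": 60, "s": 1}

def freq_to_seconds(freq):
    """Convert a frequency string such as '1d12h' to seconds."""
    total = 0
    for value, unit in re.findall(r"(\d+)(mi|mo|ts|[ydhs])", freq):
        if unit not in _UNIT_SECONDS:
            raise ValueError("unit %r needs calendar information" % unit)
        total += int(value) * _UNIT_SECONDS[unit]
    return total
```

For example, the output\_freq value ``1d12h'' quoted in the table corresponds to 129600 seconds.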
\begin{tabular}{|p{2cm}|p{4cm}|p{4cm}|p{2cm}|}
\hline
attribute name &
description &
example &
accepted by \\
\hline
\hline
time\_origin &
specify the origin of the time counter &
time\_origin="1900-01-01 00:00:00"&
context \\
\hline
type (1)&
specify whether the output files must be split (multiple\_file) or not (one\_file) &
type="multiple\_file" &
file family \\
\hline
type (2)&
define the type of a variable tag &
type="boolean" &
variable \\
\hline
unit &
unit of a variable or the vertical axis &
unit="m" &
field and axis families \\
\hline
zoom\_ibegin &
starting point along x direction of the zoom. Automatically defined for TAO/RAMA/PIRATA moorings &
zoom\_ibegin="1" &
domain family \\
\hline
zoom\_jbegin &
starting point along y direction of the zoom. Automatically defined for TAO/RAMA/PIRATA moorings &
zoom\_jbegin="1" &
domain family \\
\hline
zoom\_ni &
zoom extent along x direction &
zoom\_ni="1" &
domain family \\
\hline
zoom\_nj &
zoom extent along y direction &
zoom\_nj="1" &
domain family \\
\hline
\end{tabular}
\subsection{XIOS: the IO\_SERVER}
\subsubsection{Attached or detached mode?}
Iomput is based on \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS}, the io\_server developed by Yann Meurdesoif at IPSL. This server can be used in ``attached mode'' (as a library) or in ``detached mode'' (as an external executable on N cpus). The ``attached mode'' is simpler to use but much less efficient. If the file type is ``multiple\_file'', then in attached (detached) mode each NEMO (XIOS) process will output its own subdomain: if NEMO (XIOS) is running on N cores, the output will be split into N files. If the file type is ``one\_file'', the output is directly recombined into one single file, in both ``detached'' and ``attached'' mode.
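In practice, the choice between the two behaviours is made through the ``type'' attribute of the file\_definition (an illustrative fragment; the other attributes are unchanged):
\begin{alltt} {{\scriptsize
\begin{verbatim}
<file_definition type="multiple_file" ... >   <!-- one file per process          -->
<file_definition type="one_file"      ... >   <!-- single recombined output file -->
\end{verbatim}
}}\end{alltt}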
\subsubsection{Control of xios: the xios context in iodef.xml}
The use of XIOS is controlled through the xios context in iodef.xml:
\begin{tabular}{|p{3cm}|p{6.5cm}|p{2.5cm}|}
\hline
variable name &
description &
example \\
\hline
\hline
buffer\_size &
buffer size used by XIOS to send data from NEMO to XIOS. Larger is more efficient. Note that needed/used buffer sizes are summarized at the end of the job &
25000000 \\
\hline
buffer\_server\_factor\_size &
ratio between NEMO and XIOS buffer size. Should be 2. &
2 \\
\hline
info\_level &
verbosity level (0 to 100) &
0 \\
\hline
using\_server &
activate attached(false) or detached(true) mode &
true \\
\hline
using\_oasis &
xios is used with OASIS(true) or not (false) &
false \\
\hline
oasis\_codes\_id &
when using oasis, define the identifier of NEMO in the namcouple. Note that the identifier of XIOS is xios.x &
oceanx \\
\hline
\end{tabular}
\subsubsection{Number of cpu used by XIOS in detached mode}
The number of cores used by XIOS is specified only when launching the model. The number of cores dedicated to XIOS should be from $\sim$1/10 to $\sim$1/50 of the number of cores dedicated to NEMO (depending on the amount of data to be written). Here is an example using 2 cpus for the io\_server and 62 cpus for opa with mpirun:
\texttt{ mpirun -np 2 ./nemo.exe : -np 62 ./xios\_server.exe }
\subsection{Practical issues}
\subsubsection{Add your own outputs}
It is very easy to add your own outputs with iomput. The following 4 points must be followed.
\begin{description}
\item[1-] in NEMO code, add a \\
\texttt{ CALL iom\_put( 'identifier', array ) } \\
where you want to output a 2D or 3D array.
\item[2-] don't forget to add \\
\texttt{ USE iom ! I/O manager library } \\
in the list of used modules in the upper part of your module.
\item[3-] in the field\_definition part of the xml file, add the definition of your variable using the same identifier you used in the f90 code.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
...
      <field id="identifier" long_name="blabla" ... />
...
\end{verbatim}
}}\end{alltt}
The attributes axis\_ref and grid\_ref must be consistent with the size of the
array passed to iomput. If your array is computed within the surface module at
each nn\_fsbc time-step, add the field definition within the group defined
with the id ``SBC'': $<$group id="SBC"...$>$
\item[4-] add your field in one of the output files \\
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
...
      <field field_ref="identifier" />
...
\end{verbatim}
}}\end{alltt}
\end{description}
% ================================================================
% NetCDF4 support
% ================================================================
\section{NetCDF4 Support (\key{netcdf4})}
\label{DIA_nc4}
Since version 3.3, support for NetCDF4 chunking and (loss-less) compression has
been included. These options build on the standard NetCDF output and allow
the user control over the size of the chunks via namelist settings. Chunking
and compression can lead to significant reductions in file sizes for a small
runtime overhead. For a fuller discussion on chunking and other performance
issues the reader is referred to the NetCDF4 documentation found
\href{http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#Chunking}{here}.
The new features are only available when the code has been linked with a
NetCDF4 library (version 4.1 onwards, recommended) which has been built
with HDF5 support (version 1.8.4 onwards, recommended). Datasets created
with chunking and compression are not backwards compatible with NetCDF3
"classic" format but most analysis codes can be relinked simply with the
new libraries and will then read both NetCDF3 and NetCDF4 files. NEMO
executables linked with NetCDF4 libraries can be made to produce NetCDF3
files by setting the \np{ln\_nc4zip} logical to false in the \textit{namnc4}
namelist:
%------------------------------------------namnc4----------------------------------------------------
\namdisplay{namnc4}
%-------------------------------------------------------------------------------------------------------------
If \key{netcdf4} has not been defined, these namelist parameters are not read.
In this case, \np{ln\_nc4zip} is set false and dummy routines for a few
NetCDF4-specific functions are defined. These functions will not be used but
need to be included so that compilation is possible with NetCDF3 libraries.
When using NetCDF4 libraries, \key{netcdf4} should be defined even if the
intention is to create only NetCDF3-compatible files. This is necessary to
avoid duplication between the dummy routines and the actual routines present
in the library. Most compilers will fail at compile time when faced with
such duplication. Thus when linking with NetCDF4 libraries the user must
define \key{netcdf4} and control the type of NetCDF file produced via the
namelist parameter.
Chunking and compression is applied only to 4D fields and there is no
advantage in chunking across more than one time dimension since previously
written chunks would have to be read back and decompressed before being
added to. Therefore, user control over chunk sizes is provided only for the
three space dimensions. The user sets an approximate number of chunks along
each spatial axis. The actual size of the chunks will depend on global domain
size for mono-processors or, more likely, the local processor domain size for
distributed processing. The derived values are subject to practical minimum
values (to avoid wastefully small chunk sizes) and cannot be greater than the
domain size in any dimension. The algorithm used is:
\begin{alltt} {{\scriptsize
\begin{verbatim}
ichunksz(1) = MIN( idomain_size,MAX( (idomain_size-1)/nn_nchunks_i + 1 ,16 ) )
ichunksz(2) = MIN( jdomain_size,MAX( (jdomain_size-1)/nn_nchunks_j + 1 ,16 ) )
ichunksz(3) = MIN( kdomain_size,MAX( (kdomain_size-1)/nn_nchunks_k + 1 , 1 ) )
ichunksz(4) = 1
\end{verbatim}
}}\end{alltt}
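For readers who want to check the arithmetic, the same rule can be written as a short sketch (plain integer arithmetic, mirroring the Fortran above):

```python
def chunk_size(domain_size, n_chunks, min_chunk):
    """Length of a chunk along one axis: the domain is split into about
    n_chunks pieces, floored at min_chunk and capped at the domain size."""
    return min(domain_size, max((domain_size - 1) // n_chunks + 1, min_chunk))

# mono-processor ORCA2 global domain (182 x 149 x 31)
ichunksz = (chunk_size(182, 4, 16),   # i-dimension
            chunk_size(149, 4, 16),   # j-dimension
            chunk_size(31, 31, 1),    # k-dimension
            1)                        # never chunk across time
```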
\noindent As an example, setting:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
nn_nchunks_i=4, nn_nchunks_j=4 and nn_nchunks_k=31
\end{verbatim}
}}\end{alltt} \vspace{-10pt}
\noindent for a standard ORCA2\_LIM configuration gives chunk sizes of {\small\tt 46x38x1}
in the mono-processor case ($i.e.$ a global domain of {\small\tt 182x149x31}).
An illustration of the potential space savings that NetCDF4 chunking and compression
provides is given in table \ref{Tab_NC4} which compares the results of two short
runs of the ORCA2\_LIM reference configuration with a 4x2 mpi partitioning. Note
the variation in the compression ratio achieved, which chiefly reflects the dry
to wet volume ratio of each processing region.
%------------------------------------------TABLE----------------------------------------------------
\begin{table} \begin{tabular}{lrrr}
Filename & NetCDF3 & NetCDF4 & Reduction\\
&filesize & filesize & \% \\
&(KB) & (KB) & \\
ORCA2\_restart\_0000.nc & 16420 & 8860 & 47\%\\
ORCA2\_restart\_0001.nc & 16064 & 11456 & 29\%\\
ORCA2\_restart\_0002.nc & 16064 & 9744 & 40\%\\
ORCA2\_restart\_0003.nc & 16420 & 9404 & 43\%\\
ORCA2\_restart\_0004.nc & 16200 & 5844 & 64\%\\
ORCA2\_restart\_0005.nc & 15848 & 8172 & 49\%\\
ORCA2\_restart\_0006.nc & 15848 & 8012 & 50\%\\
ORCA2\_restart\_0007.nc & 16200 & 5148 & 69\%\\
ORCA2\_2d\_grid\_T\_0000.nc & 2200 & 1504 & 32\%\\
ORCA2\_2d\_grid\_T\_0001.nc & 2200 & 1748 & 21\%\\
ORCA2\_2d\_grid\_T\_0002.nc & 2200 & 1592 & 28\%\\
ORCA2\_2d\_grid\_T\_0003.nc & 2200 & 1540 & 30\%\\
ORCA2\_2d\_grid\_T\_0004.nc & 2200 & 1204 & 46\%\\
ORCA2\_2d\_grid\_T\_0005.nc & 2200 & 1444 & 35\%\\
ORCA2\_2d\_grid\_T\_0006.nc & 2200 & 1428 & 36\%\\
ORCA2\_2d\_grid\_T\_0007.nc & 2200 & 1148 & 48\%\\
... & ... & ... & .. \\
ORCA2\_2d\_grid\_W\_0000.nc & 4416 & 2240 & 50\%\\
ORCA2\_2d\_grid\_W\_0001.nc & 4416 & 2924 & 34\%\\
ORCA2\_2d\_grid\_W\_0002.nc & 4416 & 2512 & 44\%\\
ORCA2\_2d\_grid\_W\_0003.nc & 4416 & 2368 & 47\%\\
ORCA2\_2d\_grid\_W\_0004.nc & 4416 & 1432 & 68\%\\
ORCA2\_2d\_grid\_W\_0005.nc & 4416 & 1972 & 56\%\\
ORCA2\_2d\_grid\_W\_0006.nc & 4416 & 2028 & 55\%\\
ORCA2\_2d\_grid\_W\_0007.nc & 4416 & 1368 & 70\%\\
\end{tabular}
\caption{ \label{Tab_NC4}
Filesize comparison between NetCDF3 and NetCDF4 with chunking and compression}
\end{table}
%----------------------------------------------------------------------------------------------------
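The dependence of the compression ratio on the dry-to-wet volume ratio can be illustrated with a generic deflate sketch (the same zlib algorithm underlies NetCDF4 compression). This is a stand-alone illustration, independent of NEMO; the field sizes and values are made up:

```python
import random
import struct
import zlib

# Two equally sized float32 "fields" on a 46x38 chunk: one mostly dry
# (zeros over land, hence highly compressible) and one fully wet with
# pseudo-random values (nearly incompressible).
random.seed(0)
n = 46 * 38
dry_field = [0.0] * (3 * n // 4) + [1.5] * (n - 3 * n // 4)
wet_field = [random.random() for _ in range(n)]

dry_bytes = struct.pack(f"{n}f", *dry_field)
wet_bytes = struct.pack(f"{n}f", *wet_field)

# compressed size / original size: smaller is better
ratio_dry = len(zlib.compress(dry_bytes)) / len(dry_bytes)
ratio_wet = len(zlib.compress(wet_bytes)) / len(wet_bytes)
```

The mostly-dry field compresses far better, which is the effect seen in Table~\ref{Tab_NC4}.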
When \key{iomput} is activated with \key{netcdf4}, chunking and
compression parameters for fields produced via \np{iom\_put} calls are
set via a namelist equivalent and identical in name to \textit{namnc4},
located in \np{xmlio\_server.def}. Typically this namelist serves the mean files
whilst the \np{namnc4} in the main namelist file continues to serve the
restart files. This duplication is unfortunate but appropriate since, when
using io\_servers, the domain sizes of the individual files produced by the
io\_server processes may differ from those produced by the individual
processing regions, so that different chunking choices may be desired.
% -------------------------------------------------------------------------------------------------------------
% Tracer/Dynamics Trends
% -------------------------------------------------------------------------------------------------------------
\section[Tracer/Dynamics Trends (TRD)]
{Tracer/Dynamics Trends (\key{trdtra}, \key{trddyn}, \\
\key{trddvor}, \key{trdmld})}
\label{DIA_trd}
%------------------------------------------namtrd----------------------------------------------------
\namdisplay{namtrd}
%-------------------------------------------------------------------------------------------------------------
When the \key{trdtra} and/or \key{trddyn} CPP keys are defined, each
trend of the dynamics and/or temperature and salinity time evolution equations
is stored in three-dimensional arrays just after its computation ($i.e.$ at the end
of each $dyn\cdots.F90$ and/or $tra\cdots.F90$ routine). These trends are then
used in \mdl{trdmod} (see the TRD directory) every \np{nn\_trd} time-steps.
What is done depends on the CPP keys defined:
\begin{description}
\item[\key{trddyn}, \key{trdtra}] : a check of the basin-averaged properties of the momentum
and/or tracer equations is performed ;
\item[\key{trdvor}] : a vertical summation of the momentum tendencies is performed,
then the curl is computed to obtain the barotropic vorticity tendencies, which are output ;
\item[\key{trdmld}] : output of the tracer tendencies averaged vertically,
either over the mixed layer (\np{nn\_ctls}=0),
or over a fixed number of model levels (\np{nn\_ctls}$>$1 provides the number of levels),
or over a spatially varying but temporally fixed number of levels (typically the base
of the winter mixed layer) read in \ifile{ctlsurf\_idx} (\np{nn\_ctls}=1) ;
\end{description}
The units in the output file can be changed using the \np{nn\_ucf} namelist parameter.
For example, in the case of the salinity tendency the units are given by PSU/s/\np{nn\_ucf}.
Setting \np{nn\_ucf}=86400 ($i.e.$ the number of seconds in a day) provides the tendencies in PSU/day.
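The conversion amounts to a simple rescaling of the written value; a minimal sketch, with a made-up tendency value:

```python
# Sketch of the nn_ucf unit conversion for a salinity tendency.
# With nn_ucf = 86400 (seconds per day) a tendency of x PSU/s is
# written as x * nn_ucf, i.e. expressed in PSU/day.
nn_ucf = 86400
trend_psu_per_s = 2.5e-7                  # hypothetical tendency, PSU/s
trend_written = trend_psu_per_s * nn_ucf  # value in the output file, PSU/day
```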
When \key{trdmld} is defined, two time averaging procedures are proposed.
Setting \np{ln\_trdmld\_instant} to \textit{true}, a simple time averaging is performed,
so that the resulting tendency is the contribution to the change of a quantity between
the two instantaneous values taken at the extremities of the time averaging period.
Setting \np{ln\_trdmld\_instant} to \textit{false}, a double time averaging is performed,
so that the resulting tendency is the contribution to the change of a quantity between
two \textit{time mean} values. The latter option requires the use of an extra file, \ifile{restart\_mld}
(\np{ln\_trdmld\_restart}=true), to restart a run.
Note that the mixed layer tendency diagnostic can also be applied to biogeochemical models
via the \key{trdtrc} and \key{trdmld\_trc} CPP keys.
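The basin-averaged check mentioned above rests on a simple budget-closure idea: the stored trends must sum to the total tendency of the variable. A toy sketch with made-up scalar values (the trend names and numbers are illustrative, not NEMO output):

```python
# Budget-closure check: the sum of the individually stored trends
# should equal the total tendency of the (basin-averaged) variable.
trends = {"advection": 1.2e-6, "lateral_diff": -0.3e-6,
          "vertical_diff": -0.5e-6, "forcing": 0.1e-6}
total_tendency = 0.5e-6  # hypothetical (T^{n+1} - T^{n-1}) / (2*dt)

# a non-negligible residual would reveal a missing or mis-stored trend
residual = total_tendency - sum(trends.values())
```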
% -------------------------------------------------------------------------------------------------------------
% On-line Floats trajectories
% -------------------------------------------------------------------------------------------------------------
\section{On-line Floats trajectories (FLO) (\key{floats})}
\label{FLO}
%--------------------------------------------namflo-------------------------------------------------------
\namdisplay{namflo}
%--------------------------------------------------------------------------------------------------------------
The on-line computation of floats advected either by the three-dimensional velocity
field or constrained to remain at a given depth ($w = 0$ in the computation) was
introduced in the system during the CLIPPER project. The algorithm used is based
either on the work of \cite{Blanke_Raynaud_JPO97} (default option), or on a $4^{th}$
order Runge-Kutta algorithm (\np{ln\_flork4}=true). Note that the \cite{Blanke_Raynaud_JPO97}
algorithm has the advantage of providing trajectories which are consistent with the
numerics of the code, so that the trajectories never intersect the bathymetry.
\subsubsection{Input data: initial coordinates}
Initial coordinates can be given either with the Ariane Tools convention
(IJK coordinates, \np{ln\_ariane}=true) or with longitude and latitude.
With the Ariane convention, the input filename is \ifile{init\_float\_ariane}. Its format is:
\texttt{ I J K nisobfl itrash itrash }
\noindent with:
- I,J,K : indexes of the initial position
- nisobfl : 0 for an isobaric float, 1 for a float following the $w$ velocity
- itrash : set to zero; a dummy variable to respect the Ariane Tools convention
- itrash : set to zero; a dummy variable to respect the Ariane Tools convention
\noindent Example:\\
\noindent \texttt{ 100.00000 90.00000 -1.50000 1.00000 0.00000}\\
\texttt{ 102.00000 90.00000 -1.50000 1.00000 0.00000}\\
\texttt{ 104.00000 90.00000 -1.50000 1.00000 0.00000}\\
\texttt{ 106.00000 90.00000 -1.50000 1.00000 0.00000}\\
\texttt{ 108.00000 90.00000 -1.50000 1.00000 0.00000}\\
In the other case (longitude and latitude), the input filename is \ifile{init\_float}. Its format is:
\texttt{ Long Lat depth nisobfl ngrpfl itrash}
\noindent with:
- Long, Lat, depth : longitude, latitude, depth
- nisobfl : 0 for an isobaric float, 1 for a float following the $w$ velocity
- ngrpfl : number identifying the group to which the float belongs
- itrash : set to 1; a dummy variable.
\noindent Example:
\noindent\texttt{ 20.0 0.0 0.0 0 1 1 }\\
\texttt{ -21.0 0.0 0.0 0 1 1 }\\
\texttt{ -22.0 0.0 0.0 0 1 1 }\\
\texttt{ -23.0 0.0 0.0 0 1 1 }\\
\texttt{ -24.0 0.0 0.0 0 1 1 }\\
\np{jpnfl} is the total number of floats during the run.
When initial positions are read in a restart file (\np{ln\_rstflo}=.TRUE.), \np{jpnnewflo}
new floats can be added in the initialization file.
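A reader for the Ariane-convention file can be sketched as follows. This is an illustrative parser assuming whitespace-separated columns, not the Fortran reader actually used by NEMO:

```python
def read_init_float_ariane(lines):
    """Illustrative reader for the Ariane-convention float file:
    whitespace-separated columns I J K nisobfl followed by dummy
    value(s). A sketch only, not NEMO's own reader."""
    floats = []
    for line in lines:
        cols = line.split()
        i, j, k = (float(c) for c in cols[:3])
        nisobfl = int(float(cols[3]))       # 0: isobaric, 1: follows w
        floats.append({"i": i, "j": j, "k": k, "nisobfl": nisobfl})
    return floats

# two lines in the format of the example above
sample = ["100.00000 90.00000 -1.50000 1.00000 0.00000",
          "102.00000 90.00000 -1.50000 1.00000 0.00000"]
```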
\subsubsection{Output data}
\np{nn\_writefl} is the frequency of writing in the float output file and \np{nn\_stockfl}
is the frequency of creation of the float restart file.
Output data can be written in ascii files (\np{ln\_flo\_ascii}=.TRUE.). In that case,
the output filename is \texttt{trajec\_float}.
Another possible writing format is NetCDF (\np{ln\_flo\_ascii}=.FALSE.). There are two possibilities:
- if \key{iomput} is used, the outputs are selected in \np{iodef.xml}, by adding the float
trajectory fields to the file description section;
- if \key{iomput} is not used, a file called \texttt{trajec\_float.nc} will be created by the IOIPSL library.
See also the \href{http://stockage.univ-brest.fr/~grima/Ariane/}{Ariane web site}, which describes
the off-line use of this marvellous diagnostic tool.
% -------------------------------------------------------------------------------------------------------------
% Harmonic analysis of tidal constituents
% -------------------------------------------------------------------------------------------------------------
\section{Harmonic analysis of tidal constituents (\key{diaharm}) }
\label{DIA_diag_harm}
A module is available to compute the amplitude and phase of tidal waves.
This diagnostic is activated with \key{diaharm}.
%------------------------------------------namdia_harm----------------------------------------------------
\namdisplay{namdia_harm}
%----------------------------------------------------------------------------------------------------------
Concerning the on-line harmonic analysis, some parameters are available in the namelist:
- \texttt{nit000\_han} is the first time step used for the harmonic analysis
- \texttt{nitend\_han} is the last time step used for the harmonic analysis
- \texttt{nstep\_han} is the time step frequency for the harmonic analysis
- \texttt{nb\_ana} is the number of harmonics to analyse
- \texttt{tname} is an array with the names of the tidal constituents to analyse
\texttt{nit000\_han} and \texttt{nitend\_han} must lie between \texttt{nit000} and \texttt{nitend} of the simulation.
The restart capability is not implemented.
The harmonic analysis solves the following equation:
\begin{equation}
h_{i} - A_{0} - \sum^{nb\_ana}_{j=1}[A_{j}\cos(\nu_{j}t_{i}-\phi_{j})] = e_{i}
\end{equation}
with $A_{j}$, $\nu_{j}$ and $\phi_{j}$ the amplitude, frequency and phase of each wave,
and $e_{i}$ the error. $h_{i}$ is the sea level at time $t_{i}$ and $A_{0}$ is the mean sea level. \\
We can rewrite this equation as:
\begin{equation}
h_{i} - A_{0} - \sum^{nb\_ana}_{j=1}[C_{j}\cos(\nu_{j}t_{i})+S_{j}\sin(\nu_{j}t_{i})] = e_{i}
\end{equation}
with $A_{j}=\sqrt{C^{2}_{j}+S^{2}_{j}}$ and $\phi_{j}=\arctan(S_{j}/C_{j})$.
The output provides $C_{j}$ and $S_{j}$ for each tidal wave.
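The least-squares problem above can be sketched for a single constituent of known frequency by solving the 3x3 normal equations directly. This is an illustration of the method, not the NEMO implementation; the time series is synthetic:

```python
import math

def harmonic_fit(t, h, nu):
    """Least-squares fit of h_i = A0 + C*cos(nu*t_i) + S*sin(nu*t_i)
    for one constituent of known frequency nu, via the 3x3 normal
    equations and Gaussian elimination. Returns the mean level A0,
    the amplitude sqrt(C^2+S^2) and the phase arctan(S/C)."""
    c = [math.cos(nu * ti) for ti in t]
    s = [math.sin(nu * ti) for ti in t]
    G = [[float(len(t)), sum(c), sum(s)],
         [sum(c), sum(x * x for x in c), sum(x * y for x, y in zip(c, s))],
         [sum(s), sum(x * y for x, y in zip(c, s)), sum(y * y for y in s)]]
    b = [sum(h),
         sum(hi * ci for hi, ci in zip(h, c)),
         sum(hi * si for hi, si in zip(h, s))]
    for p in range(3):                       # forward elimination
        for r in range(p + 1, 3):
            f = G[r][p] / G[p][p]
            G[r] = [grc - f * gpc for grc, gpc in zip(G[r], G[p])]
            b[r] -= f * b[p]
    x = [0.0, 0.0, 0.0]                      # back substitution
    for p in (2, 1, 0):
        x[p] = (b[p] - sum(G[p][q] * x[q] for q in range(p + 1, 3))) / G[p][p]
    a0, C, S = x
    return a0, math.hypot(C, S), math.atan2(S, C)

# synthetic sea level: A0 = 0.3, amplitude 1.2, phase 0.7, nu = 1.4
t = [0.5 * i for i in range(200)]
h = [0.3 + 1.2 * math.cos(1.4 * ti - 0.7) for ti in t]
a0, amp, phase = harmonic_fit(t, h, 1.4)
```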
% -------------------------------------------------------------------------------------------------------------
% Sections transports
% -------------------------------------------------------------------------------------------------------------
\section{Transports across sections (\key{diadct}) }
\label{DIA_diag_dct}
A module is available to compute the transport of volume, heat and salt through sections. This diagnostic
is activated with \key{diadct}.
Each section is defined by the coordinates of its two extremities. The pathways between them are constructed
using tools which can be found in \texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT} and are written in a binary file
\texttt{section\_ijglobal.diadct\_ORCA2\_LIM}, which is later read by NEMO to compute the on-line transports.
The on-line transport module creates three output ascii files:
- \texttt{volume\_transport} for volume transports (unit: $10^{6} m^{3} s^{-1}$)
- \texttt{heat\_transport} for heat transports (unit: $10^{15} W$)
- \texttt{salt\_transport} for salt transports (unit: $10^{9} kg\,s^{-1}$)\\
Namelist parameters control how frequently the flows are summed and the time scales over which
they are averaged, as well as the level of output for debugging:
%------------------------------------------namdct----------------------------------------------------
\namdisplay{namdct}
%-------------------------------------------------------------------------------------------------------------
\texttt{nn\_dct}: frequency of computation of the instantaneous transports
\texttt{nn\_dctwri}: frequency of writing (mean of the instantaneous transports)
\texttt{nn\_debug}: debugging level for the sections
\subsubsection{To create a binary file containing the pathway of each section}
In \texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT/run}, the file \texttt{list\_sections.ascii\_global}
contains a list of all the sections that are to be computed (this list of sections is based on the MERSEA project metrics).
Another file is available for the GYRE configuration (\texttt{list\_sections.ascii\_GYRE}).
Each section is defined by:
\noindent \texttt{ long1 lat1 long2 lat2 nclass (ok/no)strpond (no)ice section\_name }\\
with:
- \texttt{long1 lat1} , coordinates of the first extremity of the section;
- \texttt{long2 lat2} , coordinates of the second extremity of the section;
- \texttt{nclass} the number of bounds of your classes (e.g. 3 bounds for 2 classes);
- \texttt{okstrpond} to compute heat and salt transports, \texttt{nostrpond} otherwise;
- \texttt{ice} to compute surface and volume ice transports, \texttt{noice} otherwise. \\
\noindent The computed transports, and the directions of positive
and negative flow, do not depend on the order of the two extremities in this file.\\
\noindent If \texttt{nclass} $\neq$ 0, the next lines contain the class type and the \texttt{nclass} bounds:
\texttt{long1 lat1 long2 lat2 nclass (ok/no)strpond (no)ice section\_name}
\texttt{classtype}
\texttt{zbound1}
\texttt{zbound2}
\texttt{.}
\texttt{.}
\texttt{zbound(nclass-1)}
\texttt{zbound(nclass)}
\noindent where \texttt{classtype} can be:
- \texttt{zsal} for salinity classes
- \texttt{ztem} for temperature classes
- \texttt{zlay} for depth classes
- \texttt{zsigi} for insitu density classes
- \texttt{zsigp} for potential density classes \\
The script \texttt{job.ksh} computes the pathway for each section and creates a binary file
\texttt{section\_ijglobal.diadct\_ORCA2\_LIM} which is read by NEMO. \\
It is possible to use these tools for new configurations: \texttt{job.ksh} has to be updated
with the coordinates file name and path. \\
Examples of two sections, the ACC\_Drake\_Passage with no classes, and the
ATL\_Cuba\_Florida with 4 temperature classes (5 class bounds), are shown below:
\noindent \texttt{ -68. -54.5 -60. -64.7 00 okstrpond noice ACC\_Drake\_Passage}
\noindent \texttt{ -80.5 22.5 -80.5 25.5 05 nostrpond noice ATL\_Cuba\_Florida}
\noindent \texttt{ztem}
\noindent \texttt{-2.0}
\noindent \texttt{ 4.5}
\noindent \texttt{ 7.0}
\noindent \texttt{12.0}
\noindent \texttt{40.0}
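A reader for such section entries can be sketched as follows. This is an illustrative parser of the format described above, assuming whitespace-separated fields; it is not part of the SECTIONS\_DIADCT tool itself:

```python
def parse_section(lines):
    """Illustrative reader for one entry of list_sections.ascii:
    a header 'long1 lat1 long2 lat2 nclass (ok/no)strpond (no)ice name'
    followed, when nclass != 0, by the class type and the nclass
    class bounds, one per line. A sketch, not the NEMO tool."""
    head = lines[0].split()
    section = {"lon1": float(head[0]), "lat1": float(head[1]),
               "lon2": float(head[2]), "lat2": float(head[3]),
               "nclass": int(head[4]),
               "strpond": head[5] == "okstrpond",
               "ice": head[6] == "ice",
               "name": head[7]}
    if section["nclass"] != 0:
        section["classtype"] = lines[1].strip()
        section["bounds"] = [float(x) for x in lines[2:2 + section["nclass"]]]
    return section

# the ATL_Cuba_Florida example above
cuba_florida = parse_section(
    ["-80.5 22.5 -80.5 25.5 05 nostrpond noice ATL_Cuba_Florida",
     "ztem", "-2.0", "4.5", "7.0", "12.0", "40.0"])
```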
\subsubsection{ To read the output files }
The output format is :
{\small\texttt{date, time-step number, section number, section name, section slope coefficient, class number,
class name, class bound 1, class bound 2, transport\_direction1, transport\_direction2, transport\_total}}\\
For sections with classes, the first \texttt{nclass-1} lines correspond to the transport for each class
and the last line corresponds to the total transport summed over all classes. For sections with no classes, class number
\texttt{1} corresponds to the \texttt{total class} and this class is called \texttt{N}, meaning \texttt{none}.\\
\noindent \texttt{transport\_direction1} is the positive part of the transport ($\geq 0$).
\noindent \texttt{transport\_direction2} is the negative part of the transport ($\leq 0$).\\
\noindent The \texttt{section slope coefficient} gives information about the significance of transports signs and direction:\\
\begin{tabular}{|c|c|c|c|p{4cm}|}
\hline
section slope coefficient & section type & direction 1 & direction 2 & total transport \\ \hline
0. & horizontal & northward & southward & positive: northward \\ \hline
1000. & vertical & eastward & westward & positive: eastward \\ \hline
$\neq$ 0., $\neq$ 1000. & diagonal & eastward & westward & positive: eastward \\ \hline
\end{tabular}
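A minimal consistency check on one output record can be sketched as below: the total transport must equal the sum of its positive and negative parts. The record itself is made up for illustration, following the field order described above (whitespace separation is assumed):

```python
def check_transport_record(fields):
    """Check one record of the ascii transport files:
    transport_total must equal transport_direction1 (>= 0) plus
    transport_direction2 (<= 0), the last three fields of the record."""
    d1, d2, total = (float(x) for x in fields[-3:])
    assert d1 >= 0.0 and d2 <= 0.0
    return abs(total - (d1 + d2)) < 1e-6 * max(1.0, abs(total))

# hypothetical record: date, step, section number/name, slope coeff.,
# class number/name, class bounds, then the three transports
record = ("20100101 120 001 ACC_Drake_Passage 0.0 1 N "
          "0.0 0.0 145.8 -8.3 137.5").split()
ok = check_transport_record(record)
```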
% -------------------------------------------------------------------------------------------------------------
% Other Diagnostics
% -------------------------------------------------------------------------------------------------------------
\section{Other Diagnostics (\key{diahth}, \key{diaar5})}
\label{DIA_diag_others}
Aside from the standard model variables, other diagnostics can be computed
on-line. The available ready-to-add diagnostics routines can be found in directory DIA.
Among the available diagnostics the following ones are obtained when defining
the \key{diahth} CPP key:
- the mixed layer depth (based on a density criterion, \citet{de_Boyer_Montegut_al_JGR04}) (\mdl{diahth})
- the turbocline depth (based on a turbulent mixing coefficient criterion) (\mdl{diahth})
- the depth of the 20\deg C isotherm (\mdl{diahth})
- the depth of the thermocline (maximum of the vertical temperature gradient) (\mdl{diahth})
The poleward heat and salt transports, their advective and diffusive components, and
the meridional stream function can be computed on-line in \mdl{diaptr} by setting
\np{ln\_diaptr} to true (see the \textit{namptr} namelist below).
When \np{ln\_subbas}~=~true, transports and stream function are computed
for the Atlantic, Indian, Pacific and Indo-Pacific Oceans (defined north of 30\deg S)
as well as for the World Ocean. The sub-basin decomposition requires an input file
(\ifile{subbasins}) which contains three 2D mask arrays, the Indo-Pacific mask
being deduced from the sum of the Indian and Pacific masks (Fig~\ref{Fig_mask_subasins}).
%------------------------------------------namptr----------------------------------------------------
\namdisplay{namptr}
%-------------------------------------------------------------------------------------------------------------
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=1.0\textwidth]{./TexFiles/Figures/Fig_mask_subasins.pdf}
\caption{ \label{Fig_mask_subasins}
Decomposition of the World Ocean (here ORCA2) into the sub-basins used to compute
the heat and salt transports as well as the meridional stream-function: Atlantic basin (red),
Pacific basin (green), Indian basin (blue), Indo-Pacific basin (blue+green).
Note that semi-enclosed seas (Red, Mediterranean and Baltic seas) as well as Hudson Bay
are removed from the sub-basins. Note also that the Arctic Ocean has been split
into Atlantic and Pacific basins along the North fold line. }
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
In addition, a series of diagnostics has been added in \mdl{diaar5}.
They correspond to outputs that are required for AR5 simulations
(see Section \ref{DIA_steric} below for one of them).
Activating those outputs requires the \key{diaar5} CPP key to be defined.
\\
\\
% ================================================================
% Steric effect in sea surface height
% ================================================================
\section{Diagnosing the Steric effect in sea surface height}
\label{DIA_steric}
Changes in steric sea level are caused when changes in the density of the water
column imply an expansion or contraction of the column. It is essentially produced
through surface heating/cooling and to a lesser extent through non-linear effects of
the equation of state (cabbeling, thermobaricity...).
Non-Boussinesq models contain all ocean effects within the ocean acting
on the sea level. In particular, they include the steric effect. In contrast,
Boussinesq models, such as \NEMO, conserve volume, rather than mass,
and so do not properly represent expansion or contraction. The steric effect is
therefore not explicitly represented.
This approximation does not represent a serious error with respect to the flow field
calculated by the model \citep{Greatbatch_JGR94}, but extra attention is required
when investigating sea level, as steric changes are an important
contribution to local changes in sea level on seasonal and climatic time scales.
This is especially true for investigation into sea level rise due to global warming.
Fortunately, the steric contribution to the sea level consists of a spatially uniform
component that can be diagnosed by considering the mass budget of the world
ocean \citep{Greatbatch_JGR94}.
In order to better understand how global mean sea level evolves and thus how
the steric sea level can be diagnosed, we compare, in the following, the
non-Boussinesq and Boussinesq cases.
Let us denote
$\mathcal{M}$ the total mass of liquid seawater ($\mathcal{M}=\int_D \rho dv$),
$\mathcal{V}$ the total volume of seawater ($\mathcal{V}=\int_D dv$),
$\mathcal{A}$ the total surface of the ocean ($\mathcal{A}=\int_S ds$),
$\bar{\rho}$ the global mean seawater (\textit{in situ}) density ($\bar{\rho}= 1/\mathcal{V} \int_D \rho \,dv$), and
$\bar{\eta}$ the global mean sea level ($\bar{\eta}=1/\mathcal{A}\int_S \eta \,ds$).
A non-Boussinesq fluid conserves mass. It satisfies the following relations:
\begin{equation} \label{Eq_MV_nBq}
\begin{split}
\mathcal{M} &= \mathcal{V} \;\bar{\rho} \\
\mathcal{V} &= \mathcal{A} \;\bar{\eta}
\end{split}
\end{equation}
The temporal change in total mass is obtained from the density conservation equation:
\begin{equation} \label{Eq_Co_nBq}
\frac{1}{e_3} \partial_t ( e_3\,\rho) + \nabla( \rho \, \textbf{U} ) = \left. \frac{\textit{emp}}{e_3}\right|_\textit{surface}
\end{equation}
where $\rho$ is the \textit{in situ} density, and \textit{emp} the surface mass
exchanges with the other media of the Earth system (atmosphere, sea-ice, land).
Its global average leads to the total mass change
\begin{equation} \label{Eq_Mass_nBq}
\partial_t \mathcal{M} = \mathcal{A} \;\overline{\textit{emp}}
\end{equation}
where $\overline{\textit{emp}}=\frac{1}{\mathcal{A}}\int_S \textit{emp}\,ds$ is the mean mass flux
through the ocean surface.
Bringing \eqref{Eq_Mass_nBq} and the time derivative of \eqref{Eq_MV_nBq}
together leads to the evolution equation of the mean sea level
\begin{equation} \label{Eq_ssh_nBq}
\partial_t \bar{\eta} = \frac{\overline{\textit{emp}}}{ \bar{\rho}}
- \frac{\mathcal{V}}{\mathcal{A}} \;\frac{\partial_t \bar{\rho} }{\bar{\rho}}
\end{equation}
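For clarity, the intermediate step can be written explicitly: using \eqref{Eq_MV_nBq}, the time derivative of the total mass is
\begin{equation}
\partial_t \mathcal{M} = \partial_t \left( \mathcal{A}\,\bar{\eta}\,\bar{\rho} \right)
= \mathcal{A}\,\bar{\rho}\;\partial_t \bar{\eta} + \mathcal{A}\,\bar{\eta}\;\partial_t \bar{\rho}
\end{equation}
($\mathcal{A}$ being constant in time); equating this with \eqref{Eq_Mass_nBq} and dividing by
$\mathcal{A}\,\bar{\rho}$, with $\bar{\eta}=\mathcal{V}/\mathcal{A}$, directly gives \eqref{Eq_ssh_nBq}.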
The first term in equation \eqref{Eq_ssh_nBq} alters sea level by adding or
subtracting mass from the ocean.
The second term arises from temporal changes in the global mean
density; $i.e.$ from steric effects.
In a Boussinesq fluid, $\rho$ is replaced by $\rho_o$ in all the equations except where $\rho$
appears multiplied by the gravity ($i.e.$ in the hydrostatic balance of the primitive equations).
In particular, the mass conservation equation, \eqref{Eq_Co_nBq}, degenerates into
the incompressibility equation:
\begin{equation} \label{Eq_Co_Bq}
\frac{1}{e_3} \partial_t ( e_3 ) + \nabla( \textbf{U} ) = \left. \frac{\textit{emp}}{\rho_o \,e_3}\right|_ \textit{surface}
\end{equation}
and the global average of this equation now gives the temporal change of the total volume,
\begin{equation} \label{Eq_V_Bq}
\partial_t \mathcal{V} = \mathcal{A} \;\frac{\overline{\textit{emp}}}{\rho_o}
\end{equation}
Only the volume is conserved, not the mass; more precisely, the conserved mass is the
Boussinesq mass, $\mathcal{M}_o = \rho_o \mathcal{V}$. The total volume (or equivalently
the global mean sea level) is altered only by net volume fluxes across the ocean surface,
not by changes in mean mass of the ocean: the steric effect is missing in a Boussinesq fluid.
Nevertheless, following \citet{Greatbatch_JGR94}, the steric effect on the volume can be
diagnosed by considering the mass budget of the ocean.
The apparent changes in $\mathcal{M}$, mass of the ocean, which are not induced by surface
mass flux must be compensated by a spatially uniform change in the mean sea level due to
expansion/contraction of the ocean \citep{Greatbatch_JGR94}. In other words, the Boussinesq
mass, $\mathcal{M}_o$, can be related to $\mathcal{M}$, the total mass of the ocean seen
by the Boussinesq model, via the steric contribution to the sea level, $\eta_s$, a spatially
uniform variable, as follows:
\begin{equation} \label{Eq_M_Bq}
\mathcal{M}_o = \mathcal{M} + \rho_o \,\eta_s \,\mathcal{A}
\end{equation}
Any change in $\mathcal{M}$ which cannot be explained by the net mass flux through
the ocean surface is converted into a mean change in sea level. Introducing the total density
anomaly, $\mathcal{D}= \int_D d_a \,dv$, where $d_a= (\rho -\rho_o ) / \rho_o$
is the density anomaly used in \NEMO (cf. \S\ref{TRA_eos}) in \eqref{Eq_M_Bq}
leads to a very simple form for the steric height:
\begin{equation} \label{Eq_steric_Bq}
\eta_s = - \frac{1}{\mathcal{A}} \mathcal{D}
\end{equation}
The above formulation of the steric height of a Boussinesq ocean calls for four remarks.
First, one could be tempted to define $\rho_o$ as the initial value of $\mathcal{M}/\mathcal{V}$,
$i.e.$ to set $\mathcal{D}_{t=0}=0$, so that the initial steric height is zero. We do not
recommend that. Indeed, in this case $\rho_o$ depends on the initial state of the ocean.
Since $\rho_o$ has a direct effect on the dynamics of the ocean (it appears in the pressure
gradient term of the momentum equation) it is definitely not a good idea when
inter-comparing experiments.
We rather recommend to fix $\rho_o$ once and for all to $1035\;kg\,m^{-3}$. This value is a
sensible choice for the reference density used in a Boussinesq ocean climate model since,
with the exception of only a small percentage of the ocean, density in the World Ocean
varies by no more than 2$\%$ from this value (\cite{Gill1982}, page 47).
Second, we have assumed here that the total ocean surface, $\mathcal{A}$, does not
change when the sea level is changing, as is the case in all global ocean GCMs
(wetting and drying of grid points is not allowed).
Third, the discretisation of \eqref{Eq_steric_Bq} depends on the type of free surface
which is considered. In the non-linear free surface case, $i.e.$ with \key{vvl} defined,
it is given by
\begin{equation} \label{Eq_discrete_steric_Bq}
\eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t} e_{2t} e_{3t} }
{ \sum_{i,\,j,\,k} e_{1t} e_{2t} e_{3t} }
\end{equation}
whereas in the linear free surface, the volume above the \textit{z=0} surface must be explicitly taken
into account to better approximate the total ocean mass and thus the steric sea level:
\begin{equation} \label{Eq_discrete_steric_Bq_lin}
\eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t}e_{2t}e_{3t} + \sum_{i,\,j} d_a\; e_{1t}e_{2t} \eta }
{\sum_{i,\,j,\,k} e_{1t}e_{2t}e_{3t} + \sum_{i,\,j} e_{1t}e_{2t} \eta }
\end{equation}
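The non-linear free surface form above is simply minus the volume-weighted mean of the density anomaly; a minimal sketch on a toy grid (flat lists of cell values, made up for illustration; not the diaar5.F90 code):

```python
def steric_height(d_a, volume):
    """Discrete steric height, non-linear free surface form:
    eta_s = - sum_k(d_a_k * vol_k) / sum_k(vol_k), where d_a is the
    density anomaly and volume the t-cell volume e1t*e2t*e3t,
    both given as flat lists over the wet cells."""
    numerator = sum(d * v for d, v in zip(d_a, volume))
    return -numerator / sum(volume)

# two equal-volume cells with anomalies 0.001 and 0.003
eta_s = steric_height([0.001, 0.003], [1.0, 1.0])
```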
The fourth and last remark concerns the effective sea level and the presence of sea-ice.
In the real ocean, sea ice (and snow above it) depresses the liquid seawater through
its mass loading. This depression is a result of the mass of sea ice/snow system acting
on the liquid ocean. There is, however, no dynamical effect associated with these depressions
in the liquid ocean sea level, so that there are no associated ocean currents. Hence, the
dynamically relevant sea level is the effective sea level, $i.e.$ the sea level as if sea ice
(and snow) were converted to liquid seawater \citep{Campin_al_OM08}. However,
in the current version of \NEMO the sea-ice is levitating above the ocean without
mass exchanges between ice and ocean. Therefore the model effective sea level
is always given by $\eta + \eta_s$, whether or not there is sea ice present.
In AR5 outputs, the thermosteric sea level is required. It is the steric sea level due to
changes in ocean density arising solely from changes in temperature. It is given by:
\begin{equation} \label{Eq_thermosteric_Bq}
\eta_s = - \frac{1}{\mathcal{A}} \int_D d_a(T,S_o,p_o) \,dv
\end{equation}
where $S_o$ and $p_o$ are the initial salinity and pressure, respectively.
Both steric and thermosteric sea level are computed in \mdl{diaar5}, which requires
\key{diaar5} to be defined in order to be called.
% ================================================================