# Changeset 10354 for NEMO/trunk/doc/latex/NEMO/subfiles/chap_DIA.tex

Ignore:
Timestamp:
2018-11-21T17:59:55+01:00
Message:

Vast edit of the LaTeX subfiles to improve readability by breaking sentences in a more suitable way.
Every sentence begins on a new line and, if necessary, is split at around 110 characters length for side-by-side visualisation;
this setting may not suit everyone, but something had to be chosen.
Punctuation was the primary trigger for the splitting process, otherwise subordinators and coordinators, in order to mostly keep a meaning for each line

File:
1 edited

r10146
\label{sec:DIA_io_old} The model outputs are of three types: the restart file, the output listing, and the diagnostic output file(s). The restart file is used internally by the code when the user wants to start the model with initial conditions defined by a previous simulation. It contains all the information that is necessary in order for there to be no changes in the model results (even at the computer precision) between a run performed with several restarts and the same run performed in one step. It should be noted that this requires that the restart file contains two consecutive time steps for all the prognostic variables, and that it is saved in the same binary format as the one used by the computer that is to read it (in particular, 32-bit binary IEEE format must not be used for this file). The output listing and file(s) are predefined but should be checked and, if necessary, adapted to the user's needs. The output listing is stored in the $ocean.output$ file. The information is printed from within the code on the logical unit $numout$. By default, diagnostic output files are written in NetCDF format.
Since version 3.2, when defining \key{iomput}, an I/O server has been added which provides more flexibility in the choice of the fields to be written, as well as how the writing work is distributed over the processors in massively parallel computing. A complete description of the use of this I/O server is presented in the next section. By default, \key{iomput} is not defined and NEMO produces NetCDF output with the old IOIPSL library, which has been kept for compatibility and its easy installation. However, the IOIPSL library is quite inefficient on parallel machines and, since version 3.2, many diagnostic options have been added presuming the use of \key{iomput}. The usefulness of the default IOIPSL-based option is expected to reduce with each new release. If \key{iomput} is not defined, output files and content are defined in the \mdl{diawri} module and contain mean (or instantaneous if \key{diainstant} is defined) values over a regular period of nn\_write time-steps (namelist parameter).
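For reference, the relevant namelist entry might look as follows (a sketch assuming the standard \ngn{namrun} namelist; the value shown is purely illustrative):
\begin{forlines}
&namrun        !   parameters of the run
   nn_write = 5840   ! frequency of write in the output file (illustrative value)
/
\end{forlines}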
\label{sec:DIA_iom} Since version 3.2, iomput is the NEMO output interface of choice. It has been designed to be simple to use, flexible and efficient. The two main purposes of iomput are: \begin{enumerate} \item The complete and flexible control of the output files through external XML files adapted by the user from standard templates. \item To achieve high performance and scalable output through the optional distribution of all diagnostic output related tasks to dedicated processes. \end{enumerate} \begin{itemize} \item The choice of output frequencies, which can be different for each file (including real months and years). \item The choice of file contents; complete flexibility over which data are written in which files (the same data can be written in different files). \item The possibility to split output files at a chosen frequency. \item The possibility to extract a vertical or a horizontal subdomain.
\item The choice of the temporal operation to perform, $e.g.$: average, accumulate, instantaneous, min, max and once. \item Control over metadata via a large XML "database" of possible output fields. \end{itemize} In addition, iomput allows the user to add the output of any new variable (scalar, 2D or 3D) to the code in a very easy way. All details of iomput functionalities are listed in the following subsections. Examples of the XML files that control the outputs can be found in: \path{NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef.xml}, \path{NEMOGCM/CONFIG/SHARED/field_def.xml} and \path{NEMOGCM/CONFIG/SHARED/domain_def.xml}. \\ The second functionality targets output performance when running in parallel (\key{mpp\_mpi}). Iomput provides the possibility to specify N dedicated I/O processes (in addition to the NEMO processes) to collect and write the outputs. With an appropriate choice of N by the user, the bottleneck associated with the writing of the output files can be greatly reduced. In version 3.6, the iom\_put interface depends on an external code called \href{https://forge.ipsl.jussieu.fr/ioserver/browser/XIOS/branchs/xios-1.0}{XIOS-1.0} (use of revision 618 or higher is required).
This new IO server can take advantage of the parallel I/O functionality of NetCDF4 to create a single output file and therefore to bypass the rebuilding phase. Note that writing in parallel into the same NetCDF files requires that your NetCDF4 library is linked to an HDF5 library that has been correctly compiled ($i.e.$ with the configure option $--$enable-parallel). Note that the files created by iomput through XIOS are incompatible with NetCDF3. All post-processing and visualization tools must therefore be compatible with NetCDF4 and not only NetCDF3. Even if not using the parallel I/O functionality of NetCDF4, using N dedicated I/O servers, where N is typically much less than the number of NEMO processors, will reduce the number of output files created. This can greatly reduce the post-processing burden usually associated with using large numbers of NEMO processors. Note that for smaller configurations, the rebuilding phase can be avoided, even without a parallel-enabled NetCDF4 library, simply by employing only one dedicated I/O server.
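For illustration, a parallel-enabled HDF5 build would be configured along the following lines (a sketch only; compiler wrappers and additional options depend on your system): \cmd|./configure --enable-parallel CC=mpicc|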
\subsection{XIOS: XML Inputs-Outputs Server} \subsubsection{Attached or detached mode?} Iomput is based on \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS}, the io\_server developed by Yann Meurdesoif from IPSL. The behaviour of the I/O subsystem is controlled by settings in the external XML files listed above. \xmlline|| The {\tt using\_server} setting determines whether or not the server will be used in \textit{attached mode} (as a library) [{\tt> false <}] or in \textit{detached mode} (as an external executable on N additional, dedicated cpus) [{\tt > true <}]. The \textit{attached mode} is simpler to use but much less efficient for massively parallel applications. The type of each file can be either ''multiple\_file'' or ''one\_file''. In \textit{attached mode} and if the type of file is ''multiple\_file'', then each NEMO process will also act as an IO server and produce its own set of output files. Superficially, this emulates the standard behaviour in previous versions. However, the subdomain written out by each process does not correspond to the \forcode{jpi x jpj x jpk} domain actually computed by the process (although it may if \forcode{jpni=1}). Instead each process will have collected and written out a number of complete longitudinal strips. If the ''one\_file'' option is chosen then all processes will collect their longitudinal strips and write (in parallel) to a single output file.
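As an illustration, the {\tt using\_server} flag is set through a variable in the XIOS context of iodef.xml, along the following lines (a sketch of the XIOS-1.0 syntax; check the template iodef.xml distributed with your configuration for the exact layout):
\begin{xmllines}
<context id="xios">
    <variable_definition>
        <variable id="using_server" type="boolean">false</variable>
    </variable_definition>
</context>
\end{xmllines}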
In \textit{detached mode} and if the type of file is ''multiple\_file'', then each stand-alone XIOS process will collect data for a range of complete longitudinal strips and write to its own set of output files. If the ''one\_file'' option is chosen then all XIOS processes will collect their longitudinal strips and write (in parallel) to a single output file. Note that running in detached mode requires launching a Multiple Process Multiple Data (MPMD) parallel job. The following subsection provides a typical example, but the syntax will vary in different MPP environments. \subsubsection{Number of cpu used by XIOS in detached mode} The number of cores dedicated to XIOS should be from \texttildelow1/10 to \texttildelow1/50 of the number of cores dedicated to NEMO. Some manufacturers suggest using O($\sqrt{N}$) dedicated IO processors for N processors, but this is a general recommendation and not specific to NEMO. It is difficult to provide precise recommendations because the optimal choice will depend on the particular hardware properties of the target system (parallel filesystem performance, available memory, memory bandwidth etc.) and the volume and frequency of data to be created.
Here is an example of 2 cpus for the io\_server and 62 cpus for nemo using mpirun: \cmd|mpirun -np 62 ./nemo.exe : -np 2 ./xios_server.exe| \subsubsection{Control of XIOS: the context in iodef.xml} As well as the {\tt using\_server} flag, other controls on the use of XIOS are set in the XIOS context in iodef.xml. See the XML basics section below for more details on XML syntax and rules. \subsubsection{Installation} As mentioned, XIOS is supported separately and must be downloaded and compiled before it can be used with NEMO. See the installation guide on the \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS} wiki for help and guidance. NEMO will need to link to the compiled XIOS library. The \href{https://forge.ipsl.jussieu.fr/nemo/wiki/Users/ModelInterfacing/InputsOutputs#Inputs-OutputsusingXIOS}{XIOS with NEMO} guide provides an example illustration of how this can be achieved. \subsubsection{Add your own outputs} It is very easy to add your own outputs with iomput. Many standard fields and diagnostics are already prepared ($i.e.$, steps 1 to 3 below have been done) and simply need to be activated by including the required output in a file definition in iodef.xml (step 4). To add new output variables, all 4 of the following steps must be taken. \begin{enumerate} \item[1.]
in NEMO code, add a \forcode{CALL iom\_put( 'identifier', array )} where you want to output a 2D or 3D array. \item[2.] If necessary, add \forcode{USE iom ! I/O manager library} to the list of used modules in the upper part of your module. \item[3.] in the field\_def.xml file, add the definition of your variable using the same identifier you used in the f90 code (see subsequent sections for details of the XML syntax and rules). For example: \begin{xmllines} \end{xmllines} Note that your definition must be added to the field\_group whose reference grid is consistent with the size of the array passed to iomput. The grid\_ref attribute refers to definitions set in iodef.xml which, in turn, reference grids and axes either defined in the code (iom\_set\_domain\_attr and iom\_set\_axis\_attr in \mdl{iom}) or defined in the domain\_def.xml file.
$e.g.$: \begin{xmllines} \end{xmllines} Note that if your array is computed within the surface module each \np{nn\_fsbc} time\_step, add the field definition within the field\_group defined with the id ''SBC'': \xmlcode{} which has been defined with the correct frequency of operations (iom\_set\_field\_attr in \mdl{iom}). \item[4.] add your field in one of the output files defined in iodef.xml (again, see subsequent sections for syntax and rules): \begin{xmllines} \end{xmllines} \end{enumerate} \subsection{XML fundamentals} \subsubsection{XML basic rules} XML tags begin with the less-than character ("$<$") and end with the greater-than character ("$>$"). You use tags to mark the start and end of elements, which are the logical units of information in an XML document. In addition to marking the beginning of an element, XML start tags also provide a place to specify attributes. An attribute specifies a single property for an element, using a name/value pair, for example: \xmlcode{ ... }. See \href{http://www.xmlnews.org/docs/xml-basics.html}{here} for more details. \subsubsection{Structure of the XML file used in NEMO} The XML file used in XIOS is structured by 7 families of tags: context, axis, domain, grid, field, file and variable. Each tag family has a hierarchy of three flavors (except for context). Each element may have several attributes.
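As a concrete illustration of the four-step procedure for adding your own outputs described above, a minimal sketch might read as follows (the identifier ''mydiag'', its grid and the output frequency are illustrative assumptions, not entries from the distributed templates). In the Fortran routine (steps 1 and 2):
\begin{forlines}
   USE iom              ! I/O manager library (step 2)
   ...
   CALL iom_put( 'mydiag', zdiag2d )   ! write the 2D array zdiag2d (step 1)
\end{forlines}
and in the XML files (steps 3 and 4):
\begin{xmllines}
<!-- field_def.xml: declare the identifier on a grid consistent with zdiag2d (step 3) -->
<field id="mydiag" long_name="my new diagnostic" unit="m" grid_ref="grid_T_2D"/>
<!-- iodef.xml: request the output in a file definition (step 4) -->
<file id="file1" output_freq="1d">
   <field field_ref="mydiag"/>
</file>
\end{xmllines}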
Some attributes are mandatory, others are optional but have a default value, and others are completely optional. Id is a special attribute used to identify an element or a group of elements. It must be unique for a kind of element. It is optional, but no reference to the corresponding element can be made if it is not defined. The XML file is split into context tags that are used to isolate the IO definitions of different codes or different parts of a code. No interference is possible between 2 different contexts. Each context has its own calendar and an associated timestep. \noindent In NEMO, by default, the field and domain definitions are done in 2 separate files, \path{NEMOGCM/CONFIG/SHARED/field_def.xml} and \path{NEMOGCM/CONFIG/SHARED/domain_def.xml}, that are included in the main iodef.xml file through the following commands: \begin{xmllines} \end{xmllines} XML extensively uses the concept of inheritance. XML has a tree-based structure with a parent-child oriented relation: all children inherit attributes from their parent, but an attribute defined in a child replaces the inherited attribute value. Note that the special attribute ''id'' is never inherited. \\ \\ example 1: Direct inheritance. \begin{xmllines}
\end{xmllines} The field ''sst'', which is part (or a child) of the field\_definition, will inherit the value ''average'' of the attribute ''operation'' from its parent. Note that a child can overwrite the attribute definition inherited from its parents. In the example above, the field ''sss'' will for example output instantaneous values instead of average values. \\ \\ example 2: Inheritance by reference. Groups can be used for 2 purposes. Firstly, the group can be used to define common attributes to be shared by the elements of the group through inheritance. In the following example, we define a group of fields that will share a common grid ''grid\_T\_2D''. Note that for the field ''toce'', we overwrite the grid definition inherited from the group by ''grid\_T\_3D''. \begin{xmllines} \end{xmllines} \subsection{Detailed functionalities} The file \path{NEMOGCM/CONFIG/ORCA2_LIM/iodef_demo.xml} provides several examples of the use of the new functionalities offered by the XML interface of XIOS. \subsubsection{Define horizontal subdomains} Horizontal subdomains are defined through the attributes zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj of the tag family domain. It must therefore be done in the domain part of the XML file. \begin{xmllines}
\end{xmllines} The use of this subdomain is done through the redefinition of the attribute domain\_ref of the tag family field. For example: \begin{xmllines} \end{xmllines} Moorings are seen as an extreme case corresponding to a 1 by 1 subdomain. The Equatorial section, the TAO, RAMA and PIRATA moorings are already registered in the code and can therefore be outputted without taking care of their (i,j) position in the grid. These predefined domains can be activated by the use of specific domain\_ref: ''EqT'', ''EqU'' or ''EqW'' for the equatorial sections and the mooring position for TAO, RAMA and PIRATA followed by ''T'' (for example: ''8s137eT'', ''1.5s80.5eT'' ...) \begin{xmllines} \end{xmllines} \subsubsection{Define vertical zooms} Vertical zooms are defined through the attributes zoom\_begin and zoom\_end of the tag family axis. It must therefore be done in the axis part of the XML file. For example, in \path{NEMOGCM/CONFIG/ORCA2_LIM/iodef_demo.xml}, we provide the following example: \begin{xmllines} \end{xmllines} The use of this vertical zoom is done through the redefinition of the attribute axis\_ref of the tag family field.
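For instance, a vertical zoom and its use might be sketched as follows (''deptht\_myzoom'' is an assumed id for illustration, not one of the predefined zooms):
\begin{xmllines}
<!-- axis part: define the zoom over the first 10 levels -->
<axis id="deptht_myzoom" axis_ref="deptht" zoom_begin="1" zoom_end="10"/>
<!-- field part: redefine axis_ref to output toce on the zoomed axis -->
<field field_ref="toce" axis_ref="deptht_myzoom"/>
\end{xmllines}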
For example: \begin{xmllines} \end{xmllines} However it is often very convenient to define the file name with the name of the experiment, the output file frequency and the date of the beginning and the end of the simulation (information stored either in the namelist or in the XML file). To do so, we added the following rule: if the id of the tag file is ''fileN'' (where N = 1 to 999 on 1 to 3 digits) or one of the predefined sections or moorings (see next subsection), the following part of the name and the name\_suffix (that can be inherited) will be automatically replaced by: \begin{table} \scriptsize \end{table} \begin{forlines} \end{forlines} \noindent will give the following file name radical: \ifile{myfile\_ORCA2\_19891231\_freq1d} \subsubsection{Other controls of the XML attributes from NEMO} The values of some attributes are defined by subroutine calls within NEMO (calls to iom\_set\_domain\_attr, iom\_set\_axis\_attr and iom\_set\_field\_attr in \mdl{iom}). Any definition given in the XML file will be overwritten. By convention, these attributes are defined to ''auto'' (for string) or ''0000'' (for integer) in the XML file (but this is not necessary). \\ Here is the list of these attributes: \\
\begin{table} \scriptsize \end{table} \begin{enumerate} \item Simple computation: directly define the computation when referring to the variable in the file definition. \begin{xmllines} \end{xmllines} \item Simple computation: define a new variable and use it in the file definition. In field\_definition: \begin{xmllines} \end{xmllines} Note that in this case, the following syntax \xmlcode{} is not working as sst2 won't be evaluated. \item Change of variable precision: \begin{xmllines} \end{xmllines} Note that, when the code is crashing, writing real4 variables forces a numerical conversion from real8 to real4 which will create an internal error in NetCDF and prevent the creation of the output files. Forcing double precision outputs with prec="8" (for example in the field\_definition) will avoid this problem. \item add user defined attributes: \begin{xmllines} \end{xmllines} \item use of the ''@'' function: example 1, weighted temporal average - define a new variable in field\_definition. The freq\_op="5d" attribute is used to define the operation frequency of the ''@'' function: here 5 days.
The temporal operation done by the ''@'' is the one defined in the field definition: here we use the default, average. So, in the above case, @toce\_e3t will do the 5-day mean of toce*e3t. Operation="instant" refers to the temporal operation to be performed on the field ''@toce\_e3t / @e3t'': here the temporal average is already done by the ''@'' function so we just use instant to do the ratio of the 2 mean values. field\_ref="toce" means that attributes not explicitly defined are inherited from the toce field. Note that in this case, freq\_op must be equal to the file output\_freq. \item use of the ''@'' function: example 2, monthly SSH standard deviation - define a new variable in field\_definition. The freq\_op="1m" attribute is used to define the operation frequency of the ''@'' function: here 1 month. The temporal operation done by the ''@'' is the one defined in the field definition: here we use the default, average. So, in the above case, @ssh2 will do the monthly mean of ssh*ssh. Operation="instant" refers to the temporal operation to be performed on the field ''sqrt( @ssh2 - @ssh * @ssh )'': here the temporal average is already done by the ''@'' function so we just use instant. field\_ref="ssh" means that attributes not explicitly defined are inherited from the ssh field. Note that in this case, freq\_op must be equal to the file output\_freq.
\item use of the ''@'' function: example 3, monthly average of SST diurnal cycle - define 2 new variables in field\_definition. The freq\_op="1d" attribute is used to define the operation frequency of the ''@'' function: here 1 day. The temporal operation done by the ''@'' is the one defined in the field definition: here maximum for sstmax and minimum for sstmin. So, in the above case, @sstmax will do the daily max and @sstmin the daily min. Operation="average" refers to the temporal operation to be performed on the field ''@sstmax - @sstmin'': here monthly mean (of daily max - daily min of the sst). field\_ref="sst" means that attributes not explicitly defined are inherited from the sst field. \end{enumerate} \subsection{CF metadata standard compliance} Output from the XIOS-1.0 IO server is compliant with \href{http://cfconventions.org/Data/cf-conventions/cf-conventions-1.5/build/cf-conventions.html}{version 1.5} of the CF metadata standard. Therefore, while a user may wish to add their own metadata to the output files (as demonstrated in example 4 of section \autoref{subsec:IOM_xmlref}), the metadata should, for the most part, comply with the CF-1.5 standard.
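Drawing the pieces of the ''@'' function discussion together, example 2 (the monthly SSH standard deviation) might be sketched as follows (the field ids and attribute layout are illustrative, not the exact distributed template):
\begin{xmllines}
<!-- in field_definition: ssh2 holds the default temporal average of ssh*ssh -->
<field id="ssh2" field_ref="ssh" long_name="square of sea surface height" unit="m2" > ssh * ssh </field>
<!-- in file_definition: freq_op must equal the file output_freq (here 1m) -->
<field field_ref="ssh" id="sshstd" operation="instant" freq_op="1m"
       long_name="monthly standard deviation of ssh" > sqrt( @ssh2 - @ssh * @ssh ) </field>
\end{xmllines}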
Some metadata that may significantly increase the file size (horizontal cell areas and vertices) are
controlled by the namelist parameter \np{ln\_cfmeta} in the \ngn{namrun} namelist.
This must be set to true if these metadata are to be included in the output files.

Since version 3.3, support for NetCDF4 chunking and (loss-less) compression has been included.
These options build on the standard NetCDF output and
allow the user control over the size of the chunks via namelist settings.
Chunking and compression can lead to significant reductions in file sizes for a small runtime overhead.
For a fuller discussion on chunking and other performance issues the reader is referred to
the NetCDF4 documentation found
\href{http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#Chunking}{here}.
The new features are only available when the code has been linked with a NetCDF4 library
(version 4.1 onwards, recommended) which has been built with HDF5 support
(version 1.8.4 onwards, recommended).
Datasets created with chunking and compression are not backwards compatible with
NetCDF3 "classic" format but most analysis codes can be relinked simply with the new libraries and
will then read both NetCDF3 and NetCDF4 files.
NEMO executables linked with NetCDF4 libraries can be made to produce NetCDF3 files by
setting the \np{ln\_nc4zip} logical to false in the \textit{namnc4} namelist:

If \key{netcdf4} has not been defined, these namelist parameters are not read.
In this case, \np{ln\_nc4zip} is set false and
dummy routines for a few NetCDF4-specific functions are defined.
These functions will not be used but need to be included so that
compilation is possible with NetCDF3 libraries.
When using NetCDF4 libraries, \key{netcdf4} should be defined even if the intention is to
create only NetCDF3-compatible files.
This is necessary to avoid duplication between the dummy routines and
the actual routines present in the library.
Most compilers will fail at compile time when faced with such duplication.
Thus when linking with NetCDF4 libraries the user must define \key{netcdf4} and
control the type of NetCDF file produced via the namelist parameter.
Chunking and compression is applied only to 4D fields and
there is no advantage in chunking across more than one time dimension since
previously written chunks would have to be read back and decompressed before being added to.
Therefore, user control over chunk sizes is provided only for the three space dimensions.
The user sets an approximate number of chunks along each spatial axis.
The actual size of the chunks will depend on global domain size for mono-processors or,
more likely, the local processor domain size for distributed processing.
The derived values are subject to practical minimum values (to avoid wastefully small chunk sizes) and
cannot be greater than the domain size in any dimension.
The algorithm used is:

\end{forlines}

\noindent for a standard ORCA2\_LIM configuration gives chunksizes of {\small\tt 46x38x1} respectively in
the mono-processor case (i.e. global domain of {\small\tt 182x149x31}).
An illustration of the potential space savings that NetCDF4 chunking and compression provides is given in
table \autoref{tab:NC4} which compares the results of two short runs of
the ORCA2\_LIM reference configuration with a 4x2 mpi partitioning.
Note the variation in the compression ratio achieved which reflects chiefly
the dry to wet volume ratio of each processing region.
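The derivation just described can be sketched as follows.
This is an illustration only: the exact rounding and the practical minimum applied by
NEMO are those of the code listing above, and
the ceiling division and minimum size used here are assumptions.

```python
# Sketch of the chunk-size derivation described above: the user requests an
# approximate number of chunks along each spatial axis and the derived size is
# clipped by a practical minimum and by the domain size.  Illustration only:
# the ceiling division and the minimum size are assumptions, not the exact
# NEMO algorithm.

def chunk_size(domain_len, n_chunks, min_size=1):
    size = -(-domain_len // n_chunks)      # ceiling division
    size = max(size, min_size)             # avoid wastefully small chunks
    return min(size, domain_len)           # never exceed the domain size

# Mono-processor ORCA2 global domain 182x149x31, with 4 chunks requested along
# i and j and one chunk per level, reproduces the 46x38x1 sizes quoted above.
ci = chunk_size(182, 4)
cj = chunk_size(149, 4)
ck = chunk_size(31, 31)
```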
%------------------------------------------TABLE----------------------------------------------------
%----------------------------------------------------------------------------------------------------

When \key{iomput} is activated with \key{netcdf4},
chunking and compression parameters for fields produced via \np{iom\_put} calls are set via
an equivalent and identically named namelist to \textit{namnc4} in \np{xmlio\_server.def}.
Typically this namelist serves the mean files whilst
the \ngn{namnc4} in the main namelist file continues to serve the restart files.
This duplication is unfortunate but appropriate since, if using io\_servers,
the domain sizes of the individual files produced by the io\_server processes may be different to
those produced by the individual processing regions and different chunking choices may be desired.

% -------------------------------------------------------------------------------------------------------------
Each trend of the dynamics and/or temperature and salinity time evolution equations can be sent to
\mdl{trddyn} and/or \mdl{trdtra} modules (see TRD directory)
just after their computation ($i.e.$ at the end of each $dyn\cdots.F90$ and/or $tra\cdots.F90$ routines).
This capability is controlled by options offered in the \ngn{namtrd} namelist.
Note that the outputs are done with XIOS, and therefore the \key{IOM} is required.

\begin{description}
\item[\np{ln\_glo\_trd}]: at each \np{nn\_trd} time-step a check of the basin averaged properties of
  the momentum and tracer equations is performed.
  This also includes a check of $T^2$, $S^2$, $\tfrac{1}{2} (u^2+v^2)$,
  and potential energy time evolution equations properties;
\item[\np{ln\_dyn\_trd}]: each 3D trend of the evolution of the two momentum components is output;
\item[\np{ln\_dyn\_mxl}]: each 3D trend of the evolution of the two momentum components averaged over
  the mixed layer is output;
\item[\np{ln\_vor\_trd}]: a vertical summation of the momentum tendencies is performed,
  then the curl is computed to obtain the barotropic vorticity tendencies which are output;
\item[\np{ln\_KE\_trd}] : each 3D trend of the Kinetic Energy equation is output;
\item[\np{ln\_tra\_trd}]: each 3D trend of the evolution of temperature and salinity is output;
\item[\np{ln\_tra\_mxl}]: each 2D trend of the evolution of temperature and salinity averaged over
  the mixed layer is output;
\end{description}

\textbf{Note that} in the current version (v3.6), many changes have been introduced but not fully tested.
In particular, options associated with \np{ln\_dyn\_mxl}, \np{ln\_vor\_trd}, and \np{ln\_tra\_mxl} are
not working, and none of the options have been tested with variable volume ($i.e.$ \key{vvl} defined).

% -------------------------------------------------------------------------------------------------------------
The on-line computation of floats advected either by the three dimensional velocity field or
constrained to remain at a given depth ($w = 0$ in the computation) has been introduced in
the system during the CLIPPER project.
Options are defined by \ngn{namflo} namelist variables.
The algorithm used is based either on the work of \cite{Blanke_Raynaud_JPO97} (default option),
or on a $4^{th}$ order Runge-Kutta algorithm (\np{ln\_flork4}\forcode{ = .true.}).
Note that the \cite{Blanke_Raynaud_JPO97} algorithm has the advantage of providing trajectories which
are consistent with the numerics of the code, so that the trajectories never intercept the bathymetry.

\subsubsection{Input data: initial coordinates}

Initial coordinates can be given with Ariane Tools convention
(IJK coordinates, \np{ln\_ariane}\forcode{ = .true.}) or with longitude and latitude.
In case of Ariane convention, the input filename is \np{init\_float\_ariane}.
\np{jpnfl} is the total number of floats during the run.
When initial positions are read in a restart file (\np{ln\_rstflo}\forcode{ = .true.}),
\np{jpnflnewflo} can be added in the initialization file.

\subsubsection{Output data}

\np{nn\_writefl} is the frequency of writing in the float output file and
\np{nn\_stockfl} is the frequency of creation of the float restart file.
Output data can be written in ascii files (\np{ln\_flo\_ascii}\forcode{ = .true.}).
There are 2 possibilities:

- if (\key{iomput}) is used, outputs are selected in iodef.xml.
Here is an example of specification to put in the files description section:

\begin{xmllines}

- if (\key{iomput}) is not used, a file called \ifile{trajec\_float} will be created by
the IOIPSL library.

See also the web site \href{http://stockage.univ-brest.fr/~grima/Ariane/}{here} describing
the off-line use of this marvellous diagnostic tool.

% -------------------------------------------------------------------------------------------------------------

This on-line Harmonic analysis is activated with \key{diaharm}.

Some parameters are available in namelist \ngn{namdia\_harm}:

- \np{nit000\_han} is the first time step used for harmonic analysis

- \np{tname}       is an array with names of tidal constituents to analyse

\np{nit000\_han} and \np{nitend\_han} must be between \np{nit000} and \np{nitend} of the simulation.
The restart capability is not implemented.

The Harmonic analysis solves the following equation:

$h_{i} - A_{0} - \sum^{nb\_ana}_{j=1}[A_{j}\cos(\nu_{j}t_{i}-\phi_{j})] = e_{i}$

with $A_{j}$, $\nu_{j}$ and $\phi_{j}$ the amplitude, frequency and phase of each wave,
and $e_{i}$ the error.
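For a single constituent sampled uniformly over a whole number of periods,
the least-squares solution of the analysis equation above reduces to simple projections.
The sketch below is plain Python on a synthetic noise-free signal
(the NEMO module solves the general multi-constituent problem):

```python
# Least-squares recovery of one tidal constituent, as in the harmonic
# analysis equation above.  With uniform sampling over a whole number of
# periods the normal equations reduce to projections.  Synthetic noise-free
# signal; illustrative only, not the NEMO implementation.

import math

T = 12.42                          # M2-like period [hours]
nu = 2.0 * math.pi / T             # constituent frequency [rad/hour]
A, phi, A0 = 1.5, 0.7, 0.2         # "true" amplitude, phase, mean level

N, P = 100, 10                     # samples per period, number of periods
t = [i * T / N for i in range(N * P)]
h = [A0 + A * math.cos(nu * ti - phi) for ti in t]   # synthetic sea level

M = len(t)
A0_est = sum(h) / M                                   # mean sea level
a = 2.0 / M * sum(hi * math.cos(nu * ti) for hi, ti in zip(h, t))
b = 2.0 / M * sum(hi * math.sin(nu * ti) for hi, ti in zip(h, t))
A_est, phi_est = math.hypot(a, b), math.atan2(b, a)   # amplitude and phase
```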
$h_{i}$ is the sea level for the time $t_{i}$ and $A_{0}$ is the mean sea level. \\
We can rewrite this equation:

%-------------------------------------------------------------------------------------------------------------

A module is available to compute the transport of volume, heat and salt through sections.
This diagnostic is activated with \key{diadct}.
Each section is defined by the coordinates of its 2 extremities.
The pathways between them are constructed using tools which can be found in
\texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT} and
are written in a binary file \texttt{section\_ijglobal.diadct\_ORCA2\_LIM} which is later read in by
NEMO to compute on-line transports.
The on-line transports module creates three output ascii files:

- \texttt{salt\_transport}   for   salt transports (unit: $10^{9}Kg\,s^{-1}$) \\

Namelist variables in \ngn{namdct} control how frequently the flows are summed and
the time scales over which they are averaged, as well as the level of output for debugging:

\np{nn\_dct}   : frequency of instantaneous transports computation

\np{nn\_dctwri}: frequency of writing (mean of instantaneous transports)

\subsubsection{Creating a binary file containing the pathway of each section}
In \texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT/run},
the file \textit{list\_sections.ascii\_global} contains a list of all the sections that
are to be computed (this list of sections is based on MERSEA project metrics).
Another file is available for the GYRE configuration (\texttt{list\_sections.ascii\_GYRE}).

- \texttt{ice}       to compute surface and volume ice transports, \texttt{noice}     if no. \\

\noindent The results of the computing of transports,
and the directions of positive and negative flow,
do not depend on the order of the 2 extremities in this file. \\

\noindent If nclass $\neq$ 0, the next lines contain the class type and the nclass bounds: \\
{\scriptsize
\texttt{
long1 lat1 long2 lat2 nclass (ok/no)strpond (no)ice section\_name \\

- \texttt{zsigp} for potential density classes \\

The script \texttt{job.ksh} computes the pathway for each section and
creates a binary file \texttt{section\_ijglobal.diadct\_ORCA2\_LIM} which is read by NEMO. \\
It is possible to use these tools for new configurations:
\texttt{job.ksh} has to be updated with the coordinates file name and path. \\
\\
Examples of two sections, the ACC\_Drake\_Passage with no classes,
and the ATL\_Cuba\_Florida with 4 temperature classes (5 class bounds), are shown: \\
\noindent
{\scriptsize
\texttt{
-68.    -54.5   -60.    -64.7  00 okstrpond noice ACC\_Drake\_Passage \\

transport\_total}}                                     \\

For sections with classes, the first \texttt{nclass-1} lines correspond to
the transport for each class and
the last line corresponds to the total transport summed over all classes.
For sections with no classes, class number \texttt{1} corresponds to \texttt{total class} and
this class is called \texttt{N}, meaning \texttt{none}.

- \texttt{transport\_direction2} is the negative part of the transport ($\leq$ 0). \\

\noindent The \texttt{section slope coefficient} gives information about
the significance of transports signs and direction: \\

\begin{table}
\scriptsize

Changes in steric sea level are caused when changes in the density of the water column imply
an expansion or contraction of the column.
It is essentially produced through surface heating/cooling and to a lesser extent through
non-linear effects of the equation of state (cabbeling, thermobaricity...).
Non-Boussinesq models contain all ocean effects within the ocean acting on the sea level.
In particular, they include the steric effect.
In contrast, Boussinesq models, such as \NEMO, conserve volume, rather than mass,
and so do not properly represent expansion or contraction.
The steric effect is therefore not explicitly represented.
This approximation does not represent a serious error with respect to
the flow field calculated by the model \citep{Greatbatch_JGR94},
but extra attention is required when investigating sea level,
as steric changes are an important contribution to local changes in sea level on
seasonal and climatic time scales.
This is especially true for investigation into sea level rise due to global warming.
Fortunately, the steric contribution to the sea level consists of a spatially uniform component that
can be diagnosed by considering the mass budget of the world ocean \citep{Greatbatch_JGR94}.
In order to better understand how global mean sea level evolves and
thus how the steric sea level can be diagnosed,
we compare, in the following, the non-Boussinesq and Boussinesq cases.
Let denote
\]
Temporal changes in total mass are obtained from the density conservation equation:

$\frac{1}{e_3} \partial_t ( e_3\,\rho) + \nabla( \rho \, \textbf{U} )
\label{eq:Co_nBq}$

where $\rho$ is the \textit{in situ} density, and
\textit{emp} the surface mass exchanges with the other media of the Earth system
(atmosphere, sea-ice, land).
Its global average leads to the total mass change

where $\overline{\textit{emp}} = \int_S \textit{emp}\,ds$ is the net mass flux through
the ocean surface.
Bringing \autoref{eq:Mass_nBq} and the time derivative of \autoref{eq:MV_nBq} together leads to
the evolution equation of the mean sea level

\label{eq:ssh_nBq}
\]

The first term in equation \autoref{eq:ssh_nBq} alters sea level by
adding or subtracting mass from the ocean.
The second term arises from temporal changes in the global mean density, $i.e.$ from steric effects.

In a Boussinesq fluid, $\rho$ is replaced by $\rho_o$ in all the equations except when
$\rho$ appears multiplied by the gravity
($i.e.$ in the hydrostatic balance of the primitive Equations).
In particular, the mass conservation equation, \autoref{eq:Co_nBq},
degenerates into the incompressibility equation:

$\frac{1}{e_3} \partial_t ( e_3 ) + \nabla( \textbf{U} )
\label{eq:V_Bq}$

Only the volume is conserved, not mass, or, more precisely,
the mass which is conserved is the Boussinesq mass, $\mathcal{M}_o = \rho_o \mathcal{V}$.
The total volume (or equivalently the global mean sea level) is altered only by
net volume fluxes across the ocean surface, not by changes in mean mass of the ocean:
the steric effect is missing in a Boussinesq fluid.

Nevertheless, following \citet{Greatbatch_JGR94}, the steric effect on the volume can be diagnosed by
considering the mass budget of the ocean.
The apparent changes in $\mathcal{M}$, the mass of the ocean,
which are not induced by surface mass flux must be compensated by
a spatially uniform change in the mean sea level due to expansion/contraction of the ocean
\citep{Greatbatch_JGR94}.
In other words, the Boussinesq mass, $\mathcal{M}_o$, can be related to $\mathcal{M}$,
the total mass of the ocean seen by the Boussinesq model,
via the steric contribution to the sea level, $\eta_s$, a spatially uniform variable, as follows:

\label{eq:M_Bq}
\]

Any change in $\mathcal{M}$ which cannot be explained by
the net mass flux through the ocean surface is converted into a mean change in sea level.
Introducing the total density anomaly, $\mathcal{D}= \int_D d_a \,dv$,
where $d_a = (\rho -\rho_o ) / \rho_o$ is the density anomaly used in \NEMO
(cf. \autoref{subsec:TRA_eos}) in \autoref{eq:M_Bq} leads to
a very simple form for the steric height:

$\eta_s = - \frac{1}{\mathcal{A}} \mathcal{D}

We do not recommend that.
Indeed, in this case $\rho_o$ depends on the initial state of the ocean.
Since $\rho_o$ has a direct effect on the dynamics of the ocean
(it appears in the pressure gradient term of the momentum equation)
it is definitively not a good idea when inter-comparing experiments.
We rather recommend to fix $\rho_o$ once and for all to $1035\;Kg\,m^{-3}$.
This value is a sensible choice for the reference density used in
a Boussinesq ocean climate model since,
with the exception of only a small percentage of the ocean,
density in the World Ocean varies by no more than 2\% from this value (\cite{Gill1982}, page 47).
Second, we have assumed here that the total ocean surface, $\mathcal{A}$,
does not change when the sea level is changing, as is the case in all global ocean GCMs
(wetting and drying of grid points is not allowed).

Third, the discretisation of \autoref{eq:steric_Bq} depends on the type of free surface which
is considered.
In the non-linear free surface case, $i.e.$ \key{vvl} defined, it is given by

\label{eq:discrete_steric_Bq_nfs}$

whereas in the linear free surface,
the volume above the \textit{z=0} surface must be explicitly taken into account to
better approximate the total ocean mass and thus the steric sea level:

\[
\eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t}e_{2t}e_{3t} + \sum_{i,\,j} d_a\; e_{1t}e_{2t} \eta }

In the real ocean, sea ice (and snow above it) depresses the liquid seawater through
its mass loading.
This depression is a result of the mass of the sea ice/snow system acting on the liquid ocean.
There is, however, no dynamical effect associated with these depressions in
the liquid ocean sea level, so that there are no associated ocean currents.
Hence, the dynamically relevant sea level is the effective sea level,
$i.e.$ the sea level as if sea ice (and snow) were converted to liquid seawater
\citep{Campin_al_OM08}.
However, in the current version of \NEMO the sea-ice is levitating above the ocean without
mass exchanges between ice and ocean.
Therefore the model effective sea level is always given by $\eta + \eta_s$,
whether or not there is sea ice present.

In AR5 outputs, the thermosteric sea level is requested.

where $S_o$ and $p_o$ are the initial salinity and pressure, respectively.

Both steric and thermosteric sea level are computed in \mdl{diaar5} which needs
\key{diaar5} to be defined in order to be called.
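The discrete steric height follows the sums given above.
The minimal sketch below evaluates $\eta_s = -\mathcal{D}/\mathcal{A}$ for
a uniform density anomaly on a made-up $2\times2\times3$ grid of scale factors
(plain Python, not the \mdl{diaar5} code):

```python
# Minimal check of the discrete steric height eta_s = -D/A: minus the volume
# integral of the density anomaly d_a = (rho - rho0)/rho0 divided by the
# ocean surface area (non-linear free surface form).  Made-up 2x2x3 grid of
# scale factors; illustrative only, not the diaar5 code.

rho0 = 1035.0                                  # reference density [kg/m3]
rho = 1036.0                                   # uniform in situ density
d_a = (rho - rho0) / rho0                      # density anomaly

e1t = [[1000.0, 1000.0], [1000.0, 1000.0]]     # zonal scale factors [m]
e2t = [[2000.0, 2000.0], [2000.0, 2000.0]]     # meridional scale factors [m]
e3t = [10.0, 20.0, 30.0]                       # level thicknesses [m]

area = sum(e1t[j][i] * e2t[j][i] for j in range(2) for i in range(2))
D = sum(d_a * e1t[j][i] * e2t[j][i] * e3t[k]
        for j in range(2) for i in range(2) for k in range(3))
eta_s = -D / area
# a uniform anomaly over a 60 m column gives eta_s = -d_a * 60
```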
% -------------------------------------------------------------------------------------------------------------

\subsection{Depth of various quantities (\protect\mdl{diahth})}

Among the available diagnostics the following ones are obtained when defining the \key{diahth} CPP key:

- the mixed layer depth (based on a density criterion \citep{de_Boyer_Montegut_al_JGR04}) (\mdl{diahth})

%-----------------------------------------------------------------------------------------

The poleward heat and salt transports, their advective and diffusive components,
and the meridional stream function can be computed on-line in \mdl{diaptr} by
setting \np{ln\_diaptr} to true (see the \textit{\ngn{namptr}} namelist below).
When \np{ln\_subbas}\forcode{ = .true.},
transports and stream function are computed for the Atlantic, Indian, Pacific and
Indo-Pacific Oceans (defined north of 30\deg S) as well as for the World Ocean.
The sub-basin decomposition requires an input file (\ifile{subbasins}) which contains
three 2D mask arrays,
the Indo-Pacific mask being deduced from the sum of the Indian and Pacific masks
(\autoref{fig:mask_subasins}).
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]
  \begin{center}
    \includegraphics[width=1.0\textwidth]{Fig_mask_subasins}
    \caption{   \protect\label{fig:mask_subasins}
      Decomposition of the World Ocean (here ORCA2) into the sub-basins used to compute the heat and
      salt transports as well as the meridional stream-function:
      Atlantic basin (red), Pacific basin (green), Indian basin (blue), Indo-Pacific basin (blue+green).
      Note that semi-enclosed seas (Red, Mediterranean and Baltic seas) as well as Hudson Bay are
      removed from the sub-basins.
      Note also that the Arctic Ocean has been split into Atlantic and Pacific basins along
      the North fold line.}
  \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

The 25 hour mean is available for daily runs by summing up the 25 hourly instantaneous values from
midnight at the start of the day to midnight at the end of the day.
This diagnostic is activated with the logical \np{ln\_dia25h}.
% -----------------------------------------------------------
%----------------------------------------------------------------------------------------------------------
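The averaging described above can be sketched as follows
(a minimal illustration with a synthetic semi-diurnal signal, not the NEMO code;
the amplitudes and background value are assumed):
averaging 25 hourly values from midnight to midnight largely removes the tidal signal.

```python
# Sketch of a 25 hour mean: an average of 25 hourly instantaneous values
# from midnight to midnight inclusive, applied to a synthetic tidal signal.
import math

period_m2 = 12.42                     # semi-diurnal M2 tidal period [h]
hours = range(25)                     # 00:00 ... 24:00 inclusive
ssh = [0.5 + 1.0 * math.sin(2.0 * math.pi * h / period_m2) for h in hours]

mean_25h = sum(ssh) / 25.0            # the tidal oscillation largely averages out
print(f"25 hour mean ssh: {mean_25h:.3f} m")   # close to the 0.5 m background
```

Because 25 hours spans almost exactly two M2 periods, the residual tidal contamination of the mean is small.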
A module is available to output the surface (top), mid-water and bed diagnostics of a set of standard variables.
This can be a useful diagnostic when hourly or sub-hourly output is required in high resolution tidal outputs.
The tidal signal is retained but the overall data usage is cut to just three vertical levels.
Also the bottom level is calculated for each cell.
This diagnostic is activated with the logical \np{ln\_diatmb}.
% -----------------------------------------------------------
in the zonal, meridional and vertical directions respectively.
The vertical component is included although it is not strictly valid,
as the vertical velocity is calculated from the continuity equation rather than as a prognostic variable.
Physically this represents the rate at which information is propagated across a grid cell.
Values greater than 1 indicate that information is propagated across more than one grid cell in
a single time step.
The variables can be activated by setting the \np{nn\_diacfl} namelist parameter to 1 in
the \ngn{namctl} namelist.
The diagnostics will be written out to an ascii file named cfl\_diagnostics.ascii.
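The Courant number diagnostic can be sketched as follows
(a simplified one-dimensional illustration, not the NEMO code;
the time step, grid spacings and velocities are hypothetical):

```python
# Sketch of the CFL (Courant) number diagnostic: C = |u| * dt / dx measures
# how many grid cells information crosses in one time step.  All values
# below are hypothetical examples.
dt = 1440.0                       # time step [s]
dx = [10000.0, 12000.0, 9000.0]   # zonal grid spacings [m]
u  = [1.2, 0.8, 2.5]              # zonal velocities [m/s]

cfl = [abs(ui) * dt / dxi for ui, dxi in zip(u, dx)]
c_max = max(cfl)
i_max = cfl.index(c_max)          # location of the maximum, analogous to the
                                  # coordinates reported at each timestep
print(f"max C_u = {c_max:.3f} at cell {i_max}")
```

A maximum below 1 indicates the advective CFL constraint is satisfied everywhere;
tracking where the maximum occurs points to the cells that limit the time step.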
In this file the maximum values of $C_u$, $C_v$, and $C_w$ are printed at each timestep along with
the coordinates where the maximum value occurs.
At the end of the model run the maximum values of $C_u$, $C_v$, and $C_w$ over the whole model run are
printed along with the coordinates of each.
The maximum values from the run are also copied to the ocean.output file.