% ================================================================
% Chapter I/O & Diagnostics
% ================================================================
\chapter{Output and Diagnostics (IOM, DIA, TRD, FLO)}
\label{DIA}
\minitoc
\newpage
$\ $\newline % force a new line
% ================================================================
% Old Model Output
% ================================================================
\section{Old Model Output (default or \key{dimgout})}
\label{DIA_io_old}

The model outputs are of three types: the restart file, the output listing, and the output file(s). The restart file is used internally by the code when the user wants to start the model with initial conditions defined by a previous simulation. It contains all the information necessary to ensure that there are no changes in the model results (even at computer precision) between a run performed with several restarts and the same run performed in one step. It should be noted that this requires the restart file to contain two consecutive time steps for all the prognostic variables, and that it be saved in the same binary format as the one used by the computer that is to read it (in particular, the 32-bit IEEE binary format must not be used for this file).

The output listing and file(s) are predefined but should be checked and adapted to the user's needs if necessary. The output listing is stored in the \textit{ocean.output} file. The information is printed from within the code on the logical unit \textit{numout}. To locate these prints, use the UNIX command "\textit{grep -i numout}" in the source code directory.

In the standard configuration, the user will find the model results in NetCDF files containing mean values (or instantaneous values if \key{diainstant} is defined) for every time step where output is requested. These outputs are defined in the \mdl{diawri} module. When \key{dimgout} is defined, the outputs are written in DIMG format, an IEEE output format.
Since version 3.2, an I/O server has been added which provides more flexibility in the choice of the fields to be output, as well as in how the writing work is distributed over the processors in massively parallel computing. It is presented in the next section.

%\gmcomment{ % start of gmcomment

% ================================================================
% Diagnostics
% ================================================================
\section{Standard model Output (IOM)}
\label{DIA_iom}

Since version 3.2, iom\_put is the NEMO output interface. It was designed to be simple to use, flexible and efficient. Two main functionalities are covered by iom\_put: (1) the control of the output files through an external xml file defined by the user; (2) the distribution (or not) of all tasks related to output files on dedicated processors. The first functionality allows the user to specify, without touching anything in the code, the way they want to output data: \\
- choice of output frequencies, which can be different for each file (including real months and years) \\
- choice of file contents: decide which data will be written in which file (the same data can be output in different files) \\
- possibility to extract a subdomain (for example, all TAO-PIRATA-RAMA moorings are already defined) \\
- choice of the temporal operation to perform: mean, instantaneous, min, max \\
- extremely large choice of data available \\
- redefinition of the name and long\_name of variables \\
In addition, iom\_put allows the user to output any variable (scalar, 2D or 3D) in the code in a very easy way. All details of the iom\_put functionalities are listed in the following subsections. An example of the iodef.xml file that controls the outputs can be found here: NEMOGCM/CONFIG/ORCA2\_LIM/EXP00/iodef.xml

The second functionality targets output performance when running on a very large number of processes. The idea is to dedicate N specific processes to writing the outputs, where N is defined by the user.
In the current version, this functionality is technically working; however, its performance is usually poor (for known reasons). Users can therefore test this functionality, but they must be aware that the expected performance improvement will not be achieved before release 3.4. An example of xmlio\_server.def can be found here: NEMOGCM/CONFIG/ORCA2\_LIM/EXP00/xmlio\_server.def

\subsection{Basic knowledge}

\subsubsection{XML basic rules}

XML tags begin with the less-than character (''$<$'') and end with the greater-than character (''$>$''). You use tags to mark the start and end of elements, which are the logical units of information in an XML document. In addition to marking the beginning of an element, XML start tags also provide a place to specify attributes. An attribute specifies a single property of an element, using a name/value pair, for example: $<$a b="x" c="y" d="z"$>$ ... $<$/a$>$. See \href{http://www.xmlnews.org/docs/xml-basics.html}{here} for more details.

\subsubsection{Structure of the xml file used in NEMO}

The xml file is split into 3 parts:
\begin{description}
\item[field definition]: define all variables that can be output \\
all lines in between the following two tags\\
\verb?<field_definition>? \\
\verb?</field_definition>?
\item[file definition]: define the netcdf files to be created and the variables they will contain \\
all lines in between the following two tags \\
\verb?<file_definition>? \\
\verb?</file_definition>?
\item[axis and grid definitions]: define the horizontal and vertical grids \\
all lines in between the following two sets of tags\\
\verb?<axis_definition>? \\
\verb?</axis_definition>? and \\
\verb?<grid_definition>? \\
\verb?</grid_definition>?
\end{description}

\subsubsection{Inheritance and group}

XML extensively uses the concept of inheritance. \\
\\
example 1: \\
\vspace{-30pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
\end{verbatim}
}}\end{alltt}
The field ''sst'', which is part (or a child) of the field\_definition, will inherit the value ''ave(X)'' of the attribute ''operation'' from its parent ''field\_definition''.
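This inheritance rule can be sketched as follows; the tag layout and the description values are illustrative (not copied from a reference iodef.xml), while the ids and operations follow the example discussed in the text:

```xml
<field_definition operation="ave(X)" >
   <!-- sst inherits operation="ave(X)" from its parent -->
   <field id="sst" description="sea surface temperature" />
   <!-- sss overrides the inherited operation (see the note below) -->
   <field id="sss" description="sea surface salinity" operation="inst(X)" />
</field_definition>
```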
Note that a child can overwrite the attribute definition inherited from its parents. In the example above, the field ''sss'' will therefore output instantaneous values instead of average values.

example 2: Use (or overwrite) attribute values of a field when listing the variables included in a file
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
\end{verbatim}
}}\end{alltt}

With the help of inheritance, the concept of group allows the user to define a set of attributes for several fields or files.

example 3, group of fields: define a group ''T\_grid\_variables'' identified with the name ''grid\_T''. By default, variables of this group have no vertical axis but, following the inheritance rules, ''axis\_ref'' can be redefined for the field ''toce'', which is a 3D variable.
\vspace{-30pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
\end{verbatim}
}}\end{alltt}

example 4, group of files: define a group of files with the attribute output\_freq equal to 432000 (5 days)
\vspace{-30pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
...
...
\end{verbatim}
}}\end{alltt}

\subsubsection{Control of the xml attributes from NEMO}

The values of some attributes are automatically defined by NEMO (and any definition given in the xml file is overwritten). By convention, these attributes are set to ''auto'' (for strings) or ''0000'' (for integers) in the xml file (but this is not mandatory).
Here is the list of these attributes: \\
\\
\begin{tabular}{|l|c|c|c|}
\hline
\multicolumn{2}{|c|}{tag ids affected by automatic} & attribute & attribute value \\
\multicolumn{2}{|c|}{definition of some of their attributes} & name & \\
\hline \hline
\multicolumn{2}{|c|}{field\_definition} & freq\_op & \np{rn\_rdt} \\
\hline
\multicolumn{2}{|c|}{SBC} & freq\_op & \np{rn\_rdt} $\times$ \np{nn\_fsbc} \\
\hline
1h, 2h, 3h, 4h, 6h, 12h & \_grid\_T, \_grid\_U, & name & filename defined by \\
1d, 3d, 5d & \_grid\_V, \_grid\_W, & & a call to \rou{dia\_nam} \\
1m, 2m, 3m, 4m, 6m & \_icemod, \_ptrc\_T, & & following the NEMO \\
1y, 2y, 5y, 10y & \_diad\_T, \_scalar & & nomenclature \\
\hline
\multicolumn{2}{|c|}{EqT, EqU, EqW} & jbegin, ni, & according to the grid \\
\multicolumn{2}{|c|}{ } & name\_suffix & \\
\hline
\multicolumn{2}{|c|}{TAO, RAMA and PIRATA moorings} & ibegin, jbegin, & according to the grid \\
\multicolumn{2}{|c|}{ } & name\_suffix & \\
\hline
\end{tabular}

\subsection{Detailed functionalities}

\subsubsection{Tag list}

\begin{description}
\item[context]: defines the model using the xml file. Id is the only accepted attribute. Its value must be ''nemo'' or ''n\_nemo'' for the nth AGRIF zoom. Child of the simulation tag.
\item[field]: defines the field to be output. Accepted attributes are axis\_ref, description, enable, freq\_op, grid\_ref, id (if child of field\_definition), level, operation, name, ref (if child of file), unit, zoom\_ref. Child of field\_definition, file or group of fields tag.
\item[field\_definition]: definition of the part of the xml file corresponding to the field definition. Accepts the same attributes as the field tag. Child of the context tag.
\item[group]: defines a group of files or fields. Accepts the same attributes as file or field.
\item[file]: defines the output file's characteristics. Accepted attributes are description, enable, output\_freq, output\_level, id, name, name\_suffix. Child of file\_definition or group of files tag.
\item[file\_definition]: definition of the part of the xml file corresponding to the file definition. Accepts the same attributes as the file tag. Child of the context tag.
\item[axis]: definition of the vertical axis. Accepted attributes are description, id, positive, size, unit. Child of the axis\_definition tag.
\item[axis\_definition]: definition of the part of the xml file corresponding to the vertical axis definition. Accepts the same attributes as the axis tag. Child of the context tag.
\item[grid]: definition of the horizontal grid. Accepted attributes are description and id. Child of the grid\_definition tag.
\item[grid\_definition]: definition of the part of the xml file corresponding to the horizontal grid definition. Accepts the same attributes as the grid tag. Child of the context tag.
\item[zoom]: definition of a subdomain of a horizontal grid. Accepted attributes are description, id, i/jbegin, ni/j. Child of the grid tag.
\end{description}

\subsubsection{Attributes list}

These attributes apply to a tag or a group of tags.
% table to be added?
%%%% Attribute   Applied to   Definition / Comment
%%%% axis_ref    field        String defining the vertical axis of the variable. It refers to the
%%%%                          id of the vertical axis defined in the axis tag. Use ''none'' if the
%%%%                          variable has no vertical axis
%%%%%%
\begin{description}
\item[axis\_ref]: field attribute. String defining the vertical axis of the variable. It refers to the id of the vertical axis defined in the axis tag. Use ''none'' if the variable has no vertical axis.
\item[description]: this attribute can be applied to all tags but is used only with the field tag. In that case, the value of description is used to define, in the output netcdf file, the long\_name and standard\_name attributes of the variable.
\item[enabled]: field and file attribute. Logical to switch on/off the output of a field or a file.
\item[freq\_op]: field attribute (automatically defined, see part 1.4 (''control of the xml attributes'')).
An integer defining the frequency in seconds at which NEMO calls iom\_put for this variable. It corresponds to the model time step (rn\_rdt in the namelist) except for the variables computed at the frequency of the surface boundary condition (rn\_rdt $\times$ nn\_fsbc in the namelist).
\item[grid\_ref]: field attribute. String defining the horizontal grid of the variable. It refers to the id of the grid tag.
\item[ibegin]: zoom attribute. Integer defining the zoom starting point along the x direction. Automatically defined for TAO/RAMA/PIRATA moorings (see part 1.4).
\item[id]: exists for all tags. String defining the name of a specific tag, used later to refer to this tag. Tags of the same category must have different ids.
\item[jbegin]: zoom attribute. Integer defining the zoom starting point along the y direction. Automatically defined for TAO/RAMA/PIRATA moorings and equatorial sections (see part 1.4).
\item[level]: field attribute. Integer from 0 to 10 defining the output priority of a field. See the output\_level attribute definition.
\item[operation]: field attribute. String defining the type of temporal operation to perform on a variable. Possible choices are ''ave(X)'' for the temporal mean, ''inst(X)'' for instantaneous values, ''t\_min(X)'' for the temporal minimum and ''t\_max(X)'' for the temporal maximum.
\item[output\_freq]: file attribute. Integer defining the output frequency in seconds, for example 86400 for a daily mean.
\item[output\_level]: file attribute. Integer from 0 to 10 defining the output priority of the variables in a file: all variables listed in the file with a level smaller than or equal to output\_level will be output. Other variables won't be output even if they are listed in the file.
\item[positive]: axis attribute (always .FALSE.). Logical defining the vertical axis convention used in \NEMO (positive downward). Defines the positive attribute of the variable in the netcdf output file.
\item[prec]: field attribute. Integer defining the output precision.
Not implemented; real4 arrays are always output.
\item[name]: field or file attribute. String defining the name of a variable or a file. If the name of a file is undefined, its id is used as its name. Two files must have different names. Files with specific ids will have their name automatically defined (see part 1.4). Note that this name will be automatically completed by the cpu number (if needed) and ''.nc''.
\item[name\_suffix]: file attribute. String defining a suffix to be inserted after the name and before the cpu number and the ''.nc'' termination. Files with specific ids have an automatic definition of their suffix (see part 1.4).
\item[ni]: zoom attribute. Integer defining the zoom extent along the x direction. Automatically defined for equatorial sections (see part 1.4).
\item[nj]: zoom attribute. Integer defining the zoom extent along the y direction.
\item[ref]: field attribute. String referring to the id of the field to be added to a file.
\item[size]: axis attribute. Usage unknown...
\item[unit]: field attribute. String defining the unit of a variable and the associated attribute in the netcdf output file.
\item[zoom\_ref]: field attribute. String defining the subdomain of data on which the file should be written (to output data only over a limited area). It refers to the id of a zoom defined in the zoom tag.
\end{description}

\subsection{IO\_SERVER}

\subsubsection{Attached or detached mode?}

Iom\_put is based on the io\_server developed by Yann Meurdesoif from IPSL (see \href{http://forge.ipsl.jussieu.fr/ioserver/browser}{here} for the source code, or see its copy in the NEMOGCM/EXTERNAL directory). This server can be used in ''attached mode'' (as a library) or in ''detached mode'' (as an external executable on n cpus). In attached mode, each cpu of NEMO outputs its own subdomain. In detached mode, the io\_server gathers data from NEMO and outputs them split over n files, with n the number of cpus dedicated to the io\_server.
\subsubsection{Control the io\_server: the namelist file xmlio\_server.def} % %Again, a small table might be more readable? %Name %Type %Description %Comment %Using_server %Logical %Switch to use the server in attached or detached mode %(.TRUE. corresponding to detached mode). The control of the use of the io\_server is done through the namelist file of the io\_server called xmlio\_server.def. \textbf{using\_server}: logical, switch to use the server in attached or detached mode (.TRUE. corresponding to detached mode). \textbf{using\_oasis}: logical, set to .TRUE. if NEMO is used in coupled mode. \textbf{client\_id} = ''oceanx'' : character, used only in coupled mode. Specify the id used in OASIS to refer to NEMO. The same id must be used to refer to NEMO in the \$NBMODEL part of OASIS namcouple in the call of prim\_init\_comp\_proto in cpl\_oasis3f90 \textbf{server\_id} = ''ionemo'' : character, used only in coupled mode. Specify the id used in OASIS to refer to the IO\_SERVER when used in detached mode. Use the same id to refer to the io\_server in the \$NBMODEL part of OASIS namcouple. \textbf{global\_mpi\_buffer\_size}: integer; define the size in Mb of the MPI buffer used by the io\_server. \subsubsection{Number of cpu used by the io\_server in detached mode} The number of cpu used by the io\_server is specified only when launching the model. Here is an example of 2 cpus for the io\_server and 6 cpu for opa using mpirun: \texttt{ -p 2 -e ./ioserver} \texttt{ -p 6 -e ./opa } \subsection{Practical issues} \subsubsection{Add your own outputs} It is very easy to add you own outputs with iom\_put. 4 points must be followed. \begin{description} \item[1-] in NEMO code, add a \\ \texttt{ CALL iom\_put( 'identifier', array ) } \\ where you want to output a 2D or 3D array. \item[2-] don't forget to add \\ \texttt{ USE iom ! I/O manager library } \\ in the list of used modules in the upper part of your module. 
\item[3-] in the field\_definition part of the xml file, add the definition of your variable using the same identifier you used in the F90 code.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
...
...
\end{verbatim}
}}\end{alltt}
The axis\_ref and grid\_ref attributes must be consistent with the size of the array passed to iom\_put. If your array is computed within the surface module every \np{nn\_fsbc} time steps, add the field definition within the group defined with the id ''SBC'': $<$group id=''SBC''...$>$
\item[4-] add your field to one of the output files \\
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
...
...
\end{verbatim}
}}\end{alltt}
\end{description}

\subsubsection{Several time axes in the output file}

If your output file contains variables with different operations (see the operation definition), IOIPSL will create one specific time axis for each operation. Note that inst(X) will have a time axis corresponding to the end of each output period, whereas all other operators will have a time axis centred in the middle of the output periods.

\subsubsection{Error/bug messages from IOIPSL}

If you get the following error in the standard output file:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
FATAL ERROR FROM ROUTINE flio_dom_set
 --> too many domains simultaneously defined
 --> please unset useless domains
 --> by calling flio_dom_unset
\end{verbatim}
}}\end{alltt}

you must increase the value of dom\_max\_nb in fliocom.f90 (multiply it by 10, for example).

If you mix, in the same file, variables with different freq\_op (see the definition above), for example variables from the surface module with other variables, IOIPSL will print warning messages in the standard output file saying there may be a bug.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
WARNING FROM ROUTINE histvar_seq
 --> There were 10 errors in the learned sequence of variables
 --> for file 4
 --> This looks like a bug, please report it.
\end{verbatim}
}}\end{alltt}
Don't worry, there is no bug; everything is working properly!

% } %end \gmcomment

% ================================================================
% NetCDF4 support
% ================================================================
\section{NetCDF4 Support (\key{netcdf4})}
\label{DIA_nc4}

Since version 3.3, support for NetCDF4 chunking and (lossless) compression has been included. These options build on the standard NetCDF output and give the user control over the size of the chunks via namelist settings. Chunking and compression can lead to significant reductions in file size for a small runtime overhead. For a fuller discussion of chunking and other performance issues, the reader is referred to the NetCDF4 documentation found \href{http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#Chunking}{here}.

The new features are only available when the code has been linked with a NetCDF4 library (version 4.1 onwards, recommended) which has been built with HDF5 support (version 1.8.4 onwards, recommended). Datasets created with chunking and compression are not backwards compatible with the NetCDF3 "classic" format, but most analysis codes can simply be relinked with the new libraries and will then read both NetCDF3 and NetCDF4 files. NEMO executables linked with NetCDF4 libraries can be made to produce NetCDF3 files by setting the \np{ln\_nc4zip} logical to false in the \textit{namnc4} namelist:
%------------------------------------------namnc4----------------------------------------------------
\namdisplay{namnc4}
%-------------------------------------------------------------------------------------------------------------

If \key{netcdf4} has not been defined, these namelist parameters are not read. In this case, \np{ln\_nc4zip} is set to false and dummy routines for a few NetCDF4-specific functions are defined. These functions will not be used but need to be included so that compilation is possible with NetCDF3 libraries.
When using NetCDF4 libraries, \key{netcdf4} should be defined even if the intention is to create only NetCDF3-compatible files. This is necessary to avoid duplication between the dummy routines and the actual routines present in the library. Most compilers will fail at compile time when faced with such duplication. Thus, when linking with NetCDF4 libraries, the user must define \key{netcdf4} and control the type of NetCDF file produced via the namelist parameter.

Chunking and compression are applied only to 4D fields, and there is no advantage in chunking across more than one time dimension, since previously written chunks would have to be read back and decompressed before being added to. Therefore, user control over chunk sizes is provided only for the three space dimensions. The user sets an approximate number of chunks along each spatial axis. The actual size of the chunks will depend on the global domain size for mono-processors or, more likely, the local processor domain size for distributed processing. The derived values are subject to practical minimum values (to avoid wastefully small chunk sizes) and cannot be greater than the domain size in any dimension. The algorithm used is:
\begin{alltt} {{\scriptsize
\begin{verbatim}
ichunksz(1) = MIN( idomain_size, MAX( (idomain_size-1)/nn_nchunks_i + 1 ,16 ) )
ichunksz(2) = MIN( jdomain_size, MAX( (jdomain_size-1)/nn_nchunks_j + 1 ,16 ) )
ichunksz(3) = MIN( kdomain_size, MAX( (kdomain_size-1)/nn_nchunks_k + 1 , 1 ) )
ichunksz(4) = 1
\end{verbatim}
}}\end{alltt}
\noindent As an example, setting:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
nn_nchunks_i=4, nn_nchunks_j=4 and nn_nchunks_k=31
\end{verbatim}
}}\end{alltt}
\vspace{-10pt}
\noindent for a standard ORCA2\_LIM configuration gives chunk sizes of {\small\tt 46x38x1} in the mono-processor case (i.e. a global domain of {\small\tt 182x149x31}).
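The algorithm above can be transcribed into a short Python sketch; this is not part of NEMO, only a convenience for checking what chunk sizes a given namelist choice will produce (integer division mirrors the Fortran behaviour):

```python
# Python transcription of the NEMO NetCDF4 chunk-size algorithm quoted above.
# Fortran integer division (idomain_size-1)/nn_nchunks_i maps to // in Python.

def chunk_sizes(idom, jdom, kdom, nchunks_i, nchunks_j, nchunks_k):
    """Return the (i, j, k, t) chunk sizes derived from the namelist settings."""
    return (
        min(idom, max((idom - 1) // nchunks_i + 1, 16)),  # practical minimum of 16
        min(jdom, max((jdom - 1) // nchunks_j + 1, 16)),  # idem along j
        min(kdom, max((kdom - 1) // nchunks_k + 1, 1)),   # minimum of 1 in the vertical
        1,                                                # never chunk across time
    )

# ORCA2_LIM mono-processor example from the text:
# global domain 182x149x31 with nn_nchunks_i=4, nn_nchunks_j=4, nn_nchunks_k=31
print(chunk_sizes(182, 149, 31, 4, 4, 31))  # -> (46, 38, 1, 1)
```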
An illustration of the potential space savings that NetCDF4 chunking and compression provides is given in table \ref{Tab_NC4} which compares the results of two short runs of the ORCA2\_LIM reference configuration with a 4x2 mpi partitioning. Note the variation in the compression ratio achieved which reflects chiefly the dry to wet volume ratio of each processing region. %------------------------------------------TABLE---------------------------------------------------- \begin{table} \begin{tabular}{lrrr} Filename & NetCDF3 & NetCDF4 & Reduction\\ &filesize & filesize & \% \\ &(KB) & (KB) & \\ ORCA2\_restart\_0000.nc & 16420 & 8860 & 47\%\\ ORCA2\_restart\_0001.nc & 16064 & 11456 & 29\%\\ ORCA2\_restart\_0002.nc & 16064 & 9744 & 40\%\\ ORCA2\_restart\_0003.nc & 16420 & 9404 & 43\%\\ ORCA2\_restart\_0004.nc & 16200 & 5844 & 64\%\\ ORCA2\_restart\_0005.nc & 15848 & 8172 & 49\%\\ ORCA2\_restart\_0006.nc & 15848 & 8012 & 50\%\\ ORCA2\_restart\_0007.nc & 16200 & 5148 & 69\%\\ ORCA2\_2d\_grid\_T\_0000.nc & 2200 & 1504 & 32\%\\ ORCA2\_2d\_grid\_T\_0001.nc & 2200 & 1748 & 21\%\\ ORCA2\_2d\_grid\_T\_0002.nc & 2200 & 1592 & 28\%\\ ORCA2\_2d\_grid\_T\_0003.nc & 2200 & 1540 & 30\%\\ ORCA2\_2d\_grid\_T\_0004.nc & 2200 & 1204 & 46\%\\ ORCA2\_2d\_grid\_T\_0005.nc & 2200 & 1444 & 35\%\\ ORCA2\_2d\_grid\_T\_0006.nc & 2200 & 1428 & 36\%\\ ORCA2\_2d\_grid\_T\_0007.nc & 2200 & 1148 & 48\%\\ ... & ... & ... & .. 
\\
ORCA2\_2d\_grid\_W\_0000.nc & 4416 & 2240 & 50\%\\
ORCA2\_2d\_grid\_W\_0001.nc & 4416 & 2924 & 34\%\\
ORCA2\_2d\_grid\_W\_0002.nc & 4416 & 2512 & 44\%\\
ORCA2\_2d\_grid\_W\_0003.nc & 4416 & 2368 & 47\%\\
ORCA2\_2d\_grid\_W\_0004.nc & 4416 & 1432 & 68\%\\
ORCA2\_2d\_grid\_W\_0005.nc & 4416 & 1972 & 56\%\\
ORCA2\_2d\_grid\_W\_0006.nc & 4416 & 2028 & 55\%\\
ORCA2\_2d\_grid\_W\_0007.nc & 4416 & 1368 & 70\%\\
\end{tabular}
\caption{\label{Tab_NC4} Filesize comparison between NetCDF3 and NetCDF4 with chunking and compression}
\end{table}
%----------------------------------------------------------------------------------------------------

When \key{iomput} is activated with \key{netcdf4}, chunking and compression parameters for fields produced via iom\_put calls are set via a namelist identical in form and name to \textit{namnc4}, located in \textit{xmlio\_server.def}. Typically, this namelist serves the mean files, whilst the \textit{namnc4} in the main namelist file continues to serve the restart files. This duplication is unfortunate but appropriate since, if io\_servers are used, the domain sizes of the individual files produced by the io\_server processes may differ from those produced by the individual processing regions, and different chunking choices may be desired.
% -------------------------------------------------------------------------------------------------------------
% Tracer/Dynamics Trends
% -------------------------------------------------------------------------------------------------------------
\section[Tracer/Dynamics Trends (TRD)]
        {Tracer/Dynamics Trends (\key{trdtra}, \key{trddyn}, \\
         \key{trdvor}, \key{trdmld})}
\label{DIA_trd}
%------------------------------------------namtrd----------------------------------------------------
\namdisplay{namtrd}
%-------------------------------------------------------------------------------------------------------------

When the \key{trdtra} and/or \key{trddyn} CPP variables are defined, each trend of the dynamics and/or temperature and salinity time evolution equations is stored in three-dimensional arrays just after its computation ($i.e.$ at the end of each $dyn\cdots.F90$ and/or $tra\cdots.F90$ routine). These trends are then used in \mdl{trdmod} (see the TRD directory) every \np{nn\_trd} time steps. What is done depends on the CPP keys defined:
\begin{description}
\item[\key{trddyn}, \key{trdtra}]: a check of the basin-averaged properties of the momentum and/or tracer equations is performed;
\item[\key{trdvor}]: a vertical summation of the momentum tendencies is performed, then the curl is computed to obtain the barotropic vorticity tendencies, which are output;
\item[\key{trdmld}]: output of the tracer tendencies averaged vertically either over the mixed layer (\np{nn\_ctls}=0), or over a fixed number of model levels (\np{nn\_ctls}$>$1 provides the number of levels), or over a spatially varying but temporally fixed number of levels (typically the base of the winter mixed layer) read in \ifile{ctlsurf\_idx} (\np{nn\_ctls}=1);
\end{description}

The units in the output file can be changed using the \np{nn\_ucf} namelist parameter. For example, for a salinity tendency the units are given by PSU/s/\np{nn\_ucf}.
Setting \np{nn\_ucf}=86400 ($i.e.$ the number of seconds in a day) provides the tendencies in PSU/d.

When \key{trdmld} is defined, two time averaging procedures are available. Setting \np{ln\_trdmld\_instant} to \textit{true}, a simple time averaging is performed, so that the resulting tendency is the contribution to the change of a quantity between the two instantaneous values taken at the extremities of the time averaging period. Setting \np{ln\_trdmld\_instant} to \textit{false}, a double time averaging is performed, so that the resulting tendency is the contribution to the change of a quantity between two \textit{time mean} values. The latter option requires the use of an extra file, \ifile{restart\_mld} (\np{ln\_trdmld\_restart}=true), to restart a run.

Note that the mixed layer tendency diagnostic can also be used with biogeochemical models via the \key{trdtrc} and \key{trdmld\_trc} CPP keys.

% -------------------------------------------------------------------------------------------------------------
% On-line Floats trajectories
% -------------------------------------------------------------------------------------------------------------
\section{On-line Floats trajectories (FLO) (\key{floats})}
\label{FLO}
%--------------------------------------------namflo-------------------------------------------------------
\namdisplay{namflo}
%--------------------------------------------------------------------------------------------------------------

The on-line computation of floats advected either by the three dimensional velocity field or constrained to remain at a given depth ($w = 0$ in the computation) has been introduced in the system during the CLIPPER project. The algorithm used is based either on the work of \cite{Blanke_Raynaud_JPO97} (default option), or on a $4^{th}$ order Runge-Kutta algorithm (\np{ln\_flork4}=true).
Note that the \cite{Blanke_Raynaud_JPO97} algorithm has the advantage of providing trajectories which are consistent with the numerics of the code, so that the trajectories never intersect the bathymetry.

\subsubsection{Input data: initial coordinates}

Initial coordinates can be given using the Ariane Tools convention (IJK coordinates, \np{ln\_ariane}=true) or with longitude and latitude.

In the case of the Ariane convention, the input file name is \ifile{init\_float\_ariane}. Its format is: \\
\texttt{ I J K nisobfl itrash }

\noindent with:

- I, J, K: indexes of the initial position

- nisobfl: 0 for an isobar float, 1 for a float following the w velocity

- itrash: set to zero; it is a dummy variable included to respect the Ariane Tools convention

\noindent Example:\\
\noindent \texttt{ 100.00000 90.00000 -1.50000 1.00000 0.00000}\\
\texttt{ 102.00000 90.00000 -1.50000 1.00000 0.00000}\\
\texttt{ 104.00000 90.00000 -1.50000 1.00000 0.00000}\\
\texttt{ 106.00000 90.00000 -1.50000 1.00000 0.00000}\\
\texttt{ 108.00000 90.00000 -1.50000 1.00000 0.00000}\\

In the other case (longitude and latitude), the input file name is \ifile{init\_float}. Its format is: \\
\texttt{ Long Lat depth nisobfl ngrpfl itrash}

\noindent with:

- Long, Lat, depth: longitude, latitude, depth

- nisobfl: 0 for an isobar float, 1 for a float following the w velocity

- ngrpfl: number identifying the group to which the float belongs

- itrash: set to 1; it is a dummy variable.

\noindent Example:

\noindent\texttt{ 20.0 0.0 0.0 0 1 1 }\\
\texttt{ -21.0 0.0 0.0 0 1 1 }\\
\texttt{ -22.0 0.0 0.0 0 1 1 }\\
\texttt{ -23.0 0.0 0.0 0 1 1 }\\
\texttt{ -24.0 0.0 0.0 0 1 1 }\\

\np{jpnfl} is the total number of floats during the run. When initial positions are read from a restart file (\np{ln\_rstflo}=.TRUE.), \np{jpnnewflo}, the number of new floats, can be added in the initialization file.
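As an illustration, the longitude/latitude initialization file described above can be generated with a short script; the helper name and the float positions below are illustrative, not taken from a reference configuration:

```python
# Sketch: write an init_float file in the longitude/latitude format described
# above (Long Lat depth nisobfl ngrpfl itrash). Positions are illustrative.

def write_init_float(path, positions, nisobfl=0, ngrpfl=1):
    """positions: list of (longitude, latitude, depth) tuples."""
    with open(path, "w") as f:
        for lon, lat, depth in positions:
            # itrash is a dummy field, set to 1 by convention
            f.write(f"{lon:10.1f} {lat:6.1f} {depth:6.1f} {nisobfl:d} {ngrpfl:d} 1\n")

# three floats released at the surface along the equator
write_init_float("init_float",
                 [(20.0, 0.0, 0.0), (-21.0, 0.0, 0.0), (-22.0, 0.0, 0.0)])
```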
\subsubsection{Output data}

\np{nn\_writefl} is the frequency of writing to the float output file, and \np{nn\_stockfl} is the frequency of creation of the float restart file.

Output data can be written in ascii files (\np{ln\_flo\_ascii}=.TRUE.). In that case, the output file name is \texttt{trajec\_float}.

Another possible writing format is NetCDF (\np{ln\_flo\_ascii}=.FALSE.). There are 2 possibilities:

- if \key{iomput} is used, outputs are selected in \texttt{iodef.xml}. Here is an example of the specification to put in the file description section:
\vspace{-30pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
\end{verbatim}
}}\end{alltt}

- if \key{iomput} is not used, a file called \texttt{trajec\_float.nc} will be created by the IOIPSL library.

See also \href{http://stockage.univ-brest.fr/~grima/Ariane/}{here} the web site describing the off-line use of this marvellous diagnostic tool.

% -------------------------------------------------------------------------------------------------------------
% Harmonic analysis of tidal constituents
% -------------------------------------------------------------------------------------------------------------
\section{Harmonic analysis of tidal constituents (\key{diaharm})}
\label{DIA_diag_harm}

A module is available to compute the amplitude and phase of tidal waves. This diagnostic is activated with \key{diaharm}.
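As a standalone illustration of the decomposition this module performs (described below), the following Python sketch fits the mean level, amplitude and phase of a single constituent by least squares; it is not the model's Fortran implementation, and the constituent parameters are illustrative:

```python
import math

# Sketch of the harmonic decomposition h_i = A0 + sum_j A_j cos(nu_j t_i - phi_j) + e_i
# for a single constituent: least-squares fit of (A0, C, S) with
# C = A cos(phi), S = A sin(phi), solved via the 3x3 normal equations.

def fit_one_constituent(t, h, nu):
    n = len(t)
    c = [math.cos(nu * ti) for ti in t]
    s = [math.sin(nu * ti) for ti in t]
    cs = sum(ci * si for ci, si in zip(c, s))
    # normal equations G x = b for x = (A0, C, S)
    G = [[float(n), sum(c),                     sum(s)],
         [sum(c),   sum(ci * ci for ci in c),   cs],
         [sum(s),   cs,                         sum(si * si for si in s)]]
    b = [sum(h),
         sum(hi * ci for hi, ci in zip(h, c)),
         sum(hi * si for hi, si in zip(h, s))]
    # small Gauss-Jordan elimination (G is symmetric positive definite here)
    for i in range(3):
        p = G[i][i]
        for k in range(3):
            if k != i:
                r = G[k][i] / p
                for j in range(3):
                    G[k][j] -= r * G[i][j]
                b[k] -= r * b[i]
    A0, C, S = (b[i] / G[i][i] for i in range(3))
    return A0, math.hypot(C, S), math.atan2(S, C)   # A0, amplitude A_j, phase phi_j

# synthetic M2-like signal: A0 = 0.2 m, A = 1.5 m, phi = 0.7 rad
nu = 2.0 * math.pi / 44714.0                        # M2 period ~ 44714 s
t = [360.0 * i for i in range(2000)]                # 6-minute sampling (illustrative)
h = [0.2 + 1.5 * math.cos(nu * ti - 0.7) for ti in t]
A0, A, phi = fit_one_constituent(t, h, nu)
print(round(A0, 3), round(A, 3), round(phi, 3))     # -> 0.2 1.5 0.7
```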
%------------------------------------------namdia_harm----------------------------------------------------
\namdisplay{namdia_harm}
%----------------------------------------------------------------------------------------------------------

Concerning the on-line harmonic analysis, the following parameters are available in the namelist:

- \texttt{nit000\_han} is the first time step used for the harmonic analysis

- \texttt{nitend\_han} is the last time step used for the harmonic analysis

- \texttt{nstep\_han} is the time step frequency for the harmonic analysis

- \texttt{nb\_ana} is the number of harmonics to analyse

- \texttt{tname} is an array containing the names of the tidal constituents to analyse

\texttt{nit000\_han} and \texttt{nitend\_han} must lie between \texttt{nit000} and \texttt{nitend} of the simulation. The restart capability is not implemented.

The harmonic analysis solves the following equation:
\begin{equation}
h_{i} = A_{0} + \sum^{nb\_ana}_{j=1} A_{j} \cos ( \nu_{j} t_{i} - \phi_{j} ) + e_{i}
\end{equation}
with $A_{j}$, $\nu_{j}$, $\phi_{j}$ the amplitude, frequency and phase of each wave and $e_{i}$ the error. $h_{i}$ is the sea level at time $t_{i}$ and $A_{0}$ is the mean sea level. \\
We can rewrite this equation as:
\begin{equation}
h_{i} = A_{0} + \sum^{nb\_ana}_{j=1} \left[ C_{j}\cos( \nu_{j} t_{i} ) + S_{j}\sin( \nu_{j} t_{i} ) \right] + e_{i}
\end{equation}
with $A_{j}=\sqrt{C^{2}_{j}+S^{2}_{j}}$ and $\phi_{j}=\arctan(S_{j}/C_{j})$.
The output gives $C_{j}$ and $S_{j}$ for each tidal wave.

% -------------------------------------------------------------------------------------------------------------
%       Sections transports
% -------------------------------------------------------------------------------------------------------------
\section{Transports across sections (\key{diadct}) }
\label{DIA_diag_dct}

A module is available to compute the transport of volume, heat and salt through sections. This diagnostic is activated with \key{diadct}.

Each section is defined by the coordinates of its 2 extremities.
The pathways between them are constructed using tools which can be found in \texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT} and are written in a binary file \texttt{section\_ijglobal.diadct\_ORCA2\_LIM} which is later read by NEMO to compute the transports on-line.

The on-line transport module creates three output ascii files:

- \texttt{volume\_transport} for volume transports (unit: $10^{6}\,m^{3}\,s^{-1}$)

- \texttt{heat\_transport} for heat transports (unit: $10^{15}\,W$)

- \texttt{salt\_transport} for salt transports (unit: $10^{9}\,kg\,s^{-1}$)\\

Namelist parameters control how frequently the flows are summed and the time scales over which they are averaged, as well as the level of output for debugging:
%------------------------------------------namdct----------------------------------------------------
\namdisplay{namdct}
%-------------------------------------------------------------------------------------------------------------

- \texttt{nn\_dct}: frequency of computation of the instantaneous transports

- \texttt{nn\_dctwri}: frequency of writing (mean of the instantaneous transports)

- \texttt{nn\_debug}: debugging level of the section

\subsubsection{To create a binary file containing the pathway of each section}

In \texttt{NEMOGCM/TOOLS/SECTIONS\_DIADCT/run}, the file \texttt{list\_sections.ascii\_global} contains a list of all the sections that are to be computed (this list of sections is based on the MERSEA project metrics). Another file is available for the GYRE configuration (\texttt{list\_sections.ascii\_GYRE}).

Each section is defined by:\\
\noindent \texttt{ long1 lat1 long2 lat2 nclass (ok/no)strpond (no)ice section\_name }\\
with:

- \texttt{long1 lat1}: coordinates of the first extremity of the section;

- \texttt{long2 lat2}: coordinates of the second extremity of the section;

- \texttt{nclass}: the number of bounds of your classes (e.g.
3 bounds for 2 classes);

- \texttt{okstrpond} to compute heat and salt transports, \texttt{nostrpond} if not;

- \texttt{ice} to compute surface and volume ice transports, \texttt{noice} if not. \\

\noindent The results of the transport computation, and the directions of positive and negative flow, do not depend on the order of the 2 extremities in this file.\\

\noindent If \texttt{nclass} $\neq$ 0, the next lines contain the class type and the \texttt{nclass} bounds:\\
\texttt{long1 lat1 long2 lat2 nclass (ok/no)strpond (no)ice section\_name}\\
\texttt{classtype}\\
\texttt{zbound1}\\
\texttt{zbound2}\\
\texttt{.}\\
\texttt{.}\\
\texttt{zbound(nclass-1)}\\
\texttt{zbound(nclass)}

\noindent where \texttt{classtype} can be:

- \texttt{zsal} for salinity classes

- \texttt{ztem} for temperature classes

- \texttt{zlay} for depth classes

- \texttt{zsigi} for \textit{in situ} density classes

- \texttt{zsigp} for potential density classes \\

The script \texttt{job.ksh} computes the pathway for each section and creates a binary file \texttt{section\_ijglobal.diadct\_ORCA2\_LIM} which is read by NEMO. \\

It is possible to use this tool for new configurations: \texttt{job.ksh} has to be updated with the coordinates file name and path. \\

Examples of two sections, the ACC\_Drake\_Passage with no classes, and the ATL\_Cuba\_Florida with 4 temperature classes (5 class bounds), are shown:\\
\noindent \texttt{ -68. -54.5 -60.
-64.7 00 okstrpond noice ACC\_Drake\_Passage}\\
\noindent \texttt{ -80.5 22.5 -80.5 25.5 05 nostrpond noice ATL\_Cuba\_Florida}\\
\noindent \texttt{ztem}\\
\noindent \texttt{-2.0}\\
\noindent \texttt{ 4.5}\\
\noindent \texttt{ 7.0}\\
\noindent \texttt{12.0}\\
\noindent \texttt{40.0}

\subsubsection{To read the output files}

The output format is:\\
{\small\texttt{date, time-step number, section number, section name, section slope coefficient, class number, class name, class bound 1, class bound 2, transport\_direction1, transport\_direction2, transport\_total}}\\

For sections with classes, the first \texttt{nclass-1} lines correspond to the transport for each class and the last line corresponds to the total transport summed over all classes. For sections with no classes, class number \texttt{1} corresponds to \texttt{total class} and this class is called \texttt{N}, meaning \texttt{none}.\\

\noindent \texttt{transport\_direction1} is the positive part of the transport ($\geq 0$).

\noindent \texttt{transport\_direction2} is the negative part of the transport ($\leq 0$).\\

\noindent The \texttt{section slope coefficient} gives information about the significance of the transport signs and directions:\\

\begin{tabular}{|c|c|c|c|p{4cm}|}
\hline
section slope coefficient & section type & direction 1 & direction 2 & total transport \\
\hline
0. & horizontal & northward & southward & positive: northward \\
\hline
1000.
& vertical & eastward & westward & positive: eastward \\
\hline
$\neq 0.$, $\neq 1000.$ & diagonal & eastward & westward & positive: eastward \\
\hline
\end{tabular}

% -------------------------------------------------------------------------------------------------------------
%       Other Diagnostics
% -------------------------------------------------------------------------------------------------------------
\section{Other Diagnostics (\key{diahth}, \key{diaar5})}
\label{DIA_diag_others}

Aside from the standard model variables, other diagnostics can be computed on-line. The available ready-to-add diagnostic routines can be found in the directory DIA. Among the available diagnostics, the following ones are obtained when defining the \key{diahth} CPP key:

- the mixed layer depth (based on a density criterion, \citet{de_Boyer_Montegut_al_JGR04}) (\mdl{diahth})

- the turbocline depth (based on a turbulent mixing coefficient criterion) (\mdl{diahth})

- the depth of the 20\deg C isotherm (\mdl{diahth})

- the depth of the thermocline (maximum of the vertical temperature gradient) (\mdl{diahth})

The poleward heat and salt transports, their advective and diffusive components, and the meridional stream function can be computed on-line in \mdl{diaptr} by setting \np{ln\_diaptr} to true (see the \textit{namptr} namelist below). When \np{ln\_subbas}~=~true, the transports and stream function are computed for the Atlantic, Indian, Pacific and Indo-Pacific Oceans (defined north of 30\deg S) as well as for the World Ocean. The sub-basin decomposition requires an input file (\ifile{subbasins}) which contains three 2D mask arrays, the Indo-Pacific mask being deduced from the sum of the Indian and Pacific masks (Fig.~\ref{Fig_mask_subasins}).
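The mask arithmetic behind the sub-basin decomposition can be sketched with synthetic 2D arrays (toy values; in practice the three masks are read from the \ifile{subbasins} file, and the field names below are illustrative):

```python
import numpy as np

# Toy 4x4 basin masks (1 = grid point belongs to the basin, 0 = elsewhere);
# in NEMO these are the three 2D arrays read from the sub-basins input file.
atlmsk = np.array([[1, 1, 0, 0]] * 4)
pacmsk = np.array([[0, 0, 1, 0]] * 4)
indmsk = np.array([[0, 0, 0, 1]] * 4)

# The Indo-Pacific mask is deduced from the sum of the Indian and Pacific masks.
indpacmsk = np.clip(indmsk + pacmsk, 0, 1)

# Masking a 2D transport field and summing gives the per-basin contribution.
vt = np.full((4, 4), 2.0)  # dummy vertically-integrated meridional transport
basin_sum = {name: float((vt * msk).sum())
             for name, msk in [("atl", atlmsk), ("ind", indmsk),
                               ("pac", pacmsk), ("indpac", indpacmsk)]}
```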
%------------------------------------------namptr----------------------------------------------------
\namdisplay{namptr}
%-------------------------------------------------------------------------------------------------------------
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=1.0\textwidth]{./TexFiles/Figures/Fig_mask_subasins.pdf}
\caption{ \label{Fig_mask_subasins}
Decomposition of the World Ocean (here ORCA2) into the sub-basins used to compute the heat and salt transports as well as the meridional stream-function: Atlantic basin (red), Pacific basin (green), Indian basin (blue), Indo-Pacific basin (blue+green). Note that semi-enclosed seas (Red, Mediterranean and Baltic seas) as well as Hudson Bay are removed from the sub-basins. Note also that the Arctic Ocean has been split into Atlantic and Pacific basins along the North fold line. }
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In addition, a series of diagnostics has been added in \mdl{diaar5}. They correspond to outputs that are required for AR5 simulations (see Section \ref{DIA_steric} below for one of them). Activating those outputs requires defining the \key{diaar5} CPP key. \\ \\

% ================================================================
% Steric effect in sea surface height
% ================================================================
\section{Diagnosing the Steric effect in sea surface height}
\label{DIA_steric}

Changes in steric sea level occur when changes in the density of the water column imply an expansion or contraction of the column. They are essentially produced through surface heating/cooling and, to a lesser extent, through non-linear effects of the equation of state (cabbeling, thermobaricity...). Non-Boussinesq models contain all the ocean effects acting on the sea level. In particular, they include the steric effect.
In contrast, Boussinesq models, such as \NEMO, conserve volume rather than mass, and so do not properly represent expansion or contraction. The steric effect is therefore not explicitly represented. This approximation does not introduce a serious error with respect to the flow field calculated by the model \citep{Greatbatch_JGR94}, but extra attention is required when investigating sea level, as steric changes are an important contribution to local changes in sea level on seasonal and climatic time scales. This is especially true for investigations into sea level rise due to global warming.

Fortunately, the steric contribution to the sea level consists of a spatially uniform component that can be diagnosed by considering the mass budget of the world ocean \citep{Greatbatch_JGR94}. In order to better understand how the global mean sea level evolves, and thus how the steric sea level can be diagnosed, we compare in the following the non-Boussinesq and Boussinesq cases.

Let $\mathcal{M}$ denote the total mass of liquid seawater ($\mathcal{M}=\int_D \rho \,dv$), $\mathcal{V}$ the total volume of seawater ($\mathcal{V}=\int_D dv$), $\mathcal{A}$ the total surface of the ocean ($\mathcal{A}=\int_S ds$), $\bar{\rho}$ the global mean seawater (\textit{in situ}) density ($\bar{\rho}= 1/\mathcal{V} \int_D \rho \,dv$), and $\bar{\eta}$ the global mean sea level ($\bar{\eta}=1/\mathcal{A}\int_S \eta \,ds$).

A non-Boussinesq fluid conserves mass. It satisfies the following relations:
\begin{equation} \label{Eq_MV_nBq}
\begin{split}
\mathcal{M} &= \mathcal{V} \;\bar{\rho} \\
\mathcal{V} &= \mathcal{A} \;\bar{\eta}
\end{split}
\end{equation}
The temporal change in total mass is obtained from the density conservation equation:
\begin{equation} \label{Eq_Co_nBq}
\frac{1}{e_3} \partial_t ( e_3\,\rho) + \nabla( \rho \, \textbf{U} ) = \left.
\frac{\textit{emp}}{e_3}\right|_\textit{surface}
\end{equation}
where $\rho$ is the \textit{in situ} density, and \textit{emp} the surface mass exchanges with the other media of the Earth system (atmosphere, sea-ice, land). Its global average leads to the total mass change:
\begin{equation} \label{Eq_Mass_nBq}
\partial_t \mathcal{M} = \mathcal{A} \;\overline{\textit{emp}}
\end{equation}
where $\overline{\textit{emp}}=1/\mathcal{A}\int_S \textit{emp}\,ds$ is the mean mass flux through the ocean surface. Bringing \eqref{Eq_Mass_nBq} and the time derivative of \eqref{Eq_MV_nBq} together leads to the evolution equation of the mean sea level:
\begin{equation} \label{Eq_ssh_nBq}
\partial_t \bar{\eta} = \frac{\overline{\textit{emp}}}{ \bar{\rho}} - \frac{\mathcal{V}}{\mathcal{A}} \;\frac{\partial_t \bar{\rho} }{\bar{\rho}}
\end{equation}
The first term of equation \eqref{Eq_ssh_nBq} alters sea level by adding or subtracting mass from the ocean. The second term arises from temporal changes in the global mean density, $i.e.$ from steric effects.

In a Boussinesq fluid, $\rho$ is replaced by $\rho_o$ in all the equations except where $\rho$ appears multiplied by the gravity ($i.e.$ in the hydrostatic balance of the primitive equations). In particular, the mass conservation equation, \eqref{Eq_Co_nBq}, degenerates into the incompressibility equation:
\begin{equation} \label{Eq_Co_Bq}
\frac{1}{e_3} \partial_t ( e_3 ) + \nabla( \textbf{U} ) = \left. \frac{\textit{emp}}{\rho_o \,e_3}\right|_ \textit{surface}
\end{equation}
and the global average of this equation now gives the temporal change of the total volume:
\begin{equation} \label{Eq_V_Bq}
\partial_t \mathcal{V} = \mathcal{A} \;\frac{\overline{\textit{emp}}}{\rho_o}
\end{equation}
Only the volume is conserved, not the mass; or, more precisely, the mass which is conserved is the Boussinesq mass, $\mathcal{M}_o = \rho_o \mathcal{V}$.
The total volume (or equivalently the global mean sea level) is altered only by net volume fluxes across the ocean surface, not by changes in the mean mass of the ocean: the steric effect is missing in a Boussinesq fluid.

Nevertheless, following \citet{Greatbatch_JGR94}, the steric effect on the volume can be diagnosed by considering the mass budget of the ocean. The apparent changes in $\mathcal{M}$, the mass of the ocean, which are not induced by the surface mass flux must be compensated by a spatially uniform change in the mean sea level due to expansion/contraction of the ocean \citep{Greatbatch_JGR94}. In other words, the Boussinesq mass, $\mathcal{M}_o$, can be related to $\mathcal{M}$, the total mass of the ocean seen by the Boussinesq model, via the steric contribution to the sea level, $\eta_s$, a spatially uniform variable, as follows:
\begin{equation} \label{Eq_M_Bq}
\mathcal{M}_o = \mathcal{M} + \rho_o \,\eta_s \,\mathcal{A}
\end{equation}
Any change in $\mathcal{M}$ which cannot be explained by the net mass flux through the ocean surface is converted into a mean change in sea level. Introducing the total density anomaly, $\mathcal{D}= \int_D d_a \,dv$, where $d_a= (\rho -\rho_o ) / \rho_o$ is the density anomaly used in \NEMO (cf. \S\ref{TRA_eos}), in \eqref{Eq_M_Bq} leads to a very simple form for the steric height:
\begin{equation} \label{Eq_steric_Bq}
\eta_s = - \frac{1}{\mathcal{A}} \mathcal{D}
\end{equation}

The above formulation of the steric height of a Boussinesq ocean calls for four remarks. First, one might be tempted to define $\rho_o$ as the initial value of $\mathcal{M}/\mathcal{V}$, $i.e.$ to set $\mathcal{D}_{t=0}=0$, so that the initial steric height is zero. We do not recommend that. Indeed, in this case $\rho_o$ depends on the initial state of the ocean. Since $\rho_o$ has a direct effect on the dynamics of the ocean (it appears in the pressure gradient term of the momentum equation), this is definitely not a good idea when inter-comparing experiments.
We rather recommend fixing $\rho_o$ once and for all at $1035\;kg\,m^{-3}$. This value is a sensible choice for the reference density used in a Boussinesq ocean climate model since, with the exception of only a small percentage of the ocean, the density in the World Ocean varies by no more than 2$\%$ from this value (\cite{Gill1982}, page 47).

Second, we have assumed here that the total ocean surface, $\mathcal{A}$, does not change when the sea level changes, as is the case in all global ocean GCMs (wetting and drying of grid points is not allowed).

Third, the discretisation of \eqref{Eq_steric_Bq} depends on the type of free surface which is considered. In the non-linear free surface case, $i.e.$ when \key{vvl} is defined, it is given by:
\begin{equation} \label{Eq_discrete_steric_Bq}
\eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t} e_{2t} e_{3t} } { \sum_{i,\,j,\,k} e_{1t} e_{2t} e_{3t} }
\end{equation}
whereas in the linear free surface case, the volume above the \textit{z=0} surface must be explicitly taken into account to better approximate the total ocean mass, and thus the steric sea level:
\begin{equation} \label{Eq_discrete_steric_lin_Bq}
\eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t}e_{2t}e_{3t} + \sum_{i,\,j} d_a\; e_{1t}e_{2t} \eta } {\sum_{i,\,j,\,k} e_{1t}e_{2t}e_{3t} + \sum_{i,\,j} e_{1t}e_{2t} \eta }
\end{equation}

The fourth and last remark concerns the effective sea level and the presence of sea-ice. In the real ocean, sea ice (and the snow above it) depresses the liquid seawater through its mass loading. This depression is a result of the mass of the sea-ice/snow system acting on the liquid ocean. There is, however, no dynamical effect associated with these depressions in the liquid ocean sea level, so that there are no associated ocean currents. Hence, the dynamically relevant sea level is the effective sea level, $i.e.$ the sea level as if sea ice (and snow) were converted to liquid seawater \citep{Campin_al_OM08}.
However, in the current version of \NEMO the sea-ice is levitating above the ocean, without mass exchange between ice and ocean. Therefore the model effective sea level is always given by $\eta + \eta_s$, whether or not sea ice is present.

In AR5 outputs, the thermosteric sea level is requested. It is the steric sea level due to changes in ocean density arising just from changes in temperature. It is given by:
\begin{equation} \label{Eq_thermosteric_Bq}
\eta_s = - \frac{1}{\mathcal{A}} \int_D d_a(T,S_o,p_o) \,dv
\end{equation}
where $S_o$ and $p_o$ are the initial salinity and pressure, respectively.

Both the steric and thermosteric sea level are computed in \mdl{diaar5}, which requires \key{diaar5} to be defined in order to be called.

% ================================================================
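The discrete steric height of the non-linear free surface case given above is simply minus the volume-weighted mean of the density anomaly. A NumPy sketch of that sum follows (the array names and toy values are illustrative, not those of \mdl{diaar5}):

```python
import numpy as np

def steric_height(d_a, e1t, e2t, e3t, tmask):
    """eta_s = - sum(d_a * e1t*e2t*e3t) / sum(e1t*e2t*e3t) over ocean points,
    i.e. the discrete steric height of the non-linear free surface case."""
    vol = e1t[None, :, :] * e2t[None, :, :] * e3t * tmask  # cell volumes (m^3)
    return -(d_a * vol).sum() / vol.sum()

# Toy 3-level, 2x2 domain with a uniform density anomaly d_a = +0.002
# (e.g. rho ~ 1037 kg/m^3 for rho_o = 1035 kg/m^3).
e1t = np.full((2, 2), 1.0e5)        # zonal scale factors (m)
e2t = np.full((2, 2), 1.0e5)        # meridional scale factors (m)
e3t = np.full((3, 2, 2), 10.0)      # level thicknesses (m)
tmask = np.ones((3, 2, 2))          # all points are ocean
d_a = np.full((3, 2, 2), 0.002)
eta_s = steric_height(d_a, e1t, e2t, e3t, tmask)
```

For a uniform anomaly the result reduces to $-d_a$ itself, which gives a quick sanity check of the weighting.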