% ================================================================
% Chapter - Miscellaneous Topics
% ================================================================
\chapter{Miscellaneous Topics}
\label{MISC}
\minitoc
\newpage
$\ $\newline    % force a new line

% ================================================================
% Representation of Unresolved Straits
% ================================================================
\section{Representation of Unresolved Straits}
\label{MISC_strait}

In climate modeling, it often occurs that a crucial connection between water masses is broken because the grid mesh is too coarse to resolve narrow straits. For example, coarse grid spacing typically closes off the Mediterranean from the Atlantic at the Strait of Gibraltar. In this case, it is important for climate models to include the effects of salty water entering the Atlantic from the Mediterranean. Likewise, it is important for the Mediterranean to replenish its supply of water from the Atlantic to balance the net evaporation occurring over the Mediterranean region. This problem occurs even in eddy-permitting simulations. For example, in ORCA 1/4\deg several straits of the Indonesian archipelago (Ombai, Lombok...) are much narrower than even a single ocean grid-point.

We briefly describe here the three methods that can be used in \NEMO to handle such improperly resolved straits. The first two consist of opening the strait by hand while ensuring that the mass exchanges through the strait are not too large, by either artificially reducing the surface of the strait grid-cells or locally increasing the lateral friction. In the third one, the strait is closed but exchanges of mass, heat and salt across the land are allowed. Note that such modifications are so specific to a given configuration that no attempt has been made to set them in a generic way. However, examples of how they can be set up are given in the ORCA 2\deg and 0.5\deg configurations (search for \key{orca\_r2} or \key{orca\_r05} in the code).

% -------------------------------------------------------------------------------------------------------------
%       Hand made geometry changes
% -------------------------------------------------------------------------------------------------------------
\subsection{Hand made geometry changes}
\label{MISC_strait_hand}

$\bullet$ reduction of the scale factor in the cross-strait direction to a value in better agreement with the true mean width of the strait (Fig.~\ref{Fig_MISC_strait_hand}). This technique is sometimes called "partially open face" or "partially closed cells". The key issue here is to reduce only the faces of the $T$-cell ($i.e.$ change the value of the horizontal scale factors at $u$- or $v$-point) but not the volume of the $T$-cell. Indeed, reducing the volume of a strait $T$-cell can easily produce a numerical instability at that grid point that would require a reduction of the model time step. The changes associated with strait management are done in \mdl{domhgr}, just after the definition or reading of the horizontal scale factors.

$\bullet$ increase of the viscous boundary layer thickness by a local increase of the fmask value at the coast (Fig.~\ref{Fig_MISC_strait_hand}).
This is done in \mdl{dommsk} together with the setting of the coastal value of fmask (see Section \ref{LBC_coast}).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!tbp]     \begin{center}
\includegraphics[width=0.80\textwidth]{./TexFiles/Figures/Fig_Gibraltar.pdf}
\includegraphics[width=0.80\textwidth]{./TexFiles/Figures/Fig_Gibraltar2.pdf}
\caption{ \label{Fig_MISC_strait_hand}
Example of the Gibraltar strait defined in a $1\deg \times 1\deg$ mesh.
\textit{Top}: using partially open cells. The meridional scale factor at $v$-point is reduced on both sides of the strait to account for the real width of the strait (about 20 km). Note that the scale factors of the strait $T$-point remain unchanged.
\textit{Bottom}: using viscous boundary layers. The four fmask parameters along the strait coastlines are set to a value larger than 4, $i.e.$ the "strong" no-slip case (see Fig.~\ref{Fig_LBC_shlat}), creating a large viscous boundary layer that allows a reduced transport through the strait.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% -------------------------------------------------------------------------------------------------------------
%       Cross Land Advection
% -------------------------------------------------------------------------------------------------------------
\subsection{Cross Land Advection (\mdl{tracla})}
\label{MISC_strait_cla}
%--------------------------------------------namcla--------------------------------------------------------
\namdisplay{namcla}
%--------------------------------------------------------------------------------------------------------------

\colorbox{yellow}{Add a short description of CLA stuff here or in the lateral boundary condition chapter?}

%The problem is resolved here by allowing the mixing of tracers and mass/volume between non-adjacent water columns at nominated regions within the model. Momentum is not mixed. The scheme conserves total tracer content, and total volume (the latter in $z*$- or $s*$-coordinate), and maintains compatibility between the tracer and mass/volume budgets.

% ================================================================
% Closed seas
% ================================================================
\section{Closed seas (\mdl{closea})}
\label{MISC_closea}

\colorbox{yellow}{Add here a short description of the way closed seas are managed}

% ================================================================
% Sub-Domain Functionality (\textit{nizoom, njzoom}, namelist parameters)
% ================================================================
\section{Sub-Domain Functionality (\jp{jpizoom}, \jp{jpjzoom})}
\label{MISC_zoom}

The sub-domain functionality, also improperly called the zoom option (improperly because it is not associated with a change in model resolution), is quite a simple function that allows a simulation over a sub-domain of an already defined configuration ($i.e.$ without defining a new mesh, initial state and forcings). This option can be useful for testing the user settings of surface boundary conditions, or the initial ocean state of a huge ocean model configuration, while having a small computer memory requirement. It can also be used to easily test specific physics in a sub-domain (for example, see \citet{Madec_al_JPO96} for a test of the coupling between the sea-ice and ocean models used in the global ocean version of OPA, over the Arctic or Antarctic ocean, using a sub-domain).
In the standard model, this option does not include any specific treatment for the ocean boundaries of the sub-domain: they are considered as artificial vertical walls. Nevertheless, it is quite easy to add a restoring term toward a climatology in the vicinity of such boundaries (see \S\ref{TRA_dmp}).

In order to easily define a sub-domain over which the computation can be performed, the dimensions of all input arrays (ocean mesh, bathymetry, forcing, initial state, ...) are defined as \jp{jpidta}, \jp{jpjdta} and \jp{jpkdta} (\mdl{par\_oce} module), while the computational domain is defined through \jp{jpiglo}, \jp{jpjglo} and \jp{jpk} (\mdl{par\_oce} module). When running the model over the whole domain, the user sets \jp{jpiglo}=\jp{jpidta}, \jp{jpjglo}=\jp{jpjdta} and \jp{jpk}=\jp{jpkdta}. When running the model over a sub-domain, the user has to provide the size of the sub-domain, (\jp{jpiglo}, \jp{jpjglo}, \jp{jpk}), and the indices of the south western corner as \jp{jpizoom} and \jp{jpjzoom} in the \mdl{par\_oce} module (Fig.~\ref{Fig_LBC_zoom}).

Note that a third set of dimensions exists, \jp{jpi}, \jp{jpj} and \jp{jpk}, which is actually used to perform the computation. It is set by default to \jp{jpi}=\jp{jpiglo} and \jp{jpj}=\jp{jpjglo}, except for massively parallel computing where the computational domain is laid out on local processor memories following a 2D horizontal splitting.
% (see {\S}IV.2-c) ref to the section to be updated

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]    \begin{center}
\includegraphics[width=0.90\textwidth]{./TexFiles/Figures/Fig_LBC_zoom.pdf}
\caption{ \label{Fig_LBC_zoom}
Position of a model domain compared to the data input domain when the zoom functionality is used.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% ================================================================
% Accelerating the Convergence
% ================================================================
\section{Accelerating the Convergence (\np{nn\_acc} = 1)}
\label{MISC_acc}
%--------------------------------------------namdom-------------------------------------------------------
\namdisplay{namdom}
%--------------------------------------------------------------------------------------------------------------

Searching for an equilibrium state with a global ocean model requires a very long time integration period (a few thousand years). Due to the size of the time step required for numerical stability (less than a few hours), this usually requires a large elapsed time. In order to overcome this problem, \citet{Bryan1984} introduced a technique that is intended to accelerate the spin up to equilibrium. It uses a larger time step in the tracer evolution equations than in the momentum evolution equations. It does not affect the equilibrium solution but modifies the trajectory to reach it.

The acceleration of convergence option is used when \np{nn\_acc}=1. In that case, $\rdt=rn\_rdt$ is the time step of the dynamics while $\widetilde{\rdt}=rdttra$ is the tracer time-step. The former is set from the \np{rn\_rdt} namelist parameter while the latter is computed using a hyperbolic tangent profile and the following namelist parameters: \np{rn\_rdtmin}, \np{rn\_rdtmax} and \np{rn\_rdth}. Those three parameters correspond to the surface value, the deep ocean value, and the depth at which the transition occurs, respectively.
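The analytical profile actually coded in \NEMO is not reproduced here. As an illustration only, the following minimal Python sketch shows one plausible hyperbolic tangent profile controlled by these three namelist parameters; the default values and the transition width (taken here as half of \np{rn\_rdth}) are assumptions made for the example, not the model defaults.

\begin{verbatim}
import numpy as np

# Illustrative sketch only (not the NEMO Fortran): a tracer time step that
# increases from rn_rdtmin near the surface to rn_rdtmax in the deep ocean,
# with the transition centred on the depth rn_rdth.  The transition width
# (rn_rdth/2) is an assumption made for this illustration.
def rdttra_profile(gdept, rn_rdtmin=3600., rn_rdtmax=28800., rn_rdth=800.):
    """gdept: 1D array of T-level depths (m); returns tracer time steps (s)."""
    return rn_rdtmin + 0.5 * (rn_rdtmax - rn_rdtmin) * (
        1. + np.tanh((gdept - rn_rdth) / (0.5 * rn_rdth)))

gdept = np.array([5., 50., 200., 800., 2000., 5000.])   # example depths (m)
print(rdttra_profile(gdept))   # increases smoothly with depth
\end{verbatim}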
The set of prognostic equations to solve becomes:
\begin{equation} \label{Eq_acc}
	\begin{split}
 \frac{\partial \textbf{U}_h }{\partial t}
 	&\equiv \frac{\textbf{U}_h ^{t+1}-\textbf{U}_h^{t-1} }{2\rdt} = \ldots \\
 \frac{\partial T}{\partial t} &\equiv \frac{T^{t+1}-T^{t-1}}{2 \widetilde{\rdt}} = \ldots \\
 \frac{\partial S}{\partial t} &\equiv \frac{S^{t+1} -S^{t-1}}{2 \widetilde{\rdt}} = \ldots \\
	\end{split}
\end{equation}

\citet{Bryan1984} has examined the consequences of this distorted physics. Free waves have a slower phase speed, their meridional structure is slightly modified, and the growth rate of baroclinically unstable waves is reduced but with a wider range of instability. This technique is efficient for searching for an equilibrium state in coarse resolution models. However, its application is not suitable for many oceanic problems: it cannot be used for transient or time evolving problems (in particular, it is very questionable to use this technique when there is a seasonal cycle in the forcing fields), and it cannot be used in high-resolution models where baroclinically unstable processes are important. Moreover, the vertical variation of $\widetilde{\rdt}$ implies that the heat and salt contents are no longer conserved due to the vertical coupling of the ocean levels through both advection and diffusion. Therefore, setting \np{rn\_rdtmin} = \np{rn\_rdtmax} is usually the wiser choice.

% ================================================================
% Model optimisation, Control Print and Benchmark
% ================================================================
\section{Model Optimisation, Control Print and Benchmark}
\label{MISC_opt}
%--------------------------------------------namctl-------------------------------------------------------
\namdisplay{namctl}
%--------------------------------------------------------------------------------------------------------------

% \gmcomment{why not make these bullets into subsections?}

$\bullet$ Vector optimisation: \key{vectopt\_loop} enables the internal loops to collapse. This is a very efficient way to increase the length of vector calculations and thus to speed up the model on vector computers.
% Add here also one word on NPROMA technique that has been found useless, since compiler have made significant progress during the last decade.
% Add also one word on NEC specific optimisation (Novercheck option for example)

$\bullet$ Control print: four features are available.
%: describe here 4 things:

1- \np{ln\_ctl}: compute and print the trends averaged over the interior domain in all TRA, DYN, LDF and ZDF modules. This option is very helpful when diagnosing the origin of an undesired change in model results.

2- \np{ln\_ctl} again, but using the nictl and njctl namelist parameters to check the source of differences between mono- and multi-processor runs.

3- \key{esopa} (to be renamed key\_nemo): another option for model management. When defined, this key forces the activation of all options and CPP keys. For example, all tracer and momentum advection schemes are called! Therefore the model results have no physical meaning. However, this option forces both the compiler and the model to run through all the \textsc{Fortran} lines of the model. This allows the user to check for obvious compilation or execution errors with all CPP options, and errors in namelist options.

4- last digit comparison (\np{nn\_bit\_cmp}).
In an MPP simulation, the computation of a sum over the whole domain is performed as the summation over all processors of each of their sums over their interior domains. This double sum never gives exactly the same result as a single sum over the whole domain, due to truncation differences. The "bit comparison" option has been introduced in order to be able to check that mono-processor and multi-processor runs give exactly the same results.
%THIS is to be updated with the mpp_sum_glo introduced in v3.3
% nn_bit_cmp today only check that the nn_cla = 0 (no cross land advection)

$\bullet$ Benchmark (\np{nn\_bench}). This option defines a benchmark run based on a GYRE configuration (see \S\ref{CFG_gyre}) in which the resolution remains the same whatever the domain size. This allows a very large model domain to be used, just by changing the domain size (\jp{jpiglo}, \jp{jpjglo}) and without adjusting either the time-step or the physical parameterisations.

% ================================================================
% Elliptic solvers (SOL)
% ================================================================
\section{Elliptic solvers (SOL)}
\label{MISC_sol}
%--------------------------------------------namsol-------------------------------------------------------
\namdisplay{namsol}
%--------------------------------------------------------------------------------------------------------------

When the filtered sea surface height option is used, the surface pressure gradient is computed in \mdl{dynspg\_flt}. The force added in the momentum equation is computed implicitly. It is thus the solution of an elliptic equation \eqref{Eq_PE_flt} for which two solvers are available: a Successive-Over-Relaxation scheme (SOR) and a preconditioned conjugate gradient scheme (PCG) \citep{Madec_al_OM88, Madec_PhD90}. The solver is selected through the value of \np{nn\_solv} (namelist parameter).

The PCG is a very efficient method for solving elliptic equations on vector computers. It is fast and rather easy to use, which are attractive features for a large number of ocean situations (variable bottom topography, complex coastal geometry, variable grid spacing, islands, open or cyclic boundaries, etc.). It does not require a search for an optimal parameter as in the SOR method. However, the SOR method has been retained because it is a linear solver, which is a very useful property when using the adjoint model of \NEMO.

At each time step, the time derivative of the sea surface height at time step $t+1$ (or equivalently the divergence of the \textit{after} barotropic transport) that appears in the filtering force is the solution of the elliptic equation obtained from the horizontal divergence of the vertical summation of \eqref{Eq_PE_flt}.
Introducing the following coefficients:
\begin{equation} \label{Eq_sol_matrix}
	\begin{aligned}
&c_{i,j}^{NS}  &&= {2 \rdt }^2 \; \frac{H_v (i,j) \; e_{1v} (i,j)}{e_{2v}(i,j)}  \\
&c_{i,j}^{EW} &&= {2 \rdt }^2 \; \frac{H_u (i,j) \; e_{2u} (i,j)}{e_{1u}(i,j)}  \\
&b_{i,j} &&= \delta_i \left[ e_{2u}M_u \right] - \delta_j \left[ e_{1v}M_v \right]\ ,  \\
	\end{aligned}
\end{equation}
the five-point finite difference equation \eqref{Eq_psi_total} can be rewritten as:
\begin{equation} \label{Eq_solmat}
	\begin{split}
c_{i+1,j}^{NS} D_{i+1,j} + \; c_{i,j+1}^{EW} D_{i,j+1} + c_{i,j} ^{NS} D_{i-1,j} + \; c_{i,j} ^{EW} D_{i,j-1}  &   \\
- \left(c_{i+1,j}^{NS} + c_{i,j+1}^{EW} + c_{i,j}^{NS} + c_{i,j}^{EW} \right) D_{i,j}  &=  b_{i,j}
	\end{split}
\end{equation}
\eqref{Eq_solmat} is a linear symmetric system of equations. All the elements of the corresponding matrix \textbf{A} vanish except those of five diagonals. With the natural ordering of the grid points (i.e. from west to east and from south to north), the structure of \textbf{A} is block-tridiagonal with tridiagonal or diagonal blocks. \textbf{A} is a positive-definite symmetric matrix of size $(jpi \cdot jpj)^2$, and \textbf{B}, the right hand side of \eqref{Eq_solmat}, is a vector.

Note that in the linear free surface case, the depth that appears in \eqref{Eq_sol_matrix} does not vary with time, and thus the matrix can be computed once and for all. In the non-linear free surface case (\key{vvl} defined), the matrix has to be updated at each time step.

% -------------------------------------------------------------------------------------------------------------
%       Successive Over Relaxation
% -------------------------------------------------------------------------------------------------------------
\subsection{Successive Over Relaxation (\np{nn\_solv}=2, \mdl{solsor})}
\label{MISC_solsor}

Let us introduce the four cardinal coefficients:
\begin{align*}
a_{i,j}^S &= c_{i,j }^{NS}/d_{i,j}     &\qquad  a_{i,j}^W &= c_{i,j}^{EW}/d_{i,j}       \\
a_{i,j}^E &= c_{i,j+1}^{EW}/d_{i,j}    &\qquad   a_{i,j}^N &= c_{i+1,j}^{NS}/d_{i,j}
\end{align*}
where $d_{i,j} = c_{i,j}^{NS}+ c_{i+1,j}^{NS} + c_{i,j}^{EW} + c_{i,j+1}^{EW}$ (i.e. the diagonal of the matrix). \eqref{Eq_solmat} can be rewritten as:
\begin{equation} \label{Eq_solmat_p}
	\begin{split}
a_{i,j}^{N}  D_{i+1,j} +\,a_{i,j}^{E}  D_{i,j+1} +\, a_{i,j}^{S}  D_{i-1,j} +\,a_{i,j}^{W} D_{i,j-1}  -  D_{i,j} = \tilde{b}_{i,j}
	\end{split}
\end{equation}
with $\tilde b_{i,j} = b_{i,j}/d_{i,j}$. \eqref{Eq_solmat_p} is the equation actually solved with the SOR method. This method is iterative. Its algorithm can be summarised as follows (see \citet{Haltiner1980} for a further discussion):

initialisation (evaluate a first guess from previous time step computations)
\begin{equation}
D_{i,j}^0 = 2 \, D_{i,j}^t - D_{i,j}^{t-1}
\end{equation}
iteration $n$, from $n=0$ until convergence, do:
\begin{equation} \label{Eq_sor_algo}
	\begin{split}
R_{i,j}^n = &a_{i,j}^{N} D_{i+1,j}^n +\,a_{i,j}^{E}  D_{i,j+1} ^n +\, a_{i,j}^{S}  D_{i-1,j} ^{n+1}+\,a_{i,j}^{W} D_{i,j-1} ^{n+1} -  D_{i,j}^n - \tilde{b}_{i,j}    \\
D_{i,j} ^{n+1}  = &D_{i,j} ^{n}   + \omega \;R_{i,j}^n
	\end{split}
\end{equation}
where $\omega$ satisfies $1\leq \omega \leq 2$. An optimal value exists for $\omega$ which significantly accelerates the convergence, but it has to be adjusted empirically for each model domain (except for a uniform grid where an analytical expression for $\omega$ can be found \citep{Richtmyer1967}).
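To make the iteration concrete, here is a minimal Python sketch of a plain SOR sweep applying \eqref{Eq_sor_algo} on a small rectangular grid. It is an illustration of the method only, not the \mdl{solsor} routine: lateral boundary conditions, the north-fold treatment and the red-black ordering discussed below are omitted, and the array names are invented for the example.

\begin{verbatim}
import numpy as np

def sor_solve(aN, aE, aS, aW, btilde, d_first_guess, omega=1.6,
              eps=1.e-6, nmax=1000):
    """Illustrative SOR solver for the normalised five-point system
       aN*D[i+1,j] + aE*D[i,j+1] + aS*D[i-1,j] + aW*D[i,j-1] - D[i,j] = btilde
    on interior points, with D held fixed on the edges of the arrays.
    Not the NEMO solsor code: no north-fold, no red-black ordering, no MPP."""
    D = d_first_guess.copy()
    bnorm = np.sum(btilde[1:-1, 1:-1] ** 2)
    for n in range(nmax):
        res = 0.0
        # Gauss-Seidel style sweep: D[i-1,j] and D[i,j-1] hold updated values
        for i in range(1, D.shape[0] - 1):
            for j in range(1, D.shape[1] - 1):
                r = (aN[i, j] * D[i+1, j] + aE[i, j] * D[i, j+1]
                     + aS[i, j] * D[i-1, j] + aW[i, j] * D[i, j-1]
                     - D[i, j] - btilde[i, j])
                D[i, j] += omega * r        # D^{n+1} = D^n + omega * R^n
                res += r * r
        if res / bnorm <= eps:              # convergence test (see below)
            return D, n + 1
    raise RuntimeError("SOR did not converge within nmax iterations")
\end{verbatim}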
The value of $\omega$ is set using \np{rn\_sor}, a \textbf{namelist} parameter. The convergence test is of the form:
\begin{equation}
\delta = \frac{\sum\limits_{i,j}{R_{i,j}^n}{R_{i,j}^n}}{\sum\limits_{i,j}{ \tilde{b}_{i,j}^n}{\tilde{b}_{i,j}^n}} \leq \epsilon
\end{equation}
where $\epsilon$ is the absolute precision that is required. It is recommended that a value smaller than or equal to $10^{-6}$ is used for $\epsilon$, since larger values may lead to numerically induced basin scale barotropic oscillations. The precision is specified by setting \np{rn\_eps} (\textbf{namelist} parameter). In addition, two other tests are used to halt the iterative algorithm. They involve the number of iterations and the modulus of the right hand side. If the former exceeds a specified value, \np{nn\_max} (\textbf{namelist} parameter), or the latter is greater than $10^{15}$, the whole model computation is stopped and the last computed time step fields are saved in an abort.nc NetCDF file. In both cases, this usually indicates that there is something wrong in the model configuration (an error in the mesh, the initial state, the input forcing, or the magnitude of the time step or of the mixing coefficients). A typical value of \np{nn\_max} is a few hundred when $\epsilon = 10^{-6}$, increasing to a few thousand when $\epsilon = 10^{-12}$.

The vectorisation of the SOR algorithm is not straightforward. The scheme contains two linear recurrences on $i$ and $j$, which inhibits the vectorisation. \eqref{Eq_sor_algo} can be rewritten as:
\begin{equation}
	\begin{split}
R_{i,j}^n = &a_{i,j}^{N} D_{i+1,j}^n +\,a_{i,j}^{E}  D_{i,j+1} ^n +\,a_{i,j}^{S}  D_{i-1,j} ^{n}+\,a_{i,j}^{W} D_{i,j-1} ^{n} -  D_{i,j}^n - \tilde{b}_{i,j}      \\
R_{i,j}^n = &R_{i,j}^n - \omega \;a_{i,j}^{S}\; R_{i,j-1}^n      \\
R_{i,j}^n = &R_{i,j}^n - \omega \;a_{i,j}^{W}\; R_{i-1,j}^n
	\end{split}
\end{equation}
This technique slightly increases the number of iterations required to reach convergence, but this is largely compensated by the gain obtained by the suppression of the recurrences.

Another technique has been chosen, the so-called red-black SOR. It consists of solving \eqref{Eq_sor_algo} successively for odd and even grid points. It also slightly reduces the convergence rate but allows the vectorisation. In addition, and this is the reason why it has been chosen, it is able to handle the north fold boundary condition used in the ORCA configurations ($i.e.$ the tri-polar global ocean mesh).

The SOR method is very flexible and can be used under a wide range of conditions, including irregular boundaries, interior boundary points, etc. Proofs of convergence, etc. may be found in the standard numerical methods texts for partial differential equations.

% -------------------------------------------------------------------------------------------------------------
%       Preconditioned Conjugate Gradient
% -------------------------------------------------------------------------------------------------------------
\subsection{Preconditioned Conjugate Gradient (\np{nn\_solv}=1, \mdl{solpcg})}
\label{MISC_solpcg}

\textbf{A} is a positive-definite symmetric matrix, thus solving the linear system \eqref{Eq_solmat} is equivalent to the minimisation of a quadratic functional:
\begin{equation*}
\textbf{Ax} = \textbf{b} \leftrightarrow \textbf{x} =\text{inf}_{y} \,\phi (\textbf{y}) \quad , \qquad \phi (\textbf{y}) = 1/2 \langle \textbf{Ay},\textbf{y}\rangle - \langle \textbf{b},\textbf{y} \rangle
\end{equation*}
where $\langle , \rangle$ is the canonical dot product.
The idea of the conjugate gradient method is to search for the solution in the following iterative way: assuming that $\textbf{x}^n$ has been obtained, $\textbf{x}^{n+1}$ is found from $\textbf {x}^{n+1}={\textbf {x}}^n+\alpha^n{\textbf {d}}^n$ which satisfies:
\begin{equation*}
{\textbf{ x}}^{n+1}=\text{inf} _{{\textbf{ y}}={\textbf{ x}}^n+\alpha^n \,{\textbf{ d}}^n} \,\phi ({\textbf{ y}})\;\;\Leftrightarrow \;\;\frac{d\phi }{d\alpha}=0
\end{equation*}
and expressing $\phi (\textbf{y})$ as a function of $\alpha$, we obtain the value that minimises the functional:
\begin{equation*}
\alpha ^n = \langle{ \textbf{r}^n , \textbf{r}^n} \rangle  / \langle {\textbf{ A d}^n, \textbf{d}^n} \rangle
\end{equation*}
where $\textbf{r}^n = \textbf{b}-\textbf{A x}^n = \textbf{A} (\textbf{x}-\textbf{x}^n)$ is the error at rank $n$. The descent vector $\textbf{d}^n$ is chosen to be dependent on the error: $\textbf{d}^n = \textbf{r}^n + \beta^n \,\textbf{d}^{n-1}$. $\beta ^n$ is chosen such that the descent vectors form an orthogonal basis for the dot product linked to \textbf{A}. Expressing the condition $\langle \textbf{A d}^n, \textbf{d}^{n-1} \rangle = 0$, the value of $\beta ^n$ is found: $\beta ^n = \langle{ \textbf{r}^n , \textbf{r}^n} \rangle / \langle {\textbf{r}^{n-1}, \textbf{r}^{n-1}} \rangle$. As a result, the errors $\textbf{r}^n$ form an orthogonal basis for the canonical dot product while the descent vectors $\textbf{d}^n$ form an orthogonal basis for the dot product linked to \textbf{A}. The resulting algorithm is thus the following one:

initialisation:
\begin{equation*}
\begin{split}
\textbf{x}^0 &= D_{i,j}^0   = 2 D_{i,j}^t - D_{i,j}^{t-1}       \quad, \text{the initial guess }     \\
\textbf{r}^0 &= \textbf{d}^0 = \textbf{b} - \textbf{A x}^0       \\
\gamma_0 &= \langle{ \textbf{r}^0 , \textbf{r}^0} \rangle
\end{split}
\end{equation*}
iteration $n$, from $n=0$ until convergence, do:
\begin{equation}
\begin{split}
\textbf{z}^n& = \textbf{A d}^n \\
\alpha_n &= \gamma_n /  \langle{ \textbf{z}^n , \textbf{d}^n} \rangle \\
\textbf{x}^{n+1} &= \textbf{x}^n + \alpha_n \,\textbf{d}^n \\
\textbf{r}^{n+1} &= \textbf{r}^n - \alpha_n \,\textbf{z}^n \\
\gamma_{n+1} &= \langle{ \textbf{r}^{n+1} , \textbf{r}^{n+1}} \rangle \\
\beta_{n+1} &= \gamma_{n+1}/\gamma_{n} \\
\textbf{d}^{n+1} &= \textbf{r}^{n+1} + \beta_{n+1}\; \textbf{d}^{n}\\
\end{split}
\end{equation}
The convergence test is:
\begin{equation}
\delta = \gamma_{n}\; / \langle{ \textbf{b} , \textbf{b}} \rangle \leq \epsilon
\end{equation}
where $\epsilon$ is the absolute precision that is required. As for the SOR algorithm, the whole model computation is stopped when the number of iterations, \np{nn\_max}, or the modulus of the right hand side of the convergence equation exceeds a specified value (see \S\ref{MISC_solsor} for a further discussion). The required precision and the maximum number of iterations allowed are specified by setting \np{rn\_eps} and \np{nn\_max} (\textbf{namelist} parameters).

It can be demonstrated that the above algorithm is optimal, provides the exact solution in a number of iterations equal to the size of the matrix, and that the convergence rate is faster as the matrix is closer to the identity matrix, $i.e.$ its eigenvalues are closer to 1. Therefore, it is more efficient to solve a better conditioned system which has the same solution.
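As an illustration of this iteration, a minimal Python sketch for a generic symmetric positive-definite matrix is given below. It is not the \mdl{solpcg} routine (which applies the five-diagonal operator directly on the MPP-decomposed 2D fields), and no preconditioning is included at this stage.

\begin{verbatim}
import numpy as np

def conjugate_gradient(A, b, x0, eps=1.e-6, nmax=500):
    """Illustrative (un-preconditioned) conjugate gradient iteration,
    following the algorithm above, for a symmetric positive-definite A."""
    x = x0.copy()
    r = b - A @ x                    # initial error r^0
    d = r.copy()                     # initial descent vector d^0
    gamma = r @ r
    bb = b @ b
    for n in range(nmax):
        z = A @ d
        alpha = gamma / (z @ d)
        x = x + alpha * d
        r = r - alpha * z
        gamma_new = r @ r
        if gamma_new / bb <= eps:    # convergence test: delta <= epsilon
            return x, n + 1
        beta = gamma_new / gamma
        d = r + beta * d
        gamma = gamma_new
    raise RuntimeError("CG did not converge within nmax iterations")

# usage on a small symmetric positive-definite test system
A = np.array([[4., 1.], [1., 3.]])
b = np.array([1., 2.])
x, nit = conjugate_gradient(A, b, np.zeros(2))
print(x, nit)                        # x is close to the solution of A x = b
\end{verbatim}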
To obtain such a better conditioned system, we introduce a preconditioning matrix \textbf{Q} which is an approximation of \textbf{A} but much easier to invert than \textbf{A}, and solve the system:
\begin{equation} \label{Eq_pmat}
\textbf{Q}^{-1} \textbf{A x} = \textbf{Q}^{-1} \textbf{b}
\end{equation}

The same algorithm can be used to solve \eqref{Eq_pmat} if instead of the canonical dot product the following one is used: ${\langle{ \textbf{a} , \textbf{b}} \rangle}_Q = \langle{ \textbf{a} , \textbf{Q b}} \rangle$, and if $\textbf{\~{b}} = \textbf{Q}^{-1}\;\textbf{b}$ and $\textbf{\~{A}} = \textbf{Q}^{-1}\;\textbf{A}$ are substituted for \textbf{b} and \textbf{A} \citep{Madec_al_OM88}. In \NEMO, \textbf{Q} is chosen as the diagonal of \textbf{A}, i.e. the simplest form for \textbf{Q} so that it can be easily inverted. In this case, the discrete formulation of \eqref{Eq_pmat} is in fact given by \eqref{Eq_solmat_p} and thus the matrix and right hand side are computed independently from the solver used.

% ================================================================
% Diagnostics
% ================================================================
\section{Diagnostics (DIA, IOM, TRD, FLO)}
\label{MISC_diag}

% -------------------------------------------------------------------------------------------------------------
%       Standard Model Output
% -------------------------------------------------------------------------------------------------------------
\subsection{Model Output (default or \key{iomput} or \key{dimgout} or \key{netcdf4})}
\label{MISC_iom}
%to be updated with Seb documentation on the IO

The model outputs are of three types: the restart file, the output listing, and the output file(s). The restart file is used internally by the code when the user wants to start the model with initial conditions defined by a previous simulation. It contains all the information that is necessary in order for there to be no changes in the model results (even at the computer precision) between a run performed with several restarts and the same run performed in one step. It should be noted that this requires that the restart file contain two consecutive time steps for all the prognostic variables, and that it is saved in the same binary format as the one used by the computer that is to read it (in particular, 32-bit binary IEEE format must not be used for this file).

The output listing and file(s) are predefined but should be checked and, if necessary, adapted to the user's needs. The output listing is stored in the $ocean.output$ file. The information is printed from within the code on the logical unit $numout$. To locate these prints, use the UNIX command "\textit{grep -i numout}" in the source code directory.

In the standard configuration, the user will find the model results in NetCDF files containing mean values (or instantaneous values if \key{diainstant} is defined) for every time-step where output is demanded. These outputs are defined in the \mdl{diawri} module. When defining \key{dimgout}, the outputs are written in DIMG format, an IEEE output format.

Since version 3.3, support for NetCDF4 chunking and (loss-less) compression has been included. These options build on the standard NetCDF output and allow the user control over the size of the chunks via namelist settings. Chunking and compression can lead to significant reductions in file sizes for a small runtime overhead.
For a fuller discussion on chunking and other performance issues the reader is referred to the NetCDF4 documentation: http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html\#Chunking

The new features are only available when the code has been linked with a NetCDF4 library (version 4.1 onwards, recommended) which has been built with HDF5 support (version 1.8.4 onwards, recommended). Datasets created with chunking and compression are not backwards compatible with NetCDF3 "classic" format but most analysis codes can be relinked simply with the new libraries and will then read both NetCDF3 and NetCDF4 files. NEMO executables linked with NetCDF4 libraries can be made to produce NetCDF3 files by setting the \np{ln\_nc4zip} logical to false in the \np{namnc4} namelist:

%------------------------------------------namnc4----------------------------------------------------
\namdisplay{namnc4}
%-------------------------------------------------------------------------------------------------------------

If \key{netcdf4} has not been defined, these namelist parameters are not read. In this case, \np{ln\_nc4zip} is set false and dummy routines for a few NetCDF4-specific functions are defined. These functions will not be used but need to be included so that compilation is possible with NetCDF3 libraries.

When using NetCDF4 libraries, \key{netcdf4} should be defined even if the intention is to create only NetCDF3-compatible files. This is necessary to avoid duplication between the dummy routines and the actual routines present in the library. Most compilers will fail at compile time when faced with such duplication. Thus when linking with NetCDF4 libraries the user must define \key{netcdf4} and control the type of NetCDF file produced via the namelist parameter.

Chunking and compression are applied only to 4D fields and there is no advantage in chunking across more than one time dimension since previously written chunks would have to be read back and decompressed before being added to. Therefore, user control over chunk sizes is provided only for the three space dimensions. The user sets an approximate number of chunks along each spatial axis. The actual size of the chunks will depend on global domain size for mono-processors or, more likely, the local processor domain size for distributed processing. The derived values are subject to practical minimum values (to avoid wastefully small chunk sizes) and cannot be greater than the domain size in any dimension. The algorithm used is:
\begin{alltt}  {{\tiny
\begin{verbatim}
     ichunksz(1) = MIN( idomain_size,MAX( (idomain_size-1)/nn_nchunks_i + 1 ,16 ) )
     ichunksz(2) = MIN( jdomain_size,MAX( (jdomain_size-1)/nn_nchunks_j + 1 ,16 ) )
     ichunksz(3) = MIN( kdomain_size,MAX( (kdomain_size-1)/nn_nchunks_k + 1 , 1 ) )
     ichunksz(4) = 1
\end{verbatim}
}}\end{alltt}
\noindent As an example, setting:
\vspace{-20pt}
\begin{alltt}  {{\tiny
\begin{verbatim}
     nn_nchunks_i=4, nn_nchunks_j=4 and nn_nchunks_k=31
\end{verbatim}
}}\end{alltt}
\vspace{-10pt}
\noindent for a standard ORCA2\_LIM configuration gives chunksizes of {\small\tt 46x38x1} respectively in the mono-processor case (i.e. global domain of {\small\tt 182x149x31}). An illustration of the potential space savings that NetCDF4 chunking and compression provides is given in table \ref{Tab_NC4} which compares the results of two short runs of the ORCA2\_LIM reference configuration with a 4x2 mpi partitioning.
Note the variation in the compression ratio achieved, which chiefly reflects the dry to wet volume ratio of each processing region.

%------------------------------------------TABLE----------------------------------------------------
\begin{table}
\begin{tabular}{lrrr}
Filename & NetCDF3 & NetCDF4 & Reduction \\
         & filesize & filesize & \% \\
         & (KB) & (KB) & \\
ORCA2\_restart\_0000.nc & 16420 & 8860 & 47\% \\
ORCA2\_restart\_0001.nc & 16064 & 11456 & 29\% \\
ORCA2\_restart\_0002.nc & 16064 & 9744 & 40\% \\
ORCA2\_restart\_0003.nc & 16420 & 9404 & 43\% \\
ORCA2\_restart\_0004.nc & 16200 & 5844 & 64\% \\
ORCA2\_restart\_0005.nc & 15848 & 8172 & 49\% \\
ORCA2\_restart\_0006.nc & 15848 & 8012 & 50\% \\
ORCA2\_restart\_0007.nc & 16200 & 5148 & 69\% \\
ORCA2\_2d\_grid\_T\_0000.nc & 2200 & 1504 & 32\% \\
ORCA2\_2d\_grid\_T\_0001.nc & 2200 & 1748 & 21\% \\
ORCA2\_2d\_grid\_T\_0002.nc & 2200 & 1592 & 28\% \\
ORCA2\_2d\_grid\_T\_0003.nc & 2200 & 1540 & 30\% \\
ORCA2\_2d\_grid\_T\_0004.nc & 2200 & 1204 & 46\% \\
ORCA2\_2d\_grid\_T\_0005.nc & 2200 & 1444 & 35\% \\
ORCA2\_2d\_grid\_T\_0006.nc & 2200 & 1428 & 36\% \\
ORCA2\_2d\_grid\_T\_0007.nc & 2200 & 1148 & 48\% \\
...                         & ...  & ...  & ..   \\
ORCA2\_2d\_grid\_W\_0000.nc & 4416 & 2240 & 50\% \\
ORCA2\_2d\_grid\_W\_0001.nc & 4416 & 2924 & 34\% \\
ORCA2\_2d\_grid\_W\_0002.nc & 4416 & 2512 & 44\% \\
ORCA2\_2d\_grid\_W\_0003.nc & 4416 & 2368 & 47\% \\
ORCA2\_2d\_grid\_W\_0004.nc & 4416 & 1432 & 68\% \\
ORCA2\_2d\_grid\_W\_0005.nc & 4416 & 1972 & 56\% \\
ORCA2\_2d\_grid\_W\_0006.nc & 4416 & 2028 & 55\% \\
ORCA2\_2d\_grid\_W\_0007.nc & 4416 & 1368 & 70\% \\
\end{tabular}
\caption{ \label{Tab_NC4}
Filesize comparison between NetCDF3 and NetCDF4 with chunking and compression}
\end{table}
%----------------------------------------------------------------------------------------------------

Since version 3.2, an I/O server has been added which provides more flexibility in the choice of the fields to be output as well as in how the writing work is distributed over the processors in massively parallel computing. It is activated when \key{iomput} is defined.

When \key{iomput} is activated with \key{netcdf4}, chunking and compression parameters for fields produced via \np{iom\_put} calls are set via an equivalent and identically named namelist to \np{namnc4} in \np{xmlio\_server.def}. Typically this namelist serves the mean files whilst the \np{namnc4} in the main namelist file continues to serve the restart files. This duplication is unfortunate but appropriate since, if using io\_servers, the domain sizes of the individual files produced by the io\_server processes may be different to those produced by the individual processing regions, and different chunking choices may be desired.
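As a cross-check of the chunk-size rule given above, the following short Python sketch reproduces its integer arithmetic and the quoted mono-processor ORCA2 example; it is an illustration only, the actual computation being done in the \textsc{Fortran} I/O layer.

\begin{verbatim}
def chunk_sizes(idomain_size, jdomain_size, kdomain_size,
                nn_nchunks_i, nn_nchunks_j, nn_nchunks_k):
    """Reproduce the integer arithmetic of the chunk-size algorithm above."""
    return [
        min(idomain_size, max((idomain_size - 1) // nn_nchunks_i + 1, 16)),
        min(jdomain_size, max((jdomain_size - 1) // nn_nchunks_j + 1, 16)),
        min(kdomain_size, max((kdomain_size - 1) // nn_nchunks_k + 1, 1)),
        1,                       # a single chunk along the time dimension
    ]

# mono-processor ORCA2_LIM example quoted above: global domain 182x149x31
print(chunk_sizes(182, 149, 31, 4, 4, 31))    # -> [46, 38, 1, 1]
\end{verbatim}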
{
% -------------------------------------------------------------------------------------------------------------
%       Tracer/Dynamics Trends
% -------------------------------------------------------------------------------------------------------------
\subsection[Tracer/Dynamics Trends (TRD)]
		{Tracer/Dynamics Trends (\key{trdmld}, \key{trdtra}, \key{trddyn}, \key{trdmld\_trc})}
\label{MISC_tratrd}
%------------------------------------------namtrd----------------------------------------------------
\namdisplay{namtrd}
%-------------------------------------------------------------------------------------------------------------

When the \key{trdtra} and/or \key{trddyn} CPP variables are defined, each trend of the dynamics and/or temperature and salinity time evolution equations is stored in three-dimensional arrays just after its computation ($i.e.$ at the end of each $dyn\cdots.F90$ and/or $tra\cdots.F90$ routine). These trends are then used in \mdl{trdmod} (see TRD directory) every \np{nn\_trd} time-steps. What is done depends on the CPP keys defined:
\begin{description}
\item[\key{trddyn}, \key{trdtra}]: a check of the basin averaged properties of the momentum and/or tracer equations is performed;
\item[\key{trdvor}]: a vertical summation of the momentum tendencies is performed, then the curl is computed to obtain the barotropic vorticity tendencies which are output;
\item[\key{trdmld}]: output of the tracer tendencies averaged vertically either over the mixed layer (\np{nn\_ctls}=0), or over a fixed number of model levels (\np{nn\_ctls}$>$1 provides the number of levels), or over a spatially varying but temporally fixed number of levels (typically the base of the winter mixed layer) read in \ifile{ctlsurf\_idx} (\np{nn\_ctls}=1).
\end{description}

The units in the output file can be changed using the \np{nn\_ucf} namelist parameter. For example, in the case of the salinity tendency the units are given by PSU/s/\np{nn\_ucf}. Setting \np{nn\_ucf}=86400 provides the tendencies in PSU/d.

When \key{trdmld} is defined, two time-averaging procedures are proposed. Setting \np{ln\_trdmld\_instant} to \textit{true}, a simple time averaging is performed, so that the resulting tendency is the contribution to the change of a quantity between the two instantaneous values taken at the extremities of the time averaging period. Setting \np{ln\_trdmld\_instant} to \textit{false}, a double time averaging is performed, so that the resulting tendency is the contribution to the change of a quantity between two \textit{time mean} values. The latter option requires the use of an extra file, \ifile{restart\_mld} (\np{ln\_trdmld\_restart}=true), to restart a run.

Note that the mixed layer tendency diagnostic can also be used on biogeochemical models via the \key{trdtrc} and \key{trdmld\_trc} CPP keys.
% -------------------------------------------------------------------------------------------------------------
%       On-line Floats trajectories
% -------------------------------------------------------------------------------------------------------------
\subsection{On-line Floats trajectories (FLO) (\key{floats})}
\label{FLO}
%--------------------------------------------namflo-------------------------------------------------------
\namdisplay{namflo}
%--------------------------------------------------------------------------------------------------------------

The on-line computation of floats advected either by the three dimensional velocity field or constrained to remain at a given depth ($w = 0$ in the computation) has been introduced in the system during the CLIPPER project. The algorithm used is based either on the work of \cite{Blanke_Raynaud_JPO97} (default option), or on a $4^{th}$ order Runge-Kutta algorithm (\np{ln\_flork4}=true). Note that the \cite{Blanke_Raynaud_JPO97} algorithm has the advantage of providing trajectories which are consistent with the numerics of the code, so that the trajectories never intercept the bathymetry. See also the web site describing the off-line use of this marvellous diagnostic tool (http://stockage.univ-brest.fr/~grima/Ariane/).

% -------------------------------------------------------------------------------------------------------------
%       Other Diagnostics
% -------------------------------------------------------------------------------------------------------------
\subsection{Other Diagnostics (\key{diahth}, \key{diaar5})}
\label{MISC_diag_others}

Aside from the standard model variables, other diagnostics can be computed on-line. The available ready-to-add diagnostics routines can be found in directory DIA. Among the available diagnostics the following ones are obtained when defining the \key{diahth} CPP key:

- the mixed layer depth (based on a density criterion, \citet{de_Boyer_Montegut_al_JGR04}) (\mdl{diahth})

- the turbocline depth (based on a turbulent mixing coefficient criterion) (\mdl{diahth})

- the depth of the 20\deg C isotherm (\mdl{diahth})

- the depth of the thermocline (maximum of the vertical temperature gradient) (\mdl{diahth})

The poleward heat and salt transports, their advective and diffusive components, and the meridional stream function can be computed on-line in \mdl{diaptr} by setting \np{ln\_diaptr} to true (see the \textit{namptr} namelist below). When \np{ln\_subbas}~=~true, transports and stream function are computed for the Atlantic, Indian, Pacific and Indo-Pacific Oceans (defined north of 30\deg S) as well as for the World Ocean. The sub-basin decomposition requires an input file (\ifile{subbasins}) which contains three 2D mask arrays, the Indo-Pacific mask being deduced from the sum of the Indian and Pacific masks (Fig.~\ref{Fig_mask_subasins}).

%------------------------------------------namptr----------------------------------------------------
\namdisplay{namptr}
%-------------------------------------------------------------------------------------------------------------
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=1.0\textwidth]{./TexFiles/Figures/Fig_mask_subasins.pdf}
\caption{ \label{Fig_mask_subasins}
Decomposition of the World Ocean (here ORCA2) into sub-basins used to compute the heat and salt transports as well as the meridional stream-function: Atlantic basin (red), Pacific basin (green), Indian basin (blue), Indo-Pacific basin (blue+green).
Note that semi-enclosed seas (Red, Mediterranean and Baltic seas) as well as Hudson Bay are removed from the sub-basins. Note also that the Arctic Ocean has been split into Atlantic and Pacific basins along the North fold line.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In addition, a series of diagnostics has been added in the \mdl{diaar5} module. They correspond to outputs that are required for AR5 simulations (see Section \ref{MISC_steric} below for one of them). Activating those outputs requires the \key{diaar5} CPP key to be defined.

% ================================================================
% Steric effect in sea surface height
% ================================================================
\section{Steric effect in sea surface height}
\label{MISC_steric}

Changes in steric sea level are caused when changes in the density of the water column imply an expansion or contraction of the column. These changes are essentially produced through surface heating/cooling and to a lesser extent through non-linear effects of the equation of state (cabbeling, thermobaricity...). Non-Boussinesq models contain all ocean effects within the ocean acting on the sea level. In particular, they include the steric effect. In contrast, Boussinesq models, such as \NEMO, conserve volume, rather than mass, and so do not properly represent expansion or contraction. The steric effect is therefore not explicitly represented. This approximation does not represent a serious error with respect to the flow field calculated by the model \citep{Greatbatch_JGR94}, but extra attention is required when investigating sea level, as steric changes are an important contribution to local changes in sea level on seasonal and climatic time scales. This is especially true for investigations into sea level rise due to global warming.

Fortunately, the steric contribution to the sea level consists of a spatially uniform component that can be diagnosed by considering the mass budget of the world ocean \citep{Greatbatch_JGR94}. In order to better understand how global mean sea level evolves and thus how the steric sea level can be diagnosed, we compare, in the following, the non-Boussinesq and Boussinesq cases.

Let $\mathcal{M}$ denote the total mass of liquid seawater ($\mathcal{M}=\int_D \rho dv$), $\mathcal{V}$ the total volume of seawater ($\mathcal{V}=\int_D dv$), $\mathcal{A}$ the total surface of the ocean ($\mathcal{A}=\int_S ds$), $\bar{\rho}$ the global mean seawater (\textit{in situ}) density ($\bar{\rho}= 1/\mathcal{V} \int_D \rho \,dv$), and $\bar{\eta}$ the global mean sea level ($\bar{\eta}=1/\mathcal{A}\int_S \eta \,ds$).

A non-Boussinesq fluid conserves mass. It satisfies the following relations:
\begin{equation} \label{Eq_MV_nBq}
	\begin{split}
\mathcal{M} &=  \mathcal{V}  \;\bar{\rho}    \\
\mathcal{V} &=  \mathcal{A}  \;\bar{\eta}
	\end{split}
\end{equation}
The temporal change in total mass is obtained from the density conservation equation:
\begin{equation} \label{Eq_Co_nBq}
\frac{1}{e_3} \partial_t ( e_3\,\rho) + \nabla( \rho \, \textbf{U} ) = \left. \frac{\textit{emp}}{e_3}\right|_\textit{surface}
\end{equation}
where $\rho$ is the \textit{in situ} density, and \textit{emp} the surface mass exchanges with the other media of the Earth system (atmosphere, sea-ice, land).
Its global average leads to the total mass change
\begin{equation} \label{Eq_Mass_nBq}
\partial_t \mathcal{M} = \mathcal{A} \;\overline{\textit{emp}}
\end{equation}
where $\overline{\textit{emp}}=\frac{1}{\mathcal{A}}\int_S \textit{emp}\,ds$ is the mean mass flux through the ocean surface. Bringing \eqref{Eq_Mass_nBq} and the time derivative of \eqref{Eq_MV_nBq} together leads to the evolution equation of the mean sea level
\begin{equation} \label{Eq_ssh_nBq}
\partial_t \bar{\eta} = \frac{\overline{\textit{emp}}}{ \bar{\rho}} - \frac{\mathcal{V}}{\mathcal{A}} \;\frac{\partial_t \bar{\rho} }{\bar{\rho}}
\end{equation}
The first term in equation \eqref{Eq_ssh_nBq} alters sea level by adding or subtracting mass from the ocean. The second term arises from temporal changes in the global mean density, $i.e.$ from steric effects.

In a Boussinesq fluid, $\rho$ is replaced by $\rho_o$ in all the equations except where $\rho$ appears multiplied by the gravity ($i.e.$ in the hydrostatic balance of the primitive equations). In particular, the mass conservation equation, \eqref{Eq_Co_nBq}, degenerates into the incompressibility equation:
\begin{equation} \label{Eq_Co_Bq}
\frac{1}{e_3} \partial_t ( e_3 ) + \nabla( \textbf{U} ) =  \left. \frac{\textit{emp}}{\rho_o \,e_3}\right|_ \textit{surface}
\end{equation}
and the global average of this equation now gives the temporal change of the total volume,
\begin{equation} \label{Eq_V_Bq}
\partial_t \mathcal{V} = \mathcal{A} \;\frac{\overline{\textit{emp}}}{\rho_o}
\end{equation}
Only the volume is conserved, not mass, or, more precisely, the mass which is conserved is the Boussinesq mass, $\mathcal{M}_o = \rho_o \mathcal{V}$. The total volume (or equivalently the global mean sea level) is altered only by net volume fluxes across the ocean surface, not by changes in mean mass of the ocean: the steric effect is missing in a Boussinesq fluid.

Nevertheless, following \citet{Greatbatch_JGR94}, the steric effect on the volume can be diagnosed by considering the mass budget of the ocean. The apparent changes in $\mathcal{M}$, the mass of the ocean, which are not induced by the surface mass flux must be compensated by a spatially uniform change in the mean sea level due to expansion/contraction of the ocean \citep{Greatbatch_JGR94}. In other words, the Boussinesq mass, $\mathcal{M}_o$, can be related to $\mathcal{M}$, the total mass of the ocean seen by the Boussinesq model, via the steric contribution to the sea level, $\eta_s$, a spatially uniform variable, as follows:
\begin{equation} \label{Eq_M_Bq}
\mathcal{M}_o  =  \mathcal{M} + \rho_o \,\eta_s \,\mathcal{A}
\end{equation}
Any change in $\mathcal{M}$ which cannot be explained by the net mass flux through the ocean surface is converted into a mean change in sea level. Introducing the total density anomaly, $\mathcal{D}= \int_D d_a \,dv$, where $d_a= (\rho -\rho_o ) / \rho_o$ is the density anomaly used in \NEMO (cf. \S\ref{TRA_eos}), in \eqref{Eq_M_Bq} leads to a very simple form for the steric height:
\begin{equation} \label{Eq_steric_Bq}
\eta_s = - \frac{1}{\mathcal{A}} \mathcal{D}
\end{equation}

The above formulation of the steric height of a Boussinesq ocean requires four remarks. First, one might be tempted to define $\rho_o$ as the initial value of $\mathcal{M}/\mathcal{V}$, $i.e.$ set $\mathcal{D}_{t=0}=0$, so that the initial steric height is zero. We do not recommend that. Indeed, in this case $\rho_o$ depends on the initial state of the ocean.
Since $\rho_o$ has a direct effect on the dynamics of the ocean (it appears in the pressure gradient term of the momentum equation) it is definitely not a good idea when inter-comparing experiments. We rather recommend fixing $\rho_o$ once and for all to $1035\;kg\,m^{-3}$. This value is a sensible choice for the reference density used in a Boussinesq ocean climate model since, with the exception of only a small percentage of the ocean, density in the World Ocean varies by no more than 2$\%$ from this value (\cite{Gill1982}, page 47).

Second, we have assumed here that the total ocean surface, $\mathcal{A}$, does not change when the sea level is changing, as is the case in all global ocean GCMs (wetting and drying of grid points is not allowed).

Third, the discretisation of \eqref{Eq_steric_Bq} depends on the type of free surface which is considered. In the non-linear free surface case, $i.e.$ \key{vvl} defined, it is given by
\begin{equation} \label{Eq_discrete_steric_Bq}
\eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t} e_{2t} e_{3t} }{ \sum_{i,\,j,\,k} e_{1t} e_{2t} e_{3t} }
\end{equation}
whereas in the linear free surface case, the volume above the $z=0$ surface must be explicitly taken into account to better approximate the total ocean mass and thus the steric sea level:
\begin{equation} \label{Eq_discrete_steric_lin_Bq}
\eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t}e_{2t}e_{3t} + \sum_{i,\,j} d_a\; e_{1t}e_{2t} \eta }{\sum_{i,\,j,\,k} e_{1t}e_{2t}e_{3t} + \sum_{i,\,j} e_{1t}e_{2t} \eta }
\end{equation}

The fourth and last remark concerns the effective sea level and the presence of sea-ice. In the real ocean, sea ice (and snow above it) depresses the liquid seawater through its mass loading. This depression is a result of the mass of the sea ice/snow system acting on the liquid ocean. There is, however, no dynamical effect associated with these depressions in the liquid ocean sea level, so that there are no associated ocean currents. Hence, the dynamically relevant sea level is the effective sea level, $i.e.$ the sea level as if sea ice (and snow) were converted to liquid seawater \citep{Campin_al_OM08}. However, in the current version of \NEMO the sea-ice is levitating above the ocean without mass exchanges between ice and ocean. Therefore the model effective sea level is always given by $\eta + \eta_s$, whether or not there is sea ice present.

In the AR5 outputs, the thermosteric sea level is also requested. It is the steric sea level due to changes in ocean density arising just from changes in temperature. It is given by:
\begin{equation} \label{Eq_thermosteric_Bq}
\eta_s = - \frac{1}{\mathcal{A}} \int_D d_a(T,S_o,p_o) \,dv
\end{equation}
where $S_o$ and $p_o$ are the initial salinity and pressure, respectively.

Both steric and thermosteric sea level are computed in \mdl{diaar5}, which requires the \key{diaar5} CPP key to be defined.

% ================================================================
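As an illustration of \eqref{Eq_steric_Bq} and \eqref{Eq_thermosteric_Bq}, a minimal Python sketch of the global steric correction is given below. It is not the \mdl{diaar5} code: the array names, shapes and masking convention are assumptions made for the example.

\begin{verbatim}
import numpy as np

def steric_sea_level(d_a, e1t, e2t, e3t, tmask):
    """Illustrative global steric correction, eta_s = -(1/A) * sum(d_a * vol):
    d_a        3D density anomaly (rho - rho_0)/rho_0, shape (k, j, i)
    e1t, e2t   2D horizontal scale factors (m), shape (j, i)
    e3t        3D vertical scale factors (m)
    tmask      3D land-sea mask (1 = ocean, 0 = land)."""
    vol = e1t[None, :, :] * e2t[None, :, :] * e3t * tmask   # T-cell volumes
    area = np.sum(e1t * e2t * tmask[0])                     # ocean surface A
    return -np.sum(d_a * vol) / area

# Thermosteric variant: evaluate the density anomaly with the initial salinity
# S_o and pressure p_o, i.e. d_a(T, S_o, p_o), and reuse the same routine.
\end{verbatim}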