% Source: branches/2018/dev_r8864_nemo_v3_6_ZTILDE/DOC/TexFiles/Chapters/Chap_MISC.tex @ 9341
\documentclass[NEMO_book]{subfiles}
\begin{document}
% ================================================================
% Chapter: Miscellaneous Topics
% ================================================================
\chapter{Miscellaneous Topics}
\label{MISC}
\minitoc

\newpage
$\ $\newline    % force a new line

% ================================================================
% Representation of Unresolved Straits
% ================================================================
\section{Representation of Unresolved Straits}
\label{MISC_strait}

In climate modeling, it often occurs that crucial connections between water masses
are broken because the grid mesh is too coarse to resolve narrow straits. For example, coarse
grid spacing typically closes off the Mediterranean from the Atlantic at the Strait of
Gibraltar. In this case, it is important for climate models to include the effects of salty
water entering the Atlantic from the Mediterranean. Likewise, it is important for the
Mediterranean to replenish its supply of water from the Atlantic to balance the net
evaporation occurring over the Mediterranean region. This problem occurs even in
eddy-permitting simulations. For example, in ORCA 1/4\deg several straits of the Indonesian
archipelago (Ombai, Lombok...) are narrower than even a single ocean grid point.

We briefly describe here the three methods that can be used in \NEMO to handle
such improperly resolved straits. The first two consist of opening the strait by hand
while ensuring that the mass exchanges through the strait are not too large, by
either artificially reducing the surface of the strait grid-cells or locally increasing
the lateral friction. In the third one, the strait is closed but exchanges of mass,
heat and salt across the land are allowed.
Note that such modifications are so specific to a given configuration that no attempt
has been made to set them up in a generic way. However, examples of how
they can be set up are given in the ORCA 2\deg and 0.5\deg configurations. For example,
for details of the implementation in ORCA2, search for:
\texttt{ IF( cp\_cfg == "orca" .AND. jp\_cfg == 2 ) }

% -------------------------------------------------------------------------------------------------------------
%       Hand made geometry changes
% -------------------------------------------------------------------------------------------------------------
\subsection{Hand made geometry changes}
\label{MISC_strait_hand}

$\bullet$ reduced scale factor in the cross-strait direction to a value in better agreement
with the true mean width of the strait (Fig.~\ref{Fig_MISC_strait_hand}).
This technique is sometimes called "partially open face" or "partially closed cells".
The key issue here is to reduce only the faces of the $T$-cell ($i.e.$ change the value
of the horizontal scale factors at $u$- or $v$-points) but not the volume of the $T$-cell.
Indeed, reducing the volume of a strait $T$-cell can easily produce a numerical
instability at that grid point that would require a reduction of the model time step.
The changes associated with strait management are done in \mdl{domhgr},
just after the definition or reading of the horizontal scale factors.

$\bullet$ increase of the viscous boundary layer thickness by a local increase of the
fmask value at the coast (Fig.~\ref{Fig_MISC_strait_hand}). This is done in
\mdl{dommsk} together with the setting of the coastal value of fmask
(see Section \ref{LBC_coast}).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!tbp]     \begin{center}
\includegraphics[width=0.80\textwidth]{Fig_Gibraltar}
\includegraphics[width=0.80\textwidth]{Fig_Gibraltar2}
\caption{   \label{Fig_MISC_strait_hand}
Example of the Gibraltar strait defined in a $1^{\circ} \times 1^{\circ}$ mesh.
\textit{Top}: using partially open cells. The meridional scale factor at $v$-point
is reduced on both sides of the strait to account for the real width of the strait
(about 20 km). Note that the scale factors of the strait $T$-point remain unchanged.
\textit{Bottom}: using viscous boundary layers. The four fmask parameters
along the strait coastlines are set to a value larger than 4, $i.e.$ the "strong" no-slip
case (see Fig.~\ref{Fig_LBC_shlat}), creating a large viscous boundary layer
that allows a reduced transport through the strait.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% -------------------------------------------------------------------------------------------------------------
% Cross Land Advection
% -------------------------------------------------------------------------------------------------------------
\subsection{Cross Land Advection (\mdl{tracla})}
\label{MISC_strait_cla}
%--------------------------------------------namcla--------------------------------------------------------
\namdisplay{namcla}
%--------------------------------------------------------------------------------------------------------------

Options are defined through the \ngn{namcla} namelist variables.
This option is an obsolescent feature that will be removed in version 3.7 and later releases.

%The problem is resolved here by allowing the mixing of tracers and mass/volume between non-adjacent water columns at nominated regions within the model. Momentum is not mixed. The scheme conserves total tracer content, and total volume (the latter in $z*$- or $s*$-coordinate), and maintains compatibility between the tracer and mass/volume budgets.

% ================================================================
% Closed seas
% ================================================================
\section{Closed seas (\mdl{closea})}
\label{MISC_closea}

\colorbox{yellow}{Add here a short description of the way closed seas are managed}


% ================================================================
% Sub-Domain Functionality (\np{jpizoom}, \np{jpjzoom} namelist parameters)
% ================================================================
\section{Sub-Domain Functionality (\np{jpizoom}, \np{jpjzoom})}
\label{MISC_zoom}

The sub-domain functionality, also improperly called the zoom option
(improperly because it is not associated with a change in model resolution),
is a quite simple feature that allows a simulation over a sub-domain of an
already defined configuration ($i.e.$ without defining a new mesh, initial
state and forcings). This option can be useful for testing the user settings
of surface boundary conditions, or the initial ocean state of a huge ocean
model configuration, while having a small computer memory requirement.
It can also be used to easily test specific physics in a sub-domain (for example,
see \citet{Madec_al_JPO96} for a test of the coupling used in the global ocean
version of OPA between the sea-ice and ocean models over the Arctic or Antarctic
ocean, using a sub-domain). In the standard model, this option does not
include any specific treatment for the ocean boundaries of the sub-domain:
they are considered as artificial vertical walls. Nevertheless, it is quite easy
to add a restoring term toward a climatology in the vicinity of such boundaries
(see \S\ref{TRA_dmp}).

In order to easily define a sub-domain over which the computation can be
performed, the dimensions of all input arrays (ocean mesh, bathymetry,
forcing, initial state, ...) are defined as \np{jpidta}, \np{jpjdta} and \np{jpkdta}
(in the \ngn{namcfg} namelist), while the computational domain is defined through
\np{jpiglo}, \np{jpjglo} and \jp{jpk} (\ngn{namcfg} namelist). When running the
model over the whole domain, the user sets \np{jpiglo}=\np{jpidta}, \np{jpjglo}=\np{jpjdta}
and \jp{jpk}=\jp{jpkdta}. When running the model over a sub-domain, the user
has to provide the size of the sub-domain (\np{jpiglo}, \np{jpjglo}, \np{jpkglo})
and the indices of the south-western corner as \np{jpizoom} and \np{jpjzoom} in
the \ngn{namcfg} namelist (Fig.~\ref{Fig_LBC_zoom}).

Note that a third set of dimensions exists, \jp{jpi}, \jp{jpj} and \jp{jpk}, which is
actually used to perform the computation. It is set by default to \jp{jpi}=\np{jpiglo}
and \jp{jpj}=\np{jpjglo}, except for massively parallel computing where the
computational domain is laid out on local processor memories following a 2D
horizontal splitting. % (see {\S}IV.2-c) ref to the section to be updated

\subsection{Simple subsetting of input files via netCDF attributes}

The extended grids for use with the under-shelf ice cavities will result in redundant rows
around Antarctica if the ice cavities are not active. A simple mechanism for subsetting
input files associated with the extended domains has been implemented to avoid the need to
maintain different sets of input fields for use with or without active ice cavities. The
existing 'zoom' options are overly complex for this task and are marked for deletion anyway.
This alternative subsetting operates for the j-direction only and works by optionally
looking for, and using, a global file attribute (named \np{open\_ocean\_jstart}) to
determine the starting j-row for input. The use of this option is best explained with an
example: consider an ORCA1 configuration using the extended grid bathymetry and coordinate
files:
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
eORCA1_bathymetry_v2.nc
eORCA1_coordinates.nc
\end{verbatim}
\end{alltt}
\noindent These files define a horizontal domain of 362x332. Assuming the first row with
open ocean wet points in the non-isf bathymetry for this set is row 42 (Fortran indexing),
then the formally correct setting for \np{open\_ocean\_jstart} is 41. Using this value as the
first row to be read will result in a 362x292 domain, which is the same size as the original
ORCA1 domain. Thus the extended coordinates and bathymetry files can be used with all the
original input files for ORCA1 if the ice cavities are not active (\np{ln\_isfcav =
.false.}). Full instructions for achieving this are:

\noindent Add the new attribute to any input files requiring a j-row offset, i.e.:
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
ncatted  -a open_ocean_jstart,global,a,d,41 eORCA1_coordinates.nc
ncatted  -a open_ocean_jstart,global,a,d,41 eORCA1_bathymetry_v2.nc
\end{verbatim}
\end{alltt}

\noindent Add the logical switch to \ngn{namcfg} in the configuration namelist and set it true:
%--------------------------------------------namcfg--------------------------------------------------------
\namdisplay{namcfg_orca1}
%--------------------------------------------------------------------------------------------------------------

\noindent Note that the j-size of the global domain is the (extended j-size minus
\np{open\_ocean\_jstart} + 1) and this must currently match the size of all datasets other than
the bathymetry and coordinates. However, the option can be extended to any global, 2D
and 3D, netCDF input field by adding the
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
lrowattr=ln_use_jattr
\end{verbatim}
\end{alltt}
optional argument to the appropriate \np{iom\_get} call and the \np{open\_ocean\_jstart} attribute to the corresponding input files. It remains the user's responsibility to set the \np{jpjdta} and \np{jpjglo} values in the \np{namelist\_cfg} file according to their needs.

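For illustration, the row arithmetic of this subsetting can be sketched in Python with NumPy. This is a hypothetical stand-alone sketch, not the \NEMO iom code; only the j-row offset described above is reproduced:

```python
import numpy as np

def subset_jrows(field, open_ocean_jstart):
    """Keep rows from open_ocean_jstart (1-based, Fortran convention)
    onward of a (j, i) field.  Hypothetical sketch of the j-row offset
    described in the text, not NEMO's iom code."""
    return field[open_ocean_jstart - 1:, :]

# Extended eORCA1-like grid: 332 j-rows x 362 i-columns.
extended = np.zeros((332, 362))
subset = subset_jrows(extended, 41)
# 332 - 41 + 1 = 292 rows: the original ORCA1 j-size.
print(subset.shape)   # (292, 362)
```

With \np{open\_ocean\_jstart} = 41, the 362x332 extended grid reduces to the original 362x292 ORCA1 shape, as stated above.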
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_zoom}
\caption{   \label{Fig_LBC_zoom}
Position of a model domain compared to the data input domain when the zoom functionality is used.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>


% ================================================================
% Accelerating the Convergence
% ================================================================
\section{Accelerating the Convergence (\np{nn\_acc} = 1)}
\label{MISC_acc}
%--------------------------------------------namdom-------------------------------------------------------
\namdisplay{namdom}
%--------------------------------------------------------------------------------------------------------------

Searching for an equilibrium state with a global ocean model requires a very long
integration period (a few thousand years). Due to the size of
the time step required for numerical stability (less than a few hours),
this usually requires a large elapsed time. In order to overcome this problem,
\citet{Bryan1984} introduced a technique that is intended to accelerate
the spin up to equilibrium. It uses a larger time step in
the tracer evolution equations than in the momentum evolution
equations. It does not affect the equilibrium solution but modifies the
trajectory to reach it.

Options are defined through the \ngn{namdom} namelist variables.
The acceleration of convergence option is used when \np{nn\_acc}=1. In that case,
$\rdt=rn\_rdt$ is the time step of the dynamics while $\widetilde{\rdt}=rdttra$ is the
tracer time step. The former is set from the \np{rn\_rdt} namelist parameter while the latter
is computed using a hyperbolic tangent profile and the following namelist parameters:
\np{rn\_rdtmin}, \np{rn\_rdtmax} and \np{rn\_rdth}. These three parameters correspond
to the surface value, the deep ocean value, and the depth at which the transition occurs, respectively.
The set of prognostic equations to solve becomes:
\begin{equation} \label{Eq_acc}
\begin{split}
\frac{\partial \textbf{U}_h }{\partial t}
   &\equiv \frac{\textbf{U}_h ^{t+1}-\textbf{U}_h^{t-1} }{2\rdt} = \ldots \\
\frac{\partial T}{\partial t} &\equiv \frac{T^{t+1}-T^{t-1}}{2 \widetilde{\rdt}} = \ldots \\
\frac{\partial S}{\partial t} &\equiv \frac{S^{t+1} -S^{t-1}}{2 \widetilde{\rdt}} = \ldots \\
\end{split}
\end{equation}

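A generic hyperbolic tangent transition from \np{rn\_rdtmin} at the surface to \np{rn\_rdtmax} at depth, centred on \np{rn\_rdth}, can be sketched as follows. The exact analytical form coded in \NEMO is not reproduced here; in particular the scaling of the transition width is an assumption of this sketch:

```python
import numpy as np

def rdttra_profile(depth, rn_rdtmin, rn_rdtmax, rn_rdth):
    """Tracer time step increasing with depth through a hyperbolic tangent
    transition.  Illustrative only: the exact profile coded in NEMO's
    domain module may differ in its scaling of the transition width."""
    return rn_rdtmin + (rn_rdtmax - rn_rdtmin) * 0.5 * (
        1.0 + np.tanh((depth - rn_rdth) / (0.5 * rn_rdth)))

# Surface step of 1 h, deep step of 24 h, transition near 1000 m.
depths = np.array([0.0, 500.0, 1000.0, 2000.0, 5000.0])
dt = rdttra_profile(depths, rn_rdtmin=3600.0, rn_rdtmax=86400.0, rn_rdth=1000.0)
```

The resulting $\widetilde{\rdt}$ increases monotonically with depth, staying close to \np{rn\_rdtmin} near the surface and approaching \np{rn\_rdtmax} in the abyss.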
\citet{Bryan1984} has examined the consequences of this distorted physics.
Free waves have a slower phase speed, their meridional structure is slightly
modified, and the growth rate of baroclinically unstable waves is reduced
but with a wider range of instability. This technique is efficient for
searching for an equilibrium state in coarse resolution models. However its
application is not suitable for many oceanic problems: it cannot be used for
transient or time evolving problems (in particular, it is very questionable
to use this technique when there is a seasonal cycle in the forcing fields),
and it cannot be used in high-resolution models where baroclinically
unstable processes are important. Moreover, the vertical variation of
$\widetilde{\rdt}$ implies that the heat and salt contents are no longer
conserved due to the vertical coupling of the ocean levels through both
advection and diffusion. Therefore setting \np{rn\_rdtmin} = \np{rn\_rdtmax} is
usually the wiser choice.



% ================================================================
% Accuracy and Reproducibility
% ================================================================
\section{Accuracy and Reproducibility (\mdl{lib\_fortran})}
\label{MISC_fortran}

\subsection{Issues with the intrinsic SIGN function (\key{nosignedzero})}
\label{MISC_sign}

SIGN(A, B) is the \textsc{Fortran} intrinsic function that delivers the magnitude
of A with the sign of B. For example, SIGN(-3.0,2.0) has the value 3.0.
The problematic case is when the second argument is zero, because, on platforms
that support IEEE arithmetic, zero is actually a signed number.
There is a positive zero and a negative zero.

In \textsc{Fortran}~90, the processor was required always to deliver a positive result for SIGN(A, B)
if B was zero. In \textsc{Fortran}~95, however, the processor is allowed to do the correct thing
and deliver ABS(A) when B is a positive zero and -ABS(A) when B is a negative zero.
This change in the specification becomes apparent only when B is of type real, is a
negative real zero, and the processor is capable of distinguishing between positive
and negative zero. Then SIGN delivers a negative result where, under \textsc{Fortran}~90
rules, it used to return a positive result.
This change may be especially sensitive for the ice model, so we override the intrinsic
function with our own function simply performing:   \\
\verb?   IF( B >= 0.e0 ) THEN   ;   SIGN(A,B) = ABS(A)  ?    \\
\verb?   ELSE                   ;   SIGN(A,B) =-ABS(A)     ?  \\
\verb?   ENDIF    ? \\
This feature can be found in the \mdl{lib\_fortran} module and is effective when \key{nosignedzero}
is defined. We use a CPP key because overriding an intrinsic function can present
performance issues with some computers/compilers.
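The difference between the two conventions can be illustrated in Python, where math.copysign honours the sign of zero (the IEEE, \textsc{Fortran}~95-style behaviour), while a sketch of the replacement above always treats zero as non-negative:

```python
import math

def nemo_sign(a, b):
    """Sketch of the lib_fortran replacement for SIGN(A,B): any zero B,
    positive or negative, is treated as non-negative, reproducing the
    Fortran 90 convention (in Python, -0.0 >= 0.0 is True)."""
    return abs(a) if b >= 0.0 else -abs(a)

# IEEE / Fortran 95-style behaviour: the sign of -0.0 is honoured.
ieee = math.copysign(3.0, -0.0)   # -3.0
# Fortran 90-style behaviour restored by the replacement.
f90 = nemo_sign(3.0, -0.0)        # 3.0
```

The two results differ only for a negative real zero, exactly the case discussed above.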


\subsection{MPP reproducibility}
\label{MISC_glosum}

The numerical reproducibility of simulations on distributed memory parallel computers
is a critical issue. In particular, within NEMO the global summation of distributed arrays
is most susceptible to rounding errors, and their propagation and accumulation cause
uncertainty in the final simulation reproducibility on different numbers of processors.
To avoid this, based on the \citet{He_Ding_JSC01} review of different techniques,
we use a so-called self-compensated summation method. The idea is to estimate
the roundoff error, store it in a buffer, and then add it back in the next addition.

Suppose we need to calculate $b = a_1 + a_2 + a_3$. The following algorithm
allows us to split the sum in two ($sum_1 = a_{1} + a_{2}$ and $b = sum_2 = sum_1 + a_3$)
with exactly the same rounding errors as the sum performed all at once.
\begin{align*}
   sum_1 \ \  &= a_1 + a_2 \\
   error_1     &= a_2 + ( a_1 - sum_1 ) \\
   sum_2 \ \  &= sum_1 + a_3 + error_1 \\
   error_2     &= a_3 + error_1 + ( sum_1 - sum_2 ) \\
   b \qquad \ &= sum_2 \\
\end{align*}
This feature can be found in the \mdl{lib\_fortran} module and is effective when \key{mpp\_rep}
is defined. In that case, all basin-scale sums are performed through calls to the glob\_sum
function (a summation over the entire basin excluding duplicated rows and columns due to
cyclic or north fold boundary conditions, as well as overlapping MPP areas).
Note this implementation may be sensitive to the optimization level.

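The algorithm above maps directly onto a compensated summation loop; a minimal Python sketch (illustrative only, not the \mdl{lib\_fortran} implementation):

```python
def compensated_sum(values):
    """Self-compensated summation: the roundoff error of each addition is
    estimated, stored, and re-injected into the next addition.  A sketch of
    the idea only, not the lib_fortran glob_sum implementation."""
    total = 0.0
    error = 0.0
    for a in values:
        y = a + error                     # re-inject the previous roundoff
        new_total = total + y
        error = y + (total - new_total)   # roundoff lost in this addition
        total = new_total
    return total

# 1.0 followed by 1000 terms of 1e-16: each term alone is below the
# rounding threshold of 1.0, so naive summation loses them all.
values = [1.0] + [1e-16] * 1000
naive = sum(values)                # 1.0
comp = compensated_sum(values)     # close to 1.0 + 1e-13
```

Because the stored error buffer accumulates the lost low-order bits, the compensated result recovers the $10^{-13}$ contribution that naive left-to-right summation drops entirely.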
\subsection{MPP scalability}
\label{MISC_mppsca}

The default method of communicating values across the north-fold in distributed memory applications
(\key{mpp\_mpi}) uses an \textsc{MPI\_ALLGATHER} function to exchange values from each processing
region in the northern row with every other processing region in the northern row. This enables a
global-width array containing the top 4 rows to be collated on every northern row processor and then
folded with a simple algorithm. Although conceptually simple, this "All to All" communication will
hamper performance scalability for large numbers of northern row processors. From version 3.4
onwards an alternative method is available which only performs direct "Peer to Peer" communications
between each processor and its immediate "neighbours" across the fold line. This is achieved by
using the default \textsc{MPI\_ALLGATHER} method during initialisation to help identify the "active"
neighbours. Stored lists of these neighbours are then used in all subsequent north-fold exchanges to
restrict exchanges to those between associated regions. The collated global-width array for each
region is thus only partially filled, but is guaranteed to be set at all the locations actually
required by each individual region for the fold operation. This alternative method should give identical
results to the default \textsc{ALLGATHER} method and is recommended for large values of \np{jpni}.
The new method is activated by setting \np{ln\_nnogather} to be true ({\bf nammpp}). The
reproducibility of results using the two methods should be confirmed for each new, non-reference
configuration.


% ================================================================
% Model optimisation, Control Print and Benchmark
% ================================================================
\section{Model Optimisation, Control Print and Benchmark}
\label{MISC_opt}
%--------------------------------------------namctl-------------------------------------------------------
\namdisplay{namctl}
%--------------------------------------------------------------------------------------------------------------

 \gmcomment{why not make these bullets into subsections?}
Options are defined through the \ngn{namctl} namelist variables.

$\bullet$ Vector optimisation:

\key{vectopt\_loop} enables the internal loops to collapse. This is a very
efficient way to increase the length of vector calculations and thus
to speed up the model on vector computers.

% Add here also one word on the NPROMA technique, which has been found useless since compilers have made significant progress during the last decade.

% Add also one word on NEC specific optimisation (Novercheck option for example)

$\bullet$ Control print %: describe here 4 things:

1- \np{ln\_ctl}: compute and print the trends averaged over the interior domain
in all TRA, DYN, LDF and ZDF modules. This option is very helpful when
diagnosing the origin of an undesired change in model results.

2- also \np{ln\_ctl} but using the nictl and njctl namelist parameters to check
the source of differences between mono- and multi-processor runs.

3- \key{esopa} (to be renamed key\_nemo): another option for model
management. When defined, this key forces the activation of all options and
CPP keys. For example, all tracer and momentum advection schemes are called!
Therefore the model results have no physical meaning.
However, this option forces both the compiler and the model to run through
all the \textsc{Fortran} lines of the model. This allows the user to check for obvious
compilation or execution errors with all CPP options, and for errors in namelist options.

4- last digit comparison (\np{nn\_bit\_cmp}). In an MPP simulation, the computation of
a sum over the whole domain is performed as the summation over all processors of
each of their sums over their interior domains. This double sum never gives exactly
the same result as a single sum over the whole domain, due to truncation differences.
The "bit comparison" option has been introduced in order to be able to check that
mono-processor and multi-processor runs give exactly the same results.
%THIS is to be updated with the mpp_sum_glo  introduced in v3.3
% nn_bit_cmp  today only check that the nn_cla = 0 (no cross land advection)

$\bullet$ Benchmark (\np{nn\_bench}). This option defines a benchmark run based on
a GYRE configuration (see \S\ref{CFG_gyre}) in which the resolution remains the same
whatever the domain size. This allows a very large model domain to be used, just by
changing the domain size (\jp{jpiglo}, \jp{jpjglo}) and without adjusting either the time step
or the physical parameterisations.


% ================================================================
% Elliptic solvers (SOL)
% ================================================================
\section{Elliptic solvers (SOL)}
\label{MISC_sol}
%--------------------------------------------namsol-------------------------------------------------------
\namdisplay{namsol}
%--------------------------------------------------------------------------------------------------------------

When the filtered sea surface height option is used, the surface pressure gradient is
computed in \mdl{dynspg\_flt}. The force added in the momentum equation is solved implicitly.
It is thus the solution of an elliptic equation \eqref{Eq_PE_flt} for which two solvers are available:
a Successive-Over-Relaxation scheme (SOR) and a preconditioned conjugate gradient
scheme (PCG) \citep{Madec_al_OM88, Madec_PhD90}. The solver is selected through the
value of the \np{nn\_solv} \ngn{namsol} namelist variable.

The PCG is a very efficient method for solving elliptic equations on vector computers.
It is a fast and rather easy method to use, which are attractive features for a large
number of ocean situations (variable bottom topography, complex coastal geometry,
variable grid spacing, open or cyclic boundaries, etc.). It does not require
a search for an optimal parameter as in the SOR method. However, the SOR has
been retained because it is a linear solver, which is a very useful property when
using the adjoint model of \NEMO.

At each time step, the time derivative of the sea surface height at time step $t+1$
(or equivalently the divergence of the \textit{after} barotropic transport) that appears
in the filtering term is the solution of the elliptic equation obtained from the horizontal
divergence of the vertical summation of \eqref{Eq_PE_flt}.
Introducing the following coefficients:
\begin{equation}  \label{Eq_sol_matrix}
\begin{aligned}
&c_{i,j}^{NS}  &&= {2 \rdt }^2 \; \frac{H_v (i,j) \; e_{1v} (i,j)}{e_{2v}(i,j)}              \\
&c_{i,j}^{EW} &&= {2 \rdt }^2 \; \frac{H_u (i,j) \; e_{2u} (i,j)}{e_{1u}(i,j)}            \\
&b_{i,j} &&= \delta_i \left[ e_{2u}M_u \right] - \delta_j \left[ e_{1v}M_v \right]\ ,   \\
\end{aligned}
\end{equation}
the resulting five-point finite difference equation is given by:
\begin{equation}  \label{Eq_solmat}
\begin{split}
       c_{i+1,j}^{NS} D_{i+1,j}  + \;  c_{i,j+1}^{EW} D_{i,j+1}
  +   c_{i,j}    ^{NS} D_{i-1,j}   + \;  c_{i,j}    ^{EW} D_{i,j-1}                                          &    \\
  -    \left(c_{i+1,j}^{NS} + c_{i,j+1}^{EW} + c_{i,j}^{NS} + c_{i,j}^{EW} \right)   D_{i,j}  &=  b_{i,j}
\end{split}
\end{equation}
\eqref{Eq_solmat} is a linear symmetric system of equations. All the elements of
the corresponding matrix \textbf{A} vanish except those of five diagonals. With
the natural ordering of the grid points (i.e. from west to east and from
south to north), the structure of \textbf{A} is block-tridiagonal with
tridiagonal or diagonal blocks. \textbf{A} is a positive-definite symmetric
matrix of size $(jpi \cdot jpj)^2$, and \textbf{B}, the right hand side of
\eqref{Eq_solmat}, is a vector.

Note that in the linear free surface case, the depth that appears in \eqref{Eq_sol_matrix}
does not vary with time, and thus the matrix can be computed once and for all. In the
non-linear free surface case (\key{vvl} defined) the matrix has to be updated at each time step.

% -------------------------------------------------------------------------------------------------------------
%       Successive Over Relaxation
% -------------------------------------------------------------------------------------------------------------
\subsection{Successive Over Relaxation (\np{nn\_solv}=2, \mdl{solsor})}
\label{MISC_solsor}

Let us introduce the four cardinal coefficients:
\begin{align*}
a_{i,j}^S &= c_{i,j    }^{NS}/d_{i,j}     &\qquad  a_{i,j}^W &= c_{i,j}^{EW}/d_{i,j}       \\
a_{i,j}^E &= c_{i,j+1}^{EW}/d_{i,j}    &\qquad   a_{i,j}^N &= c_{i+1,j}^{NS}/d_{i,j}
\end{align*}
where $d_{i,j} = c_{i,j}^{NS}+ c_{i+1,j}^{NS} + c_{i,j}^{EW} + c_{i,j+1}^{EW}$
(i.e. the diagonal of the matrix). \eqref{Eq_solmat} can be rewritten as:
\begin{equation}  \label{Eq_solmat_p}
\begin{split}
a_{i,j}^{N}  D_{i+1,j} +\,a_{i,j}^{E}  D_{i,j+1} +\, a_{i,j}^{S}  D_{i-1,j} +\,a_{i,j}^{W} D_{i,j-1}  -  D_{i,j} = \tilde{b}_{i,j}
\end{split}
\end{equation}
with $\tilde b_{i,j} = b_{i,j}/d_{i,j}$. \eqref{Eq_solmat_p} is the equation actually solved
with the SOR method, which is iterative. Its algorithm can be
summarised as follows (see \citet{Haltiner1980} for further discussion):

initialisation (evaluate a first guess from previous time step computations)
\begin{equation}
D_{i,j}^0 = 2 \, D_{i,j}^t - D_{i,j}^{t-1}
\end{equation}
iteration $n$, from $n=0$ until convergence, do:
\begin{equation} \label{Eq_sor_algo}
\begin{split}
R_{i,j}^n  = &a_{i,j}^{N} D_{i+1,j}^n       +\,a_{i,j}^{E}  D_{i,j+1} ^n
         +\, a_{i,j}^{S}  D_{i-1,j} ^{n+1}+\,a_{i,j}^{W} D_{i,j-1} ^{n+1}
                 -  D_{i,j}^n - \tilde{b}_{i,j}                                           \\
D_{i,j} ^{n+1}  = &D_{i,j} ^{n}   + \omega \;R_{i,j}^n
\end{split}
\end{equation}
where $\omega$ satisfies $1\leq \omega \leq 2$. An optimal value exists for
$\omega$ which significantly accelerates the convergence, but it has to be
adjusted empirically for each model domain (except for a uniform grid where an
analytical expression for $\omega$ can be found \citep{Richtmyer1967}).
The value of $\omega$ is set using \np{rn\_sor}, a \textbf{namelist} parameter.
The convergence test is of the form:
\begin{equation}
\delta = \frac{\sum\limits_{i,j}{R_{i,j}^n}{R_{i,j}^n}}
                    {\sum\limits_{i,j}{ \tilde{b}_{i,j}^n}{\tilde{b}_{i,j}^n}} \leq \epsilon
\end{equation}
where $\epsilon$ is the absolute precision that is required. It is recommended
that a value smaller than or equal to $10^{-6}$ is used for $\epsilon$, since larger
values may lead to numerically induced basin-scale barotropic oscillations.
The precision is specified by setting \np{rn\_eps} (\textbf{namelist} parameter).
In addition, two other tests are used to halt the iterative algorithm. They involve
the number of iterations and the modulus of the right hand side. If the former
exceeds a specified value, \np{nn\_max} (\textbf{namelist} parameter),
or the latter is greater than $10^{15}$, the whole model computation is stopped
and the last computed time step fields are saved in an abort.nc NetCDF file.
In both cases, this usually indicates that there is something wrong in the model
configuration (an error in the mesh, the initial state, the input forcing,
or the magnitude of the time step or of the mixing coefficients). A typical value of
\np{nn\_max} is a few hundred when $\epsilon = 10^{-6}$, increasing to a few
thousand when $\epsilon = 10^{-12}$.
The vectorisation of the SOR algorithm is not straightforward: the scheme
contains two linear recurrences, on $i$ and on $j$, which inhibit vectorisation.
\eqref{Eq_sor_algo} can be rewritten as:
\begin{equation} 
\begin{split}
R_{i,j}^n
= &a_{i,j}^{N}  D_{i,j+1}^n +\,a_{i,j}^{E}  D_{i+1,j} ^n
 +\,a_{i,j}^{S}  D_{i,j-1} ^{n}+\,a_{i,j}^{W} D_{i-1,j} ^{n} -  D_{i,j}^n - \tilde{b}_{i,j}      \\
R_{i,j}^n = &R_{i,j}^n - \omega \;a_{i,j}^{S}\; R_{i,j-1}^n                                             \\
R_{i,j}^n = &R_{i,j}^n - \omega \;a_{i,j}^{W}\; R_{i-1,j}^n
\end{split}
\end{equation}
This technique slightly increases the number of iterations required to reach
convergence, but this is largely compensated by the gain obtained from the
suppression of the recurrences.

Another technique has been chosen, the so-called red-black SOR. It consists of solving successively
\eqref{Eq_sor_algo} for odd and even grid points. It also slightly reduces the convergence rate
but allows vectorisation. In addition, and this is the reason why it has been chosen, it is able to handle the north fold boundary condition used in the ORCA configurations ($i.e.$ the tri-polar global ocean mesh).

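As an illustration, a red-black sweep for a simple five-point Laplacian can be sketched as follows (an illustrative Python sketch on a unit square, not NEMO's operator, coefficients, or north-fold treatment; the function name and arguments are hypothetical):

```python
# Illustrative red-black SOR for the 2-D five-point Laplacian: within each
# iteration all "red" points (i+j even) are updated first, then all "black"
# points (i+j odd), so each half-sweep is free of recurrences and vectorises.
def redblack_sor(b, n, omega=1.5, eps=1e-10, max_iter=5000):
    """Solve u_xx + u_yy = b on an n x n interior grid, u = 0 on the boundary."""
    u = [[0.0] * (n + 2) for _ in range(n + 2)]
    h2 = 1.0 / (n + 1) ** 2                        # grid spacing squared
    for _ in range(max_iter):
        delta = 0.0
        for colour in (0, 1):                      # red sweep, then black sweep
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    if (i + j) % 2 != colour:
                        continue
                    r = (u[i - 1][j] + u[i + 1][j] + u[i][j - 1]
                         + u[i][j + 1] - 4.0 * u[i][j] - h2 * b[i - 1][j - 1])
                    u[i][j] += omega * r / 4.0     # over-relaxed update
                    delta = max(delta, abs(r))
        if delta <= eps:                           # residual small enough
            break
    return u
```

Note that within one colour no updated point is a neighbour of another point of the same colour, which is what removes the recurrences.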
The SOR method is very flexible and can be used under a wide range of conditions,
including irregular boundaries, interior boundary points, etc. Proofs of convergence
may be found in the standard numerical methods texts for partial differential equations.

% -------------------------------------------------------------------------------------------------------------
%       Preconditioned Conjugate Gradient
% -------------------------------------------------------------------------------------------------------------
\subsection{Preconditioned Conjugate Gradient  (\np{nn\_solv}=1, \mdl{solpcg}) }
\label{MISC_solpcg}

\textbf{A} is a symmetric positive definite matrix, thus solving the linear
system \eqref{Eq_solmat} is equivalent to the minimisation of a quadratic
functional:
\begin{equation*}
\textbf{Ax} = \textbf{b} \leftrightarrow \textbf{x} =\text{inf}_{\textbf{y}} \,\phi (\textbf{y})
\quad , \qquad
\phi (\textbf{y}) = \frac{1}{2} \langle \textbf{Ay},\textbf{y}\rangle - \langle \textbf{b},\textbf{y} \rangle 
\end{equation*}
where $\langle , \rangle$ is the canonical dot product. The idea of the
conjugate gradient method is to search for the solution in the following
iterative way: assuming that $\textbf{x}^n$ has been obtained, $\textbf{x}^{n+1}$ 
is found from $\textbf{x}^{n+1}=\textbf{x}^n+\alpha^n\,\textbf{d}^n$, which satisfies:
\begin{equation*}
{\textbf{x}}^{n+1}=\text{inf} _{{\textbf{y}}={\textbf{x}}^n+\alpha^n \,{\textbf{d}}^n} \,\phi ({\textbf{y}})\;\;\Leftrightarrow \;\;\frac{d\phi }{d\alpha}=0
\end{equation*}
and expressing $\phi (\textbf{y})$ as a function of $\alpha$, we obtain the
value that minimises the functional:
\begin{equation*}
\alpha ^n = \langle{ \textbf{r}^n , \textbf{r}^n} \rangle  / \langle {\textbf{A d}^n, \textbf{d}^n} \rangle
\end{equation*}
where $\textbf{r}^n = \textbf{b}-\textbf{A x}^n = \textbf{A} (\textbf{x}-\textbf{x}^n)$ 
is the error at rank $n$. The descent vector $\textbf{d}^n$ is chosen to be dependent
on the error: $\textbf{d}^n = \textbf{r}^n + \beta^n \,\textbf{d}^{n-1}$. $\beta ^n$ 
is chosen such that the descent vectors form an orthogonal basis for the dot
product linked to \textbf{A}. Expressing the condition
$\langle \textbf{A d}^n, \textbf{d}^{n-1} \rangle = 0$, the value of $\beta ^n$ is found:
$\beta ^n = \langle{ \textbf{r}^n , \textbf{r}^n} \rangle  / \langle {\textbf{r}^{n-1}, \textbf{r}^{n-1}} \rangle$.
As a result, the errors $\textbf{r}^n$ form an orthogonal
basis for the canonical dot product while the descent vectors $\textbf{d}^n$ form
an orthogonal basis for the dot product linked to \textbf{A}. The resulting
algorithm is thus the following:

initialisation:
\begin{equation*} 
\begin{split}
\textbf{x}^0 &= D_{i,j}^0   = 2 D_{i,j}^t - D_{i,j}^{t-1}       \quad, \text{the initial guess }     \\
\textbf{r}^0 &= \textbf{d}^0 = \textbf{b} - \textbf{A x}^0       \\
\gamma_0 &= \langle{ \textbf{r}^0 , \textbf{r}^0} \rangle
\end{split}
\end{equation*}

iteration $n$, from $n=0$ until convergence, do:
\begin{equation} 
\begin{split}
\textbf{z}^n& = \textbf{A d}^n \\
\alpha_n &= \gamma_n /  \langle{ \textbf{z}^n , \textbf{d}^n} \rangle \\
\textbf{x}^{n+1} &= \textbf{x}^n + \alpha_n \,\textbf{d}^n \\
\textbf{r}^{n+1} &= \textbf{r}^n - \alpha_n \,\textbf{z}^n \\
\gamma_{n+1} &= \langle{ \textbf{r}^{n+1} , \textbf{r}^{n+1}} \rangle \\
\beta_{n+1} &= \gamma_{n+1}/\gamma_{n}  \\
\textbf{d}^{n+1} &= \textbf{r}^{n+1} + \beta_{n+1}\; \textbf{d}^{n}
\end{split}
\end{equation}

The convergence test is:
\begin{equation}
\delta = \gamma_{n}\; / \langle{ \textbf{b} , \textbf{b}} \rangle \leq \epsilon
\end{equation}
where $\epsilon$ is the absolute precision that is required. As for the SOR algorithm,
the whole model computation is stopped when the number of iterations exceeds \np{nn\_max} or
the modulus of the right hand side of the convergence equation exceeds a
specified value (see \S\ref{MISC_solsor} for a further discussion). The required
precision and the maximum number of iterations allowed are specified by setting
\np{rn\_eps} and \np{nn\_max} (\textbf{namelist} parameters).
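
The iteration and convergence test above can be transcribed directly, here for a small dense symmetric positive definite system (an illustrative Python sketch, not NEMO code; \texttt{dot} and \texttt{matvec} stand in for the model's dot products and operator):

```python
# Direct transcription of the conjugate gradient recurrences listed above,
# including the convergence test delta = gamma_n / <b,b> <= eps.
def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def matvec(A, x):
    return [dot(row, x) for row in A]

def conjugate_gradient(A, b, x0, eps=1e-12, nn_max=100):
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]   # r0 = b - A x0
    d = list(r)                                          # d0 = r0
    gamma = dot(r, r)                                    # gamma_0
    bb = dot(b, b)
    for _ in range(nn_max):
        if gamma / bb <= eps:                            # convergence test
            break
        z = matvec(A, d)                                 # z^n = A d^n
        alpha = gamma / dot(z, d)
        x = [xi + alpha * di for xi, di in zip(x, d)]    # x^{n+1}
        r = [ri - alpha * zi for ri, zi in zip(r, z)]    # r^{n+1}
        gamma_new = dot(r, r)                            # gamma_{n+1}
        beta = gamma_new / gamma
        d = [ri + beta * di for ri, di in zip(r, d)]     # d^{n+1}
        gamma = gamma_new
    return x
```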

It can be demonstrated that the above algorithm is optimal, provides the exact
solution in a number of iterations equal to the size of the matrix, and that
the convergence rate is faster as the matrix is closer to the identity matrix,
$i.e.$ its eigenvalues are closer to 1. Therefore, it is more efficient to solve
a better conditioned system which has the same solution. For that purpose,
we introduce a preconditioning matrix \textbf{Q} which is an approximation
of \textbf{A} but much easier to invert than \textbf{A}, and solve the system:
\begin{equation} \label{Eq_pmat}
\textbf{Q}^{-1} \textbf{A x} = \textbf{Q}^{-1} \textbf{b}
\end{equation}

The same algorithm can be used to solve \eqref{Eq_pmat} if, instead of the
canonical dot product, the following one is used:
${\langle{ \textbf{a} , \textbf{b}} \rangle}_Q = \langle{ \textbf{a} , \textbf{Q b}} \rangle$, and
if $\textbf{\~{b}} = \textbf{Q}^{-1}\;\textbf{b}$ and $\textbf{\~{A}} = \textbf{Q}^{-1}\;\textbf{A}$ 
are substituted for \textbf{b} and \textbf{A} \citep{Madec_al_OM88}.
In \NEMO, \textbf{Q} is chosen as the diagonal of \textbf{A}, $i.e.$ the simplest form for
\textbf{Q} so that it can be easily inverted. In this case, the discrete formulation of
\eqref{Eq_pmat} is in fact given by \eqref{Eq_solmat_p} and thus the matrix and
right hand side are computed independently of the solver used.
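
A minimal sketch of the same iteration with the diagonal (Jacobi) choice $\textbf{Q} = \mathrm{diag}(\textbf{A})$, written in the usual preconditioned-CG form that is algebraically equivalent to applying the algorithm above with the modified dot product (illustrative Python, not NEMO code; the function name is hypothetical):

```python
# Jacobi-preconditioned CG: Q = diag(A) is trivially inverted, and the
# recurrences are the CG recurrences with s = Q^-1 r replacing r in the
# descent direction and in gamma = <r, s>.
def jacobi_pcg(A, b, x0, eps=1e-12, nn_max=100):
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    def matvec(M, x):
        return [dot(row, x) for row in M]
    qinv = [1.0 / A[i][i] for i in range(len(A))]        # Q^-1 from the diagonal
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]   # r0 = b - A x0
    s = [qi * ri for qi, ri in zip(qinv, r)]             # s0 = Q^-1 r0
    d = list(s)
    gamma = dot(r, s)                                    # preconditioned gamma_0
    bb = dot(b, b)
    for _ in range(nn_max):
        if dot(r, r) / bb <= eps:                        # convergence test
            break
        z = matvec(A, d)
        alpha = gamma / dot(z, d)
        x = [xi + alpha * di for xi, di in zip(x, d)]
        r = [ri - alpha * zi for ri, zi in zip(r, z)]
        s = [qi * ri for qi, ri in zip(qinv, r)]         # apply Q^-1
        gamma_new = dot(r, s)
        beta = gamma_new / gamma
        d = [si + beta * di for si, di in zip(s, d)]
        gamma = gamma_new
    return x
```

For the barotropic system the payoff of this simple choice is that the preconditioned matrix and right hand side can be formed once, independently of the solver.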

% ================================================================

\end{document}