% source:trunk/DOC/TexFiles/Chapters/Chap_MISC.tex@3294
% Last change: r3294, checked in by rblod (merge of 3.4beta into the trunk)
% ================================================================
% Chapter --- Miscellaneous Topics
% ================================================================
\chapter{Miscellaneous Topics}
\label{MISC}
\minitoc

\newpage
$\ $\newline    % force a new line

% ================================================================
% Representation of Unresolved Straits
% ================================================================
\section{Representation of Unresolved Straits}
\label{MISC_strait}

In climate modeling, it often occurs that a crucial connection between water masses
is broken because the grid mesh is too coarse to resolve narrow straits. For example, coarse
grid spacing typically closes off the Mediterranean from the Atlantic at the Strait of
Gibraltar. In this case, it is important for climate models to include the effects of salty
water entering the Atlantic from the Mediterranean. Likewise, it is important for the
Mediterranean to replenish its supply of water from the Atlantic to balance the net
evaporation occurring over the Mediterranean region. This problem occurs even in
eddy-permitting simulations. For example, in ORCA 1/4\deg several straits of the Indonesian
archipelago (Ombai, Lombok...) are narrower than even a single ocean grid-point.

We briefly describe here the three methods that can be used in \NEMO to handle
such improperly resolved straits. The first two consist of opening the strait by hand
while ensuring that the mass exchanges through the strait are not too large, by
either artificially reducing the surface of the strait grid-cells or locally increasing
the lateral friction. In the third, the strait is closed, but exchanges of mass,
heat and salt across the land are allowed.
Note that such modifications are so specific to a given configuration that no attempt
has been made to set them up in a generic way. However, examples of how
they can be set up are given in the ORCA 2\deg and 0.5\deg configurations (search for
\key{orca\_r2} or \key{orca\_r05} in the code).

% -------------------------------------------------------------------------------------------------------------
%       Hand made geometry changes
% -------------------------------------------------------------------------------------------------------------
\subsection{Hand made geometry changes}
\label{MISC_strait_hand}

$\bullet$ reduced scale factor in the cross-strait direction, to a value in better agreement
with the true mean width of the strait (Fig.~\ref{Fig_MISC_strait_hand}).
This technique is sometimes called "partially open face" or "partially closed cells".
The key issue here is to reduce only the faces of the $T$-cell ($i.e.$ change the value
of the horizontal scale factors at $u$- or $v$-points) but not the volume of the $T$-cell.
Indeed, reducing the volume of a strait $T$-cell can easily produce a numerical
instability at that grid point that would require a reduction of the model time step.
The changes associated with strait management are done in \mdl{domhgr},
just after the definition or reading of the horizontal scale factors.

$\bullet$ increase of the viscous boundary layer thickness by a local increase of the
fmask value at the coast (Fig.~\ref{Fig_MISC_strait_hand}). This is done in
\mdl{dommsk} together with the setting of the coastal value of fmask
(see Section \ref{LBC_coast}).

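The scale-factor reduction can be sketched in a few lines (an illustration in Python rather than the \textsc{Fortran} of \mdl{domhgr}; the grid size, strait indices and widths below are invented for the example):

```python
import numpy as np

# Sketch of the "partially open face" idea: shrink the cross-strait scale
# factor e2v at (hypothetical) strait points so that the face width matches
# the real strait width (about 20 km for Gibraltar), while leaving the
# T-cell scale factors, and hence the T-cell volume, untouched.

def open_strait(e2v, points, true_width_m):
    """Set e2v to the true strait width at the given (j, i) points."""
    e2v = e2v.copy()
    for j, i in points:
        e2v[j, i] = min(e2v[j, i], true_width_m)  # only ever narrow the face
    return e2v

# a 1-degree mesh: v-point faces are ~111 km wide everywhere
e2v = np.full((5, 5), 111.0e3)
e2v_new = open_strait(e2v, points=[(2, 1), (2, 2)], true_width_m=20.0e3)
```

Only the $u$- or $v$-point scale factors are modified; the cell volume (a $T$-point quantity) is left alone, which is exactly the stability constraint stated above.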
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!tbp]     \begin{center}
\includegraphics[width=0.80\textwidth]{./TexFiles/Figures/Fig_Gibraltar.pdf}
\includegraphics[width=0.80\textwidth]{./TexFiles/Figures/Fig_Gibraltar2.pdf}
\caption{   \label{Fig_MISC_strait_hand}
Example of the Gibraltar strait defined in a $1\deg \times 1\deg$ mesh.
\textit{Top}: using partially open cells. The meridional scale factor at $v$-point
is reduced on both sides of the strait to account for the real width of the strait
(about 20 km). Note that the scale factors of the strait $T$-point remain unchanged.
\textit{Bottom}: using viscous boundary layers. The four fmask parameters
along the strait coastlines are set to a value larger than 4, $i.e.$ "strong" no-slip
case (see Fig.~\ref{Fig_LBC_shlat}) creating a large viscous boundary layer
that allows a reduced transport through the strait.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% -------------------------------------------------------------------------------------------------------------
%       Cross Land Advection
% -------------------------------------------------------------------------------------------------------------
\subsection{Cross Land Advection (\mdl{tracla})}
\label{MISC_strait_cla}
%--------------------------------------------namcla--------------------------------------------------------
\namdisplay{namcla}
%--------------------------------------------------------------------------------------------------------------

\colorbox{yellow}{Add a short description of the CLA scheme here or in the lateral boundary condition chapter?}

%The problem is resolved here by allowing the mixing of tracers and mass/volume between non-adjacent water columns at nominated regions within the model. Momentum is not mixed. The scheme conserves total tracer content, and total volume (the latter in $z*$- or $s*$-coordinate), and maintains compatibility between the tracer and mass/volume budgets.

% ================================================================
% Closed seas
% ================================================================
\section{Closed seas (\mdl{closea})}
\label{MISC_closea}

\colorbox{yellow}{Add here a short description of the way closed seas are managed}

% ================================================================
% Sub-Domain Functionality (\textit{nizoom, njzoom}, namelist parameters)
% ================================================================
\section{Sub-Domain Functionality (\jp{jpizoom}, \jp{jpjzoom})}
\label{MISC_zoom}

The sub-domain functionality, also improperly called the zoom option
(improperly because it is not associated with a change in model resolution),
is a quite simple function that allows a simulation over a sub-domain of an
already defined configuration ($i.e.$ without defining a new mesh, initial
state and forcings). This option can be useful for testing the user settings
of surface boundary conditions, or the initial ocean state of a huge ocean
model configuration, while having a small computer memory requirement.
It can also be used to easily test specific physics in a sub-domain (for example,
see \citet{Madec_al_JPO96} for a test, using a sub-domain, of the coupling between
the sea-ice and ocean models used in the global ocean version of OPA over the
Arctic or Antarctic ocean). In the standard model, this option does not
include any specific treatment for the ocean boundaries of the sub-domain:
they are considered as artificial vertical walls. Nevertheless, it is quite easy
to add a restoring term toward a climatology in the vicinity of such boundaries
(see \S\ref{TRA_dmp}).

In order to easily define a sub-domain over which the computation can be
performed, the dimensions of all input arrays (ocean mesh, bathymetry,
forcing, initial state, ...) are defined as \jp{jpidta}, \jp{jpjdta} and \jp{jpkdta}
(\mdl{par\_oce} module), while the computational domain is defined through
\jp{jpiglo}, \jp{jpjglo} and \jp{jpk} (\mdl{par\_oce} module). When running the
model over the whole domain, the user sets \jp{jpiglo}=\jp{jpidta}, \jp{jpjglo}=\jp{jpjdta}
and \jp{jpk}=\jp{jpkdta}. When running the model over a sub-domain, the user
has to provide the size of the sub-domain (\jp{jpiglo}, \jp{jpjglo}, \jp{jpk})
and the indices of the south-western corner as \jp{jpizoom} and \jp{jpjzoom} in
the \mdl{par\_oce} module (Fig.~\ref{Fig_LBC_zoom}).

Note that a third set of dimensions exists, \jp{jpi}, \jp{jpj} and \jp{jpk}, which is
actually used to perform the computation. It is set by default to \jp{jpi}=\jp{jpiglo}
and \jp{jpj}=\jp{jpjglo}, except for massively parallel computing where the
computational domain is laid out on local processor memories following a 2D
horizontal splitting. % (see {\S}IV.2-c) ref to the section to be updated

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]    \begin{center}
\includegraphics[width=0.90\textwidth]{./TexFiles/Figures/Fig_LBC_zoom.pdf}
\caption{   \label{Fig_LBC_zoom}
Position of a model domain compared to the data input domain when the zoom functionality is used.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

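The index convention described above can be sketched as follows (an illustration in Python, not NEMO code; the array sizes and zoom indices are invented for the example):

```python
import numpy as np

# Sketch of the zoom convention: the computational domain of size
# (jpiglo, jpjglo) is carved out of the data domain (jpidta, jpjdta),
# with (jpizoom, jpjzoom) the 1-based Fortran indices of the
# south-western corner of the sub-domain.

def extract_zoom(field_dta, jpizoom, jpjzoom, jpiglo, jpjglo):
    """Return the sub-domain of a (jpjdta, jpidta)-shaped input field."""
    j0, i0 = jpjzoom - 1, jpizoom - 1        # convert to 0-based indices
    return field_dta[j0:j0 + jpjglo, i0:i0 + jpiglo]

jpidta, jpjdta = 20, 15                      # data (input file) domain
field = np.arange(jpjdta * jpidta, dtype=float).reshape(jpjdta, jpidta)
sub = extract_zoom(field, jpizoom=5, jpjzoom=3, jpiglo=8, jpjglo=6)
# sub.shape == (6, 8) and sub[0, 0] == field[2, 4]
```

All input arrays (mesh, bathymetry, forcing, initial state) would be cut the same way, which is why only the corner indices and the sub-domain size need to be supplied.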
% ================================================================
% Accelerating the Convergence
% ================================================================
\section{Accelerating the Convergence (\np{nn\_acc} = 1)}
\label{MISC_acc}
%--------------------------------------------namdom-------------------------------------------------------
\namdisplay{namdom}
%--------------------------------------------------------------------------------------------------------------

Searching for an equilibrium state with a global ocean model requires a very long time
integration period (a few thousand years for a global model). Due to the size of
the time step required for numerical stability (less than a few hours),
this usually requires a large elapsed time. In order to overcome this problem,
\citet{Bryan1984} introduced a technique that is intended to accelerate
the spin up to equilibrium. It uses a larger time step in
the tracer evolution equations than in the momentum evolution
equations. It does not affect the equilibrium solution but modifies the
trajectory to reach it.

The acceleration of convergence option is used when \np{nn\_acc}=1. In that case,
$\rdt=rn\_rdt$ is the time step of dynamics while $\widetilde{\rdt}=rdttra$ is the
tracer time-step. The former is set from the \np{rn\_rdt} namelist parameter while the latter
is computed using a hyperbolic tangent profile and the following namelist parameters:
\np{rn\_rdtmin}, \np{rn\_rdtmax} and \np{rn\_rdth}. These three parameters correspond
to the surface value, the deep ocean value, and the depth at which the transition occurs, respectively.
The set of prognostic equations to solve becomes:
\begin{equation} \label{Eq_acc}
\begin{split}
\frac{\partial \textbf{U}_h }{\partial t}
   &\equiv \frac{\textbf{U}_h ^{t+1}-\textbf{U}_h^{t-1} }{2\rdt} = \ldots \\
\frac{\partial T}{\partial t} &\equiv \frac{T^{t+1}-T^{t-1}}{2 \widetilde{\rdt}} = \ldots \\
\frac{\partial S}{\partial t} &\equiv \frac{S^{t+1} -S^{t-1}}{2 \widetilde{\rdt}} = \ldots \\
\end{split}
\end{equation}

\citet{Bryan1984} has examined the consequences of this distorted physics.
Free waves have a slower phase speed, their meridional structure is slightly
modified, and the growth rate of baroclinically unstable waves is reduced
but with a wider range of instability. This technique is efficient for
searching for an equilibrium state in coarse resolution models. However its
application is not suitable for many oceanic problems: it cannot be used for
transient or time evolving problems (in particular, it is very questionable
to use this technique when there is a seasonal cycle in the forcing fields),
and it cannot be used in high-resolution models where baroclinically
unstable processes are important. Moreover, the vertical variation of
$\widetilde{\rdt}$ implies that the heat and salt contents are no longer
conserved due to the vertical coupling of the ocean levels through both
advection and diffusion. Therefore, setting \np{rn\_rdtmin} = \np{rn\_rdtmax}
is usually the wiser choice.
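To illustrate the kind of depth dependence involved, the sketch below uses *a* hyperbolic tangent going from \np{rn\_rdtmin} at the surface to \np{rn\_rdtmax} at depth, centred on \np{rn\_rdth}; the exact analytic form used in NEMO is not reproduced in this section, so the transition width here is an assumption of the example:

```python
import math

# Illustrative tanh profile only (not the exact NEMO formula): the tracer
# time step grows from rn_rdtmin at the surface to rn_rdtmax at depth,
# with the transition centred on rn_rdth.  The transition width is an
# assumed parameter of this sketch.

def rdttra(depth, rn_rdtmin, rn_rdtmax, rn_rdth, width=None):
    """Tracer time step (s) as a function of depth (m)."""
    width = width if width is not None else rn_rdth   # assumed scale
    s = 0.5 * (1.0 + math.tanh((depth - rn_rdth) / width))
    return rn_rdtmin + (rn_rdtmax - rn_rdtmin) * s
```

With \np{rn\_rdtmin} = \np{rn\_rdtmax} the profile degenerates to a depth-independent tracer time step, recovering the conservative choice recommended above.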

% ================================================================
% Accuracy and Reproducibility
% ================================================================
\section{Accuracy and Reproducibility (\mdl{lib\_fortran})}
\label{MISC_fortran}

\subsection{Issues with intrinsic SIGN function (\key{nosignedzero})}
\label{MISC_sign}

SIGN(A, B) is the \textsc{Fortran} intrinsic function that delivers the magnitude
of A with the sign of B. For example, SIGN(-3.0,2.0) has the value 3.0.
The problematic case is when the second argument is zero, because, on platforms
that support IEEE arithmetic, zero is actually a signed number.
There is a positive zero and a negative zero.

In \textsc{Fortran}~90, the processor was always required to deliver a positive result for SIGN(A, B)
if B was zero. In \textsc{Fortran}~95, however, the processor is allowed to do the correct thing
and deliver ABS(A) when B is a positive zero and -ABS(A) when B is a negative zero.
This change in the specification becomes apparent only when B is a real negative zero
and the processor is capable of distinguishing between positive and negative zero:
SIGN then delivers a negative result where, under \textsc{Fortran}~90
rules, it used to return a positive result.
This change may be especially sensitive for the ice model, so we override the intrinsic
function with our own function, which simply performs:   \\
\verb?   IF( B >= 0.e0 ) THEN   ;   SIGN(A,B) = ABS(A)  ?    \\
\verb?   ELSE                   ;   SIGN(A,B) =-ABS(A)     ?  \\
\verb?   ENDIF    ? \\
This feature can be found in the \mdl{lib\_fortran} module and is effective when \key{nosignedzero}
is defined. We use a CPP key because overriding an intrinsic function can present
performance issues with some computers/compilers.
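The same distinction can be demonstrated outside \textsc{Fortran}. The Python sketch below contrasts a sign-honouring copy (the \textsc{Fortran}~95 behaviour) with the replacement rule above, exploiting the fact that $-0.0 \geq 0.0$ compares true:

```python
import math

# IEEE-754 floats carry a signed zero, so a Fortran-95 SIGN may return
# -ABS(A) when B is a negative zero.  math.copysign honours the sign of
# zero, while sign_nemo reproduces the replacement rule above: any B >= 0
# (including -0.0, which compares equal to 0.0) yields +ABS(A).

def sign_nemo(a, b):
    """SIGN(A,B) with a positive result for any zero B."""
    return abs(a) if b >= 0.0 else -abs(a)

print(math.copysign(3.0, -0.0))   # -3.0 : honours the sign of the zero
print(sign_nemo(3.0, -0.0))       #  3.0 : -0.0 >= 0.0 is True
```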


\subsection{MPP reproducibility}
\label{MISC_glosum}

The numerical reproducibility of simulations on distributed memory parallel computers
is a critical issue. In particular, within NEMO the global summation of distributed arrays
is most susceptible to rounding errors, and their propagation and accumulation cause
uncertainty in the reproducibility of the final simulation on different numbers of processors.
To avoid this, based on the \citet{He_Ding_JSC01} review of different techniques,
we use a so-called self-compensated summation method. The idea is to estimate
the roundoff error, store it in a buffer, and then add it back in the next addition.

Suppose we need to calculate $b = a_1 + a_2 + a_3$. The following algorithm
allows the sum to be split in two ($sum_1 = a_{1} + a_{2}$ and $b = sum_2 = sum_1 + a_3$)
with exactly the same rounding errors as if the sum were performed all at once.
\begin{align*}
   sum_1 \ \  &= a_1 + a_2 \\
   error_1     &= a_2 + ( a_1 - sum_1 ) \\
   sum_2 \ \  &= sum_1 + a_3 + error_1 \\
   error_2     &= a_3 + error_1 + ( sum_1 - sum_2 ) \\
   b \qquad \ &= sum_2 \\
\end{align*}
This feature can be found in the \mdl{lib\_fortran} module and is effective when \key{mpp\_rep}
is defined. In that case, all calls to the glob\_sum function (a summation over the entire basin,
excluding rows and columns duplicated by cyclic or north fold boundary conditions, as well as
MPP overlap areas) use this compensated summation.
Note this implementation may be sensitive to the optimization level.

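A minimal sketch of the scheme, generalised from three terms to $n$ terms (illustrative Python, not the \mdl{lib\_fortran} implementation):

```python
# Self-compensated summation, as in the algorithm above: the rounding
# error of each partial sum (the error_1, error_2 terms) is estimated
# and carried into the next addition.

def compensated_sum(values):
    s, err = 0.0, 0.0
    for a in values:
        new_s = s + a + err          # sum_k = sum_{k-1} + a_k + error_{k-1}
        err = a + err + (s - new_s)  # error_k: the residual round-off
        s = new_s
    return s

vals = [1.0, 2.0**53, -2.0**53]
print(sum(vals))              # 0.0 : the 1.0 is lost by the naive sum
print(compensated_sum(vals))  # 1.0 : the round-off was carried along
```

The choice of test values makes the effect visible: $2^{53}$ is the first double whose unit-in-the-last-place exceeds 1, so the naive left-to-right sum silently drops the leading 1.0.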
\subsection{MPP scalability}
\label{MISC_mppsca}

The default method of communicating values across the north-fold in distributed memory applications
(\key{mpp\_mpi}) uses an \textsc{MPI\_ALLGATHER} function to exchange values from each processing
region in the northern row with every other processing region in the northern row. This enables a
global-width array containing the top 4 rows to be collated on every northern row processor and then
folded with a simple algorithm. Although conceptually simple, this "All to All" communication will
hamper performance scalability for large numbers of northern row processors. From version 3.4
onwards an alternative method is available which only performs direct "Peer to Peer" communications
between each processor and its immediate "neighbours" across the fold line. This is achieved by
using the default \textsc{MPI\_ALLGATHER} method during initialisation to help identify the "active"
neighbours. Stored lists of these neighbours are then used in all subsequent north-fold exchanges to
restrict exchanges to those between associated regions. The collated global-width array for each
region is thus only partially filled but is guaranteed to be set at all the locations actually
required by each individual region for the fold operation. This alternative method should give identical
results to the default \textsc{ALLGATHER} method and is recommended for large values of \np{jpni}.
The new method is activated by setting \np{ln\_nnogather} to be true ({\bf nammpp}). The
reproducibility of results using the two methods should be confirmed for each new, non-reference
configuration.

% ================================================================
% Model optimisation, Control Print and Benchmark
% ================================================================
\section{Model Optimisation, Control Print and Benchmark}
\label{MISC_opt}
%--------------------------------------------namctl-------------------------------------------------------
\namdisplay{namctl}
%--------------------------------------------------------------------------------------------------------------

 \gmcomment{why not make these bullets into subsections?}

$\bullet$ Vector optimisation:

\key{vectopt\_loop} enables the internal loops to collapse. This is
a very efficient way to increase the length of vector calculations and thus
to speed up the model on vector computers.

% Add here also one word on NPROMA technique that has been found useless, since compilers have made significant progress during the last decade.

% Add also one word on NEC specific optimisation (Novercheck option for example)

$\bullet$ Control print %: describe here 4 things:

1- \np{ln\_ctl}: compute and print the trends averaged over the interior domain
in all TRA, DYN, LDF and ZDF modules. This option is very helpful when
diagnosing the origin of an undesired change in model results.

2- also \np{ln\_ctl}, but using the nictl and njctl namelist parameters to check
the source of differences between mono and multi processor runs.

3- \key{esopa} (to be renamed key\_nemo): another option for model
management. When defined, this key forces the activation of all options and
CPP keys. For example, all tracer and momentum advection schemes are called!
The model results therefore have no physical meaning.
However, this option forces both the compiler and the model to run through
all the \textsc{Fortran} lines of the model. This allows the user to check for obvious
compilation or execution errors with all CPP options, and for errors in namelist options.

4- last digit comparison (\np{nn\_bit\_cmp}). In an MPP simulation, the computation of
a sum over the whole domain is performed as the summation over all processors of
each of their sums over their interior domains. This double sum never gives exactly
the same result as a single sum over the whole domain, due to truncation differences.
The "bit comparison" option has been introduced in order to be able to check that
mono-processor and multi-processor runs give exactly the same results.
%THIS is to be updated with the mpp_sum_glo  introduced in v3.3
% nn_bit_cmp  today only check that the nn_cla = 0 (no cross land advection)

$\bullet$  Benchmark (\np{nn\_bench}). This option defines a benchmark run based on
a GYRE configuration (see \S\ref{CFG_gyre}) in which the resolution remains the same
whatever the domain size. This allows a very large model domain to be used, just by
changing the domain size (\jp{jpiglo}, \jp{jpjglo}) and without adjusting either the time-step
or the physical parameterisations.


% ================================================================
% Elliptic solvers (SOL)
% ================================================================
\section{Elliptic solvers (SOL)}
\label{MISC_sol}
%--------------------------------------------namsol-------------------------------------------------------
\namdisplay{namsol}
%--------------------------------------------------------------------------------------------------------------

When the filtered sea surface height option is used, the surface pressure gradient is
computed in \mdl{dynspg\_flt}. The force added in the momentum equation is solved implicitly.
It is thus the solution of an elliptic equation \eqref{Eq_PE_flt} for which two solvers are available:
a Successive-Over-Relaxation scheme (SOR) and a preconditioned conjugate gradient
method (PCG), selected through the value of \np{nn\_solv} (namelist parameter).

The PCG is a very efficient method for solving elliptic equations on vector computers.
It is a fast and rather easy method to use, which are attractive features for a large
number of ocean situations (variable bottom topography, complex coastal geometry,
variable grid spacing, open or cyclic boundaries, etc.). It does not require
a search for an optimal parameter as in the SOR method. However, the SOR has
been retained because it is a linear solver, which is a very useful property when
using the adjoint model of \NEMO.

At each time step, the time derivative of the sea surface height at time step $t+1$
(or equivalently the divergence of the \textit{after} barotropic transport) that appears
in the filtering force is the solution of the elliptic equation obtained from the horizontal
divergence of the vertical summation of \eqref{Eq_PE_flt}.
Introducing the following coefficients:
\begin{equation}  \label{Eq_sol_matrix}
\begin{aligned}
&c_{i,j}^{NS}  &&= {2 \rdt }^2 \; \frac{H_v (i,j) \; e_{1v} (i,j)}{e_{2v}(i,j)}              \\
&c_{i,j}^{EW} &&= {2 \rdt }^2 \; \frac{H_u (i,j) \; e_{2u} (i,j)}{e_{1u}(i,j)}            \\
&b_{i,j} &&= \delta_i \left[ e_{2u}M_u \right] - \delta_j \left[ e_{1v}M_v \right]\ ,   \\
\end{aligned}
\end{equation}
the resulting five-point finite difference equation is given by:
\begin{equation}  \label{Eq_solmat}
\begin{split}
       c_{i+1,j}^{NS} D_{i+1,j}  + \;  c_{i,j+1}^{EW} D_{i,j+1}
  +   c_{i,j}    ^{NS} D_{i-1,j}   + \;  c_{i,j}    ^{EW} D_{i,j-1}                                          &    \\
  -    \left(c_{i+1,j}^{NS} + c_{i,j+1}^{EW} + c_{i,j}^{NS} + c_{i,j}^{EW} \right)   D_{i,j}  &=  b_{i,j}
\end{split}
\end{equation}
\eqref{Eq_solmat} is a linear symmetric system of equations. All the elements of
the corresponding matrix \textbf{A} vanish except those of five diagonals. With
the natural ordering of the grid points (i.e. from west to east and from
south to north), the structure of \textbf{A} is block-tridiagonal with
tridiagonal or diagonal blocks. \textbf{A} is a positive-definite symmetric
matrix of size $(jpi \cdot jpj)^2$, and \textbf{B}, the right hand side of
\eqref{Eq_solmat}, is a vector.

Note that in the linear free surface case, the depth that appears in \eqref{Eq_sol_matrix}
does not vary with time, and thus the matrix can be computed once and for all. In the
non-linear free surface case (\key{vvl} defined) the matrix has to be updated at each time step.

% -------------------------------------------------------------------------------------------------------------
%       Successive Over Relaxation
% -------------------------------------------------------------------------------------------------------------
\subsection{Successive Over Relaxation (\np{nn\_solv}=2, \mdl{solsor})}
\label{MISC_solsor}

Let us introduce the four cardinal coefficients:
\begin{align*}
a_{i,j}^S &= c_{i,j    }^{NS}/d_{i,j}     &\qquad  a_{i,j}^W &= c_{i,j}^{EW}/d_{i,j}       \\
a_{i,j}^E &= c_{i,j+1}^{EW}/d_{i,j}    &\qquad   a_{i,j}^N &= c_{i+1,j}^{NS}/d_{i,j}
\end{align*}
where $d_{i,j} = c_{i,j}^{NS}+ c_{i+1,j}^{NS} + c_{i,j}^{EW} + c_{i,j+1}^{EW}$
($i.e.$ the diagonal of the matrix). \eqref{Eq_solmat} can be rewritten as:
\begin{equation}  \label{Eq_solmat_p}
\begin{split}
a_{i,j}^{N}  D_{i+1,j} +\,a_{i,j}^{E}  D_{i,j+1} +\, a_{i,j}^{S}  D_{i-1,j} +\,a_{i,j}^{W} D_{i,j-1}  -  D_{i,j} = \tilde{b}_{i,j}
\end{split}
\end{equation}
with $\tilde b_{i,j} = b_{i,j}/d_{i,j}$. \eqref{Eq_solmat_p} is the equation actually solved
with the SOR method, which is iterative. Its algorithm can be
summarised as follows (see \citet{Haltiner1980} for a further discussion):

initialisation (evaluate a first guess from previous time step computations):
\begin{equation}
D_{i,j}^0 = 2 \, D_{i,j}^t - D_{i,j}^{t-1}
\end{equation}
iteration $n$, from $n=0$ until convergence, do:
\begin{equation} \label{Eq_sor_algo}
\begin{split}
R_{i,j}^n  = &a_{i,j}^{N} D_{i+1,j}^n       +\,a_{i,j}^{E}  D_{i,j+1} ^n
         +\, a_{i,j}^{S}  D_{i-1,j} ^{n+1}+\,a_{i,j}^{W} D_{i,j-1} ^{n+1}
                 -  D_{i,j}^n - \tilde{b}_{i,j}                                           \\
D_{i,j} ^{n+1}  = &D_{i,j} ^{n}   + \omega \;R_{i,j}^n
\end{split}
\end{equation}
where $\omega$ satisfies $1\leq \omega \leq 2$. An optimal value exists for
$\omega$ which significantly accelerates the convergence, but it has to be
adjusted empirically for each model domain (except for a uniform grid where an
analytical expression for $\omega$ can be found \citep{Richtmyer1967}).
The value of $\omega$ is set using \np{rn\_sor}, a \textbf{namelist} parameter.
The convergence test is of the form:
\begin{equation}
\delta = \frac{\sum\limits_{i,j}{R_{i,j}^n}\,{R_{i,j}^n}}
                    {\sum\limits_{i,j}{ \tilde{b}_{i,j}}\,{\tilde{b}_{i,j}}} \leq \epsilon
\end{equation}
where $\epsilon$ is the absolute precision that is required. It is recommended
that a value smaller than or equal to $10^{-6}$ is used for $\epsilon$, since larger
values may lead to numerically induced basin scale barotropic oscillations.
The precision is specified by setting \np{rn\_eps} (\textbf{namelist} parameter).
In addition, two other tests are used to halt the iterative algorithm. They involve
the number of iterations and the modulus of the right hand side. If the former
exceeds a specified value, \np{nn\_max} (\textbf{namelist} parameter),
or the latter is greater than $10^{15}$, the whole model computation is stopped
and the last computed time step fields are saved in an abort.nc NetCDF file.
In both cases, this usually indicates that there is something wrong in the model
configuration (an error in the mesh, the initial state, the input forcing,
or the magnitude of the time step or of the mixing coefficients). A typical value of
\np{nn\_max} is a few hundred when $\epsilon = 10^{-6}$, increasing to a few
thousand when $\epsilon = 10^{-12}$.
The vectorisation of the SOR algorithm is not straightforward. The scheme
contains two linear recurrences, on $i$ and on $j$, which inhibit the vectorisation.
\eqref{Eq_sor_algo} can be rewritten as:
\begin{equation}
\begin{split}
R_{i,j}^n
= &a_{i,j}^{N}  D_{i+1,j}^n +\,a_{i,j}^{E}  D_{i,j+1} ^n
 +\,a_{i,j}^{S}  D_{i-1,j} ^{n}+\,a_{i,j}^{W} D_{i,j-1} ^{n} -  D_{i,j}^n - \tilde{b}_{i,j}      \\
R_{i,j}^n = &R_{i,j}^n - \omega \;a_{i,j}^{S}\; R_{i,j-1}^n                                             \\
R_{i,j}^n = &R_{i,j}^n - \omega \;a_{i,j}^{W}\; R_{i-1,j}^n
\end{split}
\end{equation}
This technique slightly increases the number of iterations required to reach convergence,
but this is largely compensated by the gain obtained by the suppression of the recurrences.

Another technique has been chosen instead, the so-called red-black SOR. It consists of
solving \eqref{Eq_sor_algo} successively for odd and even grid points. It also slightly
reduces the convergence rate but allows the vectorisation. In addition, and this is the
reason why it has been chosen, it is able to handle the north fold boundary condition
used in the ORCA configurations ($i.e.$ tri-polar global ocean mesh).

The SOR method is very flexible and can be used under a wide range of conditions,
including irregular boundaries, interior boundary points, etc. Proofs of convergence, etc.
may be found in the standard numerical methods texts for partial differential equations.

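A minimal sketch of the red-black SOR iteration \eqref{Eq_sor_algo} (illustrative Python, not \mdl{solsor}): all four cardinal coefficients are taken equal to $1/4$, $D$ is held at zero on the boundary, and the convergence test on $\delta$ is the one given above:

```python
import numpy as np

# Red-black SOR on a uniform grid: the four cardinal coefficients all
# equal 1/4 and D = 0 on the boundary.  Each half-sweep (one colour) is
# free of recurrences, hence vectorisable, as discussed in the text.

def residual(D, btilde):
    """R = aN D(i+1,j) + aS D(i-1,j) + aE D(i,j+1) + aW D(i,j-1) - D - btilde."""
    return (0.25 * (D[2:, 1:-1] + D[:-2, 1:-1] + D[1:-1, 2:] + D[1:-1, :-2])
            - D[1:-1, 1:-1] - btilde[1:-1, 1:-1])

def sor(btilde, omega=1.5, eps=1.0e-6, nmax=10000):
    D = np.zeros_like(btilde)
    bnorm = np.sum(btilde ** 2)
    ii, jj = np.indices(btilde[1:-1, 1:-1].shape)
    for n in range(1, nmax + 1):
        for colour in (0, 1):                  # red points, then black points
            R = residual(D, btilde)
            mask = (ii + jj) % 2 == colour
            D[1:-1, 1:-1][mask] += omega * R[mask]
        if np.sum(residual(D, btilde) ** 2) / bnorm <= eps:   # delta <= eps
            return D, n
    return D, nmax

btilde = np.zeros((10, 10))
btilde[5, 5] = 1.0                             # point source in the interior
D, niter = sor(btilde)
```

With $\omega = 1.5$, close to the optimal value for this uniform grid, a handful of iterations suffice on such a small domain; as stated above, $\omega$ must otherwise be tuned empirically per domain.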
% -------------------------------------------------------------------------------------------------------------
%       Preconditioned Conjugate Gradient
% -------------------------------------------------------------------------------------------------------------
\subsection{Preconditioned Conjugate Gradient  (\np{nn\_solv}=1, \mdl{solpcg}) }
\label{MISC_solpcg}

\textbf{A} is a positive-definite symmetric matrix, so solving the linear
system \eqref{Eq_solmat} is equivalent to the minimisation of a quadratic
functional:
\begin{equation*}
\textbf{Ax} = \textbf{b} \leftrightarrow \textbf{x} =\text{inf}_{y} \,\phi (\textbf{y})
\quad \text{with} \quad
\phi (\textbf{y}) = 1/2 \langle \textbf{Ay},\textbf{y}\rangle - \langle \textbf{b},\textbf{y} \rangle
\end{equation*}
where $\langle , \rangle$ is the canonical dot product. The idea of the
conjugate gradient method is to search for the solution in the following
iterative way: assuming that $\textbf{x}^n$ has been obtained, $\textbf{x}^{n+1}$
is found from $\textbf{x}^{n+1}=\textbf{x}^n+\alpha^n \textbf{d}^n$, which satisfies:
\begin{equation*}
\textbf{x}^{n+1}=\text{inf} _{\textbf{y}=\textbf{x}^n+\alpha^n \,\textbf{d}^n} \,\phi (\textbf{y})\;\;\Leftrightarrow \;\;\frac{d\phi }{d\alpha}=0
\end{equation*}
and expressing $\phi (\textbf{y})$ as a function of $\alpha$, we obtain the
value that minimises the functional:
\begin{equation*}
\alpha ^n = \langle{ \textbf{r}^n , \textbf{r}^n} \rangle  / \langle {\textbf{ A d}^n, \textbf{d}^n} \rangle
\end{equation*}
where $\textbf{r}^n = \textbf{b}-\textbf{A x}^n = \textbf{A} (\textbf{x}-\textbf{x}^n)$
is the error at rank $n$. The descent vector $\textbf{d}^n$ is chosen to be dependent
on the error: $\textbf{d}^n = \textbf{r}^n + \beta^n \,\textbf{d}^{n-1}$. $\beta ^n$
is chosen such that the descent vectors form an orthogonal basis for the dot
product linked to \textbf{A}. Expressing the condition
$\langle \textbf{A d}^n, \textbf{d}^{n-1} \rangle = 0$, the value of $\beta ^n$ is found:
 $\beta ^n = \langle{ \textbf{r}^n , \textbf{r}^n} \rangle / \langle {\textbf{r}^{n-1}, \textbf{r}^{n-1}} \rangle$.
 As a result, the errors $\textbf{r}^n$ form an orthogonal
basis for the canonical dot product while the descent vectors $\textbf{d}^n$ form
an orthogonal basis for the dot product linked to \textbf{A}. The resulting
algorithm is thus the following one:

initialisation:
\begin{equation*}
\begin{split}
\textbf{x}^0 &= D_{i,j}^0   = 2 D_{i,j}^t - D_{i,j}^{t-1}       \quad, \text{the initial guess }     \\
\textbf{r}^0 &= \textbf{d}^0 = \textbf{b} - \textbf{A x}^0       \\
\gamma_0 &= \langle{ \textbf{r}^0 , \textbf{r}^0} \rangle
\end{split}
\end{equation*}

iteration $n$, from $n=0$ until convergence, do:
\begin{equation}
\begin{split}
\textbf{z}^n& = \textbf{A d}^n \\
\alpha_n &= \gamma_n /  \langle{ \textbf{z}^n , \textbf{d}^n} \rangle \\
\textbf{x}^{n+1} &= \textbf{x}^n + \alpha_n \,\textbf{d}^n \\
\textbf{r}^{n+1} &= \textbf{r}^n - \alpha_n \,\textbf{z}^n \\
\gamma_{n+1} &= \langle{ \textbf{r}^{n+1} , \textbf{r}^{n+1}} \rangle \\
\beta_{n+1} &= \gamma_{n+1}/\gamma_{n}  \\
\textbf{d}^{n+1} &= \textbf{r}^{n+1} + \beta_{n+1}\; \textbf{d}^{n}\\
\end{split}
\end{equation}


The convergence test is:
\begin{equation}
\delta = \gamma_{n}\; / \langle{ \textbf{b} , \textbf{b}} \rangle \leq \epsilon
\end{equation}
where $\epsilon$ is the absolute precision that is required. As for the SOR algorithm,
the whole model computation is stopped when the number of iterations, \np{nn\_max}, or
the modulus of the right hand side of the convergence equation exceeds a
specified value (see \S\ref{MISC_solsor} for a further discussion). The required
precision and the maximum number of iterations allowed are specified by setting
\np{rn\_eps} and \np{nn\_max} (\textbf{namelist} parameters).

It can be demonstrated that the above algorithm is optimal, provides the exact
solution in a number of iterations equal to the size of the matrix, and that
the convergence rate is faster as the matrix is closer to the identity matrix,
$i.e.$ its eigenvalues are closer to 1. Therefore, it is more efficient to solve
a better conditioned system which has the same solution. For that purpose,
we introduce a preconditioning matrix \textbf{Q} which is an approximation
of \textbf{A} but much easier to invert than \textbf{A}, and solve the system:
\begin{equation} \label{Eq_pmat}
\textbf{Q}^{-1} \textbf{A x} = \textbf{Q}^{-1} \textbf{b}
\end{equation}

The same algorithm can be used to solve \eqref{Eq_pmat} if, instead of the
canonical dot product, the following one is used:
${\langle{ \textbf{a} , \textbf{b}} \rangle}_Q = \langle{ \textbf{a} , \textbf{Q b}} \rangle$, and
if $\textbf{\~{b}} = \textbf{Q}^{-1}\;\textbf{b}$ and $\textbf{\~{A}} = \textbf{Q}^{-1}\;\textbf{A}$
are substituted for \textbf{b} and \textbf{A} \citep{Madec_al_OM88}.
In \NEMO, \textbf{Q} is chosen as the diagonal of \textbf{A}, i.e. the simplest form for
\textbf{Q} so that it can be easily inverted. In this case, the discrete formulation of
\eqref{Eq_pmat} is in fact given by \eqref{Eq_solmat_p} and thus the matrix and
right hand side are computed independently of the solver used.
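A minimal sketch of the PCG iteration above, with \textbf{Q} taken as the diagonal of \textbf{A} (illustrative Python, not \mdl{solpcg}; the small symmetric positive-definite test matrix is invented for the example):

```python
import numpy as np

# Diagonally preconditioned CG: the system is rescaled by Q^{-1} (Q = the
# diagonal of A) and the iteration above is run with the Q-dot product
# <a, b>_Q = <a, Q b>, under which Q^{-1}A is self-adjoint.

def pcg(A, b, eps=1.0e-12, nmax=1000):
    Q = np.diag(A)                      # preconditioner: diagonal of A
    At, bt = A / Q[:, None], b / Q      # Q^{-1} A  and  Q^{-1} b
    x = np.zeros_like(b)
    r = d = bt - At @ x                 # r^0 = d^0
    gamma = r @ (Q * r)                 # gamma_0 = <r, r>_Q
    bdotb = bt @ (Q * bt)
    for _ in range(nmax):
        z = At @ d
        alpha = gamma / (z @ (Q * d))   # <z, d>_Q
        x = x + alpha * d
        r = r - alpha * z
        gamma_new = r @ (Q * r)
        if gamma_new / bdotb <= eps:    # convergence test: delta <= eps
            break
        d = r + (gamma_new / gamma) * d
        gamma = gamma_new
    return x

A = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
b = np.array([1.0, 2.0, 3.0])
x = pcg(A, b)
```

For this $3 \times 3$ system the exact solution is reached in at most three iterations, consistent with the optimality property stated above.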

% ================================================================