\documentclass[NEMO_book]{subfiles}
\begin{document}
% ================================================================
% Chapter --- Miscellaneous Topics
% ================================================================
\chapter{Miscellaneous Topics}
\label{MISC}
\minitoc

\newpage
$\ $\newline    % force a new line

% ================================================================
% Representation of Unresolved Straits
% ================================================================
\section{Representation of Unresolved Straits}
\label{MISC_strait}

In climate modeling, it often occurs that a crucial connection between water masses
is broken because the grid mesh is too coarse to resolve narrow straits. For example, coarse
grid spacing typically closes off the Mediterranean from the Atlantic at the Strait of
Gibraltar. In this case, it is important for climate models to include the effects of salty
water entering the Atlantic from the Mediterranean. Likewise, it is important for the
Mediterranean to replenish its supply of water from the Atlantic to balance the net
evaporation occurring over the Mediterranean region. This problem occurs even in
eddy-permitting simulations. For example, in ORCA 1/4\deg several straits of the Indonesian
archipelago (Ombai, Lombok, ...) are much narrower than even a single ocean grid point.

We briefly describe here the three methods that can be used in \NEMO to handle
such improperly resolved straits. The first two consist of opening the strait by hand
while ensuring that the mass exchanges through the strait are not too large, by
either artificially reducing the surface of the strait grid-cells or locally increasing
the lateral friction. In the third one, the strait is closed but exchanges of mass,
heat and salt across the land are allowed.
Note that such modifications are so specific to a given configuration that no attempt
has been made to set them in a generic way. However, examples of how
they can be set up are given in the ORCA 2\deg and 0.5\deg configurations. For example,
for details of the implementation in ORCA2, search for:
\texttt{ IF( cp\_cfg == "orca" .AND. jp\_cfg == 2 ) }

% -------------------------------------------------------------------------------------------------------------
%       Hand-made geometry changes
% -------------------------------------------------------------------------------------------------------------
\subsection{Hand-made geometry changes}
\label{MISC_strait_hand}

$\bullet$ reduce the scale factor in the cross-strait direction to a value in better agreement
with the true mean width of the strait (Fig.~\ref{Fig_MISC_strait_hand}).
This technique is sometimes called "partially open face" or "partially closed cells".
The key point is to reduce only the faces of the $T$-cell ($i.e.$ change the value
of the horizontal scale factors at $u$- or $v$-points), but not the volume of the $T$-cell.
Indeed, reducing the volume of a strait $T$-cell can easily produce a numerical
instability at that grid point that would require a reduction of the model time step.
The changes associated with strait management are done in \mdl{domhgr},
just after the definition or reading of the horizontal scale factors
(a schematic code sketch is given at the end of this subsection).

$\bullet$ increase the viscous boundary layer thickness by locally increasing the
fmask value at the coast (Fig.~\ref{Fig_MISC_strait_hand}). This is done in
\mdl{dommsk} together with the setting of the coastal value of fmask
(see Section~\ref{LBC_coast}).

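
As an illustration of the first approach, a strait is typically opened in \mdl{domhgr} by
overwriting the cross-strait scale factor over a couple of grid points inside a test on the
configuration name. The fragment below is only a schematic sketch: the grid indices and the
20~km value are placeholders, not the actual ORCA2 settings, which should be looked up in
\mdl{domhgr} itself.
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
      IF( cp_cfg == "orca" .AND. jp_cfg == 2 ) THEN
         !                                        ! Gibraltar Strait (illustrative indices only)
         ij0 = 101   ;   ij1 = 101                ! j-row crossing the strait
         ii0 = 139   ;   ii1 = 140                ! i-columns on either side of the strait
         e2u( mi0(ii0):mi1(ii1) , mj0(ij0):mj1(ij1) ) = 20.e3   ! cross-strait width set to 20 km
      ENDIF
\end{verbatim}
\end{alltt}
Note that only the $u$- (or $v$-) point scale factor is modified; the $T$-cell scale factors,
and hence the $T$-cell volume, are left unchanged, as discussed above.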
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!tbp]     \begin{center}
\includegraphics[width=0.80\textwidth]{Fig_Gibraltar}
\includegraphics[width=0.80\textwidth]{Fig_Gibraltar2}
\caption{   \label{Fig_MISC_strait_hand}
Example of the Gibraltar strait defined in a $1^{\circ} \times 1^{\circ}$ mesh.
\textit{Top}: using partially open cells. The meridional scale factor at $v$-point
is reduced on both sides of the strait to account for the real width of the strait
(about 20 km). Note that the scale factors of the strait $T$-point remain unchanged.
\textit{Bottom}: using viscous boundary layers. The four fmask parameters
along the strait coastlines are set to a value larger than 4, $i.e.$ the "strong" no-slip
case (see Fig.~\ref{Fig_LBC_shlat}), creating a large viscous boundary layer
that allows a reduced transport through the strait.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>


% ================================================================
% Closed seas
% ================================================================
\section{Closed seas (\mdl{closea})}
\label{MISC_closea}

\colorbox{yellow}{Add here a short description of the way closed seas are managed}


% ================================================================
% Sub-Domain Functionality (\textit{jpizoom, jpjzoom}, namelist parameters)
% ================================================================
\section{Sub-Domain Functionality (\np{jpizoom}, \np{jpjzoom})}
\label{MISC_zoom}

The sub-domain functionality, also improperly called the zoom option
(improperly because it is not associated with a change in model resolution),
is a quite simple way of running a simulation over a sub-domain of an
already defined configuration ($i.e.$ without defining a new mesh, initial
state and forcings). This option can be useful for testing the user settings
of surface boundary conditions, or the initial ocean state of a huge ocean
model configuration, while having a small computer memory requirement.
It can also be used to easily test specific physics in a sub-domain (for example,
see \citep{Madec_al_JPO96} for a test, using a sub-domain, of the coupling between
the sea-ice and ocean models in the global ocean version of OPA over the Arctic or
Antarctic ocean). In the standard model, this option does not
include any specific treatment for the ocean boundaries of the sub-domain:
they are considered as artificial vertical walls. Nevertheless, it is quite easy
to add a restoring term toward a climatology in the vicinity of such boundaries
(see \S\ref{TRA_dmp}).

In order to easily define a sub-domain over which the computation can be
performed, the dimensions of all input arrays (ocean mesh, bathymetry,
forcing, initial state, ...) are defined as \np{jpidta}, \np{jpjdta} and \np{jpkdta}
(in the \ngn{namcfg} namelist), while the computational domain is defined through
\np{jpiglo}, \np{jpjglo} and \jp{jpk} (\ngn{namcfg} namelist). When running the
model over the whole domain, the user sets \np{jpiglo}=\np{jpidta}, \np{jpjglo}=\np{jpjdta}
and \jp{jpk}=\np{jpkdta}. When running the model over a sub-domain, the user
has to provide the size of the sub-domain (\np{jpiglo}, \np{jpjglo}, \np{jpkglo})
and the indices of the south-western corner as \np{jpizoom} and \np{jpjzoom} in
the \ngn{namcfg} namelist (Fig.~\ref{Fig_LBC_zoom}).
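
As an illustration, a minimal \ngn{namcfg} fragment for a sub-domain run might look as
follows. The data-domain sizes correspond to the standard ORCA2 input files, while the
sub-domain extent and the corner indices are purely illustrative values, to be adapted to
the region of interest:
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
&namcfg        !   parameters of the configuration
   cp_cfg      =  "orca"   !  name of the configuration
   jp_cfg      =       2   !  resolution of the configuration
   jpidta      =     182   !  i-size of the data domain (ORCA2 input files)
   jpjdta      =     149   !  j-size of the data domain
   jpkdta      =      31   !  number of levels of the data domain
   jpiglo      =      60   !  i-size of the computational sub-domain (illustrative)
   jpjglo      =      50   !  j-size of the computational sub-domain (illustrative)
   jpizoom     =     100   !  i-index, in the data domain, of the south-western corner
   jpjzoom     =      80   !  j-index, in the data domain, of the south-western corner
/
\end{verbatim}
\end{alltt}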

Note that a third set of dimensions exists, \jp{jpi}, \jp{jpj} and \jp{jpk}, which is
actually used to perform the computation. It is set by default to \jp{jpi}=\np{jpiglo}
and \jp{jpj}=\np{jpjglo}, except for massively parallel computing where the
computational domain is laid out on local processor memories following a 2D
horizontal splitting. % (see {\S}IV.2-c) ref to the section to be updated

\subsection{Simple subsetting of input files via netCDF attributes}

The extended grids for use with the under-shelf ice cavities will result in redundant rows
around Antarctica if the ice cavities are not active. A simple mechanism for subsetting
input files associated with the extended domains has been implemented to avoid the need to
maintain different sets of input fields for use with or without active ice cavities. The
existing 'zoom' options are overly complex for this task and are marked for deletion anyway.
This alternative subsetting operates in the j-direction only and works by optionally
looking for, and using, a global file attribute (named \np{open\_ocean\_jstart}) to
determine the starting j-row for input. The use of this option is best explained with an
example: consider an ORCA1 configuration using the extended grid bathymetry and coordinate
files:
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
eORCA1_bathymetry_v2.nc
eORCA1_coordinates.nc
\end{verbatim}
\end{alltt}
\noindent These files define a horizontal domain of 362x332. Assuming the first row with
open ocean wet points in the non-isf bathymetry for this set is row 42 (Fortran indexing),
the formally correct setting for \np{open\_ocean\_jstart} is 41. Using this value as the
first row to be read will result in a 362x292 domain, which is the same size as the original
ORCA1 domain. Thus the extended coordinates and bathymetry files can be used with all the
original input files for ORCA1 if the ice cavities are not active (\np{ln\_isfcav =
.false.}). Full instructions for achieving this are:

\noindent Add the new attribute to any input files requiring a j-row offset, $i.e.$:
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
ncatted  -a open_ocean_jstart,global,a,d,41 eORCA1_coordinates.nc
ncatted  -a open_ocean_jstart,global,a,d,41 eORCA1_bathymetry_v2.nc
\end{verbatim}
\end{alltt}

\noindent Add the logical switch to \ngn{namcfg} in the configuration namelist and set it to true:
%--------------------------------------------namcfg--------------------------------------------------------
\namdisplay{namcfg_orca1}
%--------------------------------------------------------------------------------------------------------------

\noindent Note that the j-size of the global domain is then (extended j-size minus
\np{open\_ocean\_jstart} + 1), and this must currently match the size of all datasets other than
bathymetry and coordinates. However, the option can be extended to any global 2D
or 3D netCDF input field by adding the
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
lrowattr=ln_use_jattr
\end{verbatim}
\end{alltt}
optional argument to the appropriate \np{iom\_get} call, and the \np{open\_ocean\_jstart} attribute
to the corresponding input files. It remains the user's responsibility to set \np{jpjdta} and
\np{jpjglo} in the \np{namelist\_cfg} file according to their needs.
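
For instance, a read of a 2D field honouring the attribute might look as follows (a sketch
only: the unit number, variable name and target array are illustrative, not taken from a
specific NEMO routine):
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
      ! read "Bathymetry" from an already opened file (unit inum); if the file carries an
      ! open_ocean_jstart attribute and ln_use_jattr is true, reading starts at that j-row
      CALL iom_get( inum, jpdom_data, 'Bathymetry', bathy, lrowattr=ln_use_jattr )
\end{verbatim}
\end{alltt}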

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_zoom}
\caption{   \label{Fig_LBC_zoom}
Position of a model domain compared to the data input domain when the zoom functionality is used.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>


% ================================================================
% Accuracy and Reproducibility
% ================================================================
\section{Accuracy and Reproducibility (\mdl{lib\_fortran})}
\label{MISC_fortran}

\subsection{Issues with the intrinsic SIGN function (\key{nosignedzero})}
\label{MISC_sign}

SIGN(A, B) is the \textsc{Fortran} intrinsic function that delivers the magnitude
of A with the sign of B. For example, SIGN(-3.0,2.0) has the value 3.0.
The problematic case is when the second argument is zero, because, on platforms
that support IEEE arithmetic, zero is actually a signed number:
there is a positive zero and a negative zero.

In \textsc{Fortran}~90, the processor was always required to deliver a positive result for SIGN(A, B)
if B was zero. In \textsc{Fortran}~95, however, the processor is allowed to do the correct thing
and deliver ABS(A) when B is a positive zero and -ABS(A) when B is a negative zero.
This change in the specification becomes apparent only when B is a real negative zero
and the processor is capable of distinguishing between positive and negative zero:
SIGN then delivers a negative result where, under \textsc{Fortran}~90
rules, it used to return a positive one.
This change may be especially sensitive for the ice model, so we overwrite the intrinsic
function with our own function, which simply performs:   \\
\verb?   IF( B >= 0.e0 ) THEN   ;   SIGN(A,B) = ABS(A)  ?    \\
\verb?   ELSE                   ;   SIGN(A,B) =-ABS(A)     ?  \\
\verb?   ENDIF    ? \\
This feature can be found in the \mdl{lib\_fortran} module and is effective when \key{nosignedzero}
is defined. We use a CPP key because overwriting an intrinsic function can raise
performance issues with some computers/compilers.
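
To make the mechanism explicit, the overwrite is done by providing a generic interface named
SIGN that resolves to module functions. The fragment below is a minimal sketch of the idea,
assuming NEMO's working precision kind \texttt{wp} from \mdl{par\_kind}; it is not a verbatim
copy of \mdl{lib\_fortran}, which also provides array versions:
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
      INTERFACE SIGN                      ! shadow the intrinsic when the key is active
         MODULE PROCEDURE sign_scalar
      END INTERFACE

      FUNCTION sign_scalar( pa, pb )
         REAL(wp) :: pa, pb               ! input values (wp: NEMO working precision)
         REAL(wp) :: sign_scalar          ! result: |pa| with the sign of pb
         IF( pb >= 0.e0 ) THEN   ;   sign_scalar =  ABS(pa)
         ELSE                    ;   sign_scalar = -ABS(pa)
         ENDIF
      END FUNCTION sign_scalar
\end{verbatim}
\end{alltt}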


\subsection{MPP reproducibility}
\label{MISC_glosum}

The numerical reproducibility of simulations on distributed memory parallel computers
is a critical issue. In particular, within NEMO the global summation of distributed arrays
is the operation most susceptible to rounding errors, and their propagation and accumulation cause
uncertainty in final simulation reproducibility on different numbers of processors.
To avoid this, based on the \citet{He_Ding_JSC01} review of different techniques,
we use a so-called self-compensated summation method. The idea is to estimate
the roundoff error, store it in a buffer, and then add it back in the next addition.

Suppose we need to calculate $b = a_1 + a_2 + a_3$. The following algorithm
allows the sum to be split in two ($sum_1 = a_{1} + a_{2}$ and $b = sum_2 = sum_1 + a_3$)
with exactly the same rounding errors as the sum performed all at once.
\begin{align*}
   sum_1 \ \  &= a_1 + a_2 \\
   error_1     &= a_2 + ( a_1 - sum_1 ) \\
   sum_2 \ \  &= sum_1 + a_3 + error_1 \\
   error_2     &= a_3 + error_1 + ( sum_1 - sum_2 ) \\
   b \qquad \ &= sum_2 \\
\end{align*}
An example of this feature can be found in the \mdl{lib\_fortran} module.
It is systematically used in the glob\_sum function (summation over the entire basin, excluding
duplicated rows and columns due to cyclic or north-fold boundary conditions as well as
overlapping MPP areas). The self-compensated summation method should be used in all summations
in the i- and/or j-direction. See the \mdl{closea} module for an example.
Note also that this implementation may be sensitive to the optimization level.
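
To make the recipe concrete, the fragment below applies it to the summation of a 1D array.
It is a minimal, self-contained sketch of the technique, not the actual NEMO routine
(the function name and the local kind parameter are illustrative):
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
      FUNCTION cmp_sum( pa ) RESULT( pb )
         INTEGER, PARAMETER   :: wp = KIND(1.d0)   ! stands in for NEMO working precision
         REAL(wp), INTENT(in) :: pa(:)             ! values to be summed
         REAL(wp)             :: pb                ! compensated result
         REAL(wp)             :: zsum, zerr, znew
         INTEGER              :: ji
         zsum = 0._wp   ;   zerr = 0._wp
         DO ji = 1, SIZE(pa)
            znew = zsum + pa(ji) + zerr              ! add next value plus the stored error
            zerr = pa(ji) + zerr + ( zsum - znew )   ! new estimate of the roundoff error
            zsum = znew
         END DO
         pb = zsum
      END FUNCTION cmp_sum
\end{verbatim}
\end{alltt}
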
\subsection{MPP scalability}
\label{MISC_mppsca}

The default method of communicating values across the north-fold in distributed memory applications
(\key{mpp\_mpi}) uses an \textsc{MPI\_ALLGATHER} function to exchange values from each processing
region in the northern row with every other processing region in the northern row. This enables a
global-width array containing the top 4 rows to be collated on every northern-row processor and then
folded with a simple algorithm. Although conceptually simple, this "All to All" communication will
hamper performance scalability for large numbers of northern-row processors. From version 3.4
onwards an alternative method is available which only performs direct "Peer to Peer" communications
between each processor and its immediate "neighbours" across the fold line. This is achieved by
using the default \textsc{MPI\_ALLGATHER} method during initialisation to help identify the "active"
neighbours. Stored lists of these neighbours are then used in all subsequent north-fold exchanges to
restrict exchanges to those between associated regions. The collated global-width array for each
region is thus only partially filled but is guaranteed to be set at all the locations actually
required by each individual region for the fold operation. This alternative method should give
identical results to the default \textsc{ALLGATHER} method and is recommended for large values of
\np{jpni}. The new method is activated by setting \np{ln\_nnogather} to true in the \ngn{nammpp}
namelist. The reproducibility of results using the two methods should be confirmed for each new,
non-reference configuration.
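
For reference, a minimal \ngn{nammpp} fragment activating the option might look as follows
(the decomposition values are illustrative only):
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
&nammpp        !   Massively Parallel Processing
   ln_nnogather =  .true.   !  use peer-to-peer communications across the north-fold
   jpni         =      32   !  illustrative: number of processors in the i-direction
   jpnj         =      16   !  illustrative: number of processors in the j-direction
/
\end{verbatim}
\end{alltt}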

% ================================================================
% Model optimisation, Control Print and Benchmark
% ================================================================
\section{Model Optimisation, Control Print and Benchmark}
\label{MISC_opt}
%--------------------------------------------namctl-------------------------------------------------------
\namdisplay{namctl}
%--------------------------------------------------------------------------------------------------------------

\gmcomment{why not make these bullets into subsections?}
Options are defined through the \ngn{namctl} namelist variables.

$\bullet$ Vector optimisation:

\key{vectopt\_loop} enables the internal loops to collapse. This is
a very efficient way to increase the length of vector calculations and thus
to speed up the model on vector computers.

% Add here also one word on NPROMA technique that has been found useless, since compilers have made significant progress during the last decade.

% Add also one word on NEC specific optimisation (Novercheck option for example)

$\bullet$ Control print %: describe here 4 things:

1- \np{ln\_ctl}: compute and print the trends averaged over the interior domain
in all TRA, DYN, LDF and ZDF modules. This option is very helpful when
diagnosing the origin of an undesired change in model results.

2- also \np{ln\_ctl} but using the nictl and njctl namelist parameters to check
the source of differences between mono-processor and multi-processor runs.

%%gm   to be removed both here and in the code
3- last digit comparison (\np{nn\_bit\_cmp}). In an MPP simulation, the computation of
a sum over the whole domain is performed as the summation over all processors of
each of their sums over their interior domains. This double sum never gives exactly
the same result as a single sum over the whole domain, due to truncation differences.
The "bit comparison" option has been introduced in order to be able to check that
mono-processor and multi-processor runs give exactly the same results.
%THIS is to be updated with the mpp_sum_glo  introduced in v3.3
% nn_bit_cmp  today only check that the nn_cla = 0 (no cross land advection)
%%gm end

$\bullet$ Benchmark (\np{nn\_bench}). This option defines a benchmark run based on
a GYRE configuration (see \S\ref{CFG_gyre}) in which the resolution remains the same
whatever the domain size. This allows a very large model domain to be used, just by
changing the domain size (\np{jpiglo}, \np{jpjglo}) and without adjusting either the time-step
or the physical parameterisations.
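
As an illustration, a benchmark run could be requested with a namelist fragment along the
following lines; the switch lives in \ngn{namctl}, while the enlarged domain size is set in
\ngn{namcfg} (all sizes below are placeholders, not recommended values):
\vspace{-10pt}
\begin{alltt}
\tiny
\begin{verbatim}
&namctl
   nn_bench    =       1   !  benchmark mode (0/1)
/
&namcfg
   cp_cfg      =  "gyre"   !  GYRE-based benchmark configuration
   jpidta      =    2002   !  placeholder: enlarged i-size of the domain
   jpjdta      =    1332   !  placeholder: enlarged j-size of the domain
   jpiglo      =    2002   !  computational domain = data domain
   jpjglo      =    1332
/
\end{verbatim}
\end{alltt}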

% ================================================================
\end{document}