1\documentclass[../main/NEMO_manual]{subfiles}
2
3\begin{document}
4% ================================================================
5% Chapter — Lateral Boundary Condition (LBC)
6% ================================================================
7\chapter{Lateral Boundary Condition (LBC)}
8\label{chap:LBC}
9
10\minitoc
11
12\newpage
13
14%gm% add here introduction to this chapter
15
16% ================================================================
17% Boundary Condition at the Coast
18% ================================================================
19\section[Boundary condition at the coast (\texttt{rn\_shlat})]
20{Boundary condition at the coast (\protect\np{rn\_shlat})}
21\label{sec:LBC_coast}
22%--------------------------------------------nam_lbc-------------------------------------------------------
23
24\nlst{namlbc} 
25%--------------------------------------------------------------------------------------------------------------
26
27%The lateral ocean boundary conditions contiguous to coastlines are Neumann conditions for heat and salt (no flux across boundaries) and Dirichlet conditions for momentum (ranging from free-slip to "strong" no-slip). They are handled automatically by the mask system (see \autoref{subsec:DOM_msk}).
28
29%OPA allows land and topography grid points in the computational domain due to the presence of continents or islands, and includes the use of a full or partial step representation of bottom topography. The computation is performed over the whole domain, \ie we do not try to restrict the computation to ocean-only points. This choice has two motivations. Firstly, working on ocean only grid points overloads the code and harms the code readability. Secondly, and more importantly, it drastically reduces the vector portion of the computation, leading to a dramatic increase of CPU time requirement on vector computers.  The current section describes how the masking affects the computation of the various terms of the equations with respect to the boundary condition at solid walls. The process of defining which areas are to be masked is described in \autoref{subsec:DOM_msk}.
30
31Options are defined through the \ngn{namlbc} namelist variables.
32The discrete representation of a domain with complex boundaries (coastlines and bottom topography) leads to
33arrays that include large portions where a computation is not required as the model variables remain at zero.
34Nevertheless, vectorial supercomputers are far more efficient when computing over a whole array,
35and the readability of a code is greatly improved when boundary conditions are applied in
36an automatic way rather than by a specific computation before or after each computational loop.
37An efficient way to work over the whole domain while specifying the boundary conditions,
38is to use multiplication by mask arrays in the computation.
39A mask array is a matrix whose elements are $1$ in the ocean domain and $0$ elsewhere.
40A simple multiplication of a variable by its own mask ensures that it will remain zero over land areas.
41Since most of the boundary conditions consist of a zero flux across the solid boundaries,
42they can be simply applied by multiplying variables by the correct mask arrays,
43\ie the mask array of the grid point where the flux is evaluated.
44For example, the heat flux in the \textbf{i}-direction is evaluated at $u$-points.
45Evaluating this quantity as,
46
47\[
48  % \label{eq:lbc_aaaa}
49  \frac{A^{lT} }{e_1 }\frac{\partial T}{\partial i}\equiv \frac{A_u^{lT}
50  }{e_{1u} } \; \delta_{i+1 / 2} \left[ T \right]\;\;mask_u
51\]
52(where mask$_{u}$ is the mask array at a $u$-point) ensures that the heat flux is zero inside land and
53at the boundaries, since mask$_{u}$ is zero at solid boundaries which in this case are defined at $u$-points
54(normal velocity $u$ remains zero at the coast) (\autoref{fig:LBC_uv}).
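
The following minimal NumPy sketch (illustrative only, not NEMO code; the toy coastline, scale factors and diffusivity are arbitrary) shows how multiplying by the $u$-mask sets this flux to zero over land and at the coast:
\begin{verbatim}
# Illustrative NumPy sketch (not NEMO code): zonal diffusive heat flux at u-points,
# multiplied by the u-mask so that it vanishes over land and at the coast.
import numpy as np

jpj, jpi = 6, 8
tmask = np.ones((jpj, jpi))
tmask[:, 5:] = 0.0                             # toy coastline: land east of i = 4
umask = tmask * np.roll(tmask, -1, axis=1)     # u-mask: 1 only if both adjacent T-points are ocean
umask[:, -1] = 0.0                             # closed eastern edge of the toy domain

T   = 20.0 + np.random.rand(jpj, jpi)          # temperature at T-points
e1u = 1.0e5 * np.ones((jpj, jpi))              # zonal scale factor at u-points [m]
aht = 1.0e3                                    # lateral diffusivity A^{lT} [m2/s]

# (A/e1u) * delta_{i+1/2}[T] * mask_u : zero wherever umask is zero
flux_u = aht / e1u * (np.roll(T, -1, axis=1) - T) * umask
print(flux_u[0])                               # coastal and land columns give 0
\end{verbatim}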
55
56%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
57\begin{figure}[!t]
58  \begin{center}
59    \includegraphics[width=\textwidth]{Fig_LBC_uv}
60    \caption{
61      \protect\label{fig:LBC_uv}
62      Lateral boundary (thick line) at T-level.
63      The velocity normal to the boundary is set to zero.
64    }
65  \end{center}
66\end{figure}
67%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
68
69For momentum the situation is a bit more complex as two boundary conditions must be provided along the coast
70(one each for the normal and tangential velocities).
71The boundary of the ocean in the C-grid is defined by the velocity-faces.
72For example, at a given $T$-level,
73the lateral boundary (a coastline or an intersection with the bottom topography) is made of
74segments joining $f$-points, and normal velocity points are located between two $f-$points (\autoref{fig:LBC_uv}).
75The boundary condition on the normal velocity (no flux through solid boundaries)
76can thus be easily implemented using the mask system.
77The boundary condition on the tangential velocity requires a more specific treatment.
78This boundary condition influences the relative vorticity and momentum diffusive trends,
79and is required in order to compute the vorticity at the coast.
Four different types of lateral boundary condition are available,
controlled by the value of the \np{rn\_shlat} namelist parameter
(the value of the mask$_{f}$ array along the coastline is set equal to this parameter).
These are described below; a short illustrative sketch follows the list.
84
85%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
86\begin{figure}[!p]
87  \begin{center}
88    \includegraphics[width=\textwidth]{Fig_LBC_shlat}
89    \caption{
90      \protect\label{fig:LBC_shlat}
      Lateral boundary conditions:
      (a) free-slip ($rn\_shlat=0$);
      (b) no-slip ($rn\_shlat=2$);
      (c) "partial" free-slip ($0<rn\_shlat<2$) and
      (d) "strong" no-slip ($2<rn\_shlat$).
      The implied "ghost" velocity inside the land area is displayed in grey.
97    }
98  \end{center}
99\end{figure}
100%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
101
102\begin{description}
103
104\item[free-slip boundary condition (\np{rn\_shlat}\forcode{ = 0}):] the tangential velocity at
105  the coastline is equal to the offshore velocity,
106  \ie the normal derivative of the tangential velocity is zero at the coast,
  so is the vorticity: the mask$_{f}$ array is set to zero inside the land and just at the coast
108  (\autoref{fig:LBC_shlat}-a).
109
110\item[no-slip boundary condition (\np{rn\_shlat}\forcode{ = 2}):] the tangential velocity vanishes at the coastline.
111  Assuming that the tangential velocity decreases linearly from
112  the closest ocean velocity grid point to the coastline,
113  the normal derivative is evaluated as if the velocities at the closest land velocity gridpoint and
114  the closest ocean velocity gridpoint were of the same magnitude but in the opposite direction
115  (\autoref{fig:LBC_shlat}-b).
116  Therefore, the vorticity along the coastlines is given by:
117
118  \[
119    \zeta \equiv 2 \left(\delta_{i+1/2} \left[e_{2v} v \right] - \delta_{j+1/2} \left[e_{1u} u \right] \right) / \left(e_{1f} e_{2f} \right) \ ,
120  \]
121  where $u$ and $v$ are masked fields.
122  Setting the mask$_{f}$ array to $2$ along the coastline provides a vorticity field computed with
  the no-slip boundary condition, simply by multiplying it by mask$_{f}$:
124  \[
125    % \label{eq:lbc_bbbb}
126    \zeta \equiv \frac{1}{e_{1f} {\kern 1pt}e_{2f} }\left( {\delta_{i+1/2}
127        \left[ {e_{2v} \,v} \right]-\delta_{j+1/2} \left[ {e_{1u} \,u} \right]}
128    \right)\;\mbox{mask}_f
129  \]
130
131\item["partial" free-slip boundary condition (0$<$\np{rn\_shlat}$<$2):] the tangential velocity at
132  the coastline is smaller than the offshore velocity, \ie there is a lateral friction but
133  not strong enough to make the tangential velocity at the coast vanish (\autoref{fig:LBC_shlat}-c).
  This can be selected by providing a value of mask$_{f}$ strictly between $0$ and $2$.
135
136\item["strong" no-slip boundary condition (2$<$\np{rn\_shlat}):] the viscous boundary layer is assumed to
137  be smaller than half the grid size (\autoref{fig:LBC_shlat}-d).
138  The friction is thus larger than in the no-slip case.
139
140\end{description}
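
The following toy sketch (illustrative only, not NEMO code; the geometry, scale factors and velocity values are arbitrary) shows how the value stored in mask$_{f}$ along the coastline, \ie \np{rn\_shlat}, scales the vorticity computed at a coastal $f$-point:
\begin{verbatim}
# Toy sketch (not NEMO code): vorticity at a coastal f-point for different
# values of rn_shlat stored in the fmask array along the coastline.
e1u = e2v = e1f = e2f = 1.0e4       # uniform scale factors [m]
u_ocean = 0.1                       # tangential velocity at the last ocean u-point [m/s]
u_land  = 0.0                       # masked velocity inside the land
v_left = v_right = 0.0              # no flow normal to this straight coastline

# vorticity computed with the masked velocities (before applying fmask)
zeta = ((e2v * v_right - e2v * v_left)
        - (e1u * u_land - e1u * u_ocean)) / (e1f * e2f)

for rn_shlat in (0.0, 1.0, 2.0, 4.0):    # free-slip, partial slip, no-slip, strong no-slip
    fmask_coast = rn_shlat               # value given to fmask along the coastline
    print(rn_shlat, zeta * fmask_coast)  # 0 for free-slip, the no-slip value for rn_shlat = 2
\end{verbatim}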
141
Note that when the bottom topography is entirely represented by the $s$-coordinates (pure $s$-coordinate),
143the lateral boundary condition on tangential velocity is of much less importance as
144it is only applied next to the coast where the minimum water depth can be quite shallow.
145
146
147% ================================================================
148% Boundary Condition around the Model Domain
149% ================================================================
150\section[Model domain boundary condition (\texttt{jperio})]
151{Model domain boundary condition (\protect\np{jperio})}
152\label{sec:LBC_jperio}
153
154At the model domain boundaries several choices are offered:
155closed, cyclic east-west, cyclic north-south, a north-fold, and combination closed-north fold or
156bi-cyclic east-west and north-fold.
157The north-fold boundary condition is associated with the 3-pole ORCA mesh.
158
159% -------------------------------------------------------------------------------------------------------------
160%        Closed, cyclic (\np{jperio}\forcode{ = 0..2})
161% -------------------------------------------------------------------------------------------------------------
162\subsection[Closed, cyclic (\forcode{jperio = [0127]})]
163{Closed, cyclic (\protect\np{jperio}\forcode{ = [0127]})}
164\label{subsec:LBC_jperio012}
165
166The choice of closed or cyclic model domain boundary condition is made by
167setting \np{jperio} to 0, 1, 2 or 7 in namelist \ngn{namcfg}.
168Each time such a boundary condition is needed, it is set by a call to routine \mdl{lbclnk}.
169The computation of momentum and tracer trends proceeds from $i=2$ to $i=jpi-1$ and from $j=2$ to $j=jpj-1$,
170\ie in the model interior.
Choosing a lateral model boundary condition amounts to specifying the first and last rows and columns of
the model variables; a short sketch of the corresponding assignments follows the list below.
173
174\begin{description}
175
176\item[For closed boundary (\np{jperio}\forcode{ = 0})],
177  solid walls are imposed at all model boundaries:
178  first and last rows and columns are set to zero.
179
180\item[For cyclic east-west boundary (\np{jperio}\forcode{ = 1})],
181  first and last rows are set to zero (closed) whilst the first column is set to
182  the value of the last-but-one column and the last column to the value of the second one
183  (\autoref{fig:LBC_jperio}-a).
184  Whatever flows out of the eastern (western) end of the basin enters the western (eastern) end.
185
186\item[For cyclic north-south boundary (\np{jperio}\forcode{ = 2})],
187  first and last columns are set to zero (closed) whilst the first row is set to
188  the value of the last-but-one row and the last row to the value of the second one
189  (\autoref{fig:LBC_jperio}-a).
190  Whatever flows out of the northern (southern) end of the basin enters the southern (northern) end.
191
192\item[Bi-cyclic east-west and north-south boundary (\np{jperio}\forcode{ = 7})] combines cases 1 and 2.
193
194\end{description}
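
The following NumPy sketch (illustrative only, not NEMO code; the toy field and sizes are arbitrary) mimics the first/last row and column assignments described above:
\begin{verbatim}
# Illustrative NumPy sketch (not NEMO code) of the first/last row and column
# assignments made for the different jperio values described above.
import numpy as np

def apply_jperio(field, jperio):
    """field is a (jpj, jpi) array; rows are j, columns are i."""
    f = field.copy()
    f[:, 0] = 0.0; f[:, -1] = 0.0          # closed east-west by default
    f[0, :] = 0.0; f[-1, :] = 0.0          # closed north-south by default
    if jperio in (1, 7):                   # cyclic east-west
        f[:, 0]  = f[:, -2]                # first column <- last-but-one column
        f[:, -1] = f[:, 1]                 # last column  <- second column
    if jperio in (2, 7):                   # cyclic north-south
        f[0, :]  = f[-2, :]                # first row <- last-but-one row
        f[-1, :] = f[1, :]                 # last row  <- second row
    return f

t = np.arange(42, dtype=float).reshape(6, 7)      # toy (jpj = 6, jpi = 7) field
print(apply_jperio(t, jperio=1)[:, [0, -1]])      # halo columns after the east-west cyclic copy
\end{verbatim}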
195
196%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
197\begin{figure}[!t]
198  \begin{center}
199    \includegraphics[width=\textwidth]{Fig_LBC_jperio}
200    \caption{
201      \protect\label{fig:LBC_jperio}
202      setting of (a) east-west cyclic  (b) symmetric across the equator boundary conditions.
203    }
204  \end{center}
205\end{figure}
206%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
207
208% -------------------------------------------------------------------------------------------------------------
209%        North fold (\textit{jperio = 3 }to $6)$
210% -------------------------------------------------------------------------------------------------------------
211\subsection[North-fold (\forcode{jperio = [3-6]})]
212{North-fold (\protect\np{jperio}\forcode{ = [3-6]})}
213\label{subsec:LBC_north_fold}
214
215The north fold boundary condition has been introduced in order to handle the north boundary of
216a three-polar ORCA grid.
Such a grid has two poles in the northern hemisphere (\autoref{fig:MISC_ORCA_msh})
and thus requires a specific treatment illustrated in \autoref{fig:North_Fold_T}.
Further information can be found in the \mdl{lbcnfd} module, which applies the north fold boundary condition.
220
221%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
222\begin{figure}[!t]
223  \begin{center}
224    \includegraphics[width=\textwidth]{Fig_North_Fold_T}
225    \caption{
226      \protect\label{fig:North_Fold_T}
227      North fold boundary with a $T$-point pivot and cyclic east-west boundary condition ($jperio=4$),
228      as used in ORCA 2, 1/4, and 1/12.
229      Pink shaded area corresponds to the inner domain mask (see text).
230    }
231  \end{center}
232\end{figure}
233%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
234
235% ====================================================================
236% Exchange with neighbouring processors
237% ====================================================================
238\section[Exchange with neighbouring processors (\textit{lbclnk.F90}, \textit{lib\_mpp.F90})]
239{Exchange with neighbouring processors (\protect\mdl{lbclnk}, \protect\mdl{lib\_mpp})}
240\label{sec:LBC_mpp}
241
242For massively parallel processing (mpp), a domain decomposition method is used.
243The basic idea of the method is to split the large computation domain of a numerical experiment into
244several smaller domains and solve the set of equations by addressing independent local problems.
245Each processor has its own local memory and computes the model equation over a subdomain of the whole model domain.
246The subdomain boundary conditions are specified through communications between processors which
247are organized by explicit statements (message passing method).
248
249A big advantage is that the method does not need many modifications of the initial \fortran code.
From the modeller's point of view, each subdomain running on a processor is identical to the "mono-domain" code.
251In addition, the programmer manages the communications between subdomains,
252and the code is faster when the number of processors is increased.
253The porting of OPA code on an iPSC860 was achieved during Guyon's PhD [Guyon et al. 1994, 1995]
254in collaboration with CETIIS and ONERA.
255The implementation in the operational context and the studies of performance on
256a T3D and T3E Cray computers have been made in collaboration with IDRIS and CNRS.
257The present implementation is largely inspired by Guyon's work [Guyon 1995].
258
259The parallelization strategy is defined by the physical characteristics of the ocean model.
260Second order finite difference schemes lead to local discrete operators that
261depend at the very most on one neighbouring point.
262The only non-local computations concern the vertical physics
263(implicit diffusion, turbulent closure scheme, ...) (delocalization over the whole water column),
264and the solving of the elliptic equation associated with the surface pressure gradient computation
265(delocalization over the whole horizontal domain).
266Therefore, a pencil strategy is used for the data sub-structuration:
267the 3D initial domain is laid out on local processor memories following a 2D horizontal topological splitting.
268Each sub-domain computes its own surface and bottom boundary conditions and
269has a side wall overlapping interface which defines the lateral boundary conditions for
270computations in the inner sub-domain.
271The overlapping area consists of the two rows at each edge of the sub-domain.
272After a computation, a communication phase starts:
each processor sends to its neighbouring processors the updated values of the points corresponding to
the interior overlapping area (\ie the innermost of the two overlapping rows).
275The communication is done through the Message Passing Interface (MPI).
276The data exchanges between processors are required at the very place where
277lateral domain boundary conditions are set in the mono-domain computation:
278the \rou{lbc\_lnk} routine (found in \mdl{lbclnk} module) which manages such conditions is interfaced with
279routines found in \mdl{lib\_mpp} module when running on an MPP computer (\ie when \key{mpp\_mpi} defined).
280It has to be pointed out that when using the MPP version of the model,
281the east-west cyclic boundary condition is done implicitly,
282whilst the south-symmetric boundary condition option is not available.
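
A toy sketch of this exchange (illustrative only, not the \rou{lbc\_lnk} or \mdl{lib\_mpp} code; the field, the split and the halo width are arbitrary) for two subdomains split along the i-axis:
\begin{verbatim}
# Toy sketch (not the lbc_lnk / lib_mpp code): halo exchange between two
# subdomains split along the i-axis, with a one-point halo (jpreci = 1).
import numpy as np

jpreci = 1
glob  = np.arange(48, dtype=float).reshape(6, 8)  # toy global field (jpjglo = 6, jpiglo = 8)
left  = glob[:, :5].copy()                        # global columns 0..4 (last column is its halo)
right = glob[:, 3:].copy()                        # global columns 3..7 (first column is its halo)
left[:, -1] = 0.0                                 # pretend the halos are not up to date
right[:, 0] = 0.0

# "communication phase": each side sends the innermost of the two overlapping
# columns (an interior column), which the neighbour writes into its halo column
left[:, -1] = right[:, jpreci]                    # received from the eastern neighbour
right[:, 0] = left[:, -1 - jpreci]                # received from the western neighbour

assert np.allclose(left[:, -1], glob[:, 4]) and np.allclose(right[:, 0], glob[:, 3])
\end{verbatim}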
283
284%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
285\begin{figure}[!t]
286  \begin{center}
287    \includegraphics[width=\textwidth]{Fig_mpp}
288    \caption{
289      \protect\label{fig:mpp}
290      Positioning of a sub-domain when massively parallel processing is used.
291    }
292  \end{center}
293\end{figure}
294%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
295
296In the standard version of \NEMO, the splitting is regular and arithmetic.
297The i-axis is divided by \jp{jpni} and
298the j-axis by \jp{jpnj} for a number of processors \jp{jpnij} most often equal to $jpni \times jpnj$
299(parameters set in  \ngn{nammpp} namelist).
Each processor is independent: without message passing or synchronisation,
each program runs alone and accesses only its own local memory.
302For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil) that
303are named \jp{jpi}, \jp{jpj}, \jp{jpk}.
304These dimensions include the internal domain and the overlapping rows.
305The number of rows to exchange (known as the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}).
306The whole domain dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}.
307The relationship between the whole domain and a sub-domain is:
\[
  \begin{array}{l}
    jpi = ( jpiglo - 2 \times jpreci + (jpni-1) ) / jpni + 2 \times jpreci \\
    jpj = ( jpjglo - 2 \times jprecj + (jpnj-1) ) / jpnj + 2 \times jprecj
  \end{array}
\]
312where \jp{jpni}, \jp{jpnj} are the number of processors following the i- and j-axis.
313
314One also defines variables nldi and nlei which correspond to the internal domain bounds,
315and the variables nimpp and njmpp which are the position of the (1,1) grid-point in the global domain.
316An element of $T_{l}$, a local array (subdomain) corresponds to an element of $T_{g}$,
317a global array (whole domain) by the relationship:
318\[
319  % \label{eq:lbc_nimpp}
320  T_{g} (i+nimpp-1,j+njmpp-1,k) = T_{l} (i,j,k),
321\]
322with  $1 \leq i \leq jpi$, $1  \leq j \leq jpj $ , and  $1  \leq k \leq jpk$.
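
A small sketch of these relations (illustrative only; the global size, decomposition and subdomain offsets are arbitrary example values):
\begin{verbatim}
# Illustrative sketch (not NEMO code): local subdomain size and the
# local-to-global index relation T_g(i+nimpp-1, j+njmpp-1, k) = T_l(i, j, k).
jpiglo, jpjglo = 182, 149      # toy global horizontal size
jpni, jpnj     = 4, 3          # number of processors along i and j
jpreci = jprecj = 1            # halo width

jpi = (jpiglo - 2 * jpreci + (jpni - 1)) // jpni + 2 * jpreci
jpj = (jpjglo - 2 * jprecj + (jpnj - 1)) // jpnj + 2 * jprecj
print(jpi, jpj)                # local dimensions, halos included

nimpp, njmpp = 91, 50          # example offset of this subdomain's (1,1) point in the global grid
i_local, j_local, k = 10, 5, 1
i_global, j_global = i_local + nimpp - 1, j_local + njmpp - 1
print(i_global, j_global, k)   # the point of T_g that corresponds to T_l(10, 5, 1)
\end{verbatim}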
323
Processors are numbered from 0 to $jpnij-1$; the number is saved in the variable nproc.
325In the standard version, a processor has no more than
326four neighbouring processors named nono (for north), noea (east), noso (south) and nowe (west) and
327two variables, nbondi and nbondj, indicate the relative position of the processor:
328\begin{itemize}
\item nbondi = -1: an east neighbour, no west processor,
\item nbondi = 0: an east neighbour, a west neighbour,
\item nbondi = 1: no east processor, a west neighbour,
\item nbondi = 2: no splitting following the i-axis.
333\end{itemize}
334During the simulation, processors exchange data with their neighbours.
If a neighbour exists, the processor receives variables from that processor into its overlapping row,
and sends the data from its internal domain corresponding to the overlapping row of the other processor.
337
338
339The \NEMO model computes equation terms with the help of mask arrays (0 on land points and 1 on sea points).
340It is easily readable and very efficient in the context of a computer with vectorial architecture.
341However, in the case of a scalar processor, computations over the land regions become more expensive in
342terms of CPU time.
343It is worse when we use a complex configuration with a realistic bathymetry like the global ocean where
344more than 50 \% of points are land points.
For this reason, a pre-processing tool can be used to choose the mpp domain decomposition with a maximum number of
land-only processors, which can then be eliminated (\autoref{fig:mppini2})
(for example, the mpp\_optimiz tools, available from the DRAKKAR web site).
348This optimisation is dependent on the specific bathymetry employed.
349The user then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with $jpnij < jpni \times jpnj$,
350leading to the elimination of $jpni \times jpnj - jpnij$ land processors.
351When those parameters are specified in \ngn{nammpp} namelist,
the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbondi, nbondj, nono, noea, ...) so that
353the land-only processors are not taken into account.
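
The following sketch (illustrative only, not the \rou{inimpp2} algorithm; the bathymetry mask and decomposition are toy values, and halos are ignored) counts the land-only subdomains that could be eliminated:
\begin{verbatim}
# Illustrative sketch (not the inimpp2 algorithm; halos and the exact splitting
# are ignored): counting land-only subdomains that could be eliminated.
import numpy as np

jpiglo, jpjglo, jpni, jpnj = 120, 90, 6, 5
ocean = np.ones((jpjglo, jpiglo), dtype=bool)
ocean[:, :40] = False                            # toy continent covering the western third

ni, nj = jpiglo // jpni, jpjglo // jpnj          # toy subdomain size (no halo)
land_only = 0
for pj in range(jpnj):
    for pi in range(jpni):
        block = ocean[pj * nj:(pj + 1) * nj, pi * ni:(pi + 1) * ni]
        if not block.any():                      # not a single ocean point in this subdomain
            land_only += 1

jpnij = jpni * jpnj - land_only                  # processors actually used (here 20 instead of 30)
print(land_only, jpnij)
\end{verbatim}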
354
355\gmcomment{Note that the inimpp2 routine is general so that the original inimpp
356routine should be suppressed from the code.}
357
358When land processors are eliminated,
359the value corresponding to these locations in the model output files is undefined.
360Note that this is a problem for the meshmask file which requires to be defined over the whole domain.
Therefore, the user should not eliminate land processors when creating a meshmask file
362(\ie when setting a non-zero value to \np{nn\_msh}).
363
364%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
365\begin{figure}[!ht]
366  \begin{center}
367    \includegraphics[width=\textwidth]{Fig_mppini2}
368    \caption {
369      \protect\label{fig:mppini2}
      Example of Atlantic domain defined for the CLIPPER project.
      The initial grid is composed of $773 \times 1236$ horizontal points.
      (a) the domain is split into $9 \times 20$ subdomains (jpni=9, jpnj=20).
      52 subdomains are land areas.
      (b) 52 subdomains are eliminated (white rectangles) and
      the resulting number of processors really used during the computation is jpnij=128.
376    }
377  \end{center}
378\end{figure}
379%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
380
381
382% ====================================================================
383% Unstructured open boundaries BDY
384% ====================================================================
385\section{Unstructured open boundary conditions (BDY)}
386\label{sec:LBC_bdy}
387
388%-----------------------------------------nambdy--------------------------------------------
389
390\nlst{nambdy} 
391%-----------------------------------------------------------------------------------------------
392%-----------------------------------------nambdy_dta--------------------------------------------
393
394\nlst{nambdy_dta} 
395%-----------------------------------------------------------------------------------------------
396
Options are defined through the \ngn{nambdy} and \ngn{nambdy\_dta} namelist variables.
The BDY module is the core implementation of open boundary conditions for regional configurations, applied to
temperature, salinity, barotropic and baroclinic velocities, as well as ice concentration and ice and snow thicknesses.
400
401The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many features and
402a similar coding structure \citep{chanut_rpt05}.
403The specification of the location of the open boundary is completely flexible and
404allows for example the open boundary to follow an isobath or other irregular contour.
405Boundary data files used with versions of NEMO prior to Version 3.4 may need to be re-ordered to work with this version.
406See the section on the Input Boundary Data Files for details.
407
408%----------------------------------------------
409\subsection{Namelists}
410\label{subsec:BDY_namelist}
411
412The BDY module is activated by setting \np{ln\_bdy}\forcode{ = .true.} .
413It is possible to define more than one boundary ``set'' and apply different boundary conditions to each set.
414The number of boundary sets is defined by \np{nb\_bdy}.
415Each boundary set may be defined as a set of straight line segments in a namelist
416(\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file (\np{ln\_coords\_file}\forcode{ = .true.}).
417If the set is defined in a namelist, then the namelists \ngn{nambdy\_index} must be included separately, one for each set.
418If the set is defined by a file, then a ``\ifile{coordinates.bdy}'' file must be provided.
The \ifile{coordinates.bdy} file is analogous to the usual NEMO ``\ifile{coordinates}'' file.
420In the example above, there are two boundary sets, the first of which is defined via a file and
421the second is defined in a namelist.
422For more details of the definition of the boundary geometry see section \autoref{subsec:BDY_geometry}.
423
For each boundary set a boundary condition has to be chosen for the barotropic solution
(``u2d'': sea-surface height and barotropic velocities), for the baroclinic velocities (``u3d''),
for the active tracers \footnote{The BDY module does not deal with passive tracers in this version} (``tra''), and for sea-ice (``ice'').
For each set of variables there is a choice of algorithm and a choice for the data;
for example, for the active tracers the algorithm is set by \np{cn\_tra} and the choice of data is set by \np{nn\_tra\_dta}.\\
429
430The choice of algorithm is currently as follows:
431
432\begin{description}
433\item[\forcode{'none'}:] No boundary condition applied.
  So the solution will ``see'' the land points around the edge of the domain.
435\item[\forcode{'specified'}:] Specified boundary condition applied (only available for baroclinic velocity and tracer variables).
\item[\forcode{'neumann'}:] Values at the boundary are duplicated (no gradient). Only available for baroclinic velocity and tracer variables.
437\item[\forcode{'frs'}:] Flow Relaxation Scheme (FRS) available for all variables.
438\item[\forcode{'Orlanski'}:] Orlanski radiation scheme (fully oblique) for barotropic, baroclinic and tracer variables.
\item[\forcode{'Orlanski_npo'}:] Orlanski radiation scheme with the NPO (normal propagation of oblique radiation) approximation, for barotropic, baroclinic and tracer variables.
440\item[\forcode{'flather'}:] Flather radiation scheme for the barotropic variables only.
441\end{description}
442
443The main choice for the boundary data is to use initial conditions as boundary data
444(\np{nn\_tra\_dta}\forcode{ = 0}) or to use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}).
In case the 3D velocity data contain the total velocity (\ie the sum of the baroclinic and barotropic velocities),
the bdy code can derive the baroclinic and barotropic velocities by setting \np{ln\_full\_vel}\forcode{ = .true.}.
447For the barotropic solution there is also the option to use tidal harmonic forcing either by
448itself (\np{nn\_dyn2d\_dta}\forcode{ = 2}) or in addition to other external data (\np{nn\_dyn2d\_dta}\forcode{ = 3}).\\
Sea-ice salinity, temperature and age data at the boundary are constant and defined respectively by \np{rn\_ice\_sal}, \np{rn\_ice\_tem} and \np{rn\_ice\_age}.
450
451If external boundary data is required then the \ngn{nambdy\_dta} namelist must be defined.
452One \ngn{nambdy\_dta} namelist is required for each boundary set in the order in which
453the boundary sets are defined in nambdy.
In the example given, two boundary sets have been defined. The first one reads data from files listed in the \ngn{nambdy\_dta} namelist shown above,
and the second one uses data from the initial condition (no namelist block needed).
456The boundary data is read in using the fldread module,
457so the \ngn{nambdy\_dta} namelist is in the format required for fldread.
458For each variable required, the filename, the frequency of the files and
459the frequency of the data in the files is given.
It is also specified whether or not time-interpolation is required and whether the data are climatological (time-cyclic).\\
461
462There is currently an option to vertically interpolate the open boundary data onto the native grid at run-time.
463If \np{nn\_bdy\_jpk} $< -1$, it is assumed that the lateral boundary data are already on the native grid.
464However, if \np{nn\_bdy\_jpk} is set to the number of vertical levels present in the boundary data,
465a bilinear interpolation onto the native grid will be triggered at runtime.
466For this to be successful the additional variables: $gdept$, $gdepu$, $gdepv$, $e3t$, $e3u$ and $e3v$, are required to be present in the lateral boundary files.
467These correspond to the depths and scale factors of the input data,
468the latter used to make any adjustment to the velocity fields due to differences in the total water depths between the two vertical grids.\\
469
470In the example namelists given, two boundary sets are defined.
471The first set is defined via a file and applies FRS conditions to temperature and salinity and
  Flather conditions to the barotropic variables. No condition is specified for the baroclinic velocity and sea-ice.
473External data is provided in daily files (from a large-scale model).
474Tidal harmonic forcing is also used.
475The second set is defined in a namelist.
FRS conditions are applied to temperature and salinity, and climatological data are read from initial condition files.
477
478%----------------------------------------------
479\subsection{Flow relaxation scheme}
480\label{subsec:BDY_FRS_scheme}
481
482The Flow Relaxation Scheme (FRS) \citep{davies_QJRMS76,engedahl_T95},
483applies a simple relaxation of the model fields to externally-specified values over
484a zone next to the edge of the model domain.
485Given a model prognostic variable $\Phi$
486\[
487  % \label{eq:bdy_frs1}
488  \Phi(d) = \alpha(d)\Phi_{e}(d) + (1-\alpha(d))\Phi_{m}(d)\;\;\;\;\; d=1,N
489\]
490where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified external field,
491$d$ gives the discrete distance from the model boundary and
492$\alpha$ is a parameter that varies from $1$ at $d=1$ to a small value at $d=N$.
493It can be shown that this scheme is equivalent to adding a relaxation term to
494the prognostic equation for $\Phi$ of the form:
495\[
496  % \label{eq:bdy_frs2}
497  -\frac{1}{\tau}\left(\Phi - \Phi_{e}\right)
498\]
499where the relaxation time scale $\tau$ is given by a function of $\alpha$ and the model time step $\Delta t$:
\[
  % \label{eq:bdy_frs3}
  \tau = \frac{1-\alpha}{\alpha} \, \rdt
\]
504Thus the model solution is completely prescribed by the external conditions at the edge of the model domain and
505is relaxed towards the external conditions over the rest of the FRS zone.
506The application of a relaxation zone helps to prevent spurious reflection of
507outgoing signals from the model boundary.
508
The function $\alpha$ is specified as a $\tanh$ function:
510\[
511  % \label{eq:bdy_frs4}
512  \alpha(d) = 1 - \tanh\left(\frac{d-1}{2}\right),       \quad d=1,N
513\]
514The width of the FRS zone is specified in the namelist as \np{nn\_rimwidth}.
515This is typically set to a value between 8 and 10.
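
A minimal sketch of the FRS weights and of the equivalent relaxation time scale (illustrative only, not NEMO code; the rim width, external field and time step are arbitrary values):
\begin{verbatim}
# Illustrative sketch (not NEMO code): FRS weights alpha(d) over the rim and the
# equivalent relaxation time scale, for an arbitrary time step and external field.
import numpy as np

nn_rimwidth = 9
d = np.arange(1, nn_rimwidth + 1)               # discrete distance from the boundary
alpha = 1.0 - np.tanh(0.5 * (d - 1))            # 1 at the boundary, nearly 0 at the inner edge

phi_model    = np.full(nn_rimwidth, 10.0)       # model solution in the rim (toy values)
phi_external = np.full(nn_rimwidth, 12.0)       # externally specified values
phi_relaxed  = alpha * phi_external + (1.0 - alpha) * phi_model

rdt = 5400.0                                    # model time step [s]
tau = (1.0 - alpha[1:]) / alpha[1:] * rdt       # equivalent time scale (excluding d = 1 where alpha = 1)
print(phi_relaxed)
print(tau)
\end{verbatim}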
516
517%----------------------------------------------
518\subsection{Flather radiation scheme}
519\label{subsec:BDY_flather_scheme}
520
521The \citet{flather_JPO94} scheme is a radiation condition on the normal,
522depth-mean transport across the open boundary.
523It takes the form
524\begin{equation}  \label{eq:bdy_fla1}
525U = U_{e} + \frac{c}{h}\left(\eta - \eta_{e}\right),
526\end{equation}
527where $U$ is the depth-mean velocity normal to the boundary and $\eta$ is the sea surface height,
528both from the model.
529The subscript $e$ indicates the same fields from external sources.
530The speed of external gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the water column.
531The depth-mean normal velocity along the edge of the model domain is set equal to
532the external depth-mean normal velocity,
533plus a correction term that allows gravity waves generated internally to exit the model boundary.
534Note that the sea-surface height gradient in \autoref{eq:bdy_fla1} is a spatial gradient across the model boundary,
535so that $\eta_{e}$ is defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the $T$ points with $nbr=2$.
536$U$ and $U_{e}$ are defined on the $U$ or $V$ points with $nbr=1$, \ie between the two $T$ grid points.
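
A one-point sketch of the Flather condition (illustrative only; depth, velocities and sea surface heights are arbitrary values):
\begin{verbatim}
# One-point sketch (not NEMO code) of the Flather condition: all values are arbitrary.
import numpy as np

g     = 9.81
h     = 50.0        # water-column depth at the boundary [m]
eta_e = 0.30        # external sea surface height (T-point with nbr = 1) [m]
eta   = 0.35        # model sea surface height (T-point with nbr = 2) [m]
U_e   = 0.05        # external depth-mean normal velocity [m/s]

c = np.sqrt(g * h)                  # external gravity-wave speed
U = U_e + c / h * (eta - eta_e)     # depth-mean normal velocity prescribed at the boundary
print(U)
\end{verbatim}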
537
538%----------------------------------------------
539\subsection{Orlanski radiation scheme}
540\label{subsec:BDY_orlanski_scheme}
541
The Orlanski scheme is based on the algorithm described by \citet{marchesiello.mcwilliams.ea_OM01}, hereafter MMS.
543
544The adaptive Orlanski condition solves a wave plus relaxation equation at the boundary:
545\begin{equation}
546\frac{\partial\phi}{\partial t} + c_x \frac{\partial\phi}{\partial x} + c_y \frac{\partial\phi}{\partial y} =
547                                                -\frac{1}{\tau}(\phi - \phi^{ext})
548\label{eq:wave_continuous}
549\end{equation}
550
551where $\phi$ is the model field, $x$ and $y$ refer to the normal and tangential directions to the boundary respectively, and the phase
552velocities are diagnosed from the model fields as:
553
554\begin{equation} \label{eq:cx}
555c_x = -\frac{\partial\phi}{\partial t}\frac{\partial\phi / \partial x}{(\partial\phi /\partial x)^2 + (\partial\phi /\partial y)^2}
556\end{equation}
557\begin{equation}
558\label{eq:cy}
559c_y = -\frac{\partial\phi}{\partial t}\frac{\partial\phi / \partial y}{(\partial\phi /\partial x)^2 + (\partial\phi /\partial y)^2}
560\end{equation}
561
562(As noted by MMS, this is a circular diagnosis of the phase speeds which only makes sense on a discrete grid).
563Equation (\autoref{eq:wave_continuous}) is defined adaptively depending on the sign of the phase velocity normal to the boundary $c_x$.
564For $c_x$ outward, we have
565
566\begin{equation}
567\tau = \tau_{out}
568\end{equation}
569
570For $c_x$ inward, the radiation equation is not applied:
571
572\begin{equation}
573\tau = \tau_{in}\,\,\,;\,\,\, c_x = c_y = 0
574\label{eq:tau_in}
575\end{equation}
576
Generally the relaxation time scale at inward propagation points (\np{rn\_time\_dmp}) is set much shorter than the time scale at outward propagation
points (\np{rn\_time\_dmp\_out}) so that the solution is constrained more strongly by the external data at inward propagation points.
See \autoref{subsec:BDY_relaxation} for details on the spatial shape of the scaling.\\
580The ``normal propagation of oblique radiation'' or NPO approximation (called \forcode{'orlanski_npo'}) involves assuming
581that $c_y$ is zero in equation (\autoref{eq:wave_continuous}), but including
582this term in the denominator of equation (\autoref{eq:cx}). Both versions of the scheme are options in BDY. Equations
583(\autoref{eq:wave_continuous}) - (\autoref{eq:tau_in}) correspond to equations (13) - (15) and (2) - (3) in MMS.\\
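
A one-point sketch of this adaptive logic (illustrative only, not the BDY code; the gradients, time scales and the sign convention taken for outward propagation of $c_x$ are toy assumptions):
\begin{verbatim}
# One-point sketch (not the BDY code) of the adaptive Orlanski logic:
# gradients, time scales and the sign convention for "outward" are toy assumptions.
def orlanski_point(phi, phi_old, phi_ext, grad_x, grad_y, dt, tau_in, tau_out):
    """grad_x, grad_y: normal and tangential gradients of phi near the boundary."""
    dphi_dt = (phi - phi_old) / dt
    denom = grad_x ** 2 + grad_y ** 2
    c_x = -dphi_dt * grad_x / denom          # eq. (cx)
    c_y = -dphi_dt * grad_y / denom          # eq. (cy); set to zero in the NPO variant
    if c_x > 0.0:                            # outward propagation: radiate + weak relaxation
        return phi - dt * (c_x * grad_x + c_y * grad_y) - dt / tau_out * (phi - phi_ext)
    else:                                    # inward propagation: c_x = c_y = 0, strong relaxation
        return phi - dt / tau_in * (phi - phi_ext)

print(orlanski_point(phi=10.2, phi_old=10.0, phi_ext=10.1,
                     grad_x=0.02, grad_y=0.01, dt=600.0,
                     tau_in=86400.0, tau_out=30 * 86400.0))
\end{verbatim}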
584
585%----------------------------------------------
586\subsection{Relaxation at the boundary}
587\label{subsec:BDY_relaxation}
588
In addition to the specific boundary conditions specified by \np{cn\_tra} and \np{cn\_dyn3d}, relaxation of the baroclinic velocities and tracer variables is available.
It is controlled by the namelist parameters \np{ln\_tra\_dmp} and \np{ln\_dyn3d\_dmp} for each boundary set.
591
The relaxation time scale values (\np{rn\_time\_dmp} and \np{rn\_time\_dmp\_out}, $\tau$) are defined at the boundary itself.
The corresponding coefficient ($\alpha$) is weighted by the distance ($d$) from the boundary over \np{nn\_rimwidth} cells ($N$):
594
595\[
596  \alpha = \frac{1}{\tau}(\frac{N+1-d}{N})^2,       \quad d=1,N
597\]
598
599The same scaling is applied in the Orlanski damping.
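
A small sketch of this scaling (illustrative only; the rim width and time scale are arbitrary, and \np{rn\_time\_dmp} is assumed here to be given in days):
\begin{verbatim}
# Illustrative sketch (not NEMO code) of the distance-weighted damping coefficient;
# rn_time_dmp is assumed here to be given in days.
import numpy as np

nn_rimwidth = 9
rn_time_dmp = 1.0                                  # boundary relaxation time scale [days] (toy value)
tau = rn_time_dmp * 86400.0                        # in seconds
d = np.arange(1, nn_rimwidth + 1)
alpha = (1.0 / tau) * ((nn_rimwidth + 1 - d) / nn_rimwidth) ** 2
print(alpha)                                       # strongest damping at d = 1, weakest at d = N
\end{verbatim}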
600
601%----------------------------------------------
602\subsection{Boundary geometry}
603\label{subsec:BDY_geometry}
604
605Each open boundary set is defined as a list of points.
606The information is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$ structure.
607The $nbi$ and $nbj$ arrays define the local $(i,j)$ indices of each point in the boundary zone and
608the $nbr$ array defines the discrete distance from the boundary with $nbr=1$ meaning that
609the point is next to the edge of the model domain and $nbr>1$ showing that
610the point is increasingly further away from the edge of the model domain.
611A set of $nbi$, $nbj$, and $nbr$ arrays is defined for each of the $T$, $U$ and $V$ grids.
612Figure \autoref{fig:LBC_bdy_geom} shows an example of an irregular boundary.
613
614The boundary geometry for each set may be defined in a namelist nambdy\_index or
615by reading in a ``\ifile{coordinates.bdy}'' file.
616The nambdy\_index namelist defines a series of straight-line segments for north, east, south and west boundaries.
One nambdy\_index namelist block is needed for each boundary condition defined by indices.
618For the northern boundary, \np{nbdysegn} gives the number of segments,
619\np{jpjnob} gives the $j$ index for each segment and \np{jpindt} and
620\np{jpinft} give the start and end $i$ indices for each segment with similar for the other boundaries.
621These segments define a list of $T$ grid points along the outermost row of the boundary ($nbr\,=\, 1$).
622The code deduces the $U$ and $V$ points and also the points for $nbr\,>\, 1$ if \np{nn\_rimwidth}\forcode{ > 1}.
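
A sketch of how such a segment expands into the $nbi$, $nbj$ and $nbr$ lists (illustrative only, not the BDY code; the segment indices are arbitrary, and the inward direction is simply taken as decreasing $j$ for this northern boundary):
\begin{verbatim}
# Illustrative sketch (not the BDY code): expanding one straight northern-boundary
# segment into nbi, nbj, nbr lists; indices are arbitrary toy values.
nn_rimwidth = 3
jpjnob, jpindt, jpinft = 98, 20, 60        # j row of the segment, first and last i indices

nbi, nbj, nbr = [], [], []
for r in range(1, nn_rimwidth + 1):        # nbr = 1 is the outermost row
    for i in range(jpindt, jpinft + 1):
        nbi.append(i)
        nbj.append(jpjnob - (r - 1))       # move inward as nbr increases
        nbr.append(r)

print(len(nbi), nbr[:3], nbr[-3:])         # 3 x 41 points, nbr running from 1 to 3
\end{verbatim}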
623
624The boundary geometry may also be defined from a ``\ifile{coordinates.bdy}'' file.
625Figure \autoref{fig:LBC_nc_header} gives an example of the header information from such a file.
626The file should contain the index arrays for each of the $T$, $U$ and $V$ grids.
627The arrays must be in order of increasing $nbr$.
628Note that the $nbi$, $nbj$ values in the file are global values and are converted to local values in the code.
629Typically this file will be used to generate external boundary data via interpolation and so
630will also contain the latitudes and longitudes of each point as shown.
631However, this is not necessary to run the model.
632
633For some choices of irregular boundary the model domain may contain areas of ocean which
634are not part of the computational domain.
635For example if an open boundary is defined along an isobath, say at the shelf break,
636then the areas of ocean outside of this boundary will need to be masked out.
637This can be done by reading a mask file defined as \np{cn\_mask\_file} in the nam\_bdy namelist.
638Only one mask file is used even if multiple boundary sets are defined.
639
640%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
641\begin{figure}[!t]
642  \begin{center}
643    \includegraphics[width=\textwidth]{Fig_LBC_bdy_geom}
644    \caption {
645      \protect\label{fig:LBC_bdy_geom}
646      Example of geometry of unstructured open boundary
647    }
648  \end{center}
649\end{figure}
650%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
651
652%----------------------------------------------
653\subsection{Input boundary data files}
654\label{subsec:BDY_data}
655
656The data files contain the data arrays in the order in which the points are defined in the $nbi$ and $nbj$ arrays.
657The data arrays are dimensioned on:
658a time dimension;
659$xb$ which is the index of the boundary data point in the horizontal;
660and $yb$ which is a degenerate dimension of 1 to enable the file to be read by the standard NEMO I/O routines.
661The 3D fields also have a depth dimension.
662
663From Version 3.4 there are new restrictions on the order in which the boundary points are defined
664(and therefore restrictions on the order of the data in the file).
665In particular:
666
667\begin{enumerate}
668\item The data points must be in order of increasing $nbr$,
  \ie all the $nbr=1$ points, then all the $nbr=2$ points, etc.
670\item All the data for a particular boundary set must be in the same order.
671  (Prior to 3.4 it was possible to define barotropic data in a different order to
672  the data for tracers and baroclinic velocities).
673\end{enumerate}
674
675These restrictions mean that data files used with versions of the
676model prior to Version 3.4 may not work with Version 3.4 onwards.
677A \fortran utility {\itshape bdy\_reorder} exists in the TOOLS directory which
678will re-order the data in old BDY data files.
679
680%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
681\begin{figure}[!t]
682  \begin{center}
683    \includegraphics[width=\textwidth]{Fig_LBC_nc_header}
684    \caption {
685      \protect\label{fig:LBC_nc_header}
686      Example of the header for a \protect\ifile{coordinates.bdy} file
687    }
688  \end{center}
689\end{figure}
690%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
691
692%----------------------------------------------
693\subsection{Volume correction}
694\label{subsec:BDY_vol_corr}
695
696There is an option to force the total volume in the regional model to be constant.
697This is controlled  by the \np{ln\_vol} parameter in the namelist.
698A value of \np{ln\_vol}\forcode{ = .false.} indicates that this option is not used.
699Two options to control the volume are available (\np{nn\_volctl}).
700If \np{nn\_volctl}\forcode{ = 0} then a correction is applied to the normal barotropic velocities around the boundary at
701each timestep to ensure that the integrated volume flow through the boundary is zero.
702If \np{nn\_volctl}\forcode{ = 1} then the calculation of the volume change on
703the timestep includes the change due to the freshwater flux across the surface and
704the correction velocity corrects for this as well.
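
A sketch of the \np{nn\_volctl}\forcode{ = 0} idea (illustrative only, not the BDY code; the boundary velocities and face areas are arbitrary values):
\begin{verbatim}
# Sketch of the nn_volctl = 0 idea (not the BDY code): a uniform correction to the
# normal barotropic velocities so that the net volume flux through the boundary is zero.
import numpy as np

u_normal  = np.array([0.05, 0.10, -0.02, 0.08])      # outward normal velocities at boundary faces [m/s]
face_area = np.array([1.0e6, 1.2e6, 0.9e6, 1.1e6])   # area of each boundary face (e.g. e2u * hu) [m2]

net_flux = np.sum(u_normal * face_area)              # net volume flux through the boundary [m3/s]
u_corr   = -net_flux / np.sum(face_area)             # uniform correction velocity [m/s]
u_normal = u_normal + u_corr
print(np.sum(u_normal * face_area))                  # ~0 after correction
\end{verbatim}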
705
706If more than one boundary set is used then volume correction is
707applied to all boundaries at once.
708
709%----------------------------------------------
710\subsection{Tidal harmonic forcing}
711\label{subsec:BDY_tides}
712
713%-----------------------------------------nambdy_tide--------------------------------------------
714
715\nlst{nambdy_tide} 
716%-----------------------------------------------------------------------------------------------
717
718Tidal forcing at open boundaries requires the activation of surface
719tides (i.e., in \ngn{nam\_tide}, \np{ln\_tide} needs to be set to
720\forcode{.true.} and the required constituents need to be activated by
721including their names in the \np{cname} array; see
722\autoref{sec:SBC_tide}). Specific options related to the reading in of
723the complex harmonic amplitudes of elevation (SSH) and barotropic
724velocity (u,v) at open boundaries are defined through the
725\ngn{nambdy\_tide} namelist parameters.\\
726
727The tidal harmonic data at open boundaries can be specified in two
728different ways, either on a two-dimensional grid covering the entire
729model domain or along open boundary segments; these two variants can
730be selected by setting \np{ln\_bdytide\_2ddta } to \forcode{.true.} or
731\forcode{.false.}, respectively. In either case, the real and
732imaginary parts of SSH and the two barotropic velocity components for
733each activated tidal constituent \textit{tcname} have to be provided
734separately: when two-dimensional data is used, variables
735\textit{tcname\_z1} and \textit{tcname\_z2} for real and imaginary SSH,
736respectively, are expected in input file \np{filtide} with suffix
737\ifile{\_grid\_T}, variables \textit{tcname\_u1} and
738\textit{tcname\_u2} for real and imaginary u, respectively, are
739expected in input file \np{filtide} with suffix \ifile{\_grid\_U}, and
740\textit{tcname\_v1} and \textit{tcname\_v2} for real and imaginary v,
741respectively, are expected in input file \np{filtide} with suffix
742\ifile{\_grid\_V}; when data along open boundary segments is used,
743variables \textit{z1} and \textit{z2} (real and imaginary part of SSH)
744are expected to be available from file \np{filtide} with suffix
745\ifile{tcname\_grid\_T}, variables \textit{u1} and \textit{u2} (real
746and imaginary part of u) are expected to be available from file
747\np{filtide} with suffix \ifile{tcname\_grid\_U}, and variables
748\textit{v1} and \textit{v2} (real and imaginary part of v) are
749expected to be available from file \np{filtide} with suffix
750\ifile{tcname\_grid\_V}. If \np{ln\_bdytide\_conj} is set to
751\forcode{.true.}, the data is expected to be in complex conjugate
752form.
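
A small sketch of these naming conventions (illustrative only; the \np{filtide} root, the absence of file extensions and the case of the constituent names are arbitrary here):
\begin{verbatim}
# Illustrative sketch (not NEMO code) of the naming conventions described above;
# the filtide root, file extensions and the case of constituent names are arbitrary.
filtide = "bdytide_"                  # toy value of the filtide root (nambdy_tide)
constituents = ["M2", "S2"]           # activated constituents (cname in nam_tide)

for tc in constituents:
    # ln_bdytide_2ddta = .true. : one file per grid, variables carry the constituent name
    print(filtide + "grid_T", ":", tc + "_z1", tc + "_z2")
    print(filtide + "grid_U", ":", tc + "_u1", tc + "_u2")
    print(filtide + "grid_V", ":", tc + "_v1", tc + "_v2")
    # ln_bdytide_2ddta = .false.: one file per constituent and grid, generic variable names
    print(filtide + tc + "_grid_T", ":", "z1", "z2")
    print(filtide + tc + "_grid_U", ":", "u1", "u2")
    print(filtide + tc + "_grid_V", ":", "v1", "v2")
\end{verbatim}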
753
754Note that the barotropic velocity components are assumed to be defined
755on the native model grid and should be rotated accordingly when they
756are converted from their definition on a different source grid. To do
757so, the u, v amplitudes and phases can be converted into tidal
758ellipses, the grid rotation added to the ellipse inclination, and then
759converted back (care should be taken regarding conventions of the
760direction of rotation). %, e.g. anticlockwise or clockwise.
761
762\biblio
763
764\pindex
765
766\end{document}