% ================================================================
% Chapter — Lateral Boundary Condition (LBC)
% ================================================================
\chapter{Lateral Boundary Condition (LBC) }
\label{LBC}
\minitoc

\newpage
$\ $\newline    % force a new line


%gm% add here introduction to this chapter

% ================================================================
% Boundary Condition at the Coast
% ================================================================
\section{Boundary Condition at the Coast (\np{rn\_shlat})}
\label{LBC_coast}
%--------------------------------------------nam_lbc-------------------------------------------------------
\namdisplay{namlbc} 
%--------------------------------------------------------------------------------------------------------------

%The lateral ocean boundary conditions contiguous to coastlines are Neumann conditions for heat and salt (no flux across boundaries) and Dirichlet conditions for momentum (ranging from free-slip to "strong" no-slip). They are handled automatically by the mask system (see \S\ref{DOM_msk}).

%OPA allows land and topography grid points in the computational domain due to the presence of continents or islands, and includes the use of a full or partial step representation of bottom topography. The computation is performed over the whole domain, i.e. we do not try to restrict the computation to ocean-only points. This choice has two motivations. Firstly, working on ocean only grid points overloads the code and harms the code readability. Secondly, and more importantly, it drastically reduces the vector portion of the computation, leading to a dramatic increase of CPU time requirement on vector computers.  The current section describes how the masking affects the computation of the various terms of the equations with respect to the boundary condition at solid walls. The process of defining which areas are to be masked is described in \S\ref{DOM_msk}.

Options are defined through the \ngn{namlbc} namelist variables.
The discrete representation of a domain with complex boundaries (coastlines and
bottom topography) leads to arrays that include large portions where a computation
is not required as the model variables remain at zero. Nevertheless, vectorial
supercomputers are far more efficient when computing over a whole array, and the
readability of a code is greatly improved when boundary conditions are applied in
an automatic way rather than by a specific computation before or after each
computational loop. An efficient way to work over the whole domain while specifying
the boundary conditions is to use multiplication by mask arrays in the computation.
A mask array is a matrix whose elements are $1$ in the ocean domain and $0$ 
elsewhere. A simple multiplication of a variable by its own mask ensures that it will
remain zero over land areas. Since most of the boundary conditions consist of a
zero flux across the solid boundaries, they can be simply applied by multiplying
variables by the correct mask arrays, $i.e.$ the mask array of the grid point where
the flux is evaluated. For example, the heat flux in the \textbf{i}-direction is evaluated
at $u$-points. Evaluating this quantity as

\begin{equation} \label{Eq_lbc_aaaa}
\frac{A^{lT}}{e_1}\frac{\partial T}{\partial i}\equiv \frac{A_u^{lT}}{e_{1u}} \; \delta_{i+1/2} \left[ T \right]\;\; mask_u
\end{equation}
(where mask$_{u}$ is the mask array at a $u$-point) ensures that the heat flux is
zero inside land and at the boundaries, since mask$_{u}$ is zero at solid boundaries
which in this case are defined at $u$-points (normal velocity $u$ remains zero at
the coast) (Fig.~\ref{Fig_LBC_uv}).

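For illustration, the short Python/NumPy sketch below (illustrative only, not
the NEMO Fortran code; the toy mask, field values and variable names are
assumptions) shows how multiplying a flux by the mask of the point where it is
evaluated makes it vanish over land and across the coastline without any
explicit test:
\begin{verbatim}
import numpy as np

# Toy 1D row of T-points: 1 = ocean, 0 = land (assumed layout)
tmask = np.array([0, 1, 1, 1, 0], dtype=float)
# u-points lie between T-points; a u-point is ocean only if both
# neighbouring T-points are ocean (normal velocity is zero at the coast)
umask = tmask[:-1] * tmask[1:]

T   = np.array([0.0, 10.0, 12.0, 11.0, 0.0])   # masked temperature field
e1u = np.ones(4)                               # toy scale factors
A_u = 1.0e3                                    # toy eddy diffusivity

# Heat flux at u-points (cf. Eq_lbc_aaaa): the mask sets it to zero
# over land and at the solid boundary
flux = A_u / e1u * (T[1:] - T[:-1]) * umask
print(flux)      # first and last entries are zero: no flux at the coast
\end{verbatim}
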
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=0.90\textwidth]{./TexFiles/Figures/Fig_LBC_uv.pdf}
\caption{  \label{Fig_LBC_uv}
Lateral boundary (thick line) at T-level. The velocity normal to the boundary is set to zero.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

For momentum the situation is a bit more complex as two boundary conditions
must be provided along the coast (one each for the normal and tangential velocities).
The boundary of the ocean in the C-grid is defined by the velocity-faces.
For example, at a given $T$-level, the lateral boundary (a coastline or an intersection
with the bottom topography) is made of segments joining $f$-points, and normal
velocity points are located between two $f-$points (Fig.~\ref{Fig_LBC_uv}).
The boundary condition on the normal velocity (no flux through solid boundaries)
can thus be easily implemented using the mask system. The boundary condition
on the tangential velocity requires a more specific treatment. This boundary
condition influences the relative vorticity and momentum diffusive trends, and is
required in order to compute the vorticity at the coast. Four different types of
lateral boundary condition are available, controlled by the value of the \np{rn\_shlat} 
namelist parameter. (The value of the mask$_{f}$ array along the coastline is set
equal to this parameter.) These are:

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!p] \begin{center}
\includegraphics[width=0.90\textwidth]{./TexFiles/Figures/Fig_LBC_shlat.pdf}
\caption{     \label{Fig_LBC_shlat} 
lateral boundary condition (a) free-slip ($rn\_shlat=0$) ; (b) no-slip ($rn\_shlat=2$)
; (c) "partial" free-slip ($0<rn\_shlat<2$) and (d) "strong" no-slip ($2<rn\_shlat$).
The implied "ghost" velocity inside the land area is displayed in grey. }
\end{center}    \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

\begin{description}

\item[free-slip boundary condition (\np{rn\_shlat}=0): ]  the tangential velocity at the
coastline is equal to the offshore velocity, $i.e.$ the normal derivative of the
tangential velocity is zero at the coast, and so is the vorticity: the mask$_{f}$ array is set
to zero inside the land and just at the coast (Fig.~\ref{Fig_LBC_shlat}-a).

\item[no-slip boundary condition (\np{rn\_shlat}=2): ] the tangential velocity vanishes
at the coastline. Assuming that the tangential velocity decreases linearly from
the closest ocean velocity grid point to the coastline, the normal derivative is
evaluated as if the velocities at the closest land velocity gridpoint and the closest
ocean velocity gridpoint were of the same magnitude but in the opposite direction
(Fig.~\ref{Fig_LBC_shlat}-b). Therefore, the vorticity along the coastlines is given by:

\begin{equation*}
\zeta \equiv 2 \left(\delta_{i+1/2} \left[e_{2v} v \right] - \delta_{j+1/2} \left[e_{1u} u \right] \right) / \left(e_{1f} e_{2f} \right) \ ,
\end{equation*}
where $u$ and $v$ are masked fields. Setting the mask$_{f}$ array to $2$ along
the coastline provides a vorticity field computed with the no-slip boundary condition,
simply by multiplying it by the mask$_{f}$:
\begin{equation} \label{Eq_lbc_bbbb}
\zeta \equiv \frac{1}{e_{1f} {\kern 1pt} e_{2f}} \left( \delta_{i+1/2} \left[ e_{2v} \, v \right] - \delta_{j+1/2} \left[ e_{1u} \, u \right] \right) \; \mbox{mask}_f
\end{equation}

\item["partial" free-slip boundary condition (0$<$\np{rn\_shlat}$<$2): ] the tangential
velocity at the coastline is smaller than the offshore velocity, $i.e.$ there is a lateral
friction but not strong enough to make the tangential velocity at the coast vanish
(Fig.~\ref{Fig_LBC_shlat}-c). This can be selected by providing a value of mask$_{f}$ 
strictly between $0$ and $2$.

\item["strong" no-slip boundary condition (2$<$\np{rn\_shlat}): ] the viscous boundary
layer is assumed to be smaller than half the grid size (Fig.~\ref{Fig_LBC_shlat}-d).
The friction is thus larger than in the no-slip case.

\end{description}

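As a rough illustration of how the single \np{rn\_shlat} value selects the lateral
boundary condition, the sketch below (plain Python, a hypothetical one-point
example, not the NEMO implementation) sets the $f$-point mask to \texttt{rn\_shlat}
at a coastal point and obtains the corresponding vorticity by a simple
multiplication, as in \eqref{Eq_lbc_bbbb}:
\begin{verbatim}
# Minimal 1D sketch (not NEMO code): vorticity at a coastal f-point,
# keeping only the -d(e1u u)/dj term and setting scale factors to 1.
def coastal_vorticity(u_ocean, rn_shlat):
    u_land = 0.0          # u is a masked field: zero over land
    fmask  = rn_shlat     # value of the mask_f array along the coastline
    return -(u_land - u_ocean) * fmask

for shlat in (0.0, 0.5, 2.0, 4.0):  # free-, partial-, no-, "strong" no-slip
    print(shlat, coastal_vorticity(u_ocean=1.0, rn_shlat=shlat))
\end{verbatim}
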
Note that when the bottom topography is entirely represented by the $s$-coordinates
(pure $s$-coordinate), the lateral boundary condition on tangential velocity is of much
less importance as it is only applied next to the coast where the minimum water depth
can be quite shallow.

The alternative numerical implementation of the no-slip boundary conditions for an
arbitrary coastline of \citet{Shchepetkin1996} is also available through the
\key{noslip\_accurate} CPP key. It is based on a fourth order evaluation of the shear at the
coast which, in turn, allows a true second order scheme in the interior of the domain
($i.e.$ the numerical boundary scheme simulates the truncation error of the numerical
scheme used in the interior of the domain). \citet{Shchepetkin1996} found that such a
technique considerably improves the quality of the numerical solution. In \NEMO, such
spectacular improvements have not been found in the half-degree global ocean
(ORCA05), but significant reductions of numerically induced coastal upwellings were
found in an eddy resolving simulation of the Alboran Sea \citep{Olivier_PhD01}.
Nevertheless, since a no-slip boundary condition is not recommended in an eddy
permitting or resolving simulation \citep{Penduff_al_OS07}, the use of this option is also
not recommended.

In practice, the no-slip accurate option changes the way the curl is evaluated at the
coast (see \mdl{divcur} module), and requires the nature of each coastline grid point
(convex or concave corners, straight north-south or east-west coast) to be specified. 
This is performed in routine \rou{dom\_msk\_nsa} in the \mdl{domask} module.
% ================================================================
% Boundary Condition around the Model Domain
% ================================================================
\section{Model Domain Boundary Condition (\np{jperio})}
\label{LBC_jperio}

At the model domain boundaries several choices are offered: closed, cyclic east-west,
south symmetric across the equator, a north-fold, and combinations of closed and north-fold
or cyclic and north-fold conditions. The north-fold boundary condition is associated with the 3-pole ORCA mesh.

% -------------------------------------------------------------------------------------------------------------
%        Closed, cyclic, south symmetric (\np{jperio} = 0, 1 or 2)
% -------------------------------------------------------------------------------------------------------------
\subsection{Closed, cyclic, south symmetric (\np{jperio} = 0, 1 or 2)}
\label{LBC_jperio012}

The choice of closed, cyclic or symmetric model domain boundary condition is made
by setting \np{jperio} to 0, 1 or 2 in namelist \ngn{namcfg}. Each time such a boundary
condition is needed, it is set by a call to routine \mdl{lbclnk}. The computation of
momentum and tracer trends proceeds from $i=2$ to $i=jpi-1$ and from $j=2$ to
$j=jpj-1$, $i.e.$ in the model interior. Choosing a lateral model boundary condition
amounts to specifying the first and last rows and columns of the model variables.

\begin{description}

\item[For closed boundary (\textit{jperio=0})], solid walls are imposed at all model
boundaries: first and last rows and columns are set to zero.

\item[For cyclic east-west boundary (\textit{jperio=1})], first and last rows are set
to zero (closed) whilst the first column is set to the value of the last-but-one column
and the last column to the value of the second one (Fig.~\ref{Fig_LBC_jperio}-a).
Whatever flows out of the eastern (western) end of the basin enters the western
(eastern) end. Note that there is no option for north-south cyclic or for doubly
cyclic cases. A schematic sketch of this column copy is given after this list.

\item[For the symmetric boundary condition across the equator (\textit{jperio=2})],
last rows, and first and last columns are set to zero (closed). The row of symmetry
is chosen to be the $u$- and $T-$points equator line ($j=2$, i.e. at the southern
end of the domain). For arrays defined at $u-$ or $T-$points, the first row is set
to the value of the third row while for most of $v$- and $f$-point arrays ($v$, $\zeta$,
$j\psi$, but \gmcomment{not sure why this is "but"} scalar arrays such as eddy coefficients)
the first row is set to minus the value of the second row (Fig.~\ref{Fig_LBC_jperio}-b).
Note that this boundary condition is not yet available for the case of a massively
parallel computer (\textbf{key{\_}mpp} defined).

\end{description}

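For the cyclic east-west case, a minimal NumPy sketch (illustrative only, not the
\mdl{lbclnk} routine itself; the array shape and orientation are assumptions) of
the column copy described above:
\begin{verbatim}
import numpy as np

def lbc_cyclic_ew(field):
    """East-west cyclic condition: the first column (i=1) receives the
    last-but-one column and the last column receives the second one.
    Here axis 0 of the array is taken to be the i (east-west) index."""
    field[0, :]  = field[-2, :]
    field[-1, :] = field[1, :]
    return field

t = np.arange(20.0).reshape(5, 4)    # toy field with jpi=5, jpj=4
lbc_cyclic_ew(t)
print(t[0, :], t[-1, :])             # copies of the interior end columns
\end{verbatim}
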
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=1.0\textwidth]{./TexFiles/Figures/Fig_LBC_jperio.pdf}
\caption{    \label{Fig_LBC_jperio}
setting of (a) east-west cyclic and (b) symmetric across the equator boundary conditions.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% -------------------------------------------------------------------------------------------------------------
%        North fold (\textit{jperio = 3 }to $6)$
% -------------------------------------------------------------------------------------------------------------
\subsection{North-fold (\textit{jperio = 3 }to $6$) }
\label{LBC_north_fold}

The north fold boundary condition has been introduced in order to handle the north
boundary of a three-polar ORCA grid. Such a grid has two poles in the northern hemisphere
(Fig.~\ref{Fig_MISC_ORCA_msh}), and thus requires a specific treatment illustrated in Fig.~\ref{Fig_North_Fold_T}.
Further information can be found in the \mdl{lbcnfd} module, which applies the north fold boundary condition.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{./TexFiles/Figures/Fig_North_Fold_T.pdf}
\caption{    \label{Fig_North_Fold_T} 
North fold boundary with a $T$-point pivot and cyclic east-west boundary condition
($jperio=4$), as used in ORCA 2, 1/4, and 1/12. The pink shaded area corresponds
to the inner domain mask (see text). }
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% ====================================================================
% Exchange with neighbouring processors
% ====================================================================
\section  [Exchange with neighbouring processors (\textit{lbclnk}, \textit{lib\_mpp})]
      {Exchange with neighbouring processors (\mdl{lbclnk}, \mdl{lib\_mpp})}
\label{LBC_mpp}

For massively parallel processing (mpp), a domain decomposition method is used.
The basic idea of the method is to split the large computation domain of a numerical
experiment into several smaller domains and solve the set of equations by addressing
independent local problems. Each processor has its own local memory and computes
the model equation over a subdomain of the whole model domain. The subdomain
boundary conditions are specified through communications between processors
which are organized by explicit statements (message passing method).

A big advantage is that the method does not need many modifications of the initial
FORTRAN code. From the modeller's point of view, each subdomain running on
a processor is identical to the "mono-domain" code. In addition, the programmer
manages the communications between subdomains, and the code is faster when
the number of processors is increased. The porting of the OPA code on an iPSC860
was achieved during Guyon's PhD [Guyon et al. 1994, 1995] in collaboration with
CETIIS and ONERA. The implementation in the operational context and the studies
of performance on T3D and T3E Cray computers have been made in collaboration
with IDRIS and CNRS. The present implementation is largely inspired by Guyon's
work  [Guyon 1995].

The parallelization strategy is defined by the physical characteristics of the
ocean model. Second order finite difference schemes lead to local discrete
operators that depend at the very most on one neighbouring point. The only
non-local computations concern the vertical physics (implicit diffusion,
turbulent closure scheme, ...) (delocalization over the whole water column),
and the solving of the elliptic equation associated with the surface pressure
gradient computation (delocalization over the whole horizontal domain).
Therefore, a pencil strategy is used for the data sub-structuration:
the 3D initial domain is laid out on local processor
memories following a 2D horizontal topological splitting. Each sub-domain
computes its own surface and bottom boundary conditions and has a side
wall overlapping interface which defines the lateral boundary conditions for
computations in the inner sub-domain. The overlapping area consists of the
two rows at each edge of the sub-domain. After a computation, a communication
phase starts: each processor sends the updated values of the points of its interior
overlapping area ($i.e.$ the innermost of the two overlapping rows) to its
neighbouring sub-domains.
The communication is done through the Message Passing Interface (MPI).
The data exchanges between processors are required at the very
place where lateral domain boundary conditions are set in the mono-domain
computation: the \rou{lbc\_lnk} routine (found in the \mdl{lbclnk} module)
which manages such conditions is interfaced with routines found in the \mdl{lib\_mpp} module
when running on an MPP computer ($i.e.$ when \key{mpp\_mpi} is defined).
It has to be pointed out that when using the MPP version of the model,
the east-west cyclic boundary condition is done implicitly,
whilst the south-symmetric boundary condition option is not available.

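A schematic NumPy sketch of this exchange (everything here is an assumption made
for illustration: a one-row halo, two neighbouring subdomains held in the same
process, and no actual MPI calls):
\begin{verbatim}
import numpy as np

halo = 1    # assumed one-row overlap on each side for this sketch

# Two neighbouring subdomains along the i-axis (axis 0), with halo rows
west = np.arange(24.0).reshape(6, 4)          # local array of one processor
east = 100.0 + np.arange(24.0).reshape(6, 4)  # local array of its neighbour

def exchange_i(west, east, h=halo):
    """Copy the innermost interior rows into the neighbour's halo,
    mimicking the send/receive performed by lbc_lnk via lib_mpp."""
    east[:h, :]  = west[-2*h:-h, :]   # west interior row -> east halo
    west[-h:, :] = east[h:2*h, :]     # east interior row -> west halo

exchange_i(west, east)
\end{verbatim}
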
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{./TexFiles/Figures/Fig_mpp.pdf}
\caption{   \label{Fig_mpp} 
Positioning of a sub-domain when massively parallel processing is used. }
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In the standard version of \NEMO, the splitting is regular and arithmetic.
The i-axis is divided by \jp{jpni} and the j-axis by \jp{jpnj} for a number of processors
\jp{jpnij} most often equal to $jpni \times jpnj$ (parameters set in the
\ngn{nammpp} namelist). Each processor is independent: in the absence of message passing
or synchronisation, each program runs on its own and accesses only its own local memory.
For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil)
that are named \jp{jpi}, \jp{jpj}, \jp{jpk}. These dimensions include the internal
domain and the overlapping rows. The number of rows to exchange (known as
the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}). The whole domain
dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}. The relationship between
the whole domain and a sub-domain is:
\begin{eqnarray} 
      jpi & = & ( jpiglo-2*jpreci + (jpni-1) ) / jpni + 2*jpreci  \nonumber \\
      jpj & = & ( jpjglo-2*jprecj + (jpnj-1) ) / jpnj + 2*jprecj  \label{Eq_lbc_jpi}
\end{eqnarray}
where \jp{jpni} and \jp{jpnj} are the number of processors along the i- and j-axis respectively.

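As a worked example of \eqref{Eq_lbc_jpi} (the global sizes and processor counts
below are purely illustrative), the local dimensions can be computed as:
\begin{verbatim}
def local_dim(jpglo, jpn, jprec=1):
    """Local subdomain size from the global size (cf. Eq_lbc_jpi),
    using Fortran-style integer division."""
    return (jpglo - 2*jprec + (jpn - 1)) // jpn + 2*jprec

# Purely illustrative values, not a recommended decomposition
jpiglo, jpjglo = 182, 149
jpni, jpnj = 4, 2
print(local_dim(jpiglo, jpni), local_dim(jpjglo, jpnj))   # -> 47 76
\end{verbatim}
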
One also defines the variables nldi and nlei which correspond to the internal domain bounds,
and the variables nimpp and njmpp which give the position of the (1,1) grid-point in the global domain.
An element of $T_{l}$, a local array (subdomain), corresponds to an element of $T_{g}$,
a global array (whole domain), through the relationship:
\begin{equation} \label{Eq_lbc_nimpp}
T_{g} (i+nimpp-1,j+njmpp-1,k) = T_{l} (i,j,k),
\end{equation}
with  $1 \leq i \leq jpi$, $1  \leq j \leq jpj $ , and  $1  \leq k \leq jpk$.

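In index form, this correspondence is trivial to apply (a short sketch; the nimpp
and njmpp values below are arbitrary):
\begin{verbatim}
def to_global(i, j, nimpp, njmpp):
    """Convert local (i, j) subdomain indices into global indices,
    following T_g(i+nimpp-1, j+njmpp-1, k) = T_l(i, j, k)."""
    return i + nimpp - 1, j + njmpp - 1

# e.g. a subdomain whose point (1,1) sits at (46,1) of the global domain
print(to_global(10, 5, nimpp=46, njmpp=1))   # -> (55, 5)
\end{verbatim}
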
Processors are numbered from 0 to $jpnij-1$; the number is saved in the variable
nproc. In the standard version, a processor has no more than four neighbouring
processors named nono (for north), noea (east), noso (south) and nowe (west),
and two variables, nbondi and nbondj, indicate the relative position of the processor:
\begin{itemize}
\item       nbondi = -1    an east neighbour, no west processor,
\item       nbondi =  0 an east neighbour, a west neighbour,
\item       nbondi =  1    no east processor, a west neighbour,
\item       nbondi =  2    no splitting following the i-axis.
\end{itemize}
During the simulation, processors exchange data with their neighbours.
If a neighbour exists, the processor receives variables from this
processor on its overlapping row, and sends the data issued from its internal
domain corresponding to the overlapping row of the other processor.


The \NEMO model computes equation terms with the help of mask arrays (0 on land
points and 1 on sea points). This approach is easily readable and very efficient in the context of
a computer with vector architecture. However, in the case of a scalar processor,
computations over the land regions become more expensive in terms of CPU time.
It is worse when we use a complex configuration with a realistic bathymetry like the
global ocean where more than 50 \% of points are land points. For this reason, a
pre-processing tool can be used to choose the mpp domain decomposition with a
maximum number of land-only processors, which can then be eliminated (Fig. \ref{Fig_mppini2})
(for example, the mpp\_optimiz tools, available from the DRAKKAR web site).
This optimisation is dependent on the specific bathymetry employed. The user
then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with
$jpnij < jpni \times jpnj$, leading to the elimination of $jpni \times jpnj - jpnij$ 
land processors. When those parameters are specified in the \ngn{nammpp} namelist,
the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbound,
nono, noea,...) so that the land-only processors are not taken into account.

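The kind of pre-processing involved can be sketched as follows (plain NumPy, not
the mpp\_optimiz tool itself; the toy mask, the decomposition and the neglect of
halos are assumptions): split the global land/sea mask into $jpni \times jpnj$
tiles and count those containing no ocean point.
\begin{verbatim}
import numpy as np

def count_land_only(tmask_glo, jpni, jpnj):
    """Count subdomains containing only land points, i.e. candidates
    for elimination from the decomposition (sketch; halos ignored)."""
    ni, nj = tmask_glo.shape
    n_land = 0
    for pi in range(jpni):
        for pj in range(jpnj):
            tile = tmask_glo[pi*ni//jpni:(pi+1)*ni//jpni,
                             pj*nj//jpnj:(pj+1)*nj//jpnj]
            if not tile.any():        # no ocean (=1) point in this tile
                n_land += 1
    return n_land

mask = np.zeros((90, 60))             # toy global T-mask: all land...
mask[20:70, 10:50] = 1                # ...with a rectangular ocean basin
print(count_land_only(mask, jpni=9, jpnj=6))   # -> 34 eliminable tiles
\end{verbatim}
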
\gmcomment{Note that the inimpp2 routine is general so that the original inimpp
routine should be suppressed from the code.}

When land processors are eliminated, the value corresponding to these locations in
the model output files is undefined. Note that this is a problem for the meshmask file,
which needs to be defined over the whole domain. Therefore, the user should not eliminate
land processors when creating a meshmask file ($i.e.$ when setting a non-zero value to \np{nn\_msh}).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]     \begin{center}
\includegraphics[width=0.90\textwidth]{./TexFiles/Figures/Fig_mppini2.pdf}
\caption {    \label{Fig_mppini2}
Example of an Atlantic domain defined for the CLIPPER project. The initial grid is
composed of $773 \times 1236$ horizontal points.
(a) the domain is split into $9 \times 20$ subdomains (jpni=9, jpnj=20).
52 subdomains are land areas.
(b) the 52 land subdomains are eliminated (white rectangles) and the resulting number
of processors really used during the computation is jpnij=128.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>


% ====================================================================
% Unstructured open boundaries BDY
% ====================================================================
\section{Unstructured Open Boundary Conditions (\key{bdy}) (BDY)}
\label{LBC_bdy}

%-----------------------------------------nambdy--------------------------------------------
\namdisplay{nambdy} 
%-----------------------------------------------------------------------------------------------
%-----------------------------------------nambdy_index--------------------------------------------
\namdisplay{nambdy_index} 
%-----------------------------------------------------------------------------------------------
%-----------------------------------------nambdy_dta--------------------------------------------
\namdisplay{nambdy_dta} 
%-----------------------------------------------------------------------------------------------
%-----------------------------------------nambdy_dta2--------------------------------------------
\namdisplay{nambdy_dta2} 
%-----------------------------------------------------------------------------------------------

Options are defined through the \ngn{nambdy}, \ngn{nambdy\_index}, 
\ngn{nambdy\_dta} and \ngn{nambdy\_dta2} namelist variables.
The BDY module is an alternative implementation of open boundary
conditions for regional configurations. It implements the Flow
Relaxation Scheme algorithm for temperature, salinity, velocities and
ice fields, and the Flather radiation condition for the depth-mean
transports. The specification of the location of the open boundary is
completely flexible and allows, for example, the open boundary to follow
an isobath or other irregular contour.

The BDY module was modelled on the OBC module and shares many features
and a similar coding structure \citep{Chanut2005}.

The BDY module was completely rewritten at NEMO 3.4 and there is a new
set of namelists. Boundary data files used with earlier versions of
NEMO may need to be re-ordered to work with this version. See the
section on the Input Boundary Data Files for details.

%----------------------------------------------
\subsection{The namelists}
\label{BDY_namelist}

It is possible to define more than one boundary ``set'' and apply
different boundary conditions to each set. The number of boundary
sets is defined by \np{nb\_bdy}.  Each boundary set may be defined
as a set of straight line segments in a namelist
(\np{ln\_coords\_file}=.false.) or read in from a file
(\np{ln\_coords\_file}=.true.). If the set is defined in a namelist,
then a nambdy\_index namelist must be included separately for
each set. If the set is defined by a file, then a
``coordinates.bdy.nc'' file must be provided. The coordinates.bdy file
is analogous to the usual NEMO ``coordinates.nc'' file. In the example
above, there are two boundary sets, the first of which is defined via
a file and the second is defined in a namelist. For more details of
the definition of the boundary geometry see section
\ref{BDY_geometry}.

For each boundary set a boundary
condition has to be chosen for the barotropic solution (``u2d'':
sea-surface height and barotropic velocities), for the baroclinic
velocities (``u3d''), and for the active tracers\footnote{The BDY
  module does not deal with passive tracers at this version}
(``tra''). For each set of variables there is a choice of algorithm
and a choice for the data, $e.g.$ for the active tracers the algorithm is
set by \np{nn\_tra} and the choice of data is set by
\np{nn\_tra\_dta}.

The choice of algorithm is currently as follows:

\mbox{}

\begin{itemize}
\item[0.] No boundary condition applied. So the solution will ``see''
  the land points around the edge of the domain.
\item[1.] Flow Relaxation Scheme (FRS) available for all variables.
\item[2.] Flather radiation scheme for the barotropic variables. The
  Flather scheme is not compatible with the filtered free surface
  ({\it dynspg\_flt}).
\end{itemize}

\mbox{}

The main choice for the boundary data is
to use initial conditions as boundary data (\np{nn\_tra\_dta}=0) or to
use external data from a file (\np{nn\_tra\_dta}=1). For the
barotropic solution there is also the option to use tidal
harmonic forcing either by itself or in addition to other external
data.

If external boundary data is required then the nambdy\_dta namelist
must be defined. One nambdy\_dta namelist is required for each boundary
set, in the order in which the boundary sets are defined in nambdy. In
the example given, two boundary sets have been defined and so there
are two nambdy\_dta namelists. The boundary data is read in using the
fldread module, so the nambdy\_dta namelist is in the format required
for fldread. For each variable required, the filename, the frequency
of the files and the frequency of the data in the files are given,
as well as whether or not time-interpolation is required and whether the data
are climatological (time-cyclic). Note that on-the-fly spatial
interpolation of boundary data is not available at this version.

In the example namelists given, two boundary sets are defined. The
first set is defined via a file and applies FRS conditions to
temperature and salinity and Flather conditions to the barotropic
variables. External data is provided in daily files (from a
large-scale model). Tidal harmonic forcing is also used. The second
set is defined in a namelist. FRS conditions are applied on
temperature and salinity and climatological data is read from external
files.

%----------------------------------------------
\subsection{The Flow Relaxation Scheme}
\label{BDY_FRS_scheme}

The Flow Relaxation Scheme (FRS) \citep{Davies_QJRMS76,Engerdahl_Tel95}
applies a simple relaxation of the model fields to
externally-specified values over a zone next to the edge of the model
domain. Given a model prognostic variable $\Phi$,
\begin{equation}  \label{Eq_bdy_frs1}
\Phi(d) = \alpha(d)\Phi_{e}(d) + (1-\alpha(d))\Phi_{m}(d)\;\;\;\;\; d=1,N
\end{equation}
where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified
external field, $d$ gives the discrete distance from the model
boundary, and $\alpha$ is a parameter that varies from $1$ at $d=1$ to
a small value at $d=N$. It can be shown that this scheme is equivalent
to adding a relaxation term to the prognostic equation for $\Phi$ of
the form:
\begin{equation}  \label{Eq_bdy_frs2}
-\frac{1}{\tau}\left(\Phi - \Phi_{e}\right)
\end{equation}
where the relaxation time scale $\tau$ is given by a function of
$\alpha$ and the model time step $\Delta t$:
\begin{equation}  \label{Eq_bdy_frs3}
\tau = \frac{1-\alpha}{\alpha}  \,\rdt
\end{equation}
Thus the model solution is completely prescribed by the external
conditions at the edge of the model domain and is relaxed towards the
external conditions over the rest of the FRS zone. The application of
a relaxation zone helps to prevent spurious reflection of outgoing
signals from the model boundary.

The function $\alpha$ is specified as a $\tanh$ function:
\begin{equation}  \label{Eq_bdy_frs4}
\alpha(d) = 1 - \tanh\left(\frac{d-1}{2}\right),       \quad d=1,N
\end{equation}
The width of the FRS zone is specified in the namelist as
\np{nn\_rimwidth}. This is typically set to a value between 8 and 10.

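A minimal sketch of the FRS relaxation (plain Python; the rim width and field
values are arbitrary, and the real code operates on the boundary index arrays
rather than on a simple 1D profile):
\begin{verbatim}
import math

def frs_weight(d):
    """alpha(d) = 1 - tanh((d-1)/2), cf. Eq_bdy_frs4."""
    return 1.0 - math.tanh((d - 1) / 2.0)

def apply_frs(phi_model, phi_ext):
    """phi = alpha*phi_e + (1-alpha)*phi_m over the rim (cf. Eq_bdy_frs1);
    element k of the lists corresponds to distance d = k+1."""
    return [frs_weight(k + 1) * pe + (1.0 - frs_weight(k + 1)) * pm
            for k, (pm, pe) in enumerate(zip(phi_model, phi_ext))]

nn_rimwidth = 9                  # within the typical 8-10 range quoted above
phi_m = [10.0] * nn_rimwidth     # toy model values over the rim, d = 1..N
phi_e = [12.0] * nn_rimwidth     # toy external values
print(apply_frs(phi_m, phi_e))   # = phi_e at d=1, close to phi_m at d=N
\end{verbatim}
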
%----------------------------------------------
\subsection{The Flather radiation scheme}
\label{BDY_flather_scheme}

The \citet{Flather_JPO94} scheme is a radiation condition on the normal, depth-mean
transport across the open boundary. It takes the form
\begin{equation}  \label{Eq_bdy_fla1}
U = U_{e} + \frac{c}{h}\left(\eta - \eta_{e}\right),
\end{equation}
where $U$ is the depth-mean velocity normal to the boundary and $\eta$
is the sea surface height, both from the model. The subscript $e$
indicates the same fields from external sources. The speed of external
gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the
water column. The depth-mean normal velocity along the edge of the
model domain is set equal to the
external depth-mean normal velocity, plus a correction term that
allows gravity waves generated internally to exit the model boundary.
Note that the sea-surface height gradient in \eqref{Eq_bdy_fla1}
is a spatial gradient across the model boundary, so that $\eta_{e}$ is
defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the
$T$ points with $nbr=2$. $U$ and $U_{e}$ are defined on the $U$ or
$V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points.

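As a one-line illustration of \eqref{Eq_bdy_fla1} (the depth and sea-surface
height values below are purely illustrative):
\begin{verbatim}
import math

def flather_velocity(U_ext, eta_model, eta_ext, h, g=9.81):
    """Depth-mean normal velocity at the open boundary (cf. Eq_bdy_fla1):
    the external velocity plus a gravity-wave correction proportional to
    the sea-surface height mismatch across the boundary."""
    c = math.sqrt(g * h)              # external gravity wave speed
    return U_ext + (c / h) * (eta_model - eta_ext)

# Illustrative numbers only: 100 m depth, 5 cm SSH mismatch
print(flather_velocity(U_ext=0.10, eta_model=0.30, eta_ext=0.25, h=100.0))
\end{verbatim}
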
%----------------------------------------------
\subsection{Boundary geometry}
\label{BDY_geometry}

Each open boundary set is defined as a list of points. The information
is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$
structure.  The $nbi$ and $nbj$ arrays
define the local $(i,j)$ indices of each point in the boundary zone
and the $nbr$ array defines the discrete distance from the boundary
with $nbr=1$ meaning that the point is next to the edge of the
model domain and $nbr>1$ showing that the point is increasingly
further away from the edge of the model domain. A set of $nbi$, $nbj$,
and $nbr$ arrays is defined for each of the $T$, $U$ and $V$
grids. Figure \ref{Fig_LBC_bdy_geom} shows an example of an irregular
boundary.

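For illustration, the sketch below (plain Python; the domain size, rim width and
the choice of a straight southern boundary are assumptions) builds nbi, nbj and
nbr lists for T-points, listing the points in order of increasing nbr:
\begin{verbatim}
def south_boundary_indices(jpi, nn_rimwidth):
    """Toy nbi, nbj, nbr lists for T-points along a straight southern
    open boundary: nbr = 1 on the outermost row, increasing inwards."""
    nbi, nbj, nbr = [], [], []
    for r in range(1, nn_rimwidth + 1):   # rim index, 1 .. nn_rimwidth
        for i in range(1, jpi + 1):       # every i along the boundary
            nbi.append(i)
            nbj.append(r)                 # j grows away from the edge
            nbr.append(r)
    return nbi, nbj, nbr

nbi, nbj, nbr = south_boundary_indices(jpi=10, nn_rimwidth=3)
print(nbr[:12])    # first all the nbr=1 points, then the nbr=2 points, ...
\end{verbatim}
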
The boundary geometry for each set may be defined in a namelist
nambdy\_index or by reading in a ``coordinates.bdy.nc'' file. The
nambdy\_index namelist defines a series of straight-line segments for
north, east, south and west boundaries. For the northern boundary,
\np{nbdysegn} gives the number of segments, \np{jpjnob} gives the $j$
index for each segment and \np{jpindt} and \np{jpinft} give the start
and end $i$ indices for each segment, with similar parameters for the other
boundaries. These segments define a list of $T$ grid points along the
outermost row of the boundary ($nbr\,=\, 1$). The code deduces the $U$ and
$V$ points and also the points for $nbr\,>\, 1$ if
$nn\_rimwidth\,>\,1$.

The boundary geometry may also be defined from a
``coordinates.bdy.nc'' file. Figure \ref{Fig_LBC_nc_header}
gives an example of the header information from such a file. The file
should contain the index arrays for each of the $T$, $U$ and $V$
grids. The arrays must be in order of increasing $nbr$. Note that the
$nbi$, $nbj$ values in the file are global values and are converted to
local values in the code. Typically this file will be used to generate
external boundary data via interpolation and so will also contain the
latitudes and longitudes of each point as shown. However, this is not
necessary to run the model.

For some choices of irregular boundary the model domain may contain
areas of ocean which are not part of the computational domain. For
example, if an open boundary is defined along an isobath, say at the
shelf break, then the areas of ocean outside of this boundary will
need to be masked out. This can be done by reading a mask file defined
as \np{cn\_mask\_file} in the nambdy namelist. Only one mask file is
used even if multiple boundary sets are defined.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]      \begin{center}
\includegraphics[width=1.0\textwidth]{./TexFiles/Figures/Fig_LBC_bdy_geom.pdf}
\caption {      \label{Fig_LBC_bdy_geom}
Example of the geometry of an unstructured open boundary.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%----------------------------------------------
\subsection{Input boundary data files}
\label{BDY_data}

The data files contain the data arrays
in the order in which the points are defined in the $nbi$ and $nbj$
arrays. The data arrays are dimensioned on: a time dimension;
$xb$ which is the index of the boundary data point in the horizontal;
and $yb$ which is a degenerate dimension of 1 to enable the file to be
read by the standard NEMO I/O routines. The 3D fields also have a
depth dimension.

At Version 3.4 there are new restrictions on the order in which the
boundary points are defined (and therefore restrictions on the order
of the data in the file). In particular:

\mbox{}

\begin{enumerate}
\item The data points must be in order of increasing $nbr$, $i.e.$ all
  the $nbr=1$ points, then all the $nbr=2$ points etc.
\item All the data for a particular boundary set must be in the same
  order. (Prior to 3.4 it was possible to define barotropic data in a
  different order to the data for tracers and baroclinic velocities).
\end{enumerate}

\mbox{}

These restrictions mean that data files used with previous versions of
the model may not work with version 3.4. A Fortran utility
{\it bdy\_reorder} exists in the TOOLS directory which will re-order the
data in old BDY data files.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=1.0\textwidth]{./TexFiles/Figures/Fig_LBC_nc_header.pdf}
\caption {     \label{Fig_LBC_nc_header} 
Example of the header for a coordinates.bdy.nc file.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%----------------------------------------------
\subsection{Volume correction}
\label{BDY_vol_corr}

There is an option to force the total volume in the regional model to be constant,
similar to the option in the OBC module. This is controlled by the \np{nn\_volctl} 
parameter in the namelist. A value of \np{nn\_volctl}~=~0 indicates that this option is not used.
If \np{nn\_volctl}~=~1 then a correction is applied to the normal velocities
around the boundary at each timestep to ensure that the integrated volume flow
through the boundary is zero. If \np{nn\_volctl}~=~2 then the calculation of
the volume change on the timestep includes the change due to the freshwater
flux across the surface and the correction velocity corrects for this as well.

If more than one boundary set is used then volume correction is
applied to all boundaries at once.

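The principle can be sketched as follows (plain Python; the boundary areas,
velocities and freshwater flux are toy numbers, and the real routine works on
the model's boundary arrays):
\begin{verbatim}
def volume_correction(U_bdy, dA_bdy, fw_flux=0.0):
    """Uniform correction to the normal boundary velocities so that the
    net volume flux through the open boundaries (plus, optionally, the
    surface freshwater flux, as with nn_volctl = 2) is zero."""
    area_total = sum(dA_bdy)
    net_inflow = sum(u * a for u, a in zip(U_bdy, dA_bdy)) + fw_flux
    return [u - net_inflow / area_total for u in U_bdy]

# Toy example: two boundary faces of 1e6 m^2 each with a net inflow
print(volume_correction(U_bdy=[0.2, -0.1], dA_bdy=[1e6, 1e6]))  # [0.15, -0.15]
\end{verbatim}
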
\newpage
%----------------------------------------------
\subsection{Tidal harmonic forcing}
\label{BDY_tides}

%-----------------------------------------nambdy_tide--------------------------------------------
\namdisplay{nambdy_tide} 
%-----------------------------------------------------------------------------------------------

Options are defined through the \ngn{nambdy\_tide} namelist variables.
To be written....


