
% source: NEMO/trunk/doc/latex/NEMO/subfiles/chap_LBC.tex @ 10406
% Last change on this file since 10406 was 10406, checked in by nicolasmartin, 5 years ago
% Edition of math environments
\documentclass[../tex_main/NEMO_manual]{subfiles}
\begin{document}
% ================================================================
% Chapter — Lateral Boundary Condition (LBC)
% ================================================================
\chapter{Lateral Boundary Condition (LBC)}
\label{chap:LBC}
\minitoc

\newpage
$\ $\newline    % force a new line


%gm% add here introduction to this chapter

% ================================================================
% Boundary Condition at the Coast
% ================================================================
\section{Boundary condition at the coast (\protect\np{rn\_shlat})}
\label{sec:LBC_coast}
%--------------------------------------------nam_lbc-------------------------------------------------------

\nlst{namlbc}
%--------------------------------------------------------------------------------------------------------------

%The lateral ocean boundary conditions contiguous to coastlines are Neumann conditions for heat and salt (no flux across boundaries) and Dirichlet conditions for momentum (ranging from free-slip to "strong" no-slip). They are handled automatically by the mask system (see \autoref{subsec:DOM_msk}).

%OPA allows land and topography grid points in the computational domain due to the presence of continents or islands, and includes the use of a full or partial step representation of bottom topography. The computation is performed over the whole domain, i.e. we do not try to restrict the computation to ocean-only points. This choice has two motivations. Firstly, working on ocean only grid points overloads the code and harms the code readability. Secondly, and more importantly, it drastically reduces the vector portion of the computation, leading to a dramatic increase of CPU time requirement on vector computers.  The current section describes how the masking affects the computation of the various terms of the equations with respect to the boundary condition at solid walls. The process of defining which areas are to be masked is described in \autoref{subsec:DOM_msk}.

Options are defined through the \ngn{namlbc} namelist variables.
The discrete representation of a domain with complex boundaries (coastlines and bottom topography) leads to
arrays that include large portions where a computation is not required as the model variables remain at zero.
Nevertheless, vectorial supercomputers are far more efficient when computing over a whole array,
and the readability of a code is greatly improved when boundary conditions are applied in
an automatic way rather than by a specific computation before or after each computational loop.
An efficient way to work over the whole domain while specifying the boundary conditions is
to use multiplication by mask arrays in the computation.
A mask array is a matrix whose elements are $1$ in the ocean domain and $0$ elsewhere.
A simple multiplication of a variable by its own mask ensures that it will remain zero over land areas.
Since most of the boundary conditions consist of a zero flux across the solid boundaries,
they can be simply applied by multiplying variables by the correct mask arrays,
$i.e.$ the mask array of the grid point where the flux is evaluated.
For example, the heat flux in the \textbf{i}-direction is evaluated at $u$-points.
Evaluating this quantity as

\begin{equation} \label{eq:lbc_aaaa}
\frac{A^{lT}}{e_1} \frac{\partial T}{\partial i} \equiv
\frac{A_u^{lT}}{e_{1u}} \; \delta_{i+1/2} \left[ T \right] \;\; mask_u
\end{equation}
(where mask$_{u}$ is the mask array at a $u$-point) ensures that the heat flux is zero inside land and
at the boundaries, since mask$_{u}$ is zero at solid boundaries which in this case are defined at $u$-points
(normal velocity $u$ remains zero at the coast) (\autoref{fig:LBC_uv}).

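The masking technique of \autoref{eq:lbc_aaaa} can be illustrated with a short sketch (plain Python with NumPy, not NEMO code; the names \texttt{tmask}, \texttt{umask}, \texttt{e1u} and \texttt{alt} are hypothetical stand-ins for the manual's symbols):

```python
# Illustrative sketch of the masked heat-flux evaluation (not NEMO code).
import numpy as np

# 1-D transect of T-points: 2 land points, 4 ocean points, 2 land points
tmask = np.array([0, 0, 1, 1, 1, 1, 0, 0], dtype=float)   # T-point mask
# u-point mask: 1 only where both adjacent T-points are ocean
umask = tmask[:-1] * tmask[1:]

T   = np.arange(8, dtype=float)      # arbitrary temperature field
e1u = np.ones(7)                     # grid spacing at u-points (set to 1 here)
alt = 1.0e3                          # lateral eddy diffusivity A^{lT} (arbitrary)

# Masked flux: (A/e1u) * delta_{i+1/2}[T] * mask_u -- zero on land and at the coast
flux = alt / e1u * (T[1:] - T[:-1]) * umask
```

Multiplying by \texttt{umask} makes the flux vanish wherever one of the two neighbouring $T$-points is land, which is exactly the zero-flux coastal condition described above.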
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_uv}
\caption{  \protect\label{fig:LBC_uv}
  Lateral boundary (thick line) at T-level.
  The velocity normal to the boundary is set to zero.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

62
63For momentum the situation is a bit more complex as two boundary conditions must be provided along the coast
64(one each for the normal and tangential velocities).
65The boundary of the ocean in the C-grid is defined by the velocity-faces.
66For example, at a given $T$-level,
67the lateral boundary (a coastline or an intersection with the bottom topography) is made of
68segments joining $f$-points, and normal velocity points are located between two $f-$points (\autoref{fig:LBC_uv}).
69The boundary condition on the normal velocity (no flux through solid boundaries)
70can thus be easily implemented using the mask system.
71The boundary condition on the tangential velocity requires a more specific treatment.
72This boundary condition influences the relative vorticity and momentum diffusive trends,
73and is required in order to compute the vorticity at the coast.
74Four different types of lateral boundary condition are available,
75controlled by the value of the \np{rn\_shlat} namelist parameter
76(The value of the mask$_{f}$ array along the coastline is set equal to this parameter).
77These are:
78
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!p] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_shlat}
\caption{     \protect\label{fig:LBC_shlat}
  Lateral boundary conditions:
  (a) free-slip ($rn\_shlat=0$);
  (b) no-slip ($rn\_shlat=2$);
  (c) "partial" free-slip ($0<rn\_shlat<2$) and
  (d) "strong" no-slip ($2<rn\_shlat$).
  The implied "ghost" velocity inside the land area is displayed in grey. }
\end{center}    \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

91
92\begin{description}
93
94\item[free-slip boundary condition (\np{rn\_shlat}\forcode{ = 0}):] the tangential velocity at
95  the coastline is equal to the offshore velocity,
96  $i.e.$ the normal derivative of the tangential velocity is zero at the coast,
97  so the vorticity: mask$_{f}$ array is set to zero inside the land and just at the coast
98  (\autoref{fig:LBC_shlat}-a).
99
100\item[no-slip boundary condition (\np{rn\_shlat}\forcode{ = 2}):] the tangential velocity vanishes at the coastline.
101  Assuming that the tangential velocity decreases linearly from
102  the closest ocean velocity grid point to the coastline,
103  the normal derivative is evaluated as if the velocities at the closest land velocity gridpoint and
104  the closest ocean velocity gridpoint were of the same magnitude but in the opposite direction
105  (\autoref{fig:LBC_shlat}-b).
106  Therefore, the vorticity along the coastlines is given by:
107
108\[
109\zeta \equiv 2 \left(\delta_{i+1/2} \left[e_{2v} v \right] - \delta_{j+1/2} \left[e_{1u} u \right] \right) / \left(e_{1f} e_{2f} \right) \ ,
110\]
111where $u$ and $v$ are masked fields.
112Setting the mask$_{f}$ array to $2$ along the coastline provides a vorticity field computed with
113the no-slip boundary condition, simply by multiplying it by the mask$_{f}$ :
114\begin{equation} \label{eq:lbc_bbbb}
115\zeta \equiv \frac{1}{e_{1f} {\kern 1pt}e_{2f} }\left( {\delta_{i+1/2} 
116\left[ {e_{2v} \,v} \right]-\delta_{j+1/2} \left[ {e_{1u} \,u} \right]} 
117\right)\;\mbox{mask}_f
118\end{equation}
119
120\item["partial" free-slip boundary condition (0$<$\np{rn\_shlat}$<$2):] the tangential velocity at
121  the coastline is smaller than the offshore velocity, $i.e.$ there is a lateral friction but
122  not strong enough to make the tangential velocity at the coast vanish (\autoref{fig:LBC_shlat}-c).
123  This can be selected by providing a value of mask$_{f}$ strictly inbetween $0$ and $2$.
124
125\item["strong" no-slip boundary condition (2$<$\np{rn\_shlat}):] the viscous boundary layer is assumed to
126  be smaller than half the grid size (\autoref{fig:LBC_shlat}-d).
127  The friction is thus larger than in the no-slip case.
128
129\end{description}
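The role of mask$_{f}$ in the four cases above can be sketched as follows (a minimal illustration, not the actual NEMO routine; the neighbourhood rule used to detect a coastline point is an illustrative assumption, not the exact NEMO stencil):

```python
# Hedged sketch: encoding the lateral boundary condition in the f-point
# mask along the coast via rn_shlat (hypothetical helper, not NEMO code).
import numpy as np

def coastal_fmask(fmask, rn_shlat):
    """Return a copy of fmask with coastline f-points set to rn_shlat.

    A coastline point is taken here as a land point (0) adjacent to at
    least one ocean point (1) -- an illustrative assumption.
    """
    out = fmask.astype(float).copy()
    ny, nx = fmask.shape
    for j in range(ny):
        for i in range(nx):
            if fmask[j, i] == 0:
                neigh = fmask[max(j - 1, 0):j + 2, max(i - 1, 0):i + 2]
                if neigh.sum() > 0:          # land point touching the ocean
                    out[j, i] = rn_shlat
    return out

fmask = np.array([[0, 0, 0, 0],
                  [0, 1, 1, 0],
                  [0, 1, 1, 0],
                  [0, 0, 0, 0]])
free_slip = coastal_fmask(fmask, 0.0)   # rn_shlat = 0: coast stays zero
no_slip   = coastal_fmask(fmask, 2.0)   # rn_shlat = 2: coast set to 2
```

Multiplying the interior vorticity computation by such a mask then yields \autoref{eq:lbc_bbbb} for the no-slip case, while intermediate values of \np{rn\_shlat} give the "partial" and "strong" variants.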

Note that when the bottom topography is entirely represented by the $s$-coordinates (pure $s$-coordinate),
the lateral boundary condition on tangential velocity is of much less importance as
it is only applied next to the coast where the minimum water depth can be quite shallow.


% ================================================================
% Boundary Condition around the Model Domain
% ================================================================
\section{Model domain boundary condition (\protect\np{jperio})}
\label{sec:LBC_jperio}

At the model domain boundaries several choices are offered:
closed, cyclic east-west, cyclic north-south, a north-fold, and combinations of
closed with north-fold or bi-cyclic east-west with north-fold.
The north-fold boundary condition is associated with the 3-pole ORCA mesh.

% -------------------------------------------------------------------------------------------------------------
%        Closed, cyclic (\np{jperio}\forcode{ = 0..2, 7})
% -------------------------------------------------------------------------------------------------------------
\subsection{Closed, cyclic (\protect\np{jperio}\forcode{ = 0..2, 7})}
\label{subsec:LBC_jperio012}

The choice of closed or cyclic model domain boundary condition is made by
setting \np{jperio} to 0, 1, 2 or 7 in namelist \ngn{namcfg}.
Each time such a boundary condition is needed, it is set by a call to routine \mdl{lbclnk}.
The computation of momentum and tracer trends proceeds from $i=2$ to $i=jpi-1$ and from $j=2$ to $j=jpj-1$,
$i.e.$ in the model interior.
Choosing a lateral model boundary condition amounts to specifying the first and last rows and columns of
the model variables.

\begin{description}

\item[For closed boundary (\np{jperio}\forcode{ = 0})],
  solid walls are imposed at all model boundaries:
  first and last rows and columns are set to zero.

\item[For cyclic east-west boundary (\np{jperio}\forcode{ = 1})],
  first and last rows are set to zero (closed) whilst the first column is set to
  the value of the last-but-one column and the last column to the value of the second one
  (\autoref{fig:LBC_jperio}-a).
  Whatever flows out of the eastern (western) end of the basin enters the western (eastern) end.

\item[For cyclic north-south boundary (\np{jperio}\forcode{ = 2})],
  first and last columns are set to zero (closed) whilst the first row is set to
  the value of the last-but-one row and the last row to the value of the second one
  (\autoref{fig:LBC_jperio}-a).
  Whatever flows out of the northern (southern) end of the basin enters the southern (northern) end.

\item[Bi-cyclic east-west and north-south boundary (\np{jperio}\forcode{ = 7})] combines cases 1 and 2.

\end{description}
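The cyclic east-west case above amounts to a simple copy of interior columns into the boundary columns, which can be sketched as follows (plain Python with NumPy; a hypothetical helper, not the \mdl{lbclnk} routine itself):

```python
# Illustrative sketch of the jperio = 1 boundary update described above.
import numpy as np

def lbc_cyclic_ew(a):
    """Apply cyclic east-west conditions in place on a 2-D (j, i) array:
    first column <- last-but-one column, last column <- second column,
    first and last rows closed (set to zero)."""
    a[:, 0]  = a[:, -2]    # west column from last-but-one column
    a[:, -1] = a[:, 1]     # east column from second column
    a[0, :]  = 0.0         # closed southern row
    a[-1, :] = 0.0         # closed northern row
    return a

a = np.arange(30, dtype=float).reshape(5, 6)
lbc_cyclic_ew(a)
```

Whatever "flows out" of the last interior column thus reappears in the first column, and vice versa.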

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_jperio}
\caption{    \protect\label{fig:LBC_jperio}
  Setting of (a) east-west cyclic and (b) symmetric across-the-equator boundary conditions.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% -------------------------------------------------------------------------------------------------------------
%        North fold (\textit{jperio = 3 }to $6)$
% -------------------------------------------------------------------------------------------------------------
\subsection{North-fold (\protect\np{jperio}\forcode{ = 3..6})}
\label{subsec:LBC_north_fold}

The north fold boundary condition has been introduced in order to handle the north boundary of
a three-polar ORCA grid.
Such a grid has two poles in the northern hemisphere (\autoref{fig:MISC_ORCA_msh}),
and thus requires a specific treatment illustrated in \autoref{fig:North_Fold_T}.
Further information can be found in the \mdl{lbcnfd} module, which applies the north fold boundary condition.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_North_Fold_T}
\caption{    \protect\label{fig:North_Fold_T}
  North fold boundary with a $T$-point pivot and cyclic east-west boundary condition ($jperio=4$),
  as used in ORCA 2, 1/4, and 1/12.
  Pink shaded area corresponds to the inner domain mask (see text). }
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% ====================================================================
% Exchange with neighbouring processors
% ====================================================================
\section{Exchange with neighbouring processors (\protect\mdl{lbclnk}, \protect\mdl{lib\_mpp})}
\label{sec:LBC_mpp}

For massively parallel processing (mpp), a domain decomposition method is used.
The basic idea of the method is to split the large computation domain of a numerical experiment into
several smaller domains and solve the set of equations by addressing independent local problems.
Each processor has its own local memory and computes the model equation over a subdomain of the whole model domain.
The subdomain boundary conditions are specified through communications between processors which
are organized by explicit statements (message passing method).

A big advantage is that the method does not require many modifications of the initial FORTRAN code.
From the modeller's point of view, each subdomain running on a processor is identical to the "mono-domain" code.
In addition, the programmer manages the communications between subdomains,
and the code is faster when the number of processors is increased.
The porting of the OPA code onto an iPSC860 was achieved during Guyon's PhD [Guyon et al. 1994, 1995]
in collaboration with CETIIS and ONERA.
The implementation in the operational context and the studies of performance on
T3D and T3E Cray computers have been made in collaboration with IDRIS and CNRS.
The present implementation is largely inspired by Guyon's work [Guyon 1995].


The parallelization strategy is defined by the physical characteristics of the ocean model.
Second order finite difference schemes lead to local discrete operators that
depend at the very most on one neighbouring point.
The only non-local computations concern the vertical physics
(implicit diffusion, turbulent closure scheme, ...) (delocalization over the whole water column),
and the solving of the elliptic equation associated with the surface pressure gradient computation
(delocalization over the whole horizontal domain).
Therefore, a pencil strategy is used for the data sub-structuration:
the 3D initial domain is laid out on local processor memories following a 2D horizontal topological splitting.
Each sub-domain computes its own surface and bottom boundary conditions and
has a side wall overlapping interface which defines the lateral boundary conditions for
computations in the inner sub-domain.
The overlapping area consists of the two rows at each edge of the sub-domain.
After a computation, a communication phase starts:
each processor sends to its neighbouring processors the updated values of the points corresponding to
the interior overlapping area of its neighbouring sub-domain ($i.e.$ the innermost of the two overlapping rows).
The communication is done through the Message Passing Interface (MPI).
The data exchanges between processors are required at the very place where
lateral domain boundary conditions are set in the mono-domain computation:
the \rou{lbc\_lnk} routine (found in the \mdl{lbclnk} module) which manages such conditions is interfaced with
routines found in the \mdl{lib\_mpp} module when running on an MPP computer ($i.e.$ when \key{mpp\_mpi} is defined).
It has to be pointed out that when using the MPP version of the model,
the east-west cyclic boundary condition is applied implicitly,
whilst the south-symmetric boundary condition option is not available.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_mpp}
\caption{   \protect\label{fig:mpp}
  Positioning of a sub-domain when massively parallel processing is used. }
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In the standard version of \NEMO, the splitting is regular and arithmetic.
The i-axis is divided by \jp{jpni} and
the j-axis by \jp{jpnj} for a number of processors \jp{jpnij} most often equal to $jpni \times jpnj$
(parameters set in the \ngn{nammpp} namelist).
Each processor is independent: without message passing or synchronous processes,
each program runs alone and accesses just its own local memory.
For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil),
named \jp{jpi}, \jp{jpj}, \jp{jpk}.
These dimensions include the internal domain and the overlapping rows.
The number of rows to exchange (known as the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}).
The whole domain dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}.
The relationship between the whole domain and a sub-domain is:
\begin{align}
      jpi &= ( jpiglo - 2 \times jpreci + (jpni-1) ) / jpni + 2 \times jpreci  \nonumber \\
      jpj &= ( jpjglo - 2 \times jprecj + (jpnj-1) ) / jpnj + 2 \times jprecj  \label{eq:lbc_jpi}
\end{align}
where \jp{jpni}, \jp{jpnj} are the number of processors along the i- and j-axis.
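The relation in \autoref{eq:lbc_jpi} can be checked numerically (a sketch in Python; the integer division reproduces the Fortran behaviour of rounding the interior size up before adding the halos; the split counts below are illustrative):

```python
# Numerical check of the subdomain-size relation (illustrative, not NEMO code).
def local_dim(glo, n, halo=1):
    """jpi = (jpiglo - 2*jpreci + (jpni-1)) // jpni + 2*jpreci
    i.e. ceil of the interior size over n processors, plus two halo rows."""
    return (glo - 2 * halo + (n - 1)) // n + 2 * halo

# e.g. the ORCA2 grid (jpiglo = 182, jpjglo = 149) split over a 4 x 3 layout:
jpi = local_dim(182, 4)
jpj = local_dim(149, 3)
```

Each subdomain therefore holds its share of the interior points plus \jp{jpreci} (resp. \jp{jprecj}) overlap rows on each side.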

One also defines variables nldi and nlei which correspond to the internal domain bounds,
and the variables nimpp and njmpp which are the position of the (1,1) grid-point in the global domain.
An element of $T_{l}$, a local array (subdomain), corresponds to an element of $T_{g}$,
a global array (whole domain), by the relationship:
\begin{equation} \label{eq:lbc_nimpp}
T_{g} (i+nimpp-1,j+njmpp-1,k) = T_{l} (i,j,k),
\end{equation}
with $1 \leq i \leq jpi$, $1 \leq j \leq jpj$, and $1 \leq k \leq jpk$.
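The index mapping of \autoref{eq:lbc_nimpp} is trivial to express in code (a sketch with 1-based indices as in the manual; the helper name and the numerical values are hypothetical):

```python
# Sketch of the local-to-global index relation (illustrative, not NEMO code).
def to_global(i, j, k, nimpp, njmpp):
    """Map local 1-based (i, j, k) indices of a subdomain to global indices,
    following T_g(i + nimpp - 1, j + njmpp - 1, k) = T_l(i, j, k)."""
    return (i + nimpp - 1, j + njmpp - 1, k)

# A subdomain whose local (1,1) point sits at global (46, 75) (made-up values):
corner = to_global(1, 1, 1, nimpp=46, njmpp=75)
inner  = to_global(10, 5, 3, nimpp=46, njmpp=75)
```

The local $(1,1)$ point maps exactly onto $(nimpp, njmpp)$ in the global domain, as the relation requires.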

Processors are numbered from 0 to $jpnij-1$; the number is saved in the variable nproc.
In the standard version, a processor has no more than
four neighbouring processors named nono (for north), noea (east), noso (south) and nowe (west), and
two variables, nbondi and nbondj, indicate the relative position of the processor:
\begin{itemize}
\item nbondi = -1: an east neighbour, no west processor;
\item nbondi =  0: an east neighbour, a west neighbour;
\item nbondi =  1: no east processor, a west neighbour;
\item nbondi =  2: no splitting along the i-axis.
\end{itemize}
During the simulation, processors exchange data with their neighbours.
If a neighbour exists, the processor receives variables from this processor on its overlapping row,
and sends the data issued from its internal domain corresponding to the overlapping row of the other processor.


The \NEMO model computes equation terms with the help of mask arrays (0 on land points and 1 on sea points).
This approach is easily readable and very efficient on computers with vector architecture.
However, in the case of a scalar processor, computations over the land regions become more expensive in
terms of CPU time.
It is worse for a complex configuration with a realistic bathymetry, like the global ocean, where
more than 50 \% of points are land points.
For this reason, a pre-processing tool can be used to choose an mpp domain decomposition with a maximum number of
land-only processors, which can then be eliminated (\autoref{fig:mppini2})
(for example, the mpp\_optimiz tools, available from the DRAKKAR web site).
This optimisation is dependent on the specific bathymetry employed.
The user then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with $jpnij < jpni \times jpnj$,
leading to the elimination of $jpni \times jpnj - jpnij$ land processors.
When those parameters are specified in the \ngn{nammpp} namelist,
the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbound, nono, noea, ...) so that
the land-only processors are not taken into account.
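The counting step behind this optimisation can be illustrated with a toy example (plain Python with NumPy, not the mpp\_optimiz tool; the mask and the $2 \times 2$ layout are made up):

```python
# Toy illustration: counting subdomains that contain only land points,
# which could then be eliminated from the decomposition.
import numpy as np

def land_only_subdomains(ocean_mask, jpni, jpnj):
    """Split a global (j, i) ocean mask (1 = sea, 0 = land) into
    jpnj x jpni tiles and return the number of tiles with no sea point."""
    ny, nx = ocean_mask.shape
    n_land = 0
    for pj in range(jpnj):
        for pi in range(jpni):
            tile = ocean_mask[pj * ny // jpnj:(pj + 1) * ny // jpnj,
                              pi * nx // jpni:(pi + 1) * nx // jpni]
            if tile.sum() == 0:
                n_land += 1
    return n_land

mask = np.zeros((8, 8))
mask[4:, :] = 1                                    # ocean only in the lower half
eliminated = land_only_subdomains(mask, jpni=2, jpnj=2)
jpnij = 2 * 2 - eliminated                         # processors actually used
```

Here two of the four tiles are entirely land, so only $jpnij = 2$ processors would actually be used.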

\gmcomment{Note that the inimpp2 routine is general so that the original inimpp
routine should be suppressed from the code.}

When land processors are eliminated,
the value corresponding to these locations in the model output files is undefined.
Note that this is a problem for the meshmask file, which needs to be defined over the whole domain.
Therefore, the user should not eliminate land processors when creating a meshmask file
($i.e.$ when setting a non-zero value to \np{nn\_msh}).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]     \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_mppini2}
\caption {    \protect\label{fig:mppini2}
  Example of Atlantic domain defined for the CLIPPER project.
  The initial grid is composed of $773 \times 1236$ horizontal points.
  (a) The domain is split into $9 \times 20$ subdomains (jpni=9, jpnj=20),
  of which 52 subdomains are land areas.
  (b) The 52 land subdomains are eliminated (white rectangles) and
  the resulting number of processors really used during the computation is jpnij=128.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>


% ====================================================================
% Unstructured open boundaries BDY
% ====================================================================
\section{Unstructured open boundary conditions (BDY)}
\label{sec:LBC_bdy}

%-----------------------------------------nambdy--------------------------------------------

\nlst{nambdy}
%-----------------------------------------------------------------------------------------------
%-----------------------------------------nambdy_dta--------------------------------------------

\nlst{nambdy_dta}
%-----------------------------------------------------------------------------------------------

Options are defined through the \ngn{nambdy} and \ngn{nambdy\_dta} namelist variables.
The BDY module is the core implementation of open boundary conditions for regional configurations.
It implements the Flow Relaxation Scheme algorithm for temperature, salinity, velocities and ice fields, and
the Flather radiation condition for the depth-mean transports.
The specification of the location of the open boundary is completely flexible and
allows for example the open boundary to follow an isobath or other irregular contour.

The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many features and
a similar coding structure \citep{Chanut2005}.

Boundary data files used with earlier versions of NEMO may need to be re-ordered to work with this version.
See the section on the Input Boundary Data Files for details.

%----------------------------------------------
\subsection{Namelists}
\label{subsec:BDY_namelist}

The BDY module is activated by setting \np{ln\_bdy} to true.
It is possible to define more than one boundary ``set'' and apply different boundary conditions to each set.
The number of boundary sets is defined by \np{nb\_bdy}.
Each boundary set may be defined as a set of straight line segments in a namelist
(\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file (\np{ln\_coords\_file}\forcode{ = .true.}).
If the set is defined in a namelist, then a nambdy\_index namelist must be included separately, one for each set.
If the set is defined by a file, then a ``\ifile{coordinates.bdy}'' file must be provided.
The coordinates.bdy file is analogous to the usual NEMO ``\ifile{coordinates}'' file.
In the example above, there are two boundary sets, the first of which is defined via a file and
the second in a namelist.
For more details of the definition of the boundary geometry see \autoref{subsec:BDY_geometry}.

For each boundary set a boundary condition has to be chosen for the barotropic solution
(``u2d'': sea-surface height and barotropic velocities), for the baroclinic velocities (``u3d''), and
for the active tracers\footnote{The BDY module does not deal with passive tracers at this version} (``tra'').
For each set of variables there is a choice of algorithm and a choice for the data,
$e.g.$ for the active tracers the algorithm is set by \np{nn\_tra} and the choice of data is set by \np{nn\_tra\_dta}.

The choice of algorithm is currently as follows:

\mbox{}

\begin{itemize}
\item[0.] No boundary condition applied.
  The solution will ``see'' the land points around the edge of the domain.
\item[1.] Flow Relaxation Scheme (FRS) available for all variables.
\item[2.] Flather radiation scheme for the barotropic variables.
  The Flather scheme is not compatible with the filtered free surface
  ({\it dynspg\_ts}).
\end{itemize}

\mbox{}

414
415The main choice for the boundary data is to use initial conditions as boundary data
416(\np{nn\_tra\_dta}\forcode{ = 0}) or to use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}).
417For the barotropic solution there is also the option to use tidal harmonic forcing either by
418itself or in addition to other external data.
419
420If external boundary data is required then the nambdy\_dta namelist must be defined.
421One nambdy\_dta namelist is required for each boundary set in the order in which
422the boundary sets are defined in nambdy.
423In the example given, two boundary sets have been defined and so there are two nambdy\_dta namelists.
424The boundary data is read in using the fldread module,
425so the nambdy\_dta namelist is in the format required for fldread.
426For each variable required, the filename, the frequency of the files and
427the frequency of the data in the files is given.
428Also whether or not time-interpolation is required and whether the data is climatological (time-cyclic) data.
429Note that on-the-fly spatial interpolation of boundary data is not available at this version.
430
431In the example namelists given, two boundary sets are defined.
432The first set is defined via a file and applies FRS conditions to temperature and salinity and
433Flather conditions to the barotropic variables.
434External data is provided in daily files (from a large-scale model).
435Tidal harmonic forcing is also used.
436The second set is defined in a namelist.
437FRS conditions are applied on temperature and salinity and climatological data is read from external files.
438
%----------------------------------------------
\subsection{Flow relaxation scheme}
\label{subsec:BDY_FRS_scheme}

The Flow Relaxation Scheme (FRS) \citep{Davies_QJRMS76,Engerdahl_Tel95}
applies a simple relaxation of the model fields to externally-specified values over
a zone next to the edge of the model domain.
Given a model prognostic variable $\Phi$
\begin{equation}  \label{eq:bdy_frs1}
\Phi(d) = \alpha(d)\Phi_{e}(d) + (1-\alpha(d))\Phi_{m}(d)\;\;\;\;\; d=1,N
\end{equation}
where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified external field,
$d$ gives the discrete distance from the model boundary and
$\alpha$ is a parameter that varies from $1$ at $d=1$ to a small value at $d=N$.
It can be shown that this scheme is equivalent to adding a relaxation term to
the prognostic equation for $\Phi$ of the form:
\begin{equation}  \label{eq:bdy_frs2}
-\frac{1}{\tau}\left(\Phi - \Phi_{e}\right)
\end{equation}
where the relaxation time scale $\tau$ is given by a function of $\alpha$ and the model time step $\Delta t$:
\begin{equation}  \label{eq:bdy_frs3}
\tau = \frac{1-\alpha}{\alpha}  \,\rdt
\end{equation}
Thus the model solution is completely prescribed by the external conditions at the edge of the model domain and
is relaxed towards the external conditions over the rest of the FRS zone.
The application of a relaxation zone helps to prevent spurious reflection of
outgoing signals from the model boundary.

The function $\alpha$ is specified as a $\tanh$ function:
\begin{equation}  \label{eq:bdy_frs4}
\alpha(d) = 1 - \tanh\left(\frac{d-1}{2}\right),       \quad d=1,N
\end{equation}
The width of the FRS zone is specified in the namelist as \np{nn\_rimwidth}.
This is typically set to a value between 8 and 10.
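The behaviour of the FRS weights across the rim can be checked numerically (a sketch following \autoref{eq:bdy_frs3} and \autoref{eq:bdy_frs4}; the time step value is arbitrary):

```python
# Numerical sketch of the FRS weights: alpha(d) = 1 - tanh((d-1)/2) and the
# equivalent relaxation time scale tau = (1 - alpha)/alpha * dt.
import math

def frs_alpha(d):
    return 1.0 - math.tanh((d - 1) / 2.0)

rimwidth = 9                                   # nn_rimwidth, typical value
alphas = [frs_alpha(d) for d in range(1, rimwidth + 1)]

dt = 3600.0                                    # model time step (s), arbitrary
taus = [(1 - a) / a * dt for a in alphas]      # relaxation time scale per rim row
```

At $d=1$, $\alpha=1$ and $\tau=0$: the solution is fully prescribed at the edge; $\alpha$ then decays monotonically so the relaxation weakens towards the interior of the rim.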

%----------------------------------------------
\subsection{Flather radiation scheme}
\label{subsec:BDY_flather_scheme}

The \citet{Flather_JPO94} scheme is a radiation condition on the normal,
depth-mean transport across the open boundary.
It takes the form
\begin{equation}  \label{eq:bdy_fla1}
U = U_{e} + \frac{c}{h}\left(\eta - \eta_{e}\right),
\end{equation}
where $U$ is the depth-mean velocity normal to the boundary and $\eta$ is the sea surface height,
both from the model.
The subscript $e$ indicates the same fields from external sources.
The speed of external gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the water column.
The depth-mean normal velocity along the edge of the model domain is set equal to
the external depth-mean normal velocity,
plus a correction term that allows gravity waves generated internally to exit the model boundary.
Note that the sea-surface height gradient in \autoref{eq:bdy_fla1} is a spatial gradient across the model boundary,
so that $\eta_{e}$ is defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the $T$ points with $nbr=2$.
$U$ and $U_{e}$ are defined on the $U$ or $V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points.
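A single application of \autoref{eq:bdy_fla1} can be sketched as follows (illustrative numbers, not NEMO code; $g = 9.81\,\mathrm{m\,s^{-2}}$, $h$ the water depth, $\eta$ and $\eta_e$ the internal and external sea-surface heights as in the text):

```python
# Sketch of the Flather update U = U_e + (c/h)*(eta - eta_e), c = sqrt(g*h).
import math

def flather_velocity(u_e, eta, eta_e, h, g=9.81):
    """Depth-mean normal velocity at the open boundary."""
    c = math.sqrt(g * h)               # external gravity wave speed
    return u_e + (c / h) * (eta - eta_e)

# Internal sea level 0.1 m above the external value over 100 m of water
# (made-up values): the correction adds an outward velocity component.
u = flather_velocity(u_e=0.05, eta=0.1, eta_e=0.0, h=100.0)
```

A positive $\eta - \eta_e$ anomaly thus increases the outward transport, letting internally generated gravity waves radiate out of the domain.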
494
495%----------------------------------------------
496\subsection{Boundary geometry}
497\label{subsec:BDY_geometry}
498
499Each open boundary set is defined as a list of points.
500The information is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$ structure.
501The $nbi$ and $nbj$ arrays define the local $(i,j)$ indices of each point in the boundary zone and
502the $nbr$ array defines the discrete distance from the boundary with $nbr=1$ meaning that
503the point is next to the edge of the model domain and $nbr>1$ showing that
504the point is increasingly further away from the edge of the model domain.
505A set of $nbi$, $nbj$, and $nbr$ arrays is defined for each of the $T$, $U$ and $V$ grids.
506Figure \autoref{fig:LBC_bdy_geom} shows an example of an irregular boundary.
507
508The boundary geometry for each set may be defined in a namelist nambdy\_index or
509by reading in a ``\ifile{coordinates.bdy}'' file.
510The nambdy\_index namelist defines a series of straight-line segments for north, east, south and west boundaries.
511For the northern boundary, \np{nbdysegn} gives the number of segments,
512\np{jpjnob} gives the $j$ index for each segment and \np{jpindt} and
513\np{jpinft} give the start and end $i$ indices for each segment with similar for the other boundaries.
514These segments define a list of $T$ grid points along the outermost row of the boundary ($nbr\,=\, 1$).
515The code deduces the $U$ and $V$ points and also the points for $nbr\,>\, 1$ if $nn\_rimwidth\,>\,1$.

The boundary geometry may also be defined from a ``\ifile{coordinates.bdy}'' file.
\autoref{fig:LBC_nc_header} gives an example of the header information from such a file.
The file should contain the index arrays for each of the $T$, $U$ and $V$ grids.
The arrays must be in order of increasing $nbr$.
Note that the $nbi$, $nbj$ values in the file are global values and are converted to local values in the code.
Typically this file will be used to generate external boundary data via interpolation and so
will also contain the latitudes and longitudes of each point as shown.
However, this is not necessary to run the model.

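The two bookkeeping steps described above can be sketched as follows (both helpers are hypothetical; the real conversion uses the model's domain-decomposition arrays rather than simple offsets):

```python
def check_increasing_nbr(nbr):
    """The index arrays in a coordinates.bdy file must be ordered by
    increasing rim distance nbr (all nbr=1 points before nbr=2, etc.)."""
    return all(a <= b for a, b in zip(nbr, nbr[1:]))

def to_local_indices(nbi, nbj, i_offset, j_offset):
    """Convert global (nbi, nbj) indices read from the file into local
    subdomain indices.  The plain offsets used here are hypothetical;
    the code derives them from the domain decomposition."""
    return [i - i_offset for i in nbi], [j - j_offset for j in nbj]
```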
For some choices of irregular boundary, the model domain may contain areas of ocean which
are not part of the computational domain.
For example, if an open boundary is defined along an isobath, say at the shelf break,
then the areas of ocean outside of this boundary will need to be masked out.
This can be done by reading a mask file defined as \np{cn\_mask\_file} in the nam\_bdy namelist.
Only one mask file is used even if multiple boundary sets are defined.

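A minimal namelist fragment illustrating this option might look as follows (the file name is hypothetical; only \np{cn\_mask\_file} is taken from the text above):

```fortran
&nam_bdy
   cn_mask_file = 'bdy_mask.nc'   ! mask file for ocean points outside the open boundary
/
```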
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]      \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_bdy_geom}
\caption {      \protect\label{fig:LBC_bdy_geom}
  Example of geometry of unstructured open boundary}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%----------------------------------------------
\subsection{Input boundary data files}
\label{subsec:BDY_data}

The data files contain the data arrays in the order in which the points are defined in the $nbi$ and $nbj$ arrays.
The data arrays are dimensioned on:
a time dimension;
$xb$, which is the index of the boundary data point in the horizontal;
and $yb$, which is a degenerate dimension of 1 to enable the file to be read by the standard NEMO I/O routines.
The 3D fields also have a depth dimension.

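As an illustration of this dimension layout (the sizes below are hypothetical), a 2D field such as sea-surface height and a 3D field such as temperature would carry array shapes of the form:

```python
# hypothetical sizes: 4 time records, 10 depth levels, 25 boundary points
nt, nz, nxb = 4, 10, 25

# dimension layout of the boundary data arrays
shape_2d = (nt, 1, nxb)      # 2D field: (time, yb, xb), yb degenerate
shape_3d = (nt, nz, 1, nxb)  # 3D field: (time, depth, yb, xb)
```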
At Version 3.4 there are new restrictions on the order in which the boundary points are defined
(and therefore restrictions on the order of the data in the file).
In particular:

\mbox{}

\begin{enumerate}
\item The data points must be in order of increasing $nbr$,
  i.e. all the $nbr=1$ points, then all the $nbr=2$ points, etc.
\item All the data for a particular boundary set must be in the same order.
  (Prior to 3.4 it was possible to define barotropic data in a different order from
  the data for tracers and baroclinic velocities.)
\end{enumerate}

\mbox{}

These restrictions mean that data files used with previous versions of the model may not work with version 3.4.
A Fortran utility, {\it bdy\_reorder}, exists in the TOOLS directory which
will re-order the data in old BDY data files.

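The essence of this re-ordering can be sketched as a stable sort of the per-point data columns by rim distance (a sketch only; the actual {\it bdy\_reorder} tool is written in Fortran and operates on the netCDF files directly):

```python
def reorder_by_rim(columns, nbr):
    """Sort the per-point data columns so that all nbr=1 points come
    first, then all nbr=2 points, and so on.  A stable sort preserves
    the relative order of points within each rim."""
    order = sorted(range(len(nbr)), key=lambda k: nbr[k])
    return [columns[k] for k in order], [nbr[k] for k in order]
```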
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_nc_header}
\caption {     \protect\label{fig:LBC_nc_header}
  Example of the header for a \protect\ifile{coordinates.bdy} file}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%----------------------------------------------
\subsection{Volume correction}
\label{subsec:BDY_vol_corr}

There is an option to force the total volume in the regional model to be constant,
similar to the option in the OBC module.
This is controlled by the \np{nn\_volctl} parameter in the namelist.
A value of \np{nn\_volctl}\forcode{ = 0} indicates that this option is not used.
If \np{nn\_volctl}\forcode{ = 1} then a correction is applied to the normal velocities around the boundary at
each timestep to ensure that the integrated volume flow through the boundary is zero.
If \np{nn\_volctl}\forcode{ = 2} then the calculation of the volume change on
the timestep includes the change due to the freshwater flux across the surface and
the correction velocity corrects for this as well.

If more than one boundary set is used then the volume correction is
applied to all boundaries at once.

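A minimal sketch of the idea behind this correction (the function and variable names are hypothetical; the actual computation is performed in the Fortran BDY code): a single uniform velocity is added to every boundary-normal velocity so that the net volume flux through the boundary, optionally including the surface freshwater flux as with \np{nn\_volctl}\forcode{ = 2}, integrates to zero.

```python
def boundary_volume_correction(normal_velocities, face_areas, fw_flux=0.0):
    """Uniform velocity correction applied to all boundary-normal
    velocities so that the net volume flux through the open boundary
    (plus, optionally, the surface freshwater flux) vanishes.
    Hypothetical sketch, not the NEMO implementation."""
    net_flux = sum(u * a for u, a in zip(normal_velocities, face_areas)) + fw_flux
    return -net_flux / sum(face_areas)
```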
\newpage
%----------------------------------------------
\subsection{Tidal harmonic forcing}
\label{subsec:BDY_tides}

%-----------------------------------------nambdy_tide--------------------------------------------

\nlst{nambdy_tide}
%-----------------------------------------------------------------------------------------------

Options are defined through the \ngn{nambdy\_tide} namelist variables.
To be written....




\end{document}