\documentclass[../tex_main/NEMO_manual]{subfiles}
\begin{document}
% ================================================================
% Chapter — Lateral Boundary Condition (LBC)
% ================================================================
\chapter{Lateral Boundary Condition (LBC)}
\label{chap:LBC}
\minitoc

\newpage
$\ $\newline    % force a new line


%gm% add here introduction to this chapter

% ================================================================
% Boundary Condition at the Coast
% ================================================================
\section{Boundary condition at the coast (\protect\np{rn\_shlat})}
\label{sec:LBC_coast}
%--------------------------------------------nam_lbc-------------------------------------------------------

\nlst{namlbc}
%--------------------------------------------------------------------------------------------------------------

%The lateral ocean boundary conditions contiguous to coastlines are Neumann conditions for heat and salt (no flux across boundaries) and Dirichlet conditions for momentum (ranging from free-slip to "strong" no-slip). They are handled automatically by the mask system (see \autoref{subsec:DOM_msk}).

%OPA allows land and topography grid points in the computational domain due to the presence of continents or islands, and includes the use of a full or partial step representation of bottom topography. The computation is performed over the whole domain, i.e. we do not try to restrict the computation to ocean-only points. This choice has two motivations. Firstly, working on ocean only grid points overloads the code and harms the code readability. Secondly, and more importantly, it drastically reduces the vector portion of the computation, leading to a dramatic increase of CPU time requirement on vector computers.  The current section describes how the masking affects the computation of the various terms of the equations with respect to the boundary condition at solid walls. The process of defining which areas are to be masked is described in \autoref{subsec:DOM_msk}.

Options are defined through the \ngn{namlbc} namelist variables.
The discrete representation of a domain with complex boundaries (coastlines and
bottom topography) leads to arrays that include large portions where a computation
is not required as the model variables remain at zero. Nevertheless, vectorial
supercomputers are far more efficient when computing over a whole array, and the
readability of a code is greatly improved when boundary conditions are applied in
an automatic way rather than by a specific computation before or after each
computational loop. An efficient way to work over the whole domain while specifying
the boundary conditions is to use multiplication by mask arrays in the computation.
A mask array is a matrix whose elements are $1$ in the ocean domain and $0$
elsewhere. A simple multiplication of a variable by its own mask ensures that it will
remain zero over land areas. Since most of the boundary conditions consist of a
zero flux across the solid boundaries, they can be simply applied by multiplying
variables by the correct mask arrays, $i.e.$ the mask array of the grid point where
the flux is evaluated. For example, the heat flux in the \textbf{i}-direction is evaluated
at $u$-points. Evaluating this quantity as
\begin{equation} \label{eq:lbc_aaaa}
\frac{A^{lT}}{e_1} \frac{\partial T}{\partial i} \equiv
\frac{A_u^{lT}}{e_{1u}} \; \delta_{i+1/2} \left[ T \right] \;\; mask_u
\end{equation}
(where mask$_{u}$ is the mask array at a $u$-point) ensures that the heat flux is
zero inside land and at the boundaries, since mask$_{u}$ is zero at solid boundaries
which in this case are defined at $u$-points (normal velocity $u$ remains zero at
the coast) (\autoref{fig:LBC_uv}).
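
As a concrete illustration of the mask method, the following minimal
Python/NumPy sketch applies \autoref{eq:lbc_aaaa} along a single row of grid
points. It is only a sketch: the array values and names (\texttt{tmask},
\texttt{umask}, etc.) are illustrative assumptions, not the NEMO variables.
\begin{verbatim}
import numpy as np

# one row of T-points: 1 = ocean, 0 = land (hypothetical coastline)
tmask = np.array([0, 1, 1, 1, 0])
# u-points lie between T-points; masked where either neighbour is land
umask = tmask[:-1] * tmask[1:]            # -> [0, 1, 1, 0]

T    = np.array([0.0, 10.0, 12.0, 11.0, 0.0])   # masked temperature
e1u  = np.ones(4)                               # scale factors (set to 1)
Alt  = 1.0e3                                    # lateral diffusivity

# heat flux at u-points: (A/e1u) * delta_{i+1/2}[T] * mask_u
flux = Alt / e1u * (T[1:] - T[:-1]) * umask
print(flux)   # zero on land and at the coast, finite in the interior
\end{verbatim}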

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_uv}
\caption{  \protect\label{fig:LBC_uv}
Lateral boundary (thick line) at T-level. The velocity normal to the boundary is set to zero.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

For momentum the situation is a bit more complex as two boundary conditions
must be provided along the coast (one each for the normal and tangential velocities).
The boundary of the ocean in the C-grid is defined by the velocity-faces.
For example, at a given $T$-level, the lateral boundary (a coastline or an intersection
with the bottom topography) is made of segments joining $f$-points, and normal
velocity points are located between two $f$-points (\autoref{fig:LBC_uv}).
The boundary condition on the normal velocity (no flux through solid boundaries)
can thus be easily implemented using the mask system. The boundary condition
on the tangential velocity requires a more specific treatment. This boundary
condition influences the relative vorticity and momentum diffusive trends, and is
required in order to compute the vorticity at the coast. Four different types of
lateral boundary condition are available, controlled by the value of the \np{rn\_shlat}
namelist parameter. (The value of the mask$_{f}$ array along the coastline is set
equal to this parameter.) These are:

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!p] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_shlat}
\caption{     \protect\label{fig:LBC_shlat}
Lateral boundary condition (a) free-slip ($rn\_shlat=0$); (b) no-slip ($rn\_shlat=2$);
(c) "partial" free-slip ($0<rn\_shlat<2$) and (d) "strong" no-slip ($2<rn\_shlat$).
The implied "ghost" velocity inside the land area is displayed in grey.}
\end{center}    \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

\begin{description}

\item[free-slip boundary condition (\np{rn\_shlat}\forcode{ = 0}): ]  the tangential velocity at the
coastline is equal to the offshore velocity, $i.e.$ the normal derivative of the
tangential velocity is zero at the coast, and so is the vorticity: the mask$_{f}$ array is set
to zero inside the land and just at the coast (\autoref{fig:LBC_shlat}-a).

\item[no-slip boundary condition (\np{rn\_shlat}\forcode{ = 2}): ] the tangential velocity vanishes
at the coastline. Assuming that the tangential velocity decreases linearly from
the closest ocean velocity grid point to the coastline, the normal derivative is
evaluated as if the velocities at the closest land velocity gridpoint and the closest
ocean velocity gridpoint were of the same magnitude but in the opposite direction
(\autoref{fig:LBC_shlat}-b). Therefore, the vorticity along the coastlines is given by:
\begin{equation*}
\zeta \equiv 2 \left(\delta_{i+1/2} \left[e_{2v} v \right] - \delta_{j+1/2} \left[e_{1u} u \right] \right) / \left(e_{1f} e_{2f} \right) \ ,
\end{equation*}
where $u$ and $v$ are masked fields. Setting the mask$_{f}$ array to $2$ along
the coastline provides a vorticity field computed with the no-slip boundary condition,
simply by multiplying it by the mask$_{f}$ (see the sketch after this list):
\begin{equation} \label{eq:lbc_bbbb}
\zeta \equiv \frac{1}{e_{1f} {\kern 1pt}e_{2f} }\left( {\delta _{i+1/2}
\left[ {e_{2v} \,v} \right]-\delta _{j+1/2} \left[ {e_{1u} \,u} \right]}
\right)\;\mbox{mask}_f
\end{equation}

\item["partial" free-slip boundary condition (0$<$\np{rn\_shlat}$<$2): ] the tangential
velocity at the coastline is smaller than the offshore velocity, $i.e.$ there is
lateral friction but not strong enough to make the tangential velocity at the coast vanish
(\autoref{fig:LBC_shlat}-c). This can be selected by providing a value of mask$_{f}$
strictly between $0$ and $2$.

\item["strong" no-slip boundary condition (2$<$\np{rn\_shlat}): ] the viscous boundary
layer is assumed to be smaller than half the grid size (\autoref{fig:LBC_shlat}-d).
The friction is thus larger than in the no-slip case.

\end{description}
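
The sketch below illustrates the role of \np{rn\_shlat} in
\autoref{eq:lbc_bbbb} with a minimal Python/NumPy example. It is an
illustration only: the arrays are hypothetical and unit scale factors are
assumed, so none of the names are the NEMO variables.
\begin{verbatim}
import numpy as np

rn_shlat = 2.0                            # no-slip choice

u = np.zeros((4, 5)); u[1:3, 1:4] = 1.0   # masked e1u*u (hypothetical)
v = np.zeros((4, 5))                      # masked e2v*v
e1f = e2f = 1.0                           # unit scale factors

fmask = np.zeros((3, 4))                  # mask at f-points
fmask[1:, 1:3] = 1.0                      # ocean interior
fmask[0, 1:3] = rn_shlat                  # coastline f-points

# zeta = ( delta_i[e2v v] - delta_j[e1u u] ) / (e1f e2f) * fmask
zeta = ((v[:-1, 1:] - v[:-1, :-1])
        - (u[1:, :-1] - u[:-1, :-1])) / (e1f * e2f) * fmask
print(zeta)   # coastal values are doubled by the rn_shlat = 2 factor
\end{verbatim}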

Note that when the bottom topography is entirely represented by the $s$-coordinates
(pure $s$-coordinate), the lateral boundary condition on tangential velocity is of much
less importance as it is only applied next to the coast where the minimum water depth
can be quite shallow.


% ================================================================
% Boundary Condition around the Model Domain
% ================================================================
\section{Model domain boundary condition (\protect\np{jperio})}
\label{sec:LBC_jperio}

At the model domain boundaries several choices are offered: closed, cyclic east-west,
cyclic north-south, a north-fold, and combinations of closed and north-fold
or of bi-cyclic east-west and north-fold. The north-fold boundary condition is associated with the 3-pole ORCA mesh.

% -------------------------------------------------------------------------------------------------------------
%        Closed, cyclic (\np{jperio}\forcode{ = 0, 1, 2, 7})
% -------------------------------------------------------------------------------------------------------------
\subsection{Closed, cyclic (\protect\np{jperio}\forcode{ = 0, 1, 2, 7})}
\label{subsec:LBC_jperio012}

The choice of closed or cyclic model domain boundary condition is made
by setting \np{jperio} to 0, 1, 2 or 7 in namelist \ngn{namcfg}. Each time such a boundary
condition is needed, it is set by a call to routine \mdl{lbclnk}. The computation of
momentum and tracer trends proceeds from $i=2$ to $i=jpi-1$ and from $j=2$ to
$j=jpj-1$, $i.e.$ in the model interior. Choosing a lateral model boundary condition
amounts to specifying the first and last rows and columns of the model variables.

\begin{description}

\item[For closed boundary (\np{jperio}\forcode{ = 0})], solid walls are imposed at all model
boundaries: first and last rows and columns are set to zero.

\item[For cyclic east-west boundary (\np{jperio}\forcode{ = 1})], first and last rows are set
to zero (closed) whilst the first column is set to the value of the last-but-one column
and the last column to the value of the second one (\autoref{fig:LBC_jperio}-a,
and the sketch after this list).
Whatever flows out of the eastern (western) end of the basin enters the western
(eastern) end.

\item[For cyclic north-south boundary (\np{jperio}\forcode{ = 2})], first and last columns are set
to zero (closed) whilst the first row is set to the value of the last-but-one row
and the last row to the value of the second one (\autoref{fig:LBC_jperio}-a).
Whatever flows out of the northern (southern) end of the basin enters the southern
(northern) end.

\item[Bi-cyclic east-west and north-south boundary (\np{jperio}\forcode{ = 7})] combines cases 1 and 2.

\end{description}
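
The cyclic east-west copy can be sketched in a few lines of Python/NumPy
(illustrative only; a one-column overlap on each side is assumed, and the
array is not a NEMO variable):
\begin{verbatim}
import numpy as np

T = np.arange(20.0).reshape(4, 5)   # hypothetical field, first index = j

T[:, 0]  = T[:, -2]   # first column <- last-but-one column
T[:, -1] = T[:, 1]    # last column  <- second column
T[0, :]  = 0.0        # closed north-south: first and last rows
T[-1, :] = 0.0        # are set to zero
print(T)
\end{verbatim}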

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_jperio}
\caption{    \protect\label{fig:LBC_jperio}
Setting of (a) east-west cyclic and (b) symmetric across-the-equator boundary conditions.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% -------------------------------------------------------------------------------------------------------------
%        North fold (\textit{jperio} = 3 to 6)
% -------------------------------------------------------------------------------------------------------------
\subsection{North-fold (\protect\np{jperio}\forcode{ = 3..6})}
\label{subsec:LBC_north_fold}

The north fold boundary condition has been introduced in order to handle the north
boundary of a three-polar ORCA grid. Such a grid has two poles in the northern hemisphere
(\autoref{fig:MISC_ORCA_msh}), and thus requires the specific treatment illustrated in \autoref{fig:North_Fold_T}.
Further information can be found in the \mdl{lbcnfd} module, which applies the north fold boundary condition.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_North_Fold_T}
\caption{    \protect\label{fig:North_Fold_T}
North fold boundary with a $T$-point pivot and cyclic east-west boundary condition
($jperio=4$), as used in ORCA 2, 1/4, and 1/12. Pink shaded area corresponds
to the inner domain mask (see text). }
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% ====================================================================
% Exchange with neighbouring processors
% ====================================================================
\section{Exchange with neighbouring processors (\protect\mdl{lbclnk}, \protect\mdl{lib\_mpp})}
\label{sec:LBC_mpp}

For massively parallel processing (mpp), a domain decomposition method is used.
The basic idea of the method is to split the large computation domain of a numerical
experiment into several smaller domains and solve the set of equations by addressing
independent local problems. Each processor has its own local memory and computes
the model equations over a subdomain of the whole model domain. The subdomain
boundary conditions are specified through communications between processors
which are organized by explicit statements (message passing method).

A big advantage of this method is that it requires few modifications of the original
FORTRAN code. From the modeller's point of view, each subdomain running on
a processor is identical to the "mono-domain" code. In addition, the programmer
manages the communications between subdomains, and the code is faster when
the number of processors is increased. The porting of the OPA code to an iPSC860
was achieved during Guyon's PhD [Guyon et al. 1994, 1995] in collaboration with
CETIIS and ONERA. The implementation in the operational context and the studies
of performance on T3D and T3E Cray computers were carried out in collaboration
with IDRIS and CNRS. The present implementation is largely inspired by Guyon's
work [Guyon 1995].

The parallelization strategy is defined by the physical characteristics of the
ocean model. Second order finite difference schemes lead to local discrete
operators that depend at the very most on one neighbouring point. The only
non-local computations concern the vertical physics (implicit diffusion,
turbulent closure scheme, ...) (delocalization over the whole water column),
and the solving of the elliptic equation associated with the surface pressure
gradient computation (delocalization over the whole horizontal domain).
Therefore, a pencil strategy is used for the data sub-structuring:
the 3D initial domain is laid out on local processor
memories following a 2D horizontal topological splitting. Each sub-domain
computes its own surface and bottom boundary conditions and has a side
wall overlapping interface which defines the lateral boundary conditions for
computations in the inner sub-domain. The overlapping area consists of the
two rows at each edge of the sub-domain. After a computation, a communication
phase starts: each processor sends to its neighbouring processors the updated
values of the points corresponding to the interior overlapping area of its
neighbouring sub-domain ($i.e.$ the innermost of the two overlapping rows).
The communication is done through the Message Passing Interface (MPI).
The data exchanges between processors are required at the very
place where lateral domain boundary conditions are set in the mono-domain
computation: the \rou{lbc\_lnk} routine (found in the \mdl{lbclnk} module)
which manages such conditions is interfaced with routines found in the \mdl{lib\_mpp} module
when running on an MPP computer ($i.e.$ when \key{mpp\_mpi} is defined).
It has to be pointed out that when using the MPP version of the model,
the east-west cyclic boundary condition is applied implicitly,
whilst the south-symmetric boundary condition option is not available.
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_mpp}
\caption{   \protect\label{fig:mpp}
Positioning of a sub-domain when massively parallel processing is used. }
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In the standard version of \NEMO, the splitting is regular and arithmetic.
The i-axis is divided by \jp{jpni} and the j-axis by \jp{jpnj} for a number of processors
\jp{jpnij} most often equal to $jpni \times jpnj$ (parameters set in
the \ngn{nammpp} namelist). Each processor is independent: without message passing
or synchronous processes, each program runs alone and accesses just its own local memory.
For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil),
which are named \jp{jpi}, \jp{jpj}, \jp{jpk}. These dimensions include the internal
domain and the overlapping rows. The number of rows to exchange (known as
the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}). The whole domain
dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}. The relationship between
the whole domain and a sub-domain is:
\begin{eqnarray}
      jpi & = & ( jpiglo-2*jpreci + (jpni-1) ) / jpni + 2*jpreci  \nonumber \\
      jpj & = & ( jpjglo-2*jprecj + (jpnj-1) ) / jpnj + 2*jprecj  \label{eq:lbc_jpi}
\end{eqnarray}
where \jp{jpni}, \jp{jpnj} are the number of processors along the i- and j-axis.
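
A minimal Python sketch of this integer arithmetic is given below (the global
sizes and processor counts are illustrative assumptions):
\begin{verbatim}
# local subdomain sizes from the global sizes (cf. eq. above)
jpiglo, jpjglo = 182, 149     # global dimensions (illustrative)
jpni, jpnj     = 4, 3         # processors along i and j (illustrative)
jpreci = jprecj = 1           # halo width

jpi = (jpiglo - 2*jpreci + (jpni - 1)) // jpni + 2*jpreci
jpj = (jpjglo - 2*jprecj + (jpnj - 1)) // jpnj + 2*jprecj
print(jpi, jpj)               # local dimensions, halos included
\end{verbatim}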

One also defines the variables nldi and nlei, which correspond to the internal domain bounds,
and the variables nimpp and njmpp, which give the position of the (1,1) grid-point in the global domain.
An element of $T_{l}$, a local array (subdomain), corresponds to an element of $T_{g}$,
a global array (whole domain), by the relationship:
\begin{equation} \label{eq:lbc_nimpp}
T_{g} (i+nimpp-1,j+njmpp-1,k) = T_{l} (i,j,k),
\end{equation}
with $1 \leq i \leq jpi$, $1 \leq j \leq jpj$, and $1 \leq k \leq jpk$.
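
The same relationship, written as a Python/NumPy sketch with hypothetical
offsets (0-based slices standing in for the 1-based Fortran indices):
\begin{verbatim}
import numpy as np

jpi, jpj, jpk  = 5, 4, 2
jpiglo, jpjglo = 12, 9
nimpp, njmpp   = 5, 3        # global position of the local (1,1) point

Tl = np.random.rand(jpi, jpj, jpk)      # local array (subdomain)
Tg = np.zeros((jpiglo, jpjglo, jpk))    # global array (whole domain)

# T_g(i+nimpp-1, j+njmpp-1, k) = T_l(i,j,k), shifted to 0-based indexing
Tg[nimpp-1:nimpp-1+jpi, njmpp-1:njmpp-1+jpj, :] = Tl
\end{verbatim}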

Processors are numbered from 0 to $jpnij-1$; the number is saved in the variable
nproc. In the standard version, a processor has no more than four neighbouring
processors, named nono (for north), noea (east), noso (south) and nowe (west),
and two variables, nbondi and nbondj, indicate the relative position of the processor:
\begin{itemize}
\item nbondi = -1: an east neighbour, no west processor,
\item nbondi = 0: an east neighbour, a west neighbour,
\item nbondi = 1: no east processor, a west neighbour,
\item nbondi = 2: no splitting along the i-axis.
\end{itemize}
During the simulation, processors exchange data with their neighbours.
If there is indeed a neighbour, the processor receives variables from this
processor on its overlapping row, and sends the data from its internal
domain corresponding to the overlapping row of the other processor.
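
The exchange can be sketched with plain NumPy copies standing in for the MPI
messages (illustrative only; two subdomains that are neighbours along the
i-axis and a one-column halo are assumed):
\begin{verbatim}
import numpy as np

left  = np.random.rand(6, 4)    # local arrays: column 0 and column -1
right = np.random.rand(6, 4)    # are the received halo columns

# each side sends its innermost overlapping column, which the
# neighbour receives into its halo column
right[:, 0] = left[:, -2]       # "send" left -> right
left[:, -1] = right[:, 1]       # "send" right -> left
\end{verbatim}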


The \NEMO model computes equation terms with the help of mask arrays (0 on land
points and 1 on sea points). This approach is easily readable and very efficient on
computers with a vector architecture. However, on a scalar processor,
computations over the land regions become more expensive in terms of CPU time.
The problem is worse for complex configurations with a realistic bathymetry, like the
global ocean, where more than 50\% of points are land points. For this reason, a
pre-processing tool can be used to choose the mpp domain decomposition with a
maximum number of land-only processors, which can then be eliminated (\autoref{fig:mppini2})
(for example, the mpp\_optimiz tools, available from the DRAKKAR web site).
This optimisation depends on the specific bathymetry employed. The user
then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with
$jpnij < jpni \times jpnj$, leading to the elimination of $jpni \times jpnj - jpnij$
land processors. When those parameters are specified in the \ngn{nammpp} namelist,
the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbondi,
nono, noea, ...) so that the land-only processors are not taken into account.
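
The counting of land-only subdomains can be sketched as follows (a toy
land-sea mask and a regular splitting are assumed; this is not the
mpp\_optimiz tool):
\begin{verbatim}
import numpy as np

jpni, jpnj = 4, 3
mask = np.zeros((120, 90))          # hypothetical global land-sea mask
mask[10:100, 5:40] = 1              # 1 = ocean, 0 = land

isub = np.array_split(np.arange(mask.shape[0]), jpni)
jsub = np.array_split(np.arange(mask.shape[1]), jpnj)
land_only = sum(1 for ii in isub for jj in jsub
                if mask[np.ix_(ii, jj)].max() == 0)
print(jpni * jpnj - land_only, "processors actually needed (jpnij)")
\end{verbatim}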

\gmcomment{Note that the inimpp2 routine is general so that the original inimpp
routine should be suppressed from the code.}

When land processors are eliminated, the value corresponding to these locations in
the model output files is undefined. Note that this is a problem for the meshmask file,
which needs to be defined over the whole domain. Therefore, the user should not eliminate
land processors when creating a meshmask file ($i.e.$ when setting a non-zero value to \np{nn\_msh}).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]     \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_mppini2}
\caption {    \protect\label{fig:mppini2}
Example of an Atlantic domain defined for the CLIPPER project. The initial grid is
composed of $773 \times 1236$ horizontal points.
(a) The domain is split into $9 \times 20$ subdomains (jpni=9, jpnj=20);
52 subdomains are land areas.
(b) The 52 land subdomains are eliminated (white rectangles) and the resulting number
of processors really used during the computation is jpnij=128.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>


% ====================================================================
% Unstructured open boundaries BDY
% ====================================================================
\section{Unstructured open boundary conditions (BDY)}
\label{sec:LBC_bdy}

%-----------------------------------------nambdy--------------------------------------------

\nlst{nambdy}
%-----------------------------------------------------------------------------------------------
%-----------------------------------------nambdy_index--------------------------------------------
%
%\nlst{nambdy_index}
%-----------------------------------------------------------------------------------------------
%-----------------------------------------nambdy_dta--------------------------------------------

\nlst{nambdy_dta}
%-----------------------------------------------------------------------------------------------
%-----------------------------------------nambdy_dta--------------------------------------------
%
%\nlst{nambdy_dta2}
%-----------------------------------------------------------------------------------------------

Options are defined through the \ngn{nambdy}, \ngn{nambdy\_index},
\ngn{nambdy\_dta} and \ngn{nambdy\_dta2} namelist variables.
The BDY module is the core implementation of open boundary
conditions for regional configurations. It implements the Flow
Relaxation Scheme algorithm for temperature, salinity, velocities and
ice fields, and the Flather radiation condition for the depth-mean
transports. The specification of the location of the open boundary is
completely flexible and allows, for example, the open boundary to follow
an isobath or other irregular contour.

The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many
features and a similar coding structure \citep{Chanut2005}.

Boundary data files used with earlier versions of NEMO may need
to be re-ordered to work with this version. See the
section on the Input Boundary Data Files for details.

%----------------------------------------------
\subsection{Namelists}
\label{subsec:BDY_namelist}

The BDY module is activated by setting \np{ln\_bdy} to true.
It is possible to define more than one boundary ``set'' and apply
different boundary conditions to each set. The number of boundary
sets is defined by \np{nb\_bdy}. Each boundary set may be defined
as a set of straight line segments in a namelist
(\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file
(\np{ln\_coords\_file}\forcode{ = .true.}). If the set is defined in a namelist,
then a nambdy\_index namelist must be included separately, one for
each set. If the set is defined by a file, then a
``\ifile{coordinates.bdy}'' file must be provided. The coordinates.bdy file
is analogous to the usual NEMO ``\ifile{coordinates}'' file. In the example
above, there are two boundary sets, the first of which is defined via
a file and the second in a namelist. For more details of
the definition of the boundary geometry see
\autoref{subsec:BDY_geometry}.

For each boundary set a boundary
condition has to be chosen for the barotropic solution (``u2d'':
sea-surface height and barotropic velocities), for the baroclinic
velocities (``u3d''), and for the active tracers\footnote{The BDY
  module does not deal with passive tracers in this version}
(``tra''). For each set of variables there is a choice of algorithm
and a choice for the data; $e.g.$ for the active tracers the algorithm is
set by \np{nn\_tra} and the choice of data is set by
\np{nn\_tra\_dta}.

The choice of algorithm is currently as follows:

\mbox{}

\begin{itemize}
\item[0.] No boundary condition applied. So the solution will ``see''
  the land points around the edge of the domain.
\item[1.] Flow Relaxation Scheme (FRS) available for all variables.
\item[2.] Flather radiation scheme for the barotropic variables. The
  Flather scheme is not compatible with the filtered free surface
  ({\it dynspg\_ts}).
\end{itemize}

\mbox{}

The main choice for the boundary data is
to use initial conditions as boundary data (\np{nn\_tra\_dta}\forcode{ = 0}) or to
use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}). For the
barotropic solution there is also the option to use tidal
harmonic forcing either by itself or in addition to other external
data.

If external boundary data is required then the nambdy\_dta namelist
must be defined. One nambdy\_dta namelist is required for each boundary
set, in the order in which the boundary sets are defined in nambdy. In
the example given, two boundary sets have been defined and so there
are two nambdy\_dta namelists. The boundary data is read in using the
fldread module, so the nambdy\_dta namelist is in the format required
for fldread. For each variable required, the filename, the frequency
of the files and the frequency of the data in the files are given, together
with whether time-interpolation is required and whether the data are
climatological (time-cyclic). Note that on-the-fly spatial
interpolation of boundary data is not available in this version.

In the example namelists given, two boundary sets are defined. The
first set is defined via a file and applies FRS conditions to
temperature and salinity and Flather conditions to the barotropic
variables. External data is provided in daily files (from a
large-scale model). Tidal harmonic forcing is also used. The second
set is defined in a namelist. FRS conditions are applied on
temperature and salinity and climatological data is read from external
files.

%----------------------------------------------
\subsection{Flow relaxation scheme}
\label{subsec:BDY_FRS_scheme}

The Flow Relaxation Scheme (FRS) \citep{Davies_QJRMS76,Engerdahl_Tel95}
applies a simple relaxation of the model fields to
externally-specified values over a zone next to the edge of the model
domain. Given a model prognostic variable $\Phi$:
\begin{equation}  \label{eq:bdy_frs1}
\Phi(d) = \alpha(d)\Phi_{e}(d) + (1-\alpha(d))\Phi_{m}(d)\;\;\;\;\; d=1,N
\end{equation}
where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified
external field, $d$ gives the discrete distance from the model
boundary and $\alpha$ is a parameter that varies from $1$ at $d=1$ to
a small value at $d=N$. It can be shown that this scheme is equivalent
to adding a relaxation term to the prognostic equation for $\Phi$ of
the form:
\begin{equation}  \label{eq:bdy_frs2}
-\frac{1}{\tau}\left(\Phi - \Phi_{e}\right)
\end{equation}
where the relaxation time scale $\tau$ is given by a function of
$\alpha$ and the model time step $\Delta t$:
\begin{equation}  \label{eq:bdy_frs3}
\tau = \frac{1-\alpha}{\alpha}  \,\rdt
\end{equation}
Thus the model solution is completely prescribed by the external
conditions at the edge of the model domain and is relaxed towards the
external conditions over the rest of the FRS zone. The application of
a relaxation zone helps to prevent spurious reflection of outgoing
signals from the model boundary.

The function $\alpha$ is specified as a $\tanh$ function:
\begin{equation}  \label{eq:bdy_frs4}
\alpha(d) = 1 - \tanh\left(\frac{d-1}{2}\right),       \quad d=1,N
\end{equation}
The width of the FRS zone is specified in the namelist as
\np{nn\_rimwidth}; it is typically set to a value between 8 and 10.
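
A minimal Python/NumPy sketch of the scheme, combining \autoref{eq:bdy_frs1}
and \autoref{eq:bdy_frs4} over the rim zone (field values and the rim width
are illustrative assumptions):
\begin{verbatim}
import numpy as np

N = 9                                   # nn_rimwidth (illustrative)
d = np.arange(1, N + 1)                 # distance from the boundary
alpha = 1.0 - np.tanh((d - 1) / 2.0)    # 1 at d=1, small at d=N

phi_m = np.full(N, 10.0)                # model solution in the rim zone
phi_e = np.full(N, 12.0)                # external boundary data

phi = alpha * phi_e + (1.0 - alpha) * phi_m
print(phi)   # external value at d=1, model value recovered near d=N
\end{verbatim}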

%----------------------------------------------
\subsection{Flather radiation scheme}
\label{subsec:BDY_flather_scheme}

The \citet{Flather_JPO94} scheme is a radiation condition on the normal, depth-mean
transport across the open boundary. It takes the form
\begin{equation}  \label{eq:bdy_fla1}
U = U_{e} + \frac{c}{h}\left(\eta - \eta_{e}\right),
\end{equation}
where $U$ is the depth-mean velocity normal to the boundary and $\eta$
is the sea surface height, both from the model. The subscript $e$
indicates the same fields from external sources. The speed of external
gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the
water column. The depth-mean normal velocity along the edge of the
model domain is set equal to the
external depth-mean normal velocity, plus a correction term that
allows gravity waves generated internally to exit the model boundary.
Note that the sea-surface height gradient in \autoref{eq:bdy_fla1}
is a spatial gradient across the model boundary, so that $\eta_{e}$ is
defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the
$T$ points with $nbr=2$. $U$ and $U_{e}$ are defined on the $U$ or
$V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points.
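
The condition itself is a one-line update; the Python/NumPy sketch below
applies \autoref{eq:bdy_fla1} at a few boundary points (all input values are
illustrative assumptions):
\begin{verbatim}
import numpy as np

g = 9.81
h     = np.array([50.0, 100.0, 200.0])   # water depth at the boundary
U_e   = np.array([0.10, 0.05, 0.02])     # external depth-mean velocity
eta_e = np.array([0.30, 0.25, 0.20])     # external SSH (T-points, nbr=1)
eta   = np.array([0.35, 0.24, 0.22])     # model SSH    (T-points, nbr=2)

c = np.sqrt(g * h)                       # external gravity-wave speed
U = U_e + c / h * (eta - eta_e)          # corrected depth-mean velocity
print(U)
\end{verbatim}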

%----------------------------------------------
\subsection{Boundary geometry}
\label{subsec:BDY_geometry}

Each open boundary set is defined as a list of points. The information
is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$
structure. The $nbi$ and $nbj$ arrays
define the local $(i,j)$ indices of each point in the boundary zone
and the $nbr$ array defines the discrete distance from the boundary,
with $nbr=1$ meaning that the point is next to the edge of the
model domain and $nbr>1$ showing that the point is increasingly
further away from the edge of the model domain. A set of $nbi$, $nbj$,
and $nbr$ arrays is defined for each of the $T$, $U$ and $V$
grids. \autoref{fig:LBC_bdy_geom} shows an example of an irregular
boundary.

The boundary geometry for each set may be defined in a namelist
nambdy\_index or by reading in a ``\ifile{coordinates.bdy}'' file. The
nambdy\_index namelist defines a series of straight-line segments for
north, east, south and west boundaries. For the northern boundary,
\np{nbdysegn} gives the number of segments, \np{jpjnob} gives the $j$
index for each segment, and \np{jpindt} and \np{jpinft} give the start
and end $i$ indices for each segment, with similar parameters for the other
boundaries. These segments define a list of $T$ grid points along the
outermost row of the boundary ($nbr\,=\, 1$). The code deduces the $U$ and
$V$ points and also the points for $nbr\,>\, 1$ if
$nn\_rimwidth\,>\,1$.
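
The deduction of boundary points from one such segment can be sketched as
follows (a single northern segment and rim rows moving southward are assumed;
the names mirror the namelist parameters but the code is illustrative only):
\begin{verbatim}
# one northern-boundary segment: j row, start and end i indices
jpjnob, jpindt, jpinft = 40, 10, 25      # hypothetical values
nn_rimwidth = 3

nbi, nbj, nbr = [], [], []
for r in range(1, nn_rimwidth + 1):      # rows further from the edge
    for i in range(jpindt, jpinft + 1):
        nbi.append(i)
        nbj.append(jpjnob - (r - 1))     # northern boundary: move south
        nbr.append(r)
print(list(zip(nbi, nbj, nbr))[:5])
\end{verbatim}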

The boundary geometry may also be defined from a
``\ifile{coordinates.bdy}'' file. \autoref{fig:LBC_nc_header}
gives an example of the header information from such a file. The file
should contain the index arrays for each of the $T$, $U$ and $V$
grids. The arrays must be in order of increasing $nbr$. Note that the
$nbi$, $nbj$ values in the file are global values and are converted to
local values in the code. Typically this file will be used to generate
external boundary data via interpolation and so will also contain the
latitudes and longitudes of each point as shown. However, this is not
necessary to run the model.

For some choices of irregular boundary the model domain may contain
areas of ocean which are not part of the computational domain. For
example, if an open boundary is defined along an isobath, say at the
shelf break, then the areas of ocean outside of this boundary will
need to be masked out. This can be done by reading a mask file, defined
as \np{cn\_mask\_file} in the \ngn{nambdy} namelist. Only one mask file is
used even if multiple boundary sets are defined.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]      \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_bdy_geom}
\caption {      \protect\label{fig:LBC_bdy_geom}
Example of geometry of unstructured open boundary}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%----------------------------------------------
\subsection{Input boundary data files}
\label{subsec:BDY_data}

The data files contain the data arrays
in the order in which the points are defined in the $nbi$ and $nbj$
arrays. The data arrays are dimensioned on: a time dimension;
$xb$ which is the index of the boundary data point in the horizontal;
and $yb$ which is a degenerate dimension of 1 to enable the file to be
read by the standard NEMO I/O routines. The 3D fields also have a
depth dimension.

At Version 3.4 there are new restrictions on the order in which the
boundary points are defined (and therefore restrictions on the order
of the data in the file). In particular:

\mbox{}

\begin{enumerate}
\item The data points must be in order of increasing $nbr$, $i.e.$ all
  the $nbr=1$ points, then all the $nbr=2$ points, etc.
\item All the data for a particular boundary set must be in the same
  order. (Prior to 3.4 it was possible to define barotropic data in a
  different order to the data for tracers and baroclinic velocities.)
\end{enumerate}

\mbox{}

These restrictions mean that data files used with previous versions of
the model may not work with version 3.4. A Fortran utility,
{\it bdy\_reorder}, exists in the TOOLS directory to re-order the
data in old BDY data files.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]     \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_nc_header}
\caption {     \protect\label{fig:LBC_nc_header}
Example of the header for a \protect\ifile{coordinates.bdy} file}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%----------------------------------------------
\subsection{Volume correction}
\label{subsec:BDY_vol_corr}

There is an option to force the total volume in the regional model to be constant,
similar to the option in the OBC module. This is controlled by the \np{nn\_volctl}
parameter in the namelist. A value of \np{nn\_volctl}\forcode{ = 0} indicates that this option is not used.
If \np{nn\_volctl}\forcode{ = 1} then a correction is applied to the normal velocities
around the boundary at each timestep to ensure that the integrated volume flow
through the boundary is zero. If \np{nn\_volctl}\forcode{ = 2} then the calculation of
the volume change on the timestep includes the change due to the freshwater
flux across the surface, and the correction velocity corrects for this as well.

If more than one boundary set is used then the volume correction is
applied to all boundaries at once.
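
The idea behind \np{nn\_volctl}\forcode{ = 1} can be sketched as a uniform
correction that cancels the net flux (boundary velocities and face areas are
illustrative assumptions, and this is not the NEMO routine):
\begin{verbatim}
import numpy as np

u_bdy = np.array([0.10, -0.02, 0.05, 0.01])     # outward normal velocities
area  = np.array([1.0e6, 1.2e6, 0.9e6, 1.1e6])  # boundary face areas (m^2)

net_flux = np.sum(u_bdy * area)                 # m^3/s through the boundary
u_corr = u_bdy - net_flux / np.sum(area)        # uniform correction
print(np.sum(u_corr * area))                    # ~0: volume is conserved
\end{verbatim}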

\newpage
%----------------------------------------------
\subsection{Tidal harmonic forcing}
\label{subsec:BDY_tides}

%-----------------------------------------nambdy_tide--------------------------------------------

\nlst{nambdy_tide}
%-----------------------------------------------------------------------------------------------

Options are defined through the \ngn{nambdy\_tide} namelist variables.
To be written....




\end{document}