% ================================================================
% Chapter — Lateral Boundary Condition (LBC)
% ================================================================
\chapter{Lateral Boundary Condition (LBC)}

$\ $\newline    % force a new line

%gm% add here introduction to this chapter

% ================================================================
% Boundary Condition at the Coast
% ================================================================
\section{Boundary condition at the coast (\protect\np{rn\_shlat})}

%The lateral ocean boundary conditions contiguous to coastlines are Neumann conditions for heat and salt (no flux across boundaries) and Dirichlet conditions for momentum (ranging from free-slip to "strong" no-slip). They are handled automatically by the mask system (see \autoref{subsec:DOM_msk}).

%OPA allows land and topography grid points in the computational domain due to the presence of continents or islands, and includes the use of a full or partial step representation of bottom topography. The computation is performed over the whole domain, i.e. we do not try to restrict the computation to ocean-only points. This choice has two motivations. Firstly, working on ocean only grid points overloads the code and harms the code readability. Secondly, and more importantly, it drastically reduces the vector portion of the computation, leading to a dramatic increase of CPU time requirement on vector computers. The current section describes how the masking affects the computation of the various terms of the equations with respect to the boundary condition at solid walls. The process of defining which areas are to be masked is described in \autoref{subsec:DOM_msk}.

Options are defined through the \ngn{namlbc} namelist variables.
The discrete representation of a domain with complex boundaries (coastlines and
bottom topography) leads to arrays that include large portions where a computation
is not required as the model variables remain at zero. Nevertheless, vector
supercomputers are far more efficient when computing over a whole array, and the
readability of the code is greatly improved when boundary conditions are applied in
an automatic way rather than by a specific computation before or after each
computational loop. An efficient way to work over the whole domain while specifying
the boundary conditions is to use multiplication by mask arrays in the computation.
A mask array is a matrix whose elements are $1$ in the ocean domain and $0$
elsewhere. A simple multiplication of a variable by its own mask ensures that it will
remain zero over land areas. Since most of the boundary conditions consist of a
zero flux across the solid boundaries, they can be simply applied by multiplying
variables by the correct mask arrays, $i.e.$ the mask array of the grid point where
the flux is evaluated. For example, the heat flux in the \textbf{i}-direction is evaluated
at $u$-points. Evaluating this quantity as
\begin{equation} \label{eq:lbc_aaaa}
\frac{A^{lT}}{e_1} \frac{\partial T}{\partial i} \equiv \frac{A_u^{lT}}{e_{1u}} \; \delta_{i+1/2} \left[ T \right] \;\; mask_u
\end{equation}
(where mask$_{u}$ is the mask array at a $u$-point) ensures that the heat flux is
zero inside land and at the boundaries, since mask$_{u}$ is zero at solid boundaries
which in this case are defined at $u$-points (normal velocity $u$ remains zero at
the coast) (\autoref{fig:LBC_uv}).
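
For illustration, a minimal Fortran sketch of this masked flux evaluation
(this is not the actual NEMO code; the array name \texttt{ztu}, the scalar
diffusivity \texttt{aht} and the loop bounds are assumptions):
\begin{verbatim}
! i-direction diffusive heat flux at u-points, zeroed on land by umask
! tn, umask, e1u are (jpi,jpj) arrays; aht is a constant diffusivity
DO jj = 1, jpj
   DO ji = 1, jpi-1
      ! delta_{i+1/2}[T] multiplied by the u-mask: the flux vanishes
      ! on land and at the coast, where umask = 0
      ztu(ji,jj) = aht / e1u(ji,jj) * ( tn(ji+1,jj) - tn(ji,jj) ) * umask(ji,jj)
   END DO
END DO
\end{verbatim}
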
\begin{figure}[!t]     \begin{center}
\caption{  \protect\label{fig:LBC_uv}
Lateral boundary (thick line) at T-level. The velocity normal to the boundary is set to zero.}
\end{center}   \end{figure}

For momentum the situation is a bit more complex as two boundary conditions
must be provided along the coast (one each for the normal and tangential velocities).
The boundary of the ocean in the C-grid is defined by the velocity-faces.
For example, at a given $T$-level, the lateral boundary (a coastline or an intersection
with the bottom topography) is made of segments joining $f$-points, and normal
velocity points are located between two $f$-points (\autoref{fig:LBC_uv}).
The boundary condition on the normal velocity (no flux through solid boundaries)
can thus be easily implemented using the mask system. The boundary condition
on the tangential velocity requires a more specific treatment. This boundary
condition influences the relative vorticity and momentum diffusive trends, and is
required in order to compute the vorticity at the coast. Four different types of
lateral boundary condition are available, controlled by the value of the \np{rn\_shlat}
namelist parameter. (The value of the mask$_{f}$ array along the coastline is set
equal to this parameter.) These are:

\begin{figure}[!p] \begin{center}
\caption{     \protect\label{fig:LBC_shlat}
lateral boundary condition (a) free-slip ($rn\_shlat=0$); (b) no-slip ($rn\_shlat=2$);
(c) "partial" free-slip ($0<rn\_shlat<2$) and (d) "strong" no-slip ($2<rn\_shlat$).
The implied "ghost" velocity inside the land area is displayed in grey. }
\end{center}    \end{figure}

\begin{description}

\item[free-slip boundary condition (\np{rn\_shlat}\forcode{ = 0}):] the tangential velocity at the
coastline is equal to the offshore velocity, $i.e.$ the normal derivative of the
tangential velocity is zero at the coast, and so is the vorticity: the mask$_{f}$ array is set
to zero inside the land and just at the coast (\autoref{fig:LBC_shlat}-a).

\item[no-slip boundary condition (\np{rn\_shlat}\forcode{ = 2}):] the tangential velocity vanishes
at the coastline. Assuming that the tangential velocity decreases linearly from
the closest ocean velocity grid point to the coastline, the normal derivative is
evaluated as if the velocities at the closest land velocity gridpoint and the closest
ocean velocity gridpoint were of the same magnitude but in the opposite direction
(\autoref{fig:LBC_shlat}-b). Therefore, the vorticity along the coastlines is given by:
\[
\zeta \equiv 2 \left( \delta_{i+1/2} \left[ e_{2v}\,v \right] - \delta_{j+1/2} \left[ e_{1u}\,u \right] \right) / \left( e_{1f}\,e_{2f} \right) \ ,
\]
where $u$ and $v$ are masked fields. Setting the mask$_{f}$ array to $2$ along
the coastline provides a vorticity field computed with the no-slip boundary condition,
simply by multiplying it by the mask$_{f}$:
\begin{equation} \label{eq:lbc_bbbb}
\zeta \equiv \frac{1}{e_{1f}\,e_{2f}} \left( \delta_{i+1/2} \left[ e_{2v}\,v \right] - \delta_{j+1/2} \left[ e_{1u}\,u \right] \right) \; mask_f
\end{equation}
114\item["partial" free-slip boundary condition (0$<$\np{rn\_shlat}$<$2): ] the tangential
115velocity at the coastline is smaller than the offshore velocity, $i.e.$ there is a lateral
116friction but not strong enough to make the tangential velocity at the coast vanish
117(\autoref{fig:LBC_shlat}-c). This can be selected by providing a value of mask$_{f}$ 
118strictly inbetween $0$ and $2$.
120\item["strong" no-slip boundary condition (2$<$\np{rn\_shlat}): ] the viscous boundary
121layer is assumed to be smaller than half the grid size (\autoref{fig:LBC_shlat}-d).
122The friction is thus larger than in the no-slip case.
Note that when the bottom topography is entirely represented by $s$-coordinates
(pure $s$-coordinate), the lateral boundary condition on tangential velocity is of much
less importance as it is only applied next to the coast where the minimum water depth
can be quite shallow.

% ================================================================
% Boundary Condition around the Model Domain
% ================================================================
\section{Model domain boundary condition (\protect\np{jperio})}

At the model domain boundaries several choices are offered: closed, cyclic east-west,
cyclic north-south, north fold, and the combinations closed with north fold
or bi-cyclic east-west with north fold. The north-fold boundary condition is associated with the 3-pole ORCA mesh.

% -------------------------------------------------------------------------------------------------------------
%        Closed, cyclic (\np{jperio}\forcode{ = 0, 1, 2, 7})
% -------------------------------------------------------------------------------------------------------------
\subsection{Closed, cyclic (\protect\np{jperio}\forcode{ = 0, 1, 2, 7})}

The choice of closed or cyclic model domain boundary condition is made
by setting \np{jperio} to 0, 1, 2 or 7 in namelist \ngn{namcfg}. Each time such a boundary
condition is needed, it is set by a call to routine \mdl{lbclnk}. The computation of
momentum and tracer trends proceeds from $i=2$ to $i=jpi-1$ and from $j=2$ to
$j=jpj-1$, $i.e.$ in the model interior. To choose a lateral model boundary condition
is to specify the first and last rows and columns of the model variables.

\begin{description}

\item[For closed boundary (\np{jperio}\forcode{ = 0})], solid walls are imposed at all model
boundaries: first and last rows and columns are set to zero.

\item[For cyclic east-west boundary (\np{jperio}\forcode{ = 1})], first and last rows are set
to zero (closed) whilst the first column is set to the value of the last-but-one column
and the last column to the value of the second one (\autoref{fig:LBC_jperio}-a);
see the sketch after this list.
Whatever flows out of the eastern (western) end of the basin enters the western
(eastern) end.

\item[For cyclic north-south boundary (\np{jperio}\forcode{ = 2})], first and last columns are set
to zero (closed) whilst the first row is set to the value of the last-but-one row
and the last row to the value of the second one (\autoref{fig:LBC_jperio}-a).
Whatever flows out of the northern (southern) end of the basin enters the southern
(northern) end.

\item[Bi-cyclic east-west and north-south boundary (\np{jperio}\forcode{ = 7})] combines cases 1 and 2.

\end{description}
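
As an illustration of the cyclic east-west case, a minimal sketch of the column
copies described above (not the actual \mdl{lbclnk} code; the field name
\texttt{pt2d} is an assumption):
\begin{verbatim}
! east-west cyclic condition (jperio=1) applied to a field pt2d(jpi,jpj)
pt2d( 1 ,:) = pt2d(jpi-1,:)   ! first column <- last-but-one column
pt2d(jpi,:) = pt2d(  2  ,:)   ! last column  <- second column
\end{verbatim}
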
\begin{figure}[!t]     \begin{center}
\caption{    \protect\label{fig:LBC_jperio}
setting of (a) east-west cyclic and (b) symmetric across the equator boundary conditions.}
\end{center}   \end{figure}

% -------------------------------------------------------------------------------------------------------------
%        North fold (jperio = 3 to 6)
% -------------------------------------------------------------------------------------------------------------
\subsection{North-fold (\protect\np{jperio}\forcode{ = 3..6})}

The north fold boundary condition has been introduced in order to handle the north
boundary of a three-polar ORCA grid. Such a grid has two poles in the northern hemisphere
(\autoref{fig:MISC_ORCA_msh}), and thus requires a specific treatment illustrated in \autoref{fig:North_Fold_T}.
Further information can be found in the \mdl{lbcnfd} module which applies the north fold boundary condition.

\begin{figure}[!t]    \begin{center}
\caption{    \protect\label{fig:North_Fold_T}
North fold boundary with a $T$-point pivot and cyclic east-west boundary condition
($jperio=4$), as used in ORCA 2, 1/4, and 1/12. The pink shaded area corresponds
to the inner domain mask (see text). }
\end{center}   \end{figure}

% ====================================================================
% Exchange with neighbouring processors
% ====================================================================
\section{Exchange with neighbouring processors (\protect\mdl{lbclnk}, \protect\mdl{lib\_mpp})}

For massively parallel processing (mpp), a domain decomposition method is used.
The basic idea of the method is to split the large computation domain of a numerical
experiment into several smaller domains and solve the set of equations by addressing
independent local problems. Each processor has its own local memory and computes
the model equation over a subdomain of the whole model domain. The subdomain
boundary conditions are specified through communications between processors
which are organized by explicit statements (message passing method).

A big advantage is that the method does not require many modifications of the original
FORTRAN code. From the modeller's point of view, each subdomain running on
a processor is identical to the "mono-domain" code. In addition, the programmer
manages the communications between subdomains, and the code is faster when
the number of processors is increased. The porting of the OPA code to an iPSC860
was achieved during Guyon's PhD [Guyon et al. 1994, 1995] in collaboration with
CETIIS and ONERA. The implementation in the operational context and the studies
of performance on T3D and T3E Cray computers were made in collaboration
with IDRIS and CNRS. The present implementation is largely inspired by Guyon's
work [Guyon 1995].

The parallelization strategy is defined by the physical characteristics of the
ocean model. Second order finite difference schemes lead to local discrete
operators that depend at most on one neighbouring point. The only
non-local computations concern the vertical physics (implicit diffusion,
turbulent closure scheme, ...) (delocalization over the whole water column)
and the solving of the elliptic equation associated with the surface pressure
gradient computation (delocalization over the whole horizontal domain).
Therefore, a pencil strategy is used for the data sub-structuration:
the 3D initial domain is laid out on local processor
memories following a 2D horizontal topological splitting. Each sub-domain
computes its own surface and bottom boundary conditions and has a side
wall overlapping interface which defines the lateral boundary conditions for
computations in the inner sub-domain. The overlapping area consists of the
two rows at each edge of the sub-domain. After a computation, a communication
phase starts: each processor sends to its neighbouring processors the updated
values of the points of its interior overlapping area ($i.e.$ the innermost of
the two overlapping rows).
The communication is done through the Message Passing Interface (MPI).
The data exchanges between processors are required at the very
place where lateral domain boundary conditions are set in the mono-domain
computation: the \rou{lbc\_lnk} routine (found in the \mdl{lbclnk} module),
which manages such conditions, is interfaced with routines found in the \mdl{lib\_mpp} module
when running on an MPP computer ($i.e.$ when \key{mpp\_mpi} is defined).
It has to be pointed out that when using the MPP version of the model,
the east-west cyclic boundary condition is applied implicitly,
whilst the south-symmetric boundary condition option is not available.

\begin{figure}[!t]    \begin{center}
\caption{   \protect\label{fig:mpp}
Positioning of a sub-domain when massively parallel processing is used. }
\end{center}   \end{figure}

In the standard version of \NEMO, the splitting is regular and arithmetic.
The i-axis is divided by \jp{jpni} and the j-axis by \jp{jpnj} for a number of processors
\jp{jpnij} most often equal to $jpni \times jpnj$ (parameters set in the
\ngn{nammpp} namelist). Each processor is independent: without message passing
or synchronisation, each program runs alone and accesses only its own local memory.
For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil),
named \jp{jpi}, \jp{jpj}, \jp{jpk}. These dimensions include the internal
domain and the overlapping rows. The number of rows to exchange (known as
the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}). The whole domain
dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}. The relationship between
the whole domain and a sub-domain is:
\begin{eqnarray}
      jpi & = & ( jpiglo-2*jpreci + (jpni-1) ) / jpni + 2*jpreci  \nonumber \\
      jpj & = & ( jpjglo-2*jprecj + (jpnj-1) ) / jpnj + 2*jprecj  \label{eq:lbc_jpi}
\end{eqnarray}
where \jp{jpni}, \jp{jpnj} are the number of processors along the i- and j-axes,
and the divisions are integer divisions.
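
As a worked example of \autoref{eq:lbc_jpi} (illustrative values only, not taken
from a real configuration; the divisions are Fortran integer divisions):
\begin{verbatim}
! a 182 x 149 global domain split over jpni=4, jpnj=2 with a one-row halo
jpiglo = 182 ; jpjglo = 149 ; jpni = 4 ; jpnj = 2 ; jpreci = 1 ; jprecj = 1
jpi = ( jpiglo - 2*jpreci + (jpni-1) ) / jpni + 2*jpreci   ! = 183/4 + 2 = 47
jpj = ( jpjglo - 2*jprecj + (jpnj-1) ) / jpnj + 2*jprecj   ! = 148/2 + 2 = 76
\end{verbatim}
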
One also defines the variables nldi and nlei which correspond to the internal domain bounds,
and the variables nimpp and njmpp which give the position of the (1,1) grid-point in the global domain.
An element of $T_{l}$, a local array (subdomain), corresponds to an element of $T_{g}$,
a global array (whole domain), through the relationship:
\begin{equation} \label{eq:lbc_nimpp}
T_{g} (i+nimpp-1,j+njmpp-1,k) = T_{l} (i,j,k),
\end{equation}
with $1 \leq i \leq jpi$, $1 \leq j \leq jpj$, and $1 \leq k \leq jpk$.
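
In code form, the conversion from local to global indices is simply the
following sketch (the names \texttt{ig} and \texttt{jg} are assumptions):
\begin{verbatim}
! global position (ig,jg) of the local point (ji,jj) of this subdomain
ig = ji + nimpp - 1
jg = jj + njmpp - 1
\end{verbatim}
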
Processors are numbered from 0 to $jpnij-1$; the number is saved in the variable
nproc. In the standard version, a processor has no more than four neighbouring
processors named nono (for north), noea (east), noso (south) and nowe (west),
and two variables, nbondi and nbondj, indicate the relative position of the processor:
\begin{itemize}
\item nbondi = -1: an east neighbour, no west processor,
\item nbondi =  0: an east neighbour, a west neighbour,
\item nbondi =  1: no east processor, a west neighbour,
\item nbondi =  2: no splitting along the i-axis.
\end{itemize}

During the simulation, processors exchange data with their neighbours.
If there is effectively a neighbour, the processor receives variables from this
processor on its overlapping row, and sends the data from its internal
domain corresponding to the overlapping row of the other processor.
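
A minimal sketch of such an exchange in the north-south direction, assuming a
one-row halo, a 2D field \texttt{pt2d(jpi,jpj)} and the neighbour ranks nono and
noso introduced above (this is not the actual \mdl{lib\_mpp} code; istatus is an
INTEGER array of size MPI\_STATUS\_SIZE):
\begin{verbatim}
! send the innermost overlapping row northward and receive the
! southern halo row from the south neighbour
CALL MPI_SENDRECV( pt2d(:,jpj-1), jpi, MPI_DOUBLE_PRECISION, nono, 1, &
   &               pt2d(:,  1  ), jpi, MPI_DOUBLE_PRECISION, noso, 1, &
   &               MPI_COMM_WORLD, istatus, ierr )
! send the second row southward and receive the northern halo row
CALL MPI_SENDRECV( pt2d(:,  2  ), jpi, MPI_DOUBLE_PRECISION, noso, 2, &
   &               pt2d(:, jpj ), jpi, MPI_DOUBLE_PRECISION, nono, 2, &
   &               MPI_COMM_WORLD, istatus, ierr )
\end{verbatim}
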
The \NEMO model computes equation terms with the help of mask arrays (0 on land
points and 1 on sea points). This is easily readable and very efficient in the context of
a computer with vector architecture. However, in the case of a scalar processor,
computations over the land regions become more expensive in terms of CPU time.
It is worse when we use a complex configuration with a realistic bathymetry like the
global ocean where more than 50~\% of points are land points. For this reason, a
pre-processing tool can be used to choose the mpp domain decomposition with a
maximum number of land-only processors, which can then be eliminated (\autoref{fig:mppini2})
(for example, the mpp\_optimiz tools, available from the DRAKKAR web site).
This optimisation is dependent on the specific bathymetry employed. The user
then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with
$jpnij < jpni \times jpnj$, leading to the elimination of $jpni \times jpnj - jpnij$
land processors. When those parameters are specified in the \ngn{nammpp} namelist,
the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbondi,
nono, noea, ...) so that the land-only processors are not taken into account.

\gmcomment{Note that the inimpp2 routine is general so that the original inimpp
routine should be suppressed from the code.}

When land processors are eliminated, the value corresponding to these locations in
the model output files is undefined. Note that this is a problem for the meshmask file,
which needs to be defined over the whole domain. Therefore, the user should not eliminate
land processors when creating a meshmask file ($i.e.$ when setting a non-zero value to \np{nn\_msh}).

\begin{figure}[!ht]     \begin{center}
\caption {    \protect\label{fig:mppini2}
Example of an Atlantic domain defined for the CLIPPER project. The initial grid is
composed of $773 \times 1236$ horizontal points.
(a) the domain is split into $9 \times 20$ subdomains (jpni=9, jpnj=20).
52 subdomains are land areas.
(b) the 52 land subdomains are eliminated (white rectangles) and the resulting number
of processors really used during the computation is jpnij=128.}
\end{center}   \end{figure}

% ====================================================================
% Unstructured open boundaries BDY
% ====================================================================
\section{Unstructured open boundary conditions (BDY)}

Options are defined through the \ngn{nambdy}, \ngn{nambdy\_index},
\ngn{nambdy\_dta} and \ngn{nambdy\_dta2} namelist variables.
The BDY module is the core implementation of open boundary
conditions for regional configurations. It implements the Flow
Relaxation Scheme algorithm for temperature, salinity, velocities and
ice fields, and the Flather radiation condition for the depth-mean
transports. The specification of the location of the open boundary is
completely flexible and allows for example the open boundary to follow
an isobath or other irregular contour.

The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many
features and a similar coding structure \citep{Chanut2005}.

Boundary data files used with earlier versions of NEMO may need
to be re-ordered to work with this version. See the
section on the Input Boundary Data Files for details.

The BDY module is activated by setting \np{ln\_bdy} to true.
It is possible to define more than one boundary ``set'' and apply
different boundary conditions to each set. The number of boundary
sets is defined by \np{nb\_bdy}. Each boundary set may be defined
as a set of straight line segments in a namelist
(\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file
(\np{ln\_coords\_file}\forcode{ = .true.}). If the set is defined in a namelist,
then a nambdy\_index namelist must be included separately, one for
each set. If the set is defined by a file, then a
``\ifile{coordinates.bdy}'' file must be provided. The coordinates.bdy file
is analogous to the usual NEMO ``\ifile{coordinates}'' file. In the example
above, there are two boundary sets, the first of which is defined via
a file and the second is defined in a namelist. For more details of
the definition of the boundary geometry see the Boundary geometry section below.
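
As an illustration, a minimal \ngn{nambdy} fragment for a single boundary set
might look as follows (only parameters discussed in this section are shown and
the values are illustrative, not defaults):
\begin{verbatim}
&nambdy            ! unstructured open boundaries (illustrative values)
   ln_bdy         = .true.   ! activate the BDY module
   nb_bdy         = 1        ! number of open boundary sets
   ln_coords_file = .true.   ! read geometry from a coordinates.bdy file
   nn_tra         = 1        ! FRS algorithm for the active tracers
   nn_tra_dta     = 1        ! tracer boundary data from external files
   nn_rimwidth    = 9        ! width of the FRS relaxation zone
/
\end{verbatim}
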
For each boundary set a boundary
condition has to be chosen for the barotropic solution (``u2d'':
sea-surface height and barotropic velocities), for the baroclinic
velocities (``u3d''), and for the active tracers\footnote{The BDY
  module does not deal with passive tracers at this version}
(``tra''). For each set of variables there is a choice of algorithm
and a choice for the data, $e.g.$ for the active tracers the algorithm is
set by \np{nn\_tra} and the choice of data is set by \np{nn\_tra\_dta}.

The choice of algorithm is currently as follows:

\begin{description}
\item[0.] No boundary condition applied, so the solution will ``see''
  the land points around the edge of the domain.
\item[1.] Flow Relaxation Scheme (FRS) available for all variables.
\item[2.] Flather radiation scheme for the barotropic variables. The
  Flather scheme is not compatible with the filtered free surface
  ({\it dynspg\_flt}).
\end{description}

The main choice for the boundary data is
to use initial conditions as boundary data (\np{nn\_tra\_dta}\forcode{ = 0}) or to
use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}). For the
barotropic solution there is also the option to use tidal
harmonic forcing either by itself or in addition to other external data.

If external boundary data is required then the nambdy\_dta namelist
must be defined. One nambdy\_dta namelist is required for each boundary
set, in the order in which the boundary sets are defined in nambdy. In
the example given, two boundary sets have been defined and so there
are two nambdy\_dta namelists. The boundary data is read in using the
fldread module, so the nambdy\_dta namelist is in the format required
for fldread. For each variable required, the filename, the frequency
of the files and the frequency of the data in the files are given, as well as
whether or not time-interpolation is required and whether the data is
climatological (time-cyclic) data. Note that on-the-fly spatial
interpolation of boundary data is not available at this version.

In the example namelists given, two boundary sets are defined. The
first set is defined via a file and applies FRS conditions to
temperature and salinity and Flather conditions to the barotropic
variables. External data is provided in daily files (from a
large-scale model). Tidal harmonic forcing is also used. The second
set is defined in a namelist. FRS conditions are applied on
temperature and salinity and climatological data is read from external files.

\subsection{Flow relaxation scheme}

The Flow Relaxation Scheme (FRS) \citep{Davies_QJRMS76,Engerdahl_Tel95}
applies a simple relaxation of the model fields to
externally-specified values over a zone next to the edge of the model
domain. Given a model prognostic variable $\Phi$:
\begin{equation}  \label{eq:bdy_frs1}
\Phi(d) = \alpha(d)\Phi_{e}(d) + (1-\alpha(d))\Phi_{m}(d)\;\;\;\;\; d=1,N
\end{equation}
where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified
external field, $d$ gives the discrete distance from the model
boundary and $\alpha$ is a parameter that varies from $1$ at $d=1$ to
a small value at $d=N$. It can be shown that this scheme is equivalent
to adding a relaxation term to the prognostic equation for $\Phi$ of
the form:
\begin{equation}  \label{eq:bdy_frs2}
-\frac{1}{\tau}\left(\Phi - \Phi_{e}\right)
\end{equation}
where the relaxation time scale $\tau$ is given by a function of
$\alpha$ and the model time step $\Delta t$:
\begin{equation}  \label{eq:bdy_frs3}
\tau = \frac{1-\alpha}{\alpha}  \,\rdt
\end{equation}
Thus the model solution is completely prescribed by the external
conditions at the edge of the model domain and is relaxed towards the
external conditions over the rest of the FRS zone. The application of
a relaxation zone helps to prevent spurious reflection of outgoing
signals from the model boundary.

The function $\alpha$ is specified as a $\tanh$ function:
\begin{equation}  \label{eq:bdy_frs4}
\alpha(d) = 1 - \tanh\left(\frac{d-1}{2}\right),       \quad d=1,N
\end{equation}
The width of the FRS zone is specified in the namelist as
\np{nn\_rimwidth}. This is typically set to a value between 8 and 10.
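
A minimal sketch of the FRS update, combining \autoref{eq:bdy_frs1} and
\autoref{eq:bdy_frs4} (this is not the actual BDY code; the names
\texttt{nblen}, \texttt{phi} and \texttt{phi\_ext} are assumptions, while $nbr$
is the distance array described in the Boundary geometry subsection below):
\begin{verbatim}
SUBROUTINE bdy_frs_sketch( phi, phi_ext, nbr, nblen )
   ! relax the model field phi towards the external field phi_ext
   INTEGER, INTENT(in   ) ::   nblen            ! number of boundary points
   INTEGER, INTENT(in   ) ::   nbr(nblen)       ! distance from the boundary
   REAL(8), INTENT(in   ) ::   phi_ext(nblen)   ! external data
   REAL(8), INTENT(inout) ::   phi(nblen)       ! model field on the rim
   INTEGER ::   jb
   REAL(8) ::   zalpha
   DO jb = 1, nblen
      ! alpha(d) = 1 - tanh((d-1)/2), eq. (frs4)
      zalpha  = 1.d0 - TANH( ( REAL( nbr(jb), 8 ) - 1.d0 ) / 2.d0 )
      ! phi = alpha*phi_e + (1-alpha)*phi_m, eq. (frs1)
      phi(jb) = zalpha * phi_ext(jb) + ( 1.d0 - zalpha ) * phi(jb)
   END DO
END SUBROUTINE bdy_frs_sketch
\end{verbatim}
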
\subsection{Flather radiation scheme}

The \citet{Flather_JPO94} scheme is a radiation condition on the normal, depth-mean
transport across the open boundary. It takes the form
\begin{equation}  \label{eq:bdy_fla1}
U = U_{e} + \frac{c}{h}\left(\eta - \eta_{e}\right),
\end{equation}
where $U$ is the depth-mean velocity normal to the boundary and $\eta$
is the sea surface height, both from the model. The subscript $e$
indicates the same fields from external sources. The speed of external
gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the
water column. The depth-mean normal velocity along the edge of the
model domain is set equal to the
external depth-mean normal velocity, plus a correction term that
allows gravity waves generated internally to exit the model boundary.
Note that the sea-surface height gradient in \autoref{eq:bdy_fla1}
is a spatial gradient across the model boundary, so that $\eta_{e}$ is
defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the
$T$ points with $nbr=2$. $U$ and $U_{e}$ are defined on the $U$ or
$V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points.
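
In code form, the Flather correction amounts to the following sketch (not the
actual BDY code; \texttt{grav} is the gravitational acceleration and the other
names are assumptions), using $c/h = \sqrt{g/h}$:
\begin{verbatim}
! correct the external depth-mean velocity with the SSH mismatch
DO jb = 1, nblen
   u2d(jb) = u2d_ext(jb) + SQRT( grav / hbdy(jb) ) * ( ssh(jb) - ssh_ext(jb) )
END DO
\end{verbatim}
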
\subsection{Boundary geometry}

Each open boundary set is defined as a list of points. The information
is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$
structure. The $nbi$ and $nbj$ arrays
define the local $(i,j)$ indices of each point in the boundary zone
and the $nbr$ array defines the discrete distance from the boundary,
with $nbr=1$ meaning that the point is next to the edge of the
model domain and $nbr>1$ showing that the point is increasingly
further away from the edge of the model domain. A set of $nbi$, $nbj$,
and $nbr$ arrays is defined for each of the $T$, $U$ and $V$
grids. \autoref{fig:LBC_bdy_geom} shows an example of an irregular boundary.

The boundary geometry for each set may be defined in a namelist
nambdy\_index or by reading in a ``\ifile{coordinates.bdy}'' file. The
nambdy\_index namelist defines a series of straight-line segments for
north, east, south and west boundaries. For the northern boundary,
\np{nbdysegn} gives the number of segments, \np{jpjnob} gives the $j$
index for each segment and \np{jpindt} and \np{jpinft} give the start
and end $i$ indices for each segment, with similar parameters for the other
boundaries. These segments define a list of $T$ grid points along the
outermost row of the boundary ($nbr\,=\, 1$). The code deduces the $U$ and
$V$ points and also the points for $nbr\,>\, 1$ if \np{nn\_rimwidth} is greater than 1.

The boundary geometry may also be defined from a
``\ifile{coordinates.bdy}'' file. \autoref{fig:LBC_nc_header}
gives an example of the header information from such a file. The file
should contain the index arrays for each of the $T$, $U$ and $V$
grids. The arrays must be in order of increasing $nbr$. Note that the
$nbi$, $nbj$ values in the file are global values and are converted to
local values in the code. Typically this file will be used to generate
external boundary data via interpolation and so will also contain the
latitudes and longitudes of each point as shown. However, this is not
necessary to run the model.

For some choices of irregular boundary the model domain may contain
areas of ocean which are not part of the computational domain. For
example, if an open boundary is defined along an isobath, say at the
shelf break, then the areas of ocean outside of this boundary will
need to be masked out. This can be done by reading a mask file defined
as \np{cn\_mask\_file} in the nam\_bdy namelist. Only one mask file is
used even if multiple boundary sets are defined.

\begin{figure}[!t]      \begin{center}
\caption {      \protect\label{fig:LBC_bdy_geom}
Example of the geometry of an unstructured open boundary}
\end{center}   \end{figure}

\subsection{Input boundary data files}

The data files contain the data arrays
in the order in which the points are defined in the $nbi$ and $nbj$
arrays. The data arrays are dimensioned on: a time dimension;
$xb$, which is the index of the boundary data point in the horizontal;
and $yb$, which is a degenerate dimension of 1 to enable the file to be
read by the standard NEMO I/O routines. The 3D fields also have a
depth dimension.

At Version 3.4 there are new restrictions on the order in which the
boundary points are defined (and therefore restrictions on the order
of the data in the file). In particular:

\begin{itemize}
\item The data points must be in order of increasing $nbr$, $i.e.$ all
  the $nbr=1$ points, then all the $nbr=2$ points, etc.
\item All the data for a particular boundary set must be in the same
  order. (Prior to 3.4 it was possible to define barotropic data in a
  different order to the data for tracers and baroclinic velocities.)
\end{itemize}

These restrictions mean that data files used with previous versions of
the model may not work with version 3.4. A Fortran utility
{\it bdy\_reorder} exists in the TOOLS directory which will re-order the
data in old BDY data files.

\begin{figure}[!t]     \begin{center}
\caption {     \protect\label{fig:LBC_nc_header}
Example of the header for a \protect\ifile{coordinates.bdy} file}
\end{center}   \end{figure}

\subsection{Volume correction}

There is an option to force the total volume in the regional model to be constant,
similar to the option in the OBC module. This is controlled by the \np{nn\_volctl}
parameter in the namelist. A value of \np{nn\_volctl}\forcode{ = 0} indicates that this option is not used.
If \np{nn\_volctl}\forcode{ = 1} then a correction is applied to the normal velocities
around the boundary at each timestep to ensure that the integrated volume flow
through the boundary is zero. If \np{nn\_volctl}\forcode{ = 2} then the calculation of
the volume change on the timestep includes the change due to the freshwater
flux across the surface, and the correction velocity corrects for this as well.

If more than one boundary set is used then volume correction is
applied to all boundaries at once.
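
A sketch of the \np{nn\_volctl}\forcode{ = 1} case (illustrative only; all the
names are assumptions): the net transport through the open boundaries is
integrated and a uniform correction velocity is subtracted from the
boundary-normal velocities:
\begin{verbatim}
! net volume flux (m3/s) through all open boundary points, then a uniform
! correction so that the integrated volume flow through the boundary is zero
znetflux   = SUM( u2d_bdy(:) * e2u_bdy(:) * hu_bdy(:) )
zcorr      = znetflux / bdysurftot        ! correction velocity (m/s)
u2d_bdy(:) = u2d_bdy(:) - zcorr
\end{verbatim}
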
\subsection{Tidal harmonic forcing}

Options are defined through the \ngn{nambdy\_tide} namelist variables.
To be written....