\documentclass[../main/NEMO_manual]{subfiles}

\begin{document}
% ================================================================
% Chapter — Lateral Boundary Condition (LBC)
% ================================================================
\chapter{Lateral Boundary Condition (LBC)}
\label{chap:LBC}

\minitoc

\newpage

%gm% add here introduction to this chapter

% ================================================================
% Boundary Condition at the Coast
% ================================================================
\section{Boundary condition at the coast (\protect\np{rn\_shlat})}
\label{sec:LBC_coast}
%--------------------------------------------nam_lbc-------------------------------------------------------

\nlst{namlbc}
%--------------------------------------------------------------------------------------------------------------
%The lateral ocean boundary conditions contiguous to coastlines are Neumann conditions for heat and salt (no flux across boundaries) and Dirichlet conditions for momentum (ranging from free-slip to "strong" no-slip). They are handled automatically by the mask system (see \autoref{subsec:DOM_msk}).

%OPA allows land and topography grid points in the computational domain due to the presence of continents or islands, and includes the use of a full or partial step representation of bottom topography. The computation is performed over the whole domain, i.e. we do not try to restrict the computation to ocean-only points. This choice has two motivations. Firstly, working on ocean only grid points overloads the code and harms the code readability. Secondly, and more importantly, it drastically reduces the vector portion of the computation, leading to a dramatic increase of CPU time requirement on vector computers. The current section describes how the masking affects the computation of the various terms of the equations with respect to the boundary condition at solid walls. The process of defining which areas are to be masked is described in \autoref{subsec:DOM_msk}.

Options are defined through the \ngn{namlbc} namelist variables.
The discrete representation of a domain with complex boundaries (coastlines and bottom topography) leads to
arrays that include large portions where a computation is not required as the model variables remain at zero.
Nevertheless, vectorial supercomputers are far more efficient when computing over a whole array,
and the readability of a code is greatly improved when boundary conditions are applied in
an automatic way rather than by a specific computation before or after each computational loop.
An efficient way to work over the whole domain while specifying the boundary conditions
is to use multiplication by mask arrays in the computation.
A mask array is a matrix whose elements are $1$ in the ocean domain and $0$ elsewhere.
A simple multiplication of a variable by its own mask ensures that it will remain zero over land areas.
Since most of the boundary conditions consist of a zero flux across the solid boundaries,
they can be simply applied by multiplying variables by the correct mask arrays,
$i.e.$ the mask array of the grid point where the flux is evaluated.
For example, the heat flux in the \textbf{i}-direction is evaluated at $u$-points.
Evaluating this quantity as,
\[
  % \label{eq:lbc_aaaa}
  \frac{A^{lT}}{e_1} \frac{\partial T}{\partial i} \equiv
  \frac{A_u^{lT}}{e_{1u}} \; \delta_{i+1/2} \left[ T \right] \; mask_u
\]
(where mask$_{u}$ is the mask array at a $u$-point) ensures that the heat flux is zero inside land and
at the boundaries, since mask$_{u}$ is zero at solid boundaries which in this case are defined at $u$-points
(normal velocity $u$ remains zero at the coast) (\autoref{fig:LBC_uv}).
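
As a minimal illustration (hypothetical loop and array names; the actual NEMO code is organised differently),
such a masked flux could be computed as:
\begin{verbatim}
! Sketch only: zuflx, ztemp, zahtu are illustrative names.
! The multiplication by umask zeroes the flux at solid
! boundaries without any IF test inside the loop.
DO jj = 1, jpj
   DO ji = 1, jpi-1
      zuflx(ji,jj) = zahtu(ji,jj) / e1u(ji,jj)          &
         &         * ( ztemp(ji+1,jj) - ztemp(ji,jj) )  &
         &         * umask(ji,jj,1)
   END DO
END DO
\end{verbatim}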

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]
  \begin{center}
    \includegraphics[width=0.90\textwidth]{Fig_LBC_uv}
    \caption{
      \protect\label{fig:LBC_uv}
      Lateral boundary (thick line) at T-level.
      The velocity normal to the boundary is set to zero.
    }
  \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

For momentum the situation is a bit more complex as two boundary conditions must be provided along the coast
(one each for the normal and tangential velocities).
The boundary of the ocean in the C-grid is defined by the velocity-faces.
For example, at a given $T$-level,
the lateral boundary (a coastline or an intersection with the bottom topography) is made of
segments joining $f$-points, and normal velocity points are located between two $f$-points (\autoref{fig:LBC_uv}).
The boundary condition on the normal velocity (no flux through solid boundaries)
can thus be easily implemented using the mask system.
The boundary condition on the tangential velocity requires a more specific treatment.
This boundary condition influences the relative vorticity and momentum diffusive trends,
and is required in order to compute the vorticity at the coast.
Four different types of lateral boundary condition are available,
controlled by the value of the \np{rn\_shlat} namelist parameter
(the value of the mask$_{f}$ array along the coastline is set equal to this parameter).
These are listed below; a schematic code sketch follows the list.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!p]
  \begin{center}
    \includegraphics[width=0.90\textwidth]{Fig_LBC_shlat}
    \caption{
      \protect\label{fig:LBC_shlat}
      Lateral boundary conditions:
      (a) free-slip ($rn\_shlat=0$);
      (b) no-slip ($rn\_shlat=2$);
      (c) "partial" free-slip ($0<rn\_shlat<2$) and
      (d) "strong" no-slip ($2<rn\_shlat$).
      The implied "ghost" velocity inside the land area is displayed in grey.
    }
  \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

\begin{description}

\item[free-slip boundary condition (\np{rn\_shlat}\forcode{ = 0}):] the tangential velocity at
  the coastline is equal to the offshore velocity,
  $i.e.$ the normal derivative of the tangential velocity is zero at the coast,
  and so is the vorticity: the mask$_{f}$ array is set to zero inside the land and just at the coast
  (\autoref{fig:LBC_shlat}-a).

\item[no-slip boundary condition (\np{rn\_shlat}\forcode{ = 2}):] the tangential velocity vanishes at the coastline.
  Assuming that the tangential velocity decreases linearly from
  the closest ocean velocity grid point to the coastline,
  the normal derivative is evaluated as if the velocities at the closest land velocity gridpoint and
  the closest ocean velocity gridpoint were of the same magnitude but in the opposite direction
  (\autoref{fig:LBC_shlat}-b).
  Therefore, the vorticity along the coastlines is given by:

  \[
    \zeta \equiv 2 \left( \delta_{i+1/2} \left[ e_{2v} v \right] - \delta_{j+1/2} \left[ e_{1u} u \right] \right) / \left( e_{1f} e_{2f} \right) \ ,
  \]
  where $u$ and $v$ are masked fields.
  Setting the mask$_{f}$ array to $2$ along the coastline provides a vorticity field computed with
  the no-slip boundary condition, simply by multiplying it by mask$_{f}$:
  \[
    % \label{eq:lbc_bbbb}
    \zeta \equiv \frac{1}{e_{1f}\,e_{2f}} \left( \delta_{i+1/2} \left[ e_{2v}\,v \right] - \delta_{j+1/2} \left[ e_{1u}\,u \right] \right) \; \mbox{mask}_f
  \]

\item["partial" free-slip boundary condition (0$<$\np{rn\_shlat}$<$2):] the tangential velocity at
  the coastline is smaller than the offshore velocity, $i.e.$ there is lateral friction but
  not strong enough to make the tangential velocity at the coast vanish (\autoref{fig:LBC_shlat}-c).
  This can be selected by providing a value of mask$_{f}$ strictly between $0$ and $2$.

\item["strong" no-slip boundary condition (2$<$\np{rn\_shlat}):] the viscous boundary layer is assumed to
  be smaller than half the grid size (\autoref{fig:LBC_shlat}-d).
  The friction is thus larger than in the no-slip case.

\end{description}
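
The following is a minimal sketch of the principle, assuming illustrative loop bounds;
the actual implementation (in the \mdl{dommsk} module) also handles the vertical dimension,
specific straits and the lateral exchanges:
\begin{verbatim}
! Sketch only: give coastal f-points the value rn_shlat.
! A land f-point is "coastal" if at least one adjacent
! u- or v-point is in the ocean.
DO jj = 2, jpj-1
   DO ji = 2, jpi-1
      IF( fmask(ji,jj,1) == 0._wp .AND.                    &
        &   umask(ji,jj,1) + umask(ji,jj+1,1)              &
        & + vmask(ji,jj,1) + vmask(ji+1,jj,1) > 0._wp ) THEN
         fmask(ji,jj,1) = rn_shlat
      ENDIF
   END DO
END DO
\end{verbatim}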

Note that when the bottom topography is entirely represented by the $s$-coordinate (pure $s$-coordinate),
the lateral boundary condition on tangential velocity is of much less importance as
it is only applied next to the coast where the minimum water depth can be quite shallow.


% ================================================================
% Boundary Condition around the Model Domain
% ================================================================
\section{Model domain boundary condition (\protect\np{jperio})}
\label{sec:LBC_jperio}

At the model domain boundaries several choices are offered:
closed, cyclic east-west, cyclic north-south, north-fold, and combinations of the north-fold with
either a closed southern boundary or a bi-cyclic east-west condition.
The north-fold boundary condition is associated with the 3-pole ORCA mesh.

% -------------------------------------------------------------------------------------------------------------
%        Closed, cyclic (\np{jperio}\forcode{ = 0..2})
% -------------------------------------------------------------------------------------------------------------
\subsection{Closed, cyclic (\protect\np{jperio}\forcode{ = 0, 1, 2, 7})}
\label{subsec:LBC_jperio012}

The choice of closed or cyclic model domain boundary condition is made by
setting \np{jperio} to 0, 1, 2 or 7 in namelist \ngn{namcfg}.
Each time such a boundary condition is needed, it is set by a call to routine \mdl{lbclnk}.
The computation of momentum and tracer trends proceeds from $i=2$ to $i=jpi-1$ and from $j=2$ to $j=jpj-1$,
$i.e.$ in the model interior.
Choosing a lateral model boundary condition amounts to specifying the first and last rows and columns of
the model variables.

\begin{description}

\item[For closed boundary (\np{jperio}\forcode{ = 0})],
  solid walls are imposed at all model boundaries:
  first and last rows and columns are set to zero.

\item[For cyclic east-west boundary (\np{jperio}\forcode{ = 1})],
  first and last rows are set to zero (closed) whilst the first column is set to
  the value of the last-but-one column and the last column to the value of the second one
  (\autoref{fig:LBC_jperio}-a).
  Whatever flows out of the eastern (western) end of the basin enters the western (eastern) end.

\item[For cyclic north-south boundary (\np{jperio}\forcode{ = 2})],
  first and last columns are set to zero (closed) whilst the first row is set to
  the value of the last-but-one row and the last row to the value of the second one
  (\autoref{fig:LBC_jperio}-a).
  Whatever flows out of the northern (southern) end of the basin enters the southern (northern) end.

\item[Bi-cyclic east-west and north-south boundary (\np{jperio}\forcode{ = 7})] combines cases 1 and 2.

\end{description}
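
As a minimal sketch (a 2D field \texttt{pt2d} with illustrative bounds; the actual treatment is
performed by \rou{lbc\_lnk}), the closed and cyclic east-west cases listed above amount to:
\begin{verbatim}
! Sketch only: lateral conditions on a 2D array pt2d(jpi,jpj)
SELECT CASE ( jperio )
CASE ( 0 )                        ! closed: zero on all edges
   pt2d( 1 ,:) = 0._wp   ;   pt2d(jpi,:) = 0._wp
   pt2d(:, 1 ) = 0._wp   ;   pt2d(:,jpj) = 0._wp
CASE ( 1 )                        ! cyclic east-west
   pt2d( 1 ,:) = pt2d(jpi-1,:)    ! first column = last-but-one
   pt2d(jpi,:) = pt2d(  2  ,:)    ! last  column = second
   pt2d(:, 1 ) = 0._wp            ! north and south rows closed
   pt2d(:,jpj) = 0._wp
END SELECT
\end{verbatim}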

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]
  \begin{center}
    \includegraphics[width=1.0\textwidth]{Fig_LBC_jperio}
    \caption{
      \protect\label{fig:LBC_jperio}
      Setting of (a) east-west cyclic and (b) symmetric across-the-equator boundary conditions.
    }
  \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% -------------------------------------------------------------------------------------------------------------
%        North fold (jperio = 3 to 6)
% -------------------------------------------------------------------------------------------------------------
\subsection{North-fold (\protect\np{jperio}\forcode{ = 3..6})}
\label{subsec:LBC_north_fold}

The north fold boundary condition has been introduced in order to handle the north boundary of
a three-polar ORCA grid.
Such a grid has two poles in the northern hemisphere (\autoref{fig:MISC_ORCA_msh})
and thus requires a specific treatment illustrated in \autoref{fig:North_Fold_T}.
Further information can be found in the \mdl{lbcnfd} module which applies the north fold boundary condition.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]
  \begin{center}
    \includegraphics[width=0.90\textwidth]{Fig_North_Fold_T}
    \caption{
      \protect\label{fig:North_Fold_T}
      North fold boundary with a $T$-point pivot and cyclic east-west boundary condition ($jperio=4$),
      as used in ORCA 2, 1/4, and 1/12.
      Pink shaded area corresponds to the inner domain mask (see text).
    }
  \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% ====================================================================
% Exchange with neighbouring processors
% ====================================================================
\section{Exchange with neighbouring processors (\protect\mdl{lbclnk}, \protect\mdl{lib\_mpp})}
\label{sec:LBC_mpp}

For massively parallel processing (mpp), a domain decomposition method is used.
The basic idea of the method is to split the large computation domain of a numerical experiment into
several smaller domains and solve the set of equations by addressing independent local problems.
Each processor has its own local memory and computes the model equation over a subdomain of the whole model domain.
The subdomain boundary conditions are specified through communications between processors which
are organized by explicit statements (message passing method).

A big advantage is that the method does not need many modifications of the initial FORTRAN code.
From the modeller's point of view, each subdomain running on a processor is identical to the "mono-domain" code.
In addition, the programmer manages the communications between subdomains,
and the code is faster when the number of processors is increased.
The porting of OPA code on an iPSC860 was achieved during Guyon's PhD [Guyon et al. 1994, 1995]
in collaboration with CETIIS and ONERA.
The implementation in the operational context and the studies of performance on
T3D and T3E Cray computers have been made in collaboration with IDRIS and CNRS.
The present implementation is largely inspired by Guyon's work [Guyon 1995].

The parallelization strategy is defined by the physical characteristics of the ocean model.
Second order finite difference schemes lead to local discrete operators that
depend at the very most on one neighbouring point.
The only non-local computations concern the vertical physics
(implicit diffusion, turbulent closure scheme, ...) (delocalization over the whole water column),
and the solving of the elliptic equation associated with the surface pressure gradient computation
(delocalization over the whole horizontal domain).
Therefore, a pencil strategy is used for the data sub-structuration:
the 3D initial domain is laid out on local processor memories following a 2D horizontal topological splitting.
Each sub-domain computes its own surface and bottom boundary conditions and
has a side wall overlapping interface which defines the lateral boundary conditions for
computations in the inner sub-domain.
The overlapping area consists of the two rows at each edge of the sub-domain.
After a computation, a communication phase starts:
each processor sends to its neighbouring processors the updated values of the points corresponding to
the interior overlapping area of its neighbouring sub-domains ($i.e.$ the innermost of the two overlapping rows).
The communication is done through the Message Passing Interface (MPI).
The data exchanges between processors are required at the very place where
lateral domain boundary conditions are set in the mono-domain computation:
the \rou{lbc\_lnk} routine (found in the \mdl{lbclnk} module) which manages such conditions is interfaced with
routines found in the \mdl{lib\_mpp} module when running on an MPP computer ($i.e.$ when \key{mpp\_mpi} is defined).
It has to be pointed out that when using the MPP version of the model,
the east-west cyclic boundary condition is done implicitly,
whilst the south-symmetric boundary condition option is not available.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]
  \begin{center}
    \includegraphics[width=0.90\textwidth]{Fig_mpp}
    \caption{
      \protect\label{fig:mpp}
      Positioning of a sub-domain when massively parallel processing is used.
    }
  \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In the standard version of \NEMO, the splitting is regular and arithmetic.
The i-axis is divided by \jp{jpni} and
the j-axis by \jp{jpnj} for a number of processors \jp{jpnij} most often equal to $jpni \times jpnj$
(parameters set in the \ngn{nammpp} namelist).
Each processor is independent: without message passing or synchronous processes,
each program runs alone and accesses just its own local memory.
For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil) that
are named \jp{jpi}, \jp{jpj}, \jp{jpk}.
These dimensions include the internal domain and the overlapping rows.
The number of rows to exchange (known as the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}).
The whole domain dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}.
The relationship between the whole domain and a sub-domain is:
\[
  jpi = ( jpiglo - 2 \times jpreci + (jpni-1) ) / jpni + 2 \times jpreci
\]
\[
  jpj = ( jpjglo - 2 \times jprecj + (jpnj-1) ) / jpnj + 2 \times jprecj
\]
where \jp{jpni}, \jp{jpnj} are the number of processors following the i- and j-axis.
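
For instance, with integer arithmetic this gives (a worked example, assuming the default halo
\jp{jpreci}\forcode{ = 1} and illustrative global dimensions):
\begin{verbatim}
! Sketch only: integer division, jpiglo = 182, jpni = 4, jpreci = 1
jpi = ( 182 - 2*1 + (4-1) ) / 4 + 2*1   ! = 183/4 + 2 = 45 + 2 = 47
\end{verbatim}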

One also defines the variables nldi and nlei which correspond to the internal domain bounds,
and the variables nimpp and njmpp which are the position of the (1,1) grid-point in the global domain.
An element of $T_{l}$, a local array (subdomain), corresponds to an element of $T_{g}$,
a global array (whole domain), through the relationship:
\[
  % \label{eq:lbc_nimpp}
  T_{g} (i+nimpp-1,j+njmpp-1,k) = T_{l} (i,j,k),
\]
with $1 \leq i \leq jpi$, $1 \leq j \leq jpj$, and $1 \leq k \leq jpk$.
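
A minimal sketch of this index mapping (with hypothetical array names \texttt{tl} and \texttt{tg},
not actual NEMO variables):
\begin{verbatim}
! Sketch only: copy a local subdomain array into the global array
DO jk = 1, jpk
   DO jj = 1, jpj
      DO ji = 1, jpi
         tg(ji+nimpp-1, jj+njmpp-1, jk) = tl(ji,jj,jk)
      END DO
   END DO
END DO
\end{verbatim}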

Processors are numbered from 0 to $jpnij-1$, the number is saved in the variable nproc.
In the standard version, a processor has no more than
four neighbouring processors named nono (for north), noea (east), noso (south) and nowe (west) and
two variables, nbondi and nbondj, indicate the relative position of the processor:
\begin{itemize}
\item nbondi $=-1$: an east neighbour, no west processor;
\item nbondi $=0$: an east neighbour, a west neighbour;
\item nbondi $=1$: no east processor, a west neighbour;
\item nbondi $=2$: no splitting following the i-axis.
\end{itemize}
During the simulation, processors exchange data with their neighbours.
If there is effectively a neighbour, the processor receives variables from this processor on its overlapping row,
and sends the data from its internal domain corresponding to the overlapping row of the other processor.
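
A heavily simplified sketch of such an exchange in the east-west direction (illustrative only;
the actual exchanges are performed by the \rou{lbc\_lnk} interface in \mdl{lib\_mpp},
which also handles corners, the north-fold and non-blocking variants):
\begin{verbatim}
! Sketch only: send the innermost interior column east, receive
! the western halo column, for a 2D field pt2d(jpi,jpj).
IF( nbondi == 0 ) THEN          ! neighbours on both sides
   CALL mpi_sendrecv( pt2d(jpi-1,:), jpj, mpi_double_precision,  &
      &               noea, 1,                                   &
      &               pt2d(1,:),     jpj, mpi_double_precision,  &
      &               nowe, 1, mpi_comm_world, istatus, ierr )
ENDIF
\end{verbatim}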


The \NEMO model computes equation terms with the help of mask arrays (0 on land points and 1 on sea points).
This is easily readable and very efficient in the context of a computer with vectorial architecture.
However, in the case of a scalar processor, computations over the land regions become more expensive in
terms of CPU time.
It is worse when we use a complex configuration with a realistic bathymetry like the global ocean where
more than 50 \% of points are land points.
For this reason, a pre-processing tool can be used to choose the mpp domain decomposition with a maximum number of
land-only processors, which can then be eliminated (\autoref{fig:mppini2})
(for example, the mpp\_optimiz tools, available from the DRAKKAR web site).
This optimisation is dependent on the specific bathymetry employed.
The user then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with $jpnij < jpni \times jpnj$,
leading to the elimination of $jpni \times jpnj - jpnij$ land processors.
When those parameters are specified in the \ngn{nammpp} namelist,
the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbound, nono, noea, ...) so that
the land-only processors are not taken into account.

\gmcomment{Note that the inimpp2 routine is general so that the original inimpp
routine should be suppressed from the code.}

When land processors are eliminated,
the value corresponding to these locations in the model output files is undefined.
Note that this is a problem for the meshmask file which needs to be defined over the whole domain.
Therefore, the user should not eliminate land processors when creating a meshmask file
($i.e.$ when setting a non-zero value to \np{nn\_msh}).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!ht]
  \begin{center}
    \includegraphics[width=0.90\textwidth]{Fig_mppini2}
    \caption {
      \protect\label{fig:mppini2}
      Example of Atlantic domain defined for the CLIPPER project.
      The initial grid is composed of $773 \times 1236$ horizontal points.
      (a) the domain is split into $9 \times 20$ subdomains (jpni=9, jpnj=20).
      52 subdomains are land areas.
      (b) the 52 land subdomains are eliminated (white rectangles) and
      the resulting number of processors really used during the computation is jpnij=128.
    }
  \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>


% ====================================================================
% Unstructured open boundaries BDY
% ====================================================================
\section{Unstructured open boundary conditions (BDY)}
\label{sec:LBC_bdy}

%-----------------------------------------nambdy--------------------------------------------

\nlst{nambdy}
%-----------------------------------------------------------------------------------------------
%-----------------------------------------nambdy_dta--------------------------------------------

\nlst{nambdy_dta}
%-----------------------------------------------------------------------------------------------

Options are defined through the \ngn{nambdy} and \ngn{nambdy\_dta} namelist variables.
The BDY module is the core implementation of open boundary conditions for regional configurations.
It implements the Flow Relaxation Scheme algorithm for temperature, salinity, velocities and ice fields, and
the Flather radiation condition for the depth-mean transports.
The specification of the location of the open boundary is completely flexible and
allows, for example, the open boundary to follow an isobath or other irregular contour.

The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many features and
a similar coding structure \citep{Chanut2005}.

Boundary data files used with earlier versions of NEMO may need to be re-ordered to work with this version.
See the section on the Input Boundary Data Files for details.

%----------------------------------------------
\subsection{Namelists}
\label{subsec:BDY_namelist}

The BDY module is activated by setting \np{ln\_bdy} to true.
It is possible to define more than one boundary ``set'' and apply different boundary conditions to each set.
The number of boundary sets is defined by \np{nb\_bdy}.
Each boundary set may be defined as a set of straight line segments in a namelist
(\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file (\np{ln\_coords\_file}\forcode{ = .true.}).
If the set is defined in a namelist, then a nambdy\_index namelist must be included separately, one for each set.
If the set is defined by a file, then a ``\ifile{coordinates.bdy}'' file must be provided.
The coordinates.bdy file is analogous to the usual NEMO ``\ifile{coordinates}'' file.
In the example above, there are two boundary sets, the first of which is defined via a file and
the second in a namelist.
For more details of the definition of the boundary geometry see \autoref{subsec:BDY_geometry}.

For each boundary set a boundary condition has to be chosen for the barotropic solution
(``u2d'': sea-surface height and barotropic velocities), for the baroclinic velocities (``u3d''), and
for the active tracers\footnote{The BDY module does not deal with passive tracers at this version} (``tra'').
For each set of variables there is a choice of algorithm and a choice for the data,
$e.g.$ for the active tracers the algorithm is set by \np{nn\_tra} and the choice of data is set by
\np{nn\_tra\_dta}.

The choice of algorithm is currently as follows:

\begin{itemize}
\item[0.] No boundary condition applied.
  The solution will ``see'' the land points around the edge of the domain.
\item[1.] Flow Relaxation Scheme (FRS) available for all variables.
\item[2.] Flather radiation scheme for the barotropic variables.
  The Flather scheme is not compatible with the filtered free surface
  ({\it dynspg\_flt}).
\end{itemize}

The main choice for the boundary data is to use initial conditions as boundary data
(\np{nn\_tra\_dta}\forcode{ = 0}) or to use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}).
For the barotropic solution there is also the option to use tidal harmonic forcing either by
itself or in addition to other external data.

If external boundary data is required then the nambdy\_dta namelist must be defined.
One nambdy\_dta namelist is required for each boundary set, in the order in which
the boundary sets are defined in nambdy.
In the example given, two boundary sets have been defined and so there are two nambdy\_dta namelists.
The boundary data is read in using the fldread module,
so the nambdy\_dta namelist is in the format required for fldread.
For each variable required, the filename, the frequency of the files and
the frequency of the data in the files are given.
The namelist also specifies whether or not time-interpolation is required and
whether the data is climatological (time-cyclic).
Note that on-the-fly spatial interpolation of boundary data is not available at this version.

In the example namelists given, two boundary sets are defined.
The first set is defined via a file and applies FRS conditions to temperature and salinity and
Flather conditions to the barotropic variables.
External data is provided in daily files (from a large-scale model).
Tidal harmonic forcing is also used.
The second set is defined in a namelist.
FRS conditions are applied on temperature and salinity and climatological data is read from external files.

%----------------------------------------------
\subsection{Flow relaxation scheme}
\label{subsec:BDY_FRS_scheme}

The Flow Relaxation Scheme (FRS) \citep{Davies_QJRMS76,Engerdahl_Tel95}
applies a simple relaxation of the model fields to externally-specified values over
a zone next to the edge of the model domain.
Given a model prognostic variable $\Phi$,
\[
  % \label{eq:bdy_frs1}
  \Phi(d) = \alpha(d)\Phi_{e}(d) + (1-\alpha(d))\Phi_{m}(d)\;\;\;\;\; d=1,N
\]
where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified external field,
$d$ gives the discrete distance from the model boundary and
$\alpha$ is a parameter that varies from $1$ at $d=1$ to a small value at $d=N$.
It can be shown that this scheme is equivalent to adding a relaxation term to
the prognostic equation for $\Phi$ of the form:
\[
  % \label{eq:bdy_frs2}
  -\frac{1}{\tau}\left(\Phi - \Phi_{e}\right)
\]
where the relaxation time scale $\tau$ is given by a function of $\alpha$ and the model time step $\Delta t$:
\[
  % \label{eq:bdy_frs3}
  \tau = \frac{1-\alpha}{\alpha} \, \rdt
\]
Thus the model solution is completely prescribed by the external conditions at the edge of the model domain and
is relaxed towards the external conditions over the rest of the FRS zone.
The application of a relaxation zone helps to prevent spurious reflection of
outgoing signals from the model boundary.

The function $\alpha$ is specified as a $\tanh$ function:
\[
  % \label{eq:bdy_frs4}
  \alpha(d) = 1 - \tanh\left(\frac{d-1}{2}\right), \quad d=1,N
\]
The width of the FRS zone is specified in the namelist as \np{nn\_rimwidth}.
This is typically set to a value between 8 and 10.
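
A minimal sketch of how the relaxation could be applied over the rim (illustrative names only:
\texttt{phi}, \texttt{phi\_ext} and \texttt{nblen} are stand-ins, not the actual BDY structures):
\begin{verbatim}
! Sketch only: relax the model field towards the external data
! with the tanh weight alpha(d) = 1 - tanh((d-1)/2).
DO jb = 1, nblen                       ! loop over boundary points
   zalpha = 1._wp - TANH( REAL( nbr(jb) - 1, wp ) * 0.5_wp )
   phi(nbi(jb),nbj(jb)) = zalpha * phi_ext(jb)                 &
      &                 + ( 1._wp - zalpha ) * phi(nbi(jb),nbj(jb))
END DO
\end{verbatim}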

%----------------------------------------------
\subsection{Flather radiation scheme}
\label{subsec:BDY_flather_scheme}

The \citet{Flather_JPO94} scheme is a radiation condition on the normal,
depth-mean transport across the open boundary.
It takes the form
\begin{equation}  \label{eq:bdy_fla1}
  U = U_{e} + \frac{c}{h}\left(\eta - \eta_{e}\right),
\end{equation}
where $U$ is the depth-mean velocity normal to the boundary and $\eta$ is the sea surface height,
both from the model.
The subscript $e$ indicates the same fields from external sources.
The speed of external gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the water column.
The depth-mean normal velocity along the edge of the model domain is set equal to
the external depth-mean normal velocity,
plus a correction term that allows gravity waves generated internally to exit the model boundary.
Note that the sea-surface height gradient in \autoref{eq:bdy_fla1} is a spatial gradient across the model boundary,
so that $\eta_{e}$ is defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the $T$ points with $nbr=2$.
$U$ and $U_{e}$ are defined on the $U$ or $V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points.
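
Since $c = \sqrt{gh}$, the correction factor reduces to $c/h = \sqrt{g/h}$, and a sketch of
the condition at a single boundary $U$ point reads (illustrative variable names):
\begin{verbatim}
! Sketch only: Flather condition at one boundary point.
! zh     : water column depth;  grav : gravity
! zssh   : model ssh (nbr=2 T point)
! zssh_e : external ssh (nbr=1 T point);  zu_e : external velocity
zu_bdy = zu_e + SQRT( grav / zh ) * ( zssh - zssh_e )
\end{verbatim}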

%----------------------------------------------
\subsection{Boundary geometry}
\label{subsec:BDY_geometry}

Each open boundary set is defined as a list of points.
The information is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$ structure.
The $nbi$ and $nbj$ arrays define the local $(i,j)$ indices of each point in the boundary zone and
the $nbr$ array defines the discrete distance from the boundary, with $nbr=1$ meaning that
the point is next to the edge of the model domain and $nbr>1$ showing that
the point is increasingly further away from the edge of the model domain.
A set of $nbi$, $nbj$, and $nbr$ arrays is defined for each of the $T$, $U$ and $V$ grids.
\autoref{fig:LBC_bdy_geom} shows an example of an irregular boundary.

The boundary geometry for each set may be defined in a namelist nambdy\_index or
by reading in a ``\ifile{coordinates.bdy}'' file.
The nambdy\_index namelist defines a series of straight-line segments for north, east, south and west boundaries.
For the northern boundary, \np{nbdysegn} gives the number of segments,
\np{jpjnob} gives the $j$ index for each segment and \np{jpindt} and
\np{jpinft} give the start and end $i$ indices for each segment, with similar parameters for the other boundaries.
These segments define a list of $T$ grid points along the outermost row of the boundary ($nbr\,=\, 1$).
The code deduces the $U$ and $V$ points and also the points for $nbr\,>\, 1$ if $nn\_rimwidth\,>\,1$.
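
As an illustration, a nambdy\_index namelist defining a single northern segment might look as
follows (the values, and the exact grouping of the parameters, are hypothetical):
\begin{verbatim}
&nambdy_index
    nbdysegn = 1        ! one segment on the northern boundary
    jpjnob   = 398      ! j-row of the segment (illustrative)
    jpindt   = 2        ! start i-index (illustrative)
    jpinft   = 181      ! end   i-index (illustrative)
/
\end{verbatim}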

The boundary geometry may also be defined from a ``\ifile{coordinates.bdy}'' file.
\autoref{fig:LBC_nc_header} gives an example of the header information from such a file.
The file should contain the index arrays for each of the $T$, $U$ and $V$ grids.
The arrays must be in order of increasing $nbr$.
Note that the $nbi$, $nbj$ values in the file are global values and are converted to local values in the code.
Typically this file will be used to generate external boundary data via interpolation and so
will also contain the latitudes and longitudes of each point as shown.
However, this is not necessary to run the model.

For some choices of irregular boundary the model domain may contain areas of ocean which
are not part of the computational domain.
For example, if an open boundary is defined along an isobath, say at the shelf break,
then the areas of ocean outside of this boundary will need to be masked out.
This can be done by reading a mask file defined as \np{cn\_mask\_file} in the \ngn{nambdy} namelist.
Only one mask file is used even if multiple boundary sets are defined.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]
  \begin{center}
    \includegraphics[width=1.0\textwidth]{Fig_LBC_bdy_geom}
    \caption {
      \protect\label{fig:LBC_bdy_geom}
      Example of geometry of unstructured open boundary
    }
  \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%----------------------------------------------
\subsection{Input boundary data files}
\label{subsec:BDY_data}

The data files contain the data arrays in the order in which the points are defined in the $nbi$ and $nbj$ arrays.
The data arrays are dimensioned on:
a time dimension;
$xb$ which is the index of the boundary data point in the horizontal;
and $yb$ which is a degenerate dimension of 1 to enable the file to be read by the standard NEMO I/O routines.
The 3D fields also have a depth dimension.

At Version 3.4 there are new restrictions on the order in which the boundary points are defined
(and therefore restrictions on the order of the data in the file).
In particular:

\begin{enumerate}
\item The data points must be in order of increasing $nbr$,
  $i.e.$ all the $nbr=1$ points, then all the $nbr=2$ points, etc.
\item All the data for a particular boundary set must be in the same order.
  (Prior to 3.4 it was possible to define barotropic data in a different order to
  the data for tracers and baroclinic velocities).
\end{enumerate}

These restrictions mean that data files used with previous versions of the model may not work with version 3.4.
A Fortran utility {\it bdy\_reorder} exists in the TOOLS directory which
will re-order the data in old BDY data files.
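
The required ordering can be pictured as a stable bucket pass over the rim index
(a sketch with hypothetical arrays, not the actual {\it bdy\_reorder} code):
\begin{verbatim}
! Sketch only: gather all nbr=1 points, then nbr=2, etc.,
! preserving the original order within each rim.
ncount = 0
DO jr = 1, nn_rimwidth
   DO jb = 1, nblen
      IF( nbr(jb) == jr ) THEN
         ncount = ncount + 1
         nbi_new(ncount) = nbi(jb)
         nbj_new(ncount) = nbj(jb)
      ENDIF
   END DO
END DO
\end{verbatim}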

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]
  \begin{center}
    \includegraphics[width=1.0\textwidth]{Fig_LBC_nc_header}
    \caption {
      \protect\label{fig:LBC_nc_header}
      Example of the header for a \protect\ifile{coordinates.bdy} file
    }
  \end{center}
\end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%----------------------------------------------
\subsection{Volume correction}
\label{subsec:BDY_vol_corr}

There is an option to force the total volume in the regional model to be constant,
similar to the option in the OBC module.
This is controlled by the \np{nn\_volctl} parameter in the namelist.
A value of \np{nn\_volctl}\forcode{ = 0} indicates that this option is not used.
If \np{nn\_volctl}\forcode{ = 1} then a correction is applied to the normal velocities around the boundary at
each timestep to ensure that the integrated volume flow through the boundary is zero.
If \np{nn\_volctl}\forcode{ = 2} then the calculation of the volume change on
the timestep includes the change due to the freshwater flux across the surface and
the correction velocity corrects for this as well.

If more than one boundary set is used then the volume correction is
applied to all boundaries at once.

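A sketch of the \np{nn\_volctl}\forcode{ = 1} case (hypothetical names; the correction amounts to
removing the area-weighted mean outward velocity uniformly along the boundary):
\begin{verbatim}
! Sketch only: make the net volume flux through the boundary zero.
znetflx = 0._wp   ;   zarea = 0._wp
DO jb = 1, nblen
   znetflx = znetflx + zu_bdy(jb) * zface(jb)   ! outward transport
   zarea   = zarea   + zface(jb)                ! total section area
END DO
zu_bdy(:) = zu_bdy(:) - znetflx / zarea         ! remove the mean
\end{verbatim}
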
\newpage
%----------------------------------------------
\subsection{Tidal harmonic forcing}
\label{subsec:BDY_tides}

%-----------------------------------------nambdy_tide--------------------------------------------

\nlst{nambdy_tide}
%-----------------------------------------------------------------------------------------------

Options are defined through the \ngn{nambdy\_tide} namelist variables.
To be written....

\biblio

\end{document}