New URL for NEMO forge: http://forge.nemo-ocean.eu

Since March 2022, along with the NEMO 4.2 release, code development has moved to a self-hosted GitLab.
The present forge is archived and remains online for reference.
Changeset 10354 for NEMO/trunk/doc/latex/NEMO/subfiles/chap_LBC.tex

Timestamp: 2018-11-21T17:59:55+01:00
Author: nicolasmartin
Message:

Vast edition of LaTeX subfiles to improve readability by cutting sentences in a more suitable way.
Every sentence begins on a new line and, if necessary, is split at around 110 characters in length for side-by-side visualisation;
this setting may not be adequate for everyone, but something has to be set.
Punctuation was the primary trigger for the cutting process, otherwise subordinators and coordinators, in order to mostly keep a meaning for each line.

File:
1 edited

  • NEMO/trunk/doc/latex/NEMO/subfiles/chap_LBC.tex

    r10146 r10354  
Options are defined through the \ngn{namlbc} namelist variables.
The discrete representation of a domain with complex boundaries (coastlines and bottom topography) leads to
arrays that include large portions where a computation is not required as the model variables remain at zero.
Nevertheless, vectorial supercomputers are far more efficient when computing over a whole array,
and the readability of a code is greatly improved when boundary conditions are applied in
an automatic way rather than by a specific computation before or after each computational loop.
An efficient way to work over the whole domain while specifying the boundary conditions
is to use multiplication by mask arrays in the computation.
A mask array is a matrix whose elements are $1$ in the ocean domain and $0$ elsewhere.
A simple multiplication of a variable by its own mask ensures that it will remain zero over land areas.
Since most of the boundary conditions consist of a zero flux across the solid boundaries,
they can be simply applied by multiplying variables by the correct mask arrays,
$i.e.$ the mask array of the grid point where the flux is evaluated.
For example, the heat flux in the \textbf{i}-direction is evaluated at $u$-points.
Evaluating this quantity as,

\begin{equation} \label{eq:lbc_aaaa}
\frac{A^{lT} }{e_{1u} } \; \delta _{i+1 / 2} \left[ T \right]\;\;mask_u
\end{equation}
(where mask$_{u}$ is the mask array at a $u$-point) ensures that the heat flux is zero inside land and
at the boundaries, since mask$_{u}$ is zero at solid boundaries which in this case are defined at $u$-points
(normal velocity $u$ remains zero at the coast) (\autoref{fig:LBC_uv}).

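The masking rule described above can be sketched in a few lines of plain Python (an illustrative 1-D toy, not NEMO code; the array values, and the construction of the $u$-point mask as the product of the two adjacent $T$-point masks, are assumptions made for the example):

```python
# tmask is 1 at ocean T-points and 0 on land; the u-point mask is the product of
# the two adjacent T-point masks, so it is 0 wherever a flux crosses a solid wall.
tmask = [0.0, 0.0, 1.0, 1.0, 1.0, 0.0]          # land | ocean | land
umask = [tmask[i] * tmask[i + 1] for i in range(len(tmask) - 1)]

e1u = [1.0] * 5                                  # scale factor (uniform, illustrative)
T   = [0.0, 0.0, 10.0, 12.0, 14.0, 0.0]          # temperature, zero over land

# Heat flux at u-points, multiplied by its own mask: it vanishes at the coast and
# inside land with no special-casing of the boundaries in the loop itself.
flux = [(T[i + 1] - T[i]) / e1u[i] * umask[i] for i in range(5)]
```

Only the two fluxes between interior ocean points survive; the coastal and land fluxes are zeroed by the mask alone.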
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_uv}
\caption{  \protect\label{fig:LBC_uv}
  Lateral boundary (thick line) at T-level.
  The velocity normal to the boundary is set to zero.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

For momentum the situation is a bit more complex as two boundary conditions must be provided along the coast
(one each for the normal and tangential velocities).
The boundary of the ocean in the C-grid is defined by the velocity-faces.
For example, at a given $T$-level,
the lateral boundary (a coastline or an intersection with the bottom topography) is made of
segments joining $f$-points, and normal velocity points are located between two $f$-points (\autoref{fig:LBC_uv}).
The boundary condition on the normal velocity (no flux through solid boundaries)
can thus be easily implemented using the mask system.
The boundary condition on the tangential velocity requires a more specific treatment.
This boundary condition influences the relative vorticity and momentum diffusive trends,
and is required in order to compute the vorticity at the coast.
Four different types of lateral boundary condition are available,
controlled by the value of the \np{rn\_shlat} namelist parameter
(the value of the mask$_{f}$ array along the coastline is set equal to this parameter).
These are:

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_shlat}
\caption{     \protect\label{fig:LBC_shlat}
  Lateral boundary condition
  (a) free-slip ($rn\_shlat=0$);
  (b) no-slip ($rn\_shlat=2$);
  (c) "partial" free-slip ($0<rn\_shlat<2$) and
  (d) "strong" no-slip ($2<rn\_shlat$).
  Implied "ghost" velocity inside land area is displayed in grey.}
\end{center}    \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

\begin{description}

\item[free-slip boundary condition (\np{rn\_shlat}\forcode{ = 0}):] the tangential velocity at
  the coastline is equal to the offshore velocity,
  $i.e.$ the normal derivative of the tangential velocity is zero at the coast,
  so is the vorticity: the mask$_{f}$ array is set to zero inside the land and just at the coast
  (\autoref{fig:LBC_shlat}-a).

\item[no-slip boundary condition (\np{rn\_shlat}\forcode{ = 2}):] the tangential velocity vanishes at the coastline.
  Assuming that the tangential velocity decreases linearly from
  the closest ocean velocity grid point to the coastline,
  the normal derivative is evaluated as if the velocities at the closest land velocity gridpoint and
  the closest ocean velocity gridpoint were of the same magnitude but in the opposite direction
  (\autoref{fig:LBC_shlat}-b).
  Therefore, the vorticity along the coastlines is given by:

\begin{equation*}
\zeta \equiv 2 \left(\delta_{i+1/2} \left[e_{2v} v \right] - \delta_{j+1/2} \left[e_{1u} u \right] \right) / \left(e_{1f} e_{2f} \right) \ ,
\end{equation*}
where $u$ and $v$ are masked fields.
Setting the mask$_{f}$ array to $2$ along the coastline provides a vorticity field computed with
the no-slip boundary condition, simply by multiplying it by the mask$_{f}$:
\begin{equation} \label{eq:lbc_bbbb}
\zeta \equiv \frac{1}{e_{1f} {\kern 1pt}e_{2f} }\left( {\delta _{i+1/2} \left[ {e_{2v} \,v} \right]
             -\delta _{j+1/2} \left[ {e_{1u} \,u} \right]} \right)\;\mbox{mask}_f
\end{equation}

\item["partial" free-slip boundary condition (0$<$\np{rn\_shlat}$<$2):] the tangential velocity at
  the coastline is smaller than the offshore velocity, $i.e.$ there is a lateral friction but
  not strong enough to make the tangential velocity at the coast vanish (\autoref{fig:LBC_shlat}-c).
  This can be selected by providing a value of mask$_{f}$ strictly in between $0$ and $2$.

\item["strong" no-slip boundary condition (2$<$\np{rn\_shlat}):] the viscous boundary layer is assumed to
  be smaller than half the grid size (\autoref{fig:LBC_shlat}-d).
  The friction is thus larger than in the no-slip case.

\end{description}

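How \np{rn\_shlat} acts through the coastal mask$_{f}$ value can be sketched as a toy calculation (illustrative Python, not the actual NEMO implementation; the function name and the one-sided shear value are invented for the example):

```python
# At a coastal f-point the raw shear is computed from the masked velocities;
# the choice of boundary condition only changes the value stored in mask_f there.
def coastal_vorticity(du, maskf):
    """Vorticity-like shear at a coastal f-point: free-slip (maskf = 0) removes it,
    no-slip (maskf = 2) doubles the one-sided difference, partial slip in between."""
    return du * maskf

du = 0.5   # one-sided velocity difference next to the wall (illustrative value)
free_slip    = coastal_vorticity(du, 0.0)   # rn_shlat = 0
no_slip      = coastal_vorticity(du, 2.0)   # rn_shlat = 2
partial_slip = coastal_vorticity(du, 0.5)   # 0 < rn_shlat < 2
```

The same vorticity code thus serves all four boundary conditions; only the mask value along the coastline differs.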
Note that when the bottom topography is entirely represented by the $s$-coordinates (pure $s$-coordinate),
the lateral boundary condition on tangential velocity is of much less importance as
it is only applied next to the coast where the minimum water depth can be quite shallow.


     
\label{sec:LBC_jperio}

At the model domain boundaries several choices are offered:
closed, cyclic east-west, cyclic north-south, a north-fold, and combination closed-north fold or
bi-cyclic east-west and north-fold.
The north-fold boundary condition is associated with the 3-pole ORCA mesh.

% -------------------------------------------------------------------------------------------------------------

\label{subsec:LBC_jperio012}

The choice of closed or cyclic model domain boundary condition is made by
setting \np{jperio} to 0, 1, 2 or 7 in namelist \ngn{namcfg}.
Each time such a boundary condition is needed, it is set by a call to routine \mdl{lbclnk}.
The computation of momentum and tracer trends proceeds from $i=2$ to $i=jpi-1$ and from $j=2$ to $j=jpj-1$,
$i.e.$ in the model interior.
To choose a lateral model boundary condition is to specify the first and last rows and columns of
the model variables.

\begin{description}

\item[For closed boundary (\np{jperio}\forcode{ = 0})],
  solid walls are imposed at all model boundaries:
  first and last rows and columns are set to zero.

\item[For cyclic east-west boundary (\np{jperio}\forcode{ = 1})],
  first and last rows are set to zero (closed) whilst the first column is set to
  the value of the last-but-one column and the last column to the value of the second one
  (\autoref{fig:LBC_jperio}-a).
  Whatever flows out of the eastern (western) end of the basin enters the western (eastern) end.

\item[For cyclic north-south boundary (\np{jperio}\forcode{ = 2})],
  first and last columns are set to zero (closed) whilst the first row is set to
  the value of the last-but-one row and the last row to the value of the second one
  (\autoref{fig:LBC_jperio}-a).
  Whatever flows out of the northern (southern) end of the basin enters the southern (northern) end.

\item[Bi-cyclic east-west and north-south boundary (\np{jperio}\forcode{ = 7})] combines cases 1 and 2.
\end{description}

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_jperio}
\caption{    \protect\label{fig:LBC_jperio}
  Setting of (a) east-west cyclic and (b) symmetric across the equator boundary conditions.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
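The \np{jperio}\forcode{ = 1} rule quoted above (first column receives the last-but-one column, last column receives the second) can be sketched on a single model row (illustrative Python; the function name and sample values are invented for the example):

```python
# One model row including the two wrap columns at its ends (illustrative).
def apply_east_west_cyclic(row):
    """Fill the wrap columns for the cyclic east-west condition (jperio = 1)."""
    row = list(row)
    row[0]  = row[-2]   # first column <- last-but-one column
    row[-1] = row[1]    # last column  <- second column
    return row

wrapped = apply_east_west_cyclic([0.0, 1.0, 2.0, 3.0, 4.0, 0.0])
```

Whatever leaves through the eastern interior column reappears in the western wrap column, and vice versa.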
     
\label{subsec:LBC_north_fold}

The north fold boundary condition has been introduced in order to handle the north boundary of
a three-polar ORCA grid.
Such a grid has two poles in the northern hemisphere (\autoref{fig:MISC_ORCA_msh}),
and thus requires a specific treatment illustrated in \autoref{fig:North_Fold_T}.
Further information can be found in \mdl{lbcnfd} module which applies the north fold boundary condition.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_North_Fold_T}
\caption{    \protect\label{fig:North_Fold_T}
  North fold boundary with a $T$-point pivot and cyclic east-west boundary condition ($jperio=4$),
  as used in ORCA 2, 1/4, and 1/12.
  Pink shaded area corresponds to the inner domain mask (see text).}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
     
\label{sec:LBC_mpp}

For massively parallel processing (mpp), a domain decomposition method is used.
The basic idea of the method is to split the large computation domain of a numerical experiment into
several smaller domains and solve the set of equations by addressing independent local problems.
Each processor has its own local memory and computes the model equation over a subdomain of the whole model domain.
The subdomain boundary conditions are specified through communications between processors which
are organized by explicit statements (message passing method).

A big advantage is that the method does not need many modifications of the initial FORTRAN code.
From the modeller's point of view, each subdomain running on a processor is identical to the "mono-domain" code.
In addition, the programmer manages the communications between subdomains,
and the code is faster when the number of processors is increased.
The porting of the OPA code on an iPSC860 was achieved during Guyon's PhD [Guyon et al. 1994, 1995]
in collaboration with CETIIS and ONERA.
The implementation in the operational context and the studies of performance on
T3D and T3E Cray computers have been made in collaboration with IDRIS and CNRS.
The present implementation is largely inspired by Guyon's work [Guyon 1995].

The parallelization strategy is defined by the physical characteristics of the ocean model.
Second order finite difference schemes lead to local discrete operators that
depend at the very most on one neighbouring point.
The only non-local computations concern the vertical physics
(implicit diffusion, turbulent closure scheme, ...) (delocalization over the whole water column),
and the solving of the elliptic equation associated with the surface pressure gradient computation
(delocalization over the whole horizontal domain).
Therefore, a pencil strategy is used for the data sub-structuration:
the 3D initial domain is laid out on local processor memories following a 2D horizontal topological splitting.
Each sub-domain computes its own surface and bottom boundary conditions and
has a side wall overlapping interface which defines the lateral boundary conditions for
computations in the inner sub-domain.
The overlapping area consists of the two rows at each edge of the sub-domain.
After a computation, a communication phase starts:
each processor sends to its neighbouring processors the updated values of the points corresponding to
the interior overlapping area to its neighbouring sub-domain ($i.e.$ the innermost of the two overlapping rows).
The communication is done through the Message Passing Interface (MPI).
The data exchanges between processors are required at the very place where
lateral domain boundary conditions are set in the mono-domain computation:
the \rou{lbc\_lnk} routine (found in \mdl{lbclnk} module) which manages such conditions is interfaced with
routines found in \mdl{lib\_mpp} module when running on an MPP computer ($i.e.$ when \key{mpp\_mpi} is defined).
It has to be pointed out that when using the MPP version of the model,
the east-west cyclic boundary condition is done implicitly,
whilst the south-symmetric boundary condition option is not available.
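The exchange phase described above can be sketched without MPI on two 1-D subdomains with a one-row halo (a toy illustration in plain Python; the function name, halo width and sample values are invented, and real NEMO performs this with MPI inside \rou{lbc\_lnk}):

```python
# Each subdomain carries `halo` overlap points at each end; after a computation,
# every processor sends its innermost interior edge and receives it into the
# neighbour's halo.  Here both "sends" are done by direct assignment.
def exchange_east_west(left, right, halo=1):
    """Fill the facing halos of two neighbouring 1-D subdomains."""
    left  = list(left)
    right = list(right)
    left[-halo:] = right[halo:2 * halo]    # left's east halo <- right's west interior edge
    right[:halo] = left[-2 * halo:-halo]   # right's west halo <- left's east interior edge
    return left, right

left  = [9.0, 1.0, 2.0, 0.0]   # last point is the halo, to be filled
right = [0.0, 3.0, 4.0, 9.0]   # first point is the halo, to be filled
left, right = exchange_east_west(left, right)
```

After the exchange each subdomain sees its neighbour's interior edge in its own halo, exactly where the mono-domain code would have applied a lateral boundary condition.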

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t]    \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_mpp}
\caption{   \protect\label{fig:mpp}
  Positioning of a sub-domain when massively parallel processing is used.}
\end{center}   \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In the standard version of \NEMO, the splitting is regular and arithmetic.
The i-axis is divided by \jp{jpni} and
the j-axis by \jp{jpnj} for a number of processors \jp{jpnij} most often equal to $jpni \times jpnj$
(parameters set in \ngn{nammpp} namelist).
Each processor is independent and, without message passing or synchronous process,
programs run alone and access just their own local memory.
For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil) that
are named \jp{jpi}, \jp{jpj}, \jp{jpk}.
These dimensions include the internal domain and the overlapping rows.
The number of rows to exchange (known as the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}).
The whole domain dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}.
The relationship between the whole domain and a sub-domain is:
\begin{eqnarray}
      jpi & = & ( jpiglo-2*jpreci + (jpni-1) ) / jpni + 2*jpreci  \nonumber \\
      jpj & = & ( jpjglo-2*jprecj + (jpnj-1) ) / jpnj + 2*jprecj  \label{eq:lbc_jpi}
\end{eqnarray}
One also defines variables nldi and nlei which correspond to the internal domain bounds,
and the variables nimpp and njmpp which are the position of the (1,1) grid-point in the global domain.
An element of $T_{l}$, a local array (subdomain) corresponds to an element of $T_{g}$,
a global array (whole domain) by the relationship:
\begin{equation} \label{eq:lbc_nimpp}
T_{g} (i+nimpp-1, j+njmpp-1, k) = T_{l} (i,j,k),
\end{equation}
with  $1 \leq i \leq jpi$, $1  \leq j \leq jpj $, and  $1  \leq k \leq jpk$.

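The sub-domain size formula and the local-to-global index mapping above can be checked numerically (illustrative Python; the values $jpiglo=182$, $jpni=4$, $jpreci=1$ are only an example, not taken from a specific namelist, and the Fortran integer division is reproduced with `//`):

```python
# Sub-domain i-dimension: ( jpiglo - 2*jpreci + (jpni-1) ) / jpni + 2*jpreci,
# using integer (Fortran-style) division.
def local_dim(n_glo, n_proc, halo):
    return (n_glo - 2 * halo + (n_proc - 1)) // n_proc + 2 * halo

jpi = local_dim(182, 4, 1)   # i-size of each pencil for this illustrative layout

# Local-to-global index mapping: T_g(i + nimpp - 1, ...) = T_l(i, ...)
def to_global(i_local, nimpp):
    return i_local + nimpp - 1
```

For this layout each of the 4 pencils is 47 points wide, the two halo columns included.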
Processors are numbered from 0 to $jpnij-1$, the number is saved in the variable nproc.
In the standard version, a processor has no more than
four neighbouring processors named nono (for north), noea (east), noso (south) and nowe (west) and
two variables, nbondi and nbondj, indicate the relative position of the processor:
\begin{itemize}
\item       nbondi = -1    an east neighbour, no west processor,
\item       nbondi =  0    an east neighbour, a west neighbour,
\item       nbondi =  1    no east processor, a west neighbour,
\item       nbondi =  2    no splitting following the i-axis.
\end{itemize}
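The four nbondi cases can be derived from a processor's position along the i-axis; the sketch below is an illustrative reconstruction in Python (the function name, arguments and the `cyclic` flag are assumptions, not the actual mpp initialisation code):

```python
# Relative position of a processor along the i-axis, following the convention above.
def nbondi(iproc, jpni, cyclic=False):
    """-1: east neighbour only; 0: both neighbours; 1: west neighbour only;
    2: no splitting along i.  iproc runs from 0 to jpni - 1."""
    if jpni == 1:
        return 2                  # no splitting following the i-axis
    if cyclic:
        return 0                  # east-west cyclic: everyone has both neighbours
    if iproc == 0:
        return -1                 # western wall: an east neighbour, no west processor
    if iproc == jpni - 1:
        return 1                  # eastern wall: a west neighbour, no east processor
    return 0                      # interior: both neighbours
```

nbondj would follow the same pattern along the j-axis.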
    302 During the simulation, processors exchange data with their neighbours.  
    303 If there is effectively a neighbour, the processor receives variables from this  
    304 processor on its overlapping row, and sends the data issued from internal  
    305 domain corresponding to the overlapping row of the other processor. 
    306  
    307  
    308 The \NEMO model computes equation terms with the help of mask arrays (0 on land  
    309 points and 1 on sea points). It is easily readable and very efficient in the context of  
    310 a computer with vectorial architecture. However, in the case of a scalar processor,  
    311 computations over the land regions become more expensive in terms of CPU time.  
    312 It is worse when we use a complex configuration with a realistic bathymetry like the  
    313 global ocean where more than 50 \% of points are land points. For this reason, a  
    314 pre-processing tool can be used to choose the mpp domain decomposition with a  
    315 maximum number of only land points processors, which can then be eliminated (\autoref{fig:mppini2}) 
    316 (For example, the mpp\_optimiz tools, available from the DRAKKAR web site).  
    317 This optimisation is dependent on the specific bathymetry employed. The user  
    318 then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with  
    319 $jpnij < jpni \times jpnj$, leading to the elimination of $jpni \times jpnj - jpnij$  
    320 land processors. When those parameters are specified in \ngn{nammpp} namelist,  
    321 the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbound,  
    322 nono, noea,...) so that the land-only processors are not taken into account.  
     306During the simulation, processors exchange data with their neighbours. 
     307If there is effectively a neighbour, the processor receives variables from this processor on its overlapping row, 
     308and sends the data issued from internal domain corresponding to the overlapping row of the other processor. 
     309 
     310 
      311The \NEMO model computes equation terms with the help of mask arrays (0 on land points and 1 on sea points). 
      312This approach is easily readable and very efficient on a computer with a vector architecture. 
      313However, on a scalar processor, computations over the land regions become more expensive in 
      314terms of CPU time. 
      315The cost is worse for a complex configuration with a realistic bathymetry, such as the global ocean, where 
      316more than 50\% of the points are land points. 
      317For this reason, a pre-processing tool can be used to choose an mpp domain decomposition that maximises 
      318the number of land-only processors, which can then be eliminated (\autoref{fig:mppini2}) 
      319(for example, the mpp\_optimiz tools, available from the DRAKKAR web site). 
     320This optimisation is dependent on the specific bathymetry employed. 
     321The user then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with $jpnij < jpni \times jpnj$, 
     322leading to the elimination of $jpni \times jpnj - jpnij$ land processors. 
     323When those parameters are specified in \ngn{nammpp} namelist, 
     324the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbound, nono, noea,...) so that 
     325the land-only processors are not taken into account. 
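The counting involved in this elimination can be illustrated with a minimal Python sketch (a hypothetical helper, not the actual mpp\_optimiz tool or the \rou{inimpp2} routine): split a land/sea mask into jpni $\times$ jpnj subdomains, count those containing only land points, and jpnij is the decomposition size minus that count.

```python
def count_land_only_subdomains(tmask, jpni, jpnj):
    """Count subdomains of a jpni x jpnj decomposition containing only
    land points (tmask value 0), which could then be eliminated.
    Hypothetical sketch; the real tools also account for halo rows."""
    nj, ni = len(tmask), len(tmask[0])
    # split the i and j index ranges as evenly as possible
    i_edges = [ni * p // jpni for p in range(jpni + 1)]
    j_edges = [nj * p // jpnj for p in range(jpnj + 1)]
    land_only = 0
    for jp in range(jpnj):
        for ip in range(jpni):
            if not any(tmask[j][i]
                       for j in range(j_edges[jp], j_edges[jp + 1])
                       for i in range(i_edges[ip], i_edges[ip + 1])):
                land_only += 1
    return land_only

# toy 8 x 8 mask: left half land (0), right half sea (1)
tmask = [[0] * 4 + [1] * 4 for _ in range(8)]
jpni, jpnj = 4, 2
n_land = count_land_only_subdomains(tmask, jpni, jpnj)
print(jpni * jpnj - n_land)   # jpnij: processors actually needed -> 4
```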
    323326 
    324327\gmcomment{Note that the inimpp2 routine is general so that the original inimpp  
    325328routine should be suppressed from the code.} 
    326329 
    327 When land processors are eliminated, the value corresponding to these locations in  
    328 the model output files is undefined. Note that this is a problem for the meshmask file  
    329 which requires to be defined over the whole domain. Therefore, user should not eliminate  
    330 land processors when creating a meshmask file ($i.e.$ when setting a non-zero value to \np{nn\_msh}). 
     330When land processors are eliminated, 
     331the value corresponding to these locations in the model output files is undefined. 
      332Note that this is a problem for the meshmask file, which must be defined over the whole domain. 
      333Therefore, the user should not eliminate land processors when creating a meshmask file 
     334($i.e.$ when setting a non-zero value to \np{nn\_msh}). 
    331335 
    332336%>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
     
    334338\includegraphics[width=0.90\textwidth]{Fig_mppini2} 
    335339\caption {    \protect\label{fig:mppini2} 
    336 Example of Atlantic domain defined for the CLIPPER projet. Initial grid is  
    337 composed of 773 x 1236 horizontal points.  
    338 (a) the domain is split onto 9 \time 20 subdomains (jpni=9, jpnj=20).  
    339 52 subdomains are land areas.  
    340 (b) 52 subdomains are eliminated (white rectangles) and the resulting number  
    341 of processors really used during the computation is jpnij=128.} 
      340  Example of the Atlantic domain defined for the CLIPPER project. 
      341  The initial grid is composed of $773 \times 1236$ horizontal points. 
      342  (a) the domain is split into $9 \times 20$ subdomains (jpni=9, jpnj=20). 
      343  52 subdomains are land areas. 
      344  (b) the 52 land subdomains are eliminated (white rectangles) and 
      345  the number of processors actually used during the computation is jpnij=128.} 
    342346\end{center}   \end{figure} 
    343347%>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
     
    354358\nlst{nambdy}  
    355359%----------------------------------------------------------------------------------------------- 
    356 %-----------------------------------------nambdy_index-------------------------------------------- 
    357 % 
    358 %\nlst{nambdy_index} 
    359 %----------------------------------------------------------------------------------------------- 
    360360%-----------------------------------------nambdy_dta-------------------------------------------- 
    361361 
    362362\nlst{nambdy_dta}  
    363363%----------------------------------------------------------------------------------------------- 
    364 %-----------------------------------------nambdy_dta-------------------------------------------- 
    365 % 
    366 %\nlst{nambdy_dta2}  
    367 %----------------------------------------------------------------------------------------------- 
    368  
    369 Options are defined through the \ngn{nambdy} \ngn{nambdy\_index}  
    370 \ngn{nambdy\_dta} \ngn{nambdy\_dta2} namelist variables. 
    371 The BDY module is the core implementation of open boundary 
    372 conditions for regional configurations. It implements the Flow 
    373 Relaxation Scheme algorithm for temperature, salinity, velocities and 
    374 ice fields, and the Flather radiation condition for the depth-mean 
    375 transports. The specification of the location of the open boundary is 
    376 completely flexible and allows for example the open boundary to follow 
    377 an isobath or other irregular contour.  
    378  
    379 The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many 
    380 features and a similar coding structure \citep{Chanut2005}. 
    381  
    382 Boundary data files used with earlier versions of NEMO may need 
    383 to be re-ordered to work with this version. See the 
    384 section on the Input Boundary Data Files for details. 
     364 
      365Options are defined through the \ngn{nambdy} and \ngn{nambdy\_dta} namelist variables. 
     366The BDY module is the core implementation of open boundary conditions for regional configurations. 
     367It implements the Flow Relaxation Scheme algorithm for temperature, salinity, velocities and ice fields, and 
     368the Flather radiation condition for the depth-mean transports. 
     369The specification of the location of the open boundary is completely flexible and 
     370allows for example the open boundary to follow an isobath or other irregular contour.  
     371 
     372The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many features and 
     373a similar coding structure \citep{Chanut2005}. 
     374 
     375Boundary data files used with earlier versions of NEMO may need to be re-ordered to work with this version. 
     376See the section on the Input Boundary Data Files for details. 
    385377 
    386378%---------------------------------------------- 
     
    389381 
    390382The BDY module is activated by setting \np{ln\_bdy} to true. 
    391 It is possible to define more than one boundary ``set'' and apply 
    392 different boundary conditions to each set. The number of boundary 
    393 sets is defined by \np{nb\_bdy}.  Each boundary set may be defined 
    394 as a set of straight line segments in a namelist 
    395 (\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file 
    396 (\np{ln\_coords\_file}\forcode{ = .true.}). If the set is defined in a namelist, 
    397 then the namelists nambdy\_index must be included separately, one for 
    398 each set. If the set is defined by a file, then a 
    399 ``\ifile{coordinates.bdy}'' file must be provided. The coordinates.bdy file 
    400 is analagous to the usual NEMO ``\ifile{coordinates}'' file. In the example 
    401 above, there are two boundary sets, the first of which is defined via 
    402 a file and the second is defined in a namelist. For more details of 
    403 the definition of the boundary geometry see section 
    404 \autoref{subsec:BDY_geometry}. 
    405  
    406 For each boundary set a boundary 
    407 condition has to be chosen for the barotropic solution (``u2d'': 
    408 sea-surface height and barotropic velocities), for the baroclinic 
    409 velocities (``u3d''), and for the active tracers\footnote{The BDY 
    410   module does not deal with passive tracers at this version} 
    411 (``tra''). For each set of variables there is a choice of algorithm 
    412 and a choice for the data, eg. for the active tracers the algorithm is 
    413 set by \np{nn\_tra} and the choice of data is set by 
    414 \np{nn\_tra\_dta}.  
     383It is possible to define more than one boundary ``set'' and apply different boundary conditions to each set. 
     384The number of boundary sets is defined by \np{nb\_bdy}. 
     385Each boundary set may be defined as a set of straight line segments in a namelist 
     386(\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file (\np{ln\_coords\_file}\forcode{ = .true.}). 
      387If the set is defined in a namelist, then a nambdy\_index namelist must be included separately, one for each set. 
      388If the set is defined by a file, then a ``\ifile{coordinates.bdy}'' file must be provided. 
      389The coordinates.bdy file is analogous to the usual NEMO ``\ifile{coordinates}'' file. 
     390In the example above, there are two boundary sets, the first of which is defined via a file and 
     391the second is defined in a namelist. 
     392For more details of the definition of the boundary geometry see section \autoref{subsec:BDY_geometry}. 
     393 
      394For each boundary set a boundary condition has to be chosen for the barotropic solution 
      395(``u2d'': sea-surface height and barotropic velocities), for the baroclinic velocities (``u3d''), and 
      396for the active tracers\footnote{The BDY module does not deal with passive tracers at this version} (``tra''). 
      397For each set of variables there is a choice of algorithm and a choice for the data; 
      398e.g. for the active tracers the algorithm is set by \np{nn\_tra} and the choice of data is set by \np{nn\_tra\_dta}. 
    415399 
    416400The choice of algorithm is currently as follows: 
     
    419403 
    420404\begin{itemize} 
    421 \item[0.] No boundary condition applied. So the solution will ``see'' 
    422   the land points around the edge of the edge of the domain. 
    423 \item[1.] Flow Relaxation Scheme (FRS) available for all variables.  
    424 \item[2.] Flather radiation scheme for the barotropic variables. The 
    425   Flather scheme is not compatible with the filtered free surface 
      405\item[0.] No boundary condition applied, 
      406  so the solution will ``see'' the land points around the edge of the domain. 
     407\item[1.] Flow Relaxation Scheme (FRS) available for all variables. 
     408\item[2.] Flather radiation scheme for the barotropic variables. 
     409  The Flather scheme is not compatible with the filtered free surface 
    426410  ({\it dynspg\_ts}).  
    427411\end{itemize} 
     
    429413\mbox{} 
    430414 
    431 The main choice for the boundary data is 
    432 to use initial conditions as boundary data (\np{nn\_tra\_dta}\forcode{ = 0}) or to 
    433 use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}). For the 
    434 barotropic solution there is also the option to use tidal 
    435 harmonic forcing either by itself or in addition to other external 
    436 data.  
    437  
    438 If external boundary data is required then the nambdy\_dta namelist 
    439 must be defined. One nambdy\_dta namelist is required for each boundary 
    440 set in the order in which the boundary sets are defined in nambdy. In 
    441 the example given, two boundary sets have been defined and so there 
    442 are two nambdy\_dta namelists. The boundary data is read in using the 
    443 fldread module, so the nambdy\_dta namelist is in the format required 
    444 for fldread. For each variable required, the filename, the frequency 
    445 of the files and the frequency of the data in the files is given. Also 
    446 whether or not time-interpolation is required and whether the data is 
    447 climatological (time-cyclic) data. Note that on-the-fly spatial 
    448 interpolation of boundary data is not available at this version.  
    449  
    450 In the example namelists given, two boundary sets are defined. The 
    451 first set is defined via a file and applies FRS conditions to 
    452 temperature and salinity and Flather conditions to the barotropic 
    453 variables. External data is provided in daily files (from a 
    454 large-scale model). Tidal harmonic forcing is also used. The second 
    455 set is defined in a namelist. FRS conditions are applied on 
    456 temperature and salinity and climatological data is read from external 
    457 files.  
     415The main choice for the boundary data is to use initial conditions as boundary data 
     416(\np{nn\_tra\_dta}\forcode{ = 0}) or to use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}). 
     417For the barotropic solution there is also the option to use tidal harmonic forcing either by 
     418itself or in addition to other external data.  
     419 
     420If external boundary data is required then the nambdy\_dta namelist must be defined. 
     421One nambdy\_dta namelist is required for each boundary set in the order in which 
     422the boundary sets are defined in nambdy. 
     423In the example given, two boundary sets have been defined and so there are two nambdy\_dta namelists. 
     424The boundary data is read in using the fldread module, 
     425so the nambdy\_dta namelist is in the format required for fldread. 
      426For each variable required, the namelist gives the filename, the frequency of the files and 
      427the frequency of the data in the files, 
      428as well as whether time-interpolation is required and whether the data are climatological (time-cyclic). 
     429Note that on-the-fly spatial interpolation of boundary data is not available at this version.  
     430 
     431In the example namelists given, two boundary sets are defined. 
     432The first set is defined via a file and applies FRS conditions to temperature and salinity and 
     433Flather conditions to the barotropic variables. 
     434External data is provided in daily files (from a large-scale model). 
     435Tidal harmonic forcing is also used. 
     436The second set is defined in a namelist. 
     437FRS conditions are applied on temperature and salinity and climatological data is read from external files.  
    458438 
    459439%---------------------------------------------- 
     
    462442 
     463443 The Flow Relaxation Scheme (FRS) \citep{Davies_QJRMS76,Engerdahl_Tel95} 
    464 applies a simple relaxation of the model fields to 
    465 externally-specified values over a zone next to the edge of the model 
    466 domain. Given a model prognostic variable $\Phi$  
     444applies a simple relaxation of the model fields to externally-specified values over 
     445a zone next to the edge of the model domain. 
     446Given a model prognostic variable $\Phi$ 
    467447\begin{equation}  \label{eq:bdy_frs1} 
    468448\Phi(d) = \alpha(d)\Phi_{e}(d) + (1-\alpha(d))\Phi_{m}(d)\;\;\;\;\; d=1,N 
    469449\end{equation} 
    470 where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified 
    471 external field, $d$ gives the discrete distance from the model 
    472 boundary  and $\alpha$ is a parameter that varies from $1$ at $d=1$ to 
    473 a small value at $d=N$. It can be shown that this scheme is equivalent 
    474 to adding a relaxation term to the prognostic equation for $\Phi$ of 
    475 the form: 
     450where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified external field, 
     451$d$ gives the discrete distance from the model boundary and 
     452$\alpha$ is a parameter that varies from $1$ at $d=1$ to a small value at $d=N$. 
     453It can be shown that this scheme is equivalent to adding a relaxation term to 
     454the prognostic equation for $\Phi$ of the form: 
    476455\begin{equation}  \label{eq:bdy_frs2} 
    477456-\frac{1}{\tau}\left(\Phi - \Phi_{e}\right) 
    478457\end{equation} 
    479 where the relaxation time scale $\tau$ is given by a function of 
    480 $\alpha$ and the model time step $\Delta t$: 
     458where the relaxation time scale $\tau$ is given by a function of $\alpha$ and the model time step $\Delta t$: 
    481459\begin{equation}  \label{eq:bdy_frs3} 
    482460\tau = \frac{1-\alpha}{\alpha}  \,\rdt 
    483461\end{equation} 
    484 Thus the model solution is completely prescribed by the external 
    485 conditions at the edge of the model domain and is relaxed towards the 
    486 external conditions over the rest of the FRS zone. The application of 
    487 a relaxation zone helps to prevent spurious reflection of outgoing 
    488 signals from the model boundary.  
     462Thus the model solution is completely prescribed by the external conditions at the edge of the model domain and 
     463is relaxed towards the external conditions over the rest of the FRS zone. 
     464The application of a relaxation zone helps to prevent spurious reflection of 
     465outgoing signals from the model boundary.  
    489466 
    490467The function $\alpha$ is specified as a $tanh$ function: 
     
    492469\alpha(d) = 1 - \tanh\left(\frac{d-1}{2}\right),       \quad d=1,N 
    493470\end{equation} 
    494 The width of the FRS zone is specified in the namelist as  
    495 \np{nn\_rimwidth}. This is typically set to a value between 8 and 10.  
     471The width of the FRS zone is specified in the namelist as \np{nn\_rimwidth}. 
     472This is typically set to a value between 8 and 10. 
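The weight profile $\alpha(d)$ and the equivalent relaxation time scale of \autoref{eq:bdy_frs3} can be tabulated with a short sketch (illustrative Python; the time step value is an assumption):

```python
import math

def frs_weight(d):
    """FRS relaxation weight alpha(d) = 1 - tanh((d - 1) / 2), d = 1..N."""
    return 1.0 - math.tanh((d - 1) / 2.0)

def frs_timescale(d, dt):
    """Equivalent relaxation time scale tau = (1 - alpha) / alpha * dt."""
    a = frs_weight(d)
    return (1.0 - a) / a * dt

rdt = 3600.0                      # assumed model time step in seconds
for d in range(1, 11):            # a typical rim width of 10
    print(d, frs_weight(d), frs_timescale(d, rdt))
```

At $d=1$ the weight is exactly 1, so the outermost row is fully prescribed ($\tau=0$); near $d=10$ the weight is tiny and the implied relaxation time scale grows to the order of months.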
    496473 
    497474%---------------------------------------------- 
     
    499476\label{subsec:BDY_flather_scheme} 
    500477 
    501 The \citet{Flather_JPO94} scheme is a radiation condition on the normal, depth-mean 
    502 transport across the open boundary. It takes the form 
     478The \citet{Flather_JPO94} scheme is a radiation condition on the normal, 
     479depth-mean transport across the open boundary. 
     480It takes the form 
    503481\begin{equation}  \label{eq:bdy_fla1} 
    504482U = U_{e} + \frac{c}{h}\left(\eta - \eta_{e}\right), 
    505483\end{equation} 
    506 where $U$ is the depth-mean velocity normal to the boundary and $\eta$ 
    507 is the sea surface height, both from the model. The subscript $e$ 
    508 indicates the same fields from external sources. The speed of external 
    509 gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the 
    510 water column. The depth-mean normal velocity along the edge of the 
    511 model domain is set equal to the 
    512 external depth-mean normal velocity, plus a correction term that 
    513 allows gravity waves generated internally to exit the model boundary. 
    514 Note that the sea-surface height gradient in \autoref{eq:bdy_fla1} 
    515 is a spatial gradient across the model boundary, so that $\eta_{e}$ is 
    516 defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the 
    517 $T$ points with $nbr=2$. $U$ and $U_{e}$ are defined on the $U$ or 
    518 $V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points. 
     484where $U$ is the depth-mean velocity normal to the boundary and $\eta$ is the sea surface height, 
     485both from the model. 
     486The subscript $e$ indicates the same fields from external sources. 
     487The speed of external gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the water column. 
     488The depth-mean normal velocity along the edge of the model domain is set equal to 
     489the external depth-mean normal velocity, 
     490plus a correction term that allows gravity waves generated internally to exit the model boundary. 
     491Note that the sea-surface height gradient in \autoref{eq:bdy_fla1} is a spatial gradient across the model boundary, 
     492so that $\eta_{e}$ is defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the $T$ points with $nbr=2$. 
     493$U$ and $U_{e}$ are defined on the $U$ or $V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points. 
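\autoref{eq:bdy_fla1} amounts to a one-line correction of the external transport; a minimal sketch (illustrative Python, hypothetical variable names):

```python
import math

G = 9.81  # gravitational acceleration (m s^-2)

def flather_normal_velocity(u_e, eta_m, eta_e, h):
    """Depth-mean normal velocity at the boundary: U = Ue + (c/h)(eta - eta_e),
    with external gravity wave speed c = sqrt(g h).  Illustrative sketch."""
    c = math.sqrt(G * h)
    return u_e + (c / h) * (eta_m - eta_e)

# if the model and external sea levels agree, the external transport is unchanged
print(flather_normal_velocity(0.2, 1.0, 1.0, 100.0))   # 0.2
# a model sea level 0.1 m above the external value lets the excess radiate outward
print(flather_normal_velocity(0.2, 1.1, 1.0, 100.0))
```

Any mismatch between the model and external sea levels thus leaves the domain as a gravity wave at speed $c=\sqrt{gh}$ rather than reflecting at the boundary.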
    519494 
    520495%---------------------------------------------- 
     
    522497\label{subsec:BDY_geometry} 
    523498 
    524 Each open boundary set is defined as a list of points. The information 
    525 is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$ 
    526 structure.  The $nbi$ and $nbj$ arrays 
    527 define the local $(i,j)$ indices of each point in the boundary zone 
    528 and the $nbr$ array defines the discrete distance from the boundary 
    529 with $nbr=1$ meaning that the point is next to the edge of the 
    530 model domain and $nbr>1$ showing that the point is increasingly 
    531 further away from the edge of the model domain. A set of $nbi$, $nbj$, 
    532 and $nbr$ arrays is defined for each of the $T$, $U$ and $V$ 
    533 grids. Figure \autoref{fig:LBC_bdy_geom} shows an example of an irregular 
    534 boundary.  
    535  
    536 The boundary geometry for each set may be defined in a namelist 
    537 nambdy\_index or by reading in a ``\ifile{coordinates.bdy}'' file. The 
    538 nambdy\_index namelist defines a series of straight-line segments for 
    539 north, east, south and west boundaries. For the northern boundary, 
    540 \np{nbdysegn} gives the number of segments, \np{jpjnob} gives the $j$ 
    541 index for each segment and \np{jpindt} and \np{jpinft} give the start 
    542 and end $i$ indices for each segment with similar for the other 
    543 boundaries. These segments define a list of $T$ grid points along the 
    544 outermost row of the boundary ($nbr\,=\, 1$). The code deduces the $U$ and 
    545 $V$ points and also the points for $nbr\,>\, 1$ if 
    546 $nn\_rimwidth\,>\,1$. 
    547  
    548 The boundary geometry may also be defined from a 
    549 ``\ifile{coordinates.bdy}'' file. Figure \autoref{fig:LBC_nc_header} 
    550 gives an example of the header information from such a file. The file 
    551 should contain the index arrays for each of the $T$, $U$ and $V$ 
    552 grids. The arrays must be in order of increasing $nbr$. Note that the 
    553 $nbi$, $nbj$ values in the file are global values and are converted to 
    554 local values in the code. Typically this file will be used to generate 
    555 external boundary data via interpolation and so will also contain the 
    556 latitudes and longitudes of each point as shown. However, this is not 
    557 necessary to run the model.  
    558  
    559 For some choices of irregular boundary the model domain may contain 
    560 areas of ocean which are not part of the computational domain. For 
    561 example if an open boundary is defined along an isobath, say at the 
    562 shelf break, then the areas of ocean outside of this boundary will 
    563 need to be masked out. This can be done by reading a mask file defined 
    564 as \np{cn\_mask\_file} in the nam\_bdy namelist. Only one mask file is 
    565 used even if multiple boundary sets are defined. 
     499Each open boundary set is defined as a list of points. 
     500The information is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$ structure. 
     501The $nbi$ and $nbj$ arrays define the local $(i,j)$ indices of each point in the boundary zone and 
     502the $nbr$ array defines the discrete distance from the boundary with $nbr=1$ meaning that 
     503the point is next to the edge of the model domain and $nbr>1$ showing that 
     504the point is increasingly further away from the edge of the model domain. 
     505A set of $nbi$, $nbj$, and $nbr$ arrays is defined for each of the $T$, $U$ and $V$ grids. 
     506Figure \autoref{fig:LBC_bdy_geom} shows an example of an irregular boundary.  
     507 
     508The boundary geometry for each set may be defined in a namelist nambdy\_index or 
     509by reading in a ``\ifile{coordinates.bdy}'' file. 
     510The nambdy\_index namelist defines a series of straight-line segments for north, east, south and west boundaries. 
     511For the northern boundary, \np{nbdysegn} gives the number of segments, 
     512\np{jpjnob} gives the $j$ index for each segment and \np{jpindt} and 
     513\np{jpinft} give the start and end $i$ indices for each segment with similar for the other boundaries. 
     514These segments define a list of $T$ grid points along the outermost row of the boundary ($nbr\,=\, 1$). 
     515The code deduces the $U$ and $V$ points and also the points for $nbr\,>\, 1$ if $nn\_rimwidth\,>\,1$. 
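The expansion of the namelist segments into the outermost ($nbr=1$) list of $T$ points can be sketched as follows (a hypothetical Python helper, not the NEMO source):

```python
def northern_boundary_points(jpjnob, jpindt, jpinft):
    """Expand northern-boundary segments into a list of (i, j) T points
    with nbr = 1.  jpjnob[s] is the j index of segment s; jpindt[s]..jpinft[s]
    is the inclusive range of i indices.  Hypothetical sketch of what the
    nambdy_index namelist describes."""
    points = []
    for j, i0, i1 in zip(jpjnob, jpindt, jpinft):
        points.extend((i, j) for i in range(i0, i1 + 1))
    return points

# two segments: j=50 for i=10..12, then j=48 for i=13..14
print(northern_boundary_points([50, 48], [10, 13], [12, 14]))
# [(10, 50), (11, 50), (12, 50), (13, 48), (14, 48)]
```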
     516 
     517The boundary geometry may also be defined from a ``\ifile{coordinates.bdy}'' file. 
     518Figure \autoref{fig:LBC_nc_header} gives an example of the header information from such a file. 
     519The file should contain the index arrays for each of the $T$, $U$ and $V$ grids. 
     520The arrays must be in order of increasing $nbr$. 
     521Note that the $nbi$, $nbj$ values in the file are global values and are converted to local values in the code. 
     522Typically this file will be used to generate external boundary data via interpolation and so 
     523will also contain the latitudes and longitudes of each point as shown. 
     524However, this is not necessary to run the model.  
     525 
     526For some choices of irregular boundary the model domain may contain areas of ocean which 
     527are not part of the computational domain. 
     528For example if an open boundary is defined along an isobath, say at the shelf break, 
     529then the areas of ocean outside of this boundary will need to be masked out. 
     530This can be done by reading a mask file defined as \np{cn\_mask\_file} in the nam\_bdy namelist. 
     531Only one mask file is used even if multiple boundary sets are defined. 
    566532 
    567533%>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
     
    569535\includegraphics[width=1.0\textwidth]{Fig_LBC_bdy_geom} 
    570536\caption {      \protect\label{fig:LBC_bdy_geom} 
    571 Example of geometry of unstructured open boundary} 
      537  Example of the geometry of an unstructured open boundary} 
    572538\end{center}   \end{figure} 
    573539%>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
     
    577543\label{subsec:BDY_data} 
    578544 
    579 The data files contain the data arrays 
    580 in the order in which the points are defined in the $nbi$ and $nbj$ 
    581 arrays. The data arrays are dimensioned on: a time dimension; 
     545The data files contain the data arrays in the order in which the points are defined in the $nbi$ and $nbj$ arrays. 
     546The data arrays are dimensioned on: 
     547a time dimension; 
    582548$xb$ which is the index of the boundary data point in the horizontal; 
    583 and $yb$ which is a degenerate dimension of 1 to enable the file to be 
    584 read by the standard NEMO I/O routines. The 3D fields also have a 
    585 depth dimension.  
    586  
    587 At Version 3.4 there are new restrictions on the order in which the 
    588 boundary points are defined (and therefore restrictions on the order 
    589 of the data in the file). In particular: 
     549and $yb$ which is a degenerate dimension of 1 to enable the file to be read by the standard NEMO I/O routines. 
     550The 3D fields also have a depth dimension.  
     551 
     552At Version 3.4 there are new restrictions on the order in which the boundary points are defined 
     553(and therefore restrictions on the order of the data in the file). 
     554In particular: 
    590555 
    591556\mbox{} 
    592557 
    593558\begin{enumerate} 
    594 \item The data points must be in order of increasing $nbr$, ie. all 
    595   the $nbr=1$ points, then all the $nbr=2$ points etc. 
    596 \item All the data for a particular boundary set must be in the same 
    597   order. (Prior to 3.4 it was possible to define barotropic data in a 
    598   different order to the data for tracers and baroclinic velocities).  
      559\item The data points must be in order of increasing $nbr$, 
      560  i.e. all the $nbr=1$ points, then all the $nbr=2$ points, etc. 
     561\item All the data for a particular boundary set must be in the same order. 
     562  (Prior to 3.4 it was possible to define barotropic data in a different order to 
     563  the data for tracers and baroclinic velocities).  
    599564\end{enumerate} 
    600565 
    601566\mbox{} 
    602567 
    603 These restrictions mean that data files used with previous versions of 
    604 the model may not work with version 3.4. A fortran utility 
    605 {\it bdy\_reorder} exists in the TOOLS directory which will re-order the 
    606 data in old BDY data files.  
     568These restrictions mean that data files used with previous versions of the model may not work with version 3.4. 
      569A Fortran utility {\it bdy\_reorder} exists in the TOOLS directory which 
      570will re-order the data in old BDY data files. 
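The required ordering (restriction 1 above) is simply a stable sort on $nbr$; the following sketch illustrates the idea (illustrative Python only, as the real {\it bdy\_reorder} utility also rewrites the data files):

```python
def reorder_by_nbr(nbi, nbj, nbr):
    """Stable-sort boundary point arrays so all nbr=1 points come first,
    then nbr=2, and so on, preserving the original order within each rim.
    Sketch only, with hypothetical plain-list arrays."""
    order = sorted(range(len(nbr)), key=lambda k: nbr[k])  # Python's sort is stable
    return ([nbi[k] for k in order],
            [nbj[k] for k in order],
            [nbr[k] for k in order])

nbi, nbj, nbr = [5, 6, 5, 6], [2, 2, 3, 3], [2, 1, 2, 1]
print(reorder_by_nbr(nbi, nbj, nbr))
# ([6, 6, 5, 5], [2, 3, 2, 3], [1, 1, 2, 2])
```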
    607571 
    608572%>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
    609573\begin{figure}[!t]     \begin{center} 
    610574\includegraphics[width=1.0\textwidth]{Fig_LBC_nc_header} 
    611 \caption {     \protect\label{fig:LBC_nc_header}  
    612 Example of the header for a \protect\ifile{coordinates.bdy} file} 
     575\caption {     \protect\label{fig:LBC_nc_header} 
     576  Example of the header for a \protect\ifile{coordinates.bdy} file} 
    613577\end{center}   \end{figure} 
    614578%>>>>>>>>>>>>>>>>>>>>>>>>>>>> 
     
    618582\label{subsec:BDY_vol_corr} 
    619583 
    620 There is an option to force the total volume in the regional model to be constant,  
    621 similar to the option in the OBC module. This is controlled  by the \np{nn\_volctl}  
    622 parameter in the namelist. A value of \np{nn\_volctl}\forcode{ = 0} indicates that this option is not used.  
    623 If  \np{nn\_volctl}\forcode{ = 1} then a correction is applied to the normal velocities  
    624 around the boundary at each timestep to ensure that the integrated volume flow  
    625 through the boundary is zero. If \np{nn\_volctl}\forcode{ = 2} then the calculation of  
    626 the volume change on the timestep includes the change due to the freshwater  
    627 flux across the surface and the correction velocity corrects for this as well. 
     584There is an option to force the total volume in the regional model to be constant, 
     585similar to the option in the OBC module. 
     586This is controlled  by the \np{nn\_volctl} parameter in the namelist. 
     587A value of \np{nn\_volctl}\forcode{ = 0} indicates that this option is not used. 
     588If \np{nn\_volctl}\forcode{ = 1} then a correction is applied to the normal velocities around the boundary at 
     589each timestep to ensure that the integrated volume flow through the boundary is zero. 
     590If \np{nn\_volctl}\forcode{ = 2} then the calculation of the volume change on 
     591the timestep includes the change due to the freshwater flux across the surface and 
     592the correction velocity corrects for this as well. 
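For \np{nn\_volctl}\forcode{ = 1} the correction can be pictured as removing the area-weighted mean normal velocity from the boundary points, so the integrated flux vanishes (illustrative Python with hypothetical arrays; the actual scheme applies such a correction each timestep):

```python
def volume_correction(u_normal, face_area):
    """Subtract one uniform correction velocity from the boundary-normal
    velocities so that the integrated volume flux through the open boundary
    is zero.  Simplified sketch of the nn_volctl=1 behaviour."""
    net_flux = sum(u * a for u, a in zip(u_normal, face_area))  # m^3/s
    u_corr = net_flux / sum(face_area)
    return [u - u_corr for u in u_normal]

areas = [1.0e6, 1.0e6, 2.0e6]                    # boundary face areas (m^2)
u = volume_correction([0.1, 0.2, -0.05], areas)
print(sum(ui * a for ui, a in zip(u, areas)))    # approximately 0 (round-off only)
```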
    628593 
    629594If more than one boundary set is used then volume correction is 