Changeset 10354 for NEMO/trunk/doc/latex/NEMO/subfiles/chap_LBC.tex
Timestamp: 2018-11-21T17:59:55+01:00
Options are defined through the \ngn{namlbc} namelist variables.
The discrete representation of a domain with complex boundaries (coastlines and bottom topography) leads to
arrays that include large portions where a computation is not required as the model variables remain at zero.
Nevertheless, vectorial supercomputers are far more efficient when computing over a whole array,
and the readability of a code is greatly improved when boundary conditions are applied in
an automatic way rather than by a specific computation before or after each computational loop.
An efficient way to work over the whole domain while specifying the boundary conditions
is to use multiplication by mask arrays in the computation.
A mask array is a matrix whose elements are $1$ in the ocean domain and $0$ elsewhere.
A simple multiplication of a variable by its own mask ensures that it will remain zero over land areas.
Since most of the boundary conditions consist of a zero flux across the solid boundaries,
they can be simply applied by multiplying variables by the correct mask arrays,
$i.e.$ the mask array of the grid point where the flux is evaluated.
For example, the heat flux in the \textbf{i}-direction is evaluated at $u$-points.
Evaluating this quantity as,

\begin{equation} \label{eq:lbc_aaaa}
\frac{A_u^{lT} }{e_{1u} } \; \delta _{i+1 / 2} \left[ T \right]\;\;mask_u
\end{equation}
(where mask$_{u}$ is the mask array at a $u$-point) ensures that the heat flux is zero inside land and
at the boundaries, since mask$_{u}$ is zero at solid boundaries, which in this case are defined at $u$-points
(normal velocity $u$ remains zero at the coast) (\autoref{fig:LBC_uv}).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_uv}
\caption{ \protect\label{fig:LBC_uv}
Lateral boundary (thick line) at T-level.
The velocity normal to the boundary is set to zero.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
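The mask-array technique can be sketched in a few lines of Python/numpy; the grid, diffusivity and
temperature values below are invented for illustration (the model itself is written in Fortran):

```python
import numpy as np

# Hypothetical 1-D illustration of the mask-array technique.
tmask = np.array([0., 1., 1., 1., 0.])   # T-point mask: 0 on land, 1 in the ocean
umask = tmask[:-1] * tmask[1:]           # u-point mask: 1 only between two ocean T-points

T   = np.array([3., 10., 12., 11., 5.])  # temperature; land values are arbitrary
e1u = 1.0                                # grid spacing (uniform for simplicity)
A   = 100.0                              # lateral diffusivity

# Heat flux at u-points, multiplied by its own mask (analogue of eq:lbc_aaaa):
# the flux vanishes at the solid boundaries without any special-case code.
flux = A / e1u * (T[1:] - T[:-1]) * umask
```

Whatever values the land points of `T` hold, the multiplication by `umask` guarantees a zero flux
across the two coastal faces.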
For momentum the situation is a bit more complex as two boundary conditions must be provided along the coast
(one each for the normal and tangential velocities).
The boundary of the ocean in the C-grid is defined by the velocity-faces.
For example, at a given $T$-level,
the lateral boundary (a coastline or an intersection with the bottom topography) is made of
segments joining $f$-points, and normal velocity points are located between two $f$-points (\autoref{fig:LBC_uv}).
The boundary condition on the normal velocity (no flux through solid boundaries)
can thus be easily implemented using the mask system.
The boundary condition on the tangential velocity requires a more specific treatment.
This boundary condition influences the relative vorticity and momentum diffusive trends,
and is required in order to compute the vorticity at the coast.
Four different types of lateral boundary condition are available,
controlled by the value of the \np{rn\_shlat} namelist parameter
(the value of the mask$_{f}$ array along the coastline is set equal to this parameter).
These are:

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_LBC_shlat}
\caption{ \protect\label{fig:LBC_shlat}
Lateral boundary condition
(a) free-slip ($rn\_shlat=0$);
(b) no-slip ($rn\_shlat=2$);
(c) "partial" free-slip ($0<rn\_shlat<2$) and
(d) "strong" no-slip ($2<rn\_shlat$).
The implied "ghost" velocity inside the land area is displayed in grey.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

\begin{description}

\item[free-slip boundary condition (\np{rn\_shlat}\forcode{ = 0}):] the tangential velocity at
the coastline is equal to the offshore velocity,
$i.e.$ the normal derivative of the tangential velocity is zero at the coast,
and so is the vorticity: the mask$_{f}$ array is set to zero inside the land and just at the coast
(\autoref{fig:LBC_shlat}-a).

\item[no-slip boundary condition (\np{rn\_shlat}\forcode{ = 2}):] the tangential velocity vanishes at the coastline.
Assuming that the tangential velocity decreases linearly from
the closest ocean velocity grid point to the coastline,
the normal derivative is evaluated as if the velocities at the closest land velocity gridpoint and
the closest ocean velocity gridpoint were of the same magnitude but in the opposite direction
(\autoref{fig:LBC_shlat}-b).
Therefore, the vorticity along the coastlines is given by:

\begin{equation*}
\zeta \equiv 2 \left(\delta_{i+1/2} \left[e_{2v} v \right] - \delta_{j+1/2} \left[e_{1u} u \right] \right) / \left(e_{1f} e_{2f} \right) \ ,
\end{equation*}
where $u$ and $v$ are masked fields.
Setting the mask$_{f}$ array to $2$ along the coastline provides a vorticity field computed with
the no-slip boundary condition, simply by multiplying it by the mask$_{f}$:
\begin{equation} \label{eq:lbc_bbbb}
\zeta \equiv \frac{1}{e_{1f} {\kern 1pt}e_{2f} }\left( {\delta _{i+1/2}
\left[ {e_{2v} \, v} \right] - \delta _{j+1/2} \left[ {e_{1u} \, u} \right]} \right)\;\mbox{mask}_f
\end{equation}

\item["partial" free-slip boundary condition (0$<$\np{rn\_shlat}$<$2):] the tangential velocity at
the coastline is smaller than the offshore velocity,
$i.e.$ there is lateral friction, but not strong enough to make the tangential velocity at the coast vanish
(\autoref{fig:LBC_shlat}-c).
This can be selected by providing a value of mask$_{f}$ strictly in between $0$ and $2$.

\item["strong" no-slip boundary condition (2$<$\np{rn\_shlat}):] the viscous boundary layer is assumed to
be smaller than half the grid size (\autoref{fig:LBC_shlat}-d).
The friction is thus larger than in the no-slip case.

\end{description}

Note that when the bottom topography is entirely represented by the $s$-coordinates (pure $s$-coordinate),
the lateral boundary condition on tangential velocity is of much less importance as
it is only applied next to the coast where the minimum water depth can be quite shallow.
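The effect of the mask$_{f}$ value can be checked with a toy calculation at a straight coastline
(grid metrics set to one); the scalar velocities below are invented, whereas the model works on full
masked 2-D fields:

```python
# Toy check of the mask_f / rn_shlat trick at a straight coastline.
u_ocean = 1.0   # tangential velocity at the closest ocean u-point
u_land = 0.0    # masked (zero) velocity at the closest land u-point

def coastal_vorticity(mask_f):
    # vorticity from masked fields, multiplied by mask_f as in eq:lbc_bbbb
    # (only the u-shear term survives along a coast with no normal flow)
    return -(u_ocean - u_land) * mask_f

# No-slip seen as a "ghost" velocity -u_ocean carried by the land point:
zeta_no_slip = -(u_ocean - (-u_ocean))
```

With `mask_f = 0` the coastal vorticity vanishes (free-slip), while `mask_f = 2` reproduces exactly
the ghost-velocity (no-slip) result, which is the point of the multiplication by mask$_{f}$.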
% ================================================================
% ...
\label{sec:LBC_jperio}

At the model domain boundaries several choices are offered:
closed, cyclic east-west, cyclic north-south, a north-fold, and the combinations closed/north-fold or
bi-cyclic east-west/north-fold.
The north-fold boundary condition is associated with the 3-pole ORCA mesh.

% -------------------------------------------------------------------------------------------------------------
% ...
\label{subsec:LBC_jperio012}

The choice of closed or cyclic model domain boundary condition is made by
setting \np{jperio} to 0, 1, 2 or 7 in namelist \ngn{namcfg}.
Each time such a boundary condition is needed, it is set by a call to routine \mdl{lbclnk}.
The computation of momentum and tracer trends proceeds from $i=2$ to $i=jpi-1$ and from $j=2$ to $j=jpj-1$,
$i.e.$ in the model interior.
To choose a lateral model boundary condition is to specify the first and last rows and columns of
the model variables.

\begin{description}

\item[For closed boundary (\np{jperio}\forcode{ = 0})],
solid walls are imposed at all model boundaries:
first and last rows and columns are set to zero.

\item[For cyclic east-west boundary (\np{jperio}\forcode{ = 1})],
first and last rows are set to zero (closed) whilst the first column is set to
the value of the last-but-one column and the last column to the value of the second one
(\autoref{fig:LBC_jperio}-a).
Whatever flows out of the eastern (western) end of the basin enters the western (eastern) end.

\item[For cyclic north-south boundary (\np{jperio}\forcode{ = 2})],
first and last columns are set to zero (closed) whilst the first row is set to
the value of the last-but-one row and the last row to the value of the second one
(\autoref{fig:LBC_jperio}-a).
Whatever flows out of the northern (southern) end of the basin enters the southern (northern) end.

\item[Bi-cyclic east-west and north-south boundary (\np{jperio}\forcode{ = 7})] combines cases 1 and 2.

\end{description}

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_jperio}
\caption{ \protect\label{fig:LBC_jperio}
Setting of (a) east-west cyclic and (b) symmetric-across-the-equator boundary conditions.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% -------------------------------------------------------------------------------------------------------------
% ...
\label{subsec:LBC_north_fold}

The north fold boundary condition has been introduced in order to handle the north boundary of
a three-polar ORCA grid.
Such a grid has two poles in the northern hemisphere (\autoref{fig:MISC_ORCA_msh})
and thus requires a specific treatment illustrated in \autoref{fig:North_Fold_T}.
Further information can be found in the \mdl{lbcnfd} module, which applies the north fold boundary condition.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_North_Fold_T}
\caption{ \protect\label{fig:North_Fold_T}
North fold boundary with a $T$-point pivot and cyclic east-west boundary condition ($jperio=4$),
as used in ORCA 2, 1/4, and 1/12.
Pink shaded area corresponds to the inner domain mask (see text).
}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

% ====================================================================
% ...
\label{sec:LBC_mpp}

For massively parallel processing (mpp), a domain decomposition method is used.
The basic idea of the method is to split the large computation domain of a numerical experiment into
several smaller domains and solve the set of equations by addressing independent local problems.
Each processor has its own local memory and computes the model equation over a subdomain of the whole model domain.
The subdomain boundary conditions are specified through communications between processors which
are organized by explicit statements (message passing method).

A big advantage is that the method does not need many modifications of the initial FORTRAN code.
From the modeller's point of view, each subdomain running on a processor is identical to the "mono-domain" code.
In addition, the programmer manages the communications between subdomains,
and the code is faster when the number of processors is increased.
The porting of the OPA code on an iPSC860 was achieved during Guyon's PhD [Guyon et al. 1994, 1995]
in collaboration with CETIIS and ONERA.
The implementation in the operational context and the studies of performance on
T3D and T3E Cray computers have been made in collaboration with IDRIS and CNRS.
The present implementation is largely inspired by Guyon's work [Guyon 1995].

The parallelization strategy is defined by the physical characteristics of the ocean model.
Second order finite difference schemes lead to local discrete operators that
depend at the very most on one neighbouring point.
The only non-local computations concern the vertical physics
(implicit diffusion, turbulent closure scheme, ...) (delocalization over the whole water column),
and the solving of the elliptic equation associated with the surface pressure gradient computation
(delocalization over the whole horizontal domain).
Therefore, a pencil strategy is used for the data sub-structuration:
the 3D initial domain is laid out on local processor memories following a 2D horizontal topological splitting.
Each sub-domain computes its own surface and bottom boundary conditions and
has a side wall overlapping interface which defines the lateral boundary conditions for
computations in the inner sub-domain.
The overlapping area consists of the two rows at each edge of the sub-domain.
After a computation, a communication phase starts:
each processor sends to its neighbouring processors the updated values of the points corresponding to
the interior overlapping area of its neighbouring sub-domain ($i.e.$ the innermost of the two overlapping rows).
The communication is done through the Message Passing Interface (MPI).
The data exchanges between processors are required at the very place where
lateral domain boundary conditions are set in the mono-domain computation:
the \rou{lbc\_lnk} routine (found in the \mdl{lbclnk} module), which manages such conditions, is interfaced with
routines found in the \mdl{lib\_mpp} module when running on an MPP computer ($i.e.$ when \key{mpp\_mpi} is defined).
It has to be pointed out that when using the MPP version of the model,
the east-west cyclic boundary condition is done implicitly,
whilst the south-symmetric boundary condition option is not available.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_mpp}
\caption{ \protect\label{fig:mpp}
Positioning of a sub-domain when massively parallel processing is used.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In the standard version of \NEMO, the splitting is regular and arithmetic.
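The exchange of the overlapping rows described above can be sketched without MPI; the halo-width
name `jpreci` follows the text, while the two one-dimensional "subdomains" and their values are
invented for illustration:

```python
import numpy as np

# Pure-Python sketch of the halo exchange between two neighbouring subdomains
# sharing an interface along the i-axis, with a halo of width jpreci = 1.
jpreci = 1
west = np.arange(6, dtype=float)         # local array of the "west" processor
east = np.arange(10, 16, dtype=float)    # local array of the "east" processor

# Communication phase: each side receives, into its halo point, the innermost
# interior value of its neighbour next to the shared interface.
west[-1] = east[jpreci]                  # east neighbour's first interior value
east[0] = west[-1 - jpreci]              # west neighbour's last interior value
```

In the real model this send/receive pair is performed by \rou{lbc\_lnk} through MPI calls,
for every variable whose lateral boundary condition would otherwise be applied in the
mono-domain code.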
The i-axis is divided by \jp{jpni} and
the j-axis by \jp{jpnj} for a number of processors \jp{jpnij} most often equal to $jpni \times jpnj$
(parameters set in the \ngn{nammpp} namelist).
Each processor is independent: without message passing or synchronous processes,
programs run alone and access just their own local memory.
For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil) that
are named \jp{jpi}, \jp{jpj}, \jp{jpk}.
These dimensions include the internal domain and the overlapping rows.
The number of rows to exchange (known as the halo) is usually set to one (\jp{jpreci}=1, in \mdl{par\_oce}).
The whole domain dimensions are named \np{jpiglo}, \np{jpjglo} and \jp{jpk}.
The relationship between the whole domain and a sub-domain is:
\begin{eqnarray}
jpi & = & ( jpiglo-2*jpreci + (jpni-1) ) / jpni + 2*jpreci \nonumber \\
jpj & = & ( jpjglo-2*jprecj + (jpnj-1) ) / jpnj + 2*jprecj
\end{eqnarray}

One also defines variables nldi and nlei which correspond to the internal domain bounds,
and the variables nimpp and njmpp which are the position of the (1,1) grid-point in the global domain.
An element of $T_{l}$, a local array (subdomain), corresponds to an element of $T_{g}$,
a global array (whole domain), by the relationship:
\begin{equation} \label{eq:lbc_nimpp}
T_{g} (i+nimpp-1, j+njmpp-1, k) = T_{l} (i,j,k),
\end{equation}
with $1 \leq i \leq jpi$, $1 \leq j \leq jpj$, and $1 \leq k \leq jpk$.

Processors are numbered from 0 to $jpnij-1$, the number is saved in the variable nproc.
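These relations can be checked numerically; the global i-size, decomposition and `nimpp` value
below are invented for illustration (integer division as in the Fortran expression):

```python
# Sketch of the sub-domain size and local-to-global index relations above.
jpiglo, jpni, jpreci = 182, 4, 1   # invented global size and decomposition

# jpi = ( jpiglo - 2*jpreci + (jpni-1) ) / jpni + 2*jpreci, integer division
jpi = (jpiglo - 2 * jpreci + (jpni - 1)) // jpni + 2 * jpreci

# Local-to-global index mapping of eq:lbc_nimpp, for a hypothetical subdomain
# whose (1,1) point sits at global position nimpp = 46:
nimpp = 46
i_local = 3
i_global = i_local + nimpp - 1
```

The `+ (jpni - 1)` term rounds the division up, so neighbouring subdomains always overlap enough
to cover the whole global domain.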
In the standard version, a processor has no more than
four neighbouring processors, named nono (for north), noea (east), noso (south) and nowe (west), and
two variables, nbondi and nbondj, indicate the relative position of the processor:
\begin{itemize}
\item nbondi = -1: an east neighbour, no west processor,
% ...
\item nbondi = 2: no splitting following the i-axis.
\end{itemize}
During the simulation, processors exchange data with their neighbours.
If there is effectively a neighbour, the processor receives variables from this processor on its overlapping row,
and sends the data issued from its internal domain corresponding to the overlapping row of the other processor.


The \NEMO model computes equation terms with the help of mask arrays (0 on land points and 1 on sea points).
This is easily readable and very efficient on a computer with a vectorial architecture.
However, on a scalar processor, computations over the land regions become more expensive in
terms of CPU time.
It is worse when we use a complex configuration with a realistic bathymetry like the global ocean, where
more than 50 \% of points are land points.
For this reason, a pre-processing tool can be used to choose the mpp domain decomposition with a maximum number of
land-only processors, which can then be eliminated (\autoref{fig:mppini2})
(for example, the mpp\_optimiz tools, available from the DRAKKAR web site).
This optimisation depends on the specific bathymetry employed.
The user then chooses optimal parameters \jp{jpni}, \jp{jpnj} and \jp{jpnij} with $jpnij < jpni \times jpnj$,
leading to the elimination of $jpni \times jpnj - jpnij$ land processors.
When those parameters are specified in the \ngn{nammpp} namelist,
the algorithm in the \rou{inimpp2} routine sets each processor's parameters (nbound, nono, noea, ...)
so that the land-only processors are not taken into account.
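The counting of land-only subdomains can be sketched as follows; the decomposition and the 0/1
sea-land mask are invented, and the real pre-processing tools of course work on the actual
model bathymetry:

```python
import numpy as np

# Hypothetical sketch: split a sea-land mask into jpni x jpnj tiles and count
# the tiles that contain no ocean point, hence can be eliminated.
jpni, jpnj = 3, 2                  # invented decomposition
mask = np.zeros((6, 6))            # invented bathymetry mask: 0 = land, 1 = ocean
mask[:, 3:] = 1.0                  # only the eastern half is ocean

tiles = [mask[2 * pi:2 * pi + 2, 3 * pj:3 * pj + 3]
         for pi in range(jpni) for pj in range(jpnj)]
land_only = sum(1 for tile in tiles if not tile.any())
jpnij = jpni * jpnj - land_only    # processors actually used in the computation
```

Here the three all-land tiles on the western side are dropped, leaving `jpnij = 3` of the six
possible processors, in the spirit of the decomposition shown in \autoref{fig:mppini2}.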
\gmcomment{Note that the inimpp2 routine is general so that the original inimpp
routine should be suppressed from the code.}

When land processors are eliminated,
the value corresponding to these locations in the model output files is undefined.
Note that this is a problem for the meshmask file, which needs to be defined over the whole domain.
Therefore, the user should not eliminate land processors when creating a meshmask file
($i.e.$ when setting a non-zero value to \np{nn\_msh}).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_mppini2}
\caption{ \protect\label{fig:mppini2}
Example of Atlantic domain defined for the CLIPPER project.
The initial grid is composed of 773 x 1236 horizontal points.
(a) the domain is split into $9 \times 20$ subdomains (jpni=9, jpnj=20);
52 subdomains are land areas.
(b) the 52 land subdomains are eliminated (white rectangles) and
the resulting number of processors really used during the computation is jpnij=128.}
(b) 52 subdomains are eliminated (white rectangles) and
the resulting number of processors really used during the computation is jpnij=128.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

…

\nlst{nambdy}
%-----------------------------------------------------------------------------------------------
%-----------------------------------------nambdy_dta--------------------------------------------

\nlst{nambdy_dta}
%-----------------------------------------------------------------------------------------------
Options are defined through the \ngn{nambdy} and \ngn{nambdy\_dta} namelist variables.
The BDY module is the core implementation of open boundary conditions for regional configurations.
It implements the Flow Relaxation Scheme algorithm for temperature, salinity, velocities and ice fields, and
the Flather radiation condition for the depth-mean transports.
The specification of the location of the open boundary is completely flexible and
allows, for example, the open boundary to follow an isobath or other irregular contour.

The BDY module was modelled on the OBC module (see NEMO 3.4) and shares many features and
a similar coding structure \citep{Chanut2005}.

Boundary data files used with earlier versions of NEMO may need to be re-ordered to work with this version.
See the section on the Input Boundary Data Files for details.

%----------------------------------------------

…

The BDY module is activated by setting \np{ln\_bdy} to true.
It is possible to define more than one boundary ``set'' and apply different boundary conditions to each set.
The number of boundary sets is defined by \np{nb\_bdy}.
Each boundary set may be defined as a set of straight line segments in a namelist
(\np{ln\_coords\_file}\forcode{ = .false.}) or read in from a file (\np{ln\_coords\_file}\forcode{ = .true.}).
If the set is defined in a namelist, then the namelist nambdy\_index must be included separately, one for each set.
If the set is defined by a file, then a ``\ifile{coordinates.bdy}'' file must be provided.
The coordinates.bdy file is analogous to the usual NEMO ``\ifile{coordinates}'' file.
In the example above, there are two boundary sets, the first of which is defined via a file and
the second is defined in a namelist.
For more details of the definition of the boundary geometry see \autoref{subsec:BDY_geometry}.

For each boundary set a boundary condition has to be chosen for the barotropic solution
(``u2d'': sea-surface height and barotropic velocities), for the baroclinic velocities (``u3d''), and
for the active tracers\footnote{The BDY module does not deal with passive tracers at this version} (``tra'').
For each set of variables there is a choice of algorithm and a choice for the data,
e.g. for the active tracers the algorithm is set by \np{nn\_tra} and the choice of data is set by \np{nn\_tra\_dta}.
The choice of algorithm is currently as follows:

…

\begin{itemize}
\item[0.] No boundary condition applied.
So the solution will ``see'' the land points around the edge of the domain.
\item[1.] Flow Relaxation Scheme (FRS), available for all variables.
\item[2.] Flather radiation scheme for the barotropic variables.
The Flather scheme is not compatible with the filtered free surface ({\it dynspg\_ts}).
\end{itemize}

…

\mbox{}
The main choice for the boundary data is to use initial conditions as boundary data
(\np{nn\_tra\_dta}\forcode{ = 0}) or to use external data from a file (\np{nn\_tra\_dta}\forcode{ = 1}).
For the barotropic solution there is also the option to use tidal harmonic forcing either by
itself or in addition to other external data.

If external boundary data is required then the nambdy\_dta namelist must be defined.
One nambdy\_dta namelist is required for each boundary set, in the order in which
the boundary sets are defined in nambdy.
In the example given, two boundary sets have been defined and so there are two nambdy\_dta namelists.
The boundary data is read in using the fldread module,
so the nambdy\_dta namelist is in the format required for fldread.
For each variable required, the filename, the frequency of the files and
the frequency of the data in the files are given,
together with whether time-interpolation is required and whether the data is climatological (time-cyclic).
Note that on-the-fly spatial interpolation of boundary data is not available at this version.

In the example namelists given, two boundary sets are defined.
The first set is defined via a file and applies FRS conditions to temperature and salinity and
Flather conditions to the barotropic variables.
External data is provided in daily files (from a large-scale model).
Tidal harmonic forcing is also used.
The second set is defined in a namelist.
FRS conditions are applied on temperature and salinity and climatological data is read from external files.

%----------------------------------------------

…

The Flow Relaxation Scheme (FRS) \citep{Davies_QJRMS76,Engerdahl_Tel95}
applies a simple relaxation of the model fields to externally-specified values over
a zone next to the edge of the model domain.
Given a model prognostic variable $\Phi$,
\begin{equation} \label{eq:bdy_frs1}
\Phi(d) = \alpha(d)\Phi_{e}(d) + (1-\alpha(d))\Phi_{m}(d)\;\;\;\;\; d=1,N
\end{equation}
where $\Phi_{m}$ is the model solution and $\Phi_{e}$ is the specified external field,
$d$ gives the discrete distance from the model boundary and
$\alpha$ is a parameter that varies from $1$ at $d=1$ to a small value at $d=N$.
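The FRS blending of \autoref{eq:bdy_frs1} can be sketched numerically.
The snippet below is an illustrative sketch only (not NEMO code, and the function names are hypothetical),
assuming the $\tanh$ rim profile used for $\alpha$ and
the equivalent relaxation time scale $\tau$ discussed below:

```python
import math

def frs_alpha(rimwidth):
    """Relaxation weights alpha(d) = 1 - tanh((d-1)/2) for d = 1..rimwidth.
    alpha = 1 on the outermost row (d=1), decaying towards the interior."""
    return [1.0 - math.tanh((d - 1) / 2.0) for d in range(1, rimwidth + 1)]

def frs_blend(phi_model, phi_ext, rimwidth):
    """Phi(d) = alpha(d)*Phi_e(d) + (1 - alpha(d))*Phi_m(d) over the rim zone."""
    alpha = frs_alpha(rimwidth)
    return [a * pe + (1.0 - a) * pm
            for a, pe, pm in zip(alpha, phi_ext, phi_model)]

def frs_tau(rimwidth, dt):
    """Equivalent relaxation time scale tau = (1 - alpha)/alpha * dt:
    zero at d=1 (solution fully prescribed), growing rapidly inwards."""
    return [(1.0 - a) / a * dt for a in frs_alpha(rimwidth)]
```

With a rim width of 9, for instance, the weight falls from 1 at the outermost row to
roughly $7 \times 10^{-4}$ at the innermost row,
so the solution is fully prescribed at the edge and only weakly nudged at the inner edge of the zone.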
It can be shown that this scheme is equivalent to adding a relaxation term to
the prognostic equation for $\Phi$ of the form:
\begin{equation} \label{eq:bdy_frs2}
-\frac{1}{\tau}\left(\Phi - \Phi_{e}\right)
\end{equation}
where the relaxation time scale $\tau$ is given by a function of $\alpha$ and the model time step $\Delta t$:
\begin{equation} \label{eq:bdy_frs3}
\tau = \frac{1-\alpha}{\alpha} \,\rdt
\end{equation}
Thus the model solution is completely prescribed by the external conditions at the edge of the model domain and
is relaxed towards the external conditions over the rest of the FRS zone.
The application of a relaxation zone helps to prevent spurious reflection of
outgoing signals from the model boundary.

The function $\alpha$ is specified as a $\tanh$ function:
\begin{equation} \label{eq:bdy_frs4}
\alpha(d) = 1 - \tanh\left(\frac{d-1}{2}\right), \quad d=1,N
\end{equation}
The width of the FRS zone is specified in the namelist as \np{nn\_rimwidth}.
This is typically set to a value between 8 and 10.

%----------------------------------------------

…

\label{subsec:BDY_flather_scheme}
The \citet{Flather_JPO94} scheme is a radiation condition on the normal,
depth-mean transport across the open boundary.
It takes the form
\begin{equation} \label{eq:bdy_fla1}
U = U_{e} + \frac{c}{h}\left(\eta - \eta_{e}\right),
\end{equation}
where $U$ is the depth-mean velocity normal to the boundary and $\eta$ is the sea surface height,
both from the model.
The subscript $e$ indicates the same fields from external sources.
The speed of external gravity waves is given by $c = \sqrt{gh}$, and $h$ is the depth of the water column.
The depth-mean normal velocity along the edge of the model domain is set equal to
the external depth-mean normal velocity,
plus a correction term that allows gravity waves generated internally to exit the model boundary.
Note that the sea-surface height gradient in \autoref{eq:bdy_fla1} is a spatial gradient across the model boundary,
so that $\eta_{e}$ is defined on the $T$ points with $nbr=1$ and $\eta$ is defined on the $T$ points with $nbr=2$.
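For a single boundary point, the update in \autoref{eq:bdy_fla1} amounts to the following sketch
(illustrative only, not the NEMO implementation; the function name and arguments are hypothetical):

```python
import math

G = 9.81  # gravitational acceleration (m s^-2)

def flather_velocity(u_ext, eta_model, eta_ext, depth):
    """U = U_e + (c/h) * (eta - eta_e), with c = sqrt(g*h) the external
    gravity-wave speed and h the depth of the water column."""
    c = math.sqrt(G * depth)
    return u_ext + (c / depth) * (eta_model - eta_ext)
```

When the model sea level exceeds the external value, the correction term increases the outward transport,
which is what allows internally generated gravity waves to radiate out through the boundary.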
$U$ and $U_{e}$ are defined on the $U$ or $V$ points with $nbr=1$, $i.e.$ between the two $T$ grid points.

%----------------------------------------------

…

\label{subsec:BDY_geometry}
Each open boundary set is defined as a list of points.
The information is stored in the arrays $nbi$, $nbj$, and $nbr$ in the $idx\_bdy$ structure.
The $nbi$ and $nbj$ arrays define the local $(i,j)$ indices of each point in the boundary zone and
the $nbr$ array defines the discrete distance from the boundary, with $nbr=1$ meaning that
the point is next to the edge of the model domain and $nbr>1$ showing that
the point is increasingly further away from the edge of the model domain.
A set of $nbi$, $nbj$, and $nbr$ arrays is defined for each of the $T$, $U$ and $V$ grids.
Figure \autoref{fig:LBC_bdy_geom} shows an example of an irregular boundary.

The boundary geometry for each set may be defined in a namelist nambdy\_index or
by reading in a ``\ifile{coordinates.bdy}'' file.
The nambdy\_index namelist defines a series of straight-line segments for north, east, south and west boundaries.
For the northern boundary, \np{nbdysegn} gives the number of segments,
\np{jpjnob} gives the $j$ index for each segment and \np{jpindt} and
\np{jpinft} give the start and end $i$ indices for each segment, with similar parameters for the other boundaries.
These segments define a list of $T$ grid points along the outermost row of the boundary ($nbr\,=\,1$).
The code deduces the $U$ and $V$ points and also the points for $nbr\,>\,1$ if $nn\_rimwidth\,>\,1$.

The boundary geometry may also be defined from a ``\ifile{coordinates.bdy}'' file.
Figure \autoref{fig:LBC_nc_header} gives an example of the header information from such a file.
The file should contain the index arrays for each of the $T$, $U$ and $V$ grids.
The arrays must be in order of increasing $nbr$.
Note that the $nbi$, $nbj$ values in the file are global values and are converted to local values in the code.
Typically this file will be used to generate external boundary data via interpolation and so
will also contain the latitudes and longitudes of each point as shown.
However, this is not necessary to run the model.

For some choices of irregular boundary the model domain may contain areas of ocean which
are not part of the computational domain.
For example, if an open boundary is defined along an isobath, say at the shelf break,
then the areas of ocean outside of this boundary will need to be masked out.
This can be done by reading a mask file defined as \np{cn\_mask\_file} in the nam\_bdy namelist.
Only one mask file is used even if multiple boundary sets are defined.
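To make the $nbi$/$nbj$/$nbr$ bookkeeping concrete, here is a toy sketch (not the BDY code;
the function and its arguments are hypothetical) that expands a single straight eastern-boundary
segment of $T$ points into index lists ordered by increasing $nbr$, as the data files require:

```python
def expand_boundary_segment(i_edge, j_start, j_end, rimwidth):
    """Return (nbi, nbj, nbr) lists for a straight boundary segment lying
    along i = i_edge. nbr = 1 is the outermost row; rows are listed in
    order of increasing nbr, i.e. all nbr=1 points first, then nbr=2, etc."""
    nbi, nbj, nbr = [], [], []
    for r in range(1, rimwidth + 1):
        for j in range(j_start, j_end + 1):
            nbi.append(i_edge - (r - 1))  # step inwards for an eastern boundary
            nbj.append(j)
            nbr.append(r)
    return nbi, nbj, nbr
```

In the real model the rim expansion and the $U$/$V$ point deduction are done internally;
this sketch only illustrates the ordering convention.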
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_bdy_geom}
\caption { \protect\label{fig:LBC_bdy_geom}
Example of the geometry of an unstructured open boundary}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

…

\label{subsec:BDY_data}

The data files contain the data arrays in the order in which the points are defined in the $nbi$ and $nbj$ arrays.
The data arrays are dimensioned on:
a time dimension;
$xb$, which is the index of the boundary data point in the horizontal;
and $yb$, which is a degenerate dimension of 1 to enable the file to be read by the standard NEMO I/O routines.
The 3D fields also have a depth dimension.

At Version 3.4 there are new restrictions on the order in which the boundary points are defined
(and therefore restrictions on the order of the data in the file).
In particular:

\mbox{}

\begin{enumerate}
\item The data points must be in order of increasing $nbr$,
i.e. all the $nbr=1$ points, then all the $nbr=2$ points, etc.
\item All the data for a particular boundary set must be in the same order.
(Prior to 3.4 it was possible to define barotropic data in a different order to
the data for tracers and baroclinic velocities).
\end{enumerate}

\mbox{}

These restrictions mean that data files used with previous versions of the model may not work with version 3.4.
A Fortran utility {\it bdy\_reorder} exists in the TOOLS directory which
will re-order the data in old BDY data files.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=1.0\textwidth]{Fig_LBC_nc_header}
\caption { \protect\label{fig:LBC_nc_header}
Example of the header for a \protect\ifile{coordinates.bdy} file}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

…

\label{subsec:BDY_vol_corr}
There is an option to force the total volume in the regional model to be constant,
similar to the option in the OBC module.
This is controlled by the \np{nn\_volctl} parameter in the namelist.
A value of \np{nn\_volctl}\forcode{ = 0} indicates that this option is not used.
If \np{nn\_volctl}\forcode{ = 1} then a correction is applied to the normal velocities around the boundary at
each timestep to ensure that the integrated volume flow through the boundary is zero.
If \np{nn\_volctl}\forcode{ = 2} then the calculation of the volume change on
the timestep includes the change due to the freshwater flux across the surface and
the correction velocity corrects for this as well.

If more than one boundary set is used then volume correction is