Changeset 2541 for trunk/DOC/TexFiles/Chapters/Chap_SBC.tex
Timestamp: 2011-01-09T05:55:20+01:00
Files: 1 edited
trunk/DOC/TexFiles/Chapters/Chap_SBC.tex
(r2414 → r2541)

\label{SBC_general}

The surface ocean stress is the stress exerted by the wind and the sea-ice
on the ocean. The two components of stress are assumed to be interpolated …

% ================================================================
% Input Data
% ================================================================
\section{Input Data generic interface}
\label{SBC_input}

A generic interface has been introduced to manage the way input data (2D or 3D fields,
like surface forcing or ocean T and S) are specified in \NEMO. This task is achieved by the \mdl{fldread} module.
The module was designed with four main objectives in mind:
\begin{enumerate}
\item optionally provide a time interpolation of the input data at the model time-step,
whatever their input frequency is, and according to the different calendars available in the model.
\item optionally provide an on-the-fly space interpolation from the native input data grid to the model grid.
\item make the run duration independent of the period covered by the input files.
\item provide a simple user interface and a rather simple developer interface by limiting the
amount of prerequisite information.
\end{enumerate}

As a result, the user only has to fill in, for each variable, a structure in the namelist file
to define the input data file and variable names, the frequency of the data (in hours or months),
whether it is climatological data or not, the period covered by the input file (one year, month, week or day),
and two additional parameters for on-the-fly interpolation. When adding a new input variable,
the developer has to add the associated structure in the namelist, read this information
by mirroring the namelist read in \rou{sbc\_blk\_init} for example, and simply call \rou{fld\_read}
to obtain the desired input field at the model time-step and grid points.
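The content of one such per-variable structure can be pictured with a small record type. The following Python sketch is purely illustrative (the field names are ours, not those of the Fortran derived type read from the namelist); it simply mirrors the information listed above:

```python
from dataclasses import dataclass

@dataclass
class FieldInfo:
    """One input variable, mirroring the information held in the
    fldread namelist structure. Illustrative sketch only: the real
    structure is a Fortran derived type filled from the namelist."""
    file_stem: str       # stem of the NetCDF file name, completed by the model
    frequency: int       # record frequency: hours if > 0, months if < 0
    varname: str         # variable name inside the NetCDF file
    time_interp: bool    # interpolate in time to the model time-step?
    climatology: bool    # climatological file cycled in time, or interannual?
    open_freq: str       # 'daily', 'weekLLL', 'monthly' or 'yearly'
    weights_file: str = ''  # on-the-fly interpolation weights ('' = none)
    rotation: str = ''      # vector-pair rotation keyword ('' = none)

# e.g. a 6-hourly interannual wind field, time-interpolated, in yearly files
wind = FieldInfo('u_10', 6, 'u_10', True, False, 'yearly')
```

The two trailing fields default to empty strings because, as described below, on-the-fly interpolation and vector rotation are optional.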
The only constraints are that the input file is a NetCDF file, the file name follows a nomenclature
(see \S\ref{SBC_fldread}), the period it covers is one year, month, week or day, and, if on-the-fly
interpolation is used, a file of weights must be supplied (see \S\ref{SBC_iof}).

Note that when the input data are archived on a disc that is directly accessible
from the workspace where the code is executed, the user can set \np{cn\_dir}
to the pathway leading to the data. By default, the data are assumed to have been
copied so that cn\_dir='./'.

% -------------------------------------------------------------------------------------------------------------
% Input Data specification (\mdl{fldread})
% -------------------------------------------------------------------------------------------------------------
\subsection{Input Data specification (\mdl{fldread})}
\label{SBC_fldread}

The structure associated with an input variable contains the following information:
\begin{alltt} {{\tiny
\begin{verbatim}
! file name ! frequency (hours) ! variable ! time interp. !  clim  ! 'yearly'/ ! weights  ! rotation !
!           !  (if <0  months)  !   name   !   (logical)  !  (T/F) ! 'monthly' ! filename ! pairing  !
\end{verbatim}
}}\end{alltt}
where
\begin{description}
\item[File name]: the stem name of the NetCDF file to be opened.
This stem will be completed automatically by the model, with the addition of a '.nc' at its end
and by date information and possibly a prefix (when using AGRIF).
Tab.\ref{Tab_fldread} provides the resulting file name in all possible cases according to whether
it is a climatological file or not, and to the open/close frequency (see below for definition).
%--------------------------------------------------TABLE--------------------------------------------------
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
             & daily or weekLLL & monthly      & yearly    \\ \hline
clim = false & fn\_yYYYYmMMdDD  & fn\_yYYYYmMM & fn\_yYYYY \\ \hline
clim = true  & not possible     & fn\_m??.nc   & fn        \\ \hline
\end{tabular}
\end{center}
\caption{ \label{Tab_fldread} naming nomenclature for climatological or interannual input files,
as a function of the open/close frequency. The stem name is assumed to be 'fn'.
For weekly files, the 'LLL' corresponds to the first three letters of the first day of the
week ($i.e.$ 'sun','sat','fri','thu','wed','tue','mon'). The 'YYYY', 'MM' and 'DD' should be replaced by the
actual year/month/day, always coded with 4 or 2 digits. Note that (1) in mpp, if the file is split
over each subdomain, the suffix '.nc' is replaced by '\_PPPP.nc', where 'PPPP' is the
process number coded with 4 digits; (2) when using AGRIF, the prefix 'N\_' is added to files,
where 'N' is the child grid number.}
\end{table}
%--------------------------------------------------------------------------------------------------------------


\item[Record frequency]: the frequency of the records contained in the input file.
Its unit is in hours if it is positive (for example 24 for daily forcing) or in months if negative
(for example -1 for monthly forcing or -12 for annual forcing).
Note that this frequency must really be an integer and not a real.
On some computers, setting it to '24.' can be interpreted as 240!

\item[Variable name]: the name of the variable to be read in the input NetCDF file.

\item[Time interpolation]: a logical to activate, or not, the time interpolation. If set to 'false',
the forcing will have a step-like shape, remaining constant during each forcing period.
For example, when using a daily forcing without time interpolation, the forcing remains
constant from 00h00'00'' to 23h59'59". If set to 'true', the forcing will have a broken-line shape.
Records are assumed to be dated at the middle of the forcing period.
For example, when using a daily forcing with time interpolation, linear interpolation will
be performed between mid-day of two consecutive days.

\item[Climatological forcing]: a logical to specify whether an input file contains climatological forcing,
which can be cycled in time, or an interannual forcing, which will require additional files
if the period covered by the simulation exceeds that of the file. See the file
naming strategy above, which impacts the expected name of the file to be opened.

\item[Open/close frequency]: the frequency at which forcing files must be opened/closed.
Four cases are coded: 'daily', 'weekLLL' (with 'LLL' the first 3 letters of the first day of the week),
'monthly' and 'yearly', which means the forcing files will contain data for one day, one week,
one month or one year. Files are assumed to contain data from the beginning of the open/close period.
For example, the first record of a yearly file containing daily data is Jan 1st, even if the experiment
does not start at the beginning of the year.

\item[Others]: 'weights filename' and 'pairing rotation' are associated with on-the-fly interpolation,
which is described in \S\ref{SBC_iof}.

\end{description}

Additional remarks:\\
(1) The time interpolation is a simple linear interpolation between two consecutive records of
the input data. The only tricky point is therefore to specify the date at which the
interpolation is needed and the date of the records read in the input files.
Following \citet{Leclair_Madec_OM09}, the date of a time step is set at the middle of the
time step.
For example, for an experiment starting at 0h00'00" with a one-hour time-step,
a time interpolation will be performed at the following times: 0h30'00", 1h30'00", 2h30'00", etc.
However, for forcing data related to the surface module, values are not needed at every
time-step but at every \np{nn\_fsbc} time-step. For example, with \np{nn\_fsbc}~=~3,
the surface module will be called at time-steps 1, 4, 7, etc. The date used for the time interpolation
is thus redefined to be at the middle of the \np{nn\_fsbc} time-step period. In the previous example,
this leads to: 1h30'00", 4h30'00", 7h30'00", etc. \\
(2) For code readability and maintenance reasons, we do not take into account the NetCDF input file
calendar. The calendar associated with the forcing field is built according to the information
provided by the user in the record frequency, the open/close frequency and the type of temporal interpolation.
For example, the first record of a yearly file containing daily data that will be interpolated in time
is assumed to start on Jan 1st at 12h00'00" and end on Dec 31st at 12h00'00". \\
(3) If a time interpolation is requested, the code will pick up the needed data in the previous (next) file
when interpolating data with the first (last) record of the open/close period.
For example, if the input file specification is ``yearly, containing daily data to be interpolated in time'',
the values given by the code between 00h00'00" and 11h59'59" on Jan 1st will be interpolated values
between Dec 31st 12h00'00" and Jan 1st 12h00'00". If the forcing is climatological, Dec and Jan will
be taken from the same year. However, if the forcing is not climatological, at the end of the
open/close period the code will automatically close the current file and open the next one.
Note that, if the experiment starts (ends) at the beginning (end) of an open/close period,
the previous (next) file is allowed not to exist. In this case, the time interpolation
will be performed between two identical values. For example, when starting an experiment on
Jan 1st of year Y with yearly files and daily data to be interpolated, the file
related to year Y-1 is allowed not to exist. The value of Jan 1st will then be used as the
missing one for Dec 31st of year Y-1. If the file of year Y-1 exists, the code will read its last record.
Therefore, this file can contain only one record corresponding to Dec 31st, a useful feature for
users who consider it too heavy to manipulate the complete file for year Y-1.


% -------------------------------------------------------------------------------------------------------------
% Interpolation on the Fly
% -------------------------------------------------------------------------------------------------------------
\subsection [Interpolation on-the-Fly] {Interpolation on-the-Fly}
\label{SBC_iof}

Interpolation on the fly allows the user to supply input files required
for the surface forcing on grids other than the model grid.
To do this, he or she must supply, in addition to the source data file,
a file of weights to be used to interpolate from the data grid to the model grid.
The original development of this code used the SCRIP package (freely available
\href{http://climate.lanl.gov/Software/SCRIP}{here} under a copyright agreement).
In principle, any package can be used to generate the weights, but the
variables in the input weights file must have the same names and meanings as
assumed by the model.
Two methods are currently available: bilinear and bicubic interpolation.
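In both cases, the interpolation reduces to applying precomputed weights to input-grid values addressed through one-dimensional indices, as the following Python sketch illustrates (the function name and the equal example weights are ours, not the model's):

```python
def apply_weights(src, wgt, data_flat):
    """Interpolate one model-grid point from precomputed weights.

    src : 1-based one-dimensional indices into the flattened input grid
          (4 entries for bilinear interpolation); wgt : matching weights.
    Illustrative sketch only, not the NEMO implementation.
    """
    return sum(w * data_flat[s - 1] for s, w in zip(src, wgt))

# 5x10 input grid: point (i=3, j=2) has 1-D index (2-1)*5 + 3 = 8
data = [float(v) for v in range(1, 51)]  # flattened 5x10 input field
src = [8, 9, 13, 14]                     # four corners of one grid box
wgt = [0.25, 0.25, 0.25, 0.25]           # equal weights, for illustration
val = apply_weights(src, wgt, data)      # 0.25*(8+9+13+14) = 11.0
```

In the model the `src` and `wgt` values come from the weights file and are stored on the model grid, one set per model point.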
\subsubsection{Bilinear Interpolation}
\label{SBC_iof_bilinear}

The input weights file in this case has two sets of variables: src01, src02,
src03, src04 and wgt01, wgt02, wgt03, wgt04.
The "src" variables correspond to the point in the input grid to which the weight
"wgt" is to be applied. Each src value is an integer corresponding to the index of a
point in the input grid when written as a one-dimensional array. For example, for an input grid
of size 5x10, point (3,2) is referenced as point 8, since (2-1)*5+3=8.
There are four of each variable because bilinear interpolation uses the four points defining
the grid box containing the point to be interpolated.
All of these arrays are on the model grid, so that values src01(i,j) and
wgt01(i,j) are used to generate a value for point (i,j) in the model.

Symbolically, the algorithm used is:

\begin{equation}
f_{m}(i,j) = f_{m}(i,j) + \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))}
\end{equation}
where function idx() transforms a one-dimensional index src(k) into a two-dimensional index,
and wgt(1) corresponds to variable "wgt01", for example.

\subsubsection{Bicubic Interpolation}
\label{SBC_iof_bicubic}

Again there are two sets of variables: "src" and "wgt",
but in this case there are 16 of each.
The symbolic algorithm used to calculate values on the model grid is now:

\begin{equation*} \begin{split}
f_{m}(i,j) = f_{m}(i,j) +& \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))}
+ \sum_{k=5}^{8} {wgt(k)\left.\frac{\partial f}{\partial i}\right| _{idx(src(k))} } \\
+& \sum_{k=9}^{12} {wgt(k)\left.\frac{\partial f}{\partial j}\right| _{idx(src(k))} }
+ \sum_{k=13}^{16} {wgt(k)\left.\frac{\partial ^2 f}{\partial i \partial j}\right| _{idx(src(k))} }
\end{split}
\end{equation*}
The gradients here are taken with respect to the horizontal indices and not distances,
since the spatial dependency has been absorbed into the weights.

\subsubsection{Implementation}
\label{SBC_iof_imp}

To activate this option, a non-empty string should be supplied in the weights filename column
of the relevant namelist; if this is left as an empty string, no action is taken.
In the model, weights files are read in and stored in a structured type (WGT) in the fldread
module, as and when they are first required.
This initialisation procedure determines whether the input data grid should be treated
as cyclical or not by inspecting a global attribute stored in the weights input file.
This attribute must be called "ew\_wrap" and be of integer type.
If it is negative, the input non-model grid is assumed not to be cyclic.
If zero or greater, then the value represents the number of columns that overlap.
$E.g.$ if the input grid has columns at longitudes 0, 1, 2, .... , 359, then ew\_wrap should be set to 0;
if longitudes are 0.5, 2.5, .... , 358.5, 360.5, 362.5, ew\_wrap should be 2.
If the model does not find attribute ew\_wrap, then a value of -999 is assumed.
In this case, the \rou{fld\_read} routine defaults ew\_wrap to the value 0, and therefore the grid
is assumed to be cyclic with no overlapping columns.
(In fact this only matters when bicubic interpolation is required.)
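Under this convention, the number of truly distinct columns is the total minus the overlap, and an out-of-range column index wraps modulo that width. A small index-wrapping helper sketches the idea (our own naming and a sketch only, not the \rou{fld\_read} code):

```python
def wrap_column(j, nlon, ew_wrap):
    """Map a 1-based column index onto a cyclic input grid.

    nlon is the total number of columns in the file; ew_wrap is the
    number of overlapping columns (0 = cyclic with no overlap), and a
    negative value means the grid is not cyclic, following the ew_wrap
    attribute convention. Illustrative sketch only.
    """
    if ew_wrap < 0:
        raise ValueError("input grid is not cyclic")
    width = nlon - ew_wrap        # number of truly distinct columns
    return (j - 1) % width + 1

# 360 columns at longitudes 0, 1, ..., 359 (ew_wrap = 0):
#   column 361 wraps back to column 1.
# 182 columns at 0.5, 2.5, ..., 360.5, 362.5 (ew_wrap = 2):
#   the last two columns repeat the first two, so the period is 180.
```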
Note that no testing is done to check the validity in the model, since there is no way
of knowing the name used for the longitude variable,
so it is up to the user to make sure his or her data is correctly represented.

Next the routine reads in the weights.
Bicubic interpolation is assumed if it finds a variable with name "src05"; otherwise
bilinear interpolation is used. The WGT structure includes dynamic arrays both for
the storage of the weights (on the model grid) and, when required, for reading in
the variable to be interpolated (on the input data grid).
The size of the input data array is determined by examining the values in the "src"
arrays to find the minimum and maximum i and j values required.
Since bicubic interpolation requires the calculation of gradients at each point on the grid,
the corresponding arrays are dimensioned with a halo of width one grid point all the way around.
When the array of points from the data file is adjacent to an edge of the data grid,
the halo is either a copy of the row/column next to it (non-cyclical case), or is a copy
of one from the first few columns on the opposite side of the grid (cyclical case).

\subsubsection{Limitations}
\label{SBC_iof_lim}

\begin{enumerate}
\item The case where input data grids are not logically rectangular has not been tested.
\item This code is not guaranteed to produce positive definite answers from positive definite inputs
when a bicubic interpolation method is used.
\item The cyclic condition is only applied on left and right columns, and not to top and bottom rows.
\item The gradients across the ends of a cyclical grid assume that the grid spacing between
the two columns involved is consistent with the weights used.
\item Neither interpolation scheme is conservative. (There is a conservative scheme available
in SCRIP, but this has not been implemented.)
\end{enumerate}

\subsubsection{Utilities}
\label{SBC_iof_util}

% to be completed
A set of utilities to create a weights file for a rectilinear input grid is available
(see the directory NEMOGCM/TOOLS/WEIGHTS).


% ================================================================
% Analytical formulation (sbcana module)
% ================================================================
…
read in the file, the time frequency at which it is given (in hours), and a logical
setting whether a time interpolation to the model time step is required

[removed in r2541:]
for this field). (fld\_i namelist structure).

\textbf{Caution}: when the frequency is set to --12, the data are monthly
values. These are assumed to be climatological values, so time interpolation
between December the 15$^{th}$ and January the 15$^{th}$ is done using
records 12 and 1.

When a higher frequency is set and time interpolation is demanded, the model
will try to read the last (first) record of the previous (next) year in a file
having the same name but with a suffix {\_}prev{\_}year ({\_}next{\_}year)
added (e.g. "{\_}1989"). These files must only contain a single record. If they don't exist,
the model assumes that the last record of the previous year is equal to the first
record of the current year, and similarly, that the first record of the
next year is equal to the last record of the current year. This will cause
the forcing to remain constant over the first and last half fld\_frequ hours.

[added in r2541:]
for this field. See \S\ref{SBC_fldread} for a more detailed description of the parameters.
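When the time-interpolation flag is set, the value at a given date is a linear combination of the two records bracketing that date, each record being dated at the middle of its forcing period. A minimal sketch of that linear scheme (illustrative only, not the NEMO code):

```python
def time_interp(t, t_prev, f_prev, t_next, f_next):
    """Linear time interpolation between two forcing records.

    Records are dated at the middle of their forcing period, so t must
    lie between t_prev and t_next (times here in hours, for instance).
    Illustrative sketch only, not the NEMO implementation.
    """
    w = (t - t_prev) / (t_next - t_prev)   # fractional position in [0, 1]
    return (1.0 - w) * f_prev + w * f_next

# daily records dated at mid-day, i.e. at 12h and 36h;
# value requested at 18h: w = 0.25, so 0.75*10 + 0.25*14 = 11.0
val = time_interp(18.0, 12.0, 10.0, 36.0, 14.0)
```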
Note that in general, a flux formulation is used in association with a
…
\begin{table}[htbp] \label{Tab_CORE}
\begin{center}

[removed in r2541:]
\begin{tabular}{|l|l|l|l|}
[added in r2541:]
\begin{tabular}{|l|c|c|c|}

\hline
Variable description & Model variable & Units & point \\ \hline
…
%--------------------------------------------------------------------------------------------------------------

Note that the air velocity is provided at a tracer ocean point, not at a velocity ocean
point ($u$- and $v$-points). It is simpler and faster (less fields to be read),
but it is not the recommended method when the ocean grid size is the same
or larger than that of the input atmospheric fields.

% -------------------------------------------------------------------------------------------------------------
…
As for the flux formulation, information about the input data required by the
model is provided in the namsbc\_blk\_core or namsbc\_blk\_clio

[removed in r2541:]
namelist (via the structure fld\_i). The first and last record assumption is also made
(see \S\ref{SBC_flx}).
[added in r2541:]
namelist (see \S\ref{SBC_fldread}).

% ================================================================
…
(see \mdl{dynspg} for the ocean). For sea-ice, the sea surface height, $\eta_m$,
which is provided to the sea-ice model, is set to $\eta - \eta_{ib}$ (see the \mdl{sbcssr} module).
$\eta_{ib}$ can be set in the output.
This can simplify altimetry data and model comparison,
as inverse barometer sea surface height is usually removed from these data prior to their distribution.

…

River runoff generally enters the ocean at a nonzero depth rather than through the surface.
Many models, however, have traditionally inserted river runoff into the top model cell.
This was the case in \NEMO prior to version 3.3, and was combined with an option
to increase vertical mixing near the river mouth.

However, with this method numerical and physical problems arise when the top grid cells are
…

}

[removed in r2541: the former stand-alone `Interpolation on the Fly' section (old lines 519--636),
an earlier copy of the material now in \S\ref{SBC_iof} above.]

% ================================================================