Timestamp:
2011-01-09T05:55:20+01:00
Author:
gm
Message:

v3.3beta: #658 hopefully the final update, including MPP sum, SIGN, IOM, fldread documentation

File:
1 edited

  • trunk/DOC/TexFiles/Chapters/Chap_SBC.tex

    r2414 r2541  
\label{SBC_general}

The surface ocean stress is the stress exerted by the wind and the sea-ice
on the ocean. The two components of stress are assumed to be interpolated
     
    176175 
% ================================================================
%       Input Data
% ================================================================
\section{Input Data generic interface}
\label{SBC_input}
     181 
A generic interface has been introduced to manage the way input data (2D or 3D fields,
like surface forcing or ocean T and S) are specified in \NEMO. This task is achieved by fldread.F90.
The module was designed with four main objectives in mind:
\begin{enumerate}
\item optionally provide a time interpolation of the input data at the model time-step,
whatever their input frequency is, and according to the different calendars available in the model;
\item optionally provide an on-the-fly space interpolation from the native input data grid to the model grid;
\item make the run duration independent of the period covered by the input files;
\item provide a simple user interface and a rather simple developer interface by limiting the
amount of prerequisite information.
\end{enumerate}
     193 
As a result, the user only has to fill in, for each variable, a structure in the namelist file
to define the input data file and variable names, the frequency of the data (in hours or months),
whether it is climatological data or not, the period covered by the input file (one year, month, week or day),
and two additional parameters for on-the-fly interpolation. When adding a new input variable,
the developer has to add the associated structure in the namelist, read this information
by mirroring the namelist read in \rou{sbc\_blk\_init} for example, and simply call \rou{fld\_read}
to obtain the desired input field at the model time-step and grid points.
     201 
The only constraints are that the input file is a NetCDF file, that the file name follows a nomenclature
(see \S\ref{SBC_fldread}), that the period it covers is one year, month, week or day, and that,
if on-the-fly interpolation is used, a file of weights is supplied (see \S\ref{SBC_iof}).
     205 
Note that when input data are archived on a disc accessible directly
from the workspace where the code is executed, the user can set \np{cn\_dir}
to the path leading to the data. By default, the data are assumed to have been
copied locally, so that cn\_dir='./'.
     210 
% -------------------------------------------------------------------------------------------------------------
% Input Data specification (\mdl{fldread})
% -------------------------------------------------------------------------------------------------------------
\subsection{Input Data specification (\mdl{fldread})}
\label{SBC_fldread}

The structure associated with an input variable contains the following information:
\begin{alltt}  {{\tiny
\begin{verbatim}
!  file name  ! frequency (hours) ! variable  ! time interp. !  clim  ! 'yearly'/ ! weights  ! rotation !
!             !  (if <0  months)  !   name    !   (logical)  !  (T/F) ! 'monthly' ! filename ! pairing  !
\end{verbatim}
}}\end{alltt}
where
\begin{description}
\item[File name]: the stem name of the NetCDF file to be opened.
This stem will be completed automatically by the model, with the addition of a '.nc' at its end,
and by date information and possibly a prefix (when using AGRIF).
Tab.~\ref{Tab_fldread} provides the resulting file name in all possible cases, according to whether
it is a climatological file or not, and to the open/close frequency (see below for its definition).
     231 
%--------------------------------------------------TABLE--------------------------------------------------
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
               & daily or weekLLL  & monthly        & yearly      \\   \hline
clim = false   & fn\_yYYYYmMMdDD   & fn\_yYYYYmMM   & fn\_yYYYY   \\   \hline
clim = true    & not possible      & fn\_m??.nc     & fn          \\   \hline
\end{tabular}
\end{center}
\caption{ \label{Tab_fldread}   naming nomenclature for climatological or interannual input files,
as a function of the open/close frequency. The stem name is assumed to be 'fn'.
For weekly files, 'LLL' corresponds to the first three letters of the first day of the week
($i.e.$ 'sun','sat','fri','thu','wed','tue','mon'). 'YYYY', 'MM' and 'DD' should be replaced by the
actual year/month/day, always coded with 4 or 2 digits. Note that (1) in mpp, if the file is split
over each subdomain, the suffix '.nc' is replaced by '\_PPPP.nc', where 'PPPP' is the
process number coded with 4 digits; (2) when using AGRIF, the prefix 'N\_' is added to file names,
where 'N' is the child grid number.}
\end{table}
%--------------------------------------------------------------------------------------------------------------
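The nomenclature of Tab.~\ref{Tab_fldread} can be sketched as follows. This is a hypothetical Python helper for illustration only; the actual name completion is performed inside fldread.F90, and the function name is an assumption.

```python
# Illustrative sketch of the Tab_fldread naming nomenclature (not NEMO code).
def forcing_file_name(stem, clim, freq, year=0, month=0, day=0):
    """freq is the open/close frequency: 'daily', 'weekLLL', 'monthly' or 'yearly'."""
    if clim:
        if freq == 'monthly':
            return stem + '_m??.nc'          # one file per climatological month
        if freq == 'yearly':
            return stem + '.nc'              # a single climatological file
        raise ValueError('daily/weekly climatological files are not possible')
    if freq == 'daily' or freq.startswith('week'):
        return '%s_y%04dm%02dd%02d.nc' % (stem, year, month, day)
    if freq == 'monthly':
        return '%s_y%04dm%02d.nc' % (stem, year, month)
    return '%s_y%04d.nc' % (stem, year)      # yearly interannual file
```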
     251   
     252 
\item[Record frequency]: the frequency of the records contained in the input file.
Its unit is hours if it is positive (for example 24 for daily forcing) or months if negative
(for example -1 for monthly forcing or -12 for annual forcing).
Note that this frequency must really be an integer, not a real.
On some computers, setting it to '24.' can be interpreted as 240!
     258 
\item[Variable name]: the name of the variable to be read in the input NetCDF file.
     260 
\item[Time interpolation]: a logical to activate, or not, the time interpolation. If set to 'false',
the forcing will have a step-like shape, remaining constant during each forcing period.
For example, when using daily forcing without time interpolation, the forcing remains
constant from 00h00'00'' to 23h59'59". If set to 'true', the forcing will have a broken-line shape.
Records are assumed to be dated at the middle of the forcing period.
For example, when using daily forcing with time interpolation, a linear interpolation is
performed between mid-day of two consecutive days.
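The difference between the two shapes can be sketched as follows, for daily records. This is illustrative Python (assumed function name), not the fldread code; records[k] holds the value for day k, dated at its middle.

```python
def forcing_value(records, t, time_interp):
    """Value of a daily forcing at time t (days since the start of the file).
    records[k] is the record for day k, dated at its middle, t_k = k + 0.5."""
    if not time_interp:
        # step-like shape: constant from 00h00'00" to 23h59'59" of each day
        return records[int(t)]
    # broken-line shape: linear interpolation between two mid-day records
    k = int(t - 0.5)                 # record dated just before (or at) t
    w = t - (k + 0.5)                # linear weight, 0 <= w <= 1
    return (1.0 - w) * records[k] + w * records[k + 1]
```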
     268 
\item[Climatological forcing]: a logical to specify whether the input file contains climatological
forcing, which can be cycled in time, or interannual forcing, which will require additional files
if the period covered by the simulation exceeds the one of the file. See the file
naming strategy above, which impacts the expected name of the file to be opened.
     273 
\item[Open/close frequency]: the frequency at which forcing files must be opened/closed.
Four cases are coded: 'daily', 'weekLLL' (with 'LLL' the first 3 letters of the first day of the week),
'monthly' and 'yearly', which means the forcing files will contain data for one day, one week,
one month or one year. Files are assumed to contain data from the beginning of the open/close period.
For example, the first record of a yearly file containing daily data is Jan 1st, even if the experiment
does not start at the beginning of the year.
     280 
\item[Others]: 'weights filename' and 'pairing rotation' are associated with on-the-fly interpolation,
which is described in \S\ref{SBC_iof}.

\end{description}
     285 
Additional remarks:\\
(1) The time interpolation is a simple linear interpolation between two consecutive records of
the input data. The only tricky point is therefore to specify the date at which the
interpolation is needed and the date of the records read in the input files.
Following \citet{Leclair_Madec_OM09}, the date of a time step is set at the middle of the
time step. For example, for an experiment starting at 0h00'00" with a one-hour time-step,
a time interpolation will be performed at the following times: 0h30'00", 1h30'00", 2h30'00", etc.
However, for forcing data related to the surface module, values are not needed at every
time-step but at every \np{nn\_fsbc} time-steps. For example, with \np{nn\_fsbc}~=~3,
the surface module will be called at time-steps 1, 4, 7, etc. The date used for the time interpolation
is thus redefined to be at the middle of the \np{nn\_fsbc} time-step period. In the previous example,
this leads to: 1h30'00", 4h30'00", 7h30'00", etc. \\
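The redefined interpolation dates can be sketched as follows (a minimal Python illustration; the function name and argument names are assumptions, not NEMO identifiers):

```python
def sbc_interp_times(rdt_hours, nn_fsbc, n_calls):
    """Dates (hours since the start of the run) at which the forcing is
    interpolated for the surface module: the middle of each nn_fsbc period."""
    period = nn_fsbc * rdt_hours
    return [k * period + 0.5 * period for k in range(n_calls)]
```

With a one-hour time-step and nn\_fsbc = 3, this reproduces the 1h30'00", 4h30'00", 7h30'00" sequence given above.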
(2) For code readability and maintenance reasons, the calendar of the NetCDF input file is not
taken into account. The calendar associated with the forcing field is built according to the information
provided by the user in the record frequency, the open/close frequency and the type of temporal interpolation.
For example, the first record of a yearly file containing daily data that will be interpolated in time
is assumed to start on Jan 1st at 12h00'00" and end on Dec 31st at 12h00'00". \\
(3) If a time interpolation is requested, the code will pick up the needed data in the previous (next) file
when interpolating data with the first (last) record of the open/close period.
For example, if the input file specifications are ``yearly, containing daily data to be interpolated in time'',
the values given by the code between 00h00'00" and 11h59'59" on Jan 1st will be interpolated values
between Dec 31st 12h00'00" and Jan 1st 12h00'00". If the forcing is climatological, Dec and Jan will
be taken from the same year. However, if the forcing is not climatological, at the end of the
open/close period the code will automatically close the current file and open the next one.
Note that, if the experiment starts (ends) at the beginning (end) of an open/close period,
the previous (next) file is allowed not to exist. In this case, the time interpolation
will be performed between two identical values. For example, when starting an experiment on
Jan 1st of year Y with yearly files and daily data to be interpolated, the file
related to year Y-1 is not required to exist. The value of Jan 1st will be used as the missing one for
Dec 31st of year Y-1. If the file of year Y-1 exists, the code will read its last record.
Therefore, this file can contain only one record, corresponding to Dec 31st, a useful feature for
users who consider it too cumbersome to manipulate the complete file for year Y-1.
     318 
     319 
% -------------------------------------------------------------------------------------------------------------
% Interpolation on the Fly
% -------------------------------------------------------------------------------------------------------------
\subsection [Interpolation on-the-Fly] {Interpolation on-the-Fly}
\label{SBC_iof}
     325 
Interpolation on the Fly allows the user to supply input files required
for the surface forcing on grids other than the model grid.
To do this he or she must supply, in addition to the source data file,
a file of weights to be used to interpolate from the data grid to the model grid.
The original development of this code used the SCRIP package (freely available
\href{http://climate.lanl.gov/Software/SCRIP}{here} under a copyright agreement).
In principle, any package can be used to generate the weights, but the
variables in the input weights file must have the same names and meanings as
assumed by the model.
Two methods are currently available: bilinear and bicubic interpolation.
     336 
\subsubsection{Bilinear Interpolation}
\label{SBC_iof_bilinear}

The input weights file in this case has two sets of variables: src01, src02,
src03, src04 and wgt01, wgt02, wgt03, wgt04.
The "src" variables correspond to the point in the input grid to which the weight
"wgt" is to be applied. Each src value is an integer corresponding to the index of a
point in the input grid when written as a one-dimensional array. For example, for an input grid
of size 5x10, point (3,2) is referenced as point 8, since (2-1)*5+3=8.
There are four of each variable because bilinear interpolation uses the four points defining
the grid box containing the point to be interpolated.
All of these arrays are on the model grid, so that values src01(i,j) and
wgt01(i,j) are used to generate a value for point (i,j) in the model.
     350 
Symbolically, the algorithm used is:

\begin{equation}
f_{m}(i,j) = f_{m}(i,j) + \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))}
\end{equation}
where function idx() transforms a one-dimensional index src(k) into a two-dimensional index,
and wgt(1) corresponds to variable "wgt01", for example.
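The index transformation and the weighted sum can be sketched in a few lines of illustrative Python with NumPy. The array names mirror src01..src04 and wgt01..wgt04, but this is not the fldread code, and the function names are assumptions:

```python
import numpy as np

def idx(src, ni):
    """Transform the 1-based one-dimensional index src into a 1-based (i, j)
    pair, for an input grid with ni points in the i direction."""
    j, i = divmod(src - 1, ni)
    return i + 1, j + 1

def bilinear_apply(f_in, src, wgt):
    """f_in: input-grid field, shape (nj_in, ni_in).
    src, wgt: shape (4, nj, ni) arrays on the model grid."""
    ni_in = f_in.shape[1]
    f_m = np.zeros(src.shape[1:])
    for k in range(4):                       # four corners of the grid box
        for (j, i), s in np.ndenumerate(src[k]):
            ii, jj = idx(int(s), ni_in)
            f_m[j, i] += wgt[k, j, i] * f_in[jj - 1, ii - 1]
    return f_m
```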
     358 
\subsubsection{Bicubic Interpolation}
\label{SBC_iof_bicubic}

Again there are two sets of variables: "src" and "wgt".
But in this case there are 16 of each.
The symbolic algorithm used to calculate values on the model grid is now:
     365 
\begin{equation*} \begin{split}
f_{m}(i,j) =  f_{m}(i,j) +& \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))}
              +   \sum_{k=5}^{8} {wgt(k)\left.\frac{\partial f}{\partial i}\right| _{idx(src(k))} }    \\
              +& \sum_{k=9}^{12} {wgt(k)\left.\frac{\partial f}{\partial j}\right| _{idx(src(k))} }
              +   \sum_{k=13}^{16} {wgt(k)\left.\frac{\partial ^2 f}{\partial i \partial j}\right| _{idx(src(k))} }
\end{split}
\end{equation*}
The gradients here are taken with respect to the horizontal indices, not distances, since the spatial dependency has been absorbed into the weights.
     374 
\subsubsection{Implementation}
\label{SBC_iof_imp}

To activate this option, a non-empty string should be supplied in the weights filename column
of the relevant namelist; if this is left as an empty string no action is taken.
In the model, weights files are read in and stored in a structured type (WGT) in the fldread
module, as and when they are first required.
This initialisation procedure determines whether the input data grid should be treated
as cyclical or not by inspecting a global attribute stored in the weights input file.
This attribute must be called "ew\_wrap" and be of integer type.
If it is negative, the input non-model grid is assumed not to be cyclic.
If zero or greater, the value represents the number of columns that overlap.
$E.g.$ if the input grid has columns at longitudes 0, 1, 2, .... , 359, then ew\_wrap should be set to 0;
if longitudes are 0.5, 2.5, .... , 358.5, 360.5, 362.5, ew\_wrap should be 2.
If the model does not find attribute ew\_wrap, a value of -999 is assumed.
In this case the \rou{fld\_read} routine defaults ew\_wrap to 0, and the grid
is therefore assumed to be cyclic with no overlapping columns.
(In fact this only matters when bicubic interpolation is required.)
Note that no check of this value is performed in the model, since there is no way
of knowing the name used for the longitude variable,
so it is up to the user to make sure the data are correctly represented.
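The interpretation of the attribute described above amounts to the following sketch, assuming the global attributes are available as a dictionary (the function name is hypothetical):

```python
def resolve_ew_wrap(global_attrs):
    """Return (cyclic, n_overlap) from the weights-file global attributes,
    following the rules described in the text (not the actual fld_read code)."""
    ew_wrap = global_attrs.get('ew_wrap', -999)  # missing attribute -> -999
    if ew_wrap == -999:
        ew_wrap = 0          # default: cyclic, no overlapping columns
    if ew_wrap < 0:
        return False, 0      # non-cyclic input grid
    return True, ew_wrap     # cyclic, with ew_wrap overlapping columns
```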
     396 
Next the routine reads in the weights.
Bicubic interpolation is assumed if it finds a variable with name "src05", otherwise
bilinear interpolation is used. The WGT structure includes dynamic arrays both for
the storage of the weights (on the model grid), and, when required, for reading in
the variable to be interpolated (on the input data grid).
The size of the input data array is determined by examining the values in the "src"
arrays to find the minimum and maximum i and j values required.
Since bicubic interpolation requires the calculation of gradients at each point on the grid,
the corresponding arrays are dimensioned with a halo of width one grid point all the way around.
When the array of points from the data file is adjacent to an edge of the data grid,
the halo is either a copy of the row/column next to it (non-cyclical case), or is a copy
of one from the first few columns on the opposite side of the grid (cyclical case).
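The halo construction can be sketched as follows with NumPy (an assumed helper name; the real code pads only the sub-array actually read):

```python
import numpy as np

def add_halo(f, cyclic, overlap=0):
    """Surround the data-grid array f, shape (nj, ni), with a one-point halo,
    as needed for the bicubic gradients (illustration, not the NEMO code)."""
    nj, ni = f.shape
    out = np.empty((nj + 2, ni + 2), dtype=f.dtype)
    out[1:-1, 1:-1] = f
    if cyclic:
        # east-west wrap, skipping any duplicated (overlapping) columns
        out[1:-1, 0] = f[:, ni - 1 - overlap]
        out[1:-1, -1] = f[:, overlap]
    else:
        # copy of the adjacent column
        out[1:-1, 0] = f[:, 0]
        out[1:-1, -1] = f[:, -1]
    # top and bottom rows: copy of the neighbouring row
    out[0, :] = out[1, :]
    out[-1, :] = out[-2, :]
    return out
```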
     409 
\subsubsection{Limitations}
\label{SBC_iof_lim}

\begin{enumerate}
\item  The case where input data grids are not logically rectangular has not been tested.
\item  This code is not guaranteed to produce positive definite answers from positive definite inputs
          when a bicubic interpolation method is used.
\item  The cyclic condition is only applied on left and right columns, and not to top and bottom rows.
\item  The gradients across the ends of a cyclical grid assume that the grid spacing between
          the two columns involved is consistent with the weights used.
\item  Neither interpolation scheme is conservative. (There is a conservative scheme available
          in SCRIP, but this has not been implemented.)
\end{enumerate}
     423 
\subsubsection{Utilities}
\label{SBC_iof_util}

% to be completed
A set of utilities to create a weights file for a rectilinear input grid is available
(see the directory NEMOGCM/TOOLS/WEIGHTS).
     430 
     431 
     432% ================================================================ 
    178433% Analytical formulation (sbcana module)  
    179434% ================================================================ 
     
    215470read in the file, the time frequency at which it is given (in hours), and a logical  
    216471setting whether a time interpolation to the model time step is required  
    217 for this field). (fld\_i namelist structure). 
    218  
    219 \textbf{Caution}: when the frequency is set to --12, the data are monthly  
    220 values. These are assumed to be climatological values, so time interpolation  
    221 between December the 15$^{th}$ and January the 15$^{th}$ is done using  
    222 records 12 and 1 
    223  
    224 When higher frequency is set and time interpolation is demanded, the model  
    225 will try to read the last (first) record of previous (next) year in a file  
    226 having the same name but a suffix {\_}prev{\_}year ({\_}next{\_}year) being  
    227 added (e.g. "{\_}1989"). These files must only contain a single record. If they don't exist,  
    228 the model assumes that the last record of the previous year is equal to the first  
    229 record of the current year, and similarly, that the first record of the  
    230 next year is equal to the last record of the current year. This will cause  
    231 the forcing to remain constant over the first and last half fld\_frequ hours. 
     472for this field. See \S\ref{SBC_fldread} for a more detailed description of the parameters. 
    232473 
Note that in general, a flux formulation is used in association with a
     
    281522\begin{table}[htbp]   \label{Tab_CORE} 
    282523\begin{center} 
    283 \begin{tabular}{|l|l|l|l|} 
     524\begin{tabular}{|l|c|c|c|} 
    284525\hline 
Variable description              & Model variable  & Units   & point \\    \hline
     
    297538%-------------------------------------------------------------------------------------------------------------- 
    298539 
    299 Note that the air velocity is provided at a tracer ocean point, not at a velocity ocean point ($u$- and $v$-points). It is simpler and faster (less fields to be read), but it is not the recommended method when the ocean grid 
    300 size is the same or larger than the one of the input atmospheric fields. 
     540Note that the air velocity is provided at a tracer ocean point, not at a velocity ocean  
     541point ($u$- and $v$-points). It is simpler and faster (less fields to be read),  
     542but it is not the recommended method when the ocean grid size is the same  
     543or larger than the one of the input atmospheric fields. 
    301544 
    302545% ------------------------------------------------------------------------------------------------------------- 
     
    338581As for the flux formulation, information about the input data required by the  
    339582model is provided in the namsbc\_blk\_core or namsbc\_blk\_clio  
    340 namelist (via the structure fld\_i). The first and last record assumption is also made  
    341 (see \S\ref{SBC_flx}) 
     583namelist (see \S\ref{SBC_fldread}).  
    342584 
    343585% ================================================================ 
     
    399641(see \mdl{dynspg} for the ocean). For sea-ice, the sea surface height, $\eta_m$,  
    400642which is provided to the sea ice model is set to $\eta - \eta_{ib}$ (see \mdl{sbcssr} module). 
    401 $\eta_{ib}$ can be set in the output. This can simplify the altirmetry data and model comparison  
     643$\eta_{ib}$ can be set in the output. This can simplify altimetry data and model comparison  
as inverse barometer sea surface height is usually removed from these data prior to their distribution.
    403645 
     
    433675River runoff generally enters the ocean at a nonzero depth rather than through the surface. 
    434676Many models, however, have traditionally inserted river runoff to the top model cell. 
    435 This was the case in \NEMO prior to the version 3.3, and was combined with an option to increase vertical mixing near the river mouth. 
     677This was the case in \NEMO prior to the version 3.3, and was combined with an option  
     678to increase vertical mixing near the river mouth. 
    436679 
    437680However, with this method numerical and physical problems arise when the top grid cells are  
     
    517760 
    518761} 
    636762 
    637763% ================================================================ 