Changeset 10354 for NEMO/trunk/doc/latex/NEMO/subfiles/chap_OBS.tex
- Timestamp: 2018-11-21T17:59:55+01:00
- Files: 1 edited
NEMO/trunk/doc/latex/NEMO/subfiles/chap_OBS.tex
r10146 r10354

$\ $\newline % force a new line

The observation and model comparison code (OBS) reads in observation files
(profile temperature and salinity, sea surface temperature, sea level anomaly, sea ice concentration, and velocity) and
calculates an interpolated model equivalent value at the observation location and nearest model timestep.
The resulting data are saved in a ``feedback'' file (or files).
The code was originally developed for use with the NEMOVAR data assimilation code,
but can be used for validation or verification of the model or with any other data assimilation system.
The OBS code is called from \mdl{nemogcm} for model initialisation and
to calculate the model equivalent values for observations on the 0th timestep.
The code is then called again after each timestep from \mdl{step}.
The code is only activated if the namelist logical \np{ln\_diaobs} is set to true.

For all data types a 2D horizontal interpolator or averager is needed to
interpolate/average the model fields to the observation location.
For {\em in situ} profiles, a 1D vertical interpolator is needed in addition to
provide model fields at the observation depths.
This now works in a generalised vertical coordinate system.

Some profile observation types (e.g. tropical moored buoys) are made available as daily averaged quantities.
The observation operator code can be set up to calculate the equivalent daily average model temperature fields using
the \np{nn\_profdavtypes} namelist array.
Some SST observations are equivalent to a night-time average value and
the observation operator code can calculate equivalent night-time average model SST fields by
setting the namelist value \np{ln\_sstnight} to true.
Otherwise the model value from the nearest timestep to the observation time is used.

The code is controlled by the namelist \textit{namobs}.
See the following sections for more details on setting up the namelist.

\autoref{sec:OBS_example} introduces a test example of the observation operator code including
where to obtain data and how to set up the namelist.
\autoref{sec:OBS_details} introduces some more technical details of the different observation types used and
also shows a more complete namelist.
\autoref{sec:OBS_theory} introduces some of the theoretical aspects of the observation operator including
interpolation methods and running on multiple processors.
\autoref{sec:OBS_ooo} describes the offline observation operator code.
\autoref{sec:OBS_obsutils} introduces some utilities to help working with the files produced by the OBS code.

% ================================================================

…

This section describes an example of running the observation operator code using
profile data which can be freely downloaded.
It shows how to adapt an existing run and build of NEMO to run the observation operator.

\begin{enumerate}
\item Compile NEMO.

\item Download some EN4 data from \href{http://www.metoffice.gov.uk/hadobs}{www.metoffice.gov.uk/hadobs}.
Choose observations which are valid for the period of your test run because
the observation operator compares the model and observations for a matching date and time.

\item Compile the OBSTOOLS code using:
\begin{cmds}

…

\end{cmds}

\item Include the following in the NEMO namelist to run the observation operator on this data:
\end{enumerate}

%------------------------------------------namobs_example-----------------------------------------------------
%
%\nlst{namobs_example}
%-------------------------------------------------------------------------------------------------------------

Options are defined through the \ngn{namobs} namelist variables.
The options \np{ln\_t3d} and \np{ln\_s3d} switch on the temperature and salinity profile observation operator code.
The filename or array of filenames are specified using the \np{cn\_profbfiles} variable.
The model grid points for a particular observation latitude and longitude are found using
the grid searching part of the code.
This can be expensive, particularly for large numbers of observations;
setting \np{ln\_grid\_search\_lookup} allows the use of a lookup table which
is saved into an ``xypos'' file (or files).
This will need to be generated the first time if it does not exist in the run directory.
However, once produced it will significantly speed up future grid searches.
Setting \np{ln\_grid\_global} means that the code distributes the observations evenly between processors.
Alternatively each processor will work with observations located within the model subdomain
(see section~\autoref{subsec:OBS_parallel}).

A number of utilities are now provided to plot the feedback files, and to convert and recombine the files.
These are explained in more detail in section~\autoref{sec:OBS_obsutils}.
Utilities to convert other input data formats into the feedback format are also described in
section~\autoref{sec:OBS_obsutils}.
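The lookup-table idea behind \np{ln\_grid\_search\_lookup} can be illustrated with a small conceptual sketch.
This is Python rather than NEMO's Fortran, and all names are hypothetical:
a coarse regular grid stores, for each of its cells, the range of model $(i,j)$ indices found nearby,
so each observation only triggers a search over that small range instead of the whole grid.

```python
import numpy as np

def build_lookup(lats, lons, res=1.0):
    """Build a coarse lookup table on a regular res-degree grid.
    Each coarse cell stores the range of model (i, j) indices whose
    points fall inside it (conceptual sketch only, not NEMO's code)."""
    nlat, nlon = int(180 / res), int(360 / res)
    big = np.iinfo(np.int64).max
    imin = np.full((nlat, nlon), big); imax = np.full((nlat, nlon), -1)
    jmin = np.full((nlat, nlon), big); jmax = np.full((nlat, nlon), -1)
    ni, nj = lats.shape
    for i in range(ni):
        for j in range(nj):
            a = int((lats[i, j] + 90.0) / res)     # coarse row
            b = int((lons[i, j] % 360.0) / res)    # coarse column
            imin[a, b] = min(imin[a, b], i); imax[a, b] = max(imax[a, b], i)
            jmin[a, b] = min(jmin[a, b], j); jmax[a, b] = max(jmax[a, b], j)
    return imin, imax, jmin, jmax

def candidate_range(lookup, obs_lat, obs_lon, res=1.0):
    """Bounds on the (i, j) indices to search for one observation.
    The 3x3 neighbourhood of coarse cells is taken so that the four
    surrounding model points are always inside the returned range."""
    imin, imax, jmin, jmax = lookup
    a = int((obs_lat + 90.0) / res)
    b = int((obs_lon % 360.0) / res)
    aa, bb = slice(max(a - 1, 0), a + 2), slice(max(b - 1, 0), b + 2)
    return (imin[aa, bb].min(), imax[aa, bb].max(),
            jmin[aa, bb].min(), jmax[aa, bb].max())
```

Building the table costs one pass over the model grid, which is why saving it to a file
(as the ``xypos'' file does) pays off on subsequent runs.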
\section{Technical details (feedback type observation file headers)}
\label{sec:OBS_details}

Here we show a more complete example namelist \ngn{namobs} and also show the NetCDF headers of
the observation files that may be used with the observation operator.

%------------------------------------------namobs--------------------------------------------------------

…

%-------------------------------------------------------------------------------------------------------------

The observation operator code uses the ``feedback'' observation file format for all data types.
All the observation files must be in NetCDF format.
Some example headers (produced using \mbox{\textit{ncdump~-h}}) for profile data, sea level anomaly and
sea surface temperature are in the following subsections.

\subsection{Profile feedback}

…

\end{clines}

The mean dynamic topography (MDT) must be provided in a separate file defined on
the model grid called \ifile{slaReferenceLevel}.
The MDT is required in order to produce the model equivalent sea level anomaly from the model sea surface height.
Below is an example header for this file (on the ORCA025 grid).

\begin{clines}

…

\subsection{Horizontal interpolation and averaging methods}

For most observation types, the horizontal extent of the observation is small compared to the model grid size and so
the model equivalent of the observation is calculated by interpolating from
the four surrounding grid points to the observation location.
Some satellite observations (e.g. microwave satellite SST data, or satellite SSS data) have a footprint which
is similar in size or larger than the model grid size (particularly when the grid size is small).
In those cases the model counterpart should be calculated by averaging the model grid points over
the same size as the footprint.
NEMO therefore has the capability to specify either an interpolation or an averaging
(for surface observation types only).
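For the common small-footprint case, the four-point interpolation just described reduces,
on a regularly spaced grid, to two 1D interpolations.
A minimal planar sketch (hypothetical names, not NEMO's Fortran code):

```python
def bilinear(lam, phi, cell):
    """Model equivalent at observation point (lam, phi) from the four
    surrounding grid points of a regular cell (planar sketch; names
    are hypothetical).  `cell` maps corner -> (lon, lat, field value)."""
    la, pa, xa = cell["A"]   # bottom left
    lb, _, xb = cell["B"]    # bottom right
    _, pc, xc = cell["C"]    # top left
    _, _, xd = cell["D"]     # top right
    t = (lam - la) / (lb - la)   # fractional position in longitude
    u = (phi - pa) / (pc - pa)   # fractional position in latitude
    # the four weights sum to one, so the normalisation factor is 1 here
    return ((1 - t) * (1 - u) * xa + t * (1 - u) * xb
            + (1 - t) * u * xc + t * u * xd)
```

An observation at the cell centre, for example, receives equal weight from all four corners.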
The main namelist option associated with the interpolation/averaging is \np{nn\_2dint}.
This default option can be set to values from 0 to 6.
Values between 0 and 4 are associated with interpolation while values 5 or 6 are associated with averaging.
\begin{itemize}

…

\item \np{nn\_2dint}\forcode{ = 3}: Bilinear remapping interpolation (general grid)
\item \np{nn\_2dint}\forcode{ = 4}: Polynomial interpolation
\item \np{nn\_2dint}\forcode{ = 5}: Radial footprint averaging with diameter specified in the namelist as
  \np{rn\_???\_avglamscl} in degrees or metres (set using \np{ln\_???\_fp\_indegs})
\item \np{nn\_2dint}\forcode{ = 6}: Rectangular footprint averaging with E/W and N/S size specified in
  the namelist as \np{rn\_???\_avglamscl} and \np{rn\_???\_avgphiscl} in degrees or metres
  (set using \np{ln\_???\_fp\_indegs})
\end{itemize}
The ??? in the last two options indicate these options should be specified for each observation type for
which the averaging is to be performed (see namelist example above).
The \np{nn\_2dint} default option can be overridden for surface observation types using
namelist values \np{nn\_2dint\_???} where ??? is one of sla, sst, sss, sic.

Below is some more detail on the various options for interpolation and averaging available in NEMO.

\subsubsection{Horizontal interpolation}

Consider an observation point ${\rm P}$ with longitude and latitude $({\lambda_{}}_{\rm P}, \phi_{\rm P})$ and
the four nearest neighbouring model grid points ${\rm A}$, ${\rm B}$, ${\rm C}$ and ${\rm D}$ with
longitude and latitude ($\lambda_{\rm A}$, $\phi_{\rm A}$), ($\lambda_{\rm B}$, $\phi_{\rm B}$) etc.
All horizontal interpolation methods implemented in NEMO estimate the value of a model variable $x$ at point ${\rm P}$ as
a weighted linear combination of the values of the model variables at the grid points ${\rm A}$, ${\rm B}$ etc.:
\begin{eqnarray}
{x_{}}_{\rm P} & \hspace{-2mm} = \hspace{-2mm} &
\frac{1}{w} \left( {w_{}}_{\rm A} {x_{}}_{\rm A} +
{w_{}}_{\rm B} {x_{}}_{\rm B} +
{w_{}}_{\rm C} {x_{}}_{\rm C} +
{w_{}}_{\rm D} {x_{}}_{\rm D} \right)
\end{eqnarray}
where ${w_{}}_{\rm A}$, ${w_{}}_{\rm B}$ etc.
are the respective weights for the model field at
points ${\rm A}$, ${\rm B}$ etc., and $w = {w_{}}_{\rm A} + {w_{}}_{\rm B} + {w_{}}_{\rm C} + {w_{}}_{\rm D}$.

Four different possibilities are available for computing the weights.

\begin{enumerate}

\item[1.] {\bf Great-Circle distance-weighted interpolation.}
The weights are computed as a function of the great-circle distance $s(P, \cdot)$ between $P$ and
the model grid points $A$, $B$ etc.
For example, the weight given to the field ${x_{}}_{\rm A}$ is specified as the product of the distances
from ${\rm P}$ to the other points:
\begin{eqnarray}
{w_{}}_{\rm A} = s({\rm P}, {\rm B}) \, s({\rm P}, {\rm C}) \, s({\rm P}, {\rm D})

…

\end{eqnarray}
and $M$ corresponds to $B$, $C$ or $D$.
A more stable form of the great-circle distance formula for small distances ($x$ near 1)
involves the arcsine function ($e.g.$ see p.~101 of \citet{Daley_Barker_Bk01}):
\begin{eqnarray}
s\left( {\rm P}, {\rm M} \right)

…

\end{eqnarray}

\item[2.]
{\bf Great-Circle distance-weighted interpolation with small angle approximation.}
Similar to the previous interpolation but with the distance $s$ computed as
\begin{eqnarray}
s\left( {\rm P}, {\rm M} \right)

…

\end{eqnarray}
where $M$ corresponds to $A$, $B$, $C$ or $D$.

\item[3.] {\bf Bilinear interpolation for a regular spaced grid.}
The interpolation is split into two 1D interpolations in the longitude and latitude directions, respectively.

\item[4.] {\bf Bilinear remapping interpolation for a general grid.}
An iterative scheme that involves first mapping a quadrilateral cell into
a cell with coordinates (0,0), (1,0), (0,1) and (1,1).
This method is based on the SCRIP interpolation package \citep{Jones_1998}.

\end{enumerate}

\subsubsection{Horizontal averaging}

For each surface observation type:
\begin{itemize}
\item The standard grid-searching code is used to find the nearest model grid point to
the observation location (see next subsection).
\item The maximum number of grid points in the local grid domain that the averaging is
likely to need to cover is calculated.
\item The latitudes/longitudes of the grid points surrounding the nearest model grid box are extracted using
existing MPI routines.
\item The weights for each grid point associated with each observation are calculated,
either for radial or rectangular footprints.
For grid points completely within the footprint, the weight is one;
for grid points completely outside the footprint, the weight is zero.
For grid points which are partly within the footprint, the ratio between the area of the footprint within
the grid box and the total area of the grid box is used as the weight.
\item The weighted average of the model grid points associated with each observation is calculated,
and this is then given as the model counterpart of the observation.
\end{itemize}

Examples of the weights calculated for an observation with rectangular and radial footprints are shown in
Figs.~\autoref{fig:obsavgrec} and~\autoref{fig:obsavgrad}.
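The weight calculation in the list above can be sketched for the rectangular footprint case on
a regular longitude/latitude grid.
This is a simplified Python illustration with hypothetical names; it ignores map factors and
partial land cover, unlike NEMO's Fortran implementation:

```python
def overlap_fraction(box, fp):
    """Fraction of a grid box covered by a rectangular footprint.
    Both are (lon_min, lon_max, lat_min, lat_max) in degrees: 1 for a
    box fully inside, 0 fully outside, the area ratio for partial overlap."""
    dx = max(0.0, min(box[1], fp[1]) - max(box[0], fp[0]))
    dy = max(0.0, min(box[3], fp[3]) - max(box[2], fp[2]))
    return (dx * dy) / ((box[1] - box[0]) * (box[3] - box[2]))

def footprint_average(values, boxes, obs_lon, obs_lat, avglamscl, avgphiscl):
    """Weighted average of the model grid boxes under a rectangular
    footprint of E/W size avglamscl and N/S size avgphiscl (degrees)
    centred on the observation: the model counterpart of the observation."""
    fp = (obs_lon - avglamscl / 2.0, obs_lon + avglamscl / 2.0,
          obs_lat - avgphiscl / 2.0, obs_lat + avgphiscl / 2.0)
    weights = [overlap_fraction(b, fp) for b in boxes]
    total = sum(w * v for w, v in zip(weights, values))
    return total / sum(weights)
```

A 1\deg\ footprint centred on the common corner of four 1\deg\ boxes, for instance,
gives each box a quarter of the weight.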
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure} \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_OBS_avg_rec}
\caption{ \protect\label{fig:obsavgrec}
Weights associated with each model grid box (blue lines and numbers)
for an observation at -170.5E, 56.0N with a rectangular footprint of 1\deg x 1\deg.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure} \begin{center}
\includegraphics[width=0.90\textwidth]{Fig_OBS_avg_rad}
\caption{ \protect\label{fig:obsavgrad}
Weights associated with each model grid box (blue lines and numbers)
for an observation at -170.5E, 56.0N with a radial footprint with diameter 1\deg.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

\subsection{Grid search}

For many grids used by the NEMO model, such as the ORCA family,
the horizontal grid coordinates $i$ and $j$ are not simple functions of latitude and longitude.
Therefore, it is not always straightforward to determine the grid points surrounding any given observational position.
Before the interpolation can be performed, a search algorithm is then required to determine the corner points of
the quadrilateral cell in which the observation is located.
This is the most difficult and time consuming part of the 2D interpolation procedure.
A robust test for determining if an observation falls within a given quadrilateral cell is as follows.
Let ${\rm P}({\lambda_{}}_{\rm P} ,{\phi_{}}_{\rm P} )$ denote the observation point,
and let ${\rm A}({\lambda_{}}_{\rm A} ,{\phi_{}}_{\rm A} )$, ${\rm B}({\lambda_{}}_{\rm B} ,{\phi_{}}_{\rm B} )$,
${\rm C}({\lambda_{}}_{\rm C} ,{\phi_{}}_{\rm C} )$ and ${\rm D}({\lambda_{}}_{\rm D} ,{\phi_{}}_{\rm D} )$
denote the bottom left, bottom right, top left and top right corner points of the cell, respectively.
To determine if P is inside the cell, we verify that the cross-products
\begin{eqnarray}
\begin{array}{lllll}

…

\end{array}
\label{eq:cross}
\end{eqnarray}
point in the opposite direction to the unit normal $\widehat{\bf k}$
(i.e., that the coefficients of $\widehat{\bf k}$ are negative),
where ${{\bf r}_{}}_{\rm PA}$, ${{\bf r}_{}}_{\rm PB}$, etc. correspond to
the vectors between points P and A, P and B, etc.
The method used is similar to the method used in the SCRIP interpolation package \citep{Jones_1998}.
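The same test can be sketched in the plane (a 2D simplification with hypothetical names;
the sign convention is sidestepped by requiring all four cross-products to share a sign,
which is equivalent for a convex cell):

```python
def cross_z(u, v):
    """z component of the cross product of two vectors in the plane."""
    return u[0] * v[1] - u[1] * v[0]

def point_in_cell(p, corners):
    """Planar sketch of the cross-product test: p lies inside the convex
    quadrilateral whose corners are listed in order around the cell
    when r_PA x r_PB, r_PB x r_PC, ... all share the same sign."""
    r = [(cx - p[0], cy - p[1]) for cx, cy in corners]  # vectors P -> corner
    z = [cross_z(r[k], r[(k + 1) % 4]) for k in range(4)]
    return all(s > 0 for s in z) or all(s < 0 for s in z)
```

As soon as the observation leaves the cell, at least one cross-product flips sign,
which is what makes the test robust for the distorted quadrilaterals of the ORCA grids.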
In order to speed up the grid search, there is the possibility to construct a lookup table for a user specified resolution.
This lookup table contains the lower and upper bounds on the $i$ and $j$ indices to
be searched for on a regular grid.
For each observation position, the closest point on the regular grid to this position is computed and
the $i$ and $j$ ranges of this point searched to determine the precise four points surrounding the observation.

\subsection{Parallel aspects of horizontal interpolation}
\label{subsec:OBS_parallel}

For horizontal interpolation, there is the basic problem that
the observations are unevenly distributed on the globe.
In numerical models, it is common to divide the model grid into subgrids (or domains) where
each subgrid is executed on a single processing element with explicit message passing for
exchange of information along the domain boundaries when running on a massively parallel processor (MPP) system.
This approach is used by \NEMO.
For observations there is no natural distribution since the observations are not equally distributed on the globe.
Two options have been made available:
1) geographical distribution;
and 2) round-robin.

\subsubsection{Geographical distribution of observations among processors}

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure} \begin{center}
\includegraphics[width=10cm,height=12cm,angle=-90.]{Fig_ASM_obsdist_local}
\caption{ \protect\label{fig:obslocal}
Example of the distribution of observations with the geographical distribution of observational data.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

This is the simplest option in which the observations are distributed according to
the domain of the grid-point parallelization.
\autoref{fig:obslocal} shows an example of the distribution of the {\em in situ} data on processors with
a different colour for each observation on a given processor for a 4 $\times$ 2 decomposition with ORCA2.
The grid-point domain decomposition is clearly visible on the plot.

The advantage of this approach is that all information needed for horizontal interpolation is available without
any MPP communication.
Of course, this is under the assumption that we are only using a $2 \times 2$ grid-point stencil for
the interpolation (e.g., bilinear interpolation).
For higher order interpolation schemes this is no longer valid.
A disadvantage with the above scheme is that the number of observations on each processor can be very different.
If the cost of the actual interpolation is expensive relative to the communication of data needed for interpolation,
this could lead to load imbalance.
\subsubsection{Round-robin distribution of observations among processors}

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure} \begin{center}
\includegraphics[width=10cm,height=12cm,angle=-90.]{Fig_ASM_obsdist_global}
\caption{ \protect\label{fig:obsglobal}
Example of the distribution of observations with the round-robin distribution of observational data.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

An alternative approach is to distribute the observations equally among processors and
use message passing in order to retrieve the stencil for interpolation.
The simplest distribution of the observations is to distribute them using a round-robin scheme.
\autoref{fig:obsglobal} shows the distribution of the {\em in situ} data on processors for
the round-robin distribution of observations with a different colour for each observation on a given processor for
a 4 $\times$ 2 decomposition with ORCA2 for the same input data as in \autoref{fig:obslocal}.
The observations are now clearly randomly distributed on the globe.
In order to be able to perform horizontal interpolation in this case,
a subroutine has been developed that retrieves any grid points in the global space.

\subsection{Vertical interpolation operator}

Vertical interpolation is achieved using either a cubic spline or linear interpolation.
For the cubic spline, the top and bottom boundary conditions for the second derivative of
the interpolating polynomial in the spline are set to zero.
At the bottom boundary, this is done using the land-ocean mask.

…

\subsection{Concept}

The obs oper maps model variables to observation space.
It is possible to apply this mapping without running the model.
The software which performs this functionality is known as the \textbf{offline obs oper}.
The obs oper is divided into three stages:
an initialisation phase, an interpolation phase and an output phase,
the implementation of which is outlined in the previous sections.
During the interpolation phase the offline obs oper populates the model arrays by
reading saved model fields from disk.

There are two ways of exploiting this offline capacity.
The first is to mimic the behaviour of the online system by supplying model fields at
regular intervals between the start and the end of the run.
This approach results in a single model counterpart per observation.
This kind of usage produces feedback files in the same file format as the online obs oper.
The second is to take advantage of the offline setting in which
multiple model counterparts can be calculated per observation.
In this case it is possible to consider all forecasts verifying at the same time.
By forecast, I mean any method which produces an estimate of physical reality which is not an observed value.
In the case of class 4 files this means forecasts, analyses, persisted analyses and
climatological values verifying at the same time.
Although the class 4 file format doesn't account for multiple ensemble members or
multiple experiments per observation, it is possible to include these components in the same or multiple files.

%--------------------------------------------------------------------------------------------------------
…
%--------------------------------------------------------------------------------------------------------
\subsubsection{Building}

In addition to \emph{OPA\_SRC} the offline obs oper requires the inclusion of the \emph{OOO\_SRC} directory.
\emph{OOO\_SRC} contains a replacement \mdl{nemo} and \mdl{nemogcm} which
overwrite the resultant \textbf{nemo.exe}.
This is the approach taken by \emph{SAS\_SRC} and \emph{OFF\_SRC}.

%--------------------------------------------------------------------------------------------------------
…
%--------------------------------------------------------------------------------------------------------
\subsubsection{Quick script}

A useful Python utility to control the namelist options can be found in \textbf{OBSTOOLS/OOO}.
The functions which locate model fields and observation files can be manually specified.
The package can be installed by appropriate use of the included setup.py script.

Documentation can be auto-generated by Sphinx by running \emph{make html} in the \textbf{doc} directory.
…

%--------------------------------------------------------------------------------------------------------
\subsection{Configuring the offline observation operator}

The observation files and settings understood by \textbf{namobs} have been outlined in the online obs oper section.
In addition there are two further namelists which control the operation of the offline obs oper:
\textbf{namooo}, which controls the input model fields, and \textbf{namcl4}, which
controls the production of class 4 files.

\subsubsection{Single field}

In offline mode model arrays are populated at appropriate time steps via input files.
At present, \textbf{tsn} and \textbf{sshn} are populated by the default read routines.
These routines will be expanded upon in future versions to allow the specification of any model variable.
As such, input files must be global versions of the model domain with
\textbf{votemper}, \textbf{vosaline} and optionally \textbf{sshn} present.

For each field read there must be an entry in the \textbf{namooo} namelist specifying
the name of the file to read and the index along the \emph{time\_counter}.
For example, to read the second time counter from a single file the namelist would be:

\begin{forlines}
…
\end{forlines}

\subsubsection{Multiple fields per run}

Model field iteration is controlled via \textbf{nn\_ooo\_freq}, which
specifies the number of model steps at which the next field gets read.
For example, if 12 hourly fields are to be interpolated in a setup where 288 steps equals 24 hours:

\begin{forlines}
…
\end{forlines}

The above namelist will result in feedback files whose first 12 hours contain the first field of foo.nc and
the second 12 hours contain the second field.

%\begin{framed}
%\end{framed}
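The read scheduling described above can be sketched as follows (an illustration of the stated behaviour, not the actual NEMO read routine; the numbers come from the 288-steps-per-day example):

```python
# nn_ooo_freq: number of model steps after which the next field is read.
# With 288 steps per day, 12 hourly fields give nn_ooo_freq = 144.
nn_ooo_freq = 144

def field_index(step):
    """0-based index of the input field used at a given model time step."""
    return step // nn_ooo_freq

# First 12 hours of the run use the first field of foo.nc,
# the second 12 hours use the second field.
print(field_index(0), field_index(143), field_index(144), field_index(287))
```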
It is easy to see how a collection of fields taken from a number of files at different indices can be combined at
a particular frequency in time to generate a pseudo model evolution.
As long as all that is needed is a single model counterpart at a regular interval then
namooo is all that needs to be edited.
However, a far more interesting approach can be taken in which multiple forecasts, analyses, persisted analyses and
climatologies are considered against the same set of observations.
For this a slightly more complicated approach is needed.
It is referred to as \emph{Class 4} since it is the fourth metric defined by the GODAE intercomparison project.

%--------------------------------------------------------------------------------------------------------
…
%--------------------------------------------------------------------------------------------------------
\subsubsection{Multiple model counterparts per observation a.k.a. Class 4}

Class 4 files are a generalisation of feedback files that allow multiple model components per observation.
For a single observation, as well as previous forecasts verifying at the same time,
there are also analyses, persisted analyses and climatologies.

The \textbf{namcl4} namelist performs two basic functions.
It organises the fields given in \textbf{namooo} into groups so that observations can be matched up multiple times.
It also controls the metadata and the output variable of the class 4 file when a write routine is called.

%\begin{framed}
\textbf{Note: ln\_cl4} must be set to \forcode{.true.} in \textbf{namobs} to use class 4 outputs.
%\end{framed}

…

\noindent
\linebreak
Much of the namelist is devoted to specifying this convention.
The following namelist settings control the elements of the output file names.
Each should be specified as a single string of character data.

\begin{description}
\item[cl4\_prefix]
Prefix for class 4 files, e.g. class4
\item[cl4\_date]
YYYYMMDD validity date
\item[cl4\_sys]
The name of the class 4 model system, e.g. FOAM
\item[cl4\_cfg]
The name of the class 4 model configuration, e.g. orca025
\item[cl4\_vn]
The name of the class 4 model version, e.g. 12.0
\end{description}

\noindent
The kind is specified by the observation type internally to the obs oper.
The processor number is specified internally in NEMO.

\subsubsection{Class 4 file global attributes}

Global attributes necessary to fulfill the class 4 file definition.
These are also useful pieces of information when collaborating with external partners.

\begin{description}
\item[cl4\_contact]
Contact email for class 4 files.
\item[cl4\_inst]
The name of the producer's institution.
\item[cl4\_cfg]
The name of the class 4 model configuration, e.g. orca025
\item[cl4\_vn]
The name of the class 4 model version, e.g. 12.0
\end{description}

\noindent
The obs\_type, creation date and validity time are specified internally to the obs oper.

\subsubsection{Class 4 model counterpart configuration}

As seen previously it is possible to perform a single sweep of the obs oper and
specify a collection of model fields equally spaced along that sweep.
In the class 4 case the single sweep is replaced with multiple sweeps and
a certain amount of book keeping is needed to ensure each model counterpart makes its way to
the correct piece of memory in the output files.
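This book keeping can be sketched schematically (plain Python with invented names, not NEMO code; the example of 7 counterparts per observation, i.e. 3 forecasts plus 3 persisted fields plus an analysis, comes from the text):

```python
# Schematic illustration of routing each sweep's model counterpart to the
# right slot of the class 4 output file. The (variable, forecast index)
# destinations below are invented for illustration.
sweeps = [("forecast", 0), ("forecast", 1), ("forecast", 2),
          ("persistence", 0), ("persistence", 1), ("persistence", 2),
          ("analysis", 0)]

# One output array per class 4 variable, with a forecast axis of length 3.
out = {"forecast": [None] * 3, "persistence": [None] * 3, "analysis": [None]}

for sweep, (var, fcst_idx) in enumerate(sweeps):
    out[var][fcst_idx] = f"counterpart from sweep {sweep}"

print(out["forecast"])
```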
\noindent
\linebreak
In terms of book keeping, the offline obs oper needs to know how many full sweeps need to be performed.
This is specified via the \textbf{cl4\_match\_len} variable and
is the total number of model counterparts per observation.
For example, 3 forecasts plus 3 persistence fields plus an analysis field would be 7 counterparts per observation.

\begin{forlines}
…
\end{forlines}

Then to correctly allocate a class 4 file the forecast axis must be defined.
This is controlled via \textbf{cl4\_fcst\_len}, which in our above example would be 3.

\begin{forlines}
…
\end{forlines}

Then for each model field it is necessary to designate what class 4 variable and index along
the forecast dimension the model counterpart should be stored in the output file,
as well as a value for that lead time in hours, which will be useful when interpreting the data afterwards.
\begin{forlines}
…
\end{forlines}

In terms of files and indices of fields inside each file the class 4 approach makes use of
the \textbf{namooo} namelist.
If our fields are in separate files with a single field per file, our example inputs will be specified as follows.

\begin{forlines}
…
\end{forlines}

When we combine all of the naming conventions, global attributes and i/o instructions the class 4 namelist becomes:

\begin{forlines}
…
\end{forlines}

\subsubsection{Climatology interpolation}

The climatological counterpart is generated at the start of the run by
restarting the model from climatology through appropriate use of \textbf{namtsd}.
To override the offline observation operator read routine and to take advantage of the restart settings,
specify the first entry in \textbf{cl4\_vars} as ``climatology''.
This will then pipe the restart from climatology into the output class 4 file.
As in every other class 4 matchup the input file, input index and output index must be specified.
These can be replaced with dummy data since they are not used, but
they must be present to cycle through the matchups correctly.

\subsection{Advanced usage}

In certain cases it may be desirable to combine multiple model fields per observation window with
multiple match ups per observation.
This can be achieved by specifying \textbf{nn\_ooo\_freq} as well as the class 4 settings.
Care must be taken in generating the ooo\_files list such that the files are arranged into
consecutive blocks of single match ups.
For example, 2 forecast fields of 12 hourly data would result in 4 separate read operations but
only 2 write operations, 1 per forecast.

\begin{forlines}
…
\end{forlines}

The above notation reveals the internal split between match up iterators and file iterators.
This technique has not been used before so experimentation is needed before results can be trusted.
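The split between file iterators (reads) and match up iterators (writes) in the example above can be sketched as follows (illustrative Python only; 2 forecasts, each supplying 2 twelve-hourly fields):

```python
# Illustration of the 4-reads / 2-writes pattern described above:
# the input files are arranged in consecutive blocks of single match ups.
n_forecasts, fields_per_forecast = 2, 2

reads, writes = [], []
for fc in range(n_forecasts):               # match up iterator: 1 write per forecast
    for fld in range(fields_per_forecast):  # file iterator: reads within the block
        reads.append((fc, fld))
    writes.append(fc)

print(len(reads), len(writes))   # 4 reads, 2 writes
```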
…
\label{sec:OBS_obsutils}

Some tools for viewing and processing of observation and feedback files are provided in
the NEMO repository for convenience.
These include OBSTOOLS, a collection of Fortran programs which are helpful to deal with feedback files.
They do such tasks as observation file conversion, printing of file contents and
some basic statistical analysis of feedback files.
The other tool is an IDL program called dataplot which uses a graphical interface to
visualise observations and feedback files.
OBSTOOLS and dataplot are described in more detail below.

\subsection{Obstools}

A series of Fortran utilities called OBSTOOLS is provided with NEMO.
These are helpful in handling observation files and the feedback file output from the NEMO observation operator.
The utilities are as follows:

\subsubsection{c4comb}

The program c4comb combines multiple class 4 files produced by individual processors in
an MPI run of NEMO offline obs\_oper into a single class 4 file.
The program is called in the following way:

…

\subsubsection{corio2fb}

The program corio2fb converts profile observation files from the Coriolis format to the standard feedback format.
The program is called in the following way:

\footnotesize
…

\subsubsection{enact2fb}

The program enact2fb converts profile observation files from the ENACT format to the standard feedback format.
The program is called in the following way:

\footnotesize
…

\subsubsection{fbcomb}

The program fbcomb combines multiple feedback files produced by individual processors in
an MPI run of NEMO into a single feedback file.
The program is called in the following way:

\footnotesize
…

\subsubsection{fbmatchup}

The program fbmatchup will match observations from two feedback files.
The program is called in the following way:

\footnotesize
…

\subsubsection{fbprint}

The program fbprint will print the contents of a feedback file or files to standard output.
Selected information can be output using optional arguments.
The program is called in the following way:

\footnotesize
…

\subsubsection{fbsel}

The program fbsel will select or subsample observations.
The program is called in the following way:

\footnotesize
…

\subsubsection{fbstat}

The program fbstat will output summary statistics in different global areas into a number of files.
The program is called in the following way:

\footnotesize
…

\subsubsection{fbthin}

The program fbthin will thin the data to 1 degree resolution.
The code could easily be modified to thin to a different resolution.
The program is called in the following way:

\footnotesize
…

\subsubsection{sla2fb}

The program sla2fb will convert an AVISO SLA format file to feedback format.
The program is called in the following way:

\footnotesize
…

\subsubsection{vel2fb}

The program vel2fb will convert TAO/PIRATA/RAMA currents files to feedback format.
The program is called in the following way:

\footnotesize
…

\subsection{Dataplot}

An IDL program called dataplot is included which uses a graphical interface to
visualise observations and feedback files.
It is possible to zoom in, plot individual profiles and calculate some basic statistics.
To plot some data run IDL and then:
\footnotesize
\begin{minted}{idl}
…
\end{minted}

To read multiple files into dataplot,
for example multiple feedback files from different processors or from different days,
the easiest method is to use the spawn command to generate a list of files which can then be passed to dataplot.
\footnotesize
\begin{minted}{idl}
…
\end{minted}

\autoref{fig:obsdataplotmain} shows the main window which is launched when dataplot starts.
This is split into three parts.
At the top there is a menu bar which contains a variety of drop down menus.
1341 Areas - zooms into prespecified regions; 1342 plot - plots the data as a timeseries or a T-S diagram if appropriate; 1343 Find - allows data to be searched; 1344 Config - sets various configuration options. 1345 1346 The middle part is a plot of the geographical location of the observations. 1347 This will plot the observation value, the model background value or observation minus background value depending on 1348 the option selected in the radio button at the bottom of the window. 1349 The plotting colour range can be changed by clicking on the colour bar. 1350 The title of the plot gives some basic information about the date range and depth range shown, 1351 the extreme values, and the mean and rms values. 1352 It is possible to zoom in using a drag-box. 1353 You may also zoom in or out using the mouse wheel. 1354 1355 The bottom part of the window controls what is visible in the plot above. 1356 There are two bars which select the level range plotted (for profile data). 1357 The other bars below select the date range shown. 1358 The bottom of the figure allows the option to plot the mean, root mean square, standard deviation or 1359 mean square values. 1360 As mentioned above you can choose to plot the observation value, the model background value or 1361 observation minus background value. 1362 The next group of radio buttons selects the map projection. 1363 This can either be regular latitude longitude grid, or north or south polar stereographic. 1364 The next group of radio buttons will plot bad observations, switch to salinity and 1365 plot density for profile observations. 1366 The rightmost group of buttons will print the plot window as a postscript, save it as png, or exit from dataplot. 
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure} \begin{center}
\includegraphics[width=9cm,angle=-90.]{Fig_OBS_dataplot_main}
\caption{ \protect\label{fig:obsdataplotmain}
  Main window of dataplot.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

If a profile point is clicked with the mouse button, a plot of the observation and background values as
a function of depth is produced (\autoref{fig:obsdataplotprofile}).

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure} \begin{center}
\includegraphics[width=7cm,angle=-90.]{Fig_OBS_dataplot_prof}
\caption{ \protect\label{fig:obsdataplotprofile}
  Profile plot from dataplot produced by right clicking on a point in the main window.}
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
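The spawn-based method for reading multiple files described above can be sketched as follows.
This is an illustrative fragment rather than a listing from the manual:
it assumes that dataplot accepts a string array of file names as its argument,
and the feedback-file pattern \texttt{profb*.nc} is purely hypothetical.
\footnotesize
\begin{minted}{idl}
; Hypothetical sketch: collect feedback files matching a pattern into a
; string array with SPAWN, then pass the whole array to dataplot.
; The file pattern profb*.nc is an example only.
SPAWN, 'ls profb*.nc', file_list   ; file_list receives one file name per element
dataplot, file_list                ; assumed calling interface
\end{minted}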