Changeset 2541 for trunk/DOC/TexFiles/Chapters
- Timestamp: 2011-01-09T05:55:20+01:00
- Location: trunk/DOC/TexFiles/Chapters
- Files: 1 added, 9 edited
Legend:
- (unmarked) Unmodified
- "-" Removed
- "+" Added
trunk/DOC/TexFiles/Chapters/Abstracts_Foreword.tex
r2414 → r2541

  equation model adapted to regional and global ocean circulation problems. It is intended to
  be a flexible tool for studying the ocean and its interactions with the other components of
- the earth climate system (atmosphere, sea-ice, biogeochemical tracers, ...) over a wide range
- of space and time scales. Prognostic variables are the three-dimensional velocity field, a linear
+ the earth climate system over a wide range of space and time scales.
+ Prognostic variables are the three-dimensional velocity field, a linear
  or non-linear sea surface height, the temperature and the salinity. In the horizontal direction,
  the model uses a curvilinear orthogonal grid and in the vertical direction, a full or partial step
  $z$-coordinate, or $s$-coordinate, or a mixture of the two. The distribution of variables is a
  three-dimensional Arakawa C-type grid. Various physical choices are available to describe
- ocean physics, including TKE and KPP vertical physics. Within NEMO, the ocean is interfaced
- with a sea-ice model (LIM v2 and v3), passive tracer and biogeochemical models (TOP)
- and, via the OASIS coupler, with several atmospheric general circulation models.
+ ocean physics, including TKE, GLS and KPP vertical physics. Within NEMO, the ocean is
+ interfaced with a sea-ice model (LIM v2 and v3), passive tracer and biogeochemical models (TOP)
+ and, via the OASIS coupler, with several atmospheric general circulation models. It also
+ supports two-way grid embedding via the AGRIF software.

The French r\'{e}sum\'{e} receives the parallel changes: the parenthetical list of climate-system
components is dropped, GLS is added alongside TKE and KPP in the vertical physics, the
sea-ice/biogeochemistry/passive-tracer interfacing sentence is generalised to plural models, and a
closing sentence is appended stating that the system also supports two-way grid embedding via
the AGRIF software ("Il supporte \'{e}galement l'embo\^{i}tement interactif de maillages via le
logiciel AGRIF.").
trunk/DOC/TexFiles/Chapters/Annex_D.tex
r2414 → r2541

  - In the declaration of a PUBLIC variable, the comment part at the end of the line
- should start with the two characters "\verb?!:?". The following UNIX command,
- \verb?grep var_name *90 | grep \!: ?
+ should start with the two characters "\verb?!:?". The following UNIX command, \\
+ \verb?grep var_name *90 | grep \!: ? \\
  will display the module name and the line where the var\_name declaration is.
trunk/DOC/TexFiles/Chapters/Chap_CFG.tex
r2440 → r2541

  \chapter{Configurations}
- \label{MISC}
+ \label{CFG}
  \minitoc
[…]
- \section{ORCA family: global ocean with tripolar grid}
+ \section{ORCA family: global ocean with tripolar grid (\key{orca\_rX})}
  \label{CFG_orca}
[…]
  The NEMO system is provided with five built-in ORCA configurations which differ in the
  horizontal resolution. The value of the resolution is given by the resolution at the Equator
- expressed in degrees. Each of configuration is set through a CPP key with set the grid size
- and configuration name parameters (Tab.~\ref{Tab_ORCA}).
+ expressed in degrees. Each configuration is set through a CPP key, \key{orca\_rX}
+ (with X being an indicator of the resolution), which sets the grid size and configuration
+ name parameters (Tab.~\ref{Tab_ORCA}).
trunk/DOC/TexFiles/Chapters/Chap_DYN.tex
r2376 → r2541

  The filtered formulation follows the \citet{Roullet_Madec_JGR00} implementation.
- The extra term introduced in the equations (see {\S}I.2.2) is solved implicitly.
+ The extra term introduced in the equations (see \S\ref{PE_free_surface}) is solved implicitly.
  The elliptic solvers available in the code are documented in \S\ref{MISC}.

  %% gm %%======>>>> given here the discrete eqs provided to the solver
+ \gmcomment{ %%% copy from chap-model basics
+ \begin{equation} \label{Eq_spg_flt}
+ \frac{\partial {\rm {\bf U}}_h }{\partial t} = {\rm {\bf M}}
+ - g \nabla \left( \tilde{\rho} \ \eta \right)
+ - g \ T_c \nabla \left( \tilde{\rho} \ \partial_t \eta \right)
+ \end{equation}
+ where $T_c$ is a parameter with dimensions of time which characterizes the force,
+ $\tilde{\rho} = \rho / \rho_o$ is the dimensionless density, and ${\rm {\bf M}}$
+ represents the collected contributions of the Coriolis, hydrostatic pressure gradient,
+ non-linear and viscous terms in \eqref{Eq_PE_dyn}.
+ } %end gmcomment

  Note that in the linear free surface formulation (\key{vvl} not defined), the ocean depth
trunk/DOC/TexFiles/Chapters/Chap_MISC.tex
r2414 → r2541

  a more clever choice.

(new section added:)

% ================================================================
% Accuracy and Reproducibility
% ================================================================
\section{Accuracy and Reproducibility (\mdl{lib\_fortran})}
\label{MISC_fortran}

\subsection{Issues with the intrinsic SIGN function (\key{nosignedzero})}
\label{MISC_sign}

SIGN(A, B) is the \textsc{Fortran} intrinsic function that delivers the magnitude
of A with the sign of B. For example, SIGN(-3.0, 2.0) has the value 3.0.
The problematic case is when the second argument is zero, because, on platforms
that support IEEE arithmetic, zero is actually a signed number:
there is a positive zero and a negative zero.

In \textsc{Fortran}~90, the processor was required always to deliver a positive result for SIGN(A, B)
if B was zero. In \textsc{Fortran}~95, however, the processor is allowed to do the correct thing
and deliver ABS(A) when B is a positive zero and -ABS(A) when B is a negative zero.
This change in the specification becomes apparent only when B is of type real, is a negative
zero, and the processor is capable of distinguishing between positive and negative zero:
SIGN then delivers a negative result where, under \textsc{Fortran}~90 rules, it used to
return a positive one.
This change may be especially sensitive for the ice model, so we overwrite the intrinsic
function with our own, which simply performs: \\
\verb? IF( B >= 0.e0 ) THEN ; SIGN(A,B) = ABS(A) ? \\
\verb? ELSE ; SIGN(A,B) = -ABS(A) ? \\
\verb? ENDIF ? \\
This feature can be found in the \mdl{lib\_fortran} module and is effective when \key{nosignedzero}
is defined. We use a CPP key because overwriting an intrinsic function can raise
performance issues with some computers/compilers.
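The signed-zero pitfall is easy to reproduce outside Fortran; here is a minimal Python sketch (the function name `nemo_sign` is ours, not NEMO's) contrasting the IEEE, Fortran-95-style behaviour of `math.copysign` with the Fortran-90-style replacement described above:

```python
import math

def nemo_sign(a: float, b: float) -> float:
    """Fortran-90-style SIGN: both signed zeros count as positive,
    mirroring the replacement function described above."""
    return abs(a) if b >= 0.0 else -abs(a)

# IEEE copysign distinguishes the two zeros (the Fortran 95 behaviour):
print(math.copysign(3.0, -0.0))   # -3.0: a negative zero flips the sign
print(math.copysign(3.0, 0.0))    #  3.0

# The replacement ignores the sign of zero (-0.0 >= 0.0 compares True),
# restoring the Fortran 90 result:
print(nemo_sign(3.0, -0.0))       #  3.0
```

The comparison `b >= 0.0` works because negative zero compares equal to positive zero, exactly as in the Fortran IF test above.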
\subsection{MPP reproducibility}
\label{MISC_glosum}

The numerical reproducibility of simulations on distributed memory parallel computers
is a critical issue. In particular, within NEMO the global summation of distributed arrays
is the operation most susceptible to rounding errors, and their propagation and accumulation
cause uncertainty in final simulation reproducibility on different numbers of processors.
To avoid this, based on the \citet{He_Ding_JSC01} review of different techniques,
we use a so-called self-compensated summation method. The idea is to estimate
the round-off error, store it in a buffer, and then add it back in the next addition.

Suppose we need to calculate $b = a_1 + a_2 + a_3$. The following algorithm
allows the sum to be split in two ($sum_1 = a_{1} + a_{2}$ and $b = sum_2 = sum_1 + a_3$)
while keeping track of the rounding errors, so that the result is the same as if the
sum were performed all at once.
\begin{align*}
sum_1 \ \ &= a_1 + a_2 \\
error_1 &= a_2 + ( a_1 - sum_1 ) \\
sum_2 \ \ &= sum_1 + a_3 + error_1 \\
error_2 &= a_3 + error_1 + ( sum_1 - sum_2 ) \\
b \qquad \ &= sum_2
\end{align*}
This feature can be found in the \mdl{lib\_fortran} module and is effective when \key{mpp\_rep}
is defined. In that case, the compensated summation is used in all calls to the glob\_sum
function (summation over the entire basin, excluding the rows and columns duplicated by the
cyclic or north fold boundary conditions, as well as the MPP overlap areas).
Note this implementation may be sensitive to the optimization level.
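As a cross-check of the scheme above, here is a plain-Python transcription of self-compensated summation (names are ours, not NEMO's); the error buffer carries the low-order bits that each partial sum would otherwise lose:

```python
def self_compensated_sum(values):
    """Sum a sequence while carrying the rounding error of each partial
    sum into the next addition (the scheme sketched above)."""
    total = 0.0
    error = 0.0                # low-order bits lost by the previous addition
    for a in values:
        y = a + error          # re-inject the previously lost part
        new_total = total + y
        error = y + (total - new_total)   # what (total + y) rounded away
        total = new_total
    return total

vals = [1.0] + [1.0e-16] * 4
# A plain left-to-right sum drops every 1e-16 term (1.0 + 1e-16 rounds to 1.0);
# the compensated sum recovers their accumulated contribution.
print(sum(vals), self_compensated_sum(vals))
```

Because the correction is itself a floating-point operation sequence, aggressive compiler optimization can reorder or fuse it away, which is the sensitivity to optimization level noted above.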
[…]

% ================================================================
% Model optimisation, Control Print and Benchmark
% ================================================================
[…]
- %\gmcomment{why not make these bullets into subsections?}
+ \gmcomment{why not make these bullets into subsections?}
[…]
  It is a fast and rather easy method to use; these are attractive features for a large
  number of ocean situations (variable bottom topography, complex coastal geometry,
- variable grid spacing, islands, open or cyclic boundaries, etc ...). It does not require
+ variable grid spacing, open or cyclic boundaries, etc ...). It does not require
  a search for an optimal parameter as in the SOR method. However, the SOR has
  been retained because it is a linear solver, which is a very useful property when
[…]
- the five-point finite difference equation \eqref{Eq_psi_total} can be rewritten as:
+ the resulting five-point finite difference equation is given by:
[…]

% ================================================================
% Diagnostics
% ================================================================
\section{Diagnostics (DIA, IOM, TRD, FLO)}
\label{MISC_diag}

% -------------------------------------------------------------------------------------------------------------
% Standard Model Output
% -------------------------------------------------------------------------------------------------------------
\subsection{Model Output (default or \key{iomput} or \key{dimgout} or \key{netcdf4})}
\label{MISC_iom}

%to be updated with Seb documentation on the IO

The model outputs are of three types: the restart file, the output listing,
and the output
file(s). The restart file is used internally by the code when
the user wants to start the model with initial conditions defined by a
previous simulation. It contains all the information that is necessary in
order for there to be no changes in the model results (even at the computer
precision) between a run performed with several restarts and the same run
performed in one step. It should be noted that this requires that the restart file
contain two consecutive time steps for all the prognostic variables, and
that it is saved in the same binary format as the one used by the computer
that is to read it (in particular, 32 bits binary IEEE format must not be used for
this file). The output listing and file(s) are predefined but should be checked
and, if necessary, adapted to the user's needs. The output listing is stored in
the $ocean.output$ file. The information is printed from within the code on the
logical unit $numout$. To locate these prints, use the UNIX command
"\textit{grep -i numout}" in the source code directory.

In the standard configuration, the user will find the model results in
NetCDF files containing mean values (or instantaneous values if
\key{diainstant} is defined) for every time-step where output is demanded.
These outputs are defined in the \mdl{diawri} module.
When defining \key{dimgout}, the outputs are written in DIMG format,
an IEEE output format.

Since version 3.3, support for NetCDF4 chunking and (lossless) compression has
been included. These options build on the standard NetCDF output and allow
the user control over the size of the chunks via namelist settings. Chunking
and compression can lead to significant reductions in file sizes for a small
runtime overhead.
For a fuller discussion on chunking and other performance 523 issues the reader is referred to the NetCDF4 documentation found 524 \href{http://www.unidata.ucar.edu/software/netcdf/docs/netcdf.html#Chunking}{here}. 525 526 The new features are only available when the code has been linked with a 527 NetCDF4 library (version 4.1 onwards, recommended) which has been built 528 with HDF5 support (version 1.8.4 onwards, recommended). Datasets created 529 with chunking and compression are not backwards compatible with NetCDF3 530 "classic" format but most analysis codes can be relinked simply with the 531 new libraries and will then read both NetCDF3 and NetCDF4 files. NEMO 532 executables linked with NetCDF4 libraries can be made to produce NetCDF3 533 files by setting the \np{ln\_nc4zip} logical to false in the \np{namnc4} 534 namelist: 535 536 %------------------------------------------namnc4---------------------------------------------------- 537 \namdisplay{namnc4} 538 %------------------------------------------------------------------------------------------------------------- 539 540 If \key{netcdf4} has not been defined, these namelist parameters are not read. 541 In this case, \np{ln\_nc4zip} is set false and dummy routines for a few 542 NetCDF4-specific functions are defined. These functions will not be used but 543 need to be included so that compilation is possible with NetCDF3 libraries. 544 545 When using NetCDF4 libraries, \key{netcdf4} should be defined even if the 546 intention is to create only NetCDF3-compatible files. This is necessary to 547 avoid duplication between the dummy routines and the actual routines present 548 in the library. Most compilers will fail at compile time when faced with 549 such duplication. Thus when linking with NetCDF4 libraries the user must 550 define \key{netcdf4} and control the type of NetCDF file produced via the 551 namelist parameter. 
Chunking and compression are applied only to 4D fields, and there is no
advantage in chunking across more than one time dimension since previously
written chunks would have to be read back and decompressed before being
added to. Therefore, user control over chunk sizes is provided only for the
three space dimensions. The user sets an approximate number of chunks along
each spatial axis. The actual size of the chunks will depend on the global domain
size for mono-processors or, more likely, the local processor domain size for
distributed processing. The derived values are subject to practical minimum
values (to avoid wastefully small chunk sizes) and cannot be greater than the
domain size in any dimension. The algorithm used is:

\begin{alltt} {{\scriptsize
\begin{verbatim}
ichunksz(1) = MIN( idomain_size,MAX( (idomain_size-1)/nn_nchunks_i + 1 ,16 ) )
ichunksz(2) = MIN( jdomain_size,MAX( (jdomain_size-1)/nn_nchunks_j + 1 ,16 ) )
ichunksz(3) = MIN( kdomain_size,MAX( (kdomain_size-1)/nn_nchunks_k + 1 , 1 ) )
ichunksz(4) = 1
\end{verbatim}
}}\end{alltt}

\noindent As an example, setting:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
nn_nchunks_i=4, nn_nchunks_j=4 and nn_nchunks_k=31
\end{verbatim}
}}\end{alltt} \vspace{-10pt}

\noindent for a standard ORCA2\_LIM configuration gives chunk sizes of {\small\tt 46x38x1}
in the mono-processor case (i.e. global domain of {\small\tt 182x149x31}).
An illustration of the potential space savings that NetCDF4 chunking and compression
provide is given in Table \ref{Tab_NC4}, which compares the results of two short
runs of the ORCA2\_LIM reference configuration with a 4x2 MPI partitioning. Note
the variation in the compression ratio achieved, which chiefly reflects the dry to wet
volume ratio of each processing region.
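The chunk-size algorithm above is easy to experiment with; a Python transcription (the function name is ours; Fortran integer division maps to `//` for the positive operands used here) reproduces the ORCA2\_LIM example:

```python
def netcdf4_chunksizes(idomain_size, jdomain_size, kdomain_size,
                       nn_nchunks_i, nn_nchunks_j, nn_nchunks_k):
    """Transcription of the ichunksz(1:4) algorithm quoted above:
    approximate chunk counts per spatial axis, with a practical
    minimum chunk size and a cap at the domain size."""
    return (
        min(idomain_size, max((idomain_size - 1) // nn_nchunks_i + 1, 16)),
        min(jdomain_size, max((jdomain_size - 1) // nn_nchunks_j + 1, 16)),
        min(kdomain_size, max((kdomain_size - 1) // nn_nchunks_k + 1, 1)),
        1,  # never chunk across the time dimension
    )

# mono-processor ORCA2_LIM global domain 182x149x31 with nn_nchunks = (4, 4, 31),
# as in the example above:
print(netcdf4_chunksizes(182, 149, 31, 4, 4, 31))  # (46, 38, 1, 1)
```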
%------------------------------------------TABLE----------------------------------------------------
\begin{table} \begin{tabular}{lrrr}
Filename & NetCDF3 & NetCDF4 & Reduction\\
         & filesize & filesize & \% \\
         & (KB)     & (KB)     & \\
ORCA2\_restart\_0000.nc & 16420 & 8860 & 47\%\\
ORCA2\_restart\_0001.nc & 16064 & 11456 & 29\%\\
ORCA2\_restart\_0002.nc & 16064 & 9744 & 40\%\\
ORCA2\_restart\_0003.nc & 16420 & 9404 & 43\%\\
ORCA2\_restart\_0004.nc & 16200 & 5844 & 64\%\\
ORCA2\_restart\_0005.nc & 15848 & 8172 & 49\%\\
ORCA2\_restart\_0006.nc & 15848 & 8012 & 50\%\\
ORCA2\_restart\_0007.nc & 16200 & 5148 & 69\%\\
ORCA2\_2d\_grid\_T\_0000.nc & 2200 & 1504 & 32\%\\
ORCA2\_2d\_grid\_T\_0001.nc & 2200 & 1748 & 21\%\\
ORCA2\_2d\_grid\_T\_0002.nc & 2200 & 1592 & 28\%\\
ORCA2\_2d\_grid\_T\_0003.nc & 2200 & 1540 & 30\%\\
ORCA2\_2d\_grid\_T\_0004.nc & 2200 & 1204 & 46\%\\
ORCA2\_2d\_grid\_T\_0005.nc & 2200 & 1444 & 35\%\\
ORCA2\_2d\_grid\_T\_0006.nc & 2200 & 1428 & 36\%\\
ORCA2\_2d\_grid\_T\_0007.nc & 2200 & 1148 & 48\%\\
... & ... & ... & ... \\
ORCA2\_2d\_grid\_W\_0000.nc & 4416 & 2240 & 50\%\\
ORCA2\_2d\_grid\_W\_0001.nc & 4416 & 2924 & 34\%\\
ORCA2\_2d\_grid\_W\_0002.nc & 4416 & 2512 & 44\%\\
ORCA2\_2d\_grid\_W\_0003.nc & 4416 & 2368 & 47\%\\
ORCA2\_2d\_grid\_W\_0004.nc & 4416 & 1432 & 68\%\\
ORCA2\_2d\_grid\_W\_0005.nc & 4416 & 1972 & 56\%\\
ORCA2\_2d\_grid\_W\_0006.nc & 4416 & 2028 & 55\%\\
ORCA2\_2d\_grid\_W\_0007.nc & 4416 & 1368 & 70\%\\
\end{tabular}
\caption{ \label{Tab_NC4}
Filesize comparison between NetCDF3 and NetCDF4 with chunking and compression}
\end{table}
%----------------------------------------------------------------------------------------------------

Since version 3.2, an I/O server has been added which provides more
flexibility in the choice of the fields to be output as well as in how the
writing work is distributed over the processors in massively parallel
computing. It is activated when \key{iomput} is defined.

When \key{iomput} is activated with \key{netcdf4}, chunking and
compression parameters for fields produced via \np{iom\_put} calls are
set via an equivalent and identically named namelist to \np{namnc4} in
\np{xmlio\_server.def}. Typically this namelist serves the mean files
whilst the \np{namnc4} in the main namelist file continues to serve the
restart files. This duplication is unfortunate but appropriate since, if
using io\_servers, the domain sizes of the individual files produced by the
io\_server processes may be different from those produced by the individual
processing regions, and different chunking choices may be desired.

{

% -------------------------------------------------------------------------------------------------------------
% Tracer/Dynamics Trends
% -------------------------------------------------------------------------------------------------------------
\subsection[Tracer/Dynamics Trends (TRD)]
{Tracer/Dynamics Trends (\key{trdmld}, \key{trdtra}, \key{trddyn}, \key{trdmld\_trc})}
\label{MISC_tratrd}

%------------------------------------------namtrd----------------------------------------------------
\namdisplay{namtrd}
%-------------------------------------------------------------------------------------------------------------

When the \key{trddyn} and/or \key{trdtra} CPP variables are defined, each
trend of the dynamics and/or temperature and salinity time evolution equations
is stored in three-dimensional arrays just after its computation ($i.e.$ at the end
of each $dyn\cdots.F90$ and/or $tra\cdots.F90$ routine). These trends are then
used in \mdl{trdmod} (see the TRD directory) every \textit{nn\_trd} time-steps.

What is done depends on the CPP keys defined:
\begin{description}
\item[\key{trddyn}, \key{trdtra}] : a check of the basin averaged properties of the momentum
and/or tracer equations is performed;
\item[\key{trdvor}] : a vertical summation of the momentum tendencies is performed,
then the curl is computed to obtain the barotropic vorticity tendencies, which are output;
\item[\key{trdmld}] : output of the tracer tendencies averaged vertically,
either over the mixed layer (\np{nn\_ctls}=0),
or over a fixed number of model levels (\np{nn\_ctls}$>$1 provides the number of levels),
or over a spatially varying but temporally fixed number of levels (typically the base
of the winter mixed layer) read in \ifile{ctlsurf\_idx} (\np{nn\_ctls}=1).
\end{description}

The units in the output file can be changed using the \np{nn\_ucf} namelist parameter.
For example, in the case of the salinity tendency the units are given by PSU/s/\np{nn\_ucf}.
Setting \np{nn\_ucf}=86400 ($i.e.$ the number of seconds in a day) provides the tendencies in PSU/d.

When \key{trdmld} is defined, two time averaging procedures are proposed.
Setting \np{ln\_trdmld\_instant} to \textit{true}, a simple time averaging is performed,
so that the resulting tendency is the contribution to the change of a quantity between
the two instantaneous values taken at the extremities of the time averaging period.
Setting \np{ln\_trdmld\_instant} to \textit{false}, a double time averaging is performed,
so that the resulting tendency is the contribution to the change of a quantity between
two \textit{time mean} values. The latter option requires the use of an extra file, \ifile{restart\_mld}
(\np{ln\_trdmld\_restart}=true), to restart a run.

Note that the mixed layer tendency diagnostic can also be used on biogeochemical models
via the \key{trdtrc} and \key{trdmld\_trc} CPP keys.
% -------------------------------------------------------------------------------------------------------------
% On-line Floats trajectories
% -------------------------------------------------------------------------------------------------------------
\subsection{On-line Floats trajectories (FLO) (\key{floats})}
\label{FLO}
%--------------------------------------------namflo-------------------------------------------------------
\namdisplay{namflo}
%--------------------------------------------------------------------------------------------------------------

The on-line computation of floats advected either by the three dimensional velocity
field or constrained to remain at a given depth ($w = 0$ in the computation) was
introduced in the system during the CLIPPER project. The algorithm used is based
either on the work of \cite{Blanke_Raynaud_JPO97} (default option), or on a $4^{th}$ order
Runge-Kutta algorithm (\np{ln\_flork4}=true). Note that the \cite{Blanke_Raynaud_JPO97}
algorithm has the advantage of providing trajectories which are consistent with the
numerics of the code, so that the trajectories never intercept the bathymetry.

See also \href{http://stockage.univ-brest.fr/~grima/Ariane/}{the Ariane web site},
which describes the off-line use of this marvellous diagnostic tool.

% -------------------------------------------------------------------------------------------------------------
% Other Diagnostics
% -------------------------------------------------------------------------------------------------------------
\subsection{Other Diagnostics (\key{diahth}, \key{diaar5})}
\label{MISC_diag_others}

Aside from the standard model variables, other diagnostics can be computed
on-line. The available ready-to-add diagnostic routines can be found in the directory DIA.
Among the available diagnostics, the following ones are obtained when defining
the \key{diahth} CPP key:

- the mixed layer depth (based on a density criterion, \citet{de_Boyer_Montegut_al_JGR04}) (\mdl{diahth})

- the turbocline depth (based on a turbulent mixing coefficient criterion) (\mdl{diahth})

- the depth of the 20\deg C isotherm (\mdl{diahth})

- the depth of the thermocline (maximum of the vertical temperature gradient) (\mdl{diahth})

The poleward heat and salt transports, their advective and diffusive components, and
the meridional stream function can be computed on-line in \mdl{diaptr} by setting
\np{ln\_diaptr} to true (see the \textit{namptr} namelist below).
When \np{ln\_subbas}~=~true, transports and stream function are computed
for the Atlantic, Indian, Pacific and Indo-Pacific Oceans (defined north of 30\deg S)
as well as for the World Ocean. The sub-basin decomposition requires an input file
(\ifile{subbasins}) which contains three 2D mask arrays, the Indo-Pacific mask
being deduced from the sum of the Indian and Pacific masks (Fig.~\ref{Fig_mask_subasins}).

%------------------------------------------namptr----------------------------------------------------
\namdisplay{namptr}
%-------------------------------------------------------------------------------------------------------------
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
\begin{figure}[!t] \begin{center}
\includegraphics[width=1.0\textwidth]{./TexFiles/Figures/Fig_mask_subasins.pdf}
\caption{ \label{Fig_mask_subasins}
Decomposition of the World Ocean (here ORCA2) into the sub-basins used to compute
the heat and salt transports as well as the meridional stream-function: Atlantic basin (red),
Pacific basin (green), Indian basin (blue), Indo-Pacific basin (blue+green).
Note that semi-enclosed seas (Red, Mediterranean and Baltic seas) as well as Hudson Bay
are removed from the sub-basins.
Note also that the Arctic Ocean has been split
into Atlantic and Pacific basins along the North fold line. }
\end{center} \end{figure}
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In addition, a series of diagnostics has been added in \mdl{diaar5}.
They correspond to outputs that are required for AR5 simulations
(see Section \ref{MISC_steric} below for one of them).
Activating those outputs requires defining the \key{diaar5} CPP key.

% ================================================================
% Steric effect in sea surface height
% ================================================================
\section{Steric effect in sea surface height}
\label{MISC_steric}

Changes in steric sea level are caused when changes in the density of the water
column imply an expansion or contraction of the column. It is essentially produced
through surface heating/cooling and, to a lesser extent, through non-linear effects of
the equation of state (cabbeling, thermobaricity, ...).
Non-Boussinesq models contain all ocean effects acting on the sea level;
in particular, they include the steric effect. In contrast,
Boussinesq models, such as \NEMO, conserve volume, rather than mass,
and so do not properly represent expansion or contraction. The steric effect is
therefore not explicitly represented.
This approximation does not represent a serious error with respect to the flow field
calculated by the model \citep{Greatbatch_JGR94}, but extra attention is required
when investigating sea level, as steric changes are an important
contribution to local changes in sea level on seasonal and climatic time scales.
This is especially true for investigations into sea level rise due to global warming.
Fortunately, the steric contribution to the sea level consists of a spatially uniform
component that can be diagnosed by considering the mass budget of the world
ocean \citep{Greatbatch_JGR94}.
In order to better understand how global mean sea level evolves, and thus how
the steric sea level can be diagnosed, we compare in the following the
non-Boussinesq and Boussinesq cases.

Let us denote by
$\mathcal{M}$ the total mass of liquid seawater ($\mathcal{M}=\int_D \rho \,dv$), by
$\mathcal{V}$ the total volume of seawater ($\mathcal{V}=\int_D dv$), by
$\mathcal{A}$ the total surface of the ocean ($\mathcal{A}=\int_S ds$), by
$\bar{\rho}$ the global mean seawater (\textit{in situ}) density ($\bar{\rho}= 1/\mathcal{V} \int_D \rho \,dv$), and by
$\bar{\eta}$ the global mean sea level ($\bar{\eta}=1/\mathcal{A}\int_S \eta \,ds$).

A non-Boussinesq fluid conserves mass. It satisfies the following relations:
\begin{equation} \label{Eq_MV_nBq}
\begin{split}
\mathcal{M} &= \mathcal{V} \;\bar{\rho} \\
\mathcal{V} &= \mathcal{A} \;\bar{\eta}
\end{split}
\end{equation}
The temporal change in total mass is obtained from the density conservation equation:
\begin{equation} \label{Eq_Co_nBq}
\frac{1}{e_3} \partial_t ( e_3\,\rho) + \nabla( \rho \, \textbf{U} ) = \left. \frac{\textit{emp}}{e_3}\right|_\textit{surface}
\end{equation}
where $\rho$ is the \textit{in situ} density, and \textit{emp} the surface mass
exchanges with the other media of the Earth system (atmosphere, sea-ice, land).
Its global average leads to the total mass change
\begin{equation} \label{Eq_Mass_nBq}
\partial_t \mathcal{M} = \mathcal{A} \;\overline{\textit{emp}}
\end{equation}
where $\overline{\textit{emp}}=1/\mathcal{A}\int_S \textit{emp}\,ds$ is the mean mass flux
per unit area through the ocean surface.
Bringing \eqref{Eq_Mass_nBq} and the time derivative of \eqref{Eq_MV_nBq}
together leads to the evolution equation of the mean sea level
\begin{equation} \label{Eq_ssh_nBq}
\partial_t \bar{\eta} = \frac{\overline{\textit{emp}}}{ \bar{\rho}}
- \frac{\mathcal{V}}{\mathcal{A}} \;\frac{\partial_t \bar{\rho} }{\bar{\rho}}
\end{equation}
The first term in equation \eqref{Eq_ssh_nBq} alters sea level by adding or
subtracting mass from the ocean.
The second term arises from temporal changes in the global mean
density, $i.e.$ from steric effects.

In a Boussinesq fluid, $\rho$ is replaced by $\rho_o$ in all the equations except where $\rho$
appears multiplied by the gravity ($i.e.$ in the hydrostatic balance of the primitive equations).
In particular, the mass conservation equation, \eqref{Eq_Co_nBq}, degenerates into
the incompressibility equation:
\begin{equation} \label{Eq_Co_Bq}
\frac{1}{e_3} \partial_t ( e_3 ) + \nabla( \textbf{U} ) = \left. \frac{\textit{emp}}{\rho_o \,e_3}\right|_ \textit{surface}
\end{equation}
and the global average of this equation now gives the temporal change of the total volume,
\begin{equation} \label{Eq_V_Bq}
\partial_t \mathcal{V} = \mathcal{A} \;\frac{\overline{\textit{emp}}}{\rho_o}
\end{equation}
Only the volume is conserved, not the mass; or, more precisely, the mass which is conserved is the
Boussinesq mass, $\mathcal{M}_o = \rho_o \mathcal{V}$. The total volume (or equivalently
the global mean sea level) is altered only by net volume fluxes across the ocean surface,
not by changes in the mean mass of the ocean: the steric effect is missing in a Boussinesq fluid.

Nevertheless, following \citet{Greatbatch_JGR94}, the steric effect on the volume can be
diagnosed by considering the mass budget of the ocean.
The apparent changes in $\mathcal{M}$, the mass of the ocean, which are not induced by surface
mass flux must be compensated by a spatially uniform change in the mean sea level due to
expansion/contraction of the ocean \citep{Greatbatch_JGR94}. In other words, the Boussinesq
mass, $\mathcal{M}_o$, can be related to $\mathcal{M}$, the total mass of the ocean seen
by the Boussinesq model, via the steric contribution to the sea level, $\eta_s$, a spatially
uniform variable, as follows:
\begin{equation} \label{Eq_M_Bq}
\mathcal{M}_o = \mathcal{M} + \rho_o \,\eta_s \,\mathcal{A}
\end{equation}
Any change in $\mathcal{M}$ which cannot be explained by the net mass flux through
the ocean surface is converted into a mean change in sea level. Introducing the total density
anomaly, $\mathcal{D}= \int_D d_a \,dv$, where $d_a= (\rho -\rho_o ) / \rho_o$
is the density anomaly used in \NEMO (cf. \S\ref{TRA_eos}), in \eqref{Eq_M_Bq}
leads to a very simple form for the steric height:
\begin{equation} \label{Eq_steric_Bq}
\eta_s = - \frac{1}{\mathcal{A}} \mathcal{D}
\end{equation}

The above formulation of the steric height of a Boussinesq ocean calls for four remarks.
First, one may be tempted to define $\rho_o$ as the initial value of $\mathcal{M}/\mathcal{V}$,
$i.e.$ to set $\mathcal{D}_{t=0}=0$, so that the initial steric height is zero. We do not
recommend that. Indeed, in this case $\rho_o$ depends on the initial state of the ocean.
Since $\rho_o$ has a direct effect on the dynamics of the ocean (it appears in the pressure
gradient term of the momentum equation), it is definitely not a good idea when
inter-comparing experiments.
We rather recommend fixing $\rho_o$ once and for all to $1035\;kg\,m^{-3}$.
This value is a
sensible choice for the reference density used in a Boussinesq ocean climate model since,
with the exception of only a small percentage of the ocean, density in the World Ocean
varies by no more than 2$\%$ from this value (\cite{Gill1982}, page 47).

Second, we have assumed here that the total ocean surface, $\mathcal{A}$, does not
change when the sea level is changing, as is the case in all global ocean GCMs
(wetting and drying of grid points is not allowed).

Third, the discretisation of \eqref{Eq_steric_Bq} depends on the type of free surface
which is considered. In the non-linear free surface case, $i.e.$ \key{vvl} defined, it is
given by
\begin{equation} \label{Eq_discrete_steric_Bq}
\eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t} e_{2t} e_{3t} }
                { \sum_{i,\,j,\,k}         e_{1t} e_{2t} e_{3t} }
\end{equation}
whereas in the linear free surface case, the volume above the \textit{z=0} surface must be explicitly taken
into account to better approximate the total ocean mass and thus the steric sea level:
\begin{equation} \label{Eq_discrete_steric_linssh_Bq}
\eta_s = - \frac{ \sum_{i,\,j,\,k} d_a\; e_{1t}e_{2t}e_{3t} + \sum_{i,\,j} d_a\; e_{1t}e_{2t} \eta }
                {\sum_{i,\,j,\,k}         e_{1t}e_{2t}e_{3t} + \sum_{i,\,j}         e_{1t}e_{2t} \eta }
\end{equation}

The fourth and last remark concerns the effective sea level and the presence of sea-ice.
In the real ocean, sea ice (and the snow above it) depresses the liquid seawater through
its mass loading. This depression is a result of the mass of the sea ice/snow system acting
on the liquid ocean. There is, however, no dynamical effect associated with these depressions
in the liquid ocean sea level, so that there are no associated ocean currents. Hence, the
dynamically relevant sea level is the effective sea level, $i.e.$ the sea level as if sea ice
(and snow) were converted to liquid seawater \citep{Campin_al_OM08}.
However,
in the current version of \NEMO the sea ice is levitating above the ocean, without
mass exchange between ice and ocean. Therefore the model effective sea level
is always given by $\eta + \eta_s$, whether or not sea ice is present.

Among the requested AR5 outputs is the thermosteric sea level, $i.e.$ the steric sea level due to
changes in ocean density arising just from changes in temperature. It is given by:
\begin{equation} \label{Eq_thermosteric_Bq}
\eta_s = - \frac{1}{\mathcal{A}} \int_D d_a(T,S_o,p_o) \,dv
\end{equation}
where $S_o$ and $p_o$ are the initial salinity and pressure, respectively.

Both steric and thermosteric sea level are computed in \mdl{diaar5}, which requires
\key{diaar5} to be defined in order to be called.



\gmcomment{ % start of gmcomment


% ================================================================
% Diagnostics
% ================================================================
\section{Standard model Output (IOM)}
\label{MISC_iom}

% -------------------------------------------------------------------------------------------------------------
% Standard Model Output
% -------------------------------------------------------------------------------------------------------------
%\subsection{Model Output (default or \key{iomput} }
%\label{MISC_iom}



\subsection{Basic knowledge}


\subsubsection{XML basic rules}

XML tags begin with the less-than character ("$<$") and end with the greater-than character (''$>$'').
You use tags to mark the start and end of elements, which are the logical units of information
in an XML document. In addition to marking the beginning of an element, XML start tags also
provide a place to specify attributes. An attribute specifies a single property of an element,
using a name/value pair, for example: $<$a b="x" c="y" d="z"$>$ ... $<$/a$>$.
See \href{http://www.xmlnews.org/docs/xml-basics.html}{here} for more details.

\subsubsection{Structure of the xml file used in NEMO}

The xml file is split into three parts:

\textbf{field definition}: define all variables that can be output (all lines between
\texttt{$<$field\_definition$>$} and \texttt{$<$/field\_definition$>$})

\textbf{file definition}: define the netcdf files to be created and the variables they will contain
(all lines between \texttt{$<$file\_definition$>$} and \texttt{$<$/file\_definition$>$})

\textbf{axis and grid definitions}: define the horizontal and vertical grids (all lines between
\texttt{$<$axis\_definition$>$} and \texttt{$<$/axis\_definition$>$} and all lines between
\texttt{$<$grid\_definition$>$} and \texttt{$<$/grid\_definition$>$})

\subsubsection{Inheritance and group}

XML extensively uses the concept of inheritance. \\
\\
example 1: \\
\vspace{-30pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field_definition operation="ave(X)" >
   <field id="sst"                    />   <!-- averaged      sst -->
   <field id="sss" operation="inst(X)"/>   <!-- instantaneous sss -->
</field_definition>
\end{verbatim}
}}\end{alltt}

The field ''sst'', which is part (or a child) of the field\_definition, will inherit the value ''ave(X)''
of the attribute ''operation'' from its parent ''field\_definition''. Note that a child can overwrite
the attribute definition inherited from its parent. In the example above, the field ''sss'' will
therefore output instantaneous values instead of average values.
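This inheritance rule can be mimicked in a few lines of Python (an illustrative sketch using only the standard library, not part of NEMO or the io\_server), resolving the ''operation'' attribute of example 1 by walking up the tree until the attribute is found:

```python
import xml.etree.ElementTree as ET

xml_src = """
<field_definition operation="ave(X)">
  <field id="sst"/>
  <field id="sss" operation="inst(X)"/>
</field_definition>
"""

root = ET.fromstring(xml_src)
# ElementTree has no parent pointers, so build a child -> parent map
parent = {child: p for p in root.iter() for child in p}

def resolved(elem, attr):
    # a child's own attribute wins; otherwise inherit from its ancestors
    while elem is not None:
        if attr in elem.attrib:
            return elem.attrib[attr]
        elem = parent.get(elem)
    return None

ops = {f.get("id"): resolved(f, "operation") for f in root.iter("field")}
print(ops)   # sst inherits ave(X); sss overrides it with inst(X)
```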
example 2: use (or overwrite) the attribute values of a field when listing the variables included in a file
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field_definition>
   <field id="sst" description="sea surface temperature" />
   <field id="sss" description="sea surface salinity"    />
</field_definition>

<file_definition>
   <file id="file_1" >
      <field ref="sst"                              />   <!-- default def -->
      <field ref="sss" description="my description" />   <!-- overwrite   -->
   </file>
</file_definition>
\end{verbatim}
}}\end{alltt}

With the help of inheritance, the concept of group allows a set of attributes to be defined
for several fields or files at once.

example 3, group of fields: define a group ''T\_grid\_variables'' identified with the name
''grid\_T''. By default variables of this group have no vertical axis but, following inheritance
rules, ''axis\_ref'' can be redefined for the field ''toce'', which is a 3D variable.
\vspace{-30pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field_definition>
   <group id="grid_T" axis_ref="none" grid_ref="T_grid_variables">
      <field id="sst"/>
      <field id="sss"/>
      <field id="toce" axis_ref="deptht"/>   <!-- overwrite axis def -->
   </group>
</field_definition>
\end{verbatim}
}}\end{alltt}

example 4, group of files: define a group of files with the attribute output\_freq equal to 432000 (5 days)
\vspace{-30pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
<file_definition>
   <group id="5d" output_freq="432000">   <!-- 5d files    -->
      <file id="5d_grid_T" name="auto">   <!-- T grid file -->
         ...
      </file>
      <file id="5d_grid_U" name="auto">   <!-- U grid file -->
         ...
      </file>
   </group>
</file_definition>
\end{verbatim}
}}\end{alltt}

\subsubsection{Control of the xml attributes from NEMO}

The values of some attributes are automatically defined by NEMO (and any definition
given in the xml file is overwritten). By convention, these attributes are set to ''auto''
(for strings) or ''0000'' (for integers) in the xml file (but this is not necessary).

Here is the list of these attributes: \\

%table to be created here....
\begin{center}
\begin{tabular}{|p{0.35\textwidth}|p{0.22\textwidth}|p{0.33\textwidth}|}
\hline
tag ids affected & name of the attribute & attribute value \\ \hline
field\_definition & freq\_op & \np{rn\_rdt} (namelist) \\ \hline
SBC & freq\_op & \np{rn\_rdt} $\times$ \np{nn\_fsbc} (namelist) \\ \hline
1h, 2h, 3h, 4h, 6h, 12h, 1d, 3d, 5d, 1m, 2m, 3m, 4m, 6m, 1y, 2y, 5y, 10y combined with
\_grid\_T, \_grid\_U, \_grid\_V, \_grid\_W, \_icemod, \_ptrc\_T, \_diad\_T, \_scalar
& name & filename defined by a call to \rou{dia\_nam} following the NEMO nomenclature \\ \hline
EqT, EqU, EqW & jbegin, ni, name\_suffix & according to the grid \\ \hline
TAO, RAMA and PIRATA moorings & ibegin, jbegin, name\_suffix & according to the grid \\ \hline
\end{tabular}
\end{center}


\subsection{Detailed functionalities}

\subsubsection{Tag list}


Table might be easier to read:   % table to create

Tag
Description
Accepted attribute
Accepted attribute value(s)
Parent tag
context
define the model using the xml file
id
"nemo" or "n\_nemo" for the nth AGRIF zoom.
simulation

What do you think, Seb?


\begin{description}

\item[context]: define the model using the xml file. Id is the only accepted attribute.
Its value must be ''nemo'' or ''n\_nemo'' for the nth AGRIF zoom. Child of the simulation tag.

\item[field]: define the field to be output.
Accepted attributes are axis\_ref, description, enable,
freq\_op, grid\_ref, id (if child of field\_definition), level, operation, name, ref (if child of file),
unit, zoom\_ref. Child of field\_definition, file or group of fields tag.

\item[field\_definition]: definition of the part of the xml file corresponding to the field definition.
Accepts the same attributes as the field tag. Child of the context tag.

\item[group]: define a group of files or fields. Accepts the same attributes as file or field.

\item[file]: define the output file's characteristics. Accepted attributes are description, enable,
output\_freq, output\_level, id, name, name\_suffix. Child of file\_definition or group of files tag.

\item[file\_definition]: definition of the part of the xml file corresponding to the file definition.
Accepts the same attributes as the file tag. Child of the context tag.

\item[axis]: definition of the vertical axis. Accepted attributes are description, id, positive, size, unit.
Child of the axis\_definition tag.

\item[axis\_definition]: definition of the part of the xml file corresponding to the vertical axis definition.
Accepts the same attributes as the axis tag. Child of the context tag.

\item[grid]: definition of the horizontal grid. Accepted attributes are description and id.
Child of the grid\_definition tag.

\item[grid\_definition]: definition of the part of the xml file corresponding to the horizontal grid definition.
Accepts the same attributes as the grid tag. Child of the context tag.

\item[zoom]: definition of a subdomain of a horizontal grid. Accepted attributes are description, id,
i/jbegin, ni/j. Child of the grid tag.

\end{description}


\subsubsection{Attributes list}

Applied to a tag or a group of tags.

% table to be added ?
Another table, perhaps?

%%%%

Attribute
Applied to?
Definition
Comment
axis\_ref
field
String defining the vertical axis of the variable. It refers to the id of the vertical axis defined in the axis tag.
Use ''none'' if the variable has no vertical axis

%%%%%%

\begin{description}

\item[axis\_ref]: field attribute. String defining the vertical axis of the variable.
It refers to the id of the vertical axis defined in the axis tag.
Use ''none'' if the variable has no vertical axis.

\item[description]: this attribute can be applied to all tags but it is used only with the field tag.
In this case, the value of description will be used to define, in the output netcdf file,
the attributes long\_name and standard\_name of the variable.

\item[enabled]: field and file attribute. Logical to switch on/off the output of a field or a file.

\item[freq\_op]: field attribute (automatically defined, see part 1.4 (''control of the xml attributes'')).
An integer defining the frequency in seconds at which NEMO is calling iom\_put for this variable.
It corresponds to the model time step (rn\_rdt in the namelist) except for the variables computed
at the frequency of the surface boundary condition (rn\_rdt $\times$ nn\_fsbc in the namelist).

\item[grid\_ref]: field attribute. String defining the horizontal grid of the variable.
It refers to the id of the grid tag.

\item[ibegin]: zoom attribute. Integer defining the zoom starting point along the x direction.
Automatically defined for TAO/RAMA/PIRATA moorings (see part 1.4).

\item[id]: exists for all tags. This is a string defining the name of a specific tag that will be used
later to refer to this tag. Tags of the same category must have different ids.

\item[jbegin]: zoom attribute. Integer defining the zoom starting point along the y direction.
Automatically defined for TAO/RAMA/PIRATA moorings and equatorial sections (see part 1.4).

\item[level]: field attribute. Integer from 0 to 10 defining the output priority of a field.
See the output\_level attribute definition.

\item[operation]: field attribute. String defining the type of temporal operation to perform on a variable.
Possible choices are ''ave(X)'' for temporal mean, ''inst(X)'' for instantaneous, ''t\_min(X)'' for temporal minimum
and ''t\_max(X)'' for temporal maximum.

\item[output\_freq]: file attribute. Integer defining the output frequency in seconds.
For example 86400 for daily means.

\item[output\_level]: file attribute. Integer from 0 to 10 defining the output priority of variables in a file:
all variables listed in the file with a level smaller than or equal to output\_level will be output.
Other variables won't be output even if they are listed in the file.

\item[positive]: axis attribute (always .FALSE.). Logical defining the vertical axis convention used
in \NEMO (positive downward). Defines the attribute positive of the variable in the netcdf output file.

\item[prec]: field attribute. Integer defining the output precision.
Not implemented: we always output real4 arrays.

\item[name]: field or file attribute. String defining the name of a variable or a file.
If the name of a file is undefined, its id is used as its name. Two files must have different names.
Files with specific ids will have their name automatically defined (see part 1.4).
Note that this name will be automatically completed by the cpu number (if needed) and ''.nc''.

\item[name\_suffix]: file attribute. String defining a suffix to be inserted after the name
and before the cpu number and the ''.nc'' termination. Files with specific ids have an
automatic definition of their suffix (see part 1.4).

\item[ni]: zoom attribute.
Integer defining the zoom extent along the x direction.
Automatically defined for equatorial sections (see part 1.4).

\item[nj]: zoom attribute. Integer defining the zoom extent along the y direction.

\item[ref]: field attribute. String referring to the id of the field we want to add in a file.

\item[size]: axis attribute. use unknown...

\item[unit]: field attribute. String defining the unit of a variable and the associated
attribute in the netcdf output file.

\item[zoom\_ref]: field attribute. String defining the subdomain of data on which
the file should be written (to output data only in a limited area).
It refers to the id of a zoom defined in the zoom tag.
\end{description}


\subsection{IO\_SERVER}

\subsubsection{Attached or detached mode?}

Iom\_put is based on the io\_server developed by Yann Meurdesoif from IPSL
(see \href{http://forge.ipsl.jussieu.fr/ioserver/browser}{here} for the source code, or
see its copy in the NEMOGCM/EXTERNAL directory).
This server can be used in ''attached mode'' (as a library) or in ''detached mode''
(as an external executable on n cpus). In attached mode, each cpu of NEMO will output
its own subdomain. In detached mode, the io\_server will gather data from NEMO
and output them split over n files, with n the number of cpus dedicated to the io\_server.

\subsubsection{Control of the io\_server: the namelist file xmlio\_server.def}

%
%Again, a small table might be more readable?
%Name
%Type
%Description
%Comment
%Using_server
%Logical
%Switch to use the server in attached or detached mode
%(.TRUE. corresponding to detached mode).

The control of the use of the io\_server is done through the namelist file of the io\_server,
called xmlio\_server.def.
\textbf{using\_server}: logical, switch to use the server in attached or detached mode
(.TRUE. corresponding to detached mode).

\textbf{using\_oasis}: logical, set to .TRUE. if NEMO is used in coupled mode.

\textbf{client\_id} = ''oceanx'': character, used only in coupled mode.
Specify the id used in OASIS to refer to NEMO. The same id must be used to refer to NEMO
in the \$NBMODEL part of the OASIS namcouple and in the call to prism\_init\_comp\_proto in cpl\_oasis3.F90.

\textbf{server\_id} = ''ionemo'': character, used only in coupled mode.
Specify the id used in OASIS to refer to the IO\_SERVER when used in detached mode.
Use the same id to refer to the io\_server in the \$NBMODEL part of the OASIS namcouple.

\textbf{global\_mpi\_buffer\_size}: integer; define the size in Mb of the MPI buffer used by the io\_server.

\subsubsection{Number of cpus used by the io\_server in detached mode}

The number of cpus used by the io\_server is specified only when launching the model.
Here is an example with 2 cpus for the io\_server and 6 cpus for opa, using mpirun:

\texttt{ -p 2 -e ./ioserver}

\texttt{ -p 6 -e ./opa }


\subsection{Practical issues}

\subsubsection{Add your own outputs}

It is very easy to add your own outputs with iom\_put. Four steps must be followed.
\begin{description}
\item[1-] in the NEMO code, add a \\
\texttt{ CALL iom\_put( 'identifier', array ) } \\
where you want to output a 2D or 3D array.

\item[2-] don't forget to add \\
\texttt{ USE iom            ! I/O manager library }  \\
in the list of used modules in the upper part of your module.

\item[3-] in the file\_definition part of the xml file, add the definition of your variable using the same identifier you used in the f90 code.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
<field_definition>
   ...
   <field id="identifier" description="blabla" />
   ...
</field_definition>
\end{verbatim}
}}\end{alltt}
The attributes axis\_ref and grid\_ref must be consistent with the size of the array passed to iom\_put.
If your array is computed within the surface module at each \np{nn\_fsbc} time\_step,
add the field definition within the group defined with the id ''SBC'': $<$group id=''SBC''...$>$

\item[4-] add your field in one of the output files \\
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
<file id="file_1" ...>
   ...
   <field ref="identifier" />
   ...
</file>
\end{verbatim}
}}\end{alltt}

\end{description}

\subsubsection{Several time axes in the output file}

If your output file contains variables with different operations (see the operation definition),
IOIPSL will create one specific time axis for each operation. Note that inst(X) will have
a time axis corresponding to the end of each output period, whereas all other operators
will have a time axis centred in the middle of the output periods.

\subsubsection{Error/bug messages from IOIPSL}

If you get the following error in the standard output file:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
FATAL ERROR FROM ROUTINE flio_dom_set
 --> too many domains simultaneously defined
 --> please unset useless domains
 --> by calling flio_dom_unset
\end{verbatim}
}}\end{alltt}

you must increase the value of dom\_max\_nb in fliocom.f90 (multiply it by 10, for example).

If you mix, in the same file, variables with different freq\_op (see the definition above),
for example variables from the surface module with other variables,
IOIPSL will print warning messages in the standard output file saying there may be a bug.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
WARNING FROM ROUTINE histvar_seq
 --> There were 10 errors in the learned sequence of variables
 --> for file 4
 --> This looks like a bug, please report it.
\end{verbatim}
}}\end{alltt}

Don't worry, there is no bug: everything is working properly!

} %end \gmcomment
% ================================================================
trunk/DOC/TexFiles/Chapters/Chap_SBC.tex
r2414 r2541
\label{SBC_general}

The surface ocean stress is the stress exerted by the wind and the sea-ice
on the ocean. The two components of stress are assumed to be interpolated
…
% ================================================================
% Input Data
% ================================================================
\section{Input Data generic interface}
\label{SBC_input}

A generic interface has been introduced to manage the way input data (2D or 3D fields,
like surface forcing or ocean T and S) are specified in \NEMO. This task is achieved by fldread.F90.
The module was designed with four main objectives in mind:
\begin{enumerate}
\item optionally provide a time interpolation of the input data at the model time-step,
whatever their input frequency is, and according to the different calendars available in the model.
\item optionally provide an on-the-fly space interpolation from the native input data grid to the model grid.
\item make the run duration independent of the period covered by the input files.
\item provide a simple user interface and a rather simple developer interface by limiting the
amount of prerequisite information.
\end{enumerate}

As a result, the user only has to fill in, for each variable, a structure in the namelist file
to define the input data file and variable names, the frequency of the data (in hours or months),
whether it is climatological data or not, the period covered by the input file (one year, month, week or day),
and two additional parameters for on-the-fly interpolation. When adding a new input variable,
the developer has to add the associated structure in the namelist, read this information
by mirroring the namelist read in \rou{sbc\_blk\_init} for example, and simply call \rou{fld\_read}
to obtain the desired input field at the model time-step and grid points.
The only constraints are that the input file is a NetCDF file, the file name follows a nomenclature
(see \S\ref{SBC_fldread}), the period it covers is one year, month, week or day, and, if on-the-fly
interpolation is used, a file of weights must be supplied (see \S\ref{SBC_iof}).

Note that when the input data are archived on a disc which is accessible directly
from the workspace where the code is executed, the user can set \np{cn\_dir}
to the pathway leading to the data. By default, the data are assumed to have been
copied, so that cn\_dir='./'.

% -------------------------------------------------------------------------------------------------------------
% Input Data specification (\mdl{fldread})
% -------------------------------------------------------------------------------------------------------------
\subsection{Input Data specification (\mdl{fldread})}
\label{SBC_fldread}

The structure associated with an input variable contains the following information:
\begin{alltt} {{\tiny
\begin{verbatim}
!  file name  ! frequency (hours) ! variable ! time interp. !  clim   ! 'yearly'/ ! weights  ! rotation !
!             !  (if <0  months)  !   name   !   (logical)  !  (T/F)  ! 'monthly' ! filename ! pairing  !
\end{verbatim}
}}\end{alltt}
where
\begin{description}
\item[File name]: the stem name of the NetCDF file to be opened.
This stem will be completed automatically by the model, with the addition of a '.nc' at its end
and by date information and possibly a prefix (when using AGRIF).
Tab.\ref{Tab_fldread} provides the resulting file name in all possible cases according to whether
it is a climatological file or not, and to the open/close frequency (see below for definition).
%--------------------------------------------------TABLE--------------------------------------------------
\begin{table}[htbp]
\begin{center}
\begin{tabular}{|l|c|c|c|}
\hline
                    & daily or weekLLL       & monthly           & yearly          \\  \hline
clim = false        & fn\_yYYYYmMMdDD        & fn\_yYYYYmMM      & fn\_yYYYY       \\  \hline
clim = true         & not possible           & fn\_m??.nc        & fn              \\  \hline
\end{tabular}
\end{center}
\caption{ \label{Tab_fldread}   naming nomenclature for climatological or interannual input file,
as a function of the open/close frequency. The stem name is assumed to be 'fn'.
For weekly files, the 'LLL' corresponds to the first three letters of the first day of the week
($i.e.$ 'sun','sat','fri','thu','wed','tue','mon'). The 'YYYY', 'MM' and 'DD' should be replaced by the
actual year/month/day, always coded with 4 or 2 digits. Note that (1) in mpp, if the file is split
over each subdomain, the suffix '.nc' is replaced by '\_PPPP.nc', where 'PPPP' is the
process number coded with 4 digits; (2) when using AGRIF, the prefix 'N\_' is added to files,
where 'N' is the child grid number.}
\end{table}
%--------------------------------------------------------------------------------------------------------------


\item[Record frequency]: the frequency of the records contained in the input file.
Its unit is in hours if it is positive (for example 24 for daily forcing) or in months if negative
(for example -1 for monthly forcing or -12 for annual forcing).
Note that this frequency must really be an integer and not a real.
On some computers, setting it to '24.' can be interpreted as 240!

\item[Variable name]: the name of the variable to be read in the input NetCDF file.

\item[Time interpolation]: a logical to activate, or not, the time interpolation. If set to 'false',
the forcing will have a step-like shape, remaining constant during each forcing period.
For example, when using a daily forcing without time interpolation, the forcing remains
constant from 00h00'00" to 23h59'59". If set to 'true', the forcing will have a broken-line shape.
Records are assumed to be dated at the middle of the forcing period.
For example, when using a daily forcing with time interpolation, linear interpolation will
be performed between mid-day of two consecutive days.

\item[Climatological forcing]: a logical to specify whether an input file contains climatological forcing,
which can be cycled in time, or an interannual forcing, which requires additional files
if the period covered by the simulation exceeds the one covered by the file. See above the file
naming strategy, which impacts the expected name of the file to be opened.

\item[Open/close frequency]: the frequency at which forcing files must be opened/closed.
Four cases are coded: 'daily', 'weekLLL' (with 'LLL' the first 3 letters of the first day of the week),
'monthly' and 'yearly', which means the forcing files will contain data for one day, one week,
one month or one year. Files are assumed to contain data from the beginning of the open/close period.
For example, the first record of a yearly file containing daily data is Jan 1st, even if the experiment
is not starting at the beginning of the year.

\item[Others]: 'weights filename' and 'pairing rotation' are associated with on-the-fly interpolation,
which is described in \S\ref{SBC_iof}.

\end{description}

Additional remarks:\\
(1) The time interpolation is a simple linear interpolation between two consecutive records of
the input data. The only tricky point is therefore to specify the date at which we need to do
the interpolation and the date of the records read in the input files.
Following \citet{Leclair_Madec_OM09}, the date of a time step is set at the middle of the
time step.
For example, for an experiment starting at 0h00'00" with a one-hour time-step,
a time interpolation will be performed at the following times: 0h30'00", 1h30'00", 2h30'00", etc.
However, for forcing data related to the surface module, values are not needed at every
time-step but at every \np{nn\_fsbc} time-step. For example with \np{nn\_fsbc}~=~3,
the surface module will be called at time-steps 1, 4, 7, etc. The date used for the time interpolation
is thus redefined to be at the middle of the \np{nn\_fsbc} time-step period. In the previous example,
this leads to: 1h30'00", 4h30'00", 7h30'00", etc. \\
(2) For code readability and maintenance issues, we do not take into account the NetCDF input file
calendar. The calendar associated with the forcing field is built according to the information
provided by the user in the record frequency, the open/close frequency and the type of temporal interpolation.
For example, the first record of a yearly file containing daily data that will be interpolated in time
is assumed to start on Jan 1st at 12h00'00" and end on Dec 31st at 12h00'00". \\
(3) If a time interpolation is requested, the code will pick up the needed data in the previous (next) file
when interpolating data with the first (last) record of the open/close period.
For example, if the input file specifications are ''yearly, containing daily data to be interpolated in time'',
the values given by the code between 00h00'00" and 11h59'59" on Jan 1st will be interpolated values
between Dec 31st 12h00'00" and Jan 1st 12h00'00". If the forcing is climatological, Dec and Jan will
be taken from the same year. However, if the forcing is not climatological, at the end of the
open/close period the code will automatically close the current file and open the next one.
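The record dating and interpolation rule of remark (1) can be sketched in a few lines of Python (an illustrative toy, not the fldread code; the daily values are invented), with each record dated at the middle of its forcing period:

```python
# Linear time interpolation between records dated at the middle of
# the forcing period (illustrative sketch of remark (1), not NEMO code).
def interp_record(t_hours, records, period=24.0):
    """records[n] holds the value of the n-th forcing period; each record
    is dated at the middle of its period, i.e. at (n + 0.5) * period."""
    # index of the last record dated at or before t_hours
    n = int(t_hours / period - 0.5)
    t_before = (n + 0.5) * period
    w = (t_hours - t_before) / period          # linear weight in [0, 1]
    return (1.0 - w) * records[n] + w * records[n + 1]

daily = [10.0, 14.0, 12.0]                # three daily records (days 0, 1, 2)
print(interp_record(12.0, daily))         # exactly at the day-0 record date -> 10.0
print(interp_record(24.0, daily))         # midnight between days 0 and 1    -> 12.0
print(interp_record(30.0, daily))         # 6h into day 1                    -> 13.0
```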
310 Note that, if the experiment starts (ends) at the beginning (end) of an open/close period, 311 the previous (next) file may not exist. In this case, the time interpolation 312 will be performed between two identical values. For example, when starting an experiment on 313 Jan 1st of year Y with yearly files and daily data to be interpolated, the file 314 related to year Y-1 need not exist. The value of Jan 1st will be used as the missing one for 315 Dec 31st of year Y-1. If the file of year Y-1 exists, the code will read its last record. 316 Therefore, this file can contain only one record corresponding to Dec 31st, a useful feature for 317 users who find it too cumbersome to manipulate the complete file for year Y-1. 318 319 320 % ------------------------------------------------------------------------------------------------------------- 321 % Interpolation on the Fly 322 % ------------------------------------------------------------------------------------------------------------- 323 \subsection [Interpolation on-the-Fly] {Interpolation on-the-Fly} 324 \label{SBC_iof} 325 326 Interpolation on the Fly allows the user to supply input files required 327 for the surface forcing on grids other than the model grid. 328 To do this he or she must supply, in addition to the source data file, 329 a file of weights to be used to interpolate from the data grid to the model grid. 330 The original development of this code used the SCRIP package (freely available 331 \href{http://climate.lanl.gov/Software/SCRIP}{here} under a copyright agreement). 332 In principle, any package can be used to generate the weights, but the 333 variables in the input weights file must have the same names and meanings as 334 assumed by the model. 335 Two methods are currently available: bilinear and bicubic interpolation.
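Whatever package generates the weights file, applying it at run time reduces to a gather-and-sum over precomputed source indices. The following Python sketch is illustrative only (variable names are hypothetical and the model's actual implementation is Fortran); it mimics this step, including the conversion of a one dimensional source index into a two dimensional grid point:

```python
# Illustrative sketch of applying precomputed interpolation weights
# (hypothetical names; not the model's Fortran implementation).

def idx(src, nx):
    """Convert a 1-based one dimensional index into 1-based (i, j) on an
    input grid with nx columns: on a 5x10 grid, point 8 -> (3, 2)."""
    return ((src - 1) % nx + 1, (src - 1) // nx + 1)

def apply_weights(field, srcs, wgts, nx):
    """Gather-and-sum for one model point: sum_k wgt(k) * f(idx(src(k)))."""
    return sum(w * field[idx(s, nx)] for s, w in zip(srcs, wgts))

# toy 5x10 input grid with f(i, j) = i + j
field = {(i, j): i + j for i in range(1, 6) for j in range(1, 11)}
# four corners of one input grid box, equal weights (a bilinear-like case)
value = apply_weights(field, srcs=[7, 8, 12, 13], wgts=[0.25] * 4, nx=5)  # -> 5.0
```

The bicubic case follows the same pattern with 16 src/wgt pairs, the additional sums weighting the gradients of the field rather than the field itself.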
336 337 \subsubsection{Bilinear Interpolation} 338 \label{SBC_iof_bilinear} 339 340 The input weights file in this case has two sets of variables: src01, src02, 341 src03, src04 and wgt01, wgt02, wgt03, wgt04. 342 The "src" variables correspond to the point in the input grid to which the weight 343 "wgt" is to be applied. Each src value is an integer corresponding to the index of a 344 point in the input grid when written as a one dimensional array. For example, for an input grid 345 of size 5x10, point (3,2) is referenced as point 8, since (2-1)*5+3=8. 346 There are four of each variable because bilinear interpolation uses the four points defining 347 the grid box containing the point to be interpolated. 348 All of these arrays are on the model grid, so that values src01(i,j) and 349 wgt01(i,j) are used to generate a value for point (i,j) in the model. 350 351 Symbolically, the algorithm used is: 352 353 \begin{equation} 354 f_{m}(i,j) = f_{m}(i,j) + \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))} 355 \end{equation} 356 where function idx() transforms a one dimensional index src(k) into a two dimensional index, 357 and wgt(1) corresponds to variable "wgt01" for example. 358 359 \subsubsection{Bicubic Interpolation} 360 \label{SBC_iof_bicubic} 361 362 Again there are two sets of variables: "src" and "wgt". 363 But in this case there are 16 of each. 
364 The symbolic algorithm used to calculate values on the model grid is now: 365 366 \begin{equation*} \begin{split} 367 f_{m}(i,j) = f_{m}(i,j) +& \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))} 368 + \sum_{k=5}^{8} {wgt(k)\left.\frac{\partial f}{\partial i}\right| _{idx(src(k))} } \\ 369 +& \sum_{k=9}^{12} {wgt(k)\left.\frac{\partial f}{\partial j}\right| _{idx(src(k))} } 370 + \sum_{k=13}^{16} {wgt(k)\left.\frac{\partial ^2 f}{\partial i \partial j}\right| _{idx(src(k))} } 371 \end{split} 372 \end{equation*} 373 The gradients here are taken with respect to the horizontal indices and not distances since the spatial dependency has been absorbed into the weights. 374 375 \subsubsection{Implementation} 376 \label{SBC_iof_imp} 377 378 To activate this option, a non-empty string should be supplied in the weights filename column 379 of the relevant namelist; if this is left as an empty string no action is taken. 380 In the model, weights files are read in and stored in a structured type (WGT) in the fldread 381 module, as and when they are first required. 382 This initialisation procedure determines whether the input data grid should be treated 383 as cyclical or not by inspecting a global attribute stored in the weights input file. 384 This attribute must be called "ew\_wrap" and be of integer type. 385 If it is negative, the input non-model grid is assumed not to be cyclic. 386 If zero or greater, then the value represents the number of columns that overlap. 387 $E.g.$ if the input grid has columns at longitudes 0, 1, 2, .... , 359, then ew\_wrap should be set to 0; 388 if longitudes are 0.5, 2.5, .... , 358.5, 360.5, 362.5, ew\_wrap should be 2. 389 If the model does not find attribute ew\_wrap, then a value of -999 is assumed. 390 In this case the \rou{fld\_read} routine defaults ew\_wrap to value 0 and therefore the grid 391 is assumed to be cyclic with no overlapping columns. 392 (In fact this only matters when bicubic interpolation is required.) 
393 Note that no testing is done to check the validity in the model, since there is no way 394 of knowing the name used for the longitude variable, 395 so it is up to the user to make sure his or her data is correctly represented. 396 397 Next the routine reads in the weights. 398 Bicubic interpolation is assumed if it finds a variable with name "src05", otherwise 399 bilinear interpolation is used. The WGT structure includes dynamic arrays both for 400 the storage of the weights (on the model grid), and when required, for reading in 401 the variable to be interpolated (on the input data grid). 402 The size of the input data array is determined by examining the values in the "src" 403 arrays to find the minimum and maximum i and j values required. 404 Since bicubic interpolation requires the calculation of gradients at each point on the grid, 405 the corresponding arrays are dimensioned with a halo of width one grid point all the way around. 406 When the array of points from the data file is adjacent to an edge of the data grid, 407 the halo is either a copy of the row/column next to it (non-cyclical case), or is a copy 408 of one from the first few columns on the opposite side of the grid (cyclical case). 409 410 \subsubsection{Limitations} 411 \label{SBC_iof_lim} 412 413 \begin{enumerate} 414 \item The case where input data grids are not logically rectangular has not been tested. 415 \item This code is not guaranteed to produce positive definite answers from positive definite inputs 416 when a bicubic interpolation method is used. 417 \item The cyclic condition is only applied on left and right columns, and not to top and bottom rows. 418 \item The gradients across the ends of a cyclical grid assume that the grid spacing between 419 the two columns involved are consistent with the weights used. 420 \item Neither interpolation scheme is conservative. (There is a conservative scheme available 421 in SCRIP, but this has not been implemented.) 
422 \end{enumerate} 423 424 \subsubsection{Utilities} 425 \label{SBC_iof_util} 426 427 % to be completed 428 A set of utilities to create a weights file for a rectilinear input grid is available 429 (see the directory NEMOGCM/TOOLS/WEIGHTS). 430 431 432 % ================================================================ 178 433 % Analytical formulation (sbcana module) 179 434 % ================================================================ … … 215 470 read in the file, the time frequency at which it is given (in hours), and a logical 216 471 setting whether a time interpolation to the model time step is required 217 for this field). (fld\_i namelist structure). 218 219 \textbf{Caution}: when the frequency is set to --12, the data are monthly 220 values. These are assumed to be climatological values, so time interpolation 221 between December the 15$^{th}$ and January the 15$^{th}$ is done using 222 records 12 and 1 223 224 When higher frequency is set and time interpolation is demanded, the model 225 will try to read the last (first) record of previous (next) year in a file 226 having the same name but a suffix {\_}prev{\_}year ({\_}next{\_}year) being 227 added (e.g. "{\_}1989"). These files must only contain a single record. If they don't exist, 228 the model assumes that the last record of the previous year is equal to the first 229 record of the current year, and similarly, that the first record of the 230 next year is equal to the last record of the current year. This will cause 231 the forcing to remain constant over the first and last half fld\_frequ hours. 472 for this field. See \S\ref{SBC_fldread} for a more detailed description of the parameters. 
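When the time interpolation logical is set, records follow the convention described in \S\ref{SBC_fldread}: each record is dated at the middle of its forcing period and two consecutive records are combined linearly. A minimal Python sketch of that rule, under those assumptions (illustrative names, times in hours; not the \mdl{fldread} code itself):

```python
# Linear time interpolation between two forcing records, each dated at
# the middle of its period (illustrative sketch, not the fldread code).

def interp_in_time(t, t1, f1, t2, f2):
    """Value at time t, interpolated between records (t1, f1) and (t2, f2)."""
    w = (t - t1) / (t2 - t1)
    return (1.0 - w) * f1 + w * f2

# daily records are dated at mid-day: 12h and 36h (noon of the next day);
# a value requested at 18h lies one quarter of the way between the records
val = interp_in_time(18.0, 12.0, 10.0, 36.0, 14.0)  # -> 11.0
```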
232 473 233 474 Note that in general, a flux formulation is used in associated with a … … 281 522 \begin{table}[htbp] \label{Tab_CORE} 282 523 \begin{center} 283 \begin{tabular}{|l| l|l|l|}524 \begin{tabular}{|l|c|c|c|} 284 525 \hline 285 526 Variable desciption & Model variable & Units & point \\ \hline … … 297 538 %-------------------------------------------------------------------------------------------------------------- 298 539 299 Note that the air velocity is provided at a tracer ocean point, not at a velocity ocean point ($u$- and $v$-points). It is simpler and faster (less fields to be read), but it is not the recommended method when the ocean grid 300 size is the same or larger than the one of the input atmospheric fields. 540 Note that the air velocity is provided at a tracer ocean point, not at a velocity ocean 541 point ($u$- and $v$-points). It is simpler and faster (less fields to be read), 542 but it is not the recommended method when the ocean grid size is the same 543 or larger than the one of the input atmospheric fields. 301 544 302 545 % ------------------------------------------------------------------------------------------------------------- … … 338 581 As for the flux formulation, information about the input data required by the 339 582 model is provided in the namsbc\_blk\_core or namsbc\_blk\_clio 340 namelist (via the structure fld\_i). The first and last record assumption is also made 341 (see \S\ref{SBC_flx}) 583 namelist (see \S\ref{SBC_fldread}). 342 584 343 585 % ================================================================ … … 399 641 (see \mdl{dynspg} for the ocean). For sea-ice, the sea surface height, $\eta_m$, 400 642 which is provided to the sea ice model is set to $\eta - \eta_{ib}$ (see \mdl{sbcssr} module). 401 $\eta_{ib}$ can be set in the output. This can simplify the altirmetry data and model comparison643 $\eta_{ib}$ can be set in the output. 
This can simplify altimetry data and model comparison 402 644 as inverse barometer sea surface height is usually removed from these data prior to their distribution. 403 645 … 433 675 River runoff generally enters the ocean at a nonzero depth rather than through the surface. 434 676 Many models, however, have traditionally inserted river runoff to the top model cell. 435 This was the case in \NEMO prior to the version 3.3, and was combined with an option to increase vertical mixing near the river mouth. 677 This was the case in \NEMO prior to the version 3.3, and was combined with an option 678 to increase vertical mixing near the river mouth. 436 679 437 680 However, with this method numerical and physical problems arise when the top grid cells are … 517 760 518 761 } 519 520 521 % ================================================================ 522 % Interpolation on the Fly 523 % ================================================================ 524 525 \section [Interpolation on the Fly] {Interpolation on the Fly} 526 \label{SBC_iof} 527 528 Interpolation on the Fly allows the user to supply input files required 529 for the surface forcing on grids other than the model grid. 530 To do this he or she must supply, in addition to the source data file, 531 a file of weights to be used to interpolate from the data grid to the model 532 grid. 533 The original development of this code used the SCRIP package (freely available 534 \href{http://climate.lanl.gov/Software/SCRIP}{here} under a copyright agreement). 535 In principle, any package can be used to generate the weights, but the 536 variables in the input weights file must have the same names and meanings as 537 assumed by the model. 538 Two methods are currently available: bilinear and bicubic interpolation. 539 540 \subsection{Bilinear Interpolation} 541 \label{SBC_iof_bilinear} 542 543 The input weights file in this case has two sets of variables: src01, src02, 544 src03, src04 and wgt01, wgt02, wgt03, wgt04. 545 The "src"
variables correspond to the point in the input grid to which the weight546 "wgt" is to be applied. Each src value is an integer corresponding to the index of a547 point in the input grid when written as a one dimensional array. For example, for an input grid548 of size 5x10, point (3,2) is referenced as point 8, since (2-1)*5+3=8.549 There are four of each variable because bilinear interpolation uses the four points defining550 the grid box containing the point to be interpolated.551 All of these arrays are on the model grid, so that values src01(i,j) and552 wgt01(i,j) are used to generate a value for point (i,j) in the model.553 554 Symbolically, the algorithm used is:555 556 \begin{equation}557 f_{m}(i,j) = f_{m}(i,j) + \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))}558 \end{equation}559 where function idx() transforms a one dimensional index src(k) into a two dimensional index,560 and wgt(1) corresponds to variable "wgt01" for example.561 562 \subsection{Bicubic Interpolation}563 \label{SBC_iof_bicubic}564 565 Again there are two sets of variables: "src" and "wgt".566 But in this case there are 16 of each.567 The symbolic algorithm used to calculate values on the model grid is now:568 569 \begin{equation*} \begin{split}570 f_{m}(i,j) = f_{m}(i,j) +& \sum_{k=1}^{4} {wgt(k)f(idx(src(k)))}571 + \sum_{k=5}^{8} {wgt(k)\left.\frac{\partial f}{\partial i}\right| _{idx(src(k))} } \\572 +& \sum_{k=9}^{12} {wgt(k)\left.\frac{\partial f}{\partial j}\right| _{idx(src(k))} }573 + \sum_{k=13}^{16} {wgt(k)\left.\frac{\partial ^2 f}{\partial i \partial j}\right| _{idx(src(k))} }574 \end{split}575 \end{equation*}576 The gradients here are taken with respect to the horizontal indices and not distances since the spatial dependency has been absorbed into the weights.577 578 \subsection{Implementation}579 \label{SBC_iof_imp}580 581 To activate this option, a non-empty string should be supplied in the weights filename column582 of the relevant namelist; if this is left as an empty string no 
action is taken.583 In the model, weights files are read in and stored in a structured type (WGT) in the fldread584 module, as and when they are first required.585 This initialisation procedure determines whether the input data grid should be treated586 as cyclical or not by inspecting a global attribute stored in the weights input file.587 This attribute must be called "ew\_wrap" and be of integer type.588 If it is negative, the input non-model grid is assumed not to be cyclic.589 If zero or greater, then the value represents the number of columns that overlap.590 $E.g.$ if the input grid has columns at longitudes 0, 1, 2, .... , 359, then ew\_wrap should be set to 0;591 if longitudes are 0.5, 2.5, .... , 358.5, 360.5, 362.5, ew\_wrap should be 2.592 If the model does not find attribute ew\_wrap, then a value of -999 is assumed.593 In this case the \rou{fld\_read} routine defaults ew\_wrap to value 0 and therefore the grid594 is assumed to be cyclic with no overlapping columns.595 (In fact this only matters when bicubic interpolation is required.)596 Note that no testing is done to check the validity in the model, since there is no way597 of knowing the name used for the longitude variable,598 so it is up to the user to make sure his or her data is correctly represented.599 600 Next the routine reads in the weights.601 Bicubic interpolation is assumed if it finds a variable with name "src05", otherwise602 bilinear interpolation is used. 
The WGT structure includes dynamic arrays both for603 the storage of the weights (on the model grid), and when required, for reading in604 the variable to be interpolated (on the input data grid).605 The size of the input data array is determined by examining the values in the "src"606 arrays to find the minimum and maximum i and j values required.607 Since bicubic interpolation requires the calculation of gradients at each point on the grid,608 the corresponding arrays are dimensioned with a halo of width one grid point all the way around.609 When the array of points from the data file is adjacent to an edge of the data grid,610 the halo is either a copy of the row/column next to it (non-cyclical case), or is a copy611 of one from the first few columns on the opposite side of the grid (cyclical case).612 613 \subsection{Limitations}614 \label{SBC_iof_lim}615 616 \begin{description}617 \item618 The case where input data grids are not logically rectangular has not been tested.619 \item620 This code is not guaranteed to produce positive definite answers from positive definite inputs.621 \item622 The cyclic condition is only applied on left and right columns, and not to top and bottom rows.623 \item624 The gradients across the ends of a cyclical grid assume that the grid spacing between the two columns involved are consistent with the weights used.625 \item626 Neither interpolation scheme is conservative.627 (There is a conservative scheme available in SCRIP, but this has not been implemented.)628 \end{description}629 630 \subsection{Utilities}631 \label{SBC_iof_util}632 633 % to be completed634 A set of utilities to create a weights file for a rectilinear input grid is available635 (see the directory NEMOGCM/TOOLS/WEIGHTS).636 762 637 763 % ================================================================ -
trunk/DOC/TexFiles/Chapters/Chap_TRA.tex
r2376 r2541 1053 1053 %--------------------------------------------namtra_dmp------------------------------------------------- 1054 1054 \namdisplay{namtra_dmp} 1055 \namdisplay{namdta_tem} 1056 \namdisplay{namdta_sal} 1055 1057 %-------------------------------------------------------------------------------------------------------------- … 1067 1069 The restoring term is added when \key{tradmp} is defined. 1068 1070 It also requires that both \key{dtatem} and \key{dtasal} are defined 1069 ($i.e.$ that $T_o$ and $S_o$ are read). The restoring coefficient 1070 $\gamma$ is a three-dimensional array initialized by the user in routine 1071 \rou{dtacof} also located in module \mdl{tradmp}. 1071 and that the \textit{namdta\_tem} and \textit{namdta\_sal} namelists are filled in 1072 ($i.e.$ that $T_o$ and $S_o$ are read using \mdl{fldread}, 1073 see \S\ref{SBC_fldread}). The restoring coefficient $\gamma$ is 1074 a three-dimensional array initialized by the user in routine \rou{dtacof} 1075 also located in module \mdl{tradmp}. 1072 1076 1073 1077 The two main cases in which \eqref{Eq_tra_dmp} is used are \textit{(a)} -
trunk/DOC/TexFiles/Chapters/Chap_ZDF.tex
r2414 r2541 246 246 \subsubsection{Surface wave breaking parameterization} 247 247 %-----------------------------------------------------------------------% 248 249 248 Following \citet{Mellor_Blumberg_JPO04}, the TKE turbulence closure model has been modified 250 249 to include the effect of surface wave breaking energetics. This results in a reduction of summertime … 329 328 %--------------------------------------------------------------% 330 329 331 %add here a description of "penetration of TKE" and the associated namelist parameters 332 % \np{nn\_etau}, \np{rn\_efr}, \np{nn\_htau} 330 A description of the "penetration of TKE" and the associated namelist parameters 331 \np{nn\_etau}, \np{rn\_efr} and \np{nn\_htau} remains to be added here. 333 332 334 333 % from Burchard et al OM 2008 : 335 % the most critical process not reproduced by statistical turbulence models is the activity of internal waves and their interaction with turbulence. After the Reynolds decomposition, internal waves are in principle included in the RANS equations, but later partially excluded by the hydrostatic assumption and the model resolution. Thus far, the representation of internal wave mixing in ocean models has been relatively crude (e.g. Mellor, 1989; Large etal., 1994; Meier, 2001; Axell, 2002; St. Laurent and Garrett, 2002).334 % the most critical process not reproduced by statistical turbulence models is the activity of internal waves and their interaction with turbulence. After the Reynolds decomposition, internal waves are in principle included in the RANS equations, but later partially excluded by the hydrostatic assumption and the model resolution. Thus far, the representation of internal wave mixing in ocean models has been relatively crude (e.g. Mellor, 1989; Large et al., 1994; Meier, 2001; Axell, 2002; St. Laurent and Garrett, 2002).
336 335 337 336 … 355 354 of \eqref{Eq_zdftke_e}) should balance the loss of kinetic energy associated with 356 355 the vertical momentum diffusion (first line in \eqref{Eq_PE_zdf}). To do so, special care 357 has to be taken for both the time and space discretization of the TKE equation \citep{Burchard_OM02}. 356 has to be taken for both the time and space discretization of the TKE equation 357 \citep{Burchard_OM02,Marsaleix_al_OM08}. 358 358 359 359 Let us first address the time stepping issue. Fig.~\ref{Fig_TKE_time_scheme} shows -
trunk/DOC/TexFiles/Chapters/Introduction.tex
r2376 r2541 41 41 the tendency terms of the equations are evaluated either centered in time, or forward, 42 42 or backward depending on the nature of the term. 43 Chapter~\ref{DOM} presents the space domain. The model is discretised on a staggered grid 44 (Arakawa C grid) with masking of land areas and uses a Leap-frog environment for time-stepping. 45 Vertical discretisation used depends on both how the bottom topography is represented and 46 whether the free surface is linear or not. Full step or partial step $z$-coordinate or 47 $s$- (terrain-following) coordinate is used with linear free surface (level position are then 48 fixed in time). In non-linear free surface, the corresponding rescaled height coordinate 49 formulation (\textit{z*} or \textit{s*}) is used (the level position then vary in time as a 50 function of the sea surface heigh). The following two chapters (\ref{TRA} and \ref{DYN}) 51 describe the discretisation of the prognostic equations for the active tracers and the 52 momentum. Explicit, split-explicitand filtered free surface formulations are implemented. 43 Chapter~\ref{DOM} presents the space domain. The model is discretised on a staggered 44 grid (Arakawa C grid) with masking of land areas. The vertical discretisation used depends 45 on both how the bottom topography is represented and whether the free surface is linear or not. 46 Full step or partial step $z$-coordinate or $s$- (terrain-following) coordinate is used 47 with linear free surface (level positions are then fixed in time). In non-linear free surface, 48 the corresponding rescaled height coordinate formulation (\textit{z*} or \textit{s*}) is used 49 (the level positions then vary in time as a function of the sea surface height). 50 The following two chapters (\ref{TRA} and \ref{DYN}) describe the discretisation of the 51 prognostic equations for the active tracers and the momentum. Explicit, split-explicit 52 and filtered free surface formulations are implemented.
53 53 A number of numerical schemes are available for momentum advection, for the computation 54 54 of the pressure gradients, as well as for the advection of tracers (second or higher … 79 79 or \citet{Umlauf_Burchard_JMS03} mixing schemes. 80 80 81 Chapter~\ref{OBS} describes a tool which reads in observation files (profile temperature and salinity, 82 sea surface temperature, sea level anomaly and sea ice concentration) and calculates an interpolated 83 model equivalent value at the observation location and nearest model timestep. Originally 84 developed of data assimilation, it is a fantastic tool for model and data comparison. 85 Other Specific online diagnostics (not documented yet) are available in the model: output of all 86 the tendencies of the momentum and tracers equations, output of tracers tendencies 87 averaged over the time evolving mixed layer, output of the tendencies of the barotropic 88 vorticity equation, on-line floats trajectories... 81 Model output management and specific online diagnostics are described in chapter~\ref{DIA}. 82 The diagnostics include the output of all the tendencies of the momentum and tracers equations, 83 the output of tracers tendencies averaged over the time evolving mixed layer, the output of 84 the tendencies of the barotropic vorticity equation, the computation of on-line floats trajectories... 85 Chapter~\ref{OBS} describes a tool which reads in observation files (profile temperature 86 and salinity, sea surface temperature, sea level anomaly and sea ice concentration) 87 and calculates an interpolated model equivalent value at the observation location 88 and nearest model timestep. Originally developed for data assimilation, it is a fantastic 89 tool for model and data comparison. Chapter~\ref{ASM} describes how increments 90 produced by data assimilation may be applied to the model equations.
91 Finally, Chapter~\ref{CFG} provides a brief introduction to the pre-defined model 92 configurations (water column model, ORCA and GYRE families of configurations). 89 93 90 94 The model is implemented in \textsc{Fortran 90}, with preprocessing (C-pre-processor). … 102 106 around the code, the module names follow a three-letter rule. For example, \mdl{traldf} 103 107 is a module related to the TRAcers equation, computing the Lateral DiFfusion. 104 The complete list of module names is presented in Appendix~\ref{Apdx_D}. 108 %The complete list of module names is presented in Appendix~\ref{Apdx_D}. %====>>>> to be done ! 105 109 Furthermore, modules are organized in a few directories that correspond to their category, 106 110 as indicated by the first three letters of their name (Tab.~\ref{Tab_chap}). … 114 118 \begin{table}[!t] 115 119 %\begin{center} \begin{tabular}{|p{143pt}|l|l|} \hline 120 \caption{ \label{Tab_chap} Organization of Chapters mimicking that of the model directories. } 116 121 \begin{center} \begin{tabular}{|l|l|l|} \hline 117 122 Chapter \ref{STP} & - & model time STePping environment \\ \hline … 123 128 Chapter \ref{LDF} & LDF & Lateral DiFfusion (parameterisations) \\ \hline 124 129 Chapter \ref{ZDF} & ZDF & vertical (Z) DiFfusion (parameterisations) \\ \hline 130 Chapter \ref{DIA} & DIA & I/O and DIAgnostics (also IOM, FLO and TRD) \\ \hline 125 131 Chapter \ref{OBS} & OBS & OBServation and model comparison \\ \hline 126 Chapter \ref{ASM} & ASM & ASsimilation increment \\ \hline 127 Chapter \ref{MISC} & ... & Miscellaneous topics (DIA, DTA, IOM, \\ & & SOL, TRD, FLO...)
\\ \hline 129 Chapter \ref{CFG} & - & predefined configurations \\ \hline 132 Chapter \ref{ASM} & ASM & ASsiMilation increment \\ \hline 133 Chapter \ref{MISC} & SOL & Miscellaneous topics (including solvers) \\ \hline 134 Chapter \ref{CFG} & - & predefined configurations (including C1D) \\ \hline 130 135 \end{tabular} 131 \caption{ \label{Tab_chap}132 Organization of Chapters which miminc the one of the model directories. }133 136 \end{center} \end{table} 134 137 %-------------------------------------------------------------------------------------------------------------- … … 141 144 142 145 $\bullet$ The main modifications from OPA v8 and NEMO/OPA v3.2 are :\\ 143 \\144 146 (1) transition to full native \textsc{Fortran} 90, deep code restructuring and drastic 145 147 reduction of CPP keys; \\ … … 150 152 coordinate and for the new options for horizontal pressure gradient computation with 151 153 a non-linear equation of state.}; \\ 152 (4) more choices for the treatment of the free surface: full explicit, split-explicit and filtered. \\154 (4) more choices for the treatment of the free surface: full explicit, split-explicit or filtered schemes. 
\\ 153 155 (5) suppression of the rigid-lid option;\\ 154 156 (6) non linear free surface option (associated with the rescaled height coordinate … 162 164 (12) surface module (SBC) that simplifies the way the ocean is forced and includes two 163 165 bulk formulae (CLIO and CORE) and which includes an on-the-fly interpolation of input forcing fields\\ 164 (13) introduction of LIM 3, the new Louvain-la-Neuve sea-ice model (C-grid rheology and 166 (13) RGB light penetration and optional use of ocean color; \\ 167 (14) major changes in the TKE schemes: it now includes a Langmuir cell parameterization \citep{Axell_JGR02}, 168 the \citet{Mellor_Blumberg_JPO04} surface wave breaking parameterization, and has a time discretization 169 which is energetically consistent with the ocean model equations \citep{Burchard_OM02, Marsaleix_al_OM08}; \\ 170 (15) tidal mixing parametrisation (bottom intensification) + Indonesian specific tidal mixing \citep{Koch-Larrouy_al_GRL07}; \\ 171 (16) introduction of LIM-3, the new Louvain-la-Neuve sea-ice model (C-grid rheology and 165 172 new thermodynamics including bulk ice salinity) \citep{Vancoppenolle_al_OM09a, Vancoppenolle_al_OM09b} 166 173 167 174 \vspace{1cm} 168 175 $\bullet$ The main modifications from NEMO/OPA v3.2 and v3.2 are :\\ 169 \\ 175 $\bullet$ The main modifications from NEMO/OPA v3.2 and v3.3 are :\\ 170 176 (1) introduction of a modified leapfrog-Asselin filter time stepping scheme \citep{Leclair_Madec_OM09}; \\ 171 (2) additional scheme for iso-neutral mixing \citep{Griffies_al_JPO98}, although it is still a "work in progress"; \\ 172 (3) a rewriting of the bottom boundary scheme, following \citet{Campin_Goosse_Tel99}; \\ 173 (4) addition of the atmospheric pressure as an external forcing on both ocean and sea-ice dynamics; \\ 174 (5) addition of a diurnal cycle on solar radiation \citep{Bernie_al_CD07}; \\ 175 (6) addition of an on-line observation and model comparison (thanks to NEMOVAR project); \\ 176 (7) optional
application of an assimilation increment (thanks to NEMOVAR project); \\ 177 (8) introduction of ..... 177 (2) additional scheme for iso-neutral mixing \citep{Griffies_al_JPO98}, although it is still a "work in progress"; \\ 178 (3) a rewriting of the bottom boundary layer scheme, following \citet{Campin_Goosse_Tel99}; \\ 179 (4) addition of a Generic Length Scale vertical mixing scheme, following \citet{Umlauf_Burchard_JMS03}; \\ 180 (5) addition of the atmospheric pressure as an external forcing on both ocean and sea-ice dynamics; \\ 181 (6) addition of a diurnal cycle on solar radiation \citep{Bernie_al_CD07}; \\ 182 (7) river runoffs added through a non-zero depth, and having their own temperature and salinity; \\ 183 (8) CORE II normal year forcing set as the default forcing of ORCA2-LIM configuration ; \\ 184 (9) generalisation of the use of \mdl{fldread} for all input fields (ocean, climatology, sea-ice damping...); \\ 185 (10) addition of an on-line observation and model comparison (thanks to NEMOVAR project); \\ 186 (11) optional application of an assimilation increment (thanks to NEMOVAR project); \\ 187 (12) coupling interface adjusted for the WRF atmospheric model; \\ 188 (13) C-grid ice rheology now available for both LIM-2 and LIM-3 \citep{Bouillon_al_OM09}; \\ 189 (14) a deep re-writing and simplification of the off-line tracer component (OFF\_SRC) ; \\ 190 (15) the merge of passive and active advection and diffusion modules; \\ 191 (16) Use of the Flexible Configuration Manager (FCM) to build configurations, generate the Makefile and produce the executable ; \\ 192 (17) Linear-tangent and Adjoint component (TAM) added, phased with v3.0. 178 193 179 194 \vspace{1cm}