Changeset 4153
- Timestamp:
- 2013-11-05T13:25:45+01:00
- Location:
- branches/2013/dev_LOCEAN_2013
- Files:
-
- 39 edited
- 1 copied
-
branches/2013/dev_LOCEAN_2013/DOC/NEMO_book.tex
r4147 r4153 23 23 \usepackage[margin=10pt,font={small},labelsep=colon,labelfont={bf}]{caption} % Gives small font for captions 24 24 \usepackage{enumitem} % allows non-bold description items 25 \usepackage{longtable} % allows multipage tables 25 26 %\usepackage{colortbl} % gives coloured panels behind table columns 26 27 -
branches/2013/dev_LOCEAN_2013/DOC/TexFiles/Chapters/Chap_DIA.tex
r4148 r4153 1 1 % ================================================================ 2 % Chapter �I/O & Diagnostics2 % Chapter I/O & Diagnostics 3 3 % ================================================================ 4 4 \chapter{Ouput and Diagnostics (IOM, DIA, TRD, FLO)} … … 16 16 17 17 The model outputs are of three types: the restart file, the output listing, 18 and the output file(s). The restart file is used internally by the code when18 and the diagnostic output file(s). The restart file is used internally by the code when 19 19 the user wants to start the model with initial conditions defined by a 20 20 previous simulation. It contains all the information that is necessary in … … 25 25 that it is saved in the same binary format as the one used by the computer 26 26 that is to read it (in particular, 32 bits binary IEEE format must not be used for 27 this file). The output listing and file(s) are predefined but should be checked 27 this file). 28 29 The output listing and file(s) are predefined but should be checked 28 30 and eventually adapted to the user's needs. The output listing is stored in 29 31 the $ocean.output$ file. The information is printed from within the code on the … … 31 33 "\textit{grep -i numout}" in the source code directory. 32 34 33 By default, outpout files are written in NetCDF format but an IEEE output format, called DIMG, can be choosen when defining \key{dimgout}. Since version 3.2, when defining \key{iomput}, an I/O server has been added which provides more flexibility in the choice of the fields to be outputted as well as how the writing work is distributed over the processors in massively parallel computing. The complete description of the use of this I/O server is presented in next section. If neither \key{iomput} nor \key{dimgout} are defined, NEMO is producing NetCDF with the old IOIPSL library which has been kept for compatibility and its easy installation, but it is quite inefficient on parrallel machines. If \key{iomput} is not defined, output files are defined in the \mdl{diawri} module and containing mean (or instantaneous if \key{diainstant} is defined) values over a period of nn\_write time-step (namelist parameter). 35 By default, diagnostic output files are written in NetCDF format but an IEEE binary output format, called DIMG, can be choosen by defining \key{dimgout}. 36 37 Since version 3.2, when defining \key{iomput}, an I/O server has been added which provides more flexibility in the choice of the fields to be written as well as how the writing work is distributed over the processors in massively parallel computing. The complete description of the use of this I/O server is presented in the next section. 38 39 By default, if neither \key{iomput} nor \key{dimgout} are defined, NEMO produces NetCDF with the old IOIPSL library which has been kept for compatibility and its easy installation. However, the IOIPSL library is quite inefficient on parallel machines and, since version 3.2, many diagnostic options have been added presuming the use of \key{iomput}. The usefulness of the default IOIPSL-based option is expected to reduce with each new release. If \key{iomput} is not defined, output files and content are defined in the \mdl{diawri} module and contain mean (or instantaneous if \key{diainstant} is defined) values over a regular period of nn\_write time-steps (namelist parameter). 34 40 35 41 %\gmcomment{ % start of gmcomment … … 42 48 43 49 44 Since version 3.2, iomput is the NEMO output interface. 
It was designed to be simple to use, flexible and efficient. The two main purposes of iomput are: \\ 45 (1) the complete and flexible control of the output files through an external xml file defined by the user \\ 46 (2) to achieve high performance outputs through the distribution (or not) of all tasks related to output files on dedicated processes. \\ 47 The first functionality allows the user to specify, without touching anything into the code, the way he want to output data: \\ 48 - choice of output frequencies that can be different for each file (including real months and years) \\ 49 - choice of file contents: decide which data will be written in which file (the same data can be outputted in different files) \\ 50 - possibility to split output files at a choosen frequency \\ 51 - possibility to extract a vertical or an horizontal subdomain \\ 52 - choice of the temporal operation to perform: average, accumulate, instantaneous, min, max and once \\ 53 - extremely large choice of data available \\ 54 - redefine variables name and long\_name \\ 55 In addition, iomput allows the user to output any variable (scalar, 2D or 3D) in the code in a very easy way. All details of iomput functionalities are listed in the following subsections. Example of the iodef.xml files that control the outputs can be found here: NEMOGCM/CONFIG/ORCA2\_LIM/EXP00/iodef*.xml 56 57 The second functionality targets outputs performances when running on a very large number of processes. First, iomput provides the possibility to dedicate N specific processes (in addition to NEMO processes) to write the outputs, where N is big enough (and defined by the user) to suppress the bottle neck associated with the the writing of the output files. Since version 3.5, this interface depends on an external code called \href{http://forge.ipsl.jussieu.fr/ioserver}{XIOS}. This new IO server takes advantage of the new functionalitiy of NetCDF4 that allows the user to write files in parallel and therefore to bypass the rebuilding phase. Note that writting in parallel into the same NetCDF files requires that your NetCDF4 library is linked to an HDF5 library that has been correctly compiled (i.e. with the configure option $--$enable-parallel). Note that the files created by iomput trough xios are incompatible with NetCDF3. All post-processsing and visualization tools must therefore be compatible with NetCDF4 and not only NetCDF3. 58 59 \subsection{Basic knowledge} 60 50 Since version 3.2, iomput is the NEMO output interface of choice. It has been designed to be simple to use, flexible and efficient. The two main purposes of iomput are: 51 \begin{enumerate} 52 \item The complete and flexible control of the output files through external XML files adapted by the user from standard templates. 53 \item To achieve high performance and scalable output through the optional distribution of all diagnostic output-related tasks to dedicated processes. 54 \end{enumerate} 55 The first functionality allows the user to specify, without code changes or recompilation, aspects of the diagnostic output stream, such as: 56 \begin{itemize} 57 \item The choice of output frequencies that can be different for each file (including real months and years). 58 \item The choice of file contents, with complete flexibility over which data are written in which files (the same data can be written in different files). 59 \item The possibility to split output files at a chosen frequency. 60 \item The possibility to extract a vertical or an horizontal subdomain. 
61 \item The choice of the temporal operation to perform, e.g.: average, accumulate, instantaneous, min, max and once. 62 \item Control over metadata via a large XML "database" of possible output fields. 63 \end{itemize} 64 In addition, iomput allows the user to add the output of any new variable (scalar, 2D or 3D) in the code in a very easy way. All details of iomput functionalities are listed in the following subsections. Examples of the XML files that control the outputs can be found in: 65 \begin{alltt} 66 \begin{verbatim} 67 NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef.xml 68 NEMOGCM/CONFIG/SHARED/field_def.xml 69 and 70 NEMOGCM/CONFIG/SHARED/domain_def.xml. 71 \end{verbatim} 72 \end{alltt} 73 74 The second functionality targets output performance when running in parallel (\key{mpp\_mpi}). Iomput provides the possibility to specify N dedicated I/O processes (in addition to the NEMO processes) to collect and write the outputs. With an appropriate choice of N by the user, the bottleneck associated with the writing of the output files can be greatly reduced. 75 76 Since version 3.5, the iom\_put interface depends on an external code called \href{http://forge.ipsl.jussieu.fr/ioserver}{XIOS}. This new IO server can take advantage of the parallel I/O functionality of NetCDF4 to create a single output file and therefore to bypass the rebuilding phase. Note that writing in parallel into the same NetCDF files requires that your NetCDF4 library is linked to an HDF5 library that has been correctly compiled (i.e. with the configure option $--$enable-parallel). Note that the files created by iomput through XIOS are incompatible with NetCDF3. All post-processing and visualization tools must therefore be compatible with NetCDF4 and not only NetCDF3. 77 78 Even if not using the parallel I/O functionality of NetCDF4, using N dedicated I/O servers, where N is typically much less than the number of NEMO processors, will reduce the number of output files created. This can greatly reduce the post-processing burden usually associated with using large numbers of NEMO processors. Note that for smaller configurations, the rebuilding phase can be avoided, even without a parallel-enabled NetCDF4 library, simply by employing only one dedicated I/O server. 79 80 \subsection{XIOS: the IO\_SERVER} 81 82 \subsubsection{Attached or detached mode?} 83 84 Iomput is based on \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS}, the io\_server developed by Yann Meurdesoif from IPSL. The behaviour of the I/O subsystem is controlled by settings in the external XML files listed above. Key settings in the iodef.xml file are {\tt using\_server} and the {\tt type} tag associated with each defined file. The {\tt using\_server} setting determines whether or not the server will be used in ''attached mode'' (as a library) [{\tt false}] or in ''detached mode'' (as an external executable on N additional, dedicated cpus) [{\tt true}]. The ''attached mode'' is simpler to use but much less efficient for massively parallel applications. The type of each file can be either ''multiple\_file'' or ''one\_file''. 85 86 In attached mode and if the type of file is ''multiple\_file'', then each NEMO process will also act as an IO server and produce its own set of output files. Superficially, this emulates the standard behaviour in previous versions. However, the subdomain written out by each process does not correspond to the {\tt jpi x jpj x jpk} domain actually computed by the process (although it may if {\tt jpni=1}). 
Instead each process will have collected and written out a number of complete longitudinal strips. If the ''one\_file'' option is chosen then all processes will collect their longitudinal strips and write (in parallel) to a single output file. 87 88 In detached mode and if the type of file is ''multiple\_file'', then each stand-alone XIOS process will collect data for a range of complete longitudinal strips and write to its own set of output files. If the ''one\_file'' option is chosen then all XIOS processes will collect their longitudinal strips and write (in parallel) to a single output file. Note that running in detached mode requires launching a Multiple Process Multiple Data (MPMD) parallel job. The following subsection provides a typical example but the syntax will vary in different MPP environments. 89 90 \subsubsection{Number of cpus used by XIOS in detached mode} 91 92 The number of cores used by XIOS is specified when launching the model. The number of cores dedicated to XIOS should be from ~1/10 to ~1/50 of the number of cores dedicated to NEMO. Some manufacturers suggest using O($\sqrt{N}$) dedicated IO processors for N processors but this is a general recommendation and not specific to NEMO. It is difficult to provide precise recommendations because the optimal choice will depend on the particular hardware properties of the target system (parallel filesystem performance, available memory, memory bandwidth etc.) and the volume and frequency of data to be created. Here is an example with 2 cpus for the io\_server and 62 cpus for NEMO using mpirun: 93 94 \texttt{ mpirun -np 62 ./nemo.exe : -np 2 ./xios\_server.exe } 95 96 \subsubsection{Control of XIOS: the XIOS context in iodef.xml} 97 98 As well as the {\tt using\_server} flag, other controls on the use of XIOS are set in the XIOS context in iodef.xml. See the XML basics section below for more details on XML syntax and rules. 99 100 \begin{tabular}{|p{4cm}|p{6.0cm}|p{2.0cm}|} 101 \hline 102 variable name & 103 description & 104 example \\ 105 \hline 106 \hline 107 buffer\_size & 108 buffer size used by XIOS to send data from NEMO to XIOS. Larger is more efficient. Note that needed/used buffer sizes are summarized at the end of the job & 109 25000000 \\ 110 \hline 111 buffer\_server\_factor\_size & 112 ratio between NEMO and XIOS buffer size. Should be 2. & 113 2 \\ 114 \hline 115 info\_level & 116 verbosity level (0 to 100) & 117 0 \\ 118 \hline 119 using\_server & 120 activate attached (false) or detached (true) mode & 121 true \\ 122 \hline 123 using\_oasis & 124 XIOS is used with OASIS(true) or not (false) & 125 false \\ 126 \hline 127 oasis\_codes\_id & 128 when using OASIS, define the identifier of NEMO in the namcouple. Note that the identifier of XIOS is xios.x & 129 oceanx \\ 130 \hline 131 \end{tabular} 132 133 134 \subsection{Practical issues} 135 136 \subsubsection{Installation} 137 138 As mentioned, XIOS is supported separately and must be downloaded and compiled before it can be used with NEMO. See the installation guide on the \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS} wiki for help and guidance. NEMO will need to link to the compiled XIOS library. The 139 \href{http://www.nemo-ocean.eu/Using-NEMO/User-Guides/Basics/XIOS-IO-server-installation-and-use}{XIOS with NEMO} guide provides an example illustration of how this can be achieved. 140 141 \subsubsection{Add your own outputs} 142 143 It is very easy to add your own outputs with iomput. 
Many standard fields and diagnostics are already prepared (i.e., steps 1 to 3 below have been done) and simply need to be activated by including the required output in a file definition in iodef.xml (step 4). To add new output variables, all 4 of the following steps must be taken. 144 \begin{description} 145 \item[1.] in NEMO code, add a \\ 146 \texttt{ CALL iom\_put( 'identifier', array ) } \\ 147 where you want to output a 2D or 3D array. 148 149 \item[2.] If necessary, add \\ 150 \texttt{ USE iom\ \ \ \ \ \ \ \ \ \ \ \ ! I/O manager library } \\ 151 to the list of used modules in the upper part of your module. 152 153 \item[3.] in the field\_def.xml file, add the definition of your variable using the same identifier you used in the f90 code (see subsequent sections for a details of the XML syntax and rules). For example: 154 \vspace{-20pt} 155 \begin{alltt} {{\scriptsize 156 \begin{verbatim} 157 <field_definition> 158 <!-- T grid --> 159 160 <field_group id="grid_T" grid_ref="grid_T_3D"> 161 ... 162 <field id="identifier" long_name="blabla" ... /> 163 ... 164 </field_definition> 165 \end{verbatim} 166 }}\end{alltt} 167 Note your definition must be added to the field\_group whose reference grid is consistent with the size of the array passed to iomput. The grid\_ref attribute refers to definitions set in iodef.xml which, in turn, reference grids and axes either defined in the code (iom\_set\_domain\_attr and iom\_set\_axis\_attr in iom.F90) or defined in the domain\_def.xml file. E.g.: 168 \vspace{-20pt} 169 \begin{alltt} {{\scriptsize 170 \begin{verbatim} 171 <grid id="grid_T_3D" domain_ref="grid_T" axis_ref="deptht"/> 172 \end{verbatim} 173 }}\end{alltt} 174 Note, if your array is computed within the surface module each nn\_fsbc time\_step, 175 add the field definition within the field\_group defined with the id ''SBC'': $<$field\_group id=''SBC''...$>$ which has been defined with the correct frequency of operations (iom\_set\_field\_attr in iom.F90) 176 177 \item[4.] add your field in one of the output files defined in iodef.xml (again see subsequent sections for syntax and rules) \\ 178 \vspace{-20pt} 179 \begin{alltt} {{\scriptsize 180 \begin{verbatim} 181 <file id="file1" .../> 182 ... 183 <field field_ref="identifier" /> 184 ... 185 </file> 186 \end{verbatim} 187 }}\end{alltt} 188 189 \end{description} 190 \subsection{XML fundamentals} 61 191 62 192 \subsubsection{ XML basic rules} … … 72 202 73 203 The XML file used in XIOS is structured by 7 families of tags: context, axis, domain, grid, field, file and variable. Each tag family has hierarchy of three flavors (except for context): 74 \begin{description} 75 \item[root]: declaration of the root element that can contain element groups or elements, for example : $<$file\_definition ...$/>$ \\ 76 \item[group]: declaration of a group element that can contain element groups or elements, for example : $<$file\_group ...$/>$ \\ 77 \item[element]: declaration of an element that can contain elements, for example : $<$file ...$/>$ \\ 78 \end{description} 204 \\ 205 \begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|} 206 \hline 207 flavor & 208 description & 209 example \\ 210 \hline 211 \hline 212 root & 213 declaration of the root element that can contain element groups or elements & 214 {\scriptsize \verb? < file_definition ... >?} \\ 215 \hline 216 group & 217 declaration of a group element that can contain element groups or elements & 218 {\scriptsize \verb? < file_group ... 
>?} \\ 219 \hline 220 element & 221 declaration of an element that can contain elements & 222 {\scriptsize \verb? < file ... >?} \\ 223 \hline 224 \end{tabular} 225 \\ 79 226 80 227 Each element may have several attributes. Some attributes are mandatory, other are optional but have a default value and other are are completely optional. Id is a special attribute used to identify an element or a group of elements. It must be unique for a kind of element. It is optional, but no reference to the corresponding element can be done if it is not defined. 81 228 82 The XML file is split into context tags that are used to isolate IO definition from different codes or different parts of a code. No interference is possible between 2 different contexts. Each context has its own calendar and an associated timestep. In NEMO, we used the following contexts (that can be defined in any order): 83 \begin{description} 84 \item[contex xios]: context containing informations for XIOS \\ 85 \verb? <context id="xios" ... ? 86 \item[context nemo]: contex containing IO informations for NEMO (mother grid when using AGRIF) \\ 87 \verb? <context id="nemo" ... ? 88 \item[context 1\_nemo]: contex containing IO informations for NEMO child grid 1 (when using AGRIF) \\ 89 \verb? <context id="1_nemo" ... ? 90 \item[context n\_nemo]: contex containing IO informations for NEMO child grid n (when using AGRIF) \\ 91 \verb? <context id="n_nemo" ... ? 92 \end{description} 93 94 Each context tag related to NEMO (mother or child grids) is divided into 5 parts (that can be defined in any order): 95 \begin{description} 96 \item[field definition]: define all variables that can potentially be outputted \\ 97 \verb? <field_definition ... ? 98 \item[file definition]: define the netcdf files to be created and the variables they will contain \\ 99 \verb? <file_definition ... ? 100 \item[axis definitions]: define vertical axis \\ 101 \verb? <axis_definition ... ? 102 \item[domain definitions]: define the horizontal grids \\ 103 \verb? <domain_definition ... ? 104 \item[grid definitions]: define the 2D and 3D grids (association of an axis and a domain) \\ 105 \verb? <grid_definition ... ? 106 \end{description} 107 108 the xios context contains only 1 tag: 109 \begin{description} 110 \item[variable definition]: define variables needed by xios. This can be seen as a kind of namelist for xios. \\ 111 \verb? <variable_definition ... ? 112 \end{description} 113 114 The XML file can be split in different parts to improve its readability and facilitate its use. The inclusing of XML files into the main XML file can be done through the attribute src: \\ 115 \verb? <context src="./nemo_def.xml" /> ? 116 In NEMO, by default, the field and domain définition is done in 2 séparate files: \\ 117 NEMOGCM/CONFIG/SHARED/field\_def.xml and \\ 118 NEMOGCM/CONFIG/SHARED/domain\_def.xml that are included in the main iodef.xml file through the following commands: \\ 119 \verb? <field_definition src="./field_def.xml" /> ? \\ 120 \verb? <domain_definition src="./domain_def.xml" /> ? 229 The XML file is split into context tags that are used to isolate IO definition from different codes or different parts of a code. No interference is possible between 2 different contexts. Each context has its own calendar and an associated timestep. 
In NEMO, we used the following contexts (that can be defined in any order):\\ 230 \\ 231 \begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|} 232 \hline 233 context & 234 description & 235 example \\ 236 \hline 237 \hline 238 context xios & 239 context containing information for XIOS & 240 {\scriptsize \verb? <context id="xios" ... ?} \\ 241 \hline 242 context nemo & 243 context containing IO information for NEMO (mother grid when using AGRIF) & 244 {\scriptsize \verb? <context id="nemo" ... ?} \\ 245 \hline 246 context 1\_nemo & 247 context containing IO information for NEMO child grid 1 (when using AGRIF) & 248 {\scriptsize \verb? <context id="1_nemo" ... ?} \\ 249 \hline 250 context n\_nemo & 251 context containing IO information for NEMO child grid n (when using AGRIF) & 252 {\scriptsize \verb? <context id="n_nemo" ... ?} \\ 253 \hline 254 \end{tabular} 255 \\ 256 257 \noindent The xios context contains only 1 tag: 258 \\ 259 \begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|} 260 \hline 261 context tag & 262 description & 263 example \\ 264 \hline 265 \hline 266 variable\_definition & 267 define variables needed by XIOS. This can be seen as a kind of namelist for XIOS. & 268 {\scriptsize \verb? <variable_definition ... ?} \\ 269 \hline 270 \end{tabular} 271 \\ 272 273 \noindent Each context tag related to NEMO (mother or child grids) is divided into 5 parts (that can be defined in any order):\\ 274 \\ 275 \begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|} 276 \hline 277 context tag & 278 description & 279 example \\ 280 \hline 281 \hline 282 field\_definition & 283 define all variables that can potentially be outputted & 284 {\scriptsize \verb? <field_definition ... ?} \\ 285 \hline 286 file\_definition & 287 define the netcdf files to be created and the variables they will contain & 288 {\scriptsize \verb? <file_definition ... ?} \\ 289 \hline 290 axis\_definition & 291 define vertical axis & 292 {\scriptsize \verb? <axis_definition ... ?} \\ 293 \hline 294 domain\_definition & 295 define the horizontal grids & 296 {\scriptsize \verb? <domain_definition ... ?} \\ 297 \hline 298 grid\_definition & 299 define the 2D and 3D grids (association of an axis and a domain) & 300 {\scriptsize \verb? <grid_definition ... ?} \\ 301 \hline 302 \end{tabular} 303 \\ 304 305 \subsubsection{Nesting XML files} 306 307 The XML file can be split in different parts to improve its readability and facilitate its use. The inclusion of XML files into the main XML file can be done through the attribute src: \\ 308 {\scriptsize \verb? <context src="./nemo_def.xml" /> ?}\\ 309 310 \noindent In NEMO, by default, the field and domain definition is done in 2 separate files: 311 {\scriptsize \tt 312 \begin{verbatim} 313 NEMOGCM/CONFIG/SHARED/field_def.xml 314 and 315 NEMOGCM/CONFIG/SHARED/domain_def.xml 316 \end{verbatim} 317 } 318 \noindent that are included in the main iodef.xml file through the following commands: \\ 319 {\scriptsize \verb? <field_definition src="./field_def.xml" /> ? \\ 320 \verb? <domain_definition src="./domain_def.xml" /> ? } 121 321 122 322 123 323 \subsubsection{Use of inheritance} 124 324 125 XML extensively uses the concept of inheritance. XML has a based treestructure with a parent-child oriented relation: all children inherit attributes from parent, but an attribute defined in a child replace the inherited attribute value. Note that the special attribute ''id'' is never inherited. \\325 XML extensively uses the concept of inheritance. 
XML has a tree based structure with a parent-child oriented relation: all children inherit attributes from parent, but an attribute defined in a child replace the inherited attribute value. Note that the special attribute ''id'' is never inherited. \\ 126 326 \\ 127 example 1: Direct inheritance. \\ 327 example 1: Direct inheritance. 328 \vspace{-20pt} 128 329 \begin{alltt} {{\scriptsize 129 330 \begin{verbatim} 130 331 <field_definition operation="average" > 131 132 332 <field id="sst" /> <!-- averaged sst --> 333 <field id="sss" operation="instant"/> <!-- instantaneous sss --> 133 334 </field_definition> 134 335 \end{verbatim} … … 140 341 for example output instantaneous values instead of average values. \\ 141 342 \\ 142 example 2: Inheritance by reference. \\ 343 example 2: Inheritance by reference. 344 \vspace{-20pt} 143 345 \begin{alltt} {{\scriptsize 144 346 \begin{verbatim} 145 347 <field_definition> 146 147 348 <field id="sst" long_name="sea surface temperature" /> 349 <field id="sss" long_name="sea surface salinity" /> 148 350 </field_definition> 149 351 150 352 <file_definition> 151 152 153 154 353 <file id="myfile" output_freq="1d" /> 354 <field field_ref="sst" /> <!-- default def --> 355 <field field_ref="sss" long_name="my description" /> <!-- overwrite --> 356 </file> 155 357 </file_definition> 156 358 \end{verbatim} 157 359 }}\end{alltt} 158 Inherite (and overwrite, if needed) the attributes of a tag you are refering to. 159 160 \subsubsection{Use of Group} 161 162 Groups can be used fort 2 purposes. \\ 163 164 First, the group can be used to define common attributes to be shared by the elements of the group through the inheritance. In the following example, we define a group of field that will share a common grid ''grid\_T\_2D''. Note that for the field ''toce'', we overwrite the grid definition inherited from the group by ''grid\_T\_3D''. 360 Inherit (and overwrite, if needed) the attributes of a tag you are refering to. 361 362 \subsubsection{Use of Groups} 363 364 Groups can be used for 2 purposes. Firstly, the group can be used to define common attributes to be shared by the elements of the group through inheritance. In the following example, we define a group of field that will share a common grid ''grid\_T\_2D''. Note that for the field ''toce'', we overwrite the grid definition inherited from the group by ''grid\_T\_3D''. 365 \vspace{-20pt} 165 366 \begin{alltt} {{\scriptsize 166 367 \begin{verbatim} 167 368 <field_group id="grid_T" grid_ref="grid_T_2D"> 168 169 170 171 369 <field id="toce" long_name="temperature" unit="degC" grid_ref="grid_T_3D"/> 370 <field id="sst" long_name="sea surface temperature" unit="degC" /> 371 <field id="sss" long_name="sea surface salinity" unit="psu" /> 372 <field id="ssh" long_name="sea surface height" unit="m" /> 172 373 ... 173 374 \end{verbatim} 174 375 }}\end{alltt} 175 376 176 Second , the group can be used to replace a list of elements. Several examples of groups of fields are proposed at the end of the file \\177 NEMOGCM/CONFIG/SHARED/field\_def.xml. For example, a short list of usual variables related to the U grid: 377 Secondly, the group can be used to replace a list of elements. Several examples of groups of fields are proposed at the end of the file {\tt CONFIG/SHARED/field\_def.xml}. 
For example, a short list of the usual variables related to the U grid: 378 \vspace{-20pt} 178 379 \begin{alltt} {{\scriptsize 179 380 \begin{verbatim} 180 381 <field_group id="groupU" > 181 182 183 382 <field field_ref="uoce" /> 383 <field field_ref="suoce" /> 384 <field field_ref="utau" /> 184 385 </field_group> 185 386 \end{verbatim} 186 387 }}\end{alltt} 187 that can be directly include in a file through the following syntaxe: 388 that can be directly included in a file through the following syntax: 389 \vspace{-20pt} 188 390 \begin{alltt} {{\scriptsize 189 391 \begin{verbatim} 190 392 <file id="myfile_U" output_freq="1d" /> 191 192 393 <field_group group_ref="groupU"/> 394 <field field_ref="uocetr_eff" /> <!-- add another field --> 193 395 </file> 194 396 \end{verbatim} … … 197 399 \subsection{Detailed functionalities } 198 400 199 The file NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xmlprovides several examples of the use of the new functionalities offered by the XML interface of XIOS.401 The file {\tt NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xml} provides several examples of the use of the new functionalities offered by the XML interface of XIOS. 200 402 201 403 \subsubsection{Define horizontal subdomains} 202 Horizontal subdomains are defined through the attributs zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj of the tag family domain. It must therefore be done in the domain part of the XML file. For example, in NEMOGCM/CONFIG/SHARED/domain\_def.xml, we provide the following example of a definition of a 5 by 5 box with the bottom left corner at point (10,10). 404 Horizontal subdomains are defined through the attributs zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj of the tag family domain. It must therefore be done in the domain part of the XML file. For example, in {\tt CONFIG/SHARED/domain\_def.xml}, we provide the following example of a definition of a 5 by 5 box with the bottom left corner at point (10,10). 405 \vspace{-20pt} 203 406 \begin{alltt} {{\scriptsize 204 407 \begin{verbatim} 205 408 <domain_group id="grid_T"> 206 409 <domain id="myzoom" zoom_ibegin="10" zoom_jbegin="10" zoom_ni="5" zoom_nj="5" /> 207 410 \end{verbatim} 208 411 }}\end{alltt} 209 412 The use of this subdomain is done through the redefinition of the attribute domain\_ref of the tag family field. For example: 413 \vspace{-20pt} 210 414 \begin{alltt} {{\scriptsize 211 415 \begin{verbatim} … … 216 420 }}\end{alltt} 217 421 Moorings are seen as an extrem case corresponding to a 1 by 1 subdomain. The Equatorial section, the TAO, RAMA and PIRATA moorings are alredy registered in the code and can therefore be outputted without taking care of their (i,j) position in the grid. These predefined domains can be activated by the use of specific domain\_ref: ''EqT'', ''EqU'' or ''EqW'' for the equatorial sections and the mooring position for TAO, RAMA and PIRATA followed by ''T'' (for example: ''8s137eT'', ''1.5s80.5eT'' ...) 422 \vspace{-20pt} 218 423 \begin{alltt} {{\scriptsize 219 424 \begin{verbatim} … … 227 432 \subsubsection{Define vertical zooms} 228 433 Vertical zooms are defined through the attributs zoom\_begin and zoom\_end of the tag family axis. It must therefore be done in the axis part of the XML file. 
For example, in NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xml, we provide the following example: 434 \vspace{-20pt} 229 435 \begin{alltt} {{\scriptsize 230 436 \begin{verbatim} … … 235 441 }}\end{alltt} 236 442 The use of this vertical zoom is done through the redefinition of the attribute axis\_ref of the tag family field. For example: 443 \vspace{-20pt} 237 444 \begin{alltt} {{\scriptsize 238 445 \begin{verbatim} … … 246 453 247 454 The output file names are defined by the attributs ''name'' and ''name\_suffix'' of the tag family file. for example: 455 \vspace{-20pt} 248 456 \begin{alltt} {{\scriptsize 249 457 \begin{verbatim} … … 258 466 \end{verbatim} 259 467 }}\end{alltt} 260 However it is also often very convienent to define the file name with the name of the experience, the output file frequency and the date of the beginning and the end of the simulation (which are informations stored either in the namelist or in the XML file). To do so, we added the following rule: if the id of the tag file is ''fileN''(where N = 1 to 99) or one of the predefined section or mooring (see next subsection), the following part of the name and the name\_suffix (that can be inherited) will be automatically replaced by:\\468 However it is often very convienent to define the file name with the name of the experiment, the output file frequency and the date of the beginning and the end of the simulation (which are informations stored either in the namelist or in the XML file). To do so, we added the following rule: if the id of the tag file is ''fileN''(where N = 1 to 99) or one of the predefined sections or moorings (see next subsection), the following part of the name and the name\_suffix (that can be inherited) will be automatically replaced by:\\ 261 469 \\ 262 470 \begin{tabular}{|p{4cm}|p{8cm}|} 263 471 \hline 264 \centering part of the name automatically to be replaced & 265 by \\ 472 \centering placeholder string & automatically replaced by \\ 266 473 \hline 267 474 \hline 268 475 \centering @expname@ & 269 the experi encename (from cn\_exp in the namelist) \\476 the experiment name (from cn\_exp in the namelist) \\ 270 477 \hline 271 478 \centering @freq@ & … … 284 491 ending date of the simulation (from nn\_date0 and nn\_itend in the namelist). \verb?yyyymmdd_hh:mm:ss? format \\ 285 492 \hline 286 \end{tabular} 493 \end{tabular}\\ 287 494 \\ 288 495 289 For example, 290 291 \begin{alltt} {{\scriptsize 496 \noindent For example, 497 {{\scriptsize 292 498 \begin{verbatim} 293 499 <file id="myfile_hzoom" name="myfile_@expname@_@startdate@_freq@freq@" output_freq="1d" > 294 500 \end{verbatim} 295 }}\end{alltt} 296 297 With, in the namelist: 298 299 \begin{alltt} {{\scriptsize 501 }} 502 \noindent with the namelist: 503 {{\scriptsize 300 504 \begin{verbatim} 301 505 cn_exp = "ORCA2" … … 303 507 ln_rstart = .false. 304 508 \end{verbatim} 305 }}\end{alltt} 306 307 will give the following file name radical: 308 309 \begin{alltt} {{\scriptsize 509 }} 510 \noindent will give the following file name radical: 511 {{\scriptsize 310 512 \begin{verbatim} 311 513 myfile_ORCA2_19891231_freq1d 312 514 \end{verbatim} 313 }}\end{alltt} 314 515 }} 315 516 316 517 \subsubsection{Other controls of the xml attributes from NEMO} 317 518 318 The values of some attributes are automatically defined by NEMO (and any definition given in the xml file is overwritten). 
By convention, these attributes are defined to ''auto'' (for string) or ''0000'' (for integer) in the xml file (but this is not necessary).319 320 Here is the list of these attributes: 519 The values of some attributes are defined by subroutine calls within NEMO (calls to iom\_set\_domain\_attr, iom\_set\_axis\_attr and iom\_set\_field\_attr in iom.F90). Any definition given in the xml file will be overwritten. By convention, these attributes are defined to ''auto'' (for string) or ''0000'' (for integer) in the xml file (but this is not necessary). 520 521 Here is the list of these attributes:\\ 321 522 \\ 322 523 \begin{tabular}{|l|c|c|c|} … … 343 544 344 545 546 \subsection{XML reference tables} 547 345 548 \subsubsection{Tag list} 346 549 347 348 \begin{tabular}{|p{2cm}|p{2.5cm}|p{3.5cm}|p{2cm}|p{2cm}|} 550 \begin{longtable}{|p{2.2cm}|p{2.5cm}|p{3.5cm}|p{2.2cm}|p{1.6cm}|} 349 551 \hline 350 552 tag name & … … 352 554 accepted attribute & 353 555 child of & 354 parent of \\ 355 \hline 556 parent of \endhead 356 557 \hline 357 558 simulation & … … 362 563 \hline 363 564 context & 364 encapsulates parts of the xml file d édicated to different codes or different parts of a code &565 encapsulates parts of the xml file dedicated to different codes or different parts of a code & 365 566 id (''xios'', ''nemo'' or ''n\_nemo'' for the nth AGRIF zoom), src, time\_origin & 366 567 simulation & 367 all root tags: ... \_definition \\568 all root tags: ... \_definition \\ 368 569 \hline 369 570 \hline … … 389 590 file\_definition & 390 591 encapsulates the definition of all the files that will be outputted & 391 enabled, min\_digits, name, name\_suffix, output\_level, split\_f ormat, split\_freq, sync\_freq, type, src &592 enabled, min\_digits, name, name\_suffix, output\_level, split\_freq\_format, split\_freq, sync\_freq, type, src & 392 593 context & 393 594 file or file\_group \\ … … 395 596 file\_group & 396 597 encapsulates a group of files that will be outputted & 397 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_f ormat, split\_freq, sync\_freq, type, src &598 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_freq\_format, split\_freq, sync\_freq, type, src & 398 599 file\_definition, file\_group & 399 600 file or file\_group \\ 400 601 \hline 401 602 file & 402 defi le the contentof a file to be outputted &403 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_f ormat, split\_freq, sync\_freq, type, src &603 define the contents of a file to be outputted & 604 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_freq\_format, split\_freq, sync\_freq, type, src & 404 605 file\_definition, file\_group & 405 606 field \\ 406 \hline407 \end{tabular}408 \begin{tabular}{|p{2cm}|p{2.5cm}|p{3.5cm}|p{2cm}|p{2cm}|}409 \hline410 tag name &411 description &412 accepted attribute &413 child of &414 parent of \\415 \hline416 607 \hline 417 608 axis\_definition & … … 434 625 \hline 435 626 \hline 436 domain\_ definition &627 domain\_\-definition & 437 628 define all the horizontal domains potentially used by the variables & 438 629 src & 439 630 context & 440 domain\_ group, domain \\631 domain\_\-group, domain \\ 441 632 \hline 442 633 domain\_group & 443 634 encapsulates a group of horizontal domains & 444 635 id, lon\_name, src, zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj & 445 domain\_ definition, domain\_group &446 domain\_ 
group, domain \\636 domain\_\-definition, domain\_group & 637 domain\_\-group, domain \\ 447 638 \hline 448 639 domain & 449 640 define an horizontal domain & 450 641 id, lon\_name, src, zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj & 451 domain\_ definition, domain\_group &642 domain\_\-definition, domain\_group & 452 643 none \\ 453 644 \hline … … 471 662 none \\ 472 663 \hline 473 \end{ tabular}664 \end{longtable} 474 665 475 666 476 667 \subsubsection{Attributes list} 477 668 478 \begin{tabular}{|p{2cm}|p{4cm}|p{4cm}|p{2cm}|} 479 \hline 669 \begin{longtable}{|p{2.2cm}|p{4cm}|p{3.8cm}|p{2cm}|} 670 \hline 671 attribute name & 672 description & 673 example & 674 accepted by \endhead 675 \hline 676 axis\_ref & 677 refers to the id of a vertical axis & 678 axis\_ref="deptht" & 679 field, grid families \\ 680 \hline 681 enabled & 682 switch on/off the output of a field or a file & 683 enabled=".TRUE." & 684 field, file families \\ 685 \hline 686 default\_value & 687 missing\_value definition & 688 default\_value="1.e20" & 689 field family \\ 690 \hline 691 description & 692 just for information, not used & 693 description="ocean T grid variables" & 694 all tags \\ 695 \hline 696 domain\_ref & 697 refers to the id of a domain & 698 domain\_ref="grid\_T" & 699 field or grid families \\ 700 \hline 701 field\_ref & 702 id of the field we want to add in a file & 703 field\_ref="toce" & 704 field \\ 705 \hline 706 grid\_ref & 707 refers to the id of a grid & 708 grid\_ref="grid\_T\_2D" & 709 field family \\ 710 \hline 711 group\_ref & 712 refer to a group of variables & 713 group\_ref="mooring" & 714 field\_group \\ 715 \hline 716 id & 717 allow to identify a tag & 718 id="nemo" & 719 accepted by all tags except simulation \\ 720 \hline 721 level & 722 output priority of a field: 0 (high) to 10 (low)& 723 level="1" & 724 field family \\ 725 \hline 726 long\_name & 727 define the long\_name attribute in the NetCDF file & 728 long\_name="Vertical T levels" & 729 field \\ 730 \hline 731 min\_digits & 732 specify the minimum of digits used in the core number in the name of the NetCDF file & 733 min\_digits="4" & 734 file family \\ 735 \hline 736 name & 737 name of a variable or a file. If the name of a file is undefined, its id is used as a name & 738 name="tos" & 739 field or file families \\ 740 \hline 741 name\_suffix & 742 suffix to be inserted after the name and before the cpu number and the ''.nc'' termination of a file & 743 name\_suffix="\_myzoom" & 744 file family \\ 745 \hline 480 746 attribute name & 481 747 description & … … 484 750 \hline 485 751 \hline 486 axis\_ref & 487 refers to the id of a vertical axis & 488 axis\_ref="deptht" & 489 field, grid families \\ 490 \hline 491 enabled & 492 switch on/off the output of a field or a file & 493 enabled=".TRUE." & 494 field, file families \\ 495 \hline 496 default\_value & 497 missing\_value definition & 498 default\_value="1.e20" & 752 operation & 753 type of temporal operation: average, accumulate, instantaneous, min, max and once & 754 operation="average" & 499 755 field family \\ 500 756 \hline 501 description & 502 just for information, not used & 503 description="ocean T grid variables" & 504 all tags \\ 505 \hline 506 domain\_ref & 507 refers to the id of a domain & 508 domain\_ref="grid\_T" & 509 field or grid families \\ 510 \hline 511 field\_ref= & 512 id of the field we want to add in a file & 513 field\_ref="toce" & 757 output\_freq & 758 operation frequency. units can be ts (timestep), y, mo, d, h, mi, s. 
& 759 output\_freq="1d12h" & 760 field family \\ 761 \hline 762 output\_level & 763 output priority of variables in a file: 0 (high) to 10 (low). All variables listed in the file with a level smaller or equal to output\_level will be output. Other variables won't be output even if they are listed in the file. & 764 output\_level="10"& 765 file family \\ 766 \hline 767 positive & 768 convention used for the orientation of vertival axis (positive downward in \NEMO). & 769 positive="down" & 770 axis family \\ 771 \hline 772 prec & 773 output precision: real 4 or real 8 & 774 prec="4" & 775 field family \\ 776 \hline 777 split\_freq & 778 frequency at which to temporally split output files. Units can be ts (timestep), y, mo, d, h, mi, s. Useful for long runs to prevent over-sized output files.& 779 split\_freq="1mo" & 780 file family \\ 781 \hline 782 split\_freq\-\_format & 783 date format used in the name of temporally split output files. Can be specified 784 using the following syntaxes: \%y, \%mo, \%d, \%h \%mi and \%s & 785 split\_freq\_format= "\%y\%mo\%d" & 786 file family \\ 787 \hline 788 src & 789 allow to include a file & 790 src="./field\_def.xml" & 791 accepted by all tags except simulation \\ 792 \hline 793 standard\_name & 794 define the standard\_name attribute in the NetCDF file & 795 standard\_name= "Eastward\_Sea\_Ice\_Transport" & 514 796 field \\ 515 797 \hline 516 grid\_ref & 517 refers to the id of a grid & 518 grid\_ref="grid\_T\_2D" & 519 field family \\ 520 \hline 521 group\_ref & 522 refer to a group of variables & 523 group\_ref="mooring" & 524 field\_group \\ 525 \hline 526 id & 527 allow to identify a tag & 528 id="nemo" & 529 accepted by all tags except simulation \\ 530 \hline 531 level & 532 output priority of a field: 0 (high) to 10 (low)& 533 level="1" & 534 field family \\ 535 \hline 536 long\_name & 537 define the long\_name attribute in the NetCDF file & 538 long\_name="Vertical T levels" & 539 field \\ 540 \hline 541 min\_digits & 542 specify the minimum of digits used in the core number in the name of the NetCDF file & 543 min\_digits="4" & 798 sync\_freq & 799 NetCDF file synchronization frequency (update of the time\_counter). units can be ts (timestep), y, mo, d, h, mi, s. & 800 sync\_freq="10d" & 544 801 file family \\ 545 802 \hline 546 name &547 name of a variable or a file. If the name of a file is undefined, its id is used as a name &548 name="tos" &549 field or file families \\550 \hline551 name\_suffix &552 suffix to be inserted after the name and before the cpu number and the ''.nc'' termination of a file &553 name\_suffix="\_myzoom" &554 file family \\555 \hline556 \end{tabular}557 \begin{tabular}{|p{2cm}|p{4cm}|p{4cm}|p{2cm}|}558 \hline559 803 attribute name & 560 804 description & … … 563 807 \hline 564 808 \hline 565 operation &566 type of temporal operation: average, accumulate, instantaneous, min, max and once &567 operation="average" &568 field family \\569 \hline570 output\_freq &571 operation frequency. units can be ts (timestep), y, mo, d, h, mi, s. &572 output\_freq="1d12h" &573 field family \\574 \hline575 output\_level &576 output priority of variables in a file: 0 (high) to 10 (low). All variables listed in the file with a level smaller or equal to output\_level will be output. Other variables won't be output even if they are listed in the file. &577 output\_level="10"&578 file family \\579 \hline580 positive &581 convention used for the orientation of vertival axis (positive downward in \NEMO). 
&582 positive="down" &583 axis family \\584 \hline585 prec &586 output precision: real 4 or real 8 &587 prec="4" &588 field family \\589 \hline590 split\_format &591 date format used in the name of splitted output files. can be spécified using the following syntaxe: \%y, \%mo, \%d, \%h \%mi and \%s &592 split\_format="\%yy\%mom\%dd" &593 file family \\594 \hline595 split\_freq &596 split output files frequency. units can be ts (timestep), y, mo, d, h, mi, s. &597 split\_freq="1mo" &598 file family \\599 \hline600 src &601 allow to include a file &602 src="./field\_def.xml" &603 accepted by all tags except simulation \\604 \hline605 standard\_name &606 define the standard\_name attribute in the NetCDF file &607 standard\_name="Eastward\_Sea\_Ice\_Transport" &608 field \\609 \hline610 sync\_freq &611 NetCDF file synchronization frequency (update of the time\_counter). units can be ts (timestep), y, mo, d, h, mi, s. &612 sync\_freq="10d" &613 file family \\614 \hline615 \end{tabular}616 \begin{tabular}{|p{2cm}|p{4cm}|p{4cm}|p{2cm}|}617 \hline618 attribute name &619 description &620 example &621 accepted by \\622 \hline623 \hline624 809 time\_origin & 625 810 specify the origin of the time counter & … … 628 813 \hline 629 814 type (1)& 630 specify if the output files must be splitted(multiple\_file) or not (one\_file) &815 specify if the output files are to be split spatially (multiple\_file) or not (one\_file) & 631 816 type="multiple\_file" & 632 817 file familly \\ … … 662 847 domain family \\ 663 848 \hline 664 \end{tabular} 665 666 \subsection{XIOS: the IO\_SERVER} 667 668 \subsubsection{Attached or detached mode?} 669 670 Iomput is based on \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS}, the io\_server developed by Yann Meurdesoif from IPSL. This server can be used in ''attached mode'' (as a library) or in ''detached mode'' (as an external executable on n cpus). The ''attached mode'' is simpler to use but much less efficient. If the type of file is ''multiple\_file'', then in attached(detached) mode, each NEMO(XIOS) process will output its own subdomain: if NEMO(XIOS) is runnning on N cores, the ouput files will be splitted into N files. If the type of file is ''one\_file'', the output files will be directly recombined into one unique file either in ''detached mode'' or ''attached mode''. 671 672 \subsubsection{Control of xios: the xios context in iodef.xml} 673 674 The control of the use of xios is done through the xios context in iodef.xml. 675 676 \begin{tabular}{|p{3cm}|p{6.5cm}|p{2.5cm}|} 677 \hline 678 variable name & 679 description & 680 example \\ 681 \hline 682 \hline 683 buffer\_size & 684 buffer size used by XIOS to send data from NEMO to XIOS. Larger is more efficient. Note that needed/used buffer sizes are summarized at the end of the job & 685 25000000 \\ 686 \hline 687 buffer\_server\_factor\_size & 688 ratio between NEMO and XIOS buffer size. Should be 2. & 689 2 \\ 690 \hline 691 info\_level & 692 verbosity level (0 to 100) & 693 0 \\ 694 \hline 695 using\_server & 696 activate attached(false) or detached(true) mode & 697 true \\ 698 \hline 699 using\_oasis & 700 xios is used with OASIS(true) or not (false) & 701 false \\ 702 \hline 703 oasis\_codes\_id & 704 when using oasis, define the identifier of NEMO in the namcouple. 
Not that the identifier of XIOS is xios.x & 705 oceanx \\ 706 \hline 707 \end{tabular} 708 709 \subsubsection{Number of cpu used by XIOS in detached mode} 710 711 The number of cores used by the xios is specified only when launching the model. The number or cores dedicated to XIOS should be from ~1/10 to ~1/50 of the number or cores dedicated to NEMO (according of the amount of data to be created). Here is an example of 2 cpus for the io\_server and 62 cpu for opa using mpirun: 712 713 \texttt{ mpirun -np 2 ./nemo.exe : -np 62 ./xios\_server.exe } 714 715 \subsection{Practical issues} 716 717 \subsubsection{Add your own outputs} 718 719 It is very easy to add you own outputs with iomput. 4 points must be followed. 720 \begin{description} 721 \item[1-] in NEMO code, add a \\ 722 \texttt{ CALL iom\_put( 'identifier', array ) } \\ 723 where you want to output a 2D or 3D array. 724 725 \item[2-] don't forget to add \\ 726 \texttt{ USE iom ! I/O manager library } \\ 727 in the list of used modules in the upper part of your module. 728 729 \item[3-] in the file\_definition part of the xml file, add the definition of your variable using the same identifier you used in the f90 code. 730 \vspace{-20pt} 731 \begin{alltt} {{\scriptsize 732 \begin{verbatim} 733 <field_definition> 734 ... 735 <field id="identifier" long_name="blabla" ... /> 736 ... 737 </field_definition> 738 \end{verbatim} 739 }}\end{alltt} 740 attributes axis\_ref and grid\_ref must be consistent with the size of the array to pass to iomput. 741 if your array is computed within the surface module each nn\_fsbc time\_step, 742 add the field definition within the group defined with the id ''SBC'': $<$group id=''SBC''...$>$ 743 744 \item[4-] add your field in one of the output files \\ 745 \vspace{-20pt} 746 \begin{alltt} {{\scriptsize 747 \begin{verbatim} 748 <file id="file1" .../> 749 ... 750 <field ref="identifier" /> 751 ... 752 </file> 753 \end{verbatim} 754 }}\end{alltt} 755 756 \end{description} 849 \end{longtable} 850 757 851 758 852 … … 809 903 domain size in any dimension. The algorithm used is: 810 904 905 \vspace{-20pt} 811 906 \begin{alltt} {{\scriptsize 812 907 \begin{verbatim} -
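The XIOS context variables, the five definition blocks of the NEMO context and the src-based nesting of field_def.xml and domain_def.xml described in the documentation above all meet in the top-level iodef.xml. The skeleton below is only an illustrative sketch of how these pieces fit together: the variable types, the time_origin value and the single "file1"/"sst" entries are placeholders, and the templates shipped under NEMOGCM/CONFIG (e.g. ORCA2_LIM/EXP00/iodef.xml) remain the reference.

    <?xml version="1.0"?>
    <simulation>

      <context id="xios">                                     <!-- XIOS "namelist" -->
        <variable_definition>
          <variable id="buffer_size"               type="integer">25000000</variable>
          <variable id="buffer_server_factor_size" type="integer">2</variable>
          <variable id="info_level"                type="integer">0</variable>
          <variable id="using_server"              type="boolean">true</variable>
          <variable id="using_oasis"               type="boolean">false</variable>
          <variable id="oasis_codes_id"            type="string">oceanx</variable>
        </variable_definition>
      </context>

      <context id="nemo" time_origin="1950-01-01 00:00:00">   <!-- mother grid -->
        <field_definition  src="./field_def.xml"  />          <!-- shared definitions -->
        <domain_definition src="./domain_def.xml" />

        <axis_definition>
          <axis id="deptht" long_name="Vertical T levels" unit="m" positive="down" />
          <axis id="icbcla" long_name="Iceberg class"     unit="-" />
        </axis_definition>

        <grid_definition>
          <grid id="grid_T_3D" domain_ref="grid_T" axis_ref="deptht" />
        </grid_definition>

        <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@"
                         sync_freq="1d" min_digits="4">
          <file id="file1" output_freq="1d" output_level="10" enabled=".TRUE.">
            <field field_ref="sst" />                         <!-- daily mean SST -->
          </file>
        </file_definition>
      </context>

    </simulation>

Switching between attached and detached mode then amounts to changing using_server (and, in detached mode, launching xios_server.exe alongside nemo.exe as in the mpirun example above).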
branches/2013/dev_LOCEAN_2013/NEMOGCM/ARCH/arch-X64_CURIE.fcm
r4148 r4153 29 29 # - fcm variables are starting with a % (and not a $) 30 30 # 31 %NCDF_HOME /usr/local/netcdf-4.2_hdf5 32 %HDF5_HOME /usr/local/hdf5-1.8. 831 %NCDF_HOME /usr/local/netcdf-4.2_hdf5_parallel 32 %HDF5_HOME /usr/local/hdf5-1.8.9_parallel 33 33 %XIOS_HOME $WORKDIR/now/models/xios 34 34 %OASIS_HOME $WORKDIR/now/models/oa3mct -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/AMM12/EXP00/iodef.xml
r4148 r4153 128 128 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 129 129 <axis id="nfloat" long_name="Float number" unit="-" /> 130 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 130 131 </axis_definition> 131 132 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/GYRE/EXP00/iodef.xml
r4148 r4153 91 91 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 92 92 <axis id="nfloat" long_name="Float number" unit="-" /> 93 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 93 94 </axis_definition> 94 95 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/GYRE_BFM/EXP00/iodef.xml
r4148 r4153 62 62 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 63 63 <axis id="nfloat" long_name="Float number" unit="-" /> 64 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 64 65 </axis_definition> 65 66 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/GYRE_PISCES/EXP00/iodef.xml
r4148 r4153 128 128 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 129 129 <axis id="nfloat" long_name="Float number" unit="-" /> 130 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 130 131 </axis_definition> 131 132 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_ar5.xml
r4148 r4153 248 248 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 249 249 <axis id="nfloat" long_name="Float number" unit="-" /> 250 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 250 251 </axis_definition> 251 252 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_default.xml
- Property svn:mime-type deleted
- Property svn:keywords set to Id
r4148 r4153 129 129 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 130 130 <axis id="nfloat" long_name="Float number" unit="-" /> 131 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 131 132 </axis_definition> 132 133 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_demo.xml
- Property svn:mime-type deleted
- Property svn:keywords set to Id
r4148 r4153 44 44 <!-- mooring: automatic definition of the file name suffix based on id="0n180wT" --> 45 45 <!-- include a group of variables. see field_def.xml for mooring variables definition --> 46 <file id="0n180wT" 47 <field_group group_ref="mooring" />46 <file id="0n180wT"> 47 <field_group group_ref="mooring" domain_ref="0n180wT" /> 48 48 </file> 49 49 … … 53 53 <field_group id="EqT" domain_ref="EqT" > 54 54 <field field_ref="toce" name="votemper" axis_ref="deptht_myzoom" /> 55 <field field_ref="sss" /> 55 56 </field_group> 56 57 </file> … … 85 86 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 86 87 <axis id="nfloat" long_name="Float number" unit="-" /> 88 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 87 89 </axis_definition> 88 90 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_oldstyle.xml
- Property svn:mime-type deleted
- Property svn:keywords set to Id
r4148 r4153 116 116 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 117 117 <axis id="nfloat" long_name="Float number" unit="-" /> 118 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 118 119 </axis_definition> 119 120 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/ORCA2_LIM_CFC_C14b/EXP00/iodef.xml
r4152 r4153 125 125 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 126 126 <axis id="nfloat" long_name="Float number" unit="-" /> 127 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 127 128 </axis_definition> 128 129 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/ORCA2_LIM_PISCES/EXP00/iodef.xml
r4152 r4153 230 230 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 231 231 <axis id="nfloat" long_name="Float number" unit="-" /> 232 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 232 233 </axis_definition> 233 234 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/ORCA2_OFF_PISCES/EXP00/iodef.xml
r4152 r4153 88 88 </file> 89 89 90 <file id="file4" name_suffix="_ ptrc_T" description="additional pisces diagnostics" >90 <file id="file4" name_suffix="_diad_T" description="additional pisces diagnostics" > 91 91 <field field_ref="PH" /> 92 92 <field field_ref="CO3" /> … … 158 158 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 159 159 <axis id="nfloat" long_name="Float number" unit="-" /> 160 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 160 161 </axis_definition> 161 162 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/ORCA2_SAS_LIM/EXP00/iodef.xml
r4147 r4153 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> … … 114 114 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 115 115 <axis id="nfloat" long_name="Float number" unit="-" /> 116 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 116 117 </axis_definition> 117 118 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/SHARED/1_namelist_ref
r4147 r4153 124 124 rn_rdtmax = 28800. ! maximum time step on tracers (used if nn_acc=1) 125 125 rn_rdth = 800. ! depth variation of tracer time step (used if nn_acc=1) 126 ln_crs = .false. ! Logical switch for coarsening module 126 127 jphgr_msh = 0 ! type of horizontal mesh 127 128 ! = 0 curvilinear coordinate on the sphere read in coordinate.nc … … 147 148 ppkth2 = 48.029893720000 ! 148 149 ppacr2 = 13.000000000000 ! 150 / 151 !----------------------------------------------------------------------- 152 &namcrs ! Grid coarsening for dynamics output and/or 153 ! passive tracer coarsened online simulations 154 !----------------------------------------------------------------------- 155 nn_factx = 3 ! Reduction factor of x-direction 156 nn_facty = 3 ! Reduction factor of y-direction 157 nn_binref = 0 ! Bin centering preference: NORTH or EQUAT 158 ! 0, coarse grid is binned with preferential treatment of the north fold 159 ! 1, coarse grid is binned with centering at the equator 160 ! Symmetry with nn_facty being odd-numbered. Asymmetry with even-numbered nn_facty. 161 nn_msh_crs = 1 ! create (=1) a mesh file or not (=0) 162 nn_crs_kz = 0 ! 0, MEAN of volume boxes 163 ! 1, MAX of boxes 164 ! 2, MIN of boxes 165 ln_crs_wn = .true. ! wn coarsened (T) or computed using horizontal divergence ( F ) 149 166 / 150 167 !----------------------------------------------------------------------- -
branches/2013/dev_LOCEAN_2013/NEMOGCM/CONFIG/SHARED/field_def.xml
r4152 r4153 227 227 </field_group> 228 228 229 <!-- variables available with iceberg trajectories --> 230 <field_group id="icbvar" domain_ref="grid_T" > 231 <field id="berg_melt" long_name="icb melt rate of icebergs" unit="kg/m2/s" /> 232 <field id="berg_buoy_melt" long_name="icb buoyancy component of iceberg melt rate" unit="kg/m2/s" /> 233 <field id="berg_eros_melt" long_name="icb erosion component of iceberg melt rate" unit="kg/m2/s" /> 234 <field id="berg_conv_melt" long_name="icb convective component of iceberg melt rate" unit="kg/m2/s" /> 235 <field id="berg_virtual_area" long_name="icb virtual coverage by icebergs" unit="m2" /> 236 <field id="bits_src" long_name="icb mass source of bergy bits" unit="kg/m2/s" /> 237 <field id="bits_melt" long_name="icb melt rate of bergy bits" unit="kg/m2/s" /> 238 <field id="bits_mass" long_name="icb bergy bit density field" unit="kg/m2" /> 239 <field id="berg_mass" long_name="icb iceberg density field" unit="kg/m2" /> 240 <field id="calving" long_name="icb calving mass input" unit="kg/s" /> 241 <field id="berg_floating_melt" long_name="icb melt rate of icebergs + bits" unit="kg/m2/s" /> 242 <field id="berg_real_calving" long_name="icb calving into iceberg class" unit="kg/s" axis_ref="icbcla" /> 243 <field id="berg_stored_ice" long_name="icb accumulated ice mass by class" unit="kg" axis_ref="icbcla" /> 244 </field_group> 245 229 246 <!-- ptrc on T grid --> 230 247 … … 255 272 <field id="NH4" long_name="Ammonium Concentration" unit="mmol/m3" /> 256 273 274 <!-- PISCES with Kriest parametisation : variables available with key_kriest --> 275 <field id="Num" long_name="Number of organic particles" unit="nbr" /> 276 277 <!-- PISCES light : variables available with key_pisces_reduced --> 257 278 <field id="DET" long_name="Detritus" unit="mmol-N/m3" /> 258 279 <field id="DOM" long_name="Dissolved Organic Matter" unit="mmol-N/m3" /> 259 280 281 <!-- CFC11 : variables available with key_cfc --> 260 282 <field id="CFC11" long_name="CFC-11 Concentration" unit="umol/L" /> 283 <!-- Bomb C14 : variables available with key_c14b --> 261 284 <field id="C14B" long_name="Bomb C14 Concentration" unit="ration" /> 262 285 </field_group> 263 286 264 <!-- diad on T grid : variables available with key_diatrc --> 265 287 <!-- PISCES additional diagnostics on T grid --> 266 288 <field_group id="diad_T" grid_ref="grid_T_2D"> 267 289 <field id="PH" long_name="PH" unit="-" grid_ref="grid_T_3D" /> … … 317 339 <field id="Heup" long_name="Euphotic layer depth" unit="m" /> 318 340 <field id="Irondep" long_name="Iron deposition from dust" unit="mol/m2/s" /> 319 <field id="Ironsed" long_name="Iron deposition from sediment" unit="mol/m2/s" grid_ref="grid_T_3D" /> 320 321 <field id="FNO3PHY" long_name="FNO3PHY" unit="-" grid_ref="grid_T_3D" /> 322 <field id="FNH4PHY" long_name="FNH4PHY" unit="-" grid_ref="grid_T_3D" /> 323 <field id="FNH4NO3" long_name="FNH4NO3" unit="-" grid_ref="grid_T_3D" /> 324 <field id="TNO3PHY" long_name="TNO3PHY" unit="-" /> 325 <field id="TNH4PHY" long_name="TNH4PHY" unit="-" /> 326 <field id="TPHYDOM" long_name="TPHYDOM" unit="-" /> 327 <field id="TPHYNH4" long_name="TPHYNH4" unit="-" /> 328 <field id="TPHYZOO" long_name="TPHYZOO" unit="-" /> 329 <field id="TPHYDET" long_name="TPHYDET" unit="-" /> 330 <field id="TDETZOO" long_name="TDETZOO" unit="-" /> 331 <field id="TZOODET" long_name="TZOODET" unit="-" /> 332 <field id="TZOOBOD" long_name="TZOOBOD" unit="-" /> 333 <field id="TZOONH4" long_name="TZOONH4" unit="-" /> 334 <field id="TZOODOM" long_name="TZOODOM" 
unit="-" /> 335 <field id="TNH4NO3" long_name="TNH4NO3" unit="-" /> 336 <field id="TDOMNH4" long_name="TDOMNH4" unit="-" /> 337 <field id="TDETNH4" long_name="TDETNH4" unit="-" /> 338 <field id="TPHYTOT" long_name="TPHYTOT" unit="-" /> 339 <field id="TZOOTOT" long_name="TZOOTOT" unit="-" /> 340 <field id="SEDPOC" long_name="SEDPOC" unit="-" /> 341 <field id="TDETSED" long_name="TDETSED" unit="-" /> 342 343 <field id="qtrCFC11" long_name="Air-sea flux of CFC-11" unit="mol/m2/s" /> 344 <field id="qintCFC11" long_name="Cumulative air-sea flux of CFC-11" unit="mol/m2" /> 345 <field id="qtrC14b" long_name="Air-sea flux of Bomb C14" unit="mol/m2/s" /> 346 <field id="qintC14b" long_name="Cumulative air-sea flux of Bomb C14" unit="mol/m2" /> 347 <field id="fdecay" long_name="Radiactive decay of Bomb C14" unit="mol/m3" grid_ref="grid_T_3D" /> 341 <field id="Ironsed" long_name="Iron deposition from sediment" unit="mol/m2/s" grid_ref="grid_T_3D "/> 342 343 <!-- PISCES with Kriest parametisation : variables available with key_kriest --> 344 <field id="POCFlx" long_name="Particulate organic C flux" unit="mol/m2/s" grid_ref="grid_T_3D" /> 345 <field id="NumFlx" long_name="Particle number flux" unit="nbr/m2/s" grid_ref="grid_T_3D" /> 346 <field id="SiFlx" long_name="Biogenic Si flux" unit="mol/m2/s" grid_ref="grid_T_3D" /> 347 <field id="CaCO3Flx" long_name="CaCO3 flux" unit="mol/m2/s" grid_ref="grid_T_3D" /> 348 <field id="xnum" long_name="Number of particles in aggregats" unit="-" grid_ref="grid_T_3D" /> 349 <field id="W1" long_name="sinking speed of mass flux" unit="m2/s" grid_ref="grid_T_3D" /> 350 <field id="W2" long_name="sinking speed of number flux" unit="m2/s" grid_ref="grid_T_3D" /> 351 352 <!-- PISCES light : variables available with key_pisces_reduced --> 353 <field id="FNO3PHY" long_name="FNO3PHY" unit="-" grid_ref="grid_T_3D" /> 354 <field id="FNH4PHY" long_name="FNH4PHY" unit="-" grid_ref="grid_T_3D" /> 355 <field id="FNH4NO3" long_name="FNH4NO3" unit="-" grid_ref="grid_T_3D" /> 356 <field id="TNO3PHY" long_name="TNO3PHY" unit="-" /> 357 <field id="TNH4PHY" long_name="TNH4PHY" unit="-" /> 358 <field id="TPHYDOM" long_name="TPHYDOM" unit="-" /> 359 <field id="TPHYNH4" long_name="TPHYNH4" unit="-" /> 360 <field id="TPHYZOO" long_name="TPHYZOO" unit="-" /> 361 <field id="TPHYDET" long_name="TPHYDET" unit="-" /> 362 <field id="TDETZOO" long_name="TDETZOO" unit="-" /> 363 <field id="TZOODET" long_name="TZOODET" unit="-" /> 364 <field id="TZOOBOD" long_name="TZOOBOD" unit="-" /> 365 <field id="TZOONH4" long_name="TZOONH4" unit="-" /> 366 <field id="TZOODOM" long_name="TZOODOM" unit="-" /> 367 <field id="TNH4NO3" long_name="TNH4NO3" unit="-" /> 368 <field id="TDOMNH4" long_name="TDOMNH4" unit="-" /> 369 <field id="TDETNH4" long_name="TDETNH4" unit="-" /> 370 <field id="TPHYTOT" long_name="TPHYTOT" unit="-" /> 371 <field id="TZOOTOT" long_name="TZOOTOT" unit="-" /> 372 <field id="SEDPOC" long_name="SEDPOC" unit="-" /> 373 <field id="TDETSED" long_name="TDETSED" unit="-" /> 374 375 <!-- CFC11 : variables available with key_cfc --> 376 <field id="qtrCFC11" long_name="Air-sea flux of CFC-11" unit="mol/m2/s" /> 377 <field id="qintCFC11" long_name="Cumulative air-sea flux of CFC-11" unit="mol/m2" /> 378 <!-- Bomb C14 : variables available with key_c14b --> 379 <field id="qtrC14b" long_name="Air-sea flux of Bomb C14" unit="mol/m2/s" /> 380 <field id="qintC14b" long_name="Cumulative air-sea flux of Bomb C14" unit="mol/m2" /> 381 <field id="fdecay" long_name="Radiactive decay of Bomb C14" unit="mol/m3" 
grid_ref="grid_T_3D" /> 348 382 </field_group> 349 383 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/NST_SRC/agrif_opa_sponge.F90
r3698 r4153 185 185 INTEGER :: ji,jj,jk 186 186 INTEGER :: ispongearea, ilci, ilcj 187 REAL(wp) :: z1spongearea 188 REAL(wp), POINTER, DIMENSION(:,:) :: zlocalviscsponge 187 LOGICAL :: ll_spdone 188 REAL(wp) :: z1spongearea, zramp 189 REAL(wp), POINTER, DIMENSION(:,:) :: ztabramp 189 190 190 191 #if defined SPONGE || defined SPONGE_TOP 191 192 CALL wrk_alloc( jpi, jpj, zlocalviscsponge ) 193 194 ispongearea = 2 + 2 * Agrif_irhox() 195 ilci = nlci - ispongearea 196 ilcj = nlcj - ispongearea 197 z1spongearea = 1._wp / REAL( ispongearea - 2 ) 198 spbtr2(:,:) = 1. / ( e1t(:,:) * e2t(:,:) ) 192 ll_spdone=.TRUE. 193 IF (( .NOT. spongedoneT ).OR.( .NOT. spongedoneU )) THEN 194 ! Define ramp from boundaries towards domain interior 195 ! at T-points 196 ! Store it in ztabramp 197 ll_spdone=.FALSE. 198 199 CALL wrk_alloc( jpi, jpj, ztabramp ) 200 201 ispongearea = 2 + 2 * Agrif_irhox() 202 ilci = nlci - ispongearea 203 ilcj = nlcj - ispongearea 204 z1spongearea = 1._wp / REAL( ispongearea - 2 ) 205 spbtr2(:,:) = 1. / ( e1t(:,:) * e2t(:,:) ) 206 207 ztabramp(:,:) = 0. 208 209 IF( (nbondi == -1) .OR. (nbondi == 2) ) THEN 210 DO jj = 1, jpj 211 IF ( umask(2,jj,1) == 1._wp ) THEN 212 DO ji = 2, ispongearea 213 ztabramp(ji,jj) = ( ispongearea-ji ) * z1spongearea 214 END DO 215 ENDIF 216 ENDDO 217 ENDIF 218 219 IF( (nbondi == 1) .OR. (nbondi == 2) ) THEN 220 DO jj = 1, jpj 221 IF ( umask(nlci-2,jj,1) == 1._wp ) THEN 222 DO ji = ilci+1,nlci-1 223 zramp = (ji - (ilci+1) ) * z1spongearea 224 ztabramp(ji,jj) = MAX( ztabramp(ji,jj), zramp ) 225 ENDDO 226 ENDIF 227 ENDDO 228 ENDIF 229 230 IF( (nbondj == -1) .OR. (nbondj == 2) ) THEN 231 DO ji = 1, jpi 232 IF ( vmask(ji,2,1) == 1._wp ) THEN 233 DO jj = 2, ispongearea 234 zramp = ( ispongearea-jj ) * z1spongearea 235 ztabramp(ji,jj) = MAX( ztabramp(ji,jj), zramp ) 236 END DO 237 ENDIF 238 ENDDO 239 ENDIF 240 241 IF( (nbondj == 1) .OR. (nbondj == 2) ) THEN 242 DO ji = 1, jpi 243 IF ( vmask(ji,nlcj-2,1) == 1._wp ) THEN 244 DO jj = ilcj+1,nlcj-1 245 zramp = (jj - (ilcj+1) ) * z1spongearea 246 ztabramp(ji,jj) = MAX( ztabramp(ji,jj), zramp ) 247 END DO 248 ENDIF 249 ENDDO 250 ENDIF 251 252 ENDIF 199 253 200 254 ! Tracers 201 255 IF( .NOT. spongedoneT ) THEN 202 zlocalviscsponge(:,:) = 0.203 256 spe1ur(:,:) = 0. 204 257 spe2vr(:,:) = 0. 205 258 206 259 IF( (nbondi == -1) .OR. (nbondi == 2) ) THEN 207 DO ji = 2, ispongearea208 zlocalviscsponge(ji,:) = visc_tra * ( ispongearea-ji ) * z1spongearea209 ENDDO210 spe1ur(2:ispongearea-1,: ) = 0.5 * ( zlocalviscsponge(2:ispongearea-1,: ) &211 & + zlocalviscsponge(3:ispongearea ,: ) ) & 212 & * e2u(2:ispongearea-1,: ) / e1u(2:ispongearea-1,: )213 spe2vr(2:ispongearea ,1:jpjm1) = 0.5 * ( zlocalviscsponge(2:ispongearea ,1:jpjm1) &214 & + zlocalviscsponge(2:ispongearea,2 :jpj ) ) &215 & * e1v(2:ispongearea ,1:jpjm1) / e2v(2:ispongearea,1:jpjm1)260 spe1ur(2:ispongearea-1,: ) = visc_tra & 261 & * 0.5 * ( ztabramp(2:ispongearea-1,: ) & 262 & + ztabramp(3:ispongearea ,: ) ) & 263 & * e2u(2:ispongearea-1,:) / e1u(2:ispongearea-1,:) 264 265 spe2vr(2:ispongearea ,1:jpjm1 ) = visc_tra & 266 & * 0.5 * ( ztabramp(2:ispongearea ,1:jpjm1) & 267 & + ztabramp(2:ispongearea,2 :jpj ) ) & 268 & * e1v(2:ispongearea,1:jpjm1) / e2v(2:ispongearea,1:jpjm1) 216 269 ENDIF 217 270 218 271 IF( (nbondi == 1) .OR. 
(nbondi == 2) ) THEN 219 DO ji = ilci+1,nlci-1 220 zlocalviscsponge(ji,:) = visc_tra * (ji - (ilci+1) ) * z1spongearea 221 ENDDO 222 223 spe1ur(ilci+1:nlci-2,: ) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-2,:) & 224 & + zlocalviscsponge(ilci+2:nlci-1,:) ) & 225 & * e2u(ilci+1:nlci-2,:) / e1u(ilci+1:nlci-2,:) 226 227 spe2vr(ilci+1:nlci-1,1:jpjm1) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-1,1:jpjm1) & 228 & + zlocalviscsponge(ilci+1:nlci-1,2:jpj ) ) & 229 & * e1v(ilci+1:nlci-1,1:jpjm1) / e2v(ilci+1:nlci-1,1:jpjm1) 272 spe1ur(ilci+1:nlci-2,: ) = visc_tra & 273 & * 0.5 * ( ztabramp(ilci+1:nlci-2,: ) & 274 & + ztabramp(ilci+2:nlci-1,: ) ) & 275 & * e2u(ilci+1:nlci-2,:) / e1u(ilci+1:nlci-2,:) 276 277 spe2vr(ilci+1:nlci-1,1:jpjm1 ) = visc_tra & 278 & * 0.5 * ( ztabramp(ilci+1:nlci-1,1:jpjm1) & 279 & + ztabramp(ilci+1:nlci-1,2:jpj ) ) & 280 & * e1v(ilci+1:nlci-1,1:jpjm1) / e2v(ilci+1:nlci-1,1:jpjm1) 230 281 ENDIF 231 282 232 283 IF( (nbondj == -1) .OR. (nbondj == 2) ) THEN 233 DO jj = 2, ispongearea 234 zlocalviscsponge(:,jj) = visc_tra * ( ispongearea-jj ) * z1spongearea 235 ENDDO 236 spe1ur(1:jpim1,2:ispongearea ) = 0.5 * ( zlocalviscsponge(1:jpim1,2:ispongearea ) & 237 & + zlocalviscsponge(2:jpi ,2:ispongearea) ) & 284 spe1ur(1:jpim1,2:ispongearea ) = visc_tra & 285 & * 0.5 * ( ztabramp(1:jpim1,2:ispongearea ) & 286 & + ztabramp(2:jpi ,2:ispongearea ) ) & 238 287 & * e2u(1:jpim1,2:ispongearea) / e1u(1:jpim1,2:ispongearea) 239 288 240 spe2vr(: ,2:ispongearea-1) = 0.5 * ( zlocalviscsponge(:,2:ispongearea-1) & 241 & + zlocalviscsponge(:,3:ispongearea ) ) & 289 spe2vr(: ,2:ispongearea-1) = visc_tra & 290 & * 0.5 * ( ztabramp(: ,2:ispongearea-1) & 291 & + ztabramp(: ,3:ispongearea ) ) & 242 292 & * e1v(:,2:ispongearea-1) / e2v(:,2:ispongearea-1) 243 293 ENDIF 244 294 245 295 IF( (nbondj == 1) .OR. (nbondj == 2) ) THEN 246 DO jj = ilcj+1,nlcj-1 247 zlocalviscsponge(:,jj) = visc_tra * (jj - (ilcj+1) ) * z1spongearea 248 ENDDO 249 spe1ur(1:jpim1,ilcj+1:nlcj-1) = 0.5 * ( zlocalviscsponge(1:jpim1,ilcj+1:nlcj-1) & 250 & + zlocalviscsponge(2:jpi ,ilcj+1:nlcj-1) ) & 296 spe1ur(1:jpim1,ilcj+1:nlcj-1) = visc_tra & 297 & * 0.5 * ( ztabramp(1:jpim1,ilcj+1:nlcj-1) & 298 & + ztabramp(2:jpi ,ilcj+1:nlcj-1) ) & 251 299 & * e2u(1:jpim1,ilcj+1:nlcj-1) / e1u(1:jpim1,ilcj+1:nlcj-1) 252 spe2vr(: ,ilcj+1:nlcj-2) = 0.5 * ( zlocalviscsponge(:,ilcj+1:nlcj-2 ) & 253 & + zlocalviscsponge(:,ilcj+2:nlcj-1) ) & 300 301 spe2vr(: ,ilcj+1:nlcj-2) = visc_tra & 302 & * 0.5 * ( ztabramp(: ,ilcj+1:nlcj-2) & 303 & + ztabramp(: ,ilcj+2:nlcj-1) ) & 254 304 & * e1v(:,ilcj+1:nlcj-2) / e2v(:,ilcj+1:nlcj-2) 255 305 ENDIF … … 259 309 ! Dynamics 260 310 IF( .NOT. spongedoneU ) THEN 261 zlocalviscsponge(:,:) = 0.262 311 spe1ur2(:,:) = 0. 263 312 spe2vr2(:,:) = 0. 264 313 265 314 IF( (nbondi == -1) .OR. (nbondi == 2) ) THEN 266 DO ji = 2, ispongearea 267 zlocalviscsponge(ji,:) = visc_dyn * ( ispongearea-ji ) * z1spongearea 268 ENDDO 269 spe1ur2(2:ispongearea-1,: ) = 0.5 * ( zlocalviscsponge(2:ispongearea-1,: ) & 270 & + zlocalviscsponge(3:ispongearea,: ) ) 271 spe2vr2(2:ispongearea ,1:jpjm1) = 0.5 * ( zlocalviscsponge(2:ispongearea ,1:jpjm1) & 272 & + zlocalviscsponge(2:ispongearea,2:jpj) ) 315 spe1ur2(2:ispongearea-1,: ) = visc_dyn & 316 & * 0.5 * ( ztabramp(2:ispongearea-1,: ) & 317 & + ztabramp(3:ispongearea ,: ) ) 318 spe2vr2(2:ispongearea ,1:jpjm1) = visc_dyn & 319 & * 0.5 * ( ztabramp(2:ispongearea ,1:jpjm1) & 320 & + ztabramp(2:ispongearea ,2:jpj ) ) 273 321 ENDIF 274 322 275 323 IF( (nbondi == 1) .OR. 
(nbondi == 2) ) THEN 276 DO ji = ilci+1,nlci-1 277 zlocalviscsponge(ji,:) = visc_dyn * (ji - (ilci+1) ) * z1spongearea 278 ENDDO 279 spe1ur2(ilci+1:nlci-2,: ) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-2,:) & 280 & + zlocalviscsponge(ilci+2:nlci-1,:) ) 281 spe2vr2(ilci+1:nlci-1,1:jpjm1) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-1,1:jpjm1) & 282 & + zlocalviscsponge(ilci+1:nlci-1,2:jpj ) ) 324 spe1ur2(ilci+1:nlci-2 ,: ) = visc_dyn & 325 & * 0.5 * ( ztabramp(ilci+1:nlci-2, : ) & 326 & + ztabramp(ilci+2:nlci-1, : ) ) 327 spe2vr2(ilci+1:nlci-1 ,1:jpjm1) = visc_dyn & 328 & * 0.5 * ( ztabramp(ilci+1:nlci-1,1:jpjm1 ) & 329 & + ztabramp(ilci+1:nlci-1,2:jpj ) ) 283 330 ENDIF 284 331 285 332 IF( (nbondj == -1) .OR. (nbondj == 2) ) THEN 286 DO jj = 2, ispongearea 287 zlocalviscsponge(:,jj) = visc_dyn * ( ispongearea-jj ) * z1spongearea 288 ENDDO 289 spe1ur2(1:jpim1,2:ispongearea ) = 0.5 * ( zlocalviscsponge(1:jpim1,2:ispongearea) & 290 & + zlocalviscsponge(2:jpi,2:ispongearea) ) 291 spe2vr2(: ,2:ispongearea-1) = 0.5 * ( zlocalviscsponge(:,2:ispongearea-1) & 292 & + zlocalviscsponge(:,3:ispongearea) ) 333 spe1ur2(1:jpim1,2:ispongearea ) = visc_dyn & 334 & * 0.5 * ( ztabramp(1:jpim1,2:ispongearea ) & 335 & + ztabramp(2:jpi ,2:ispongearea ) ) 336 spe2vr2(: ,2:ispongearea-1) = visc_dyn & 337 & * 0.5 * ( ztabramp(: ,2:ispongearea-1) & 338 & + ztabramp(: ,3:ispongearea ) ) 293 339 ENDIF 294 340 295 341 IF( (nbondj == 1) .OR. (nbondj == 2) ) THEN 296 DO jj = ilcj+1,nlcj-1 297 zlocalviscsponge(:,jj) = visc_dyn * (jj - (ilcj+1) ) * z1spongearea 298 ENDDO 299 spe1ur2(1:jpim1,ilcj+1:nlcj-1) = 0.5 * ( zlocalviscsponge(1:jpim1,ilcj+1:nlcj-1) & 300 & + zlocalviscsponge(2:jpi,ilcj+1:nlcj-1) ) 301 spe2vr2(: ,ilcj+1:nlcj-2) = 0.5 * ( zlocalviscsponge(:,ilcj+1:nlcj-2 ) & 302 & + zlocalviscsponge(:,ilcj+2:nlcj-1) ) 342 spe1ur2(1:jpim1,ilcj+1:nlcj-1 ) = visc_dyn & 343 & * 0.5 * ( ztabramp(1:jpim1,ilcj+1:nlcj-1 ) & 344 & + ztabramp(2:jpi ,ilcj+1:nlcj-1 ) ) 345 spe2vr2(: ,ilcj+1:nlcj-2 ) = visc_dyn & 346 & * 0.5 * ( ztabramp(: ,ilcj+1:nlcj-2 ) & 347 & + ztabramp(: ,ilcj+2:nlcj-1 ) ) 303 348 ENDIF 304 349 spongedoneU = .TRUE. … … 306 351 ENDIF 307 352 ! 308 CALL wrk_dealloc( jpi, jpj, zlocalviscsponge)353 IF (.NOT.ll_spdone) CALL wrk_dealloc( jpi, jpj, ztabramp ) 309 354 ! 310 355 #endif -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/BDY/bdydyn.F90
r3294 r4153 30 30 USE lbclnk ! ocean lateral boundary conditions (or mpp link) 31 31 USE in_out_manager ! 32 USE domvvl ! variable volume 32 33 33 34 IMPLICIT NONE … … 84 85 pu2d(:,:) = 0.e0 85 86 pv2d(:,:) = 0.e0 86 DO jk = 1, jpkm1 !! Vertically integrated momentum trends 87 pu2d(:,:) = pu2d(:,:) + fse3u(:,:,jk) * umask(:,:,jk) * ua(:,:,jk) 88 pv2d(:,:) = pv2d(:,:) + fse3v(:,:,jk) * vmask(:,:,jk) * va(:,:,jk) 89 END DO 90 pu2d(:,:) = pu2d(:,:) * phur(:,:) 91 pv2d(:,:) = pv2d(:,:) * phvr(:,:) 87 IF (lk_vvl) THEN 88 DO jk = 1, jpkm1 !! Vertically integrated momentum trends 89 pu2d(:,:) = pu2d(:,:) + fse3u_a(:,:,jk) * umask(:,:,jk) * ua(:,:,jk) 90 pv2d(:,:) = pv2d(:,:) + fse3v_a(:,:,jk) * vmask(:,:,jk) * va(:,:,jk) 91 END DO 92 pu2d(:,:) = pu2d(:,:) / ( hu_0(:,:) + sshu_a(:,:) + 1._wp - umask(:,:,1) ) 93 pv2d(:,:) = pv2d(:,:) / ( hv_0(:,:) + sshv_a(:,:) + 1._wp - vmask(:,:,1) ) 94 ELSE 95 DO jk = 1, jpkm1 !! Vertically integrated momentum trends 96 pu2d(:,:) = pu2d(:,:) + fse3u(:,:,jk) * umask(:,:,jk) * ua(:,:,jk) 97 pv2d(:,:) = pv2d(:,:) + fse3v(:,:,jk) * vmask(:,:,jk) * va(:,:,jk) 98 END DO 99 pu2d(:,:) = pu2d(:,:) * phur(:,:) 100 pv2d(:,:) = pv2d(:,:) * phvr(:,:) 101 ENDIF 92 102 DO jk = 1 , jpkm1 93 ua(:,:,jk) = ua(:,:,jk) - pu2d(:,:) 94 va(:,:,jk) = va(:,:,jk) - pv2d(:,:) 103 ua(:,:,jk) = ua(:,:,jk) - pu2d(:,:) * umask(:,:,jk) 104 va(:,:,jk) = va(:,:,jk) - pv2d(:,:) * vmask(:,:,jk) 95 105 END DO 96 106 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/C1D/step_c1d.F90
r3680 r4153 59 59 60 60 indic = 0 ! reset to no error condition 61 IF( kstp == nit000 ) CALL iom_init ! iom_put initialization (must be done after nemo_init for AGRIF+XIOS+OASIS) 61 62 IF( kstp /= nit000 ) CALL day( kstp ) ! Calendar (day was already called at nit000 in day_init) 62 CALL iom_setkt( kstp ) ! say to iom that we are at time step kstp63 CALL iom_setkt( kstp - nit000 + 1 ) ! say to iom that we are at time step kstp 63 64 64 65 !>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> … … 106 107 !<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 107 108 CALL dia_wri( kstp ) ! ocean model: outputs 109 IF( lk_diahth ) CALL dia_hth( kstp ) ! Thermocline depth (20°C) 110 108 111 109 112 #if defined key_top -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/DIA/diadct.F90
r4147 r4153 42 42 #endif 43 43 #if defined key_lim3 44 USE ice_3 44 USE par_ice 45 USE ice 45 46 #endif 46 47 USE domvvl … … 490 491 ijglo = secs(jsec)%listPoint(jpt)%J + jpjzoom - 1 + njmpp - 1 491 492 WRITE(numout,*)' # I J : ',iiglo,ijglo 493 CALL FLUSH(numout) 492 494 ENDDO 493 495 ENDIF … … 612 614 613 615 !! * Local variables 614 INTEGER :: jk, jseg, jclass, &!loop on level/segment/classes616 INTEGER :: jk, jseg, jclass,jl, &!loop on level/segment/classes/ice categories 615 617 isgnu, isgnv ! 616 618 REAL(wp) :: zumid, zvmid, &!U/V velocity on a cell segment … … 777 779 778 780 zTnorm=zumid_ice*e2u(k%I,k%J)+zvmid_ice*e1v(k%I,k%J) 779 781 782 #if defined key_lim2 780 783 transports_2d(1,jsec,jseg) = transports_2d(1,jsec,jseg) + (zTnorm)* & 781 784 (1.0 - frld(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J)) & … … 784 787 transports_2d(2,jsec,jseg) = transports_2d(2,jsec,jseg) + (zTnorm)* & 785 788 (1.0 - frld(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J)) 789 #endif 790 #if defined key_lim3 791 DO jl=1,jpl 792 transports_2d(1,jsec,jseg) = transports_2d(1,jsec,jseg) + (zTnorm)* & 793 a_i(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) * & 794 ( ht_i(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) + & 795 ht_s(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) ) 796 797 transports_2d(2,jsec,jseg) = transports_2d(2,jsec,jseg) + (zTnorm)* & 798 a_i(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) 799 ENDDO 800 #endif 786 801 787 802 ENDIF !end of ice case -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/DOM/domvvl.F90
r3294 r4153 192 192 INTEGER :: iku, ikv ! local integers 193 193 INTEGER :: ii0, ii1, ij0, ij1 ! temporary integers 194 REAL(wp) :: zvt 194 REAL(wp) :: zvt, zvtip1, zvtjp1 ! local scalars 195 195 !!---------------------------------------------------------------------- 196 196 ! … … 202 202 WRITE(numout,*) '~~~~~~~~~ ' 203 203 pe3u_b(:,:,jpk) = fse3u_0(:,:,jpk) 204 pe3v_b(:,:,jpk) = fse3 u_0(:,:,jpk)204 pe3v_b(:,:,jpk) = fse3v_0(:,:,jpk) 205 205 ENDIF 206 206 … … 208 208 DO jj = 2, jpjm1 209 209 DO ji = fs_2, fs_jpim1 210 zvt = fse3t_b(ji,jj,jk) * e1e2t(ji,jj) 211 pe3u_b(ji,jj,jk) = 0.5_wp * ( zvt + fse3t_b(ji+1,jj,jk) * e1e2t(ji+1,jj) ) / ( e1u(ji,jj) * e2u(ji,jj) ) 212 pe3v_b(ji,jj,jk) = 0.5_wp * ( zvt + fse3t_b(ji,jj+1,jk) * e1e2t(ji,jj+1) ) / ( e1v(ji,jj) * e2v(ji,jj) ) 210 zvt = ( fse3t_b(ji ,jj ,jk) - fse3t_0(ji ,jj ,jk) ) * e1e2t(ji ,jj ) 211 zvtip1 = ( fse3t_b(ji+1,jj ,jk) - fse3t_0(ji+1,jj ,jk) ) * e1e2t(ji+1,jj ) 212 zvtjp1 = ( fse3t_b(ji ,jj+1,jk) - fse3t_0(ji ,jj+1,jk) ) * e1e2t(ji ,jj+1) 213 pe3u_b(ji,jj,jk) = fse3u_0(ji,jj,jk) + 0.5_wp * ( zvt + zvtip1 ) / ( e1u(ji,jj) * e2u(ji,jj) ) 214 pe3v_b(ji,jj,jk) = fse3v_0(ji,jj,jk) + 0.5_wp * ( zvt + zvtjp1 ) / ( e1v(ji,jj) * e2v(ji,jj) ) 213 215 END DO 214 216 END DO -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/DOM/domzgr.F90
r4148 r4153 1106 1106 INTEGER :: ios ! Local integer output status for namelist read 1107 1107 REAL(wp) :: zrmax, ztaper ! temporary scalars 1108 REAL(wp) :: zrfact ! temporary scalars 1109 REAL(wp), POINTER, DIMENSION(:,: ) :: ztmpi1, ztmpi2, ztmpj1, ztmpj2 1110 1111 ! 1112 REAL(wp), POINTER, DIMENSION(:,: ) :: zenv, zri, zrj, zhbat 1108 ! 1109 REAL(wp), POINTER, DIMENSION(:,: ) :: zenv, ztmp, zmsk, zri, zrj, zhbat 1113 1110 1114 1111 NAMELIST/namzgr_sco/ln_s_sh94, ln_s_sf12, ln_sigcrit, rn_sbot_min, rn_sbot_max, rn_hc, rn_rmax,rn_theta, & … … 1118 1115 IF( nn_timing == 1 ) CALL timing_start('zgr_sco') 1119 1116 ! 1120 CALL wrk_alloc( jpi, jpj, ztmpi1, ztmpi2, ztmpj1, ztmpj2 ) 1121 CALL wrk_alloc( jpi, jpj, zenv, zri, zrj, zhbat ) 1122 ! 1117 CALL wrk_alloc( jpi, jpj, zenv, ztmp, zmsk, zri, zrj, zhbat ) 1118 ! 1123 1119 REWIND( numnam_ref ) ! Namelist namzgr_sco in reference namelist : Sigma-stretching parameters 1124 1120 READ ( numnam_ref, namzgr_sco, IOSTAT = ios, ERR = 901) … … 1173 1169 ! ! ============================= 1174 1170 ! use r-value to create hybrid coordinates 1175 ! DO jj = 1, jpj 1176 ! DO ji = 1, jpi 1177 ! zenv(ji,jj) = MAX( bathy(ji,jj), 0._wp ) 1178 ! END DO 1179 ! END DO 1180 ! CALL lbc_lnk( zenv, 'T', 1._wp ) 1181 zenv(:,:) = bathy(:,:) 1171 DO jj = 1, jpj 1172 DO ji = 1, jpi 1173 zenv(ji,jj) = MAX( bathy(ji,jj), rn_sbot_min ) 1174 END DO 1175 END DO 1182 1176 ! 1183 1177 ! Smooth the bathymetry (if required) … … 1187 1181 jl = 0 1188 1182 zrmax = 1._wp 1189 ! 1190 ! set scaling factor used in reducing vertical gradients 1191 zrfact = ( 1._wp - rn_rmax ) / ( 1._wp + rn_rmax ) 1192 ! 1193 ! initialise temporary evelope depth arrays 1194 ztmpi1(:,:) = zenv(:,:) 1195 ztmpi2(:,:) = zenv(:,:) 1196 ztmpj1(:,:) = zenv(:,:) 1197 ztmpj2(:,:) = zenv(:,:) 1198 ! 1199 ! initialise temporary r-value arrays 1200 zri(:,:) = 1._wp 1201 zrj(:,:) = 1._wp 1202 ! ! ================ ! 1203 DO WHILE( jl <= 10000 .AND. ( zrmax - rn_rmax ) > 1.e-8_wp ) ! Iterative loop ! 1204 ! ! ================ ! 1183 ! ! ================ ! 1184 DO WHILE( jl <= 10000 .AND. zrmax > rn_rmax ) ! Iterative loop ! 1185 ! ! ================ ! 1205 1186 jl = jl + 1 1206 1187 zrmax = 0._wp 1207 ! we set zrmax from previous r-values (zri abd zrj) first 1208 ! if set after current r-value calculation (as previously) 1209 ! we could exit DO WHILE prematurely before checking r-value 1210 ! of current zenv 1211 DO jj = 1, nlcj 1212 DO ji = 1, nlci 1213 zrmax = MAX( zrmax, ABS(zri(ji,jj)), ABS(zrj(ji,jj)) ) 1214 END DO 1215 END DO 1216 zri(:,:) = 0._wp 1217 zrj(:,:) = 0._wp 1188 zmsk(:,:) = 0._wp 1218 1189 DO jj = 1, nlcj 1219 1190 DO ji = 1, nlci 1220 1191 iip1 = MIN( ji+1, nlci ) ! force zri = 0 on last line (ji=ncli+1 to jpi) 1221 1192 ijp1 = MIN( jj+1, nlcj ) ! force zrj = 0 on last raw (jj=nclj+1 to jpj) 1222 IF( (zenv(ji,jj) > 0._wp) .AND. (zenv(iip1,jj) > 0._wp)) THEN 1223 zri(ji,jj) = ( zenv(iip1,jj ) - zenv(ji,jj) ) / ( zenv(iip1,jj ) + zenv(ji,jj) ) 1224 END IF 1225 IF( (zenv(ji,jj) > 0._wp) .AND. 
(zenv(ji,ijp1) > 0._wp)) THEN 1226 zrj(ji,jj) = ( zenv(ji ,ijp1) - zenv(ji,jj) ) / ( zenv(ji ,ijp1) + zenv(ji,jj) ) 1227 END IF 1228 IF( zri(ji,jj) > rn_rmax ) ztmpi1(ji ,jj ) = zenv(iip1,jj ) * zrfact 1229 IF( zri(ji,jj) < -rn_rmax ) ztmpi2(iip1,jj ) = zenv(ji ,jj ) * zrfact 1230 IF( zrj(ji,jj) > rn_rmax ) ztmpj1(ji ,jj ) = zenv(ji ,ijp1) * zrfact 1231 IF( zrj(ji,jj) < -rn_rmax ) ztmpj2(ji ,ijp1) = zenv(ji ,jj ) * zrfact 1193 zri(ji,jj) = ABS( zenv(iip1,jj ) - zenv(ji,jj) ) / ( zenv(iip1,jj ) + zenv(ji,jj) ) 1194 zrj(ji,jj) = ABS( zenv(ji ,ijp1) - zenv(ji,jj) ) / ( zenv(ji ,ijp1) + zenv(ji,jj) ) 1195 zrmax = MAX( zrmax, zri(ji,jj), zrj(ji,jj) ) 1196 IF( zri(ji,jj) > rn_rmax ) zmsk(ji ,jj ) = 1._wp 1197 IF( zri(ji,jj) > rn_rmax ) zmsk(iip1,jj ) = 1._wp 1198 IF( zrj(ji,jj) > rn_rmax ) zmsk(ji ,jj ) = 1._wp 1199 IF( zrj(ji,jj) > rn_rmax ) zmsk(ji ,ijp1) = 1._wp 1232 1200 END DO 1233 1201 END DO 1234 1202 IF( lk_mpp ) CALL mpp_max( zrmax ) ! max over the global domain 1203 ! lateral boundary condition on zmsk: keep 1 along closed boundary (use of MAX) 1204 ztmp(:,:) = zmsk(:,:) ; CALL lbc_lnk( zmsk, 'T', 1._wp ) 1205 DO jj = 1, nlcj 1206 DO ji = 1, nlci 1207 zmsk(ji,jj) = MAX( zmsk(ji,jj), ztmp(ji,jj) ) 1208 END DO 1209 END DO 1235 1210 ! 1236 IF(lwp)WRITE(numout,*) 'zgr_sco : iter= ',jl, ' rmax= ', zrmax 1211 IF(lwp)WRITE(numout,*) 'zgr_sco : iter= ',jl, ' rmax= ', zrmax, ' nb of pt= ', INT( SUM(zmsk(:,:) ) ) 1237 1212 ! 1238 1213 DO jj = 1, nlcj 1239 1214 DO ji = 1, nlci 1240 zenv(ji,jj) = MAX(zenv(ji,jj), ztmpi1(ji,jj), ztmpi2(ji,jj), ztmpj1(ji,jj), ztmpj2(ji,jj) ) 1215 iip1 = MIN( ji+1, nlci ) ! last line (ji=nlci) 1216 ijp1 = MIN( jj+1, nlcj ) ! last raw (jj=nlcj) 1217 iim1 = MAX( ji-1, 1 ) ! first line (ji=nlci) 1218 ijm1 = MAX( jj-1, 1 ) ! first raw (jj=nlcj) 1219 IF( zmsk(ji,jj) == 1._wp ) THEN 1220 ztmp(ji,jj) = ( & 1221 & zenv(iim1,ijp1)*zmsk(iim1,ijp1) + zenv(ji,ijp1)*zmsk(ji,ijp1) + zenv(iip1,ijp1)*zmsk(iip1,ijp1) & 1222 & + zenv(iim1,jj )*zmsk(iim1,jj ) + zenv(ji,jj )* 2._wp + zenv(iip1,jj )*zmsk(iip1,jj ) & 1223 & + zenv(iim1,ijm1)*zmsk(iim1,ijm1) + zenv(ji,ijm1)*zmsk(ji,ijm1) + zenv(iip1,ijm1)*zmsk(iip1,ijm1) & 1224 & ) / ( & 1225 & zmsk(iim1,ijp1) + zmsk(ji,ijp1) + zmsk(iip1,ijp1) & 1226 & + zmsk(iim1,jj ) + 2._wp + zmsk(iip1,jj ) & 1227 & + zmsk(iim1,ijm1) + zmsk(ji,ijm1) + zmsk(iip1,ijm1) & 1228 & ) 1229 ENDIF 1241 1230 END DO 1242 1231 END DO 1243 1232 ! 1244 CALL lbc_lnk( zenv, 'T', 1._wp ) 1233 DO jj = 1, nlcj 1234 DO ji = 1, nlci 1235 IF( zmsk(ji,jj) == 1._wp ) zenv(ji,jj) = MAX( ztmp(ji,jj), bathy(ji,jj) ) 1236 END DO 1237 END DO 1238 ! 1239 ! Apply lateral boundary condition CAUTION: keep the value when the lbc field is zero 1240 ztmp(:,:) = zenv(:,:) ; CALL lbc_lnk( zenv, 'T', 1._wp ) 1241 DO jj = 1, nlcj 1242 DO ji = 1, nlci 1243 IF( zenv(ji,jj) == 0._wp ) zenv(ji,jj) = ztmp(ji,jj) 1244 END DO 1245 END DO 1245 1246 ! ! ================ ! 1246 1247 END DO ! End loop ! 1247 1248 ! ! ================ ! 1248 1249 ! 1249 ! DO jj = 1, jpj 1250 ! DO ji = 1, jpi 1251 ! zenv(ji,jj) = MAX( zenv(ji,jj), rn_sbot_min ) ! set all points to avoid undefined scale values 1252 ! END DO 1253 ! END DO 1250 ! Fill ghost rows with appropriate values to avoid undefined e3 values with some mpp decompositions 1251 DO ji = nlci+1, jpi 1252 zenv(ji,1:nlcj) = zenv(nlci,1:nlcj) 1253 END DO 1254 ! 1255 DO jj = nlcj+1, jpj 1256 zenv(:,jj) = zenv(:,nlcj) 1257 END DO 1254 1258 ! 1255 1259 ! 
Envelope bathymetry saved in hbatt 1256 1260 hbatt(:,:) = zenv(:,:) 1257 1258 1261 IF( MINVAL( gphit(:,:) ) * MAXVAL( gphit(:,:) ) <= 0._wp ) THEN 1259 1262 CALL ctl_warn( ' s-coordinates are tapered in vicinity of the Equator' ) 1260 1263 DO jj = 1, jpj 1261 1264 DO ji = 1, jpi 1262 ztaper = EXP( -(gphit(ji,jj)/8._wp)**2 )1265 ztaper = EXP( -(gphit(ji,jj)/8._wp)**2._wp ) 1263 1266 hbatt(ji,jj) = rn_sbot_max * ztaper + hbatt(ji,jj) * ( 1._wp - ztaper ) 1264 1267 END DO … … 1375 1378 fsde3w(:,:,:) = gdep3w(:,:,:) 1376 1379 ! 1377 where (e3t (:,:,:).eq.0.0) e3t(:,:,:) = 1.0 1378 where (e3u (:,:,:).eq.0.0) e3u(:,:,:) = 1.0 1379 where (e3v (:,:,:).eq.0.0) e3v(:,:,:) = 1.0 1380 where (e3f (:,:,:).eq.0.0) e3f(:,:,:) = 1.0 1381 where (e3w (:,:,:).eq.0.0) e3w(:,:,:) = 1.0 1382 where (e3uw (:,:,:).eq.0.0) e3uw(:,:,:) = 1.0 1383 where (e3vw (:,:,:).eq.0.0) e3vw(:,:,:) = 1.0 1384 1380 where (e3t (:,:,:).eq.0.0) e3t(:,:,:) = 1._wp 1381 where (e3u (:,:,:).eq.0.0) e3u(:,:,:) = 1._wp 1382 where (e3v (:,:,:).eq.0.0) e3v(:,:,:) = 1._wp 1383 where (e3f (:,:,:).eq.0.0) e3f(:,:,:) = 1._wp 1384 where (e3w (:,:,:).eq.0.0) e3w(:,:,:) = 1._wp 1385 where (e3uw (:,:,:).eq.0.0) e3uw(:,:,:) = 1._wp 1386 where (e3vw (:,:,:).eq.0.0) e3vw(:,:,:) = 1._wp 1387 1388 #if defined key_agrif 1389 ! Ensure meaningful vertical scale factors in ghost lines/columns 1390 IF( .NOT. Agrif_Root() ) THEN 1391 ! 1392 IF((nbondi == -1).OR.(nbondi == 2)) THEN 1393 e3u(1,:,:) = e3u(2,:,:) 1394 ENDIF 1395 ! 1396 IF((nbondi == 1).OR.(nbondi == 2)) THEN 1397 e3u(nlci-1,:,:) = e3u(nlci-2,:,:) 1398 ENDIF 1399 ! 1400 IF((nbondj == -1).OR.(nbondj == 2)) THEN 1401 e3v(:,1,:) = e3v(:,2,:) 1402 ENDIF 1403 ! 1404 IF((nbondj == 1).OR.(nbondj == 2)) THEN 1405 e3v(:,nlcj-1,:) = e3v(:,nlcj-2,:) 1406 ENDIF 1407 ! 1408 ENDIF 1409 #endif 1385 1410 1386 1411 fsdept(:,:,:) = gdept (:,:,:) … … 1431 1456 WRITE(numout,"(10x,i4,4f9.2)") ( jk, fsdept(1,1,jk), fsdepw(1,1,jk), & 1432 1457 & fse3t (1,1,jk), fse3w (1,1,jk), jk=1,jpk ) 1433 DO jj = mj0(20), mj1(20) 1434 DO ji = mi0(20), mi1(20) 1458 iip1 = MIN(20, jpiglo-1) ! for config with i smaller than 20 points 1459 ijp1 = MIN(20, jpjglo-1) ! for config with j smaller than 20 points 1460 DO jj = mj0(ijp1), mj1(ijp1) 1461 DO ji = mi0(iip1), mi1(iip1) 1435 1462 WRITE(numout,*) 1436 WRITE(numout,*) ' domzgr: vertical coordinates : point (20,20,k) bathy = ', bathy(ji,jj), hbatt(ji,jj) 1463 WRITE(numout,*) ' domzgr: vertical coordinates : point (',iip1,',',ijp1,',k) bathy = ', & 1464 & bathy(ji,jj), hbatt(ji,jj) 1437 1465 WRITE(numout,*) ' ~~~~~~ --------------------' 1438 1466 WRITE(numout,"(9x,' level gdept gdepw gde3w e3t e3w ')") … … 1441 1469 END DO 1442 1470 END DO 1443 DO jj = mj0(74), mj1(74) 1444 DO ji = mi0(100), mi1(100) 1471 iip1 = MIN( 74, jpiglo-1) 1472 ijp1 = MIN( 100, jpjglo-1) 1473 DO jj = mj0(ijp1), mj1(ijp1) 1474 DO ji = mi0(iip1), mi1(iip1) 1445 1475 WRITE(numout,*) 1446 WRITE(numout,*) ' domzgr: vertical coordinates : point (100,74,k) bathy = ', bathy(ji,jj), hbatt(ji,jj) 1476 WRITE(numout,*) ' domzgr: vertical coordinates : point (',iip1,',',ijp1,',k) bathy = ', & 1477 & bathy(ji,jj), hbatt(ji,jj) 1447 1478 WRITE(numout,*) ' ~~~~~~ --------------------' 1448 1479 WRITE(numout,"(9x,' level gdept gdepw gde3w e3t e3w ')") … … 1501 1532 END DO 1502 1533 ! 1503 CALL wrk_dealloc( jpi, jpj, zenv, ztmpi1, ztmpi2, ztmpj1, ztmpj2, zri, zrj, zhbat ) ! 1534 CALL wrk_dealloc( jpi, jpj, zenv, ztmp, zmsk, zri, zrj, zhbat ) 1535 ! 1504 1536 IF( nn_timing == 1 ) CALL timing_stop('zgr_sco') 1505 1537 ! 
… … 1730 1762 ENDDO 1731 1763 ! 1732 CALL lbc_lnk(e3t ,'T',1.) ; CALL lbc_lnk(e3u ,'T',1.)1733 CALL lbc_lnk(e3v ,'T',1.) ; CALL lbc_lnk(e3f ,'T',1.)1734 CALL lbc_lnk(e3w ,'T',1.)1735 CALL lbc_lnk(e3uw,'T',1.) ; CALL lbc_lnk(e3vw,'T',1.)1736 !1737 1764 ! ! ============= 1738 1765 … … 1831 1858 !!---------------------------------------------------------------------- 1832 1859 ! 1833 pf = ( TANH( rn_theta * ( -(pk-0.5_wp) / REAL(jpkm1 ) + rn_thetb ) ) &1860 pf = ( TANH( rn_theta * ( -(pk-0.5_wp) / REAL(jpkm1,wp) + rn_thetb ) ) & 1834 1861 & - TANH( rn_thetb * rn_theta ) ) & 1835 1862 & * ( COSH( rn_theta ) & … … 1857 1884 ! 1858 1885 IF ( rn_theta == 0 ) then ! uniform sigma 1859 pf1 = - ( pk1 - 0.5_wp ) / REAL( jpkm1 )1886 pf1 = - ( pk1 - 0.5_wp ) / REAL( jpkm1,wp ) 1860 1887 ELSE ! stretched sigma 1861 pf1 = ( 1._wp - pbb ) * ( SINH( rn_theta*(-(pk1-0.5_wp)/REAL(jpkm1 )) ) ) / SINH( rn_theta ) &1862 & + pbb * ( (TANH( rn_theta*( (-(pk1-0.5_wp)/REAL(jpkm1 )) + 0.5_wp) ) - TANH( 0.5_wp * rn_theta ) ) &1888 pf1 = ( 1._wp - pbb ) * ( SINH( rn_theta*(-(pk1-0.5_wp)/REAL(jpkm1,wp)) ) ) / SINH( rn_theta ) & 1889 & + pbb * ( (TANH( rn_theta*( (-(pk1-0.5_wp)/REAL(jpkm1,wp)) + 0.5_wp) ) - TANH( 0.5_wp * rn_theta ) ) & 1863 1890 & / ( 2._wp * TANH( 0.5_wp * rn_theta ) ) ) 1864 1891 ENDIF -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/DYN/dynadv_ubs.F90
r3294 r4153 29 29 30 30 REAL(wp), PARAMETER :: gamma1 = 1._wp/3._wp ! =1/4 quick ; =1/3 3rd order UBS 31 REAL(wp), PARAMETER :: gamma2 = 1._wp/ 8._wp ! =0 2nd order ; =1/84th order centred31 REAL(wp), PARAMETER :: gamma2 = 1._wp/32._wp ! =0 2nd order ; =1/32 4th order centred 32 32 33 33 PUBLIC dyn_adv_ubs ! routine called by step.F90 … … 57 57 !! = 1/3 3rd order Upstream biased scheme 58 58 !! gamma2 = 0 2nd order finite differencing 59 !! = 1/ 84th order finite differencing59 !! = 1/32 4th order finite differencing 60 60 !! For stability reasons, the first term of the fluxes which cor- 61 61 !! responds to a second order centered scheme is evaluated using … … 64 64 !! before velocity (forward in time). 65 65 !! Default value (hard coded in the begining of the module) are 66 !! gamma1=1/3 and gamma2=1/ 8.66 !! gamma1=1/3 and gamma2=1/32. 67 67 !! 68 68 !! ** Action : - (ua,va) updated with the 3D advective momentum trends -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/DYN/dynspg_flt.F90
r4147 r4153 109 109 INTEGER :: ji, jj, jk ! dummy loop indices 110 110 REAL(wp) :: z2dt, z2dtg, zgcb, zbtd, ztdgu, ztdgv ! local scalars 111 REAL(wp), POINTER, DIMENSION(:,:,:) :: zub, zvb112 111 !!---------------------------------------------------------------------- 113 112 ! 114 113 IF( nn_timing == 1 ) CALL timing_start('dyn_spg_flt') 115 114 ! 116 CALL wrk_alloc( jpi,jpj,jpk, zub, zvb )117 115 ! 118 116 IF( kt == nit000 ) THEN … … 213 211 DO jk = 1, jpkm1 214 212 DO ji = 1, jpij 215 spgu(ji,1) = spgu(ji,1) + fse3u (ji,1,jk) * ua(ji,1,jk)216 spgv(ji,1) = spgv(ji,1) + fse3v (ji,1,jk) * va(ji,1,jk)213 spgu(ji,1) = spgu(ji,1) + fse3u_a(ji,1,jk) * ua(ji,1,jk) 214 spgv(ji,1) = spgv(ji,1) + fse3v_a(ji,1,jk) * va(ji,1,jk) 217 215 END DO 218 216 END DO … … 221 219 DO jj = 2, jpjm1 222 220 DO ji = 2, jpim1 223 spgu(ji,jj) = spgu(ji,jj) + fse3u (ji,jj,jk) * ua(ji,jj,jk)224 spgv(ji,jj) = spgv(ji,jj) + fse3v (ji,jj,jk) * va(ji,jj,jk)221 spgu(ji,jj) = spgu(ji,jj) + fse3u_a(ji,jj,jk) * ua(ji,jj,jk) 222 spgv(ji,jj) = spgv(ji,jj) + fse3v_a(ji,jj,jk) * va(ji,jj,jk) 225 223 END DO 226 224 END DO … … 360 358 IF( lrst_oce ) CALL flt_rst( kt, 'WRITE' ) 361 359 ! 362 CALL wrk_dealloc( jpi,jpj,jpk, zub, zvb )363 360 ! 364 361 IF( nn_timing == 1 ) CALL timing_stop('dyn_spg_flt') -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/ICB/icb_oce.F90
r4147 r4153 37 37 USE par_oce ! ocean parameters 38 38 USE lib_mpp ! MPP library 39 USE fldread ! read input fields (FLD type)40 39 41 40 IMPLICIT NONE … … 148 147 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: griddata !: work array for icbrst 149 148 150 TYPE(FLD), PUBLIC, ALLOCATABLE , DIMENSION(:) :: sf_icb !: structure: file information, fields read151 152 149 !!---------------------------------------------------------------------- 153 150 !! NEMO/OPA 3.3 , NEMO Consortium (2011) … … 165 162 ! 166 163 icb_alloc = 0 167 !! ALLOCATE( berg_grid , & 168 ALLOCATE( & 169 & berg_grid%calving (jpi,jpj) , berg_grid%calving_hflx (jpi,jpj) , & 164 ALLOCATE( berg_grid%calving (jpi,jpj) , berg_grid%calving_hflx (jpi,jpj) , & 170 165 & berg_grid%stored_heat(jpi,jpj) , berg_grid%floating_melt(jpi,jpj) , & 171 166 & berg_grid%maxclass (jpi,jpj) , berg_grid%stored_ice (jpi,jpj,nclasses) , & -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/ICB/icbini.F90
r4147 r4153 35 35 PUBLIC icb_init ! routine called in nemogcm.F90 module 36 36 37 CHARACTER(len=100) :: cn_dir ! Root directory for location of icb files 38 TYPE(FLD_N) :: sn_icb ! information about the calving file to be read 39 37 CHARACTER(len=100) :: cn_dir = './' !: Root directory for location of icb files 38 TYPE(FLD_N) :: sn_icb !: information about the calving file to be read 39 TYPE(FLD), PUBLIC, ALLOCATABLE , DIMENSION(:) :: sf_icb !: structure: file information, fields read 40 !: used in icbini and icbstp 40 41 !!---------------------------------------------------------------------- 41 42 !! NEMO/OPA 3.3 , NEMO Consortium (2011) -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/ICB/icbstp.F90
r3614 r4153 24 24 USE lib_mpp 25 25 USE iom 26 USE fldread 26 27 USE timing ! timing 27 28 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/IOM/iom.F90
r4152 r4153 31 31 USE sbc_oce, ONLY : nn_fsbc ! ocean space and time domain 32 32 USE trc_oce, ONLY : nn_dttrc ! !: frequency of step on passive tracers 33 USE icb_oce, ONLY : class_num ! !: iceberg classes 33 34 USE domngb ! ocean space and time domain 34 35 USE phycst ! physical constants … … 96 97 clname = cdname 97 98 IF( TRIM(Agrif_CFixed()) /= '0' ) clname = TRIM(Agrif_CFixed())//"_"//TRIM(cdname) 99 # if defined key_mpp_mpi 98 100 CALL xios_context_initialize(TRIM(clname), mpi_comm_opa) 101 # else 102 CALL xios_context_initialize(TRIM(clname), 0) 103 # endif 99 104 CALL iom_swap( cdname ) 100 105 … … 136 141 CALL iom_set_axis_attr( "depthw", gdepw_0 ) 137 142 # if defined key_floats 138 CALL iom_set_axis_attr( "nfloat", ( ji, ji=1,nfloat) )143 CALL iom_set_axis_attr( "nfloat", (/ (REAL(ji,wp), ji=1,nfloat) /) ) 139 144 # endif 145 CALL iom_set_axis_attr( "icbcla", class_num ) 140 146 141 147 ! automatic definitions of some of the xml attributs -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/LBC/lbclnk.F90
r4152 r4153 281 281 END SUBROUTINE lbc_lnk_3d 282 282 283 SUBROUTINE lbc_bdy_lnk_3d( pt3d, cd_type, psgn, ib_bdy )284 !!---------------------------------------------------------------------285 !! *** ROUTINE lbc_bdy_lnk ***286 !!287 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used288 !! to maintain the same interface with regards to the mpp case289 !!290 !!----------------------------------------------------------------------291 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points292 REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) :: pt3d ! 3D array on which the lbc is applied293 REAL(wp) , INTENT(in ) :: psgn ! control of the sign294 INTEGER :: ib_bdy ! BDY boundary set295 !!296 CALL lbc_lnk_3d( pt3d, cd_type, psgn)297 298 END SUBROUTINE lbc_bdy_lnk_3d299 300 SUBROUTINE lbc_bdy_lnk_2d( pt2d, cd_type, psgn, ib_bdy )301 !!---------------------------------------------------------------------302 !! *** ROUTINE lbc_bdy_lnk ***303 !!304 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used305 !! to maintain the same interface with regards to the mpp case306 !!307 !!----------------------------------------------------------------------308 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points309 REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) :: pt2d ! 3D array on which the lbc is applied310 REAL(wp) , INTENT(in ) :: psgn ! control of the sign311 INTEGER :: ib_bdy ! BDY boundary set312 !!313 CALL lbc_lnk_2d( pt2d, cd_type, psgn)314 315 END SUBROUTINE lbc_bdy_lnk_2d316 317 283 SUBROUTINE lbc_lnk_2d( pt2d, cd_type, psgn, cd_mpp, pval ) 318 284 !!--------------------------------------------------------------------- … … 401 367 END SUBROUTINE lbc_lnk_2d 402 368 369 #endif 370 371 372 SUBROUTINE lbc_bdy_lnk_3d( pt3d, cd_type, psgn, ib_bdy ) 373 !!--------------------------------------------------------------------- 374 !! *** ROUTINE lbc_bdy_lnk *** 375 !! 376 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used 377 !! to maintain the same interface with regards to the mpp 378 !case 379 !! 380 !!---------------------------------------------------------------------- 381 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points 382 REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) :: pt3d ! 3D array on which the lbc is applied 383 REAL(wp) , INTENT(in ) :: psgn ! control of the sign 384 INTEGER :: ib_bdy ! BDY boundary set 385 !! 386 CALL lbc_lnk_3d( pt3d, cd_type, psgn) 387 388 END SUBROUTINE lbc_bdy_lnk_3d 389 390 SUBROUTINE lbc_bdy_lnk_2d( pt2d, cd_type, psgn, ib_bdy ) 391 !!--------------------------------------------------------------------- 392 !! *** ROUTINE lbc_bdy_lnk *** 393 !! 394 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used 395 !! to maintain the same interface with regards to the mpp 396 !case 397 !! 398 !!---------------------------------------------------------------------- 399 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points 400 REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) :: pt2d ! 3D array on which the lbc is applied 401 REAL(wp) , INTENT(in ) :: psgn ! control of the sign 402 INTEGER :: ib_bdy ! BDY boundary set 403 !! 404 CALL lbc_lnk_2d( pt2d, cd_type, psgn) 405 406 END SUBROUTINE lbc_bdy_lnk_2d 407 408 403 409 SUBROUTINE lbc_lnk_2d_e( pt2d, cd_type, psgn, jpri, jprj ) 404 410 !!--------------------------------------------------------------------- … … 425 431 END SUBROUTINE lbc_lnk_2d_e 426 432 427 # endif428 433 #endif 429 434 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/LBC/lib_mpp.F90
r4152 r4153 2184 2184 !!gm Remark : this is very time consumming!!! 2185 2185 ! ! ------------------------ ! 2186 IF( ijpt0 > ijpt1 .OR. iipt0 > iipt1) THEN2186 IF(((nbondi .ne. 0) .AND. (ktype .eq. 2)) .OR. ((nbondj .ne. 0) .AND. (ktype .eq. 1))) THEN 2187 2187 ! there is nothing to be migrated 2188 2188 lmigr = .FALSE. -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/LBC/mppini_2.h90
r4147 r4153 129 129 irestj = 1 + MOD( jpjglo - nrecj -1 , jpnj ) 130 130 131 #if defined key_nemocice_decomp 132 ! Change padding to be consistent with CICE 133 ilci(1:jpni-1 ,:) = jpi 134 ilci(jpni ,:) = jpiglo - (jpni - 1) * (jpi - nreci) 135 136 ilcj(:, 1:jpnj-1) = jpj 137 ilcj(:, jpnj) = jpjglo - (jpnj - 1) * (jpj - nrecj) 138 #else 131 139 ilci(1:iresti ,:) = jpi 132 140 ilci(iresti+1:jpni ,:) = jpi-1 … … 134 142 ilcj(:, 1:irestj) = jpj 135 143 ilcj(:, irestj+1:jpnj) = jpj-1 144 #endif 136 145 137 146 IF(lwp) WRITE(numout,*) -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/SBC/sbcmod.F90
r4152 r4153 229 229 230 230 ! 231 CALL sbc_ssm_init 231 CALL sbc_ssm_init ! Sea-surface mean fields initialisation 232 232 ! 233 233 IF( ln_ssr ) CALL sbc_ssr_init ! Sea-Surface Restoring initialisation -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/SOL/solmat.F90
r3609 r4153 30 30 USE lbclnk ! lateral boudary conditions 31 31 USE lib_mpp ! distributed memory computing 32 USE c1d ! 1D vertical configuration 32 33 USE in_out_manager ! I/O manager 33 34 USE timing ! timing … … 271 272 272 273 ! SOR and PCG solvers 274 IF( lk_c1d ) CALL lbc_lnk( gcdmat, 'T', 1._wp ) ! 1D case bmask =/0 but gcdmat not define everywhere 273 275 DO jj = 1, jpj 274 276 DO ji = 1, jpi -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/OPA_SRC/step.F90
r4152 r4153 284 284 IF( lk_cpl ) CALL sbc_cpl_snd( kstp ) ! coupled mode : field exchanges 285 285 ! 286 IF( kstp == nitend ) THEN 286 #if defined key_iomput 287 IF( kstp == nitend .OR. indic < 0 ) THEN 287 288 CALL iom_context_finalize( "nemo" ) ! needed for XIOS+AGRIF 288 289 IF( ln_crs ) CALL iom_context_finalize( "nemo_crs" ) ! 289 290 ENDIF 291 #endif 290 292 ! 291 293 IF( nn_timing == 1 .AND. kstp == nit000 ) CALL timing_reset -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zfechem.F90
r4148 r4153 129 129 zoxy = trn(ji,jj,jk,jpoxy) * ( rhop(ji,jj,jk) / 1.e3 ) 130 130 ! Fe2+ oxydation rate from Santana-Casiano et al. (2005) 131 zkox = 35.407 - 6.7109 * zph + 0.5342 * zph * zph - 5362.6 / ( tsn(ji,jj, 1,jp_tem) + 273.15 ) &131 zkox = 35.407 - 6.7109 * zph + 0.5342 * zph * zph - 5362.6 / ( tsn(ji,jj,jk,jp_tem) + 273.15 ) & 132 132 & - 0.04406 * SQRT( tsn(ji,jj,jk,jp_sal) ) - 0.002847 * tsn(ji,jj,jk,jp_sal) 133 133 zkox = ( 10.** zkox ) * spd -
branches/2013/dev_LOCEAN_2013/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zsms.F90
r4152 r4153 433 433 #endif 434 434 & + trn(:,:,:,jpsfe) & 435 & + trn(:,:,:,jpzoo) 435 & + trn(:,:,:,jpzoo) * ferat3 & 436 436 & + trn(:,:,:,jpmes) * ferat3 ) * cvol(:,:,:) ) 437 437 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/SETTE/iodef_sette.xml
r4147 r4153 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".FALSE."/> <!-- 1h files --> … … 54 54 55 55 <axis_definition> 56 <axis id="deptht" long_name="Vertical T levels" unit="m" /><!-- positive=".FALSE." -->57 <axis id="depthu" long_name="Vertical U levels" unit="m" /><!-- positive=".FALSE." -->58 <axis id="depthv" long_name="Vertical V levels" unit="m" /><!-- positive=".FALSE." -->59 <axis id="depthw" long_name="Vertical W levels" unit="m" /><!-- positive=".FALSE." -->56 <axis id="deptht" long_name="Vertical T levels" unit="m" positive="down" /> 57 <axis id="depthu" long_name="Vertical U levels" unit="m" positive="down" /> 58 <axis id="depthv" long_name="Vertical V levels" unit="m" positive="down" /> 59 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 60 60 <axis id="nfloat" long_name="Float number" unit="-" /> 61 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 61 62 </axis_definition> 62 63 -
branches/2013/dev_LOCEAN_2013/NEMOGCM/TOOLS/MISCELLANEOUS/chk_iomput.sh
r2404 r4153 35 35 echo ' --insrc only print all variable definitions found in the source code' 36 36 echo 'Examples' 37 echo ' chk_iomput.sh'38 echo ' chk_iomput.sh --help'39 echo ' chk_iomput.sh ../../CONFIG/ORCA2_LIM/EXP00/iodef.xml "../../NEMO/OPA_SRC/ ../../NEMO/LIM_SRC_2/"'37 echo ' ./chk_iomput.sh' 38 echo ' ./chk_iomput.sh --help' 39 echo ' ./chk_iomput.sh ../../CONFIG/ORCA2_LIM/EXP00/iodef.xml "../../NEMO/OPA_SRC/ ../../NEMO/LIM_SRC_2/"' 40 40 echo 41 41 exit ;; … … 59 59 #------------------------------------------------ 60 60 # 61 [ $inxml -eq 1 ] && grep "< *field * id *=" $xmlfile 61 external=$( grep -c "<field_definition.* src=" $xmlfile ) 62 if [ $external -eq 1 ] 63 then 64 xmlfield_def=$( grep "<field_definition.* src=" $xmlfile | sed -e 's/.*src="\([^"]*\)".*/\1/' ) 65 xmlfield_def=$( dirname $xmlfile )/$xmlfield_def 66 else 67 xmlfield_def=$xmlfile 68 fi 69 [ $inxml -eq 1 ] && grep "< *field * id *=" $xmlfield_def 62 70 [ $insrc -eq 1 ] && find $srcdir -name "*.[Ffh]90" -exec grep -iH "^[^\!]*call *iom_put *(" {} \; 63 71 [ $(( $insrc + $inxml )) -ge 1 ] && exit … … 71 79 # list of variables used in "CALL iom_put" 72 80 # 73 varlistsrc=$( find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | sort -d ) 81 badvarsrc=$( find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | grep -ic iom_put ) 82 if [ $badvarsrc -ne 0 ] 83 then 84 echo "The following call to iom_put cannot be checked" 85 echo 86 find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | grep -i iom_put | sort -d 87 echo 88 fi 89 varlistsrc=$( find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | grep -vi iom_put | sort -d ) 74 90 # 75 91 # list of variables defined in the xml file 76 92 # 77 varlistxml=$( grep "< *field * id *=" $xmlfile | sed -e "s/^.*< *field* id *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d )93 varlistxml=$( grep "< *field.* id *=" $xmlfield_def | sed -e "s/^.*< *field.* id *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d ) 78 94 # 79 95 # list of variables to be outputed in the xml file 80 96 # 81 varlistout=$( grep "< *field * ref *=" $xmlfile | sed -e "s/^.*< *field *ref *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d )97 varlistout=$( grep "< *field.* field_ref *=" $xmlfile | sed -e "s/^.*< *field.* field_ref *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d ) 82 98 # 83 99 echo "--------------------------------------------------" 84 100 echo check if all iom_put found in $srcdir 85 echo have a corresponding variable definition in $xmlfi le101 echo have a corresponding variable definition in $xmlfield_def 86 102 echo "--------------------------------------------------" 87 103 for var in $varlistsrc … … 90 106 if [ $tst -ne 1 ] 91 107 then 92 echo "problem with $var: $tst lines corresponding to its definition in $xmlfi le, but defined in the code in"108 echo "problem with $var: $tst lines corresponding to its definition in $xmlfield_def, but defined in the code in" 93 109 for f in $srclist 94 110 do