Changeset 4219
- Timestamp: 2013-11-15T16:25:37+01:00
- Location: branches/2013/dev_r3948_NOC_FK
- Files: 1 deleted, 53 edited, 2 copied
branches/2013/dev_r3948_NOC_FK/DOC/NEMO_book.tex
r3294 r4219 23 23 \usepackage[margin=10pt,font={small},labelsep=colon,labelfont={bf}]{caption} % Gives small font for captions 24 24 \usepackage{enumitem} % allows non-bold description items 25 \usepackage{longtable} % allows multipage tables 25 26 %\usepackage{colortbl} % gives coloured panels behind table columns 26 27 -
branches/2013/dev_r3948_NOC_FK/DOC/TexFiles/Chapters/Chap_DIA.tex
r3940 r4219 1 1 % ================================================================ 2 % Chapter ÑI/O & Diagnostics2 2 % Chapter I/O & Diagnostics 3 3 % ================================================================ 4 4 \chapter{Output and Diagnostics (IOM, DIA, TRD, FLO)} … … 16 16 17 17 The model outputs are of three types: the restart file, the output listing, 18 and the output file(s). The restart file is used internally by the code when18 the user wants to start the model with initial conditions defined by a 19 19 previous simulation. It contains all the information that is necessary in … … 25 25 that it is saved in the same binary format as the one used by the computer 26 26 that is to read it (in particular, 32 bits binary IEEE format must not be used for 27 this file). The output listing and file(s) are predefined but should be checked 27 this file). 28 29 The output listing and file(s) are predefined but should be checked 28 30 and eventually adapted to the user's needs. The output listing is stored in 29 31 the $ocean.output$ file. The information is printed from within the code on the … … 31 33 "\textit{grep -i numout}" in the source code directory. 32 34 33 By default, outpout files are written in NetCDF format but an IEEE output format, called DIMG, can be choosen when defining \key{dimgout}. Since version 3.2, when defining \key{iomput}, an I/O server has been added which provides more flexibility in the choice of the fields to be outputted as well as how the writing work is distributed over the processors in massively parallel computing. The complete description of the use of this I/O server is presented in next section. If neither \key{iomput} nor \key{dimgout} are defined, NEMO is producing NetCDF with the old IOIPSL library which has been kept for compatibility and its easy installation, but it is quite inefficient on parrallel machines. If \key{iomput} is not defined, output files are defined in the \mdl{diawri} module and containing mean (or instantaneous if \key{diainstant} is defined) values over a period of nn\_write time-step (namelist parameter). 35 By default, diagnostic output files are written in NetCDF format but an IEEE binary output format, called DIMG, can be chosen by defining \key{dimgout}. 36 37 Since version 3.2, when defining \key{iomput}, an I/O server has been added which provides more flexibility in the choice of the fields to be written as well as how the writing work is distributed over the processors in massively parallel computing. The complete description of the use of this I/O server is presented in the next section. 38 39 By default, if neither \key{iomput} nor \key{dimgout} are defined, NEMO produces NetCDF with the old IOIPSL library which has been kept for compatibility and its easy installation. However, the IOIPSL library is quite inefficient on parallel machines and, since version 3.2, many diagnostic options have been added presuming the use of \key{iomput}. The usefulness of the default IOIPSL-based option is expected to reduce with each new release. If \key{iomput} is not defined, output files and content are defined in the \mdl{diawri} module and contain mean (or instantaneous if \key{diainstant} is defined) values over a regular period of nn\_write time-steps (namelist parameter). 34 40 35 41 %\gmcomment{ % start of gmcomment … … 42 48 43 49 44 Since version 3.2, iomput is the NEMO output interface.
It was designed to be simple to use, flexible and efficient. The two main purposes of iomput are: \\ 45 (1) the complete and flexible control of the output files through an external xml file defined by the user \\ 46 (2) to achieve high performance outputs through the distribution (or not) of all tasks related to output files on dedicated processes. \\ 47 The first functionality allows the user to specify, without touching anything into the code, the way he want to output data: \\ 48 - choice of output frequencies that can be different for each file (including real months and years) \\ 49 - choice of file contents: decide which data will be written in which file (the same data can be outputted in different files) \\ 50 - possibility to split output files at a choosen frequency \\ 51 - possibility to extract a vertical or an horizontal subdomain \\ 52 - choice of the temporal operation to perform: average, accumulate, instantaneous, min, max and once \\ 53 - extremely large choice of data available \\ 54 - redefine variables name and long\_name \\ 55 In addition, iomput allows the user to output any variable (scalar, 2D or 3D) in the code in a very easy way. All details of iomput functionalities are listed in the following subsections. Example of the iodef.xml files that control the outputs can be found here: NEMOGCM/CONFIG/ORCA2\_LIM/EXP00/iodef*.xml 56 57 The second functionality targets outputs performances when running on a very large number of processes. First, iomput provides the possibility to dedicate N specific processes (in addition to NEMO processes) to write the outputs, where N is big enough (and defined by the user) to suppress the bottle neck associated with the the writing of the output files. Since version 3.5, this interface depends on an external code called \href{http://forge.ipsl.jussieu.fr/ioserver}{XIOS}. This new IO server takes advantage of the new functionalitiy of NetCDF4 that allows the user to write files in parallel and therefore to bypass the rebuilding phase. Note that writting in parallel into the same NetCDF files requires that your NetCDF4 library is linked to an HDF5 library that has been correctly compiled (i.e. with the configure option $--$enable-parallel). Note that the files created by iomput trough xios are incompatible with NetCDF3. All post-processsing and visualization tools must therefore be compatible with NetCDF4 and not only NetCDF3. 58 59 \subsection{Basic knowledge} 60 50 Since version 3.2, iomput is the NEMO output interface of choice. It has been designed to be simple to use, flexible and efficient. The two main purposes of iomput are: 51 \begin{enumerate} 52 \item The complete and flexible control of the output files through external XML files adapted by the user from standard templates. 53 \item To achieve high performance and scalable output through the optional distribution of all diagnostic output related tasks to dedicated processes. 54 \end{enumerate} 55 The first functionality allows the user to specify, without code changes or recompilation, aspects of the diagnostic output stream, such as: 56 \begin{itemize} 57 \item The choice of output frequencies that can be different for each file (including real months and years). 58 \item The choice of file contents; includes complete flexibility over which data are written in which files (the same data can be written in different files). 59 \item The possibility to split output files at a chosen frequency. 60 \item The possibility to extract a vertical or an horizontal subdomain.
61 \item The choice of the temporal operation to perform, e.g.: average, accumulate, instantaneous, min, max and once. 62 \item Control over metadata via a large XML "database" of possible output fields. 63 \end{itemize} 64 In addition, iomput allows the user to add the output of any new variable (scalar, 2D or 3D) in the code in a very easy way. All details of iomput functionalities are listed in the following subsections. Examples of the XML files that control the outputs can be found in: 65 \begin{alltt} 66 \begin{verbatim} 67 NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef.xml 68 NEMOGCM/CONFIG/SHARED/field_def.xml 69 and 70 NEMOGCM/CONFIG/SHARED/domain_def.xml. 71 \end{verbatim} 72 \end{alltt} 73 74 The second functionality targets output performance when running in parallel (\key{mpp\_mpi}). Iomput provides the possibility to specify N dedicated I/O processes (in addition to the NEMO processes) to collect and write the outputs. With an appropriate choice of N by the user, the bottleneck associated with the writing of the output files can be greatly reduced. 75 76 Since version 3.5, the iom\_put interface depends on an external code called \href{http://forge.ipsl.jussieu.fr/ioserver}{XIOS}. This new IO server can take advantage of the parallel I/O functionality of NetCDF4 to create a single output file and therefore to bypass the rebuilding phase. Note that writing in parallel into the same NetCDF files requires that your NetCDF4 library is linked to an HDF5 library that has been correctly compiled (i.e. with the configure option $--$enable-parallel). Note that the files created by iomput through XIOS are incompatible with NetCDF3. All post-processing and visualization tools must therefore be compatible with NetCDF4 and not only NetCDF3. 77 78 Even if not using the parallel I/O functionality of NetCDF4, using N dedicated I/O servers, where N is typically much less than the number of NEMO processors, will reduce the number of output files created. This can greatly reduce the post-processing burden usually associated with using large numbers of NEMO processors. Note that for smaller configurations, the rebuilding phase can be avoided, even without a parallel-enabled NetCDF4 library, simply by employing only one dedicated I/O server. 79 80 \subsection{XIOS: the IO\_SERVER} 81 82 \subsubsection{Attached or detached mode?} 83 84 Iomput is based on \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS}, the io\_server developed by Yann Meurdesoif from IPSL. The behaviour of the io subsystem is controlled by settings in the external XML files listed above. Key settings in the iodef.xml file are {\tt using\_server} and the {\tt type} tag associated with each defined file. The {\tt using\_server} setting determines whether or not the server will be used in ''attached mode'' (as a library) [{\tt false}] or in ''detached mode'' (as an external executable on N additional, dedicated cpus) [{\tt true}]. The ''attached mode'' is simpler to use but much less efficient for massively parallel applications. The type of each file can be either ''multiple\_file'' or ''one\_file''. 85 86 In attached mode and if the type of file is ''multiple\_file'', then each NEMO process will also act as an IO server and produce its own set of output files. Superficially, this emulates the standard behaviour in previous versions. However, the subdomain written out by each process does not correspond to the {\tt jpi x jpj x jpk} domain actually computed by the process (although it may if {\tt jpni=1}).
Instead each process will have collected and written out a number of complete longitudinal strips. If the ''one\_file'' option is chosen then all processes will collect their longitudinal strips and write (in parallel) to a single output file. 87 88 In detached mode and if the type of file is ''multiple\_file'', then each stand-alone XIOS process will collect data for a range of complete longitudinal strips and write to its own set of output files. If the ''one\_file'' option is chosen then all XIOS processes will collect their longitudinal strips and write (in parallel) to a single output file. Note that running in detached mode requires launching a Multiple Process Multiple Data (MPMD) parallel job. The following subsection provides a typical example but the syntax will vary in different MPP environments. 89 90 \subsubsection{Number of cpus used by XIOS in detached mode} 91 92 The number of cores used by XIOS is specified when launching the model. The number of cores dedicated to XIOS should be from ~1/10 to ~1/50 of the number of cores dedicated to NEMO. Some manufacturers suggest using O($\sqrt{N}$) dedicated IO processors for N processors but this is a general recommendation and not specific to NEMO. It is difficult to provide precise recommendations because the optimal choice will depend on the particular hardware properties of the target system (parallel filesystem performance, available memory, memory bandwidth etc.) and the volume and frequency of data to be created. Here is an example with 2 cpus for the io\_server and 62 cpus for NEMO using mpirun: 93 94 \texttt{ mpirun -np 62 ./nemo.exe : -np 2 ./xios\_server.exe } 95 96 \subsubsection{Control of XIOS: the XIOS context in iodef.xml} 97 98 As well as the {\tt using\_server} flag, other controls on the use of XIOS are set in the XIOS context in iodef.xml. See the XML basics section below for more details on XML syntax and rules. 99 100 \begin{tabular}{|p{4cm}|p{6.0cm}|p{2.0cm}|} 101 \hline 102 variable name & 103 description & 104 example \\ 105 \hline 106 \hline 107 buffer\_size & 108 buffer size used by XIOS to send data from NEMO to XIOS. Larger is more efficient. Note that needed/used buffer sizes are summarized at the end of the job & 109 25000000 \\ 110 \hline 111 buffer\_server\_factor\_size & 112 ratio between NEMO and XIOS buffer size. Should be 2. & 113 2 \\ 114 \hline 115 info\_level & 116 verbosity level (0 to 100) & 117 0 \\ 118 \hline 119 using\_server & 120 activate attached(false) or detached(true) mode & 121 true \\ 122 \hline 123 using\_oasis & 124 XIOS is used with OASIS(true) or not (false) & 125 false \\ 126 \hline 127 oasis\_codes\_id & 128 when using oasis, define the identifier of NEMO in the namcouple. Note that the identifier of XIOS is xios.x & 129 oceanx \\ 130 \hline 131 \end{tabular} 132 133
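For illustration, a minimal sketch of how these control variables might be laid out in the xios context of iodef.xml is given below. The variable ids and example values are taken from the table above; the element syntax and the type names shown are indicative only and may differ between XIOS versions, so check the iodef.xml templates distributed with your configuration.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
<context id="xios" >
   <variable_definition>
      <variable id="buffer_size"               type="integer">25000000</variable>
      <variable id="buffer_server_factor_size" type="integer">2</variable>
      <variable id="info_level"                type="integer">0</variable>
      <variable id="using_server"              type="boolean">true</variable>
      <variable id="using_oasis"               type="boolean">false</variable>
      <variable id="oasis_codes_id"            type="string" >oceanx</variable>
   </variable_definition>
</context>
\end{verbatim}
}}\end{alltt}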
134 \subsection{Practical issues} 135 136 \subsubsection{Installation} 137 138 As mentioned, XIOS is supported separately and must be downloaded and compiled before it can be used with NEMO. See the installation guide on the \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS} wiki for help and guidance. NEMO will need to link to the compiled XIOS library. The 139 \href{http://www.nemo-ocean.eu/Using-NEMO/User-Guides/Basics/XIOS-IO-server-installation-and-use}{XIOS with NEMO} guide provides an example illustration of how this can be achieved. 140 141 \subsubsection{Add your own outputs} 142 143 It is very easy to add your own outputs with iomput. Many standard fields and diagnostics are already prepared (i.e., steps 1 to 3 below have been done) and simply need to be activated by including the required output in a file definition in iodef.xml (step 4). To add new output variables, all 4 of the following steps must be taken. 144 \begin{description} 145 \item[1.] in NEMO code, add a \\ 146 \texttt{ CALL iom\_put( 'identifier', array ) } \\ 147 where you want to output a 2D or 3D array. 148 149 \item[2.] If necessary, add \\ 150 \texttt{ USE iom\ \ \ \ \ \ \ \ \ \ \ \ ! I/O manager library } \\ 151 to the list of used modules in the upper part of your module. 152 153 \item[3.] in the field\_def.xml file, add the definition of your variable using the same identifier you used in the f90 code (see subsequent sections for details of the XML syntax and rules). For example: 154 \vspace{-20pt} 155 \begin{alltt} {{\scriptsize 156 \begin{verbatim} 157 <field_definition> 158 <!-- T grid --> 159 160 <field_group id="grid_T" grid_ref="grid_T_3D"> 161 ... 162 <field id="identifier" long_name="blabla" ... /> 163 ... 164 </field_definition> 165 \end{verbatim} 166 }}\end{alltt} 167 Note your definition must be added to the field\_group whose reference grid is consistent with the size of the array passed to iomput. The grid\_ref attribute refers to definitions set in iodef.xml which, in turn, reference grids and axes either defined in the code (iom\_set\_domain\_attr and iom\_set\_axis\_attr in iom.F90) or defined in the domain\_def.xml file. E.g.: 168 \vspace{-20pt} 169 \begin{alltt} {{\scriptsize 170 \begin{verbatim} 171 <grid id="grid_T_3D" domain_ref="grid_T" axis_ref="deptht"/> 172 \end{verbatim} 173 }}\end{alltt} 174 Note: if your array is computed within the surface module at each nn\_fsbc time step, 175 add the field definition within the field\_group defined with the id ''SBC'': $<$field\_group id=''SBC''...$>$ which has been defined with the correct frequency of operations (iom\_set\_field\_attr in iom.F90) 176 177 \item[4.] add your field in one of the output files defined in iodef.xml (again see subsequent sections for syntax and rules) \\ 178 \vspace{-20pt} 179 \begin{alltt} {{\scriptsize 180 \begin{verbatim} 181 <file id="file1" .../> 182 ... 183 <field field_ref="identifier" /> 184 ... 185 </file> 186 \end{verbatim} 187 }}\end{alltt} 188 189 \end{description} 190 \subsection{XML fundamentals} 61 191 62 192 \subsubsection{ XML basic rules} … … 72 202 73 203 The XML file used in XIOS is structured by 7 families of tags: context, axis, domain, grid, field, file and variable. Each tag family has a hierarchy of three flavors (except for context): 74 \begin{description} 75 \item[root]: declaration of the root element that can contain element groups or elements, for example : $<$file\_definition ...$/>$ \\ 76 \item[group]: declaration of a group element that can contain element groups or elements, for example : $<$file\_group ...$/>$ \\ 77 \item[element]: declaration of an element that can contain elements, for example : $<$file ...$/>$ \\ 78 \end{description} 204 \\ 205 \begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|} 206 \hline 207 flavor & 208 description & 209 example \\ 210 \hline 211 \hline 212 root & 213 declaration of the root element that can contain element groups or elements & 214 {\scriptsize \verb? < file_definition ... >?} \\ 215 \hline 216 group & 217 declaration of a group element that can contain element groups or elements & 218 {\scriptsize \verb? < file_group ...
>?} \\ 219 \hline 220 element & 221 declaration of an element that can contain elements & 222 {\scriptsize \verb? < file ... >?} \\ 223 \hline 224 \end{tabular} 225 \\ 79 226 80 227 Each element may have several attributes. Some attributes are mandatory, others are optional but have a default value, and others are completely optional. Id is a special attribute used to identify an element or a group of elements. It must be unique for a kind of element. It is optional, but no reference to the corresponding element can be made if it is not defined. 81 228 82 The XML file is split into context tags that are used to isolate IO definition from different codes or different parts of a code. No interference is possible between 2 different contexts. Each context has its own calendar and an associated timestep.
In NEMO, we use the following contexts (that can be defined in any order):\\ 230 \\ 231 \begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|} 232 \hline 233 context & 234 description & 235 example \\ 236 \hline 237 \hline 238 context xios & 239 context containing information for XIOS & 240 {\scriptsize \verb? <context id="xios" ... ?} \\ 241 \hline 242 context nemo & 243 context containing IO information for NEMO (mother grid when using AGRIF) & 244 {\scriptsize \verb? <context id="nemo" ... ?} \\ 245 \hline 246 context 1\_nemo & 247 context containing IO information for NEMO child grid 1 (when using AGRIF) & 248 {\scriptsize \verb? <context id="1_nemo" ... ?} \\ 249 \hline 250 context n\_nemo & 251 context containing IO information for NEMO child grid n (when using AGRIF) & 252 {\scriptsize \verb? <context id="n_nemo" ... ?} \\ 253 \hline 254 \end{tabular} 255 \\ 256 257 \noindent The xios context contains only 1 tag: 258 \\ 259 \begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|} 260 \hline 261 context tag & 262 description & 263 example \\ 264 \hline 265 \hline 266 variable\_definition & 267 define variables needed by XIOS. This can be seen as a kind of namelist for XIOS. & 268 {\scriptsize \verb? <variable_definition ... ?} \\ 269 \hline 270 \end{tabular} 271 \\ 272 273 \noindent Each context tag related to NEMO (mother or child grids) is divided into 5 parts (that can be defined in any order):\\ 274 \\ 275 \begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|} 276 \hline 277 context tag & 278 description & 279 example \\ 280 \hline 281 \hline 282 field\_definition & 283 define all variables that can potentially be outputted & 284 {\scriptsize \verb? <field_definition ... ?} \\ 285 \hline 286 file\_definition & 287 define the netcdf files to be created and the variables they will contain & 288 {\scriptsize \verb? <file_definition ... ?} \\ 289 \hline 290 axis\_definition & 291 define vertical axis & 292 {\scriptsize \verb? <axis_definition ... ?} \\ 293 \hline 294 domain\_definition & 295 define the horizontal grids & 296 {\scriptsize \verb? <domain_definition ... ?} \\ 297 \hline 298 grid\_definition & 299 define the 2D and 3D grids (association of an axis and a domain) & 300 {\scriptsize \verb? <grid_definition ... ?} \\ 301 \hline 302 \end{tabular} 303 \\ 304 305 \subsubsection{Nesting XML files} 306 307 The XML file can be split into different parts to improve its readability and facilitate its use. The inclusion of XML files into the main XML file can be done through the attribute src: \\ 308 {\scriptsize \verb? <context src="./nemo_def.xml" /> ?}\\ 309 310 \noindent In NEMO, by default, the field and domain definition is done in 2 separate files: 311 {\scriptsize \tt 312 \begin{verbatim} 313 NEMOGCM/CONFIG/SHARED/field_def.xml 314 and 315 NEMOGCM/CONFIG/SHARED/domain_def.xml 316 \end{verbatim} 317 } 318 \noindent that are included in the main iodef.xml file through the following commands: \\ 319 {\scriptsize \verb? <field_definition src="./field_def.xml" /> ? \\ 320 \verb? <domain_definition src="./domain_def.xml" /> ? } 121 321 122 322
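To summarise the structure described above, a schematic skeleton of a typical iodef.xml is sketched below. This is an illustration only: the ellipses stand for the definitions detailed in the tables above, and the two src includes are those just shown.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
<?xml version="1.0"?>
<simulation>
   <context id="xios">
      <variable_definition>  ...  </variable_definition>
   </context>
   <context id="nemo">
      <field_definition  src="./field_def.xml" />
      <file_definition>   ...   </file_definition>
      <axis_definition>   ...   </axis_definition>
      <domain_definition src="./domain_def.xml" />
      <grid_definition>   ...   </grid_definition>
   </context>
</simulation>
\end{verbatim}
}}\end{alltt}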
123 323 \subsubsection{Use of inheritance} 124 324 125 XML extensively uses the concept of inheritance. XML has a tree based structure with a parent-child oriented relation: all children inherit attributes from the parent, but an attribute defined in a child replaces the inherited attribute value. Note that the special attribute ''id'' is never inherited. \\ 126 326 \\ 127 example 1: Direct inheritance. \\ 327 example 1: Direct inheritance. 328 \vspace{-20pt} 128 330 \begin{alltt} {{\scriptsize 129 330 \begin{verbatim} 130 331 <field_definition operation="average" > 131 132 332 <field id="sst" /> <!-- averaged sst --> 333 <field id="sss" operation="instant"/> <!-- instantaneous sss --> 133 334 </field_definition> 134 335 \end{verbatim} … … 140 341 for example output instantaneous values instead of average values. \\ 141 343 \\ 142 example 2: Inheritance by reference. \\ 343 example 2: Inheritance by reference. 344 \vspace{-20pt} 143 345 \begin{alltt} {{\scriptsize 144 346 \begin{verbatim} 145 347 <field_definition> 146 147 348 <field id="sst" long_name="sea surface temperature" /> 349 <field id="sss" long_name="sea surface salinity" /> 148 350 </field_definition> 149 351 150 352 <file_definition> 151 152 153 154 353 <file id="myfile" output_freq="1d" /> 354 <field field_ref="sst" /> <!-- default def --> 355 <field field_ref="sss" long_name="my description" /> <!-- overwrite --> 356 </file> 155 357 </file_definition> 156 358 \end{verbatim} 157 359 }}\end{alltt} 158 Inherite (and overwrite, if needed) the attributes of a tag you are refering to. 360 Inherit (and overwrite, if needed) the attributes of a tag you are referring to. 361 362 \subsubsection{Use of Groups} 363 364 Groups can be used for 2 purposes. Firstly, the group can be used to define common attributes to be shared by the elements of the group through inheritance. In the following example, we define a group of fields that will share a common grid ''grid\_T\_2D''. Note that for the field ''toce'', we overwrite the grid definition inherited from the group by ''grid\_T\_3D''. 365 \vspace{-20pt} 165 366 \begin{alltt} {{\scriptsize 166 367 \begin{verbatim} 167 368 <field_group id="grid_T" grid_ref="grid_T_2D"> 168 169 170 171 369 <field id="toce" long_name="temperature" unit="degC" grid_ref="grid_T_3D"/> 370 <field id="sst" long_name="sea surface temperature" unit="degC" /> 371 <field id="sss" long_name="sea surface salinity" unit="psu" /> 372 <field id="ssh" long_name="sea surface height" unit="m" /> 172 373 ... 173 374 \end{verbatim} 174 375 }}\end{alltt} 175 376 176 Second , the group can be used to replace a list of elements. Several examples of groups of fields are proposed at the end of the file \\177 NEMOGCM/CONFIG/SHARED/field\_def.xml. For example, a short list of usual variables related to the U grid: 377 Secondly, the group can be used to replace a list of elements. Several examples of groups of fields are proposed at the end of the file {\tt CONFIG/SHARED/field\_def.xml}.
For example, a short list of the usual variables related to the U grid: 378 \vspace{-20pt} 178 380 \begin{alltt} {{\scriptsize 179 381 \begin{verbatim} 180 382 <field_group id="groupU" > 181 182 183 382 <field field_ref="uoce" /> 383 <field field_ref="suoce" /> 384 <field field_ref="utau" /> 184 385 </field_group> 185 386 \end{verbatim} 186 387 }}\end{alltt} 187 that can be directly include in a file through the following syntaxe: 388 that can be directly included in a file through the following syntax: 389 \vspace{-20pt} 188 390 \begin{alltt} {{\scriptsize 189 391 \begin{verbatim} 190 392 <file id="myfile_U" output_freq="1d" /> 191 192 393 <field_group group_ref="groupU"/> 394 <field field_ref="uocetr_eff" /> <!-- add another field --> 193 395 </file> 194 396 \end{verbatim} … … 197 399 \subsection{Detailed functionalities } 198 401 199 The file NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xmlprovides several examples of the use of the new functionalities offered by the XML interface of XIOS. 401 The file {\tt NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xml} provides several examples of the use of the new functionalities offered by the XML interface of XIOS. 200 402 201 404 \subsubsection{Define horizontal subdomains} 202 405 Horizontal subdomains are defined through the attributes zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj of the tag family domain. It must therefore be done in the domain part of the XML file. For example, in {\tt CONFIG/SHARED/domain\_def.xml}, we provide the following example of a definition of a 5 by 5 box with the bottom left corner at point (10,10). 405 \vspace{-20pt} 203 407 \begin{alltt} {{\scriptsize 204 408 \begin{verbatim} 205 409 <domain_group id="grid_T"> 206 410 <domain id="myzoom" zoom_ibegin="10" zoom_jbegin="10" zoom_ni="5" zoom_nj="5" /> 207 411 \end{verbatim} 208 412 }}\end{alltt} 209 413 The use of this subdomain is done through the redefinition of the attribute domain\_ref of the tag family field. For example: 414 \vspace{-20pt} 210 415 \begin{alltt} {{\scriptsize 211 416 \begin{verbatim} … … 216 420 }}\end{alltt} 217 421 Moorings are seen as an extreme case corresponding to a 1 by 1 subdomain. The Equatorial section, the TAO, RAMA and PIRATA moorings are already registered in the code and can therefore be outputted without taking care of their (i,j) position in the grid. These predefined domains can be activated by the use of specific domain\_ref: ''EqT'', ''EqU'' or ''EqW'' for the equatorial sections and the mooring position for TAO, RAMA and PIRATA followed by ''T'' (for example: ''8s137eT'', ''1.5s80.5eT'' ...) 422 \vspace{-20pt} 218 424 \begin{alltt} {{\scriptsize 219 425 \begin{verbatim} … … 227 432 \subsubsection{Define vertical zooms} 228 433 Vertical zooms are defined through the attributes zoom\_begin and zoom\_end of the tag family axis. It must therefore be done in the axis part of the XML file.
For example, in NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xml, we provide the following example: 434 \vspace{-20pt} 229 435 \begin{alltt} {{\scriptsize 230 436 \begin{verbatim} … … 235 441 }}\end{alltt} 236 442 The use of this vertical zoom is done through the redefinition of the attribute axis\_ref of the tag family field. For example: 443 \vspace{-20pt} 237 444 \begin{alltt} {{\scriptsize 238 445 \begin{verbatim} … … 246 453 247 455 The output file names are defined by the attributes ''name'' and ''name\_suffix'' of the tag family file. For example: 455 \vspace{-20pt} 248 456 \begin{alltt} {{\scriptsize 249 457 \begin{verbatim} … … 258 466 \end{verbatim} 259 467 }}\end{alltt} 260 However it is also often very convienent to define the file name with the name of the experience, the output file frequency and the date of the beginning and the end of the simulation (which are informations stored either in the namelist or in the XML file). To do so, we added the following rule: if the id of the tag file is ''fileN''(where N = 1 to 99) or one of the predefined section or mooring (see next subsection), the following part of the name and the name\_suffix (that can be inherited) will be automatically replaced by:\\468 However it is often very convenient to define the file name with the name of the experiment, the output file frequency and the date of the beginning and the end of the simulation (which is information stored either in the namelist or in the XML file). To do so, we added the following rule: if the id of the tag file is ''fileN'' (where N = 1 to 99) or one of the predefined sections or moorings (see next subsection), the following part of the name and the name\_suffix (that can be inherited) will be automatically replaced by:\\ 261 469 \\ 262 470 \begin{tabular}{|p{4cm}|p{8cm}|} 263 471 \hline 264 \centering part of the name automatically to be replaced & 265 by \\ 472 \centering placeholder string & automatically replaced by \\ 266 473 \hline 267 474 \hline 268 475 \centering @expname@ & 269 the experi encename (from cn\_exp in the namelist) \\476 the experiment name (from cn\_exp in the namelist) \\ 270 477 \hline 271 478 \centering @freq@ & … … 284 491 ending date of the simulation (from nn\_date0 and nn\_itend in the namelist). \verb?yyyymmdd_hh:mm:ss? format \\ 285 492 \hline 286 \end{tabular} 493 \end{tabular}\\ 287 494 \\ 288 495 289 For example, 290 291 \begin{alltt} {{\scriptsize 496 \noindent For example, 497 {{\scriptsize 292 498 \begin{verbatim} 293 499 <file id="myfile_hzoom" name="myfile_@expname@_@startdate@_freq@freq@" output_freq="1d" > 294 500 \end{verbatim} 295 }}\end{alltt} 296 297 With, in the namelist: 298 299 \begin{alltt} {{\scriptsize 501 }} 502 \noindent with the namelist: 503 {{\scriptsize 300 504 \begin{verbatim} 301 505 cn_exp = "ORCA2" … … 303 507 ln_rstart = .false. 304 508 \end{verbatim} 305 }}\end{alltt} 306 307 will give the following file name radical: 308 309 \begin{alltt} {{\scriptsize 509 }} 510 \noindent will give the following file name radical: 511 {{\scriptsize 310 512 \begin{verbatim} 311 513 myfile_ORCA2_19891231_freq1d 312 514 \end{verbatim} 313 }}\end{alltt} 314 516 }} 315 517 316 518 \subsubsection{Other controls of the xml attributes from NEMO} 317 519 318 The values of some attributes are automatically defined by NEMO (and any definition given in the xml file is overwritten).
By convention, these attributes are defined to ''auto'' (for string) or ''0000'' (for integer) in the xml file (but this is not necessary).319 320 Here is the list of these attributes: 519 The values of some attributes are defined by subroutine calls within NEMO (calls to iom\_set\_domain\_attr, iom\_set\_axis\_attr and iom\_set\_field\_attr in iom.F90). Any definition given in the xml file will be overwritten. By convention, these attributes are defined to ''auto'' (for string) or ''0000'' (for integer) in the xml file (but this is not necessary). 520 521 Here is the list of these attributes:\\ 321 522 \\ 322 523 \begin{tabular}{|l|c|c|c|} … … 343 544 344 545 546 \subsection{XML reference tables} 547 345 548 \subsubsection{Tag list} 346 549 347 348 \begin{tabular}{|p{2cm}|p{2.5cm}|p{3.5cm}|p{2cm}|p{2cm}|} 550 \begin{longtable}{|p{2.2cm}|p{2.5cm}|p{3.5cm}|p{2.2cm}|p{1.6cm}|} 349 551 \hline 350 552 tag name & … … 352 554 accepted attribute & 353 555 child of & 354 parent of \\ 355 \hline 556 parent of \endhead 356 557 \hline 357 558 simulation & … … 362 563 \hline 363 564 context & 364 encapsulates parts of the xml file d édicated to different codes or different parts of a code &565 encapsulates parts of the xml file dedicated to different codes or different parts of a code & 365 566 id (''xios'', ''nemo'' or ''n\_nemo'' for the nth AGRIF zoom), src, time\_origin & 366 567 simulation & 367 all root tags: ... \_definition \\568 all root tags: ... \_definition \\ 368 569 \hline 369 570 \hline … … 389 590 file\_definition & 390 591 encapsulates the definition of all the files that will be outputted & 391 enabled, min\_digits, name, name\_suffix, output\_level, split\_f ormat, split\_freq, sync\_freq, type, src &592 enabled, min\_digits, name, name\_suffix, output\_level, split\_freq\_format, split\_freq, sync\_freq, type, src & 392 593 context & 393 594 file or file\_group \\ … … 395 596 file\_group & 396 597 encapsulates a group of files that will be outputted & 397 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_f ormat, split\_freq, sync\_freq, type, src &598 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_freq\_format, split\_freq, sync\_freq, type, src & 398 599 file\_definition, file\_group & 399 600 file or file\_group \\ 400 601 \hline 401 602 file & 402 defi le the contentof a file to be outputted &403 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_f ormat, split\_freq, sync\_freq, type, src &603 define the contents of a file to be outputted & 604 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_freq\_format, split\_freq, sync\_freq, type, src & 404 605 file\_definition, file\_group & 405 606 field \\ 406 \hline407 \end{tabular}408 \begin{tabular}{|p{2cm}|p{2.5cm}|p{3.5cm}|p{2cm}|p{2cm}|}409 \hline410 tag name &411 description &412 accepted attribute &413 child of &414 parent of \\415 \hline416 607 \hline 417 608 axis\_definition & … … 434 625 \hline 435 626 \hline 436 domain\_ definition &627 domain\_\-definition & 437 628 define all the horizontal domains potentially used by the variables & 438 629 src & 439 630 context & 440 domain\_ group, domain \\631 domain\_\-group, domain \\ 441 632 \hline 442 633 domain\_group & 443 634 encapsulates a group of horizontal domains & 444 635 id, lon\_name, src, zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj & 445 domain\_ definition, domain\_group &446 domain\_ 
group, domain \\636 domain\_\-definition, domain\_group & 637 domain\_\-group, domain \\ 447 638 \hline 448 639 domain & 449 640 define an horizontal domain & 450 641 id, lon\_name, src, zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj & 451 domain\_ definition, domain\_group &642 domain\_\-definition, domain\_group & 452 643 none \\ 453 644 \hline … … 471 662 none \\ 472 663 \hline 473 \end{ tabular}664 \end{longtable} 474 665 475 666 476 667 \subsubsection{Attributes list} 477 668 478 \begin{tabular}{|p{2cm}|p{4cm}|p{4cm}|p{2cm}|} 479 \hline 669 \begin{longtable}{|p{2.2cm}|p{4cm}|p{3.8cm}|p{2cm}|} 670 \hline 671 attribute name & 672 description & 673 example & 674 accepted by \endhead 675 \hline 676 axis\_ref & 677 refers to the id of a vertical axis & 678 axis\_ref="deptht" & 679 field, grid families \\ 680 \hline 681 enabled & 682 switch on/off the output of a field or a file & 683 enabled=".TRUE." & 684 field, file families \\ 685 \hline 686 default\_value & 687 missing\_value definition & 688 default\_value="1.e20" & 689 field family \\ 690 \hline 691 description & 692 just for information, not used & 693 description="ocean T grid variables" & 694 all tags \\ 695 \hline 696 domain\_ref & 697 refers to the id of a domain & 698 domain\_ref="grid\_T" & 699 field or grid families \\ 700 \hline 701 field\_ref & 702 id of the field we want to add in a file & 703 field\_ref="toce" & 704 field \\ 705 \hline 706 grid\_ref & 707 refers to the id of a grid & 708 grid\_ref="grid\_T\_2D" & 709 field family \\ 710 \hline 711 group\_ref & 712 refer to a group of variables & 713 group\_ref="mooring" & 714 field\_group \\ 715 \hline 716 id & 717 allow to identify a tag & 718 id="nemo" & 719 accepted by all tags except simulation \\ 720 \hline 721 level & 722 output priority of a field: 0 (high) to 10 (low)& 723 level="1" & 724 field family \\ 725 \hline 726 long\_name & 727 define the long\_name attribute in the NetCDF file & 728 long\_name="Vertical T levels" & 729 field \\ 730 \hline 731 min\_digits & 732 specify the minimum of digits used in the core number in the name of the NetCDF file & 733 min\_digits="4" & 734 file family \\ 735 \hline 736 name & 737 name of a variable or a file. If the name of a file is undefined, its id is used as a name & 738 name="tos" & 739 field or file families \\ 740 \hline 741 name\_suffix & 742 suffix to be inserted after the name and before the cpu number and the ''.nc'' termination of a file & 743 name\_suffix="\_myzoom" & 744 file family \\ 745 \hline 480 746 attribute name & 481 747 description & … … 484 750 \hline 485 751 \hline 486 axis\_ref & 487 refers to the id of a vertical axis & 488 axis\_ref="deptht" & 489 field, grid families \\ 490 \hline 491 enabled & 492 switch on/off the output of a field or a file & 493 enabled=".TRUE." & 494 field, file families \\ 495 \hline 496 default\_value & 497 missing\_value definition & 498 default\_value="1.e20" & 752 operation & 753 type of temporal operation: average, accumulate, instantaneous, min, max and once & 754 operation="average" & 499 755 field family \\ 500 756 \hline 501 description & 502 just for information, not used & 503 description="ocean T grid variables" & 504 all tags \\ 505 \hline 506 domain\_ref & 507 refers to the id of a domain & 508 domain\_ref="grid\_T" & 509 field or grid families \\ 510 \hline 511 field\_ref= & 512 id of the field we want to add in a file & 513 field\_ref="toce" & 757 output\_freq & 758 operation frequency. units can be ts (timestep), y, mo, d, h, mi, s. 
& 759 output\_freq="1d12h" & 760 field family \\ 761 \hline 762 output\_level & 763 output priority of variables in a file: 0 (high) to 10 (low). All variables listed in the file with a level smaller or equal to output\_level will be output. Other variables won't be output even if they are listed in the file. & 764 output\_level="10"& 765 file family \\ 766 \hline 767 positive & 768 convention used for the orientation of vertival axis (positive downward in \NEMO). & 769 positive="down" & 770 axis family \\ 771 \hline 772 prec & 773 output precision: real 4 or real 8 & 774 prec="4" & 775 field family \\ 776 \hline 777 split\_freq & 778 frequency at which to temporally split output files. Units can be ts (timestep), y, mo, d, h, mi, s. Useful for long runs to prevent over-sized output files.& 779 split\_freq="1mo" & 780 file family \\ 781 \hline 782 split\_freq\-\_format & 783 date format used in the name of temporally split output files. Can be specified 784 using the following syntaxes: \%y, \%mo, \%d, \%h \%mi and \%s & 785 split\_freq\_format= "\%y\%mo\%d" & 786 file family \\ 787 \hline 788 src & 789 allow to include a file & 790 src="./field\_def.xml" & 791 accepted by all tags except simulation \\ 792 \hline 793 standard\_name & 794 define the standard\_name attribute in the NetCDF file & 795 standard\_name= "Eastward\_Sea\_Ice\_Transport" & 514 796 field \\ 515 797 \hline 516 grid\_ref & 517 refers to the id of a grid & 518 grid\_ref="grid\_T\_2D" & 519 field family \\ 520 \hline 521 group\_ref & 522 refer to a group of variables & 523 group\_ref="mooring" & 524 field\_group \\ 525 \hline 526 id & 527 allow to identify a tag & 528 id="nemo" & 529 accepted by all tags except simulation \\ 530 \hline 531 level & 532 output priority of a field: 0 (high) to 10 (low)& 533 level="1" & 534 field family \\ 535 \hline 536 long\_name & 537 define the long\_name attribute in the NetCDF file & 538 long\_name="Vertical T levels" & 539 field \\ 540 \hline 541 min\_digits & 542 specify the minimum of digits used in the core number in the name of the NetCDF file & 543 min\_digits="4" & 798 sync\_freq & 799 NetCDF file synchronization frequency (update of the time\_counter). units can be ts (timestep), y, mo, d, h, mi, s. & 800 sync\_freq="10d" & 544 801 file family \\ 545 802 \hline 546 name &547 name of a variable or a file. If the name of a file is undefined, its id is used as a name &548 name="tos" &549 field or file families \\550 \hline551 name\_suffix &552 suffix to be inserted after the name and before the cpu number and the ''.nc'' termination of a file &553 name\_suffix="\_myzoom" &554 file family \\555 \hline556 \end{tabular}557 \begin{tabular}{|p{2cm}|p{4cm}|p{4cm}|p{2cm}|}558 \hline559 803 attribute name & 560 804 description & … … 563 807 \hline 564 808 \hline 565 operation &566 type of temporal operation: average, accumulate, instantaneous, min, max and once &567 operation="average" &568 field family \\569 \hline570 output\_freq &571 operation frequency. units can be ts (timestep), y, mo, d, h, mi, s. &572 output\_freq="1d12h" &573 field family \\574 \hline575 output\_level &576 output priority of variables in a file: 0 (high) to 10 (low). All variables listed in the file with a level smaller or equal to output\_level will be output. Other variables won't be output even if they are listed in the file. &577 output\_level="10"&578 file family \\579 \hline580 positive &581 convention used for the orientation of vertival axis (positive downward in \NEMO). 
&582 positive="down" &583 axis family \\584 \hline585 prec &586 output precision: real 4 or real 8 &587 prec="4" &588 field family \\589 \hline590 split\_format &591 date format used in the name of splitted output files. can be spécified using the following syntaxe: \%y, \%mo, \%d, \%h \%mi and \%s &592 split\_format="\%yy\%mom\%dd" &593 file family \\594 \hline595 split\_freq &596 split output files frequency. units can be ts (timestep), y, mo, d, h, mi, s. &597 split\_freq="1mo" &598 file family \\599 \hline600 src &601 allow to include a file &602 src="./field\_def.xml" &603 accepted by all tags except simulation \\604 \hline605 standard\_name &606 define the standard\_name attribute in the NetCDF file &607 standard\_name="Eastward\_Sea\_Ice\_Transport" &608 field \\609 \hline610 sync\_freq &611 NetCDF file synchronization frequency (update of the time\_counter). units can be ts (timestep), y, mo, d, h, mi, s. &612 sync\_freq="10d" &613 file family \\614 \hline615 \end{tabular}616 \begin{tabular}{|p{2cm}|p{4cm}|p{4cm}|p{2cm}|}617 \hline618 attribute name &619 description &620 example &621 accepted by \\622 \hline623 \hline624 809 time\_origin & 625 810 specify the origin of the time counter & … … 628 813 \hline 629 814 type (1)& 630 specify if the output files must be splitted(multiple\_file) or not (one\_file) &815 specify if the output files are to be split spatially (multiple\_file) or not (one\_file) & 631 816 type="multiple\_file" & 632 817 file familly \\ … … 662 847 domain family \\ 663 848 \hline 664 \end{tabular} 665 666 \subsection{XIOS: the IO\_SERVER} 667 668 \subsubsection{Attached or detached mode?} 669 670 Iomput is based on \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS}, the io\_server developed by Yann Meurdesoif from IPSL. This server can be used in ''attached mode'' (as a library) or in ''detached mode'' (as an external executable on n cpus). The ''attached mode'' is simpler to use but much less efficient. If the type of file is ''multiple\_file'', then in attached(detached) mode, each NEMO(XIOS) process will output its own subdomain: if NEMO(XIOS) is runnning on N cores, the ouput files will be splitted into N files. If the type of file is ''one\_file'', the output files will be directly recombined into one unique file either in ''detached mode'' or ''attached mode''. 671 672 \subsubsection{Control of xios: the xios context in iodef.xml} 673 674 The control of the use of xios is done through the xios context in iodef.xml. 675 676 \begin{tabular}{|p{3cm}|p{6.5cm}|p{2.5cm}|} 677 \hline 678 variable name & 679 description & 680 example \\ 681 \hline 682 \hline 683 buffer\_size & 684 buffer size used by XIOS to send data from NEMO to XIOS. Larger is more efficient. Note that needed/used buffer sizes are summarized at the end of the job & 685 25000000 \\ 686 \hline 687 buffer\_server\_factor\_size & 688 ratio between NEMO and XIOS buffer size. Should be 2. & 689 2 \\ 690 \hline 691 info\_level & 692 verbosity level (0 to 100) & 693 0 \\ 694 \hline 695 using\_server & 696 activate attached(false) or detached(true) mode & 697 true \\ 698 \hline 699 using\_oasis & 700 xios is used with OASIS(true) or not (false) & 701 false \\ 702 \hline 703 oasis\_codes\_id & 704 when using oasis, define the identifier of NEMO in the namcouple. 
Not that the identifier of XIOS is xios.x & 705 oceanx \\ 706 \hline 707 \end{tabular} 708 709 \subsubsection{Number of cpu used by XIOS in detached mode} 710 711 The number of cores used by the xios is specified only when launching the model. The number or cores dedicated to XIOS should be from ~1/10 to ~1/50 of the number or cores dedicated to NEMO (according of the amount of data to be created). Here is an example of 2 cpus for the io\_server and 62 cpu for opa using mpirun: 712 713 \texttt{ mpirun -np 2 ./nemo.exe : -np 62 ./xios\_server.exe } 714 715 \subsection{Practical issues} 716 717 \subsubsection{Add your own outputs} 718 719 It is very easy to add you own outputs with iomput. 4 points must be followed. 720 \begin{description} 721 \item[1-] in NEMO code, add a \\ 722 \texttt{ CALL iom\_put( 'identifier', array ) } \\ 723 where you want to output a 2D or 3D array. 724 725 \item[2-] don't forget to add \\ 726 \texttt{ USE iom ! I/O manager library } \\ 727 in the list of used modules in the upper part of your module. 728 729 \item[3-] in the file\_definition part of the xml file, add the definition of your variable using the same identifier you used in the f90 code. 730 \vspace{-20pt} 731 \begin{alltt} {{\scriptsize 732 \begin{verbatim} 733 <field_definition> 734 ... 735 <field id="identifier" long_name="blabla" ... /> 736 ... 737 </field_definition> 738 \end{verbatim} 739 }}\end{alltt} 740 attributes axis\_ref and grid\_ref must be consistent with the size of the array to pass to iomput. 741 if your array is computed within the surface module each nn\_fsbc time\_step, 742 add the field definition within the group defined with the id ''SBC'': $<$group id=''SBC''...$>$ 743 744 \item[4-] add your field in one of the output files \\ 745 \vspace{-20pt} 746 \begin{alltt} {{\scriptsize 747 \begin{verbatim} 748 <file id="file1" .../> 749 ... 750 <field ref="identifier" /> 751 ... 752 </file> 753 \end{verbatim} 754 }}\end{alltt} 755 756 \end{description} 849 \end{longtable} 850 757 851 758 852 … … 809 903 domain size in any dimension. The algorithm used is: 810 904 905 \vspace{-20pt} 811 906 \begin{alltt} {{\scriptsize 812 907 \begin{verbatim} -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/ARCH/arch-ALTIX_NAUTILUS_MPT.fcm
r3922 r4219 47 47 %LD ifort 48 48 %FPPFLAGS -P -C -traditional 49 %LDFLAGS -lmpi -lstdc++ 49 %LDFLAGS -lmpi -lstdc++ -lcurl 50 50 %AR ar 51 51 %ARFLAGS -r -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/ARCH/arch-X64_CURIE.fcm
r3922 r4219 29 29 # - fcm variables are starting with a % (and not a $) 30 30 # 31 %NCDF_HOME /usr/local/netcdf-4.2_hdf5 32 %HDF5_HOME /usr/local/hdf5-1.8. 831 %NCDF_HOME /usr/local/netcdf-4.2_hdf5_parallel 32 %HDF5_HOME /usr/local/hdf5-1.8.9_parallel 33 33 %XIOS_HOME $WORKDIR/now/models/xios 34 34 %OASIS_HOME $WORKDIR/now/models/oa3mct -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/AMM12/EXP00/iodef.xml
r3940 r4219 62 62 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 63 63 <axis id="nfloat" long_name="Float number" unit="-" /> 64 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 64 65 </axis_definition> 65 66 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/GYRE/EXP00/iodef.xml
r4188 r4219 101 101 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 102 102 <axis id="nfloat" long_name="Float number" unit="-" /> 103 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 103 104 </axis_definition> 104 105 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/GYRE_BFM/EXP00/iodef.xml
r3940 r4219 62 62 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 63 63 <axis id="nfloat" long_name="Float number" unit="-" /> 64 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 64 65 </axis_definition> 65 66 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/GYRE_PISCES/EXP00/iodef.xml
r4188 r4219 137 137 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 138 138 <axis id="nfloat" long_name="Float number" unit="-" /> 139 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 139 140 </axis_definition> 140 141 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_ar5.xml
r3940 r4219 248 248 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 249 249 <axis id="nfloat" long_name="Float number" unit="-" /> 250 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 250 251 </axis_definition> 251 252 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_default.xml
- Property svn:mime-type deleted
- Property svn:keywords set to Id
r4188 r4219 138 138 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 139 139 <axis id="nfloat" long_name="Float number" unit="-" /> 140 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 140 141 </axis_definition> 141 142 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_demo.xml
- Property svn:mime-type deleted
- Property svn:keywords set to Id
r3940 r4219 44 44 <!-- mooring: automatic definition of the file name suffix based on id="0n180wT" --> 45 45 <!-- include a group of variables. see field_def.xml for mooring variables definition --> 46 <file id="0n180wT" 47 <field_group group_ref="mooring" />46 <file id="0n180wT"> 47 <field_group group_ref="mooring" domain_ref="0n180wT" /> 48 48 </file> 49 49 … … 53 53 <field_group id="EqT" domain_ref="EqT" > 54 54 <field field_ref="toce" name="votemper" axis_ref="deptht_myzoom" /> 55 <field field_ref="sss" /> 55 56 </field_group> 56 57 </file> … … 85 86 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 86 87 <axis id="nfloat" long_name="Float number" unit="-" /> 88 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 87 89 </axis_definition> 88 90 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_oldstyle.xml
- Property svn:mime-type deleted
- Property svn:keywords set to Id
r3940 r4219 116 116 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 117 117 <axis id="nfloat" long_name="Float number" unit="-" /> 118 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 118 119 </axis_definition> 119 120 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/ORCA2_LIM_CFC_C14b/EXP00/iodef.xml
r3771 r4219 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> … … 60 60 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 61 61 <axis id="nfloat" long_name="Float number" unit="-" /> 62 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 62 63 </axis_definition> 63 64 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/ORCA2_LIM_PISCES/EXP00/iodef.xml
r4188 r4219 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="10d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> … … 31 31 <file_group id="1d" output_freq="1d" output_level="10" enabled=".TRUE."> <!-- 1d files --> 32 32 33 <file id=" 1d_grid_T" name="auto" description="ocean T grid variables" >33 <file id="file1" name_suffix="_grid_T" description="ocean T grid variables" > 34 34 <field field_ref="sst" name="sosstsst" /> 35 35 <field field_ref="sss" name="sosaline" /> … … 37 37 </file> 38 38 39 <file id=" 1d_grid_U" name="auto" description="ocean U grid variables" >39 <file id="file2" name_suffix="_grid_U" description="ocean U grid variables" > 40 40 <field field_ref="suoce" name="vozocrtx" /> 41 41 </file> 42 42 43 <file id=" 1d_grid_V" name="auto" description="ocean V grid variables" >43 <file id="file3" name_suffix="_grid_V" description="ocean V grid variables" > 44 44 <field field_ref="svoce" name="vomecrty" /> 45 45 </file> … … 49 49 <file_group id="5d" output_freq="5d" output_level="10" enabled=".TRUE."> <!-- 5d files --> 50 50 51 <file id=" 5d_grid_T" name="auto" description="ocean T grid variables" >51 <file id="file4" name_suffix="_grid_T" description="ocean T grid variables" > 52 52 <field field_ref="toce" name="votemper" /> 53 53 <field field_ref="soce" name="vosaline" /> … … 72 72 </file> 73 73 74 <file id=" 5d_grid_U" name="auto" description="ocean U grid variables" >74 <file id="file5" name_suffix="_grid_U" description="ocean U grid variables" > 75 75 <field field_ref="uoce" name="vozocrtx" /> 76 76 <field field_ref="uoce_eiv" name="vozoeivu" /> … … 81 81 </file> 82 82 83 <file id=" 5d_grid_V" name="auto" description="ocean V grid variables" >83 <file id="file6" name_suffix="_grid_V" description="ocean V grid variables" > 84 84 <field field_ref="voce" name="vomecrty" /> 85 85 <field field_ref="voce_eiv" name="vomeeivv" /> … … 90 90 </file> 91 91 92 <file id=" 5d_grid_W" name="auto" description="ocean W grid variables" >92 <file id="file7" name_suffix="_grid_W" description="ocean W grid variables" > 93 93 <field field_ref="woce" name="vovecrtz" /> 94 94 <field field_ref="avt" name="votkeavt" /> … … 97 97 </file> 98 98 99 <file id=" 5d_icemod" name="auto" description="ice variables" >99 <file id="file8" name_suffix="_icemod" description="ice variables" > 100 100 <field field_ref="ice_pres" /> 101 101 <field field_ref="snowthic_cea" name="isnowthi" /> … … 117 117 <file_group id="1m" output_freq="1mo" output_level="10" enabled=".TRUE."> <!-- real monthly files --> 118 118 119 <file id=" 1m_ptrc_T" name="auto" description="pisces sms variables" >119 <file id="file9" name_suffix="_ptrc_T" description="pisces sms variables" > 120 120 <field field_ref="DIC" /> 121 121 <field field_ref="Alkalini" /> … … 129 129 </file> 130 130 131 <file id=" 1m_diad_T" name="auto" description="additional pisces diagnostics" >131 <file id="file10" name_suffix="_diad_T" description="additional pisces diagnostics" > 132 132 <field field_ref="Cflx" /> 133 133 <field field_ref="Dpco2" /> … … 142 142 <file_group id="1y" output_freq="1y" output_level="10" enabled=".TRUE."> <!-- real yearly files --> 143 143 144 <file id=" 1y_ptrc_T" name="auto" description="pisces sms variables" >144 <file id="file11" name_suffix="_ptrc_T" description="pisces sms variables" > 145 145 <field field_ref="DIC" /> 146 146 
<field field_ref="Alkalini" /> … … 169 169 </file> 170 170 171 <file id=" 1y_diad_T" name="auto" description="additional pisces diagnostics" >171 <file id="file12" name_suffix="_diad_T" description="additional pisces diagnostics" > 172 172 <field field_ref="PH" /> 173 173 <field field_ref="CO3" /> … … 239 239 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 240 240 <axis id="nfloat" long_name="Float number" unit="-" /> 241 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 241 242 </axis_definition> 242 243 -
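Note on the iodef.xml changes above: the per-file name="auto" convention is replaced by a single file-name template on <file_definition>; XIOS expands the @expname@, @freq@, @startdate@ and @enddate@ placeholders at run time and appends each file's name_suffix. A minimal sketch of the new convention (the experiment name and dates below are illustrative, not part of the changeset):

    <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@"
                     sync_freq="10d" min_digits="4">
      <file_group id="5d" output_freq="5d" output_level="10" enabled=".TRUE.">
        <!-- with expname=ORCA2 and a one-year run starting 1 January 0001,
             this file would be written as ORCA2_5d_00010101_00011231_grid_T.nc -->
        <file id="file4" name_suffix="_grid_T" description="ocean T grid variables">
          <field field_ref="sst" name="sosstsst"/>
        </file>
      </file_group>
    </file_definition>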
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/ORCA2_OFF_PISCES/EXP00/iodef.xml
r3771 r4219 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> … … 35 35 <file_group id="1m" output_freq="1mo" output_level="10" enabled=".TRUE."> <!-- real monthly files --> 36 36 37 <file id=" 1m_ptrc_T" name="auto" description="pisces sms variables" >37 <file id="file1" name_suffix="_ptrc_T" description="pisces sms variables" > 38 38 <field field_ref="DIC" /> 39 39 <field field_ref="Alkalini" /> … … 47 47 </file> 48 48 49 <file id=" 1m_diad_T" name="auto" description="additional pisces diagnostics" >49 <file id="file2" name_suffix="_diad_T" description="additional pisces diagnostics" > 50 50 <field field_ref="Cflx" /> 51 51 <field field_ref="Dpco2" /> … … 60 60 <file_group id="1y" output_freq="1y" output_level="10" enabled=".TRUE."> <!-- real yearly files --> 61 61 62 <file id=" 1y_ptrc_T" name="auto" description="pisces sms variables" >62 <file id="file3" name_suffix="_ptrc_T" description="pisces sms variables" > 63 63 <field field_ref="DIC" /> 64 64 <field field_ref="Alkalini" /> … … 87 87 </file> 88 88 89 <file id=" 1y_diad_T" name="auto" description="additional pisces diagnostics" >89 <file id="file4" name_suffix="_diad_T" description="additional pisces diagnostics" > 90 90 <field field_ref="PH" /> 91 91 <field field_ref="CO3" /> … … 157 157 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 158 158 <axis id="nfloat" long_name="Float number" unit="-" /> 159 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 159 160 </axis_definition> 160 161 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/ORCA2_SAS_LIM/EXP00/iodef.xml
r3771 r4219 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> … … 60 60 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 61 61 <axis id="nfloat" long_name="Float number" unit="-" /> 62 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 62 63 </axis_definition> 63 64 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/CONFIG/SHARED/field_def.xml
r4188 r4219 227 227 </field_group> 228 228 229 <!-- variables available with iceberg trajectories --> 230 <field_group id="icbvar" domain_ref="grid_T" > 231 <field id="berg_melt" long_name="icb melt rate of icebergs" unit="kg/m2/s" /> 232 <field id="berg_buoy_melt" long_name="icb buoyancy component of iceberg melt rate" unit="kg/m2/s" /> 233 <field id="berg_eros_melt" long_name="icb erosion component of iceberg melt rate" unit="kg/m2/s" /> 234 <field id="berg_conv_melt" long_name="icb convective component of iceberg melt rate" unit="kg/m2/s" /> 235 <field id="berg_virtual_area" long_name="icb virtual coverage by icebergs" unit="m2" /> 236 <field id="bits_src" long_name="icb mass source of bergy bits" unit="kg/m2/s" /> 237 <field id="bits_melt" long_name="icb melt rate of bergy bits" unit="kg/m2/s" /> 238 <field id="bits_mass" long_name="icb bergy bit density field" unit="kg/m2" /> 239 <field id="berg_mass" long_name="icb iceberg density field" unit="kg/m2" /> 240 <field id="calving" long_name="icb calving mass input" unit="kg/s" /> 241 <field id="berg_floating_melt" long_name="icb melt rate of icebergs + bits" unit="kg/m2/s" /> 242 <field id="berg_real_calving" long_name="icb calving into iceberg class" unit="kg/s" axis_ref="icbcla" /> 243 <field id="berg_stored_ice" long_name="icb accumulated ice mass by class" unit="kg" axis_ref="icbcla" /> 244 </field_group> 245 229 246 <!-- ptrc on T grid --> 230 247 … … 255 272 <field id="NH4" long_name="Ammonium Concentration" unit="mmol/m3" /> 256 273 274 <!-- PISCES with Kriest parametisation : variables available with key_kriest --> 275 <field id="Num" long_name="Number of organic particles" unit="nbr" /> 276 277 <!-- PISCES light : variables available with key_pisces_reduced --> 257 278 <field id="DET" long_name="Detritus" unit="mmol-N/m3" /> 258 279 <field id="DOM" long_name="Dissolved Organic Matter" unit="mmol-N/m3" /> 259 280 281 <!-- CFC11 : variables available with key_cfc --> 260 282 <field id="CFC11" long_name="CFC-11 Concentration" unit="umol/L" /> 283 <!-- Bomb C14 : variables available with key_c14b --> 261 284 <field id="C14B" long_name="Bomb C14 Concentration" unit="ration" /> 262 285 </field_group> 263 286 264 <!-- diad on T grid : variables available with key_diatrc --> 265 287 <!-- PISCES additional diagnostics on T grid --> 266 288 <field_group id="diad_T" grid_ref="grid_T_2D"> 267 289 <field id="PH" long_name="PH" unit="-" grid_ref="grid_T_3D" /> … … 317 339 <field id="Heup" long_name="Euphotic layer depth" unit="m" /> 318 340 <field id="Irondep" long_name="Iron deposition from dust" unit="mol/m2/s" /> 319 <field id="Ironsed" long_name="Iron deposition from sediment" unit="mol/m2/s" grid_ref="grid_T_3D" /> 320 321 <field id="FNO3PHY" long_name="FNO3PHY" unit="-" grid_ref="grid_T_3D" /> 322 <field id="FNH4PHY" long_name="FNH4PHY" unit="-" grid_ref="grid_T_3D" /> 323 <field id="FNH4NO3" long_name="FNH4NO3" unit="-" grid_ref="grid_T_3D" /> 324 <field id="TNO3PHY" long_name="TNO3PHY" unit="-" /> 325 <field id="TNH4PHY" long_name="TNH4PHY" unit="-" /> 326 <field id="TPHYDOM" long_name="TPHYDOM" unit="-" /> 327 <field id="TPHYNH4" long_name="TPHYNH4" unit="-" /> 328 <field id="TPHYZOO" long_name="TPHYZOO" unit="-" /> 329 <field id="TPHYDET" long_name="TPHYDET" unit="-" /> 330 <field id="TDETZOO" long_name="TDETZOO" unit="-" /> 331 <field id="TZOODET" long_name="TZOODET" unit="-" /> 332 <field id="TZOOBOD" long_name="TZOOBOD" unit="-" /> 333 <field id="TZOONH4" long_name="TZOONH4" unit="-" /> 334 <field id="TZOODOM" long_name="TZOODOM" 
unit="-" /> 335 <field id="TNH4NO3" long_name="TNH4NO3" unit="-" /> 336 <field id="TDOMNH4" long_name="TDOMNH4" unit="-" /> 337 <field id="TDETNH4" long_name="TDETNH4" unit="-" /> 338 <field id="TPHYTOT" long_name="TPHYTOT" unit="-" /> 339 <field id="TZOOTOT" long_name="TZOOTOT" unit="-" /> 340 <field id="SEDPOC" long_name="SEDPOC" unit="-" /> 341 <field id="TDETSED" long_name="TDETSED" unit="-" /> 342 343 <field id="qtrCFC11" long_name="Air-sea flux of CFC-11" unit="mol/m2/s" /> 344 <field id="qintCFC11" long_name="Cumulative air-sea flux of CFC-11" unit="mol/m2" /> 345 <field id="qtrC14b" long_name="Air-sea flux of Bomb C14" unit="mol/m2/s" /> 346 <field id="qintC14b" long_name="Cumulative air-sea flux of Bomb C14" unit="mol/m2" /> 347 <field id="fdecay" long_name="Radiactive decay of Bomb C14" unit="mol/m3" grid_ref="grid_T_3D" /> 341 <field id="Ironsed" long_name="Iron deposition from sediment" unit="mol/m2/s" grid_ref="grid_T_3D "/> 342 343 <!-- PISCES with Kriest parametisation : variables available with key_kriest --> 344 <field id="POCFlx" long_name="Particulate organic C flux" unit="mol/m2/s" grid_ref="grid_T_3D" /> 345 <field id="NumFlx" long_name="Particle number flux" unit="nbr/m2/s" grid_ref="grid_T_3D" /> 346 <field id="SiFlx" long_name="Biogenic Si flux" unit="mol/m2/s" grid_ref="grid_T_3D" /> 347 <field id="CaCO3Flx" long_name="CaCO3 flux" unit="mol/m2/s" grid_ref="grid_T_3D" /> 348 <field id="xnum" long_name="Number of particles in aggregats" unit="-" grid_ref="grid_T_3D" /> 349 <field id="W1" long_name="sinking speed of mass flux" unit="m2/s" grid_ref="grid_T_3D" /> 350 <field id="W2" long_name="sinking speed of number flux" unit="m2/s" grid_ref="grid_T_3D" /> 351 352 <!-- PISCES light : variables available with key_pisces_reduced --> 353 <field id="FNO3PHY" long_name="FNO3PHY" unit="-" grid_ref="grid_T_3D" /> 354 <field id="FNH4PHY" long_name="FNH4PHY" unit="-" grid_ref="grid_T_3D" /> 355 <field id="FNH4NO3" long_name="FNH4NO3" unit="-" grid_ref="grid_T_3D" /> 356 <field id="TNO3PHY" long_name="TNO3PHY" unit="-" /> 357 <field id="TNH4PHY" long_name="TNH4PHY" unit="-" /> 358 <field id="TPHYDOM" long_name="TPHYDOM" unit="-" /> 359 <field id="TPHYNH4" long_name="TPHYNH4" unit="-" /> 360 <field id="TPHYZOO" long_name="TPHYZOO" unit="-" /> 361 <field id="TPHYDET" long_name="TPHYDET" unit="-" /> 362 <field id="TDETZOO" long_name="TDETZOO" unit="-" /> 363 <field id="TZOODET" long_name="TZOODET" unit="-" /> 364 <field id="TZOOBOD" long_name="TZOOBOD" unit="-" /> 365 <field id="TZOONH4" long_name="TZOONH4" unit="-" /> 366 <field id="TZOODOM" long_name="TZOODOM" unit="-" /> 367 <field id="TNH4NO3" long_name="TNH4NO3" unit="-" /> 368 <field id="TDOMNH4" long_name="TDOMNH4" unit="-" /> 369 <field id="TDETNH4" long_name="TDETNH4" unit="-" /> 370 <field id="TPHYTOT" long_name="TPHYTOT" unit="-" /> 371 <field id="TZOOTOT" long_name="TZOOTOT" unit="-" /> 372 <field id="SEDPOC" long_name="SEDPOC" unit="-" /> 373 <field id="TDETSED" long_name="TDETSED" unit="-" /> 374 375 <!-- CFC11 : variables available with key_cfc --> 376 <field id="qtrCFC11" long_name="Air-sea flux of CFC-11" unit="mol/m2/s" /> 377 <field id="qintCFC11" long_name="Cumulative air-sea flux of CFC-11" unit="mol/m2" /> 378 <!-- Bomb C14 : variables available with key_c14b --> 379 <field id="qtrC14b" long_name="Air-sea flux of Bomb C14" unit="mol/m2/s" /> 380 <field id="qintC14b" long_name="Cumulative air-sea flux of Bomb C14" unit="mol/m2" /> 381 <field id="fdecay" long_name="Radiactive decay of Bomb C14" unit="mol/m3" 
grid_ref="grid_T_3D" /> 348 382 </field_group> 349 383 -
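The new icbvar group above declares the iceberg diagnostics produced by the ICB module, two of which (berg_real_calving, berg_stored_ice) carry the new icbcla class axis. A sketch of how such fields would reach the I/O server from the Fortran side, assuming the standard iom_put interface (the local array names here are illustrative):

    CALL iom_put( 'berg_melt', zberg_melt(:,:) )              ! 2-D T-grid field, kg/m2/s
    ! per-class field: the 3rd dimension must match the icbcla axis length
    CALL iom_put( 'berg_real_calving', zreal_calving(:,:,:) ) ! kg/s per iceberg class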
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/NST_SRC/agrif_opa_sponge.F90
r3918 r4219 185 185 INTEGER :: ji,jj,jk 186 186 INTEGER :: ispongearea, ilci, ilcj 187 REAL(wp) :: z1spongearea 188 REAL(wp), POINTER, DIMENSION(:,:) :: zlocalviscsponge 187 LOGICAL :: ll_spdone 188 REAL(wp) :: z1spongearea, zramp 189 REAL(wp), POINTER, DIMENSION(:,:) :: ztabramp 189 190 190 191 #if defined SPONGE || defined SPONGE_TOP 191 192 CALL wrk_alloc( jpi, jpj, zlocalviscsponge ) 193 194 ispongearea = 2 + 2 * Agrif_irhox() 195 ilci = nlci - ispongearea 196 ilcj = nlcj - ispongearea 197 z1spongearea = 1._wp / REAL( ispongearea - 2 ) 198 spbtr2(:,:) = 1. / ( e1t(:,:) * e2t(:,:) ) 192 ll_spdone=.TRUE. 193 IF (( .NOT. spongedoneT ).OR.( .NOT. spongedoneU )) THEN 194 ! Define ramp from boundaries towards domain interior 195 ! at T-points 196 ! Store it in ztabramp 197 ll_spdone=.FALSE. 198 199 CALL wrk_alloc( jpi, jpj, ztabramp ) 200 201 ispongearea = 2 + 2 * Agrif_irhox() 202 ilci = nlci - ispongearea 203 ilcj = nlcj - ispongearea 204 z1spongearea = 1._wp / REAL( ispongearea - 2 ) 205 spbtr2(:,:) = 1. / ( e1t(:,:) * e2t(:,:) ) 206 207 ztabramp(:,:) = 0. 208 209 IF( (nbondi == -1) .OR. (nbondi == 2) ) THEN 210 DO jj = 1, jpj 211 IF ( umask(2,jj,1) == 1._wp ) THEN 212 DO ji = 2, ispongearea 213 ztabramp(ji,jj) = ( ispongearea-ji ) * z1spongearea 214 END DO 215 ENDIF 216 ENDDO 217 ENDIF 218 219 IF( (nbondi == 1) .OR. (nbondi == 2) ) THEN 220 DO jj = 1, jpj 221 IF ( umask(nlci-2,jj,1) == 1._wp ) THEN 222 DO ji = ilci+1,nlci-1 223 zramp = (ji - (ilci+1) ) * z1spongearea 224 ztabramp(ji,jj) = MAX( ztabramp(ji,jj), zramp ) 225 ENDDO 226 ENDIF 227 ENDDO 228 ENDIF 229 230 IF( (nbondj == -1) .OR. (nbondj == 2) ) THEN 231 DO ji = 1, jpi 232 IF ( vmask(ji,2,1) == 1._wp ) THEN 233 DO jj = 2, ispongearea 234 zramp = ( ispongearea-jj ) * z1spongearea 235 ztabramp(ji,jj) = MAX( ztabramp(ji,jj), zramp ) 236 END DO 237 ENDIF 238 ENDDO 239 ENDIF 240 241 IF( (nbondj == 1) .OR. (nbondj == 2) ) THEN 242 DO ji = 1, jpi 243 IF ( vmask(ji,nlcj-2,1) == 1._wp ) THEN 244 DO jj = ilcj+1,nlcj-1 245 zramp = (jj - (ilcj+1) ) * z1spongearea 246 ztabramp(ji,jj) = MAX( ztabramp(ji,jj), zramp ) 247 END DO 248 ENDIF 249 ENDDO 250 ENDIF 251 252 ENDIF 199 253 200 254 ! Tracers 201 255 IF( .NOT. spongedoneT ) THEN 202 zlocalviscsponge(:,:) = 0.203 256 spe1ur(:,:) = 0. 204 257 spe2vr(:,:) = 0. 205 258 206 259 IF( (nbondi == -1) .OR. (nbondi == 2) ) THEN 207 DO ji = 2, ispongearea208 zlocalviscsponge(ji,:) = visc_tra * ( ispongearea-ji ) * z1spongearea209 ENDDO210 spe1ur(2:ispongearea-1,: ) = 0.5 * ( zlocalviscsponge(2:ispongearea-1,: ) &211 & + zlocalviscsponge(3:ispongearea ,: ) ) & 212 & * e2u(2:ispongearea-1,: ) / e1u(2:ispongearea-1,: )213 spe2vr(2:ispongearea ,1:jpjm1) = 0.5 * ( zlocalviscsponge(2:ispongearea ,1:jpjm1) &214 & + zlocalviscsponge(2:ispongearea,2 :jpj ) ) &215 & * e1v(2:ispongearea ,1:jpjm1) / e2v(2:ispongearea,1:jpjm1)260 spe1ur(2:ispongearea-1,: ) = visc_tra & 261 & * 0.5 * ( ztabramp(2:ispongearea-1,: ) & 262 & + ztabramp(3:ispongearea ,: ) ) & 263 & * e2u(2:ispongearea-1,:) / e1u(2:ispongearea-1,:) 264 265 spe2vr(2:ispongearea ,1:jpjm1 ) = visc_tra & 266 & * 0.5 * ( ztabramp(2:ispongearea ,1:jpjm1) & 267 & + ztabramp(2:ispongearea,2 :jpj ) ) & 268 & * e1v(2:ispongearea,1:jpjm1) / e2v(2:ispongearea,1:jpjm1) 216 269 ENDIF 217 270 218 271 IF( (nbondi == 1) .OR. 
(nbondi == 2) ) THEN 219 DO ji = ilci+1,nlci-1 220 zlocalviscsponge(ji,:) = visc_tra * (ji - (ilci+1) ) * z1spongearea 221 ENDDO 222 223 spe1ur(ilci+1:nlci-2,: ) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-2,:) & 224 & + zlocalviscsponge(ilci+2:nlci-1,:) ) & 225 & * e2u(ilci+1:nlci-2,:) / e1u(ilci+1:nlci-2,:) 226 227 spe2vr(ilci+1:nlci-1,1:jpjm1) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-1,1:jpjm1) & 228 & + zlocalviscsponge(ilci+1:nlci-1,2:jpj ) ) & 229 & * e1v(ilci+1:nlci-1,1:jpjm1) / e2v(ilci+1:nlci-1,1:jpjm1) 272 spe1ur(ilci+1:nlci-2,: ) = visc_tra & 273 & * 0.5 * ( ztabramp(ilci+1:nlci-2,: ) & 274 & + ztabramp(ilci+2:nlci-1,: ) ) & 275 & * e2u(ilci+1:nlci-2,:) / e1u(ilci+1:nlci-2,:) 276 277 spe2vr(ilci+1:nlci-1,1:jpjm1 ) = visc_tra & 278 & * 0.5 * ( ztabramp(ilci+1:nlci-1,1:jpjm1) & 279 & + ztabramp(ilci+1:nlci-1,2:jpj ) ) & 280 & * e1v(ilci+1:nlci-1,1:jpjm1) / e2v(ilci+1:nlci-1,1:jpjm1) 230 281 ENDIF 231 282 232 283 IF( (nbondj == -1) .OR. (nbondj == 2) ) THEN 233 DO jj = 2, ispongearea 234 zlocalviscsponge(:,jj) = visc_tra * ( ispongearea-jj ) * z1spongearea 235 ENDDO 236 spe1ur(1:jpim1,2:ispongearea ) = 0.5 * ( zlocalviscsponge(1:jpim1,2:ispongearea ) & 237 & + zlocalviscsponge(2:jpi ,2:ispongearea) ) & 284 spe1ur(1:jpim1,2:ispongearea ) = visc_tra & 285 & * 0.5 * ( ztabramp(1:jpim1,2:ispongearea ) & 286 & + ztabramp(2:jpi ,2:ispongearea ) ) & 238 287 & * e2u(1:jpim1,2:ispongearea) / e1u(1:jpim1,2:ispongearea) 239 288 240 spe2vr(: ,2:ispongearea-1) = 0.5 * ( zlocalviscsponge(:,2:ispongearea-1) & 241 & + zlocalviscsponge(:,3:ispongearea ) ) & 289 spe2vr(: ,2:ispongearea-1) = visc_tra & 290 & * 0.5 * ( ztabramp(: ,2:ispongearea-1) & 291 & + ztabramp(: ,3:ispongearea ) ) & 242 292 & * e1v(:,2:ispongearea-1) / e2v(:,2:ispongearea-1) 243 293 ENDIF 244 294 245 295 IF( (nbondj == 1) .OR. (nbondj == 2) ) THEN 246 DO jj = ilcj+1,nlcj-1 247 zlocalviscsponge(:,jj) = visc_tra * (jj - (ilcj+1) ) * z1spongearea 248 ENDDO 249 spe1ur(1:jpim1,ilcj+1:nlcj-1) = 0.5 * ( zlocalviscsponge(1:jpim1,ilcj+1:nlcj-1) & 250 & + zlocalviscsponge(2:jpi ,ilcj+1:nlcj-1) ) & 296 spe1ur(1:jpim1,ilcj+1:nlcj-1) = visc_tra & 297 & * 0.5 * ( ztabramp(1:jpim1,ilcj+1:nlcj-1) & 298 & + ztabramp(2:jpi ,ilcj+1:nlcj-1) ) & 251 299 & * e2u(1:jpim1,ilcj+1:nlcj-1) / e1u(1:jpim1,ilcj+1:nlcj-1) 252 spe2vr(: ,ilcj+1:nlcj-2) = 0.5 * ( zlocalviscsponge(:,ilcj+1:nlcj-2 ) & 253 & + zlocalviscsponge(:,ilcj+2:nlcj-1) ) & 300 301 spe2vr(: ,ilcj+1:nlcj-2) = visc_tra & 302 & * 0.5 * ( ztabramp(: ,ilcj+1:nlcj-2) & 303 & + ztabramp(: ,ilcj+2:nlcj-1) ) & 254 304 & * e1v(:,ilcj+1:nlcj-2) / e2v(:,ilcj+1:nlcj-2) 255 305 ENDIF … … 259 309 ! Dynamics 260 310 IF( .NOT. spongedoneU ) THEN 261 zlocalviscsponge(:,:) = 0.262 311 spe1ur2(:,:) = 0. 263 312 spe2vr2(:,:) = 0. 264 313 265 314 IF( (nbondi == -1) .OR. (nbondi == 2) ) THEN 266 DO ji = 2, ispongearea 267 zlocalviscsponge(ji,:) = visc_dyn * ( ispongearea-ji ) * z1spongearea 268 ENDDO 269 spe1ur2(2:ispongearea-1,: ) = 0.5 * ( zlocalviscsponge(2:ispongearea-1,: ) & 270 & + zlocalviscsponge(3:ispongearea,: ) ) 271 spe2vr2(2:ispongearea ,1:jpjm1) = 0.5 * ( zlocalviscsponge(2:ispongearea ,1:jpjm1) & 272 & + zlocalviscsponge(2:ispongearea,2:jpj) ) 315 spe1ur2(2:ispongearea-1,: ) = visc_dyn & 316 & * 0.5 * ( ztabramp(2:ispongearea-1,: ) & 317 & + ztabramp(3:ispongearea ,: ) ) 318 spe2vr2(2:ispongearea ,1:jpjm1) = visc_dyn & 319 & * 0.5 * ( ztabramp(2:ispongearea ,1:jpjm1) & 320 & + ztabramp(2:ispongearea ,2:jpj ) ) 273 321 ENDIF 274 322 275 323 IF( (nbondi == 1) .OR. 
(nbondi == 2) ) THEN 276 DO ji = ilci+1,nlci-1 277 zlocalviscsponge(ji,:) = visc_dyn * (ji - (ilci+1) ) * z1spongearea 278 ENDDO 279 spe1ur2(ilci+1:nlci-2,: ) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-2,:) & 280 & + zlocalviscsponge(ilci+2:nlci-1,:) ) 281 spe2vr2(ilci+1:nlci-1,1:jpjm1) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-1,1:jpjm1) & 282 & + zlocalviscsponge(ilci+1:nlci-1,2:jpj ) ) 324 spe1ur2(ilci+1:nlci-2 ,: ) = visc_dyn & 325 & * 0.5 * ( ztabramp(ilci+1:nlci-2, : ) & 326 & + ztabramp(ilci+2:nlci-1, : ) ) 327 spe2vr2(ilci+1:nlci-1 ,1:jpjm1) = visc_dyn & 328 & * 0.5 * ( ztabramp(ilci+1:nlci-1,1:jpjm1 ) & 329 & + ztabramp(ilci+1:nlci-1,2:jpj ) ) 283 330 ENDIF 284 331 285 332 IF( (nbondj == -1) .OR. (nbondj == 2) ) THEN 286 DO jj = 2, ispongearea 287 zlocalviscsponge(:,jj) = visc_dyn * ( ispongearea-jj ) * z1spongearea 288 ENDDO 289 spe1ur2(1:jpim1,2:ispongearea ) = 0.5 * ( zlocalviscsponge(1:jpim1,2:ispongearea) & 290 & + zlocalviscsponge(2:jpi,2:ispongearea) ) 291 spe2vr2(: ,2:ispongearea-1) = 0.5 * ( zlocalviscsponge(:,2:ispongearea-1) & 292 & + zlocalviscsponge(:,3:ispongearea) ) 333 spe1ur2(1:jpim1,2:ispongearea ) = visc_dyn & 334 & * 0.5 * ( ztabramp(1:jpim1,2:ispongearea ) & 335 & + ztabramp(2:jpi ,2:ispongearea ) ) 336 spe2vr2(: ,2:ispongearea-1) = visc_dyn & 337 & * 0.5 * ( ztabramp(: ,2:ispongearea-1) & 338 & + ztabramp(: ,3:ispongearea ) ) 293 339 ENDIF 294 340 295 341 IF( (nbondj == 1) .OR. (nbondj == 2) ) THEN 296 DO jj = ilcj+1,nlcj-1 297 zlocalviscsponge(:,jj) = visc_dyn * (jj - (ilcj+1) ) * z1spongearea 298 ENDDO 299 spe1ur2(1:jpim1,ilcj+1:nlcj-1) = 0.5 * ( zlocalviscsponge(1:jpim1,ilcj+1:nlcj-1) & 300 & + zlocalviscsponge(2:jpi,ilcj+1:nlcj-1) ) 301 spe2vr2(: ,ilcj+1:nlcj-2) = 0.5 * ( zlocalviscsponge(:,ilcj+1:nlcj-2 ) & 302 & + zlocalviscsponge(:,ilcj+2:nlcj-1) ) 342 spe1ur2(1:jpim1,ilcj+1:nlcj-1 ) = visc_dyn & 343 & * 0.5 * ( ztabramp(1:jpim1,ilcj+1:nlcj-1 ) & 344 & + ztabramp(2:jpi ,ilcj+1:nlcj-1 ) ) 345 spe2vr2(: ,ilcj+1:nlcj-2 ) = visc_dyn & 346 & * 0.5 * ( ztabramp(: ,ilcj+1:nlcj-2 ) & 347 & + ztabramp(: ,ilcj+2:nlcj-1 ) ) 303 348 ENDIF 304 349 spongedoneU = .TRUE. … … 306 351 ENDIF 307 352 ! 308 CALL wrk_dealloc( jpi, jpj, zlocalviscsponge)353 IF (.NOT.ll_spdone) CALL wrk_dealloc( jpi, jpj, ztabramp ) 309 354 ! 310 355 #endif -
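The sponge rework above builds a single non-dimensional ramp ztabramp at T-points, computed once and cached via spongedoneT/spongedoneU: it falls linearly from 1 at the nest edge to 0 over ispongearea points, with MAX taken where ramps from two boundaries overlap. The viscosity (visc_tra or visc_dyn) is then applied when the ramp is averaged onto U- and V-points, instead of being baked into per-direction work arrays. A condensed sketch for a western boundary, following the diff:

    DO ji = 2, ispongearea                   ! ramp: 1 at the boundary -> 0 inland
       ztabramp(ji,jj) = ( ispongearea - ji ) * z1spongearea
    END DO
    ! U-point sponge coefficient: T-point ramp averaged onto the U point
    spe1ur(ji,jj) = visc_tra * 0.5_wp * ( ztabramp(ji,jj) + ztabramp(ji+1,jj) )   &
       &                    * e2u(ji,jj) / e1u(ji,jj)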
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/ASM/asminc.F90
r3785 r4219 682 682 ! used to prevent the applied increments taking the temperature below the local freezing point 683 683 684 #if defined key_cice 685 fzptnz(:,:,:) = -1.8_wp 686 #else 687 DO jk = 1, jpk 688 DO jj = 1, jpj 689 DO ji = 1, jpk 690 fzptnz (ji,jj,jk) = ( -0.0575_wp + 1.710523e-3_wp * SQRT( tsn(ji,jj,jk,jp_sal) ) & 691 - 2.154996e-4_wp * tsn(ji,jj,jk,jp_sal) ) * tsn(ji,jj,jk,jp_sal) & 692 - 7.53e-4_wp * fsdepw(ji,jj,jk) ! (pressure in dbar) 693 END DO 694 END DO 695 END DO 696 #endif 684 DO jk=1, jpkm1 685 fzptnz (:,:,jk) = tfreez( tsn(:,:,jk,jp_sal), fsdept(:,:,jk) ) 686 ENDDO 697 687 698 688 IF ( ln_asmiau ) THEN -
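In asminc.F90 the hard-coded freezing-point polynomial (evaluated at all levels, and with an apparent bug in the removed code: the inner loop ran DO ji = 1, jpk where jpi was clearly intended) is replaced by the shared tfreez function, called with the local salinity and depth as shown in the diff. For reference, the removed expression was the standard UNESCO freezing-point formula,

    T_f(S,p) = \left( -0.0575 + 1.710523\times10^{-3}\sqrt{S}
               - 2.154996\times10^{-4}\,S \right) S - 7.53\times10^{-4}\,p

with S the practical salinity and p the pressure in dbar.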
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/BDY/bdydyn.F90
r3294 r4219 30 30 USE lbclnk ! ocean lateral boundary conditions (or mpp link) 31 31 USE in_out_manager ! 32 USE domvvl ! variable volume 32 33 33 34 IMPLICIT NONE … … 84 85 pu2d(:,:) = 0.e0 85 86 pv2d(:,:) = 0.e0 86 DO jk = 1, jpkm1 !! Vertically integrated momentum trends 87 pu2d(:,:) = pu2d(:,:) + fse3u(:,:,jk) * umask(:,:,jk) * ua(:,:,jk) 88 pv2d(:,:) = pv2d(:,:) + fse3v(:,:,jk) * vmask(:,:,jk) * va(:,:,jk) 89 END DO 90 pu2d(:,:) = pu2d(:,:) * phur(:,:) 91 pv2d(:,:) = pv2d(:,:) * phvr(:,:) 87 IF (lk_vvl) THEN 88 DO jk = 1, jpkm1 !! Vertically integrated momentum trends 89 pu2d(:,:) = pu2d(:,:) + fse3u_a(:,:,jk) * umask(:,:,jk) * ua(:,:,jk) 90 pv2d(:,:) = pv2d(:,:) + fse3v_a(:,:,jk) * vmask(:,:,jk) * va(:,:,jk) 91 END DO 92 pu2d(:,:) = pu2d(:,:) / ( hu_0(:,:) + sshu_a(:,:) + 1._wp - umask(:,:,1) ) 93 pv2d(:,:) = pv2d(:,:) / ( hv_0(:,:) + sshv_a(:,:) + 1._wp - vmask(:,:,1) ) 94 ELSE 95 DO jk = 1, jpkm1 !! Vertically integrated momentum trends 96 pu2d(:,:) = pu2d(:,:) + fse3u(:,:,jk) * umask(:,:,jk) * ua(:,:,jk) 97 pv2d(:,:) = pv2d(:,:) + fse3v(:,:,jk) * vmask(:,:,jk) * va(:,:,jk) 98 END DO 99 pu2d(:,:) = pu2d(:,:) * phur(:,:) 100 pv2d(:,:) = pv2d(:,:) * phvr(:,:) 101 ENDIF 92 102 DO jk = 1 , jpkm1 93 ua(:,:,jk) = ua(:,:,jk) - pu2d(:,:) 94 va(:,:,jk) = va(:,:,jk) - pv2d(:,:) 103 ua(:,:,jk) = ua(:,:,jk) - pu2d(:,:) * umask(:,:,jk) 104 va(:,:,jk) = va(:,:,jk) - pv2d(:,:) * vmask(:,:,jk) 95 105 END DO 96 106 -
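The bdydyn.F90 change makes the removal of the barotropic component consistent with the variable-volume case: the after velocity is integrated with the after scale factors fse3u_a/fse3v_a and divided by the after water-column depth, and the depth mean is now masked when subtracted level by level. The + 1._wp - umask(:,:,1) term in the denominator only keeps it non-zero over land, where the masked numerator is already zero. A reduced sketch of the U-component:

    pu2d(:,:) = 0._wp
    DO jk = 1, jpkm1                 ! vertically integrated 'after' transport
       pu2d(:,:) = pu2d(:,:) + fse3u_a(:,:,jk) * umask(:,:,jk) * ua(:,:,jk)
    END DO
    !                                ! divide by the 'after' depth (safe on land)
    pu2d(:,:) = pu2d(:,:) / ( hu_0(:,:) + sshu_a(:,:) + 1._wp - umask(:,:,1) )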
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/C1D/step_c1d.F90
r3680 r4219 59 59 60 60 indic = 0 ! reset to no error condition 61 IF( kstp == nit000 ) CALL iom_init ! iom_put initialization (must be done after nemo_init for AGRIF+XIOS+OASIS) 61 62 IF( kstp /= nit000 ) CALL day( kstp ) ! Calendar (day was already called at nit000 in day_init) 62 CALL iom_setkt( kstp ) ! say to iom that we are at time step kstp63 CALL iom_setkt( kstp - nit000 + 1 ) ! say to iom that we are at time step kstp 63 64 64 65 !>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> … … 106 107 !<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 107 108 CALL dia_wri( kstp ) ! ocean model: outputs 109 IF( lk_diahth ) CALL dia_hth( kstp ) ! Thermocline depth (20°C) 110 108 111 109 112 #if defined key_top -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/DIA/diadct.F90
r3680 r4219 42 42 #endif 43 43 #if defined key_lim3 44 USE ice_3 44 USE par_ice 45 USE ice 45 46 #endif 46 47 USE domvvl … … 484 485 ijglo = secs(jsec)%listPoint(jpt)%J + jpjzoom - 1 + njmpp - 1 485 486 WRITE(numout,*)' # I J : ',iiglo,ijglo 487 CALL FLUSH(numout) 486 488 ENDDO 487 489 ENDIF … … 606 608 607 609 !! * Local variables 608 INTEGER :: jk, jseg, jclass, &!loop on level/segment/classes610 INTEGER :: jk, jseg, jclass,jl, &!loop on level/segment/classes/ice categories 609 611 isgnu, isgnv ! 610 612 REAL(wp) :: zumid, zvmid, &!U/V velocity on a cell segment … … 771 773 772 774 zTnorm=zumid_ice*e2u(k%I,k%J)+zvmid_ice*e1v(k%I,k%J) 773 775 776 #if defined key_lim2 774 777 transports_2d(1,jsec,jseg) = transports_2d(1,jsec,jseg) + (zTnorm)* & 775 778 (1.0 - frld(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J)) & … … 778 781 transports_2d(2,jsec,jseg) = transports_2d(2,jsec,jseg) + (zTnorm)* & 779 782 (1.0 - frld(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J)) 783 #endif 784 #if defined key_lim3 785 DO jl=1,jpl 786 transports_2d(1,jsec,jseg) = transports_2d(1,jsec,jseg) + (zTnorm)* & 787 a_i(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) * & 788 ( ht_i(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) + & 789 ht_s(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) ) 790 791 transports_2d(2,jsec,jseg) = transports_2d(2,jsec,jseg) + (zTnorm)* & 792 a_i(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) 793 ENDDO 794 #endif 780 795 781 796 ENDIF !end of ice case -
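For LIM3 the section transports in diadct are now accumulated over the jpl ice categories, weighting the normal transport by each category's concentration a_i and, for the volume transport, by the combined ice and snow thickness. In the notation of the diff, with u_n the normal transport zTnorm,

    T_{vol}  = u_n \sum_{l=1}^{jpl} a_l \, ( h_{i,l} + h_{s,l} ), \qquad
    T_{area} = u_n \sum_{l=1}^{jpl} a_l

while the LIM2 branch keeps its lead-fraction (frld) form, now isolated behind key_lim2.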
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/DIA/diahsb.F90
r3625 r4219 21 21 USE bdy_par ! (for lk_bdy) 22 22 USE timing ! preformance summary 23 USE lib_fortran 24 USE sbcrnf 23 25 24 26 IMPLICIT NONE … … 33 35 REAL(dp) :: surf_tot , vol_tot ! 34 36 REAL(dp) :: frc_t , frc_s , frc_v ! global forcing trends 37 REAL(dp) :: frc_wn_t , frc_wn_s ! global forcing trends 35 38 REAL(dp) :: fact1 ! conversion factors 36 39 REAL(dp) :: fact21 , fact22 ! - - … … 38 41 REAL(dp), DIMENSION(:,:) , ALLOCATABLE :: surf , ssh_ini ! 39 42 REAL(dp), DIMENSION(:,:,:), ALLOCATABLE :: hc_loc_ini, sc_loc_ini, e3t_ini ! 43 REAL(dp), DIMENSION(:,:) , ALLOCATABLE :: ssh_hc_loc_ini, ssh_sc_loc_ini 40 44 41 45 !! * Substitutions … … 67 71 INTEGER :: jk ! dummy loop indice 68 72 REAL(dp) :: zdiff_hc , zdiff_sc ! heat and salt content variations 73 REAL(dp) :: zdiff_hc1 , zdiff_sc1 ! heat and salt content variations of ssh 69 74 REAL(dp) :: zdiff_v1 , zdiff_v2 ! volume variation 75 REAL(dp) :: zerr_hc1 , zerr_sc1 ! Non conservation due to free surface 70 76 REAL(dp) :: z1_rau0 ! local scalars 71 77 REAL(dp) :: zdeltat ! - - 72 78 REAL(dp) :: z_frc_trd_t , z_frc_trd_s ! - - 73 79 REAL(dp) :: z_frc_trd_v ! - - 80 REAL(dp) :: z_wn_trd_t , z_wn_trd_s ! - - 81 REAL(dp) :: z_ssh_hc , z_ssh_sc ! - - 74 82 !!--------------------------------------------------------------------------- 75 83 IF( nn_timing == 1 ) CALL timing_start('dia_hsb') … … 79 87 ! ------------------------- ! 80 88 z1_rau0 = 1.e0 / rau0 81 z_frc_trd_v = z1_rau0 * SUM( - ( emp(:,:) - rnf(:,:) ) * surf(:,:) ) ! volume fluxes 82 z_frc_trd_t = SUM( sbc_tsc(:,:,jp_tem) * surf(:,:) ) ! heat fluxes 83 z_frc_trd_s = SUM( sbc_tsc(:,:,jp_sal) * surf(:,:) ) ! salt fluxes 89 z_frc_trd_v = z1_rau0 * glob_sum( - ( emp(:,:) - rnf(:,:) ) * surf(:,:) ) ! volume fluxes 90 z_frc_trd_t = glob_sum( sbc_tsc(:,:,jp_tem) * surf(:,:) ) ! heat fluxes 91 z_frc_trd_s = glob_sum( sbc_tsc(:,:,jp_sal) * surf(:,:) ) ! salt fluxes 92 ! Add runoff heat & salt input 93 IF( ln_rnf ) z_frc_trd_t = z_frc_trd_t + glob_sum( rnf_tsc(:,:,jp_tem) * surf(:,:) ) 94 IF( ln_rnf_sal) z_frc_trd_s = z_frc_trd_s + glob_sum( rnf_tsc(:,:,jp_sal) * surf(:,:) ) 84 95 ! Add penetrative solar radiation 85 IF( ln_traqsr ) z_frc_trd_t = z_frc_trd_t + r1_rau0_rcp * SUM( qsr (:,:) * surf(:,:) )96 IF( ln_traqsr ) z_frc_trd_t = z_frc_trd_t + r1_rau0_rcp * glob_sum( qsr (:,:) * surf(:,:) ) 86 97 ! Add geothermal heat flux 87 IF( ln_trabbc ) z_frc_trd_t = z_frc_trd_t + r1_rau0_rcp * SUM( qgh_trd0(:,:) * surf(:,:) ) 88 IF( lk_mpp ) THEN 89 CALL mpp_sum( z_frc_trd_v ) 90 CALL mpp_sum( z_frc_trd_t ) 91 ENDIF 98 IF( ln_trabbc ) z_frc_trd_t = z_frc_trd_t + glob_sum( qgh_trd0(:,:) * surf(:,:) ) 99 IF( .NOT. lk_vvl ) THEN 100 z_wn_trd_t = - glob_sum( surf(:,:) * wn(:,:,1) * tsb(:,:,1,jp_tem) ) 101 z_wn_trd_s = - glob_sum( surf(:,:) * wn(:,:,1) * tsb(:,:,1,jp_sal) ) 102 ENDIF 103 92 104 frc_v = frc_v + z_frc_trd_v * rdt 93 105 frc_t = frc_t + z_frc_trd_t * rdt 94 106 frc_s = frc_s + z_frc_trd_s * rdt 107 ! ! Advection flux through fixed surface (z=0) 108 IF( .NOT. lk_vvl ) THEN 109 frc_wn_t = frc_wn_t + z_wn_trd_t * rdt 110 frc_wn_s = frc_wn_s + z_wn_trd_s * rdt 111 ENDIF 95 112 96 113 ! ----------------------- ! … … 100 117 zdiff_hc = 0.d0 101 118 zdiff_sc = 0.d0 119 102 120 ! volume variation (calculated with ssh) 103 zdiff_v1 = SUM( surf(:,:) * tmask(:,:,1) * ( sshn(:,:) - ssh_ini(:,:) ) ) 121 zdiff_v1 = glob_sum( surf(:,:) * ( sshn(:,:) - ssh_ini(:,:) ) ) 122 123 ! heat & salt content variation (associated with ssh) 124 IF( .NOT. 
lk_vvl ) THEN 125 z_ssh_hc = glob_sum( surf(:,:) * ( tsn(:,:,1,jp_tem) * sshn(:,:) - ssh_hc_loc_ini(:,:) ) ) 126 z_ssh_sc = glob_sum( surf(:,:) * ( tsn(:,:,1,jp_sal) * sshn(:,:) - ssh_sc_loc_ini(:,:) ) ) 127 ENDIF 128 104 129 DO jk = 1, jpkm1 105 106 zdiff_v2 = zdiff_v2 + SUM( surf(:,:) * tmask(:,:,jk) &130 ! volume variation (calculated with scale factors) 131 zdiff_v2 = zdiff_v2 + glob_sum( surf(:,:) * tmask(:,:,jk) & 107 132 & * ( fse3t_n(:,:,jk) & 108 133 & - e3t_ini(:,:,jk) ) ) 109 134 ! heat content variation 110 zdiff_hc = zdiff_hc + SUM( surf(:,:) * tmask(:,:,jk) &135 zdiff_hc = zdiff_hc + glob_sum( surf(:,:) * tmask(:,:,jk) & 111 136 & * ( fse3t_n(:,:,jk) * tsn(:,:,jk,jp_tem) & 112 137 & - hc_loc_ini(:,:,jk) ) ) 113 138 ! salt content variation 114 zdiff_sc = zdiff_sc + SUM( surf(:,:) * tmask(:,:,jk) &139 zdiff_sc = zdiff_sc + glob_sum( surf(:,:) * tmask(:,:,jk) & 115 140 & * ( fse3t_n(:,:,jk) * tsn(:,:,jk,jp_sal) & 116 141 & - sc_loc_ini(:,:,jk) ) ) 117 142 ENDDO 118 143 119 IF( lk_mpp ) THEN120 CALL mpp_sum( zdiff_hc )121 CALL mpp_sum( zdiff_sc )122 CALL mpp_sum( zdiff_v1 )123 CALL mpp_sum( zdiff_v2 )124 ENDIF125 126 144 ! Substract forcing from heat content, salt content and volume variations 127 145 zdiff_v1 = zdiff_v1 - frc_v 128 zdiff_v2 = zdiff_v2 - frc_v146 IF( lk_vvl ) zdiff_v2 = zdiff_v2 - frc_v 129 147 zdiff_hc = zdiff_hc - frc_t 130 148 zdiff_sc = zdiff_sc - frc_s 149 IF( .NOT. lk_vvl ) THEN 150 zdiff_hc1 = zdiff_hc + z_ssh_hc 151 zdiff_sc1 = zdiff_sc + z_ssh_sc 152 zerr_hc1 = z_ssh_hc - frc_wn_t 153 zerr_sc1 = z_ssh_sc - frc_wn_s 154 ENDIF 131 155 132 156 ! ----------------------- ! … … 134 158 ! ----------------------- ! 135 159 zdeltat = 1.e0 / ( ( kt - nit000 + 1 ) * rdt ) 136 WRITE(numhsb , 9020) kt , zdiff_hc / vol_tot , zdiff_hc * fact1 * zdeltat, & 137 & zdiff_sc / vol_tot , zdiff_sc * fact21 * zdeltat, zdiff_sc * fact22 * zdeltat, & 138 & zdiff_v1 , zdiff_v1 * fact31 * zdeltat, zdiff_v1 * fact32 * zdeltat, & 139 & zdiff_v2 , zdiff_v2 * fact31 * zdeltat, zdiff_v2 * fact32 * zdeltat 160 IF( lk_vvl ) THEN 161 WRITE(numhsb , 9020) kt , zdiff_hc / vol_tot , zdiff_hc * fact1 * zdeltat, & 162 & zdiff_sc / vol_tot , zdiff_sc * fact21 * zdeltat, zdiff_sc * fact22 * zdeltat, & 163 & zdiff_v1 , zdiff_v1 * fact31 * zdeltat, zdiff_v1 * fact32 * zdeltat, & 164 & zdiff_v2 , zdiff_v2 * fact31 * zdeltat, zdiff_v2 * fact32 * zdeltat 165 ELSE 166 WRITE(numhsb , 9030) kt , zdiff_hc1 / vol_tot , zdiff_hc1 * fact1 * zdeltat, & 167 & zdiff_sc1 / vol_tot , zdiff_sc1 * fact21 * zdeltat, zdiff_sc1 * fact22 * zdeltat, & 168 & zdiff_v1 , zdiff_v1 * fact31 * zdeltat, zdiff_v1 * fact32 * zdeltat, & 169 & zerr_hc1 / vol_tot , zerr_sc1 / vol_tot 170 ENDIF 140 171 141 172 IF ( kt == nitend ) CLOSE( numhsb ) … … 144 175 145 176 9020 FORMAT(I5,11D15.7) 177 9030 FORMAT(I5,10D15.7) 146 178 ! 147 179 END SUBROUTINE dia_hsb … … 179 211 180 212 IF( .NOT. ln_diahsb ) RETURN 213 IF( .NOT. lk_mpp_rep ) & 214 CALL ctl_stop (' Your global mpp_sum if performed in single precision - 64 bits -', & 215 & ' whereas the global sum to be precise must be done in double precision ',& 216 & ' please add key_mpp_rep') 181 217 182 218 ! ------------------- ! 183 219 ! 1 - Allocate memory ! 184 220 ! ------------------- ! 
185 ALLOCATE( hc_loc_ini(jpi,jpj,jpk), STAT=ierror ) 221 ALLOCATE( hc_loc_ini(jpi,jpj,jpk), sc_loc_ini(jpi,jpj,jpk), & 222 & ssh_hc_loc_ini(jpi,jpj), ssh_sc_loc_ini(jpi,jpj), & 223 & e3t_ini(jpi,jpj,jpk) , & 224 & surf(jpi,jpj), ssh_ini(jpi,jpj), STAT=ierror ) 186 225 IF( ierror > 0 ) THEN 187 226 CALL ctl_stop( 'dia_hsb: unable to allocate hc_loc_ini' ) ; RETURN 188 ENDIF189 ALLOCATE( sc_loc_ini(jpi,jpj,jpk), STAT=ierror )190 IF( ierror > 0 ) THEN191 CALL ctl_stop( 'dia_hsb: unable to allocate sc_loc_ini' ) ; RETURN192 ENDIF193 ALLOCATE( e3t_ini(jpi,jpj,jpk) , STAT=ierror )194 IF( ierror > 0 ) THEN195 CALL ctl_stop( 'dia_hsb: unable to allocate e3t_ini' ) ; RETURN196 ENDIF197 ALLOCATE( surf(jpi,jpj) , STAT=ierror )198 IF( ierror > 0 ) THEN199 CALL ctl_stop( 'dia_hsb: unable to allocate surf' ) ; RETURN200 ENDIF201 ALLOCATE( ssh_ini(jpi,jpj) , STAT=ierror )202 IF( ierror > 0 ) THEN203 CALL ctl_stop( 'dia_hsb: unable to allocate ssh_ini' ) ; RETURN204 227 ENDIF 205 228 … … 214 237 cl_name = 'heat_salt_volume_budgets.txt' ! name of output file 215 238 surf(:,:) = e1t(:,:) * e2t(:,:) * tmask(:,:,1) * tmask_i(:,:) ! masked surface grid cell area 216 surf_tot = SUM( surf(:,:) ) ! total ocean surface area239 surf_tot = glob_sum( surf(:,:) ) ! total ocean surface area 217 240 vol_tot = 0.d0 ! total ocean volume 218 241 DO jk = 1, jpkm1 219 vol_tot = vol_tot + SUM( surf(:,:) * tmask(:,:,jk) &220 & * fse3t_n(:,:,jk) )242 vol_tot = vol_tot + glob_sum( surf(:,:) * tmask(:,:,jk) & 243 & * fse3t_n(:,:,jk) ) 221 244 END DO 222 IF( lk_mpp ) THEN223 CALL mpp_sum( vol_tot )224 CALL mpp_sum( surf_tot )225 ENDIF226 245 227 246 CALL ctl_opn( numhsb , cl_name , 'UNKNOWN' , 'FORMATTED' , 'SEQUENTIAL' , 1 , numout , lwp , 1 ) 228 ! 12345678901234567890123456789012345678901234567890123456789012345678901234567890 -> 80 229 WRITE( numhsb, 9010 ) "kt | heat content budget | salt content budget ", & 230 ! 123456789012345678901234567890123456789012345 -> 45 231 & "| volume budget (ssh) ", & 232 ! 678901234567890123456789012345678901234567890 -> 45 233 & "| volume budget (e3t) " 234 WRITE( numhsb, 9010 ) " | [C] [W/m2] | [psu] [mmm/s] [SV] ", & 235 & "| [m3] [mmm/s] [SV] ", & 236 & "| [m3] [mmm/s] [SV] " 237 247 IF( lk_vvl ) THEN 248 ! 12345678901234567890123456789012345678901234567890123456789012345678901234567890 -> 80 249 WRITE( numhsb, 9010 ) "kt | heat content budget | salt content budget ", & 250 ! 123456789012345678901234567890123456789012345 -> 45 251 & "| volume budget (ssh) ", & 252 ! 678901234567890123456789012345678901234567890 -> 45 253 & "| volume budget (e3t) " 254 WRITE( numhsb, 9010 ) " | [C] [W/m2] | [psu] [mmm/s] [SV] ", & 255 & "| [m3] [mmm/s] [SV] ", & 256 & "| [m3] [mmm/s] [SV] " 257 ELSE 258 ! 12345678901234567890123456789012345678901234567890123456789012345678901234567890 -> 80 259 WRITE( numhsb, 9011 ) "kt | heat content budget | salt content budget ", & 260 ! 123456789012345678901234567890123456789012345 -> 45 261 & "| volume budget (ssh) ", & 262 ! 678901234567890123456789012345678901234567890 -> 45 263 & "| Non conservation due to free surface " 264 WRITE( numhsb, 9011 ) " | [C] [W/m2] | [psu] [mmm/s] [SV] ", & 265 & "| [m3] [mmm/s] [SV] ", & 266 & "| [heat - C] [salt - psu] " 267 ENDIF 238 268 ! --------------- ! 239 269 ! 3 - Conversions ! (factors will be multiplied by duration afterwards) … … 261 291 frc_t = 0.d0 ! heat content - - - - 262 292 frc_s = 0.d0 ! salt content - - - - 293 IF( .NOT. lk_vvl ) THEN 294 ssh_hc_loc_ini(:,:) = tsn(:,:,1,jp_tem) * ssh_ini(:,:) ! 
initial heat content associated with ssh 295 ssh_sc_loc_ini(:,:) = tsn(:,:,1,jp_sal) * ssh_ini(:,:) ! initial salt content associated with ssh 296 frc_wn_t = 0.d0 297 frc_wn_s = 0.d0 298 ENDIF 263 299 ! 264 300 9010 FORMAT(A80,A45,A45) 301 9011 FORMAT(A80,A45,A45) 265 302 ! 266 303 END SUBROUTINE dia_hsb_init -
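Two substantive changes run through diahsb.F90: (i) every global integral now goes through glob_sum (lib_fortran), replacing the local SUM plus mpp_sum pairs; with key_mpp_rep, which dia_hsb_init now enforces, the sums are reproducible across domain decompositions; (ii) in the fixed-volume case the budget is closed with the heat and salt advected through the z = 0 surface by wn, together with the ssh-related content terms (frc_wn_*, ssh_*_loc_ini). A sketch of the reproducible-sum pattern, assuming the lib_fortran interface:

    USE lib_fortran                  ! provides glob_sum
    ! one call replaces SUM(...) followed by CALL mpp_sum( z_frc_trd_v ):
    z_frc_trd_v = z1_rau0 * glob_sum( - ( emp(:,:) - rnf(:,:) ) * surf(:,:) )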
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/DOM/closea.F90
r3632 r4219 108 108 ncsi1(2) = 97 ; ncsj1(2) = 107 109 109 ncsi2(2) = 103 ; ncsj2(2) = 111 110 ncsir(2,1) = 110 ; ncsjr(2,1) = 111 111 ! ! Black Sea 1 : west part of the Black Sea 112 ncsnr(3) = 1 ; ncstt(3) = 2 ! (ie west of the cyclic b.c.) 113 ncsi1(3) = 174 ; ncsj1(3) = 107 ! put in Med Sea 114 ncsi2(3) = 181 ; ncsj2(3) = 112 115 ncsir(3,1) = 171 ; ncsjr(3,1) = 106 116 ! ! Black Sea 2 : est part of the Black Sea 117 ncsnr(4) = 1 ; ncstt(4) = 2 ! (ie est of the cyclic b.c.) 118 ncsi1(4) = 2 ; ncsj1(4) = 107 ! put in Med Sea 119 ncsi2(4) = 6 ; ncsj2(4) = 112 120 ncsir(4,1) = 171 ; ncsjr(4,1) = 106 110 ncsir(2,1) = 110 ; ncsjr(2,1) = 111 111 ! ! Black Sea (crossed by the cyclic boundary condition) 112 ncsnr(3:4) = 4 ; ncstt(3:4) = 2 ! put in Med Sea (north of Aegean Sea) 113 ncsir(3:4,1) = 171; ncsjr(3:4,1) = 106 ! 114 ncsir(3:4,2) = 170; ncsjr(3:4,2) = 106 115 ncsir(3:4,3) = 171; ncsjr(3:4,3) = 105 116 ncsir(3:4,4) = 170; ncsjr(3:4,4) = 105 117 ncsi1(3) = 174 ; ncsj1(3) = 107 ! 1 : west part of the Black Sea 118 ncsi2(3) = 181 ; ncsj2(3) = 112 ! (ie west of the cyclic b.c.) 119 ncsi1(4) = 2 ; ncsj1(4) = 107 ! 2 : east part of the Black Sea 120 ncsi2(4) = 6 ; ncsj2(4) = 112 ! (ie east of the cyclic b.c.) 121 122 123 121 124 ! ! ======================= 122 125 CASE ( 4 ) ! ORCA_R4 configuration … … 372 375 REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) :: p_rnfmsk ! river runoff mask (rnfmsk array) 373 376 ! 374 INTEGER :: jc, jn ! dummy loop indices 375 INTEGER :: ii, ij ! temporary integer 377 INTEGER :: jc, jn, ji, jj ! dummy loop indices 376 378 !!---------------------------------------------------------------------- 377 379 ! … … 379 381 IF( ncstt(jc) >= 1 ) THEN ! runoff mask set to 1 at closed sea outflows 380 382 DO jn = 1, 4 381 ii = mi0( ncsir(jc,jn) ) 382 ij = mj0( ncsjr(jc,jn) ) 383 p_rnfmsk(ii,ij) = MAX( p_rnfmsk(ii,ij), 1.0_wp ) 383 DO jj = mj0( ncsjr(jc,jn) ), mj1( ncsjr(jc,jn) ) 384 DO ji = mi0( ncsir(jc,jn) ), mi1( ncsir(jc,jn) ) 385 p_rnfmsk(ji,jj) = MAX( p_rnfmsk(ji,jj), 1.0_wp ) 386 END DO 387 END DO 384 388 END DO 385 389 ENDIF -
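In closea.F90 the Black Sea outflow is redistributed over four target points and, because the basin straddles the cyclic east-west boundary, sbc_clo now writes the runoff mask through the mi0/mi1 and mj0/mj1 global-to-local index functions: on a processor that does not own a target point the bounds come back as an empty range, so the loop is simply skipped and the same code works for any MPP decomposition. The idiom, as used in the diff:

    DO jj = mj0( ncsjr(jc,jn) ), mj1( ncsjr(jc,jn) )      ! empty range when the
       DO ji = mi0( ncsir(jc,jn) ), mi1( ncsir(jc,jn) )   ! point is off-domain
          p_rnfmsk(ji,jj) = MAX( p_rnfmsk(ji,jj), 1.0_wp )
       END DO
    END DO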
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/DOM/daymod.F90
r3851 r4219 238 238 nday_year = 1 239 239 nsec_year = ndt05 240 IF( nsec1jan000 >= 2 * (2**30 - nsecd * nyear_len(1) / 2 ) ) THEN ! test integer 4 max value 241 CALL ctl_stop( 'The number of seconds between Jan. 1st 00h of nit000 year and Jan. 1st 00h ', & 242 & 'of the current year is exceeding the INTEGER 4 max VALUE: 2^31-1 -> 68.09 years in seconds', & 243 & 'You must do a restart at higher frequency (or remove this STOP and recompile everything in I8)' ) 244 ENDIF 240 245 nsec1jan000 = nsec1jan000 + nsecd * nyear_len(1) 241 246 IF( nleapy == 1 ) CALL day_mth -
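daymod.F90 gains a guard against 4-byte integer overflow of the cumulative second counter: the test is written as 2 * (2**30 - nsecd * nyear_len(1) / 2), which is equivalent to nsec1jan000 + nsecd * nyear_len(1) >= 2**31 without ever forming 2**31 or the incremented counter in INTEGER*4 arithmetic. The 68.09 years quoted in the message is simply the INTEGER*4 limit converted with 365-day years:

    \frac{2^{31}-1}{86400 \times 365} = \frac{2147483647}{31536000} \approx 68.1\ \text{years}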
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/DOM/domvvl.F90
r3294 r4219 192 192 INTEGER :: iku, ikv ! local integers 193 193 INTEGER :: ii0, ii1, ij0, ij1 ! temporary integers 194 REAL(wp) :: zvt 194 REAL(wp) :: zvt, zvtip1, zvtjp1 ! local scalars 195 195 !!---------------------------------------------------------------------- 196 196 ! … … 202 202 WRITE(numout,*) '~~~~~~~~~ ' 203 203 pe3u_b(:,:,jpk) = fse3u_0(:,:,jpk) 204 pe3v_b(:,:,jpk) = fse3u_0(:,:,jpk) 204 pe3v_b(:,:,jpk) = fse3v_0(:,:,jpk) 205 205 ENDIF 206 206 … 208 208 DO jj = 2, jpjm1 209 209 DO ji = fs_2, fs_jpim1 210 zvt = fse3t_b(ji,jj,jk) * e1e2t(ji,jj) 211 pe3u_b(ji,jj,jk) = 0.5_wp * ( zvt + fse3t_b(ji+1,jj,jk) * e1e2t(ji+1,jj) ) / ( e1u(ji,jj) * e2u(ji,jj) ) 212 pe3v_b(ji,jj,jk) = 0.5_wp * ( zvt + fse3t_b(ji,jj+1,jk) * e1e2t(ji,jj+1) ) / ( e1v(ji,jj) * e2v(ji,jj) ) 210 zvt = ( fse3t_b(ji ,jj ,jk) - fse3t_0(ji ,jj ,jk) ) * e1e2t(ji ,jj ) 211 zvtip1 = ( fse3t_b(ji+1,jj ,jk) - fse3t_0(ji+1,jj ,jk) ) * e1e2t(ji+1,jj ) 212 zvtjp1 = ( fse3t_b(ji ,jj+1,jk) - fse3t_0(ji ,jj+1,jk) ) * e1e2t(ji ,jj+1) 213 pe3u_b(ji,jj,jk) = fse3u_0(ji,jj,jk) + 0.5_wp * ( zvt + zvtip1 ) / ( e1u(ji,jj) * e2u(ji,jj) ) 214 pe3v_b(ji,jj,jk) = fse3v_0(ji,jj,jk) + 0.5_wp * ( zvt + zvtjp1 ) / ( e1v(ji,jj) * e2v(ji,jj) ) 213 215 END DO 214 216 END DO 215 217 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/DOM/domzgr.F90
r3926 r4219 1102 1102 INTEGER :: iip1, ijp1, iim1, ijm1 ! temporary integers 1103 1103 REAL(wp) :: zrmax, ztaper ! temporary scalars 1104 REAL(wp) :: zrfact ! temporary scalars 1105 REAL(wp), POINTER, DIMENSION(:,: ) :: ztmpi1, ztmpi2, ztmpj1, ztmpj2 1106 1107 ! 1108 REAL(wp), POINTER, DIMENSION(:,: ) :: zenv, zri, zrj, zhbat 1104 ! 1105 REAL(wp), POINTER, DIMENSION(:,: ) :: zenv, ztmp, zmsk, zri, zrj, zhbat 1109 1106 1110 1107 NAMELIST/namzgr_sco/ln_s_sh94, ln_s_sf12, ln_sigcrit, rn_sbot_min, rn_sbot_max, rn_hc, rn_rmax,rn_theta, & … … 1114 1111 IF( nn_timing == 1 ) CALL timing_start('zgr_sco') 1115 1112 ! 1116 CALL wrk_alloc( jpi, jpj, ztmpi1, ztmpi2, ztmpj1, ztmpj2 ) 1117 CALL wrk_alloc( jpi, jpj, zenv, zri, zrj, zhbat ) 1118 ! 1113 CALL wrk_alloc( jpi, jpj, zenv, ztmp, zmsk, zri, zrj, zhbat ) 1114 ! 1119 1115 REWIND( numnam ) ! Read Namelist namzgr_sco : sigma-stretching parameters 1120 1116 READ ( numnam, namzgr_sco ) … … 1163 1159 ! ! ============================= 1164 1160 ! use r-value to create hybrid coordinates 1165 ! DO jj = 1, jpj 1166 ! DO ji = 1, jpi 1167 ! zenv(ji,jj) = MAX( bathy(ji,jj), 0._wp ) 1168 ! END DO 1169 ! END DO 1170 ! CALL lbc_lnk( zenv, 'T', 1._wp ) 1171 zenv(:,:) = bathy(:,:) 1161 DO jj = 1, jpj 1162 DO ji = 1, jpi 1163 zenv(ji,jj) = MAX( bathy(ji,jj), rn_sbot_min ) 1164 END DO 1165 END DO 1172 1166 ! 1173 1167 ! Smooth the bathymetry (if required) … … 1177 1171 jl = 0 1178 1172 zrmax = 1._wp 1179 ! 1180 ! set scaling factor used in reducing vertical gradients 1181 zrfact = ( 1._wp - rn_rmax ) / ( 1._wp + rn_rmax ) 1182 ! 1183 ! initialise temporary evelope depth arrays 1184 ztmpi1(:,:) = zenv(:,:) 1185 ztmpi2(:,:) = zenv(:,:) 1186 ztmpj1(:,:) = zenv(:,:) 1187 ztmpj2(:,:) = zenv(:,:) 1188 ! 1189 ! initialise temporary r-value arrays 1190 zri(:,:) = 1._wp 1191 zrj(:,:) = 1._wp 1192 ! ! ================ ! 1193 DO WHILE( jl <= 10000 .AND. ( zrmax - rn_rmax ) > 1.e-8_wp ) ! Iterative loop ! 1194 ! ! ================ ! 1173 ! ! ================ ! 1174 DO WHILE( jl <= 10000 .AND. zrmax > rn_rmax ) ! Iterative loop ! 1175 ! ! ================ ! 1195 1176 jl = jl + 1 1196 1177 zrmax = 0._wp 1197 ! we set zrmax from previous r-values (zri abd zrj) first 1198 ! if set after current r-value calculation (as previously) 1199 ! we could exit DO WHILE prematurely before checking r-value 1200 ! of current zenv 1201 DO jj = 1, nlcj 1202 DO ji = 1, nlci 1203 zrmax = MAX( zrmax, ABS(zri(ji,jj)), ABS(zrj(ji,jj)) ) 1204 END DO 1205 END DO 1206 zri(:,:) = 0._wp 1207 zrj(:,:) = 0._wp 1178 zmsk(:,:) = 0._wp 1208 1179 DO jj = 1, nlcj 1209 1180 DO ji = 1, nlci 1210 1181 iip1 = MIN( ji+1, nlci ) ! force zri = 0 on last line (ji=ncli+1 to jpi) 1211 1182 ijp1 = MIN( jj+1, nlcj ) ! force zrj = 0 on last raw (jj=nclj+1 to jpj) 1212 IF( (zenv(ji,jj) > 0._wp) .AND. (zenv(iip1,jj) > 0._wp)) THEN 1213 zri(ji,jj) = ( zenv(iip1,jj ) - zenv(ji,jj) ) / ( zenv(iip1,jj ) + zenv(ji,jj) ) 1214 END IF 1215 IF( (zenv(ji,jj) > 0._wp) .AND. 
(zenv(ji,ijp1) > 0._wp)) THEN 1216 zrj(ji,jj) = ( zenv(ji ,ijp1) - zenv(ji,jj) ) / ( zenv(ji ,ijp1) + zenv(ji,jj) ) 1217 END IF 1218 IF( zri(ji,jj) > rn_rmax ) ztmpi1(ji ,jj ) = zenv(iip1,jj ) * zrfact 1219 IF( zri(ji,jj) < -rn_rmax ) ztmpi2(iip1,jj ) = zenv(ji ,jj ) * zrfact 1220 IF( zrj(ji,jj) > rn_rmax ) ztmpj1(ji ,jj ) = zenv(ji ,ijp1) * zrfact 1221 IF( zrj(ji,jj) < -rn_rmax ) ztmpj2(ji ,ijp1) = zenv(ji ,jj ) * zrfact 1183 zri(ji,jj) = ABS( zenv(iip1,jj ) - zenv(ji,jj) ) / ( zenv(iip1,jj ) + zenv(ji,jj) ) 1184 zrj(ji,jj) = ABS( zenv(ji ,ijp1) - zenv(ji,jj) ) / ( zenv(ji ,ijp1) + zenv(ji,jj) ) 1185 zrmax = MAX( zrmax, zri(ji,jj), zrj(ji,jj) ) 1186 IF( zri(ji,jj) > rn_rmax ) zmsk(ji ,jj ) = 1._wp 1187 IF( zri(ji,jj) > rn_rmax ) zmsk(iip1,jj ) = 1._wp 1188 IF( zrj(ji,jj) > rn_rmax ) zmsk(ji ,jj ) = 1._wp 1189 IF( zrj(ji,jj) > rn_rmax ) zmsk(ji ,ijp1) = 1._wp 1222 1190 END DO 1223 1191 END DO 1224 1192 IF( lk_mpp ) CALL mpp_max( zrmax ) ! max over the global domain 1193 ! lateral boundary condition on zmsk: keep 1 along closed boundary (use of MAX) 1194 ztmp(:,:) = zmsk(:,:) ; CALL lbc_lnk( zmsk, 'T', 1._wp ) 1195 DO jj = 1, nlcj 1196 DO ji = 1, nlci 1197 zmsk(ji,jj) = MAX( zmsk(ji,jj), ztmp(ji,jj) ) 1198 END DO 1199 END DO 1225 1200 ! 1226 IF(lwp)WRITE(numout,*) 'zgr_sco : iter= ',jl, ' rmax= ', zrmax 1201 IF(lwp)WRITE(numout,*) 'zgr_sco : iter= ',jl, ' rmax= ', zrmax, ' nb of pt= ', INT( SUM(zmsk(:,:) ) ) 1227 1202 ! 1228 1203 DO jj = 1, nlcj 1229 1204 DO ji = 1, nlci 1230 zenv(ji,jj) = MAX(zenv(ji,jj), ztmpi1(ji,jj), ztmpi2(ji,jj), ztmpj1(ji,jj), ztmpj2(ji,jj) ) 1205 iip1 = MIN( ji+1, nlci ) ! last line (ji=nlci) 1206 ijp1 = MIN( jj+1, nlcj ) ! last raw (jj=nlcj) 1207 iim1 = MAX( ji-1, 1 ) ! first line (ji=nlci) 1208 ijm1 = MAX( jj-1, 1 ) ! first raw (jj=nlcj) 1209 IF( zmsk(ji,jj) == 1._wp ) THEN 1210 ztmp(ji,jj) = ( & 1211 & zenv(iim1,ijp1)*zmsk(iim1,ijp1) + zenv(ji,ijp1)*zmsk(ji,ijp1) + zenv(iip1,ijp1)*zmsk(iip1,ijp1) & 1212 & + zenv(iim1,jj )*zmsk(iim1,jj ) + zenv(ji,jj )* 2._wp + zenv(iip1,jj )*zmsk(iip1,jj ) & 1213 & + zenv(iim1,ijm1)*zmsk(iim1,ijm1) + zenv(ji,ijm1)*zmsk(ji,ijm1) + zenv(iip1,ijm1)*zmsk(iip1,ijm1) & 1214 & ) / ( & 1215 & zmsk(iim1,ijp1) + zmsk(ji,ijp1) + zmsk(iip1,ijp1) & 1216 & + zmsk(iim1,jj ) + 2._wp + zmsk(iip1,jj ) & 1217 & + zmsk(iim1,ijm1) + zmsk(ji,ijm1) + zmsk(iip1,ijm1) & 1218 & ) 1219 ENDIF 1231 1220 END DO 1232 1221 END DO 1233 1222 ! 1234 CALL lbc_lnk( zenv, 'T', 1._wp ) 1223 DO jj = 1, nlcj 1224 DO ji = 1, nlci 1225 IF( zmsk(ji,jj) == 1._wp ) zenv(ji,jj) = MAX( ztmp(ji,jj), bathy(ji,jj) ) 1226 END DO 1227 END DO 1228 ! 1229 ! Apply lateral boundary condition CAUTION: keep the value when the lbc field is zero 1230 ztmp(:,:) = zenv(:,:) ; CALL lbc_lnk( zenv, 'T', 1._wp ) 1231 DO jj = 1, nlcj 1232 DO ji = 1, nlci 1233 IF( zenv(ji,jj) == 0._wp ) zenv(ji,jj) = ztmp(ji,jj) 1234 END DO 1235 END DO 1235 1236 ! ! ================ ! 1236 1237 END DO ! End loop ! 1237 1238 ! ! ================ ! 1238 1239 ! 1239 ! DO jj = 1, jpj 1240 ! DO ji = 1, jpi 1241 ! zenv(ji,jj) = MAX( zenv(ji,jj), rn_sbot_min ) ! set all points to avoid undefined scale values 1242 ! END DO 1243 ! END DO 1240 ! Fill ghost rows with appropriate values to avoid undefined e3 values with some mpp decompositions 1241 DO ji = nlci+1, jpi 1242 zenv(ji,1:nlcj) = zenv(nlci,1:nlcj) 1243 END DO 1244 ! 1245 DO jj = nlcj+1, jpj 1246 zenv(:,jj) = zenv(:,nlcj) 1247 END DO 1244 1248 ! 1245 1249 ! 
Envelope bathymetry saved in hbatt 1246 1250 hbatt(:,:) = zenv(:,:) 1247 1248 1251 IF( MINVAL( gphit(:,:) ) * MAXVAL( gphit(:,:) ) <= 0._wp ) THEN 1249 1252 CALL ctl_warn( ' s-coordinates are tapered in vicinity of the Equator' ) 1250 1253 DO jj = 1, jpj 1251 1254 DO ji = 1, jpi 1252 ztaper = EXP( -(gphit(ji,jj)/8._wp)**2 )1255 ztaper = EXP( -(gphit(ji,jj)/8._wp)**2._wp ) 1253 1256 hbatt(ji,jj) = rn_sbot_max * ztaper + hbatt(ji,jj) * ( 1._wp - ztaper ) 1254 1257 END DO … … 1365 1368 fsde3w(:,:,:) = gdep3w(:,:,:) 1366 1369 ! 1367 where (e3t (:,:,:).eq.0.0) e3t(:,:,:) = 1.0 1368 where (e3u (:,:,:).eq.0.0) e3u(:,:,:) = 1.0 1369 where (e3v (:,:,:).eq.0.0) e3v(:,:,:) = 1.0 1370 where (e3f (:,:,:).eq.0.0) e3f(:,:,:) = 1.0 1371 where (e3w (:,:,:).eq.0.0) e3w(:,:,:) = 1.0 1372 where (e3uw (:,:,:).eq.0.0) e3uw(:,:,:) = 1.0 1373 where (e3vw (:,:,:).eq.0.0) e3vw(:,:,:) = 1.0 1374 1370 where (e3t (:,:,:).eq.0.0) e3t(:,:,:) = 1._wp 1371 where (e3u (:,:,:).eq.0.0) e3u(:,:,:) = 1._wp 1372 where (e3v (:,:,:).eq.0.0) e3v(:,:,:) = 1._wp 1373 where (e3f (:,:,:).eq.0.0) e3f(:,:,:) = 1._wp 1374 where (e3w (:,:,:).eq.0.0) e3w(:,:,:) = 1._wp 1375 where (e3uw (:,:,:).eq.0.0) e3uw(:,:,:) = 1._wp 1376 where (e3vw (:,:,:).eq.0.0) e3vw(:,:,:) = 1._wp 1377 1378 #if defined key_agrif 1379 ! Ensure meaningful vertical scale factors in ghost lines/columns 1380 IF( .NOT. Agrif_Root() ) THEN 1381 ! 1382 IF((nbondi == -1).OR.(nbondi == 2)) THEN 1383 e3u(1,:,:) = e3u(2,:,:) 1384 ENDIF 1385 ! 1386 IF((nbondi == 1).OR.(nbondi == 2)) THEN 1387 e3u(nlci-1,:,:) = e3u(nlci-2,:,:) 1388 ENDIF 1389 ! 1390 IF((nbondj == -1).OR.(nbondj == 2)) THEN 1391 e3v(:,1,:) = e3v(:,2,:) 1392 ENDIF 1393 ! 1394 IF((nbondj == 1).OR.(nbondj == 2)) THEN 1395 e3v(:,nlcj-1,:) = e3v(:,nlcj-2,:) 1396 ENDIF 1397 ! 1398 ENDIF 1399 #endif 1375 1400 1376 1401 fsdept(:,:,:) = gdept (:,:,:) … … 1421 1446 WRITE(numout,"(10x,i4,4f9.2)") ( jk, fsdept(1,1,jk), fsdepw(1,1,jk), & 1422 1447 & fse3t (1,1,jk), fse3w (1,1,jk), jk=1,jpk ) 1423 DO jj = mj0(20), mj1(20) 1424 DO ji = mi0(20), mi1(20) 1448 iip1 = MIN(20, jpiglo-1) ! for config with i smaller than 20 points 1449 ijp1 = MIN(20, jpjglo-1) ! for config with j smaller than 20 points 1450 DO jj = mj0(ijp1), mj1(ijp1) 1451 DO ji = mi0(iip1), mi1(iip1) 1425 1452 WRITE(numout,*) 1426 WRITE(numout,*) ' domzgr: vertical coordinates : point (20,20,k) bathy = ', bathy(ji,jj), hbatt(ji,jj) 1453 WRITE(numout,*) ' domzgr: vertical coordinates : point (',iip1,',',ijp1,',k) bathy = ', & 1454 & bathy(ji,jj), hbatt(ji,jj) 1427 1455 WRITE(numout,*) ' ~~~~~~ --------------------' 1428 1456 WRITE(numout,"(9x,' level gdept gdepw gde3w e3t e3w ')") … … 1431 1459 END DO 1432 1460 END DO 1433 DO jj = mj0(74), mj1(74) 1434 DO ji = mi0(100), mi1(100) 1461 iip1 = MIN( 74, jpiglo-1) 1462 ijp1 = MIN( 100, jpjglo-1) 1463 DO jj = mj0(ijp1), mj1(ijp1) 1464 DO ji = mi0(iip1), mi1(iip1) 1435 1465 WRITE(numout,*) 1436 WRITE(numout,*) ' domzgr: vertical coordinates : point (100,74,k) bathy = ', bathy(ji,jj), hbatt(ji,jj) 1466 WRITE(numout,*) ' domzgr: vertical coordinates : point (',iip1,',',ijp1,',k) bathy = ', & 1467 & bathy(ji,jj), hbatt(ji,jj) 1437 1468 WRITE(numout,*) ' ~~~~~~ --------------------' 1438 1469 WRITE(numout,"(9x,' level gdept gdepw gde3w e3t e3w ')") … … 1491 1522 END DO 1492 1523 ! 1493 CALL wrk_dealloc( jpi, jpj, zenv, ztmpi1, ztmpi2, ztmpj1, ztmpj2, zri, zrj, zhbat ) ! 1524 CALL wrk_dealloc( jpi, jpj, zenv, ztmp, zmsk, zri, zrj, zhbat ) 1525 ! 1494 1526 IF( nn_timing == 1 ) CALL timing_stop('zgr_sco') 1495 1527 ! 
… … 1720 1752 ENDDO 1721 1753 ! 1722 CALL lbc_lnk(e3t ,'T',1.) ; CALL lbc_lnk(e3u ,'T',1.)1723 CALL lbc_lnk(e3v ,'T',1.) ; CALL lbc_lnk(e3f ,'T',1.)1724 CALL lbc_lnk(e3w ,'T',1.)1725 CALL lbc_lnk(e3uw,'T',1.) ; CALL lbc_lnk(e3vw,'T',1.)1726 !1727 1754 ! ! ============= 1728 1755 … … 1821 1848 !!---------------------------------------------------------------------- 1822 1849 ! 1823 pf = ( TANH( rn_theta * ( -(pk-0.5_wp) / REAL(jpkm1 ) + rn_thetb ) ) &1850 pf = ( TANH( rn_theta * ( -(pk-0.5_wp) / REAL(jpkm1,wp) + rn_thetb ) ) & 1824 1851 & - TANH( rn_thetb * rn_theta ) ) & 1825 1852 & * ( COSH( rn_theta ) & … … 1847 1874 ! 1848 1875 IF ( rn_theta == 0 ) then ! uniform sigma 1849 pf1 = - ( pk1 - 0.5_wp ) / REAL( jpkm1 )1876 pf1 = - ( pk1 - 0.5_wp ) / REAL( jpkm1,wp ) 1850 1877 ELSE ! stretched sigma 1851 pf1 = ( 1._wp - pbb ) * ( SINH( rn_theta*(-(pk1-0.5_wp)/REAL(jpkm1 )) ) ) / SINH( rn_theta ) &1852 & + pbb * ( (TANH( rn_theta*( (-(pk1-0.5_wp)/REAL(jpkm1 )) + 0.5_wp) ) - TANH( 0.5_wp * rn_theta ) ) &1878 pf1 = ( 1._wp - pbb ) * ( SINH( rn_theta*(-(pk1-0.5_wp)/REAL(jpkm1,wp)) ) ) / SINH( rn_theta ) & 1879 & + pbb * ( (TANH( rn_theta*( (-(pk1-0.5_wp)/REAL(jpkm1,wp)) + 0.5_wp) ) - TANH( 0.5_wp * rn_theta ) ) & 1853 1880 & / ( 2._wp * TANH( 0.5_wp * rn_theta ) ) ) 1854 1881 ENDIF -
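The zgr_sco smoothing loop above is rewritten: rather than relaxing individual cell faces by a fixed factor, each iteration flags every point adjacent to a face whose slope parameter exceeds rn_rmax and replaces the flagged envelope values by a masked 9-point average (centre weighted 2), never letting the result fall below the raw bathymetry. In the notation of the code, with m the flag mask zmsk,

    r_{i,j} = \frac{\lvert h_{i+1,j} - h_{i,j} \rvert}{h_{i+1,j} + h_{i,j}}, \qquad
    \tilde{h}_{i,j} = \frac{ 2\,h_{i,j} + \sum_{(p,q)\in N_8(i,j)} m_{p,q}\, h_{p,q} }
                           { 2 + \sum_{(p,q)\in N_8(i,j)} m_{p,q} }

applied only where m_{i,j} = 1 and followed by h_{i,j} <- \max( \tilde{h}_{i,j}, bathy_{i,j} ). The rest of the diff fills ghost rows and columns so every processor sees defined scale factors, and patches AGRIF ghost lines for e3u/e3v.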
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/DYN/dynspg_flt.F90
r3765 r4219 109 109 INTEGER :: ji, jj, jk ! dummy loop indices 110 110 REAL(wp) :: z2dt, z2dtg, zgcb, zbtd, ztdgu, ztdgv ! local scalars 111 REAL(wp), POINTER, DIMENSION(:,:,:) :: zub, zvb112 111 !!---------------------------------------------------------------------- 113 112 ! 114 113 IF( nn_timing == 1 ) CALL timing_start('dyn_spg_flt') 115 114 ! 116 CALL wrk_alloc( jpi,jpj,jpk, zub, zvb )117 115 ! 118 116 IF( kt == nit000 ) THEN … … 213 211 DO jk = 1, jpkm1 214 212 DO ji = 1, jpij 215 spgu(ji,1) = spgu(ji,1) + fse3u (ji,1,jk) * ua(ji,1,jk)216 spgv(ji,1) = spgv(ji,1) + fse3v (ji,1,jk) * va(ji,1,jk)213 spgu(ji,1) = spgu(ji,1) + fse3u_a(ji,1,jk) * ua(ji,1,jk) 214 spgv(ji,1) = spgv(ji,1) + fse3v_a(ji,1,jk) * va(ji,1,jk) 217 215 END DO 218 216 END DO … … 221 219 DO jj = 2, jpjm1 222 220 DO ji = 2, jpim1 223 spgu(ji,jj) = spgu(ji,jj) + fse3u (ji,jj,jk) * ua(ji,jj,jk)224 spgv(ji,jj) = spgv(ji,jj) + fse3v (ji,jj,jk) * va(ji,jj,jk)221 spgu(ji,jj) = spgu(ji,jj) + fse3u_a(ji,jj,jk) * ua(ji,jj,jk) 222 spgv(ji,jj) = spgv(ji,jj) + fse3v_a(ji,jj,jk) * va(ji,jj,jk) 225 223 END DO 226 224 END DO … … 360 358 IF( lrst_oce ) CALL flt_rst( kt, 'WRITE' ) 361 359 ! 362 CALL wrk_dealloc( jpi,jpj,jpk, zub, zvb )363 360 ! 364 361 IF( nn_timing == 1 ) CALL timing_stop('dyn_spg_flt') -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/ICB/icb_oce.F90
r3614 r4219 37 37 USE par_oce ! ocean parameters 38 38 USE lib_mpp ! MPP library 39 USE fldread ! read input fields (FLD type)40 39 41 40 IMPLICIT NONE … … 151 150 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: griddata !: work array for icbrst 152 151 153 TYPE(FLD), PUBLIC, ALLOCATABLE , DIMENSION(:) :: sf_icb !: structure: file information, fields read154 155 152 !!---------------------------------------------------------------------- 156 153 !! NEMO/OPA 3.3 , NEMO Consortium (2011) … … 168 165 ! 169 166 icb_alloc = 0 170 ALLOCATE( berg_grid , & 171 & berg_grid%calving (jpi,jpj) , berg_grid%calving_hflx (jpi,jpj) , & 167 ALLOCATE( berg_grid%calving (jpi,jpj) , berg_grid%calving_hflx (jpi,jpj) , & 172 168 & berg_grid%stored_heat(jpi,jpj) , berg_grid%floating_melt(jpi,jpj) , & 173 169 & berg_grid%maxclass (jpi,jpj) , berg_grid%stored_ice (jpi,jpj,nclasses) , & -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/ICB/icbini.F90
r3785 r4219 35 35 PUBLIC icb_init ! routine called in nemogcm.F90 module 36 36 37 CHARACTER(len=100) :: cn_dir = './' ! Root directory for location of icb files 38 TYPE(FLD_N) :: sn_icb ! information about the calving file to be read 37 CHARACTER(len=100) :: cn_dir = './' !: Root directory for location of icb files 38 TYPE(FLD_N) :: sn_icb !: information about the calving file to be read 39 TYPE(FLD), PUBLIC, ALLOCATABLE , DIMENSION(:) :: sf_icb !: structure: file information, fields read 40 !: used in icbini and icbstp 39 41 40 42 !!---------------------------------------------------------------------- -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/ICB/icbstp.F90
r3614 r4219 24 24 USE lib_mpp 25 25 USE iom 26 USE fldread 26 27 USE timing ! timing 27 28 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/IOM/iom.F90
r3940 r4219 31 31 USE sbc_oce, ONLY : nn_fsbc ! ocean space and time domain 32 32 USE trc_oce, ONLY : nn_dttrc ! !: frequency of step on passive tracers 33 USE icb_oce, ONLY : class_num ! !: iceberg classes 33 34 USE domngb ! ocean space and time domain 34 35 USE phycst ! physical constants … … 99 100 clname = "nemo" 100 101 IF( TRIM(Agrif_CFixed()) /= '0' ) clname = TRIM(Agrif_CFixed())//"_"//TRIM(clname) 102 # if defined key_mpp_mpi 101 103 CALL xios_context_initialize(TRIM(clname), mpi_comm_opa) 104 # else 105 CALL xios_context_initialize(TRIM(clname), 0) 106 # endif 102 107 CALL iom_swap 103 108 … … 124 129 CALL iom_set_axis_attr( "depthw", gdepw_0 ) 125 130 # if defined key_floats 126 CALL iom_set_axis_attr( "nfloat", ( ji, ji=1,nfloat) )131 CALL iom_set_axis_attr( "nfloat", (/ (REAL(ji,wp), ji=1,nfloat) /) ) 127 132 # endif 133 CALL iom_set_axis_attr( "icbcla", class_num ) 128 134 129 135 ! automatic definitions of some of the xml attributs -
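Three portability fixes in iom.F90: xios_context_initialize receives mpi_comm_opa only under key_mpp_mpi (0 otherwise); the float axis is now passed through a proper array constructor, since a bare implied-DO list such as ( ji, ji=1,nfloat ) is not valid Fortran outside a constructor and would in any case supply integers where reals of kind wp are expected; and the new icbcla axis is fed from class_num (icb_oce). The constructor fix, as in the diff:

    CALL iom_set_axis_attr( "nfloat", (/ ( REAL(ji,wp), ji = 1, nfloat ) /) )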
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/LBC/lbclnk.F90
r3768 r4219 283 283 END SUBROUTINE lbc_lnk_3d 284 284 285 SUBROUTINE lbc_bdy_lnk_3d( pt3d, cd_type, psgn, ib_bdy )286 !!---------------------------------------------------------------------287 !! *** ROUTINE lbc_bdy_lnk ***288 !!289 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used290 !! to maintain the same interface with regards to the mpp case291 !!292 !!----------------------------------------------------------------------293 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points294 REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) :: pt3d ! 3D array on which the lbc is applied295 REAL(wp) , INTENT(in ) :: psgn ! control of the sign296 INTEGER :: ib_bdy ! BDY boundary set297 !!298 CALL lbc_lnk_3d( pt3d, cd_type, psgn)299 300 END SUBROUTINE lbc_bdy_lnk_3d301 302 SUBROUTINE lbc_bdy_lnk_2d( pt2d, cd_type, psgn, ib_bdy )303 !!---------------------------------------------------------------------304 !! *** ROUTINE lbc_bdy_lnk ***305 !!306 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used307 !! to maintain the same interface with regards to the mpp case308 !!309 !!----------------------------------------------------------------------310 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points311 REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) :: pt2d ! 3D array on which the lbc is applied312 REAL(wp) , INTENT(in ) :: psgn ! control of the sign313 INTEGER :: ib_bdy ! BDY boundary set314 !!315 CALL lbc_lnk_2d( pt2d, cd_type, psgn)316 317 END SUBROUTINE lbc_bdy_lnk_2d318 319 285 SUBROUTINE lbc_lnk_2d( pt2d, cd_type, psgn, cd_mpp, pval ) 320 286 !!--------------------------------------------------------------------- … … 406 372 END SUBROUTINE lbc_lnk_2d 407 373 374 #endif 375 376 377 SUBROUTINE lbc_bdy_lnk_3d( pt3d, cd_type, psgn, ib_bdy ) 378 !!--------------------------------------------------------------------- 379 !! *** ROUTINE lbc_bdy_lnk *** 380 !! 381 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used 382 !! to maintain the same interface with regards to the mpp 383 !case 384 !! 385 !!---------------------------------------------------------------------- 386 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points 387 REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) :: pt3d ! 3D array on which the lbc is applied 388 REAL(wp) , INTENT(in ) :: psgn ! control of the sign 389 INTEGER :: ib_bdy ! BDY boundary set 390 !! 391 CALL lbc_lnk_3d( pt3d, cd_type, psgn) 392 393 END SUBROUTINE lbc_bdy_lnk_3d 394 395 SUBROUTINE lbc_bdy_lnk_2d( pt2d, cd_type, psgn, ib_bdy ) 396 !!--------------------------------------------------------------------- 397 !! *** ROUTINE lbc_bdy_lnk *** 398 !! 399 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used 400 !! to maintain the same interface with regards to the mpp 401 !case 402 !! 403 !!---------------------------------------------------------------------- 404 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points 405 REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) :: pt2d ! 3D array on which the lbc is applied 406 REAL(wp) , INTENT(in ) :: psgn ! control of the sign 407 INTEGER :: ib_bdy ! BDY boundary set 408 !! 409 CALL lbc_lnk_2d( pt2d, cd_type, psgn) 410 411 END SUBROUTINE lbc_bdy_lnk_2d 412 413 408 414 SUBROUTINE lbc_lnk_2d_e( pt2d, cd_type, psgn, jpri, jprj ) 409 415 !!--------------------------------------------------------------------- … … 430 436 END SUBROUTINE lbc_lnk_2d_e 431 437 432 # endif433 438 #endif 434 439 -
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/LBC/lib_mpp.F90
r3918 r4219
2179 2179 !!gm Remark : this is very time consuming!!!
2180 2180 ! ! ------------------------ !
2181 IF( ijpt0 > ijpt1 .OR. iipt0 > iipt1) THEN
2181 IF(((nbondi .ne. 0) .AND. (ktype .eq. 2)) .OR. ((nbondj .ne. 0) .AND. (ktype .eq. 1))) THEN
2182 2182 ! there is nothing to be migrated
2183 lmigr = .FALSE.
2183 lmigr = .TRUE.
2184 2184 ELSE
2185 lmigr = .TRUE.
2185 lmigr = .FALSE.
2186 2186 ENDIF
2187 2187
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/LBC/mppini_2.h90
r3818 r4219
122 122 irestj = 1 + MOD( jpjglo - nrecj -1 , jpnj )
123 123
124 #if defined key_nemocice_decomp
125 ! Change padding to be consistent with CICE
126 ilci(1:jpni-1 ,:) = jpi
127 ilci(jpni ,:) = jpiglo - (jpni - 1) * (jpi - nreci)
128
129 ilcj(:, 1:jpnj-1) = jpj
130 ilcj(:, jpnj) = jpjglo - (jpnj - 1) * (jpj - nrecj)
131 #else
124 132 ilci(1:iresti ,:) = jpi
125 133 ilci(iresti+1:jpni ,:) = jpi-1
…
127 135 ilcj(:, 1:irestj) = jpj
128 136 ilcj(:, irestj+1:jpnj) = jpj-1
137 #endif
129 138
130 139 IF(lwp) WRITE(numout,*)
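The effect of the CICE-consistent padding is easiest to see with concrete numbers. The following stand-alone sketch is not NEMO code: the global size, processor count and the rounded-up sizing formula for jpi are illustrative assumptions, used only to show that every subdomain except the last keeps the full local width while the last one absorbs the remainder.

   program cice_padding_demo
      implicit none
      integer :: jpiglo, jpni, nreci, jpi, ilci_last
      jpiglo = 179                ! global i-size, hypothetical value
      jpni   = 4                  ! processors along i, hypothetical value
      nreci  = 2                  ! overlap columns (2*jpreci)
      ! local width, using a rounded-up NEMO-style sizing (assumption)
      jpi = ( jpiglo - nreci + (jpni-1) ) / jpni + nreci
      ! CICE-consistent padding: the last domain takes whatever is left
      ilci_last = jpiglo - (jpni - 1) * (jpi - nreci)
      print '(a,i0)', 'interior domains ilci = ', jpi
      print '(a,i0)', 'last domain ilci      = ', ilci_last
   end program cice_padding_demo

With these values the interior domains keep ilci = 47 while the last one shrinks to 44, instead of the standard scheme's mixture of jpi and jpi-1 widths.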
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/SBC/geo2ocean.F90
r2715 r4219
187 187 & gsinf(jpi,jpj), gcosf(jpi,jpj), STAT=ierr )
188 188 IF(lk_mpp) CALL mpp_sum( ierr )
189 IF( ierr /= 0 ) CALL ctl_stop(' STOP', 'angle_msh_geo: unable to allocate arrays' )
189 IF( ierr /= 0 ) CALL ctl_stop('angle: unable to allocate arrays' )
190 190
191 191 ! ============================= !
…
361 361 & gsinlat(jpi,jpj,4) , gcoslat(jpi,jpj,4) , STAT=ierr )
362 362 IF( lk_mpp ) CALL mpp_sum( ierr )
363 IF( ierr /= 0 ) CALL ctl_stop(' STOP', 'angle_msh_geo: unable to allocate arrays' )
363 IF( ierr /= 0 ) CALL ctl_stop('geo2oce: unable to allocate arrays' )
364 364 ENDIF
365 365
…
438 438 !!----------------------------------------------------------------------
439 439
440 IF( ALLOCATED( gsinlon ) ) THEN
440 IF( .NOT. ALLOCATED( gsinlon ) ) THEN
441 441 ALLOCATE( gsinlon(jpi,jpj,4) , gcoslon(jpi,jpj,4) , &
442 442 & gsinlat(jpi,jpj,4) , gcoslat(jpi,jpj,4) , STAT=ierr )
443 443 IF( lk_mpp ) CALL mpp_sum( ierr )
444 IF( ierr /= 0 ) CALL ctl_stop(' STOP', 'angle_msh_geo: unable to allocate arrays' )
444 IF( ierr /= 0 ) CALL ctl_stop('oce2geo: unable to allocate arrays' )
445 445 ENDIF
446 446
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/SBC/sbccpl.F90
r3914 r4219
388 388 !
389 389 IF( TRIM( sn_rcv_tau%cldes ) /= 'oce and ice' ) THEN ! 'oce and ice' case ocean stress on ocean mesh used
390 srcv(jpr_itz1:jpr_itz2)%laction = .FALSE. ! ice components not received (itx1 and ity1 used later)
390 srcv(jpr_itx1:jpr_itz2)%laction = .FALSE. ! ice components not received
391 391 srcv(jpr_itx1)%clgrid = 'U' ! ocean stress used after its transformation
392 392 srcv(jpr_ity1)%clgrid = 'V' ! i.e. it is always at U- & V-points for i- & j-comp. resp.
…
407 407 SELECT CASE( TRIM( sn_rcv_emp%cldes ) )
408 408 CASE( 'oce only' ) ; srcv( jpr_oemp )%laction = .TRUE.
409 CASE( 'conservative' ) ; srcv( (/jpr_rain, jpr_snow, jpr_ievp, jpr_tevp/) )%laction = .TRUE.
409 CASE( 'conservative' )
410 srcv( (/jpr_rain, jpr_snow, jpr_ievp, jpr_tevp/) )%laction = .TRUE.
411 IF ( k_ice <= 1 ) srcv(jpr_ivep)%laction = .FALSE.
410 412 CASE( 'oce and ice' ) ; srcv( (/jpr_ievp, jpr_sbpr, jpr_semp, jpr_oemp/) )%laction = .TRUE.
411 413 CASE default ; CALL ctl_stop( 'sbc_cpl_init: wrong definition of sn_rcv_emp%cldes' )
…
465 467 CALL ctl_stop( 'sbc_cpl_init: namsbc_cpl namelist mismatch between sn_rcv_qns%cldes and sn_rcv_dqnsdt%cldes' )
466 468 ! ! ------------------------- !
467 ! ! Ice Qsr penetration !
468 ! ! ------------------------- !
469 ! fraction of net shortwave radiation which is not absorbed in the thin surface layer
470 ! and penetrates inside the ice cover ( Maykut and Untersteiner, 1971 ; Elbert and Curry, 1993 )
471 ! Coupled case: since cloud cover is not received from atmosphere
472 ! ===> defined as constant value -> definition done in sbc_cpl_init
473 fr1_i0(:,:) = 0.18
474 fr2_i0(:,:) = 0.82
475 ! ! ------------------------- !
476 469 ! ! 10m wind module !
477 470 ! ! ------------------------- !
…
508 501 ! Allocate taum part of frcv which is used even when not received as coupling field
509 502 IF ( .NOT. srcv(jpr_taum)%laction ) ALLOCATE( frcv(jpr_taum)%z3(jpi,jpj,srcv(jn)%nct) )
503 ! Allocate itx1 and ity1 as they are used in sbc_cpl_ice_tau even if srcv(jpr_itx1)%laction = .FALSE.
504 IF( k_ice /= 0 ) THEN
505 IF ( .NOT. srcv(jpr_itx1)%laction ) ALLOCATE( frcv(jpr_itx1)%z3(jpi,jpj,srcv(jn)%nct) )
506 IF ( .NOT. srcv(jpr_ity1)%laction ) ALLOCATE( frcv(jpr_ity1)%z3(jpi,jpj,srcv(jn)%nct) )
507 END IF
510 508
511 509 ! ================================ !
…
1329 1327 END SELECT
1330 1328
1329 ! Ice Qsr penetration used (only?)in lim2 or lim3
1330 ! fraction of net shortwave radiation which is not absorbed in the thin surface layer
1331 ! and penetrates inside the ice cover ( Maykut and Untersteiner, 1971 ; Elbert and Curry, 1993 )
1332 ! Coupled case: since cloud cover is not received from atmosphere
1333 ! ===> defined as constant value -> definition done in sbc_cpl_init
1334 fr1_i0(:,:) = 0.18
1335 fr2_i0(:,:) = 0.82
1336
1337
1331 1338 CALL wrk_dealloc( jpi,jpj, zcptn, ztmp, zicefr )
1332 1339 !
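The allocation added for itx1 and ity1 follows the same defensive pattern as the taum field just above it: a buffer is created even when the coupler never sends the field, so that sbc_cpl_ice_tau can read it unconditionally. A minimal stand-alone sketch of the pattern, with hypothetical names and array sizes rather than the real frcv structures:

   program defensive_alloc_demo
      implicit none
      real, allocatable :: itx1(:,:)   ! stands in for frcv(jpr_itx1)%z3, hypothetical shape
      logical :: laction
      laction = .false.                ! pretend this component is not received
      if ( .not. laction ) then
         allocate( itx1(10,10) )       ! allocate anyway so downstream code can read it
         itx1 = 0.0
      end if
      print *, 'buffer ready, sum = ', sum(itx1)   ! safe to use either way
   end program defensive_alloc_demo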
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/SBC/sbcmod.F90
r3905 r4219
221 221 ENDIF
222 222 !
223 CALL sbc_ssm_init ! Sea-surface mean fields initialisation
224 !
223 225 IF( ln_ssr ) CALL sbc_ssr_init ! Sea-Surface Restoring initialisation
224 226 !
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/SOL/solmat.F90
r3609 r4219
30 30 USE lbclnk ! lateral boundary conditions
31 31 USE lib_mpp ! distributed memory computing
32 USE c1d ! 1D vertical configuration
32 33 USE in_out_manager ! I/O manager
33 34 USE timing ! timing
…
271 272
272 273 ! SOR and PCG solvers
274 IF( lk_c1d ) CALL lbc_lnk( gcdmat, 'T', 1._wp ) ! 1D case bmask /= 0 but gcdmat not defined everywhere
273 275 DO jj = 1, jpj
274 276 DO ji = 1, jpi
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/TRA/eosbn2.F90
r3625 r4219
675 675
676 676
677 FUNCTION tfreez( psal ) RESULT( ptf )
677 FUNCTION tfreez( psal, pdep ) RESULT( ptf )
678 678 !!----------------------------------------------------------------------
679 679 !! *** ROUTINE eos_init ***
…
688 688 !!----------------------------------------------------------------------
689 689 REAL(wp), DIMENSION(jpi,jpj), INTENT(in ) :: psal ! salinity [psu]
690 REAL(wp), DIMENSION(jpi,jpj), INTENT(in ), OPTIONAL :: pdep ! depth [decibars]
690 691 ! Leave result array automatic rather than making explicitly allocated
691 692 REAL(wp), DIMENSION(jpi,jpj) :: ptf ! freezing temperature [Celsius]
…
694 695 ptf(:,:) = ( - 0.0575_wp + 1.710523e-3_wp * SQRT( psal(:,:) ) &
695 696 & - 2.154996e-4_wp * psal(:,:) ) * psal(:,:)
697 IF ( PRESENT( pdep ) ) THEN
698 ptf(:,:) = ptf(:,:) - 7.53e-4_wp * pdep(:,:)
699 ENDIF
696 700 !
697 701 END FUNCTION tfreez
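The new optional pdep argument simply subtracts a pressure term from the salinity polynomial. A minimal stand-alone sketch, assuming scalar inputs and illustrative salinity/depth values rather than NEMO's 2D fields:

   program tfreez_demo
      implicit none
      integer, parameter :: wp = selected_real_kind(12,307)
      real(wp) :: psal, pdep, ptf
      psal = 34.7_wp       ! salinity [psu], hypothetical value
      pdep = 500._wp       ! depth [decibars], hypothetical value
      ! surface freezing point, as computed before this change
      ptf = ( -0.0575_wp + 1.710523e-3_wp * SQRT( psal ) &
         &    - 2.154996e-4_wp * psal ) * psal
      print '(a,f8.4,a)', 'tfreez(psal)       = ', ptf, ' degC'
      ! with the optional depth term added by this changeset
      ptf = ptf - 7.53e-4_wp * pdep
      print '(a,f8.4,a)', 'tfreez(psal, pdep) = ', ptf, ' degC'
   end program tfreez_demo

At 500 decibars the correction lowers the freezing point by about 0.38 degC relative to the surface value of roughly -1.91 degC.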
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/OPA_SRC/step.F90
r3769 r4219
271 271 !
272 272 #if defined key_iomput
273 IF( kstp == nitend ) CALL xios_context_finalize() ! needed for XIOS+AGRIF
273 IF( kstp == nitend .OR. indic < 0 ) CALL xios_context_finalize() ! needed for XIOS+AGRIF
274 274 #endif
275 275 !
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/SAS_SRC/daymod.F90
r3851 r4219
246 246 nday_year = 1
247 247 nsec_year = ndt05
248 IF( nsec1jan000 >= 2 * (2**30 - nsecd * nyear_len(1) / 2 ) ) THEN ! test integer 4 max value
249 CALL ctl_stop( 'The number of seconds between Jan. 1st 00h of nit000 year and Jan. 1st 00h ', &
250 & 'of the current year is exceeding the INTEGER 4 max VALUE: 2^31-1 -> 68.09 years in seconds', &
251 & 'You must do a restart at higher frequency (or remove this STOP and recompile everything in I8)' )
252 ENDIF
248 253 nsec1jan000 = nsec1jan000 + nsecd * nyear_len(1)
249 254 IF( nleapy == 1 ) CALL day_mth
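The limit in the new test is written as 2 * (2**30 - nsecd * nyear_len(1) / 2) rather than 2**31 - 1, so that the limit itself can be evaluated without overflowing a default INTEGER. A stand-alone sketch of the same test; the calendar constants and the counter value are illustrative assumptions:

   program i4_guard_demo
      implicit none
      integer :: nsec1jan000, nsecd, nyear_len
      nsecd     = 86400                 ! seconds per day
      nyear_len = 365                   ! length of the coming year [days]
      ! hypothetical counter value close to the INTEGER 4 limit
      nsec1jan000 = huge(0) - nsecd * nyear_len / 2
      ! same overflow-safe comparison as the diff above: both sides stay below 2**31-1
      if ( nsec1jan000 >= 2 * ( 2**30 - nsecd * nyear_len / 2 ) ) then
         print *, 'adding one more year of seconds would overflow INTEGER 4'
      else
         print *, 'safe to accumulate another year'
      end if
   end program i4_guard_demo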
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zfechem.F90
r3904 r4219
129 129 zoxy = trn(ji,jj,jk,jpoxy) * ( rhop(ji,jj,jk) / 1.e3 )
130 130 ! Fe2+ oxidation rate from Santana-Casiano et al. (2005)
131 zkox = 35.407 - 6.7109 * zph + 0.5342 * zph * zph - 5362.6 / ( tsn(ji,jj, 1,jp_tem) + 273.15 ) &
131 zkox = 35.407 - 6.7109 * zph + 0.5342 * zph * zph - 5362.6 / ( tsn(ji,jj,jk,jp_tem) + 273.15 ) &
132 132 & - 0.04406 * SQRT( tsn(ji,jj,jk,jp_sal) ) - 0.002847 * tsn(ji,jj,jk,jp_sal)
133 133 zkox = ( 10.** zkox ) * spd
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zsed.F90
r3905 r4219
82 82 IF( nn_timing == 1 ) CALL timing_start('p4z_sed')
83 83 !
84 IF( kt == nit000 .AND. jnt == 1 ) THEN
84 IF( kt == nittrc000 .AND. jnt == 1 ) THEN
85 85 ryyss = nyear_len(1) * rday ! number of seconds per year and per month
86 86 rmtss = ryyss / raamo
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zsms.F90
r3882 r4219
76 76 ENDIF
77 77 !
78 IF( ln_rsttr .AND. kt == nittrc000 ) CALL p4z_rst( nittrc000, 'READ' ) !* read or initialize all required fields
78 IF( kt == nittrc000 ) THEN
79 !
80 CALL p4z_che ! initialize the chemical constants
81 !
82 IF( .NOT. ln_rsttr ) THEN ; CALL p4z_ph_ini ! set PH at kt=nit000
83 ELSE ; CALL p4z_rst( nittrc000, 'READ' ) !* read or initialize all required fields
84 ENDIF
85 !
86 ENDIF
87
79 88 IF( ln_pisdmp .AND. MOD( kt - nn_dttrc, nn_pisdmp ) == 0 ) CALL p4z_dmp( kt ) ! Relaxation of some tracers
80 89 !
…
238 247 END SUBROUTINE p4z_sms_init
239 248
249 SUBROUTINE p4z_ph_ini
250 !!---------------------------------------------------------------------
251 !! *** ROUTINE p4z_ini_ph ***
252 !!
253 !! ** Purpose : Initialization of chemical variables of the carbon cycle
254 !!---------------------------------------------------------------------
255 INTEGER :: ji, jj, jk
256 REAL(wp) :: zcaralk, zbicarb, zco3
257 REAL(wp) :: ztmas, ztmas1
258 !!---------------------------------------------------------------------
259
260 ! Set PH from total alkalinity, borat (???), akb3 (???) and ak23 (???)
261 ! --------------------------------------------------------
262 DO jk = 1, jpk
263 DO jj = 1, jpj
264 DO ji = 1, jpi
265 ztmas = tmask(ji,jj,jk)
266 ztmas1 = 1. - tmask(ji,jj,jk)
267 zcaralk = trn(ji,jj,jk,jptal) - borat(ji,jj,jk) / ( 1. + 1.E-8 / ( rtrn + akb3(ji,jj,jk) ) )
268 zco3 = ( zcaralk - trn(ji,jj,jk,jpdic) ) * ztmas + 0.5e-3 * ztmas1
269 zbicarb = ( 2. * trn(ji,jj,jk,jpdic) - zcaralk )
270 hi(ji,jj,jk) = ( ak23(ji,jj,jk) * zbicarb / zco3 ) * ztmas + 1.e-9 * ztmas1
271 END DO
272 END DO
273 END DO
274 !
275 END SUBROUTINE p4z_ph_ini
276
240 277 SUBROUTINE p4z_rst( kt, cdrw )
241 278 !!---------------------------------------------------------------------
…
266 303 ELSE
267 304 ! hi(:,:,:) = 1.e-9
268 ! Set PH from total alkalinity, borat (???), akb3 (???) and ak23 (???)
269 ! --------------------------------------------------------
270 DO jk = 1, jpk
271 DO jj = 1, jpj
272 DO ji = 1, jpi
273 ztmas = tmask(ji,jj,jk)
274 ztmas1 = 1. - tmask(ji,jj,jk)
275 zcaralk = trn(ji,jj,jk,jptal) - borat(ji,jj,jk) / ( 1. + 1.E-8 / ( rtrn + akb3(ji,jj,jk) ) )
276 zco3 = ( zcaralk - trn(ji,jj,jk,jpdic) ) * ztmas + 0.5e-3 * ztmas1
277 zbicarb = ( 2. * trn(ji,jj,jk,jpdic) - zcaralk )
278 hi(ji,jj,jk) = ( ak23(ji,jj,jk) * zbicarb / zco3 ) * ztmas + 1.e-9 * ztmas1
279 END DO
280 END DO
281 END DO
305 CALL p4z_ph_ini
282 306 ENDIF
283 307 CALL iom_get( numrtr, jpdom_autoglo, 'Silicalim', xksi(:,:) )
…
392 416 #endif
393 417 & + trn(:,:,:,jpsfe) &
394 & + trn(:,:,:,jpzoo) &
418 & + trn(:,:,:,jpzoo) * ferat3 &
395 419 & + trn(:,:,:,jpmes) * ferat3 ) * cvol(:,:,:) )
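The factored-out p4z_ph_ini keeps the carbonate-system estimate visible in the loop above: CO3 is approximated by carbonate alkalinity minus DIC, HCO3 by 2*DIC minus carbonate alkalinity, and [H+] then follows from the second dissociation constant. A single-point sketch of that arithmetic; the alkalinity, DIC and K2 values are illustrative, not PISCES constants:

   program ph_ini_demo
      implicit none
      integer, parameter :: wp = selected_real_kind(12,307)
      real(wp) :: zcaralk, zdic, zco3, zbicarb, ak23, hi
      zcaralk = 2.35e-3_wp    ! carbonate alkalinity [mol/L], hypothetical
      zdic    = 2.10e-3_wp    ! dissolved inorganic carbon [mol/L], hypothetical
      ak23    = 8.0e-10_wp    ! second dissociation constant, hypothetical
      zco3    = zcaralk - zdic           ! carbonate ion estimate
      zbicarb = 2._wp * zdic - zcaralk   ! bicarbonate ion estimate
      hi      = ak23 * zbicarb / zco3    ! hydrogen ion concentration
      print '(a,es10.3,a,f6.2)', '[H+] = ', hi, '   pH = ', -log10(hi)
   end program ph_ini_demo

With these numbers the estimate gives a pH near 8.2, a plausible open-ocean starting value.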
branches/2013/dev_r3948_NOC_FK/NEMOGCM/NEMO/TOP_SRC/PISCES/trcini_pisces.F90
r3757 r4219
122 122 rdenita = 3._wp / 5._wp
123 123 o2ut = 131._wp / 122._wp
124
125 CALL p4z_che ! initialize the chemical constants
126 124
127 125 ! Initialization of tracer concentration in case of no restart
…
162 160 xksi(:,:) = 2.e-6
163 161 xksimax(:,:) = xksi(:,:)
164
165 ! Initialization of chemical variables of the carbon cycle
166 ! --------------------------------------------------------
167 DO jk = 1, jpk
168 DO jj = 1, jpj
169 DO ji = 1, jpi
170 ztmas = tmask(ji,jj,jk)
171 ztmas1 = 1. - tmask(ji,jj,jk)
172 zcaralk = trn(ji,jj,jk,jptal) - borat(ji,jj,jk) / ( 1. + 1.E-8 / ( rtrn + akb3(ji,jj,jk) ) )
173 zco3 = ( zcaralk - trn(ji,jj,jk,jpdic) ) * ztmas + 0.5e-3 * ztmas1
174 zbicarb = ( 2. * trn(ji,jj,jk,jpdic) - zcaralk )
175 hi(ji,jj,jk) = ( ak23(ji,jj,jk) * zbicarb / zco3 ) * ztmas + 1.e-9 * ztmas1
176 END DO
177 END DO
178 END DO
179 !
162 !
180 163 END IF
branches/2013/dev_r3948_NOC_FK/NEMOGCM/SETTE/all_functions.sh
r3294 r4219
96 96 if [ ${#} -lt ${minargcount} ]
97 97 then
98 echo "not enough targuments for set_namelist"
98 echo "not enough arguments for set_namelist"
99 99 echo "${usage}"
100 100 exit 1
…
113 113 echo "doing \"set_namelist $@\". "
114 114 echo "variable: \"$2\" is empty"
115 echo "control that variable $2 is in \"${EXE_DIR}/$1\" "
115 echo "confirm that variable $2 is in \"${EXE_DIR}/$1\" "
116 116 echo "exit"
117 117 echo "error in executing script : set_namelist $@" >> ${SETTE_DIR}/output.sette
…
128 128 echo " " >> ${SETTE_DIR}/output.sette
129 129 }
130 130
131
131 132 # function to tidy up after each test and populate the NEMO_VALIDATION store
…
216 217 fi
217 218 }
219
220 #############################################################
221 # extra functions to manipulate settings in the iodef.xml file
222 #
223 # Examples:
224 # set_xio_file_type iodef.xml one_file
225 # set_xio_using_server iodef.xml true
226 # set_xio_buffer_size iodef.xml 50000000
227 #
228 #############################################################
229
230 usage2=" Usage : set_xio_file_type input_iodef.xml one_file||multiple_file"
231 usage3=" Usage : set_xio_using_server input_iodef.xml true||false"
232 usage4=" Usage : set_xio_buffer_size input_iodef.xml int_buffer_size"
233
234 set_xio_file_type () {
235 minargcount=2
236 if [ ${#} -lt ${minargcount} ]
237 then
238 echo "not enough arguments for set_xio_file_type"
239 echo "${usage2}"
240 exit 1
241 fi
242 if [ $2 != "one_file" ] && [ $2 != "multiple_file" ]
243 then
244 echo "unrecognised argument for set_xio_file_type"
245 echo "${usage2}"
246 echo $2
247 exit 1
248 fi
249 unset minargcount
250 if [ ! -f ${SETTE_DIR}/output.sette ] ; then
251 touch ${SETTE_DIR}/output.sette
252 fi
253
254 echo "executing script : set_xio_file_type $@" >> ${SETTE_DIR}/output.sette
255 echo "################" >> ${SETTE_DIR}/output.sette
256
257 VAR_NAME=$( grep "^.*<.*file_definition.*type.*=" ${EXE_DIR}/$1 | sed -e "s% *\!.*%%" )
258 if [ ${#VAR_NAME} -eq 0 ]
259 then
260 echo "doing \"set_xio_file_type $@\". "
261 echo "xml_tag: file_definition with variable: type is empty"
262 echo "confirm that an appropriate file_definition is in \"${EXE_DIR}/$1\" "
263 echo "exit"
264 echo "error in executing script : set_xio_file_type $@" >> ${SETTE_DIR}/output.sette
265 echo "....." >> ${SETTE_DIR}/output.sette
266 exit 1
267 fi
268 if [ $2 == "one_file" ]
269 then
270 sed -e "s:multiple_file:one_file:" ${EXE_DIR}/$1 > ${EXE_DIR}/$1.tmp
271 else
272 sed -e "s:one_file:multiple_file:" ${EXE_DIR}/$1 > ${EXE_DIR}/$1.tmp
273 fi
274 mv ${EXE_DIR}/$1.tmp ${EXE_DIR}/$1
275
276 echo "finished script : set_xio_file_type $@" >> ${SETTE_DIR}/output.sette
277 echo "++++++++++++++++" >> ${SETTE_DIR}/output.sette
278 echo " " >> ${SETTE_DIR}/output.sette
279 }
280
281 set_xio_using_server () {
282 minargcount=2
283 if [ ${#} -lt ${minargcount} ]
284 then
285 echo "not enough arguments for set_xio_using_server"
286 echo "${usage2}"
287 exit 1
288 fi
289 if [ $2 != "true" ] && [ $2 != "false" ]
290 then
291 echo "unrecognised argument for set_xio_using_server"
292 echo "${usage2}"
293 echo $2
294 exit 1
295 fi
296 unset minargcount
297 if [ ! -f ${SETTE_DIR}/output.sette ] ; then
298 touch ${SETTE_DIR}/output.sette
299 fi
300
301 echo "executing script : set_xio_using_server $@" >> ${SETTE_DIR}/output.sette
302 echo "################" >> ${SETTE_DIR}/output.sette
303
304 VAR_NAME=$( grep "^.*<.*variable id.*=.*using_server.*=.*boolean" ${EXE_DIR}/$1 | sed -e "s% *\!.*%%" )
305 if [ ${#VAR_NAME} -eq 0 ]
306 then
307 echo "doing \"set_xio_using_server $@\". "
308 echo "xml_tag: "variable id=using_server" with variable: boolean is empty"
309 echo "confirm that an appropriate variable id is in \"${EXE_DIR}/$1\" "
310 echo "exit"
311 echo "error in executing script : set_xio_using_server $@" >> ${SETTE_DIR}/output.sette
312 echo "....." >> ${SETTE_DIR}/output.sette
313 exit 1
314 fi
315 if [ $2 == "false" ]
316 then
317 sed -e "/using_server/s:true:false:" ${EXE_DIR}/$1 > ${EXE_DIR}/$1.tmp
318 export USING_MPMD=no
319 else
320 sed -e "/using_server/s:false:true:" ${EXE_DIR}/$1 > ${EXE_DIR}/$1.tmp
321 export USING_MPMD=yes
322 fi
323 mv ${EXE_DIR}/$1.tmp ${EXE_DIR}/$1
324
325 echo "finished script : set_xio_using_server $@" >> ${SETTE_DIR}/output.sette
326 echo "++++++++++++++++" >> ${SETTE_DIR}/output.sette
327 echo " " >> ${SETTE_DIR}/output.sette
328 }
329
330 set_xio_buffer_size () {
331 minargcount=2
332 if [ ${#} -lt ${minargcount} ]
333 then
334 echo "not enough arguments for set_xio_buffer_size"
335 echo "${usage4}"
336 exit 1
337 fi
338 unset minargcount
339 if [ ! -f ${SETTE_DIR}/output.sette ] ; then
340 touch ${SETTE_DIR}/output.sette
341 fi
342
343 echo "executing script : set_xio_buffer_size $@" >> ${SETTE_DIR}/output.sette
344 echo "################" >> ${SETTE_DIR}/output.sette
345
346 VAR_NAME=$( grep "^.*<.*variable id.*=.*buffer_size.*=.*integer" ${EXE_DIR}/$1 | sed -e "s% *\!.*%%" )
347 if [ ${#VAR_NAME} -eq 0 ]
348 then
349 echo "doing \"set_xio_buffer_size $@\". "
350 echo "xml_tag: "variable id=buffer_size" with variable: integer is empty"
351 echo "confirm that an appropriate variable id is in \"${EXE_DIR}/$1\" "
352 echo "exit"
353 echo "error in executing script : set_xio_buffer_size $@" >> ${SETTE_DIR}/output.sette
354 echo "....." >> ${SETTE_DIR}/output.sette
355 exit 1
356 fi
357 sed -e "/buffer_size/s:>.*<:>$2<:" ${EXE_DIR}/$1 > ${EXE_DIR}/$1.tmp
358 mv ${EXE_DIR}/$1.tmp ${EXE_DIR}/$1
359
360 echo "finished script : set_xio_buffer_size $@" >> ${SETTE_DIR}/output.sette
361 echo "++++++++++++++++" >> ${SETTE_DIR}/output.sette
362 echo " " >> ${SETTE_DIR}/output.sette
363 }
branches/2013/dev_r3948_NOC_FK/NEMOGCM/SETTE/iodef_sette.xml
r3764 r4219
21 21 -->
22 22
23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">
23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4">
24 24
25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".FALSE."/> <!-- 1h files -->
…
54 54
55 55 <axis_definition>
56 <axis id="deptht" long_name="Vertical T levels" unit="m" /><!-- positive=".FALSE." -->
57 <axis id="depthu" long_name="Vertical U levels" unit="m" /><!-- positive=".FALSE." -->
58 <axis id="depthv" long_name="Vertical V levels" unit="m" /><!-- positive=".FALSE." -->
59 <axis id="depthw" long_name="Vertical W levels" unit="m" /><!-- positive=".FALSE." -->
56 <axis id="deptht" long_name="Vertical T levels" unit="m" positive="down" />
57 <axis id="depthu" long_name="Vertical U levels" unit="m" positive="down" />
58 <axis id="depthv" long_name="Vertical V levels" unit="m" positive="down" />
59 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" />
60 60 <axis id="nfloat" long_name="Float number" unit="-" />
61 <axis id="icbcla" long_name="Iceberg class" unit="-" />
61 62 </axis_definition>
branches/2013/dev_r3948_NOC_FK/NEMOGCM/SETTE/prepare_job.sh
r3680 r4219
68 68 #
69 69
70 usage=" Usage : ./prepare_job.sh INPUT_FILE_CONFIG_NAME NUMBER_PROC TEST_NAME MPI_FLAG JOB_FILE "
71 usage=" example : ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg 8 SHORT no/yes $JOB_FILE "
72
73
74 minargcount=5
70 usage=" Usage : ./prepare_job.sh INPUT_FILE_CONFIG_NAME NUMBER_PROC TEST_NAME MPI_FLAG JOB_FILE NUM_XIO_SERVERS"
71 usage=" example : ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg 8 SHORT no/yes $JOB_FILE 0"
72
73
74 minargcount=6
75 75 if [ ${#} -lt ${minargcount} ]
76 76 then
…
93 93 MPI_FLAG=$4
94 94 JOB_FILE=$5
95 NXIO_PROC=$6
95 96
96 97 # export EXE_DIR. This directory is used to execute model
…
185 186 case ${COMPILER} in
186 187 ALTIX_NAUTILUS_MPT)
187 NB_REM=$( echo $NB_PROC | awk '{print $1% 4}')
188 NB_REM=$( echo $NB_PROC $NXIO_PROC | awk '{print ( $1 + $2 ) % 4}')
188 189 if [ ${NB_REM} == 0 ] ; then
189 190 # number of processes required is an integer multiple of 4
190 191 #
191 NB_NODES=$( echo $NB_PROC | awk '{print $1/ 4}')
192 NB_NODES=$( echo $NB_PROC $NXIO_PROC | awk '{print ($1 + $2 ) / 4}')
192 193 else
193 194 #
…
195 196 # round up the number of nodes required.
196 197 #
197 NB_NODES=$( echo $NB_PROC | awk '{printf("%d",$1/ 4 + 1 )}')
198 NB_NODES=$( echo $NB_PROC $NXIO_PROC | awk '{printf("%d",($1 + $2 ) / 4 + 1 )}')
198 199 ;;
…
229 230 # Pass settings into job file by using sed to edit predefined strings
230 231 #
231 cat ${SETTE_DIR}/job_batch_template | sed -e"s/NODES/${NB_NODES}/" -e"s/NPROCS/${NB_PROC}/" \
232 cat ${SETTE_DIR}/job_batch_template | sed -e"s/NODES/${NB_NODES}/" \
233 -e"s/NPROCS/${NB_PROC}/" \
234 -e"s/NXIOPROCS/${NXIO_PROC}/" \
232 235 -e"s:DEF_SETTE_DIR:${SETTE_DIR}:" -e"s:DEF_INPUT_DIR:${INPUT_DIR}:" \
233 236 -e"s:DEF_EXE_DIR:${EXE_DIR}:" \
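The awk expressions above now size the batch allocation for ocean tasks plus detached IO servers, rounding up to whole 4-core nodes. The same arithmetic in a stand-alone sketch; the task counts and the 4-cores-per-node figure are illustrative assumptions:

   program node_count_demo
      implicit none
      integer :: nb_proc, nxio_proc, ntot, nb_nodes
      nb_proc   = 32        ! ocean MPI tasks, hypothetical
      nxio_proc = 4         ! stand-alone XIOS servers, hypothetical
      ntot = nb_proc + nxio_proc
      if ( mod( ntot, 4 ) == 0 ) then
         nb_nodes = ntot / 4         ! exact multiple of 4 cores
      else
         nb_nodes = ntot / 4 + 1     ! round up for the partially filled node
      end if
      print '(a,i0,a,i0,a)', 'need ', nb_nodes, ' nodes for ', ntot, ' tasks'
   end program node_count_demo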
branches/2013/dev_r3948_NOC_FK/NEMOGCM/SETTE/sette.sh
r3708 r4219
2 2 ############################################################
3 3 # Author : Simona Flavoni for NEMO
4 # Contact : sflod@locean-ipsl.upmc.fr
4 # Contact: sflod@locean-ipsl.upmc.fr
5 # 2013 : A.C. Coward added options for testing with XIOS in detached mode
5 6 #
6 7 # sette.sh : principal script of SET TEsts for NEMO (SETTE)
…
15 16 #set -u
16 17 #set -e
17 #+
18 #
19 # ================
20 # sette.sh
21 # ================
22 #
23 # ----------------------------------------------
24 # Set of tests for NEMO
25 # ----------------------------------------------
26 #
27 # SYNOPSIS
28 # ========
29 #
30 # ::
31 #
32 # $ ./sette.sh
33 #
18 # ===========
34 19 # DESCRIPTION
35 20 # ===========
…
37 22 # Variables to be checked by user:
38 23 #
39 # COMPILER : name of compiler as defined in NEMOGCM/ARCH directory
40 #
41 # BATCH_COMMAND : name of the command for batch submission
42 #
43 # INTERACT_FLAG : flag to run in interactive mode "yes"
44 # to run in batch mode "no"
45 #
46 # MPIRUN_FLAG : flag to run in parallel (MPI) "yes"
47 # to run in sequential mode (NB_PROC = 1) "no"
24 # COMPILER : name of compiler as defined in NEMOGCM/ARCH directory
25 # BATCH_COMMAND_PAR : name of the command for submitting parallel batch jobs
26 # BATCH_COMMAND_SEQ : name of the command for submitting sequential batch jobs
27 # INTERACT_FLAG : flag to run in interactive mode "yes"
28 # to run in batch mode "no"
29 # MPIRUN_FLAG : flag to run in parallel (MPI) "yes"
30 # to run in sequential mode (NB_PROC = 1) "no"
31 # USING_XIOS : flag to control the activation of key_iomput
32 # "yes" to compile using key_iomput and link to the external XIOS library
33 # "no" to compile without key_iomput and link to the old IOIPSL library
34 # USING_MPMD : flag to control the use of stand-alone IO servers
35 # requires USING_XIOS="yes"
36 # "yes" to run in MPMD (detached) mode with stand-alone IO servers
37 # "no" to run in SPMD (attached) mode without separate IO servers
38 # NUM_XIOSERVERS : number of stand-alone IO servers to employ
39 # set to zero if USING_MPMD="no"
40 #
41 # Principal script is sette.sh, that calls
42 #
43 # makenemo
44 #
45 # creates the executable in ${CONFIG_NAME}/BLD/bin/nemo.exe (and its link opa in ${CONFIG_NAME}/EXP00)
43 # makenemo : to create successive executables in ${CONFIG_NAME}/BLD/bin/nemo.exe
44 # and links to opa in ${CONFIG_NAME}/EXP00)
46 45 #
47 46 # param.cfg : sets and loads following directories:
48 #
49 # FORCING_DIR : is the directory for forcing files (tarfile)
50 #
51 # INPUT_DIR : is the directory for input files storing
52 #
53 # TMPDIR : is the temporary directory (if needed)
47 #
48 # FORCING_DIR : is the directory for forcing files (tarfile)
49 # INPUT_DIR : is the directory for input files storing
50 # TMPDIR : is the temporary directory (if needed)
51 # NEMO_VALIDATION_DIR : is the validation directory
52 #
53 # (NOTE: this file is the same for all configurations to be tested with sette)
54 #
55 # all_functions.sh : loads functions used by sette (note: new functions can be added here)
56 # set_namelist : function declared in all_functions that sets namelist parameters
57 # post_test_tidyup : creates validation storage directory and copies required output files
58 # (solver.stat and ocean.output) in it after execution of test.
59 #
60 # VALIDATION tree is:
61 #
62 # NEMO_VALIDATION_DIR/WCONFIG_NAME/WCOMPILER_NAME/TEST_NAME/REVISION_NUMBER(or DATE)
63 #
64 # prepare_exe_dir.sh : defines and creates directory where the test is executed
65 # execution directory takes name of TEST_NAME defined for every test
66 # in sette.sh. (each test in executed in its own directory)
67 #
68 # prepare_job.sh : to generate the script run_job.sh
69 #
70 # fcm_job.sh : run in batch (INTERACT_FLAG="no") or interactive (INTERACT_FLAG="yes")
71 # see sette.sh and BATCH_TEMPLATE directory
72 #
73 # NOTE: jobs requiring initial or forcing data need to have an input_CONFIG.cfg in which
74 # can be found paths to the input tar file)
75 # NOTE: if job is not launched for any reason you have the executable ready in ${EXE_DIR}
76 # directory
77 # NOTE: the changed namelists are left in ${EXE_DIR} directory whereas original namelists
78 # remain in ${NEW_CONF}/EXP00
79 #
63 # NEMO_VALIDATION_DIR : is the validation directory
64 #
65 # (NOTE: this file is the same for all configurations to be tested with sette)
66 #
67 #
68 # all_functions.sh : loads functions used by sette (note: new functions can be added here)
69 #
70 # set_namelist : function declared in all_functions that set namelist parameters for tests
71 #
72 # post_test_tidyup : creates validation storage directory and copy needed output files (solver.stat and ocean.output) in it after execution of test.
73 #
74 # Tree of VALIDATION is:
75 #
76 # NEMO_VALIDATION_DIR/WCONFIG_NAME/WCOMPILER_NAME/TEST_NAME/REVISION_NUMBER(or DATE)
77 #
78 #
79 # prepare_exe_dir.sh : defines and creates directory where the test is executed
80 #
81 # execution directory takes name of TEST_NAME defined in every test in sette.sh
82 #
83 # ( each test in executed in its own directory )
84 #
85 #
86 # prepare_job.sh
87 #
88 # to generate the script run_job.sh
89 #
90 # fcm_job.sh
91 #
92 # run in batch (INTERACT_FLAG="no") or interactive (INTERACT_FLAG="yes") see sette.sh and BATCH_TEMPLATE directory
93 #
94 # (note this job needs to have an input_CONFIG.cfg in which can be found input tar file)
95 #
96 # NOTE: if job is not launched for some problems you have executable ready in ${EXE_DIR} directory
97 #
98 # NOTE: the changed namelists are leaved in ${EXE_DIR} directory whereas original namelist remains in ${NEW_CONF}/EXP00
99 #
100 # in ${SETTE_DIR} is created output.sette with the echo of executed commands
101 #
102 # if sette.sh is stopped in output.sette there is written the last command executed by sette.sh
103 #
104 # if you run: ./sette.sh 2>&1 | tee out.sette
105 #
106 # in ${SETTE_DIR} out.sette is redirected standard error & standard output
107 #
108 #
109 # EXAMPLES
110 # ========
111 #
112 # ::
113 #
114 # $ ./sette.sh
115 #
116 #
117 # TODO
118 # ====
119 #
120 # option debug
121 #
122 # EVOLUTIONS
123 # ==========
124 #
125 # $Id$
126 #
127 # * creation
128 #
129 #-
130 #
131 #-
80 # NOTE: a log file, output.sette, is created in ${SETTE_DIR} with the echoes of
81 # executed commands
82 #
83 # NOTE: if sette.sh is stopped in output.sette there is written the last command
84 # executed by sette.sh
85 #
86 # example use: ./sette.sh
87 #########################################################################################
88 #
132 90 # Compiler among those in NEMOGCM/ARCH
133 91 COMPILER=PW6_VARGAS
…
136 93 export INTERACT_FLAG="no"
137 94 export MPIRUN_FLAG="yes"
138 # IF YOU DON'T WANT TO USE XIOS : (this is a list of keys to be delete)
139 export KEY_XIOS="key_iomput"
140 # IF YOU WANT TO USE XIOS :
141 #export KEY_XIOS=""
142
95 export USING_XIOS="yes"
96 #
97 export DEL_KEYS="key_iomput"
98 if [ ${USING_XIOS} == "yes" ]
99 then
100 export DEL_KEYS=""
101 fi
102 #
103 # Settings which control the use of stand alone servers (only relevant if using xios)
104 #
105 export USING_MPMD="no"
106 export NUM_XIOSERVERS=4
107 export JOB_PREFIX=batch-mpmd
108 #
109 if [ ${USING_MPMD} == "no" ]
110 then
111 export NUM_XIOSERVERS=0
112 export JOB_PREFIX=batch
113 fi
114 #
115 #
116 if [ ${USING_MPMD} == "yes" ] && [ ${USING_XIOS} == "no"]
117 then
118 echo "Incompatible choices. MPMD mode requires the XIOS server"
119 exit
120 fi
121 #
143 122
144 123 # Directory to run the tests
…
152 131 # Copy job_batch_COMPILER file for specific compiler into job_batch_template
153 132 cd ${SETTE_DIR}
154 cp BATCH_TEMPLATE/batch-${COMPILER} job_batch_template || exit
133 cp BATCH_TEMPLATE/${JOB_PREFIX}-${COMPILER} job_batch_template || exit
155 134
156 135 for config in 1 2 3 4 5 6 7 8 9
…
162 141 export TEST_NAME="LONG"
163 142 cd ${SETTE_DIR}
164 . ../CONFIG/makenemo -m ${CMP_NAM} -n GYRE_LONG -r GYRE -j 8 add_key "key_mpp_mpi" del_key ${KEY_XIOS}
143 . ../CONFIG/makenemo -m ${CMP_NAM} -n GYRE_LONG -r GYRE -j 8 add_key "key_mpp_mpi" del_key ${DEL_KEYS}
165 144 cd ${SETTE_DIR}
166 145 . param.cfg
…
169 148 JOB_FILE=${EXE_DIR}/run_job.sh
170 149 NPROC=4
171 \rm ${JOB_FILE}
150 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
172 151 cd ${EXE_DIR}
173 152 set_namelist namelist cn_exp \"GYRE_LONG\"
…
180 159 set_namelist namelist jpnj 2
181 160 set_namelist namelist jpnij 4
182 cd ${SETTE_DIR}
183 . ./prepare_job.sh input_GYRE.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
161 if [ ${USING_MPMD} == "yes" ] ; then
162 set_xio_using_server iodef.xml true
163 else
164 set_xio_using_server iodef.xml false
165 fi
166 cd ${SETTE_DIR}
167 . ./prepare_job.sh input_GYRE.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
184 168
185 169 cd ${SETTE_DIR}
…
199 183 set_namelist namelist jpnij 4
200 184 set_namelist namelist cn_ocerst_in \"GYRE_LONG_00000060_restart\"
185 if [ ${USING_MPMD} == "yes" ] ; then
186 set_xio_using_server iodef.xml true
187 else
188 set_xio_using_server iodef.xml false
189 fi
201 190 for (( i=1; i<=$NPROC; i++)) ; do
202 191 L_NPROC=$(( $i - 1 ))
203 192 ln -sf ../LONG/GYRE_LONG_00000060_restart_${L_NPROC}.nc .
204 193 done
195 if [ ${USING_MPMD} == "yes" ] ; then
196 set_xio_using_server iodef.xml true
197 else
198 set_xio_using_server iodef.xml false
199 fi
205 200 cd ${SETTE_DIR}
206 201 . ./prepare_job.sh input_GYRE.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
207 202 cd ${SETTE_DIR}
208 203 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
214 209 export TEST_NAME="REPRO_1_4"
215 210 cd ${SETTE_DIR}
216 . ../CONFIG/makenemo -m ${CMP_NAM} -n GYRE_4 -r GYRE -j 8 add_key "key_mpp_mpi key_mpp_rep" del_key ${KEY_XIOS}
211 . ../CONFIG/makenemo -m ${CMP_NAM} -n GYRE_4 -r GYRE -j 8 add_key "key_mpp_mpi key_mpp_rep" del_key ${DEL_KEYS}
217 212 cd ${SETTE_DIR}
218 213 . param.cfg
…
221 216 JOB_FILE=${EXE_DIR}/run_job.sh
222 217 NPROC=4
223 \rm ${JOB_FILE}
218 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
224 219 cd ${EXE_DIR}
225 220 set_namelist namelist cn_exp \"GYRE_14\"
…
234 228 set_namelist namelist jpnj 4
235 229 set_namelist namelist jpnij 4
236 230 
231 if [ ${USING_MPMD} == "yes" ] ; then
232 set_xio_using_server iodef.xml true
233 else
234 set_xio_using_server iodef.xml false
235 fi
237 236 cd ${SETTE_DIR}
238 . ./prepare_job.sh input_GYRE.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
237 . ./prepare_job.sh input_GYRE.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
239 238 cd ${SETTE_DIR}
240 239 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
244 243 JOB_FILE=${EXE_DIR}/run_job.sh
245 244 NPROC=4
246 \rm $JOB_FILE
245 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
247 246 cd ${EXE_DIR}
248 247 set_namelist namelist cn_exp \"GYRE_22\"
…
256 255 set_namelist namelist jpnj 2
257 256 set_namelist namelist jpnij 4
258 257 
259 if [ ${USING_MPMD} == "yes" ] ; then
260 set_xio_using_server iodef.xml true
261 else
262 set_xio_using_server iodef.xml false
263 fi
258 264 cd ${SETTE_DIR}
259 . ./prepare_job.sh input_GYRE.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
265 . ./prepare_job.sh input_GYRE.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
260 266 cd ${SETTE_DIR}
261 267 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
268 274 export TEST_NAME="LONG"
269 275 cd ${SETTE_DIR}
270 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2LIMPIS_LONG -r ORCA2_LIM_PISCES -j 8 add_key "key_mpp_mpi" del_key ${KEY_XIOS}
276 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2LIMPIS_LONG -r ORCA2_LIM_PISCES -j 8 add_key "key_mpp_mpi" del_key ${DEL_KEYS}
271 277 cd ${SETTE_DIR}
272 278 . param.cfg
…
275 281 JOB_FILE=${EXE_DIR}/run_job.sh
276 282 NPROC=4
277 \rm ${JOB_FILE}
283 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
278 284 cd ${EXE_DIR}
279 285 set_namelist namelist cn_exp \"O2LP_LONG\"
…
298 304 set_namelist namelist_pisces ln_ironsed .false.
299 305 set_namelist namelist_pisces ln_hydrofe .false.
306 if [ ${USING_MPMD} == "yes" ] ; then
307 set_xio_using_server iodef.xml true
308 else
309 set_xio_using_server iodef.xml false
310 fi
300 311 cd ${SETTE_DIR}
301 . ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
312 . ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
302 313
303 314 cd ${SETTE_DIR}
…
341 352 ln -sf ../LONG/O2LP_LONG_00000075_restart_ice_${L_NPROC}.nc .
342 353 done
354 if [ ${USING_MPMD} == "yes" ] ; then
355 set_xio_using_server iodef.xml true
356 else
357 set_xio_using_server iodef.xml false
358 fi
343 359 cd ${SETTE_DIR}
344 . ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
360 . ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
345 361 cd ${SETTE_DIR}
346 362 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
351 367 export TEST_NAME="REPRO_4_4"
352 368 cd ${SETTE_DIR}
353 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2LIMPIS_16 -r ORCA2_LIM_PISCES -j 8 add_key "key_mpp_mpi key_mpp_rep" del_key ${KEY_XIOS}
369 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2LIMPIS_16 -r ORCA2_LIM_PISCES -j 8 add_key "key_mpp_mpi key_mpp_rep" del_key ${DEL_KEYS}
354 370 cd ${SETTE_DIR}
355 371 . param.cfg
…
358 374 JOB_FILE=${EXE_DIR}/run_job.sh
359 375 NPROC=16
360 \rm $JOB_FILE
376 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
361 377 cd ${EXE_DIR}
362 378 set_namelist namelist nn_it000 1
…
383 399 # put ln_pisdmp to false : no restoring to global mean value
384 400 set_namelist namelist_pisces ln_pisdmp .false.
401 if [ ${USING_MPMD} == "yes" ] ; then
402 set_xio_using_server iodef.xml true
403 else
404 set_xio_using_server iodef.xml false
405 fi
385 406 cd ${SETTE_DIR}
386 . ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
407 . ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
387 408 cd ${SETTE_DIR}
388 409 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
393 414 JOB_FILE=${EXE_DIR}/run_job.sh
394 415 NPROC=16
395 \rm $JOB_FILE
416 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
396 417 cd ${EXE_DIR}
397 418 set_namelist namelist nn_it000 1
…
417 438 # put ln_pisdmp to false : no restoring to global mean value
418 439 set_namelist namelist_pisces ln_pisdmp .false.
440 if [ ${USING_MPMD} == "yes" ] ; then
441 set_xio_using_server iodef.xml true
442 else
443 set_xio_using_server iodef.xml false
444 fi
419 445 cd ${SETTE_DIR}
420 . ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
446 . ./prepare_job.sh input_ORCA2_LIM_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
421 447 cd ${SETTE_DIR}
422 448 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
428 454 export TEST_NAME="LONG"
429 455 cd ${SETTE_DIR}
430 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2OFFPIS_LONG -r ORCA2_OFF_PISCES -j 8 add_key "key_mpp_mpi key_mpp_rep" del_key ${KEY_XIOS}
456 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2OFFPIS_LONG -r ORCA2_OFF_PISCES -j 8 add_key "key_mpp_mpi key_mpp_rep" del_key ${DEL_KEYS}
431 457 cd ${SETTE_DIR}
432 458 . param.cfg
…
435 461 JOB_FILE=${EXE_DIR}/run_job.sh
436 462 NPROC=4
437 \rm $JOB_FILE
463 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
438 464 cd ${EXE_DIR}
439 465 set_namelist namelist cn_exp \"OFFP_LONG\"
…
459 485 # put ln_pisdmp to false : no restoring to global mean value
460 486 set_namelist namelist_pisces ln_pisdmp .false.
487 if [ ${USING_MPMD} == "yes" ] ; then
488 set_xio_using_server iodef.xml true
489 else
490 set_xio_using_server iodef.xml false
491 fi
461 492 cd ${SETTE_DIR}
462 . ./prepare_job.sh input_ORCA2_OFF_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
493 . ./prepare_job.sh input_ORCA2_OFF_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
463 494
464 495 cd ${SETTE_DIR}
…
495 526 # put ln_pisdmp to false : no restoring to global mean value
496 527 set_namelist namelist_pisces ln_pisdmp .false.
528 if [ ${USING_MPMD} == "yes" ] ; then
529 set_xio_using_server iodef.xml true
530 else
531 set_xio_using_server iodef.xml false
532 fi
497 533 cd ${SETTE_DIR}
498 . ./prepare_job.sh input_ORCA2_OFF_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
534 . ./prepare_job.sh input_ORCA2_OFF_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
499 535 cd ${SETTE_DIR}
500 536 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
505 541 export TEST_NAME="REPRO_4_4"
506 542 cd ${SETTE_DIR}
507 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2OFFPIS_16 -r ORCA2_OFF_PISCES -j 8 add_key "key_mpp_mpi key_mpp_rep" del_key ${KEY_XIOS}
543 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2OFFPIS_16 -r ORCA2_OFF_PISCES -j 8 add_key "key_mpp_mpi key_mpp_rep" del_key ${DEL_KEYS}
508 544 cd ${SETTE_DIR}
509 545 . param.cfg
…
512 548 JOB_FILE=${EXE_DIR}/run_job.sh
513 549 NPROC=16
514 \rm $JOB_FILE
550 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
515 551 cd ${EXE_DIR}
516 552 set_namelist namelist nn_it000 1
…
535 571 # put ln_pisdmp to false : no restoring to global mean value
536 572 set_namelist namelist_pisces ln_pisdmp .false.
573 if [ ${USING_MPMD} == "yes" ] ; then
574 set_xio_using_server iodef.xml true
575 else
576 set_xio_using_server iodef.xml false
577 fi
537 578 cd ${SETTE_DIR}
538 . ./prepare_job.sh input_ORCA2_OFF_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
579 . ./prepare_job.sh input_ORCA2_OFF_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
539 580 cd ${SETTE_DIR}
540 581 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
545 586 JOB_FILE=${EXE_DIR}/run_job.sh
546 587 NPROC=16
547 \rm $JOB_FILE
588 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
548 589 cd ${EXE_DIR}
549 590 set_namelist namelist nn_it000 1
…
568 609 # put ln_pisdmp to false : no restoring to global mean value
569 610 set_namelist namelist_pisces ln_pisdmp .false.
611 if [ ${USING_MPMD} == "yes" ] ; then
612 set_xio_using_server iodef.xml true
613 else
614 set_xio_using_server iodef.xml false
615 fi
570 616 cd ${SETTE_DIR}
571 . ./prepare_job.sh input_ORCA2_OFF_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
617 . ./prepare_job.sh input_ORCA2_OFF_PISCES.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
572 618 cd ${SETTE_DIR}
573 619 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
579 625 export TEST_NAME="LONG"
580 626 cd ${SETTE_DIR}
581 . ../CONFIG/makenemo -m ${CMP_NAM} -n AMM12_LONG -r AMM12 -j 8 add_key "key_tide" del_key ${KEY_XIOS}
627 . ../CONFIG/makenemo -m ${CMP_NAM} -n AMM12_LONG -r AMM12 -j 8 add_key "key_tide" del_key ${DEL_KEYS}
582 628 cd ${SETTE_DIR}
583 629 . param.cfg
…
586 632 JOB_FILE=${EXE_DIR}/run_job.sh
587 633 NPROC=32
588 \rm $JOB_FILE
634 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
589 635 cd ${EXE_DIR}
590 636 set_namelist namelist nn_it000 1
…
600 646 set_namelist namelist jpnj 4
601 647 set_namelist namelist jpnij 32
648 if [ ${USING_MPMD} == "yes" ] ; then
649 set_xio_using_server iodef.xml true
650 else
651 set_xio_using_server iodef.xml false
652 fi
602 653 cd ${SETTE_DIR}
603 . ./prepare_job.sh input_AMM12.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
654 . ./prepare_job.sh input_AMM12.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
604 655
605 656 cd ${SETTE_DIR}
…
625 676 ln -sf ../LONG/AMM12_00000006_restart_${L_NPROC}.nc .
626 677 done
678 if [ ${USING_MPMD} == "yes" ] ; then
679 set_xio_using_server iodef.xml true
680 else
681 set_xio_using_server iodef.xml false
682 fi
627 683 cd ${SETTE_DIR}
628 . ./prepare_job.sh input_AMM12.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
684 . ./prepare_job.sh input_AMM12.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
629 685 cd ${SETTE_DIR}
630 686 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
635 691 export TEST_NAME="REPRO_8_4"
636 692 cd ${SETTE_DIR}
637 . ../CONFIG/makenemo -m ${CMP_NAM} -n AMM12_32 -r AMM12 -j 8 add_key "key_mpp_rep key_tide" del_key ${KEY_XIOS}
693 . ../CONFIG/makenemo -m ${CMP_NAM} -n AMM12_32 -r AMM12 -j 8 add_key "key_mpp_rep key_tide" del_key ${DEL_KEYS}
638 694 cd ${SETTE_DIR}
639 695 . param.cfg
…
642 698 JOB_FILE=${EXE_DIR}/run_job.sh
643 699 NPROC=32
644 \rm ${JOB_FILE}
700 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
645 701 cd ${EXE_DIR}
646 702 set_namelist namelist nn_it000 1
…
655 711 set_namelist namelist jpnj 4
656 712 set_namelist namelist jpnij 32
713 if [ ${USING_MPMD} == "yes" ] ; then
714 set_xio_using_server iodef.xml true
715 else
716 set_xio_using_server iodef.xml false
717 fi
657 718 cd ${SETTE_DIR}
658 . ./prepare_job.sh input_AMM12.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
719 . ./prepare_job.sh input_AMM12.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
659 720 cd ${SETTE_DIR}
660 721 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
675 736 set_namelist namelist jpnj 8
676 737 set_namelist namelist jpnij 32
738 if [ ${USING_MPMD} == "yes" ] ; then
739 set_xio_using_server iodef.xml true
740 else
741 set_xio_using_server iodef.xml false
742 fi
677 743 cd ${SETTE_DIR}
678 . ./prepare_job.sh input_AMM12.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
744 . ./prepare_job.sh input_AMM12.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
679 745 cd ${SETTE_DIR}
680 746 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
…
686 752 export TEST_NAME="SHORT"
687 753 cd ${SETTE_DIR}
688 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2AGUL_1_2 -r ORCA2_LIM -j 8 add_key "key_mpp_mpi key_mpp_rep key_agrif" del_key "key_zdftmx" del_key ${KEY_XIOS}
754 . ../CONFIG/makenemo -m ${CMP_NAM} -n ORCA2AGUL_1_2 -r ORCA2_LIM -j 8 add_key "key_mpp_mpi key_mpp_rep key_agrif" del_key "key_zdftmx" del_key ${DEL_KEYS}
689 755 cd ${SETTE_DIR}
690 756 . param.cfg
…
693 759 JOB_FILE=${EXE_DIR}/run_job.sh
694 760 NPROC=2
695 \rm ${JOB_FILE}
761 if [ -f ${JOB_FILE} ] ; then \rm ${JOB_FILE} ; fi
696 762 cd ${EXE_DIR}
697 763 set_namelist namelist nn_it000 1
…
706 772 set_namelist 1_namelist ln_ctl .false.
707 773 set_namelist 1_namelist ln_clobber .true.
774 if [ ${USING_MPMD} == "yes" ] ; then
775 set_xio_using_server iodef.xml true
776 else
777 set_xio_using_server iodef.xml false
778 fi
708 779 cd ${SETTE_DIR}
709 . ./prepare_job.sh input_ORCA2_LIM_AGRIF.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE}
780 . ./prepare_job.sh input_ORCA2_LIM_AGRIF.cfg $NPROC ${TEST_NAME} ${MPIRUN_FLAG} ${JOB_FILE} ${NUM_XIOSERVERS}
710 781 cd ${SETTE_DIR}
711 782 . ./fcm_job.sh $NPROC ${JOB_FILE} ${INTERACT_FLAG} ${MPIRUN_FLAG}
branches/2013/dev_r3948_NOC_FK/NEMOGCM/TOOLS/COMPILE/Fcheck_archfile.sh
r3925 r4219
40 40 # ::
41 41 #
42 $ ./Fcheck_archfile.sh ARCHFILE COMPILER
42 $ ./Fcheck_archfile.sh ARCHFILE CPPFILE COMPILER
43 43 #
44 44 #
…
94 94 else
95 95 if [ -f ${COMPIL_DIR}/$1 ]; then
96 # has the cpp keys file been changed since we copied the arch file in ${COMPIL_DIR}?
97 mycpp=$( ls -l ${COMPIL_DIR}/$2 | sed -e "s/.* -> //" )
98 if [ "$mycpp" != "$( cat ${COMPIL_DIR}/cpp.history )" ]; then
99 echo $mycpp > ${COMPIL_DIR}/cpp.history
100 cpeval ${myarch} ${COMPIL_DIR}/$1
96 if [ "$2" != "nocpp" ]
97 then
98 # has the cpp keys file been changed since we copied the arch file in ${COMPIL_DIR}?
99 mycpp=$( ls -l ${COMPIL_DIR}/$2 | sed -e "s/.* -> //" )
100 if [ "$mycpp" != "$( cat ${COMPIL_DIR}/cpp.history )" ]; then
101 echo $mycpp > ${COMPIL_DIR}/cpp.history
102 cpeval ${myarch} ${COMPIL_DIR}/$1
103 fi
104 # has the cpp keys file been updated since we copied the arch file in ${COMPIL_DIR}?
105 mycpp=$( find -L ${COMPIL_DIR} -cnewer ${COMPIL_DIR}/$1 -name $2 -print )
106 [ ${#mycpp} -ne 0 ] && cpeval ${myarch} ${COMPIL_DIR}/$1
101 107 fi
102 # has the cpp keys file been updated since we copied the arch file in ${COMPIL_DIR}?
103 mycpp=$( find -L ${COMPIL_DIR} -cnewer ${COMPIL_DIR}/$1 -name $2 -print )
104 [ ${#mycpp} -ne 0 ] && cpeval ${myarch} ${COMPIL_DIR}/$1
105 108 # has myarch file been updated since we copied it in ${COMPIL_DIR}?
106 109 myarchdir=$( dirname ${myarch} )
…
134 137 if [ "$myarch" == "$( cat ${COMPIL_DIR}/arch.history )" ]; then
135 138 if [ -f ${COMPIL_DIR}/$1 ]; then
136 # has the cpp keys file been changed since we copied the arch file in ${COMPIL_DIR}?
137 mycpp=$( ls -l ${COMPIL_DIR}/$2 | sed -e "s/.* -> //" )
138 if [ "$mycpp" != "$( cat ${COMPIL_DIR}/cpp.history )" ]; then
139 echo $mycpp > ${COMPIL_DIR}/cpp.history
140 cpeval ${myarch} ${COMPIL_DIR}/$1
139 if [ "$2" != "nocpp" ]
140 then
141 # has the cpp keys file been changed since we copied the arch file in ${COMPIL_DIR}?
142 mycpp=$( ls -l ${COMPIL_DIR}/$2 | sed -e "s/.* -> //" )
143 if [ "$mycpp" != "$( cat ${COMPIL_DIR}/cpp.history )" ]; then
144 echo $mycpp > ${COMPIL_DIR}/cpp.history
145 cpeval ${myarch} ${COMPIL_DIR}/$1
146 fi
147 # has the cpp keys file been updated since we copied the arch file in ${COMPIL_DIR}?
148 mycpp=$( find -L ${COMPIL_DIR} -cnewer ${COMPIL_DIR}/$1 -name $2 -print )
149 [ ${#mycpp} -ne 0 ] && cpeval ${myarch} ${COMPIL_DIR}/$1
141 150 fi
142 # has the cpp keys file been updated since we copied the arch file in ${COMPIL_DIR}?
143 mycpp=$( find -L ${COMPIL_DIR} -cnewer ${COMPIL_DIR}/$1 -name $2 -print )
144 [ ${#mycpp} -ne 0 ] && cpeval ${myarch} ${COMPIL_DIR}/$1
145 151 # has myarch file been updated since we copied it in ${COMPIL_DIR}?
146 152 myarch=$( find -L ${MAIN_DIR}/ARCH -cnewer ${COMPIL_DIR}/$1 -name arch-${3}.fcm -print )
…
150 156 fi
151 157 else
152 ls -l ${COMPIL_DIR}/$2 | sed -e "s/.* -> //" > ${COMPIL_DIR}/cpp.history
158 if [ "$2" != "nocpp" ]
159 then
160 ls -l ${COMPIL_DIR}/$2 | sed -e "s/.* -> //" > ${COMPIL_DIR}/cpp.history
161 fi
153 162 echo ${myarch} > ${COMPIL_DIR}/arch.history
154 163 cpeval ${myarch} ${COMPIL_DIR}/$1
…
157 166
158 167 #- do we need xios library?
159 use_iom=$( sed -e "s/#.*$//" ${COMPIL_DIR}/$2 | grep -c key_iomput )
168 if [ "$2" != "nocpp" ]
169 then
170 use_iom=$( sed -e "s/#.*$//" ${COMPIL_DIR}/$2 | grep -c key_iomput )
171 else
172 use_iom=0
173 fi
160 174 have_lxios=$( sed -e "s/#.*$//" ${COMPIL_DIR}/$1 | grep -c "\-lxios" )
161 175 if [[ ( $use_iom -eq 0 ) && ( $have_lxios -ge 1 ) ]]
…
166 180
167 181 #- do we need oasis libraries?
168 use_oasis=$( sed -e "s/#.*$//" ${COMPIL_DIR}/$2 | grep -c key_oasis3 )
182 if [ "$2" != "nocpp" ]
183 then
184 use_oasis=$( sed -e "s/#.*$//" ${COMPIL_DIR}/$2 | grep -c key_oasis3 )
185 else
186 use_oasis=0
187 fi
169 188 for liboa in psmile.MPI1 mct mpeu scrip mpp_io
170 189 do
branches/2013/dev_r3948_NOC_FK/NEMOGCM/TOOLS/MISCELLANEOUS/chk_iomput.sh
r2404 r4219
35 35 echo ' --insrc only print all variable definitions found in the source code'
36 36 echo 'Examples'
37 echo ' chk_iomput.sh'
38 echo ' chk_iomput.sh --help'
39 echo ' chk_iomput.sh ../../CONFIG/ORCA2_LIM/EXP00/iodef.xml "../../NEMO/OPA_SRC/ ../../NEMO/LIM_SRC_2/"'
37 echo ' ./chk_iomput.sh'
38 echo ' ./chk_iomput.sh --help'
39 echo ' ./chk_iomput.sh ../../CONFIG/ORCA2_LIM/EXP00/iodef.xml "../../NEMO/OPA_SRC/ ../../NEMO/LIM_SRC_2/"'
40 40 echo
41 41 exit ;;
…
59 59 #------------------------------------------------
60 60 #
61 [ $inxml -eq 1 ] && grep "< *field * id *=" $xmlfile
61 external=$( grep -c "<field_definition *\([^ ].* \)*src=" $xmlfile )
62 if [ $external -eq 1 ]
63 then
64 xmlfield_def=$( grep "<field_definition *\([^ ].* \)*src=" $xmlfile | sed -e 's/.*src="\([^"]*\)".*/\1/' )
65 xmlfield_def=$( dirname $xmlfile )/$xmlfield_def
66 else
67 xmlfield_def=$xmlfile
68 fi
69 [ $inxml -eq 1 ] && grep "< *field *\([^ ].* \)*id *=" $xmlfield_def
62 70 [ $insrc -eq 1 ] && find $srcdir -name "*.[Ffh]90" -exec grep -iH "^[^\!]*call *iom_put *(" {} \;
63 71 [ $(( $insrc + $inxml )) -ge 1 ] && exit
…
71 79 # list of variables used in "CALL iom_put"
72 80 #
73 varlistsrc=$( find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | sort -d )
81 badvarsrc=$( find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | grep -ic iom_put )
82 if [ $badvarsrc -ne 0 ]
83 then
84 echo "The following call to iom_put cannot be checked"
85 echo
86 find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | grep -i iom_put | sort -d
87 echo
88 fi
89 varlistsrc=$( find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | grep -vi iom_put | sort -d )
74 90 #
75 91 # list of variables defined in the xml file
76 92 #
77 varlistxml=$( grep "< *field * id *=" $xmlfile | sed -e "s/^.*< *field *id *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d )
93 varlistxml=$( grep "< *field *\([^ ].* \)*id *=" $xmlfield_def | sed -e "s/^.*< *field .*id *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d )
78 94 #
79 95 # list of variables to be output in the xml file
80 96 #
81 varlistout=$( grep "< *field * ref *=" $xmlfile | sed -e "s/^.*< *field *ref *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d )
97 varlistout=$( grep "< *field *\([^ ].* \)*field_ref *=" $xmlfile | sed -e "s/^.*< *field .*field_ref *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d )
82 98 #
83 99 echo "--------------------------------------------------"
84 100 echo check if all iom_put found in $srcdir
85 echo have a corresponding variable definition in $xmlfile
101 echo have a corresponding variable definition in $xmlfield_def
86 102 echo "--------------------------------------------------"
87 103 for var in $varlistsrc
…
90 106 if [ $tst -ne 1 ]
91 107 then
92 echo "problem with $var: $tst lines corresponding to its definition in $xmlfile, but defined in the code in"
108 echo "problem with $var: $tst lines corresponding to its definition in $xmlfield_def, but defined in the code in"
93 109 for f in $srclist
94 110 do
branches/2013/dev_r3948_NOC_FK/NEMOGCM/TOOLS/maketools
r3294 r4219
146 146
147 147 #- When used for the first time, choose a compiler ---
148 . ${COMPIL_DIR}/Fcheck_archfile.sh arch_tools.fcm ${CMP_NAM} || exit
148 . ${COMPIL_DIR}/Fcheck_archfile.sh arch_tools.fcm nocpp ${CMP_NAM} || exit
149 149
150 150 #- Choose a default tool if needed ---