Changeset 4193
- Timestamp: 2013-11-14T12:04:31+01:00 (11 years ago)
- Location: branches/2013/dev_r3867_MERCATOR1_DYN
- Files: 19 deleted, 84 edited, 11 copied
branches/2013/dev_r3867_MERCATOR1_DYN/DOC/NEMO_book.tex (r3294 → r4193)

\usepackage[margin=10pt,font={small},labelsep=colon,labelfont={bf}]{caption} % Gives small font for captions
\usepackage{enumitem}  % allows non-bold description items
\usepackage{longtable} % allows multipage tables
%\usepackage{colortbl} % gives coloured panels behind table columns
branches/2013/dev_r3867_MERCATOR1_DYN/DOC/TexFiles/Chapters/Chap_DIA.tex (r3764 → r4193)

% ================================================================
% Chapter I/O & Diagnostics
% ================================================================
\chapter{Output and Diagnostics (IOM, DIA, TRD, FLO)}

...

The model outputs are of three types: the restart file, the output listing,
and the diagnostic output file(s). The restart file is used internally by the code when
the user wants to start the model with initial conditions defined by a
previous simulation. It contains all the information that is necessary in
...
that it is saved in the same binary format as the one used by the computer
that is to read it (in particular, 32 bits binary IEEE format must not be used for
this file).

The output listing and file(s) are predefined but should be checked
and eventually adapted to the user's needs. The output listing is stored in
the $ocean.output$ file. The information is printed from within the code on the
...
"\textit{grep -i numout}" in the source code directory.
By default, diagnostic output files are written in NetCDF format, but an IEEE binary output format, called DIMG, can be chosen by defining \key{dimgout}.

Since version 3.2, when defining \key{iomput}, an I/O server has been added which provides more flexibility in the choice of the fields to be written as well as how the writing work is distributed over the processors in massively parallel computing. The complete description of the use of this I/O server is presented in the next section.

By default, if neither \key{iomput} nor \key{dimgout} are defined, NEMO produces NetCDF with the old IOIPSL library, which has been kept for compatibility and its ease of installation. However, the IOIPSL library is quite inefficient on parallel machines and, since version 3.2, many diagnostic options have been added presuming the use of \key{iomput}. The usefulness of the default IOIPSL-based option is expected to reduce with each new release. If \key{iomput} is not defined, output files and content are defined in the \mdl{diawri} module and contain mean (or instantaneous, if \key{diainstant} is defined) values over a regular period of nn\_write time-steps (namelist parameter).

%\gmcomment{ % start of gmcomment

...
Since version 3.2, iomput is the NEMO output interface of choice. It has been designed to be simple to use, flexible and efficient. The two main purposes of iomput are:
\begin{enumerate}
\item The complete and flexible control of the output files through external XML files adapted by the user from standard templates.
\item To achieve high performance and scalable output through the optional distribution of all diagnostic output related tasks to dedicated processes.
\end{enumerate}
The first functionality allows the user to specify, without code changes or recompilation, aspects of the diagnostic output stream, such as:
\begin{itemize}
\item The choice of output frequencies, which can be different for each file (including real months and years).
\item The choice of file contents: complete flexibility over which data are written in which files (the same data can be written in different files).
\item The possibility to split output files at a chosen frequency.
\item The possibility to extract a vertical or a horizontal subdomain.
\item The choice of the temporal operation to perform, e.g.: average, accumulate, instantaneous, min, max and once.
\item Control over metadata via a large XML "database" of possible output fields.
\end{itemize}
In addition, iomput allows the user to add the output of any new variable (scalar, 2D or 3D) in the code in a very easy way. All details of iomput functionalities are listed in the following subsections. Examples of the XML files that control the outputs can be found in:
\begin{alltt}
\begin{verbatim}
NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef.xml
NEMOGCM/CONFIG/SHARED/field_def.xml
and
NEMOGCM/CONFIG/SHARED/domain_def.xml.
\end{verbatim}
\end{alltt}

The second functionality targets output performance when running in parallel (\key{mpp\_mpi}). Iomput provides the possibility to specify N dedicated I/O processes (in addition to the NEMO processes) to collect and write the outputs. With an appropriate choice of N by the user, the bottleneck associated with the writing of the output files can be greatly reduced.

Since version 3.5, the iom\_put interface depends on an external code called \href{http://forge.ipsl.jussieu.fr/ioserver}{XIOS}.
This new IO server can take advantage of the parallel I/O functionality of NetCDF4 to create a single output file and therefore to bypass the rebuilding phase. Note that writing in parallel into the same NetCDF files requires that your NetCDF4 library is linked to an HDF5 library that has been correctly compiled (i.e. with the configure option $--$enable-parallel). Note that the files created by iomput through XIOS are incompatible with NetCDF3. All post-processing and visualization tools must therefore be compatible with NetCDF4 and not only NetCDF3.

Even if not using the parallel I/O functionality of NetCDF4, using N dedicated I/O servers, where N is typically much less than the number of NEMO processors, will reduce the number of output files created. This can greatly reduce the post-processing burden usually associated with using large numbers of NEMO processors. Note that for smaller configurations, the rebuilding phase can be avoided, even without a parallel-enabled NetCDF4 library, simply by employing only one dedicated I/O server.

\subsection{XIOS: the IO\_SERVER}

\subsubsection{Attached or detached mode?}

Iomput is based on \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS}, the io\_server developed by Yann Meurdesoif from IPSL. The behaviour of the io subsystem is controlled by settings in the external XML files listed above. Key settings in the iodef.xml file are {\tt using\_server} and the {\tt type} tag associated with each defined file. The {\tt using\_server} setting determines whether the server will be used in ''attached mode'' (as a library) [{\tt false}] or in ''detached mode'' (as an external executable on N additional, dedicated cpus) [{\tt true}]. The ''attached mode'' is simpler to use but much less efficient for massively parallel applications. The type of each file can be either ''multiple\_file'' or ''one\_file''.
In attached mode and if the type of file is ''multiple\_file'', then each NEMO process will also act as an IO server and produce its own set of output files. Superficially, this emulates the standard behaviour in previous versions. However, the subdomain written out by each process does not correspond to the {\tt jpi x jpj x jpk} domain actually computed by the process (although it may if {\tt jpni=1}). Instead, each process will have collected and written out a number of complete longitudinal strips. If the ''one\_file'' option is chosen, then all processes will collect their longitudinal strips and write (in parallel) to a single output file.

In detached mode and if the type of file is ''multiple\_file'', then each stand-alone XIOS process will collect data for a range of complete longitudinal strips and write to its own set of output files. If the ''one\_file'' option is chosen, then all XIOS processes will collect their longitudinal strips and write (in parallel) to a single output file. Note that running in detached mode requires launching a Multiple Process Multiple Data (MPMD) parallel job. The following subsection provides a typical example, but the syntax will vary in different MPP environments.

\subsubsection{Number of cpus used by XIOS in detached mode}

The number of cores used by XIOS is specified when launching the model. The number of cores dedicated to XIOS should be from ~1/10 to ~1/50 of the number of cores dedicated to NEMO. Some manufacturers suggest using O($\sqrt{N}$) dedicated IO processors for N processors, but this is a general recommendation and not specific to NEMO. It is difficult to provide precise recommendations because the optimal choice will depend on the particular hardware properties of the target system (parallel filesystem performance, available memory, memory bandwidth etc.) and the volume and frequency of data to be created.
Here is an example of 2 cpus for the io\_server and 62 cpus for NEMO using mpirun:

\texttt{ mpirun -np 62 ./nemo.exe : -np 2 ./xios\_server.exe }

\subsubsection{Control of XIOS: the XIOS context in iodef.xml}

As well as the {\tt using\_server} flag, other controls on the use of XIOS are set in the XIOS context in iodef.xml. See the XML basics section below for more details on XML syntax and rules.

\begin{tabular}{|p{4cm}|p{6.0cm}|p{2.0cm}|}
\hline
variable name & description & example \\
\hline
\hline
buffer\_size & buffer size used by XIOS to send data from NEMO to XIOS. Larger is more efficient. Note that needed/used buffer sizes are summarized at the end of the job & 25000000 \\
\hline
buffer\_server\_factor\_size & ratio between NEMO and XIOS buffer size. Should be 2. & 2 \\
\hline
info\_level & verbosity level (0 to 100) & 0 \\
\hline
using\_server & activate attached (false) or detached (true) mode & true \\
\hline
using\_oasis & XIOS is used with OASIS (true) or not (false) & false \\
\hline
oasis\_codes\_id & when using oasis, define the identifier of NEMO in the namcouple. Note that the identifier of XIOS is xios.x & oceanx \\
\hline
\end{tabular}


\subsection{Practical issues}

\subsubsection{Installation}

As mentioned, XIOS is supported separately and must be downloaded and compiled before it can be used with NEMO. See the installation guide on the \href{http://forge.ipsl.jussieu.fr/ioserver/wiki}{XIOS} wiki for help and guidance. NEMO will need to link to the compiled XIOS library. The \href{http://www.nemo-ocean.eu/Using-NEMO/User-Guides/Basics/XIOS-IO-server-installation-and-use}{XIOS with NEMO} guide provides an example illustration of how this can be achieved.

\subsubsection{Add your own outputs}

It is very easy to add your own outputs with iomput.
Many standard fields and diagnostics are already prepared (i.e., steps 1 to 3 below have been done) and simply need to be activated by including the required output in a file definition in iodef.xml (step 4). To add new output variables, all 4 of the following steps must be taken.
\begin{description}
\item[1.] in the NEMO code, add a \\
\texttt{ CALL iom\_put( 'identifier', array ) } \\
where you want to output a 2D or 3D array.

\item[2.] If necessary, add \\
\texttt{ USE iom\ \ \ \ \ \ \ \ \ \ \ \ ! I/O manager library } \\
to the list of used modules in the upper part of your module.

\item[3.] in the field\_def.xml file, add the definition of your variable using the same identifier you used in the f90 code (see subsequent sections for details of the XML syntax and rules). For example:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <field_definition>
      <!-- T grid -->

      <field_group id="grid_T" grid_ref="grid_T_3D">
      ...
      <field id="identifier" long_name="blabla" ... />
      ...
   </field_definition>
\end{verbatim}
}}\end{alltt}
Note that your definition must be added to the field\_group whose reference grid is consistent with the size of the array passed to iomput. The grid\_ref attribute refers to definitions set in iodef.xml which, in turn, reference grids and axes either defined in the code (iom\_set\_domain\_attr and iom\_set\_axis\_attr in iom.F90) or defined in the domain\_def.xml file.
E.g.:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <grid id="grid_T_3D" domain_ref="grid_T" axis_ref="deptht"/>
\end{verbatim}
}}\end{alltt}
Note, if your array is computed within the surface module each nn\_fsbc time-step,
add the field definition within the field\_group defined with the id ''SBC'': $<$field\_group id=''SBC''...$>$, which has been defined with the correct frequency of operations (iom\_set\_field\_attr in iom.F90).

\item[4.] add your field in one of the output files defined in iodef.xml (again see subsequent sections for syntax and rules) \\
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <file id="file1" ... >
      ...
      <field field_ref="identifier" />
      ...
   </file>
\end{verbatim}
}}\end{alltt}

\end{description}

\subsection{XML fundamentals}

\subsubsection{XML basic rules}

...

\subsubsection{Structure of the xml file used in NEMO}
The XML file used in XIOS is structured by 7 families of tags: context, axis, domain, grid, field, file and variable. Each tag family has a hierarchy of three flavors (except for context):
\\
\begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|}
\hline
flavor & description & example \\
\hline
\hline
root & declaration of the root element that can contain element groups or elements & {\scriptsize \verb? <file_definition ... >?} \\
\hline
group & declaration of a group element that can contain element groups or elements & {\scriptsize \verb? <file_group ... >?} \\
\hline
element & declaration of an element that can contain elements & {\scriptsize \verb? <file ... >?} \\
\hline
\end{tabular}
\\

Each element may have several attributes. Some attributes are mandatory, others are optional but have a default value, and others are completely optional. Id is a special attribute used to identify an element or a group of elements. It must be unique for a kind of element. It is optional, but no reference to the corresponding element can be made if it is not defined.

The XML file is split into context tags that are used to isolate the IO definitions of different codes or of different parts of a code. No interference is possible between 2 different contexts. Each context has its own calendar and an associated timestep. In NEMO, we use the following contexts (that can be defined in any order):\\
\\
\begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|}
\hline
context & description & example \\
\hline
\hline
context xios & context containing information for XIOS & {\scriptsize \verb? <context id="xios" ... ?} \\
\hline
context nemo & context containing IO information for NEMO (mother grid when using AGRIF) & {\scriptsize \verb? <context id="nemo" ... ?} \\
\hline
context 1\_nemo & context containing IO information for NEMO child grid 1 (when using AGRIF) & {\scriptsize \verb? <context id="1_nemo" ... ?} \\
\hline
context n\_nemo & context containing IO information for NEMO child grid n (when using AGRIF) & {\scriptsize \verb? <context id="n_nemo" ... ?} \\
\hline
\end{tabular}
\\

\noindent The xios context contains only 1 tag:
\\
\begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|}
\hline
context tag & description & example \\
\hline
\hline
variable\_definition & define variables needed by XIOS. This can be seen as a kind of namelist for XIOS. & {\scriptsize \verb? <variable_definition ... ?} \\
\hline
\end{tabular}
\\

\noindent Each context tag related to NEMO (mother or child grids) is divided into 5 parts (that can be defined in any order):\\
\\
\begin{tabular}{|p{3.0cm}|p{4.5cm}|p{4.5cm}|}
\hline
context tag & description & example \\
\hline
\hline
field\_definition & define all variables that can potentially be outputted & {\scriptsize \verb? <field_definition ... ?} \\
\hline
file\_definition & define the netcdf files to be created and the variables they will contain & {\scriptsize \verb? <file_definition ... ?} \\
\hline
axis\_definition & define the vertical axis & {\scriptsize \verb? <axis_definition ... ?} \\
\hline
domain\_definition & define the horizontal grids & {\scriptsize \verb? <domain_definition ... ?} \\
\hline
grid\_definition & define the 2D and 3D grids (association of an axis and a domain) & {\scriptsize \verb? <grid_definition ... ?} \\
\hline
\end{tabular}
\\

\subsubsection{Nesting XML files}

The XML file can be split into different parts to improve its readability and facilitate its use.
The inclusion of XML files into the main XML file can be done through the attribute src: \\
{\scriptsize \verb? <context src="./nemo_def.xml" /> ?}\\

\noindent In NEMO, by default, the field and domain definitions are done in 2 separate files:
{\scriptsize \tt
\begin{verbatim}
NEMOGCM/CONFIG/SHARED/field_def.xml
and
NEMOGCM/CONFIG/SHARED/domain_def.xml
\end{verbatim}
}
\noindent that are included in the main iodef.xml file through the following commands: \\
{\scriptsize \verb? <field_definition src="./field_def.xml" /> ? \\
\verb? <domain_definition src="./domain_def.xml" /> ? }


\subsubsection{Use of inheritance}

XML extensively uses the concept of inheritance. XML has a tree-based structure with a parent-child oriented relation: all children inherit attributes from their parent, but an attribute defined in a child replaces the inherited attribute value. Note that the special attribute ''id'' is never inherited. \\
\\
example 1: Direct inheritance.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <field_definition operation="average" >
     <field id="sst" />                     <!-- averaged      sst -->
     <field id="sss" operation="instant"/>  <!-- instantaneous sss -->
   </field_definition>
\end{verbatim}
}}\end{alltt}

The field ''sst'', which is part (or a child) of the field\_definition, will inherit the value ''average'' of the attribute ''operation'' from its parent. Note that a child can overwrite the attribute definition inherited from its parents.
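The inheritance rule of example 1 can be illustrated outside of XIOS itself. The following Python sketch (purely illustrative; it is not part of NEMO or XIOS) resolves the effective attributes of each field by letting children inherit from their ancestors, with locally defined attributes taking precedence and the special ''id'' attribute never inherited:

```python
import xml.etree.ElementTree as ET

XML = """
<field_definition operation="average">
    <field id="sst"/>
    <field id="sss" operation="instant"/>
</field_definition>
"""

def effective_attrs(root):
    """Map each id to its resolved attributes: inherited values come from
    the ancestors, local values take precedence, 'id' is never inherited."""
    resolved = {}

    def walk(elem, inherited):
        attrs = {k: v for k, v in inherited.items() if k != "id"}
        attrs.update(elem.attrib)          # local definitions override
        if "id" in elem.attrib:
            resolved[elem.attrib["id"]] = attrs
        for child in elem:
            walk(child, attrs)

    walk(root, {})
    return resolved

fields = effective_attrs(ET.fromstring(XML))
print(fields["sst"]["operation"])   # average  (inherited from the parent)
print(fields["sss"]["operation"])   # instant  (locally overridden)
```

Running the sketch on example 1 confirms that ''sst'' picks up ''average'' from its parent while ''sss'' keeps its own ''instant'' value.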
In example 1, the field ''sss'' will therefore output instantaneous values instead of averaged values. \\
\\
example 2: Inheritance by reference.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <field_definition>
     <field id="sst" long_name="sea surface temperature" />
     <field id="sss" long_name="sea surface salinity" />
   </field_definition>

   <file_definition>
     <file id="myfile" output_freq="1d" >
       <field field_ref="sst" />                            <!-- default def -->
       <field field_ref="sss" long_name="my description" /> <!-- overwrite   -->
     </file>
   </file_definition>
\end{verbatim}
}}\end{alltt}
A tag using a reference (e.g. field\_ref) inherits (and can overwrite, if needed) the attributes of the tag it refers to.

\subsubsection{Use of Groups}

Groups can be used for 2 purposes. Firstly, a group can be used to define common attributes to be shared by the elements of the group through inheritance. In the following example, we define a group of fields that will share a common grid ''grid\_T\_2D''.
Note that for the field ''toce'', we overwrite the grid definition inherited from the group by ''grid\_T\_3D''.
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <field_group id="grid_T" grid_ref="grid_T_2D">
    <field id="toce" long_name="temperature"             unit="degC" grid_ref="grid_T_3D"/>
    <field id="sst"  long_name="sea surface temperature" unit="degC" />
    <field id="sss"  long_name="sea surface salinity"    unit="psu"  />
    <field id="ssh"  long_name="sea surface height"      unit="m"    />
    ...
\end{verbatim}
}}\end{alltt}

Secondly, a group can be used to replace a list of elements. Several examples of groups of fields are proposed at the end of the file {\tt CONFIG/SHARED/field\_def.xml}.
For example, a short list of the usual variables related to the U grid:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <field_group id="groupU" >
    <field field_ref="uoce"  />
    <field field_ref="suoce" />
    <field field_ref="utau"  />
   </field_group>
\end{verbatim}
}}\end{alltt}
that can be directly included in a file through the following syntax:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <file id="myfile_U" output_freq="1d" >
    <field_group group_ref="groupU" />
    <field field_ref="uocetr_eff" />  <!-- add another field -->
   </file>
\end{verbatim}
}}\end{alltt}

\subsection{Detailed functionalities}

The file {\tt NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xml} provides several examples of the use of the new functionalities offered by the XML interface of XIOS.

\subsubsection{Define horizontal subdomains}
Horizontal subdomains are defined through the attributes zoom\_ibegin, zoom\_jbegin, zoom\_ni and zoom\_nj of the tag family domain. It must therefore be done in the domain part of the XML file. For example, in {\tt CONFIG/SHARED/domain\_def.xml}, we provide the following definition of a 5 by 5 box with the bottom left corner at point (10,10):
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <domain_group id="grid_T">
    <domain id="myzoom" zoom_ibegin="10" zoom_jbegin="10" zoom_ni="5" zoom_nj="5" />
\end{verbatim}
}}\end{alltt}
The use of this subdomain is done through the redefinition of the attribute domain\_ref of the tag family field.
For example:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <file id="myfile_vzoom" output_freq="1d" >
    <field field_ref="toce" domain_ref="myzoom"/>
   </file>
\end{verbatim}
}}\end{alltt}
Moorings are seen as an extreme case corresponding to a 1 by 1 subdomain. The Equatorial section and the TAO, RAMA and PIRATA moorings are already registered in the code and can therefore be outputted without taking care of their (i,j) position in the grid. These predefined domains can be activated by the use of specific domain\_ref values: ''EqT'', ''EqU'' or ''EqW'' for the equatorial sections, and the mooring position for TAO, RAMA and PIRATA followed by ''T'' (for example: ''8s137eT'', ''1.5s80.5eT'' ...)
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <file id="myfile_vzoom" output_freq="1d" >
    <field field_ref="toce" domain_ref="0n180wT"/>
   </file>
\end{verbatim}
}}\end{alltt}
Note that if the domain decomposition used in XIOS cuts the subdomain into several parts and if you use the ''multiple\_file'' type for your output files, you will end up with several files that you will need to rebuild using external tools (like ncpdq and ncrcat, \href{http://nco.sourceforge.net/nco.html#Concatenation}{see the nco manual}). We therefore advise using the ''one\_file'' type in this case.

\subsubsection{Define vertical zooms}
Vertical zooms are defined through the attributes zoom\_begin and zoom\_end of the tag family axis. It must therefore be done in the axis part of the XML file.
For example, in NEMOGCM/CONFIG/ORCA2\_LIM/iodef\_demo.xml, we provide the following example:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <axis_group id="deptht" long_name="Vertical T levels" unit="m" positive="down" >
    <axis id="deptht" />
    <axis id="deptht_myzoom" zoom_begin="1" zoom_end="10" />
\end{verbatim}
}}\end{alltt}
The use of this vertical zoom is done through the redefinition of the attribute axis\_ref of the tag family field. For example:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <file id="myfile_hzoom" output_freq="1d" >
    <field field_ref="toce" axis_ref="deptht_myzoom"/>
   </file>
\end{verbatim}
}}\end{alltt}

\subsubsection{Control of the output file names}

The output file names are defined by the attributes ''name'' and ''name\_suffix'' of the tag family file. For example:
\vspace{-20pt}
\begin{alltt} {{\scriptsize
\begin{verbatim}
   <file_group id="1d" output_freq="1d" name="myfile_1d" >
    <file id="myfileA" name_suffix="_AAA" > <!-- will create file "myfile_1d_AAA" -->
      ...
    </file>
    <file id="myfileB" name_suffix="_BBB" > <!-- will create file "myfile_1d_BBB" -->
      ...
    </file>
   </file_group>
\end{verbatim}
}}\end{alltt}
However, it is often very convenient to define the file name with the name of the experiment, the output file frequency and the dates of the beginning and the end of the simulation (information stored either in the namelist or in the XML file). To do so, we added the following rule: if the id of the tag file is ''fileN'' (where N = 1 to 99) or one of the predefined sections or moorings (see next subsection), the following placeholders in the name and the name\_suffix (which can be inherited) will be automatically replaced by:\\
\\
\begin{tabular}{|p{4cm}|p{8cm}|}
\hline
\centering placeholder string & automatically replaced by \\
\hline
\hline
\centering @expname@ & the experiment name (from cn\_exp in the namelist) \\
\hline
\centering @freq@ & output frequency (from attribute output\_freq) \\
\hline
\centering @startdate@ & starting date of the simulation (from nn\_date0 in the restart or the namelist). \verb?yyyymmdd? format \\
\hline
\centering @startdatefull@ & starting date of the simulation (from nn\_date0 in the restart or the namelist). \verb?yyyymmdd_hh:mm:ss? format \\
\hline
\centering @enddate@ & ending date of the simulation (from nn\_date0 and nn\_itend in the namelist). \verb?yyyymmdd? format \\
\hline
\centering @enddatefull@ & ending date of the simulation (from nn\_date0 and nn\_itend in the namelist). \verb?yyyymmdd_hh:mm:ss? format \\
\hline
\end{tabular}\\
\\

\noindent For example,
{{\scriptsize
\begin{verbatim}
   <file id="myfile_hzoom" name="myfile_@expname@_@startdate@_freq@freq@" output_freq="1d" >
\end{verbatim}
}}
\noindent with the namelist:
{{\scriptsize
\begin{verbatim}
   cn_exp      = "ORCA2"
   nn_date0    = 19891231
   ln_rstart   = .false.
508 \end{verbatim} 509 }} 510 \noindent will give the following file name radical: 511 {{\scriptsize 512 \begin{verbatim} 513 myfile_ORCA2_19891231_freq1d 514 \end{verbatim} 515 }} 516 517 \subsubsection{Other controls of the xml attributes from NEMO} 518 519 The values of some attributes are defined by subroutine calls within NEMO (calls to iom\_set\_domain\_attr, iom\_set\_axis\_attr and iom\_set\_field\_attr in iom.F90). Any definition given in the xml file will be overwritten. By convention, these attributes are defined to ''auto'' (for string) or ''0000'' (for integer) in the xml file (but this is not necessary). 520 521 Here is the list of these attributes:\\ 195 522 \\ 196 523 \begin{tabular}{|l|c|c|c|} … … 202 529 \multicolumn{2}{|c|}{field\_definition} & freq\_op & \np{rn\_rdt} \\ 203 530 \hline 204 \multicolumn{2}{|c|}{SBC} & freq\_op & \np{rn\_rdt} $\times$ \np{nn\_fsbc} \\ 205 \hline 206 1h, 2h, 3h, 4h, 6h, 12h & \_grid\_T, \_grid\_U, & name & filename defined by \\ 207 1d, 3d, 5d & \_grid\_V, \_grid\_W, & & a call to rou{dia\_nam} \\ 208 1m, 2m, 3m, 4m, 6m & \_icemod, \_ptrc\_T, & & following NEMO \\ 209 1y, 2y, 5y, 10y & \_diad\_T, \_scalar & & nomenclature \\ 531 \multicolumn{2}{|c|}{SBC} & freq\_op & \np{rn\_rdt} $\times$ \np{nn\_fsbc} \\ 532 \hline 533 \multicolumn{2}{|c|}{ptrc\_T} & freq\_op & \np{rn\_rdt} $\times$ \np{nn\_dttrc} \\ 534 \hline 535 \multicolumn{2}{|c|}{diad\_T} & freq\_op & \np{rn\_rdt} $\times$ \np{nn\_dttrc} \\ 210 536 \hline 211 537 \multicolumn{2}{|c|}{EqT, EqU, EqW} & jbegin, ni, & according to the grid \\ 212 538 \multicolumn{2}{|c|}{ } & name\_suffix & \\ 213 539 \hline 214 \multicolumn{2}{|c|}{TAO, RAMA and PIRATA moorings} & ibegin, jbegin,& according to the grid \\540 \multicolumn{2}{|c|}{TAO, RAMA and PIRATA moorings} & zoom\_ibegin, zoom\_jbegin, & according to the grid \\ 215 541 \multicolumn{2}{|c|}{ } & name\_suffix & \\ 216 542 \hline … … 218 544 219 545 220 \subsection{ Detailed functionalities}546 \subsection{XML 
reference tables} 221 547 222 548 \subsubsection{Tag list} 223 549 224 \begin{description} 225 226 \item[context]: define the model using the xml file. Id is the only attribute accepted. 227 Its value must be ''nemo'' or ''n\_nemo'' for the nth AGRIF zoom. Child of simulation tag. 228 229 \item[field]: define the field to be output. Accepted attributes are axis\_ref, description, enable, 230 freq\_op, grid\_ref, id (if child of field\_definition), level, operation, name, ref (if child of file), 231 unit, zoom\_ref. Child of field\_definition, file or group of fields tag. 232 233 \item[field\_definition]: definition of the part of the xml file corresponding to the field definition. 234 Accept the same attributes as field tag. Child of context tag. 235 236 \item[group]: define a group of file or field. Accept the same attributes as file or field. 237 238 \item[file]: define the output file's characteristics. Accepted attributes are description, enable, 239 output\_freq, output\_level, id, name, name\_suffix. Child of file\_definition or group of files tag. 240 241 \item[file\_definition]: definition of the part of the xml file corresponding to the file definition. 242 Accept the same attributes as file tag. Child of context tag. 243 244 \item[axis]: definition of the vertical axis. Accepted attributes are description, id, positive, size, unit. 245 Child of axis\_definition tag. 246 247 \item[axis\_definition]: definition of the part of the xml file corresponding to the vertical axis definition. 248 Accept the same attributes as axis tag. Child of context tag 249 250 \item[grid]: definition of the horizontal grid. Accepted attributes are description and id. 251 Child of axis\_definition tag. 252 253 \item[grid\_definition]: definition of the part of the xml file corresponding to the horizontal grid definition. 254 Accept the same attributes as grid tag. Child of context tag 255 256 \item[zoom]: definition of a subdomain of an horizontal grid. 
Accepted attributes are description, id, 257 i/jbegin, ni/j. Child of grid tag. 258 259 \end{description} 550 \begin{longtable}{|p{2.2cm}|p{2.5cm}|p{3.5cm}|p{2.2cm}|p{1.6cm}|} 551 \hline 552 tag name & 553 description & 554 accepted attribute & 555 child of & 556 parent of \endhead 557 \hline 558 simulation & 559 this tag is the root tag which encapsulates all the content of the xml file & 560 none & 561 none & 562 context \\ 563 \hline 564 context & 565 encapsulates parts of the xml file dedicated to different codes or different parts of a code & 566 id (''xios'', ''nemo'' or ''n\_nemo'' for the nth AGRIF zoom), src, time\_origin & 567 simulation & 568 all root tags: ... \_definition \\ 569 \hline 570 \hline 571 field\_definition & 572 encapsulates the definition of all the fields that can potentially be outputted & 573 axis\_ref, default\_value, domain\_ref, enabled, grid\_ref, level, operation, prec, src & 574 context & 575 field or field\_group \\ 576 \hline 577 field\_group & 578 encapsulates a group of fields & 579 axis\_ref, default\_value, domain\_ref, enabled, group\_ref, grid\_ref, id, level, operation, prec, src & 580 field\_definition, field\_group, file & 581 field or field\_group \\ 582 \hline 583 field & 584 define a specific field & 585 axis\_ref, default\_value, domain\_ref, enabled, field\_ref, grid\_ref, id, level, long\_name, name, operation, prec, standard\_name, unit & 586 field\_definition, field\_group, file & 587 none \\ 588 \hline 589 \hline 590 file\_definition & 591 encapsulates the definition of all the files that will be outputted & 592 enabled, min\_digits, name, name\_suffix, output\_level, split\_freq\_format, split\_freq, sync\_freq, type, src & 593 context & 594 file or file\_group \\ 595 \hline 596 file\_group & 597 encapsulates a group of files that will be outputted & 598 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_freq\_format, split\_freq, sync\_freq, type, src & 599 
file\_definition, file\_group & 600 file or file\_group \\ 601 \hline 602 file & 603 define the contents of a file to be outputted & 604 enabled, description, id, min\_digits, name, name\_suffix, output\_freq, output\_level, split\_freq\_format, split\_freq, sync\_freq, type, src & 605 file\_definition, file\_group & 606 field \\ 607 \hline 608 axis\_definition & 609 define all the vertical axes potentially used by the variables & 610 src & 611 context & 612 axis\_group, axis \\ 613 \hline 614 axis\_group & 615 encapsulates a group of vertical axes & 616 id, long\_name, positive, src, standard\_name, unit, zoom\_begin, zoom\_end, zoom\_size & 617 axis\_definition, axis\_group & 618 axis\_group, axis \\ 619 \hline 620 axis & 621 define a vertical axis & 622 id, long\_name, positive, src, standard\_name, unit, zoom\_begin, zoom\_end, zoom\_size & 623 axis\_definition, axis\_group & 624 none \\ 625 \hline 626 \hline 627 domain\_\-definition & 628 define all the horizontal domains potentially used by the variables & 629 src & 630 context & 631 domain\_\-group, domain \\ 632 \hline 633 domain\_group & 634 encapsulates a group of horizontal domains & 635 id, long\_name, src, zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj & 636 domain\_\-definition, domain\_group & 637 domain\_\-group, domain \\ 638 \hline 639 domain & 640 define a horizontal domain & 641 id, long\_name, src, zoom\_ibegin, zoom\_jbegin, zoom\_ni, zoom\_nj & 642 domain\_\-definition, domain\_group & 643 none \\ 644 \hline 645 \hline 646 grid\_definition & 647 define all the grids (association of a domain and/or an axis) potentially used by the variables & 648 src & 649 context & 650 grid\_group, grid \\ 651 \hline 652 grid\_group & 653 encapsulates a group of grids & 654 id, domain\_ref, axis\_ref & 655 grid\_definition, grid\_group & 656 grid\_group, grid \\ 657 \hline 658 grid & 659 define a grid & 660 id, domain\_ref, axis\_ref & 661 grid\_definition, grid\_group & 662 none \\ 663 \hline 664 \end{longtable}
260 665 261 666 262 667 \subsubsection{Attributes list} 263 668 264 Applied to a tag or a group of tags. 265 266 % table to be added ? 267 Another table, perhaps? 268 269 %%%% 270 271 Attribute 272 Applied to? 273 Definition 274 Comment 275 axis\_ref 276 field 277 String defining the vertical axis of the variable. It refers to the id of the vertical axis defined in the axis tag. 278 Use ''non'' if the variable has no vertical axis 279 280 %%%%%% 281 282 \begin{description} 283 284 \item[axis\_ref]: field attribute. String defining the vertical axis of the variable. 285 It refers to the id of the vertical axis defined in the axis tag. 286 Use ''none'' if the variable has no vertical axis 287 288 \item[description]: this attribute can be applied to all tags but it is used only with the field tag. 289 In this case, the value of description will be used to define, in the output netcdf file, 290 the attributes long\_name and standard\_name of the variable. 291 292 \item[enabled]: field and file attribute. Logical to switch on/off the output of a field or a file. 293 294 \item[freq\_op]: field attribute (automatically defined, see part 1.4 (''control of the xml attributes'')). 295 An integer defining the frequency in seconds at which NEMO is calling iom\_put for this variable. 296 It corresponds to the model time step (rn\_rdt in the namelist) except for the variables computed 297 at the frequency of the surface boundary condition (rn\_rdt ? nn\_fsbc in the namelist). 298 299 \item[grid\_ref]: field attribute. String defining the horizontal grid of the variable. 300 It refers to the id of the grid tag. 301 302 \item[ibegin]: zoom attribute. Integer defining the zoom starting point along x direction. 303 Automatically defined for TAO/RAMA/PIRATA moorings (see part 1.4). 304 305 \item[id]: exists for all tag. This is a string defining the name to a specific tag that will be used 306 later to refer to this tag. Tags of the same category must have different ids. 
307 308 \item[jbegin]: zoom attribute. Integer defining the zoom starting point along y direction. 309 Automatically defined for TAO/RAMA/PIRATA moorings and equatorial section (see part 1.4). 310 311 \item[level]: field attribute. Integer from 0 to 10 defining the output priority of a field. 312 See output\_level attribute definition 313 314 \item[operation]: field attribute. String defining the type of temporal operation to perform on a variable. 315 Possible choices are ''ave(X)'' for temporal mean, ''inst(X)'' for instantaneous, ''t\_min(X)'' for temporal min 316 and ''t\_max(X)'' for temporal max. 317 318 \item[output\_freq]: file attribute. Integer defining the operation frequency in seconds. 319 For example 86400 for daily mean. 320 321 \item[output\_level]: file attribute. Integer from 0 to 10 defining the output priority of variables in a file: 322 all variables listed in the file with a level smaller or equal to output\_level will be output. 323 Other variables won't be output even if they are listed in the file. 324 325 \item[positive]: axis attribute (always .FALSE.). Logical defining the vertical axis convention used 326 in \NEMO (positive downward). Define the attribute positive of the variable in the netcdf output file. 327 328 \item[prec]: field attribute. Integer defining the output precision. 329 Not implemented, we always output real4 arrays. 330 331 \item[name]: field or file attribute. String defining the name of a variable or a file. 332 If the name of a file is undefined, its id is used as a name. 2 files must have different names. 333 Files with specific ids will have their name automatically defined (see part 1.4). 334 Note that is name will be automatically completed by the cpu number (if needed) and ''.nc'' 335 336 \item[name\_suffix]: file attribute. String defining a suffix to be inserted after the name 337 and before the cpu number and the ''.nc'' termination. 
Files with specific ids have an 338 automatic definition of their suffix (see part 1.4). 339 340 \item[ni]: zoom attribute. Integer defining the zoom extent along x direction. 341 Automatically defined for equatorial sections (see part 1.4). 342 343 \item[nj]: zoom attribute. Integer defining the zoom extent along x direction. 344 345 \item[ref]: field attribute. String referring to the id of the field we want to add in a file. 346 347 \item[size]: axis attribute. use unknown... 348 349 \item[unit]: field attribute. String defining the unit of a variable and the associated 350 attribute in the netcdf output file. 351 352 \item[zoom\_ref]: field attribute. String defining the subdomain of data on which 353 the file should be written (to ouput data only in a limited area). 354 It refers to the id of a zoom defined in the zoom tag. 355 \end{description} 356 357 358 \subsection{IO\_SERVER} 359 360 \subsubsection{Attached or detached mode?} 361 362 Iom\_put is based on the io\_server developed by Yann Meurdesoif from IPSL 363 (see \href{http://forge.ipsl.jussieu.fr/ioserver/browser}{here} for the source code or 364 see its copy in NEMOGCM/EXTERNAL directory). 365 This server can be used in ''attached mode'' (as a library) or in ''detached mode'' 366 (as an external executable on n cpus). In attached mode, each cpu of NEMO will output 367 its own subdomain. In detached mode, the io\_server will gather data from NEMO 368 and output them split over n files with n the number of cpu dedicated to the io\_server. 369 370 \subsubsection{Control the io\_server: the namelist file xmlio\_server.def} 371 372 % 373 %Again, a small table might be more readable? 374 %Name 375 %Type 376 %Description 377 %Comment 378 %Using_server 379 %Logical 380 %Switch to use the server in attached or detached mode 381 %(.TRUE. corresponding to detached mode). 382 383 The control of the use of the io\_server is done through the namelist file of the io\_server 384 called xmlio\_server.def. 
385 386 \textbf{using\_server}: logical, switch to use the server in attached or detached mode 387 (.TRUE. corresponding to detached mode). 388 389 \textbf{using\_oasis}: logical, set to .TRUE. if NEMO is used in coupled mode. 390 391 \textbf{client\_id} = ''oceanx'' : character, used only in coupled mode. 392 Specify the id used in OASIS to refer to NEMO. The same id must be used to refer to NEMO 393 in the \$NBMODEL part of OASIS namcouple in the call of prim\_init\_comp\_proto in cpl\_oasis3f90 394 395 \textbf{server\_id} = ''ionemo'' : character, used only in coupled mode. 396 Specify the id used in OASIS to refer to the IO\_SERVER when used in detached mode. 397 Use the same id to refer to the io\_server in the \$NBMODEL part of OASIS namcouple. 398 399 \textbf{global\_mpi\_buffer\_size}: integer; define the size in Mb of the MPI buffer used by the io\_server. 400 401 \subsubsection{Number of cpu used by the io\_server in detached mode} 402 403 The number of cpu used by the io\_server is specified only when launching the model. 404 Here is an example of 2 cpus for the io\_server and 6 cpu for opa using mpirun: 405 406 \texttt{ -p 2 -e ./ioserver} 407 408 \texttt{ -p 6 -e ./opa } 409 410 411 \subsection{Practical issues} 412 413 \subsubsection{Add your own outputs} 414 415 It is very easy to add you own outputs with iom\_put. 4 points must be followed. 416 \begin{description} 417 \item[1-] in NEMO code, add a \\ 418 \texttt{ CALL iom\_put( 'identifier', array ) } \\ 419 where you want to output a 2D or 3D array. 420 421 \item[2-] don't forget to add \\ 422 \texttt{ USE iom ! I/O manager library } \\ 423 in the list of used modules in the upper part of your module. 424 425 \item[3-] in the file\_definition part of the xml file, add the definition of your variable using the same identifier you used in the f90 code. 426 \vspace{-20pt} 427 \begin{alltt} {{\scriptsize 428 \begin{verbatim} 429 <field_definition> 430 ... 
431 <field id="identifier" description="blabla" /> 432 ... 433 </field_definition> 434 \end{verbatim} 435 }}\end{alltt} 436 attributes axis\_ref and grid\_ref must be consistent with the size of the array to pass to iom\_put. 437 if your array is computed within the surface module each nn\_fsbc time\_step, 438 add the field definition within the group defined with the id ''SBC'': $<$group id=''SBC''...$>$ 439 440 \item[4-] add your field in one of the output files \\ 441 \vspace{-20pt} 442 \begin{alltt} {{\scriptsize 443 \begin{verbatim} 444 <file id="file_1" .../> 445 ... 446 <field ref="identifier" /> 447 ... 448 </file> 449 \end{verbatim} 450 }}\end{alltt} 451 452 \end{description} 453 454 \subsubsection{Several time axes in the output file} 455 456 If your output file contains variables with different operations (see operation definition), 457 IOIPSL will create one specific time axis for each operation. Note that inst(X) will have 458 a time axis corresponding to the end each output period whereas all other operators 459 will have a time axis centred in the middle of the output periods. 460 461 \subsubsection{Error/bug messages from IOIPSL} 462 463 If you get the following error in the standard output file: 464 \vspace{-20pt} 465 \begin{alltt} {{\scriptsize 466 \begin{verbatim} 467 FATAL ERROR FROM ROUTINE flio_dom_set 468 --> too many domains simultaneously defined 469 --> please unset useless domains 470 --> by calling flio_dom_unset 471 \end{verbatim} 472 }}\end{alltt} 473 474 You must increase the value of dom\_max\_nb in fliocom.f90 (multiply it by 10 for example). 475 476 If you mix, in the same file, variables with different freq\_op (see definition above), 477 like for example variables from the surface module with other variables, 478 IOIPSL will print in the standard output file warning messages saying there may be a bug. 
479 \vspace{-20pt} 480 \begin{alltt} {{\scriptsize 481 \begin{verbatim} 482 WARNING FROM ROUTINE histvar_seq 483 --> There were 10 errors in the learned sequence of variables 484 --> for file 4 485 --> This looks like a bug, please report it. 486 \end{verbatim} 487 }}\end{alltt} 488 489 Don't worry, there is no bug, everything is properly working! 490 491 % } %end \gmcomment 669 \begin{longtable}{|p{2.2cm}|p{4cm}|p{3.8cm}|p{2cm}|} 670 \hline 671 attribute name & 672 description & 673 example & 674 accepted by \endhead 675 \hline 676 axis\_ref & 677 refers to the id of a vertical axis & 678 axis\_ref="deptht" & 679 field, grid families \\ 680 \hline 681 enabled & 682 switch on/off the output of a field or a file & 683 enabled=".TRUE." & 684 field, file families \\ 685 \hline 686 default\_value & 687 missing\_value definition & 688 default\_value="1.e20" & 689 field family \\ 690 \hline 691 description & 692 just for information, not used & 693 description="ocean T grid variables" & 694 all tags \\ 695 \hline 696 domain\_ref & 697 refers to the id of a domain & 698 domain\_ref="grid\_T" & 699 field or grid families \\ 700 \hline 701 field\_ref & 702 id of the field we want to add in a file & 703 field\_ref="toce" & 704 field \\ 705 \hline 706 grid\_ref & 707 refers to the id of a grid & 708 grid\_ref="grid\_T\_2D" & 709 field family \\ 710 \hline 711 group\_ref & 712 refer to a group of variables & 713 group\_ref="mooring" & 714 field\_group \\ 715 \hline 716 id & 717 allow to identify a tag & 718 id="nemo" & 719 accepted by all tags except simulation \\ 720 \hline 721 level & 722 output priority of a field: 0 (high) to 10 (low)& 723 level="1" & 724 field family \\ 725 \hline 726 long\_name & 727 define the long\_name attribute in the NetCDF file & 728 long\_name="Vertical T levels" & 729 field \\ 730 \hline 731 min\_digits & 732 specify the minimum of digits used in the core number in the name of the NetCDF file & 733 min\_digits="4" & 734 file family \\ 735 \hline 
736 name & 737 name of a variable or a file. If the name of a file is undefined, its id is used as a name & 738 name="tos" & 739 field or file families \\ 740 \hline 741 name\_suffix & 742 suffix to be inserted after the name and before the cpu number and the ''.nc'' termination of a file & 743 name\_suffix="\_myzoom" & 744 file family \\ 745 \hline 746 attribute name & 747 description & 748 example & 749 accepted by \\ 750 \hline 751 \hline 752 operation & 753 type of temporal operation: average, accumulate, instantaneous, min, max and once & 754 operation="average" & 755 field family \\ 756 \hline 757 output\_freq & 758 operation frequency. Units can be ts (timestep), y, mo, d, h, mi, s. & 759 output\_freq="1d12h" & 760 file family \\ 761 \hline 762 output\_level & 763 output priority of variables in a file: 0 (high) to 10 (low). All variables listed in the file with a level smaller or equal to output\_level will be output. Other variables won't be output even if they are listed in the file. & 764 output\_level="10"& 765 file family \\ 766 \hline 767 positive & 768 convention used for the orientation of the vertical axis (positive downward in \NEMO). & 769 positive="down" & 770 axis family \\ 771 \hline 772 prec & 773 output precision: real 4 or real 8 & 774 prec="4" & 775 field family \\ 776 \hline 777 split\_freq & 778 frequency at which to temporally split output files. Units can be ts (timestep), y, mo, d, h, mi, s. Useful for long runs to prevent over-sized output files.& 779 split\_freq="1mo" & 780 file family \\ 781 \hline 782 split\_freq\-\_format & 783 date format used in the name of temporally split output files.
Can be specified 784 using the following syntaxes: \%y, \%mo, \%d, \%h, \%mi and \%s & 785 split\_freq\_format= "\%y\%mo\%d" & 786 file family \\ 787 \hline 788 src & 789 allow to include a file & 790 src="./field\_def.xml" & 791 accepted by all tags except simulation \\ 792 \hline 793 standard\_name & 794 define the standard\_name attribute in the NetCDF file & 795 standard\_name= "Eastward\_Sea\_Ice\_Transport" & 796 field \\ 797 \hline 798 sync\_freq & 799 NetCDF file synchronization frequency (update of the time\_counter). Units can be ts (timestep), y, mo, d, h, mi, s. & 800 sync\_freq="10d" & 801 file family \\ 802 \hline 803 attribute name & 804 description & 805 example & 806 accepted by \\ 807 \hline 808 \hline 809 time\_origin & 810 specify the origin of the time counter & 811 time\_origin="1900-01-01 00:00:00"& 812 context \\ 813 \hline 814 type (1)& 815 specify if the output files are to be split spatially (multiple\_file) or not (one\_file) & 816 type="multiple\_file" & 817 file family \\ 818 \hline 819 type (2)& 820 define the type of a variable tag & 821 type="boolean" & 822 variable \\ 823 \hline 824 unit & 825 unit of a variable or the vertical axis & 826 unit="m" & 827 field and axis families \\ 828 \hline 829 zoom\_ibegin & 830 starting point along x direction of the zoom. Automatically defined for TAO/RAMA/PIRATA moorings & 831 zoom\_ibegin="1" & 832 domain family \\ 833 \hline 834 zoom\_jbegin & 835 starting point along y direction of the zoom. Automatically defined for TAO/RAMA/PIRATA moorings & 836 zoom\_jbegin="1" & 837 domain family \\ 838 \hline 839 zoom\_ni & 840 zoom extent along x direction & 841 zoom\_ni="1" & 842 domain family \\ 843 \hline 844 zoom\_nj & 845 zoom extent along y direction & 846 zoom\_nj="1" & 847 domain family \\ 848 \hline 849 \end{longtable} 850 492 851 493 852 … … 544 903 domain size in any dimension. The algorithm used is: 545 904 905 \vspace{-20pt} 546 906 \begin{alltt} {{\scriptsize 547 907 \begin{verbatim} -
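The @...@ file-name placeholder rule documented in the chapter above can be sketched in a few lines of Python. This is an illustrative model only: NEMO performs the substitution in Fortran (in iom.F90), and the function name here is hypothetical. It reproduces the documented example, where name="myfile_@expname@_@startdate@_freq@freq@" with cn_exp = "ORCA2" and nn_date0 = 19891231 yields the radical myfile_ORCA2_19891231_freq1d.

```python
# Illustrative sketch of the @placeholder@ expansion described in the
# documentation (NOT NEMO's actual Fortran implementation).

def expand_name(template, cn_exp, startdate, freq):
    """Expand the documented placeholders in a <file> name attribute."""
    subs = {
        "@expname@": cn_exp,       # experiment name, cn_exp in the namelist
        "@startdate@": startdate,  # nn_date0, yyyymmdd format
        "@freq@": freq,            # from the output_freq attribute
    }
    for placeholder, value in subs.items():
        template = template.replace(placeholder, value)
    return template

name = expand_name("myfile_@expname@_@startdate@_freq@freq@",
                   cn_exp="ORCA2", startdate="19891231", freq="1d")
print(name)  # myfile_ORCA2_19891231_freq1d
```

The same mechanism covers @startdatefull@, @enddate@ and @enddatefull@; only the three placeholders needed for the documented example are shown here.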
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/ARCH/arch-PW7_METO.fcm
r3834 r4193 15 15 16 16 17 %NCDF_INC -I/home/nwp/ofrd/share/netcdf-3.6.0-p1_ec/include 18 %NCDF_LIB -L /home/nwp/ofrd/share/netcdf-3.6.0-p1_ec/lib -lnetcdf 17 %NCDF_INC -I/home/nwp/nm/frrj/lib/MTOOLS/include 18 %NCDF_LIB -L/home/nwp/nm/frrj/lib/MTOOLS/lib -lnetcdf -lhdf5 -lhdf5_hl -lhdf5_fortran -lz 19 %XIOS_ROOT /home/cr/ocean/hadcv/xios_r445 19 20 %FC mpxlf90_r 20 %FCFLAGS -qrealsize=8 -qextname -qsuffix=f=f90 -qarch=pwr7 -qtune=pwr7 -NS32768 -I/home/nwp/ ofrd/share/netcdf-3.6.0-p1_ex/include -g -O3 -qnostrict21 %FFLAGS -qrealsize=8 -qextname -qsuffix=f=f90 -qarch=pwr7 -qtune=pwr7 -NS32768 -I/home/nwp/ ofrd/share/netcdf-3.6.0-p1_ex/include -g -O3 -qnostrict22 %LD mp xlf90_r23 %LDFLAGS - L /home/nwp/ofrd/share/netcdf-3.6.0-p1_ec/lib -lnetcdf -L/projects/um1/lib -lsig -O3 -L MASS21 %FCFLAGS -qrealsize=8 -qextname -qsuffix=f=f90 -qarch=pwr7 -qtune=pwr7 -NS32768 -I/home/nwp/nm/frrj/lib/MTOOLS/include -g -O3 -qnostrict 22 %FFLAGS -qrealsize=8 -qextname -qsuffix=f=f90 -qarch=pwr7 -qtune=pwr7 -NS32768 -I/home/nwp/nm/frrj/lib/MTOOLS/include -g -O3 -qnostrict 23 %LD mpCC_r 24 %LDFLAGS -lxlf90 -L/home/nwp/nm/frrj/lib/MTOOLS/lib -lnetcdf -L/projects/um1/lib -lsig -O3 -L MASS 24 25 %FPPFLAGS -E -P -traditional -I/opt/ibmhpc/pecurrent/ppe.poe/include -I/usr/lpp/ppe.poe/include/thread64 25 26 %AR ar 26 27 %ARFLAGS rs 27 28 %MK gmake 28 %USER_INC %NCDF_INC 29 %USER_LIB %NCDF_LIB 29 %USER_INC %NCDF_INC -I%XIOS_ROOT/inc 30 %USER_LIB %NCDF_LIB -L%XIOS_ROOT/lib -lxios -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/AMM12/EXP00/iodef.xml
r3771 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="10d" min_digits="4"> 24 24 25 <file_group id="1ts" output_freq="1ts" output_level="10" enabled=".TRUE."/> <!-- 1 time step files --> 26 25 27 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> 26 28 <file_group id="2h" output_freq="2h" output_level="10" enabled=".TRUE."/> <!-- 2h files --> … … 60 62 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 61 63 <axis id="nfloat" long_name="Float number" unit="-" /> 64 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 62 65 </axis_definition> 63 66 -
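The new file_definition name above uses the @enddate@ placeholder, which the documentation says is derived from nn_date0 and nn_itend. A minimal sketch of that computation follows; the values (rn_rdt = 5760 s, a run of 480 steps starting at step 1) and the exact step-counting convention are assumptions for illustration, not taken from this changeset.

```python
# Illustrative derivation of @enddate@ from namelist values (assumed
# convention: end date = start date + number of steps * time step length).
from datetime import datetime, timedelta

def enddate(nn_date0, nn_it000, nn_itend, rn_rdt):
    """Return the run end date in yyyymmdd format."""
    start = datetime.strptime(str(nn_date0), "%Y%m%d")
    nsteps = nn_itend - nn_it000 + 1
    end = start + timedelta(seconds=nsteps * rn_rdt)
    return end.strftime("%Y%m%d")

# 480 steps of 5760 s = 32 days after 1989-12-31:
print(enddate(19891231, 1, 480, 5760.0))  # 19900201
```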
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/AMM12/EXP00/namelist
r3795 r4193 35 35 ! = 1 nn_date0 read in namelist ; nn_it000 : check consistancy between namelist and restart 36 36 ! = 2 nn_date0 read in restart ; nn_it000 : check consistancy between namelist and restart 37 cn_ocerst_in = " restart" ! suffix of ocean restart name (input)37 cn_ocerst_in = "amm12_restart_oce" ! suffix of ocean restart name (input) 38 38 cn_ocerst_out = "restart" ! suffix of ocean restart name (output) 39 39 nn_istate = 1 ! output the initial state (1) or not (0) -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/AMM12/cpp_AMM12.fcm
r3833 r4193 1 bld::tool::fppkeys key_bdy key_tide key_vectopt_loop key_amm_12km key_dynspg_ts key_ldfslp key_zdfgls key_vvl key_diainstant key_mpp_mpi key_iomput1 bld::tool::fppkeys key_bdy key_tide key_vectopt_loop key_amm_12km key_dynspg_ts key_ldfslp key_zdfgls key_vvl key_diainstant key_mpp_mpi -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/GYRE/EXP00/iodef.xml
r3866 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4"> 24 24 25 <file_group id="1ts" output_freq="1ts" output_level="10" enabled=".TRUE."/> <!-- 1 time step files --> 26 25 27 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> 26 28 <file_group id="2h" output_freq="2h" output_level="10" enabled=".TRUE."/> <!-- 2h files --> … … 33 35 <file_group id="5d" output_freq="5d" output_level="10" enabled=".TRUE."> <!-- 5d files --> 34 36 35 <file id="5d_grid_T" name="auto" description="ocean T grid variables" >37 <file id="file1" name_suffix="_grid_T" description="ocean T grid variables" > 36 38 <field field_ref="toce" name="votemper" /> 37 39 <field field_ref="soce" name="vosaline" /> … … 47 49 </file> 48 50 49 <file id="5d_grid_U" name="auto" description="ocean U grid variables" >51 <file id="file2" name_suffix="_grid_U" description="ocean U grid variables" > 50 52 <field field_ref="uoce" name="vozocrtx" /> 51 53 <field field_ref="utau" name="sozotaux" /> 52 54 </file> 53 55 54 <file id="5d_grid_V" name="auto" description="ocean V grid variables" >56 <file id="file3" name_suffix="_grid_V" description="ocean V grid variables" > 55 57 <field field_ref="voce" name="vomecrty" /> 56 58 <field field_ref="vtau" name="sometauy" /> 57 59 </file> 58 60 59 <file id="5d_grid_W" name="auto" description="ocean W grid variables" >61 <file id="file4" name_suffix="_grid_W" description="ocean W grid variables" > 60 62 <field field_ref="woce" name="vovecrtz" /> 61 63 <field field_ref="avt" name="votkeavt" /> … … 90 92 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 91 93 <axis id="nfloat" long_name="Float number" unit="-" /> 94 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 92 95 </axis_definition> 93 96 -
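The output_freq and sync_freq values appearing in these iodef.xml files ("1ts", "1d", "5d", "10d", ...) use the duration syntax listed in the attribute table above (ts, y, mo, d, h, mi, s). A toy parser makes the syntax concrete; it is illustration only, since XIOS handles months and years calendar-aware, whereas the 30/365-day lengths and the 5760 s default time step here are assumptions.

```python
# Toy parser for the XIOS-style duration strings used by output_freq,
# split_freq and sync_freq. NOT the real XIOS parser.
import re

UNIT_SECONDS = {"y": 365 * 86400, "mo": 30 * 86400, "d": 86400,
                "h": 3600, "mi": 60, "s": 1}

def parse_freq(freq, timestep=5760.0):
    """Return the duration in seconds; 'ts' counts model time steps."""
    total = 0.0
    for value, unit in re.findall(r"(\d+)(ts|mo|mi|y|d|h|s)", freq):
        total += int(value) * (timestep if unit == "ts" else UNIT_SECONDS[unit])
    return total

print(parse_freq("1d12h"))  # 129600.0
```

Units can be concatenated, so output_freq="1d12h" from the attribute table means one file record every one and a half days.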
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/GYRE_BFM/EXP00/iodef.xml
r3771 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="10d" min_digits="4"> 24 24 25 <file_group id="1ts" output_freq="1ts" output_level="10" enabled=".TRUE."/> <!-- 1 time step files --> 26 25 27 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> 26 28 <file_group id="2h" output_freq="2h" output_level="10" enabled=".TRUE."/> <!-- 2h files --> … … 60 62 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 61 63 <axis id="nfloat" long_name="Float number" unit="-" /> 64 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 62 65 </axis_definition> 63 66 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/GYRE_PISCES/EXP00/iodef.xml
r3771 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="10d" min_digits="4"> 24 24 25 <file_group id="1ts" output_freq="1ts" output_level="10" enabled=".TRUE."/> <!-- 1 time step files --> 26 25 27 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> 26 28 <file_group id="2h" output_freq="2h" output_level="10" enabled=".TRUE."/> <!-- 2h files --> … … 33 35 <file_group id="5d" output_freq="5d" output_level="10" enabled=".TRUE."> <!-- 5d files --> 34 36 35 <file id=" 5d_grid_T" name="auto" description="ocean T grid variables" >37 <file id="file1" name_suffix="_grid_T" description="ocean T grid variables" > 36 38 <field field_ref="toce" name="votemper" /> 37 39 <field field_ref="soce" name="vosaline" /> … … 47 49 </file> 48 50 49 <file id=" 5d_grid_U" name="auto" description="ocean U grid variables" >51 <file id="file2" name_suffix="_grid_U" description="ocean U grid variables" > 50 52 <field field_ref="uoce" name="vozocrtx" /> 51 53 <field field_ref="utau" name="sozotaux" /> 52 54 </file> 53 55 54 <file id=" 5d_grid_V" name="auto" description="ocean V grid variables" >56 <file id="file3" name_suffix="_grid_V" description="ocean V grid variables" > 55 57 <field field_ref="voce" name="vomecrty" /> 56 58 <field field_ref="vtau" name="sometauy" /> 57 59 </file> 58 60 59 <file id=" 5d_grid_W" name="auto" description="ocean W grid variables" >61 <file id="file4" name_suffix="_grid_W" description="ocean W grid variables" > 60 62 <field field_ref="woce" name="vovecrtz" /> 61 63 <field field_ref="avt" name="votkeavt" /> … … 63 65 </file> 64 66 65 <file id=" 5d_ptrc_T" name="auto" description="lobster sms variables" >67 <file id="file5" name="_ptrc_T" description="lobster sms variables" > 66 68 <field field_ref="DET" /> 67 69 <field field_ref="ZOO" /> … … 82 84 <file_group id="1y" output_freq="1y" 
output_level="10" enabled=".TRUE."> <!-- real yearly files --> 83 85 84 <file id=" 1y_diad_T" name="auto" description="additional lobster diagnostics" >86 <file id="file6" name_suffix="_diad_T" description="additional lobster diagnostics" > 85 87 <field field_ref="FNO3PHY" /> 86 88 <field field_ref="FNH4PHY" /> … … 126 128 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 127 129 <axis id="nfloat" long_name="Float number" unit="-" /> 130 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 128 131 </axis_definition> 129 132 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/1_namelist
r3795 r4193 27 27 cn_exp = "Agulhas" ! experience name 28 28 nn_it000 = 1 ! first time step 29 nn_itend = 10950 ! last time step29 nn_itend = 480 ! last time step 30 30 nn_date0 = 010101 ! date at nit_0000 (format yyyymmdd) used if ln_rstart=F or (ln_rstart=T and nn_rstctl=0 or 1) 31 31 nn_leapy = 0 ! Leap year calendar (1) or not (0) … … 114 114 sn_tem = 'data_1m_potential_temperature_nomask', -1,'votemper', .true. , .true., 'yearly' , ' ' , ' ' 115 115 sn_sal = 'data_1m_salinity_nomask' , -1,'vosaline', .true. , .true., 'yearly' , '' , ' ' 116 cn_dir = './' ! root directory for the location of the runoff files 117 ln_tsd_init = .true. ! Initialisation of ocean T & S with T &S input data (T) or not (F) 118 ln_tsd_tradmp = .false. ! damping of ocean T & S toward T &S input data (T) or not (F) 116 119 / 117 120 !!====================================================================== -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_ar5.xml
r3771 r4193 26 26 --> 27 27 28 <file_definition type="multiple_file" output_level="10" sync_freq="1d" min_digits="4">28 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" output_level="10" sync_freq="2mo" min_digits="4"> 29 29 <!-- 30 30 +++++++++++++++++++++++++++++++++++++++++++++++ daily ++++++++++++++++++++++++++++++++++++++++++++++++++ 31 31 --> 32 32 <file_group id="1d" output_freq="1d" enabled=".TRUE."> <!-- 1d files --> 33 <file id=" 1d_grid_T" name="auto" name_suffix="_table2.2" > <!-- grid T -->33 <file id="file1" name_suffix="_grid_T_table2.2" > <!-- grid T --> 34 34 <field field_ref="sst" name='tos' long_name="sea_surface_temperature" level="2" /> 35 35 <field field_ref="sst2" name='tossq' long_name="square_of_sea_surface_temperature" level="2" /> … … 43 43 <!-- 44 44 .............................................. grid T ................................................. 45 --> 46 <file_group id="1m_grid_T" name="auto" > <!-- grid T --> 47 48 <file id="1m grid_T table 2.2" name_suffix="_table2.2" > 49 <field field_ref="botpres" name="pbo" long_name="sea_water_pressure_at_sea_floor" /> 50 <!-- pso : sea_water_pressure_at_sea_water_surface = 0 --> 51 <field field_ref="ssh" name="zos" long_name="sea_surface_height_above_geoid" /> 52 <field field_ref="ssh2" name="zossq" long_name="square_of_sea_surface_height_above_geoid" level="2" /> 53 <!-- masscello : sea_water_mass_per_unit_area = cellthc*rau0 no time changes --> 54 <field field_ref="cellthc" name="thkcello" long_name="cell_thickness" /> <!-- no time changes --> 55 <field field_ref="toce" name="thetao" long_name="sea_water_potential_temperature" /> 56 <field field_ref="sst" name="tos" long_name="sea_surface_temperature" level="1" /> 57 <field field_ref="sst2" name="tossq" long_name="square_of_sea_surface_temperature" level="2" /> 58 <field field_ref="soce" name="so" long_name="sea_water_salinity" /> 59 <field field_ref="sss" name="sos" long_name="sea_surface_salinity" 
level="1" /> 60 <field field_ref="rhop" name="rhopoto" long_name="sea_water_potential_density" level="2" /> 61 <!-- no agessc : sea_water_age_since_surface_contact --> 62 <!-- no cfc11 : moles_per_unit_mass_of_cfc11_in_sea_water --> 63 <!-- msftbarot : ocean_barotropic_mass_streamfunction : offline --> 64 <!-- mlotst : ocean_mixed_layer_thickness_defined_by_sigma_t : must be done offline --> 65 <!-- mlotstsq : square_of_ocean_mixed_layer_thickness_defined_by_sigma_t : must be done offline --> 66 <field field_ref="mldkz5" name='omlmax' long_name="ocean_mixed_layer_thickness_defined_by_mixing_scheme" level="2" operation="maximum" /> 67 </file> 68 69 <file id="1m grid_T table 2.5" name_suffix="_table2.5" > 70 <field field_ref="rain" name="pr" long_name="rainfall_flux" level="1" /> 71 <field field_ref="snow_ao_cea" name="prsn" long_name="snowfall_flux" level="1" /> 72 <field field_ref="evap_ao_cea" name="evs" long_name="water_evaporation_flux" level="1" /> 73 <field field_ref="runoffs" name="friver" long_name="water_flux_into_sea_water_from_rivers" level="1" /> 74 <field field_ref="calving" name="ficeberg" long_name="water_flux_into_sea_water_from_icebergs" level="1" /> 75 <field field_ref="isnwmlt_cea" name="fsitherm" long_name="water_flux_into_sea_water_due_to_sea_ice_thermodynamics" level="1" /> 76 <field field_ref="empmr" name="wfo" long_name="water_flux_into_sea_water" level="1" /> 77 <!-- wfonocorr : water_flux_into_sea_water_without_flux_correction : emp - erp --> 78 <field field_ref="erp" name="wfcorr" long_name="water_flux_correction" level="1" /> <!-- usually = 0 --> 79 </file> 80 81 <file id="1m grid_T table 2.6" name_suffix="_table2.6" > 82 <!-- vsfpr : virtual_salt_flux_into_sea_water_due_to_rainfall = 0 --> 83 <!-- vsfevap : virtual_salt_flux_into_sea_water_due_to_evaporation = 0 --> 84 <!-- vsfriver : virtual_salt_flux_into_sea_water_from_rivers = 0 --> 85 <field field_ref="fsal_virt_cea" name="vsfsit" 
long_name="virtual_salt_flux_into_sea_water_due_to_sea_ice_thermodynamics" level="1" /> 86 <!-- vsf : virtual_salt_flux_into_sea_water = fsal_virtual + fsal_real --> 87 <!-- wfcorr : virtual_salt_flux_correction = 0 --> 88 <field field_ref="fsal_virt_cea" name="sfdsi" long_name="downward_sea_ice_basal_salt_flux" level="1" /> 89 <!-- sfriver : salt_flux_into_sea_water_from_rivers = 0 --> 90 </file> 91 92 <file id="1m grid_T table 2.7" name_suffix="_table2.7" > 93 <!-- hfgeou : upward_geothermal_heat_flux_at_sea_floor : cte, see nambbc and trabbc.F90 --> 94 <field field_ref="hflx_rain_cea" name="hfrainds" long_name="temperature_flux_due_to_rainfall_expressed_as_heat_flux_into_sea_water" level="1" /> 95 <field field_ref="hflx_evap_cea" name="hfevapds" long_name="temperature_flux_due_to_evaporation_expressed_as_heat_flux_out_of_sea_water" level="1" /> 96 <field field_ref="hflx_rnf_cea" name="hfrunoffds" long_name="temperature_flux_due_to_runoff_expressed_as_heat_flux_into_sea_water" level="1" /> 97 <field field_ref="hflx_snow_cea" name="hfsnthermds" long_name="heat_flux_into_sea_water_due_to_snow_thermodynamics" level="1" /> 98 <field field_ref="hflx_ice_cea" name="hfsithermds" long_name="heat_flux_into_sea_water_due_to_sea_ice_thermodynamics" level="1" /> 99 <field field_ref="hflx_cal_cea" name="hfibthermds" long_name="heat_flux_into_sea_water_due_to_iceberg_thermodynamics" level="1" /> 100 <!-- rlds : surface_net_downward_longwave_flux : not available --> 101 <!-- hfls : surface_downward_latent_heat_flux : not available --> 102 <!-- hfss : surface_downward_sensible_heat_flux: not available --> 103 <field field_ref="qns" name="nshfls" long_name="surface_net_downward_non_solar_flux" level="1" /> 104 <field field_ref="qsr" name="rsntds" long_name="surface_net_downward_shortwave_flux" level="1" /> 105 <field field_ref="qsr3d" name="rsds" long_name="downwelling_shortwave_flux_in_sea_water" level="1" /> 106 <field field_ref="qrp" name="hfcorr" 
long_name="heat_flux_correction" level="1" /> 107 </file> 108 </file_group> 45 --> 46 <file id="file2" name_suffix="_grid_T_table2.2" > 47 <field field_ref="botpres" name="pbo" long_name="sea_water_pressure_at_sea_floor" /> 48 <!-- pso : sea_water_pressure_at_sea_water_surface = 0 --> 49 <field field_ref="ssh" name="zos" long_name="sea_surface_height_above_geoid" /> 50 <field field_ref="ssh2" name="zossq" long_name="square_of_sea_surface_height_above_geoid" level="2" /> 51 <!-- masscello : sea_water_mass_per_unit_area = cellthc*rau0 no time changes --> 52 <field field_ref="cellthc" name="thkcello" long_name="cell_thickness" /> <!-- no time changes --> 53 <field field_ref="toce" name="thetao" long_name="sea_water_potential_temperature" /> 54 <field field_ref="sst" name="tos" long_name="sea_surface_temperature" level="1" /> 55 <field field_ref="sst2" name="tossq" long_name="square_of_sea_surface_temperature" level="2" /> 56 <field field_ref="soce" name="so" long_name="sea_water_salinity" /> 57 <field field_ref="sss" name="sos" long_name="sea_surface_salinity" level="1" /> 58 <field field_ref="rhop" name="rhopoto" long_name="sea_water_potential_density" level="2" /> 59 <!-- no agessc : sea_water_age_since_surface_contact --> 60 <!-- no cfc11 : moles_per_unit_mass_of_cfc11_in_sea_water --> 61 <!-- msftbarot : ocean_barotropic_mass_streamfunction : offline --> 62 <!-- mlotst : ocean_mixed_layer_thickness_defined_by_sigma_t : must be done offline --> 63 <!-- mlotstsq : square_of_ocean_mixed_layer_thickness_defined_by_sigma_t : must be done offline --> 64 <field field_ref="mldkz5" name='omlmax' long_name="ocean_mixed_layer_thickness_defined_by_mixing_scheme" level="2" operation="maximum" /> 65 </file> 66 67 <file id="file3" name_suffix="_grid_T_table2.5" > 68 <field field_ref="rain" name="pr" long_name="rainfall_flux" level="1" /> 69 <field field_ref="snow_ao_cea" name="prsn" long_name="snowfall_flux" level="1" /> 70 <field field_ref="evap_ao_cea" name="evs" 
long_name="water_evaporation_flux" level="1" /> 71 <field field_ref="runoffs" name="friver" long_name="water_flux_into_sea_water_from_rivers" level="1" /> 72 <field field_ref="calving" name="ficeberg" long_name="water_flux_into_sea_water_from_icebergs" level="1" /> 73 <field field_ref="isnwmlt_cea" name="fsitherm" long_name="water_flux_into_sea_water_due_to_sea_ice_thermodynamics" level="1" /> 74 <field field_ref="empmr" name="wfo" long_name="water_flux_into_sea_water" level="1" /> 75 <!-- wfonocorr : water_flux_into_sea_water_without_flux_correction : emp - erp --> 76 <field field_ref="erp" name="wfcorr" long_name="water_flux_correction" level="1" /> <!-- usually = 0 --> 77 </file> 78 79 <file id="file4" name_suffix="_grid_T_table2.6" > 80 <!-- vsfpr : virtual_salt_flux_into_sea_water_due_to_rainfall = 0 --> 81 <!-- vsfevap : virtual_salt_flux_into_sea_water_due_to_evaporation = 0 --> 82 <!-- vsfriver : virtual_salt_flux_into_sea_water_from_rivers = 0 --> 83 <field field_ref="fsal_virt_cea" name="vsfsit" long_name="virtual_salt_flux_into_sea_water_due_to_sea_ice_thermodynamics" level="1" /> 84 <!-- vsf : virtual_salt_flux_into_sea_water = fsal_virtual + fsal_real --> 85 <!-- wfcorr : virtual_salt_flux_correction = 0 --> 86 <field field_ref="fsal_virt_cea" name="sfdsi" long_name="downward_sea_ice_basal_salt_flux" level="1" /> 87 <!-- sfriver : salt_flux_into_sea_water_from_rivers = 0 --> 88 </file> 89 90 <file id="file5" name_suffix="_grid_T_table2.7" > 91 <!-- hfgeou : upward_geothermal_heat_flux_at_sea_floor : cte, see nambbc and trabbc.F90 --> 92 <field field_ref="hflx_rain_cea" name="hfrainds" long_name="temperature_flux_due_to_rainfall_expressed_as_heat_flux_into_sea_water" level="1" /> 93 <field field_ref="hflx_evap_cea" name="hfevapds" long_name="temperature_flux_due_to_evaporation_expressed_as_heat_flux_out_of_sea_water" level="1" /> 94 <field field_ref="hflx_rnf_cea" name="hfrunoffds" 
long_name="temperature_flux_due_to_runoff_expressed_as_heat_flux_into_sea_water" level="1" /> 95 <field field_ref="hflx_snow_cea" name="hfsnthermds" long_name="heat_flux_into_sea_water_due_to_snow_thermodynamics" level="1" /> 96 <field field_ref="hflx_ice_cea" name="hfsithermds" long_name="heat_flux_into_sea_water_due_to_sea_ice_thermodynamics" level="1" /> 97 <field field_ref="hflx_cal_cea" name="hfibthermds" long_name="heat_flux_into_sea_water_due_to_iceberg_thermodynamics" level="1" /> 98 <!-- rlds : surface_net_downward_longwave_flux : not available --> 99 <!-- hfls : surface_downward_latent_heat_flux : not available --> 100 <!-- hfss : surface_downward_sensible_heat_flux: not available --> 101 <field field_ref="qns" name="nshfls" long_name="surface_net_downward_non_solar_flux" level="1" /> 102 <field field_ref="qsr" name="rsntds" long_name="surface_net_downward_shortwave_flux" level="1" /> 103 <field field_ref="qsr3d" name="rsds" long_name="downwelling_shortwave_flux_in_sea_water" level="1" /> 104 <field field_ref="qrp" name="hfcorr" long_name="heat_flux_correction" level="1" /> 105 </file> 109 106 <!-- 110 107 .............................................. grid U ................................................. 
111 --> 112 <file_group id="1m_grid_U" name="auto" > <!-- grid U --> 113 114 <file id="1m grid_U table 2.3" name_suffix="_table2.3" > 115 <field field_ref="uoce" name="uo" long_name="sea_water_x_velocity" /> 116 <field field_ref="u_masstr" name="umo" long_name="ocean_mass_x_transport" level="1" /> 117 <field field_ref="u_heattr" name="hfx" long_name="ocean_heat_x_transport" level="1" /> 118 <field field_ref="ueiv_heattr" name="hfxba" long_name="ocean_heat_x_transport_due_to_bolus_advection" level="2" /> 119 <field field_ref="udiff_heattr" name="hfxdiff" long_name="ocean_heat_x_transport_due_to_diffusion" level="2" /> 120 </file> 121 122 <file id="1m grid_U table 2.8" name_suffix="_table2.8" > 123 <field field_ref="utau" name="tauuo" long_name="surface_downward_x_stress" level="1" /> 124 <!-- tauucorr : surface_downward_x_stress_correction = 0 --> 125 </file> 126 127 </file_group> 108 --> 109 <file id="file6" name_suffix="_grid_U_table2.3" > 110 <field field_ref="uoce" name="uo" long_name="sea_water_x_velocity" /> 111 <field field_ref="u_masstr" name="umo" long_name="ocean_mass_x_transport" level="1" /> 112 <field field_ref="u_heattr" name="hfx" long_name="ocean_heat_x_transport" level="1" /> 113 <field field_ref="ueiv_heattr" name="hfxba" long_name="ocean_heat_x_transport_due_to_bolus_advection" level="2" /> 114 <field field_ref="udiff_heattr" name="hfxdiff" long_name="ocean_heat_x_transport_due_to_diffusion" level="2" /> 115 </file> 116 117 <file id="file7" name_suffix="_grid_U_table2.8" > 118 <field field_ref="utau" name="tauuo" long_name="surface_downward_x_stress" level="1" /> 119 <!-- tauucorr : surface_downward_x_stress_correction = 0 --> 120 </file> 128 121 <!-- 129 122 .............................................. grid V ................................................. 
130 --> 131 <file_group id="1m_grid_V" name="auto" > <!-- grid V --> 132 133 <file id="1m grid_V table 2.3" name_suffix="_table2.3" > 134 <field field_ref="voce" name="vo" long_name="sea_water_y_velocity" /> 135 <field field_ref="v_masstr" name="vmo" long_name="ocean_mass_y_transport" level="1" /> 136 <field field_ref="v_heattr" name="hfy" long_name="ocean_heat_y_transport" level="1" /> 137 <field field_ref="veiv_heattr" name="hfyba" long_name="ocean_heat_y_transport_due_to_bolus_advection" level="2" /> 138 <field field_ref="vdiff_heattr" name="hfydiff" long_name="ocean_heat_y_transport_due_to_diffusion" level="2" /> 139 </file> 140 141 <file id="1m grid_V table 2.8" name_suffix="_table2.8" > 142 <field field_ref="vtau" name="tauvo" long_name="surface_downward_y_stress" level="1" /> 143 <!-- tauvcorr : surface_downward_y_stress_correction = 0 --> 144 </file> 145 146 </file_group> 123 --> 124 <file id="file8" name_suffix="_grid_V_table2.3" > 125 <field field_ref="voce" name="vo" long_name="sea_water_y_velocity" /> 126 <field field_ref="v_masstr" name="vmo" long_name="ocean_mass_y_transport" level="1" /> 127 <field field_ref="v_heattr" name="hfy" long_name="ocean_heat_y_transport" level="1" /> 128 <field field_ref="veiv_heattr" name="hfyba" long_name="ocean_heat_y_transport_due_to_bolus_advection" level="2" /> 129 <field field_ref="vdiff_heattr" name="hfydiff" long_name="ocean_heat_y_transport_due_to_diffusion" level="2" /> 130 </file> 131 132 <file id="file9" name_suffix="_grid_V_table2.8" > 133 <field field_ref="vtau" name="tauvo" long_name="surface_downward_y_stress" level="1" /> 134 <!-- tauvcorr : surface_downward_y_stress_correction = 0 --> 135 </file> 147 136 <!-- 148 137 .............................................. grid W ................................................. 
149 --> 150 <file_group id="1m_grid_W" name="auto" > <!-- grid W --> 151 152 <file id="1m grid_W table 2.3" name_suffix="_table2.3" > 153 <field field_ref="w_masstr" name="wmo" long_name="upward_ocean_mass_transport" /> 154 <field field_ref="w_masstr2" name="wmosq" long_name="square_of_upward_ocean_mass_transport" /> 155 </file> 156 157 <file id="1m grid_W table 2.9" name_suffix="_table2.9" > 158 <field field_ref="avt" name="difvho" long_name="ocean_vertical_heat_diffusivity" level="2" /> 159 <field field_ref="avs" name="difvso" long_name="ocean_vertical_salt_diffusivity" level="2" /> 160 <!-- difvtrbo : ocean_vertical_tracer_diffusivity_due_to_background : cte with time, see namelist parameters nn_avb and nn_havtb --> 161 <field field_ref="av_tide" name="difvtrto" long_name="ocean_vertical_tracer_diffusivity_due_to_tides" level="2" /> 162 <!-- tnpeo : tendency_of_ocean_potential_energy_content : not available --> 163 <!-- tnpeot : tendency_of_ocean_potential_energy_content_due_to_tides : not available --> 164 <!-- tnpeotb : tendency_of_ocean_potential_energy_content_due_to_background : not available --> 165 <field field_ref="avm" name="difvmo" long_name="ocean_vertical_momentum_diffusivity" level="2" /> 166 <!-- difvmbo : ocean_vertical_momentum_diffusivity_due_to_background : cte with time, see namelist parameters nn_avb --> 167 <field field_ref="av_tide" name="difvmto" long_name="ocean_vertical_momentum_diffusivity_due_to_tides" level="2" /> <!-- same as tracer --> 168 <!-- difvmfdo : ocean_vertical_momentum_diffusivity_due_to_form_drag : ??? --> 169 <!-- dispkevfo : ocean_kinetic_energy_dissipation_per_unit_area_due_to_vertical_friction : not available --> 170 </file> 171 172 <file id="1m grid_W table 2.10" name_suffix="_table2.10" > 173 <!-- if ln_traldf_lap = .true. 
--> 174 <field field_ref="aht2d_eiv" name="diftrblo" long_name="ocean_tracer_bolus_laplacian_diffusivity" level="2" /> 175 <!-- diftrelo : ocean_tracer_epineutral_laplacian_diffusivity : cte with time, see ln_traldf_iso --> 176 <!-- diftrxylo : ocean_tracer_xy_laplacian_diffusivity : cte with time --> 177 <!-- if ln_traldf_bilap = .true. --> 178 <!-- field field_ref="diftrbbo" name="aht2d_eiv" long_name="ocean_tracer_bolus_biharmonic_diffusivity" level="2" /--> 179 <!-- diftrebo : ocean_tracer_epineutral_biharmonic_diffusivity : cte with time, see ln_traldf_iso --> 180 <!-- diftrxybo : ocean_tracer_xy_biharmonic_diffusivity : cte with time --> 181 <!-- tnkebto : tendency_of_ocean_eddy_kinetic_energy_content_due_to_bolus_transport : not available --> 182 <!-- difmxylo : ocean_momentum_xy_laplacian_diffusivity : cte with time, see ln_dynldf_lap --> 183 <!-- difmxybo : ocean_momentum_xy_biharmonic_diffusivity : cte with time, see ln_dynldf_bilap --> 184 <!-- dispkexyfo : ocean_kinetic_energy_dissipation_per_unit_area_due_to_xy_friction : not available --> 185 </file> 186 187 </file_group> 138 --> 139 <file id="file10" name_suffix="_grid_W_table2.3" > 140 <field field_ref="w_masstr" name="wmo" long_name="upward_ocean_mass_transport" /> 141 <field field_ref="w_masstr2" name="wmosq" long_name="square_of_upward_ocean_mass_transport" /> 142 </file> 143 144 <file id="file11" name_suffix="_grid_W_table2.9" > 145 <field field_ref="avt" name="difvho" long_name="ocean_vertical_heat_diffusivity" level="2" /> 146 <field field_ref="avs" name="difvso" long_name="ocean_vertical_salt_diffusivity" level="2" /> 147 <!-- difvtrbo : ocean_vertical_tracer_diffusivity_due_to_background : cte with time, see namelist parameters nn_avb and nn_havtb --> 148 <field field_ref="av_tide" name="difvtrto" long_name="ocean_vertical_tracer_diffusivity_due_to_tides" level="2" /> 149 <!-- tnpeo : tendency_of_ocean_potential_energy_content : not available --> 150 <!-- tnpeot : 
tendency_of_ocean_potential_energy_content_due_to_tides : not available --> 151 <!-- tnpeotb : tendency_of_ocean_potential_energy_content_due_to_background : not available --> 152 <field field_ref="avm" name="difvmo" long_name="ocean_vertical_momentum_diffusivity" level="2" /> 153 <!-- difvmbo : ocean_vertical_momentum_diffusivity_due_to_background : cte with time, see namelist parameters nn_avb --> 154 <field field_ref="av_tide" name="difvmto" long_name="ocean_vertical_momentum_diffusivity_due_to_tides" level="2" /> <!-- same as tracer --> 155 <!-- difvmfdo : ocean_vertical_momentum_diffusivity_due_to_form_drag : ??? --> 156 <!-- dispkevfo : ocean_kinetic_energy_dissipation_per_unit_area_due_to_vertical_friction : not available --> 157 </file> 158 159 <file id="file12" name_suffix="_grid_W_table2.10" > 160 <!-- if ln_traldf_lap = .true. --> 161 <field field_ref="aht2d_eiv" name="diftrblo" long_name="ocean_tracer_bolus_laplacian_diffusivity" level="2" /> 162 <!-- diftrelo : ocean_tracer_epineutral_laplacian_diffusivity : cte with time, see ln_traldf_iso --> 163 <!-- diftrxylo : ocean_tracer_xy_laplacian_diffusivity : cte with time --> 164 <!-- if ln_traldf_bilap = .true. --> 165 <!-- field field_ref="diftrbbo" name="aht2d_eiv" long_name="ocean_tracer_bolus_biharmonic_diffusivity" level="2" /--> 166 <!-- diftrebo : ocean_tracer_epineutral_biharmonic_diffusivity : cte with time, see ln_traldf_iso --> 167 <!-- diftrxybo : ocean_tracer_xy_biharmonic_diffusivity : cte with time --> 168 <!-- tnkebto : tendency_of_ocean_eddy_kinetic_energy_content_due_to_bolus_transport : not available --> 169 <!-- difmxylo : ocean_momentum_xy_laplacian_diffusivity : cte with time, see ln_dynldf_lap --> 170 <!-- difmxybo : ocean_momentum_xy_biharmonic_diffusivity : cte with time, see ln_dynldf_bilap --> 171 <!-- dispkexyfo : ocean_kinetic_energy_dissipation_per_unit_area_due_to_xy_friction : not available --> 172 </file> 188 173 <!-- 189 174 .............................................. 
scalar ................................................. 190 191 <file id=" 1m_scalar" name="auto" name_suffix="_table2.2" > <!-- scalar -->175 --> 176 <file id="file13" name_suffix="_scalar_table2.2" > <!-- scalar --> 192 177 <field field_ref="masstot" name="masso" long_name="sea_water_mass" /> 193 178 <field field_ref="voltot" name="volo" long_name="sea_water_volume" /> … … 200 185 <!-- 201 186 .............................................. icemod ................................................. 202 203 <file id=" 1m_icemod" name="auto" name_suffix="_table2.2" > <!-- scalar -->187 --> 188 <file id="file14" name_suffix="_icemod_table2.2" > <!-- scalar --> 204 189 <field field_ref="ice_pres" /> 205 190 <field field_ref="ice_cover" name="sic" long_name="sea_ice_area_fraction" /> … … 263 248 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 264 249 <axis id="nfloat" long_name="Float number" unit="-" /> 250 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 265 251 </axis_definition> 266 252 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_default.xml
- Property svn:mime-type deleted
- Property svn:keywords set to Id
r3771 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="10d" min_digits="4"> 24 24 25 <file_group id="1ts" output_freq="1ts" output_level="10" enabled=".TRUE."/> <!-- 1 time step files --> 26 25 27 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> 26 28 <file_group id="2h" output_freq="2h" output_level="10" enabled=".TRUE."/> <!-- 2h files --> … … 31 33 <file_group id="1d" output_freq="1d" output_level="10" enabled=".TRUE."> <!-- 1d files --> 32 34 33 <file id=" 1d_grid_T" name="auto" description="ocean T grid variables" >35 <file id="file1" name_suffix="_grid_T" description="ocean T grid variables" > 34 36 <field field_ref="sst" name="tos" long_name="sea_surface_temperature" /> 35 37 <field field_ref="sss" name="sos" long_name="sea_surface_salinity" /> … … 37 39 </file> 38 40 39 <file id=" 1d_grid_U" name="auto" description="ocean U grid variables" >41 <file id="file2" name_suffix="_grid_U" description="ocean U grid variables" > 40 42 <field field_ref="suoce" name="uos" long_name="sea_surface_x_velocity" /> 41 43 </file> 42 44 43 <file id=" 1d_grid_V" name="auto" description="ocean V grid variables" >45 <file id="file3" name_suffix="_grid_V" description="ocean V grid variables" > 44 46 <field field_ref="svoce" name="vos" long_name="sea_surface_y_velocity" /> 45 47 </file> 46 48 47 49 </file_group> 50 48 51 <file_group id="3d" output_freq="3d" output_level="10" enabled=".TRUE."/> <!-- 3d files --> 49 52 50 53 <file_group id="5d" output_freq="5d" output_level="10" enabled=".TRUE."> <!-- 5d files --> 51 54 52 <file id=" 5d_grid_T" name="auto" description="ocean T grid variables" >55 <file id="file4" name_suffix="_grid_T" description="ocean T grid variables" > 53 56 <field field_ref="toce" name="thetao" long_name="sea_water_potential_temperature" /> 54 57 <field field_ref="soce" name="so" 
long_name="sea_water_salinity" /> … … 66 69 </file> 67 70 68 <file id=" 5d_grid_U" name="auto" description="ocean U grid variables" >71 <file id="file5" name_suffix="_grid_U" description="ocean U grid variables" > 69 72 <field field_ref="uoce" name="uo" long_name="sea_water_x_velocity" /> 70 73 <field field_ref="suoce" name="uos" long_name="sea_surface_x_velocity" /> … … 72 75 </file> 73 76 74 <file id=" 5d_grid_V" name="auto" description="ocean V grid variables" >77 <file id="file6" name_suffix="_grid_V" description="ocean V grid variables" > 75 78 <field field_ref="voce" name="vo" long_name="sea_water_y_velocity" /> 76 79 <field field_ref="svoce" name="vos" long_name="sea_surface_y_velocity" /> … … 78 81 </file> 79 82 80 <file id=" 5d_grid_W" name="auto" description="ocean W grid variables" >83 <file id="file7" name_suffix="_grid_W" description="ocean W grid variables" > 81 84 <field field_ref="woce" name="wo" long_name="ocean vertical velocity" /> 82 85 <field field_ref="avt" name="difvho" long_name="ocean_vertical_heat_diffusivity" /> 83 86 </file> 84 87 85 <file id=" 5d_icemod" name="auto" description="ice variables" >88 <file id="file8" name_suffix="_icemod" description="ice variables" > 86 89 <field field_ref="ice_pres" /> 87 90 <field field_ref="snowthic_cea" name="snd" long_name="surface_snow_thickness" /> … … 126 129 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 127 130 <axis id="nfloat" long_name="Float number" unit="-" /> 131 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 128 132 </axis_definition> 129 133 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_demo.xml
- Property svn:mime-type deleted
- Property svn:keywords set to Id
r3771 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."> <!-- 1h files --> 26 <file id=" 1h_grid_T" name="auto" description="ocean T grid variables" >26 <file id="file1" name_suffix="_grid_T" description="ocean T grid variables" > 27 27 <field field_ref="sst" /> 28 28 <field field_ref="qsr" /> … … 32 32 33 33 <file_group id="1d" output_freq="1d" output_level="10" enabled=".TRUE."> <!-- 1d files --> 34 35 <file_group id="1d_grid_T" name="auto" description="ocean T grid variables" > 36 37 <!-- example of "hand made" zoom --> 38 <file id="blabla_1" name_suffix="_myzoom" > 39 <!-- group of variables sharing the same zoom. see zoom definition in domain_def.xml --> 40 <field_group id="blabla" domain_ref="myzoom" > 41 <field field_ref="toce" /> 42 <field field_ref="soce" /> 43 </field_group> 44 </file> 45 46 <!-- mooring: automatic definition of the file name suffix based on id="0n180wT" --> 47 <!-- include a group of variables. see field_def.xml for mooring variables definition --> 48 <file id="0n180wT" name_suffix="auto" > 49 <field_group group_ref="mooring"/> 50 </file> 51 52 <!-- Equatorial section: automatic definition of the file name suffix based on id="EqT" --> 53 <!-- Zoom over vertical axis. 
def of axis_ref in the axis_definition below --> 54 <file id="EqT" name_suffix="auto" > 55 <field_group id="EqT" domain_ref="EqT" > 56 <field field_ref="toce" name="votemper" axis_ref="deptht_zoom" /> 57 </field_group> 58 </file> 59 60 <!-- global file with different operations on data --> 61 <file id="blabla_2" > 62 <field field_ref="toce" default_value="-10" /> <!-- redefine the missing value --> 63 <field field_ref="sst" name="sstmooring1" domain_ref="0n180wT" /> <!-- include a mooring --> 64 <field field_ref="sst" name="sst_1d_ave" /> <!-- mean --> 65 <field field_ref="sst" name="sst_1d_inst" operation="instant" /> <!-- instant value --> 66 <field field_ref="sst" name="sst_1d_max" operation="maximum" /> <!-- max --> 67 <field field_ref="suoce" /> <!-- include a U-grid variable in the list --> 68 </file> 69 70 </file_group> 34 35 <!-- example of "hand made" zoom --> 36 <file id="file2" name_suffix="_grid_T_myzoom" > 37 <!-- group of variables sharing the same zoom. see zoom definition in domain_def.xml --> 38 <field_group id="blabla" domain_ref="myzoom" > 39 <field field_ref="toce" /> 40 <field field_ref="soce" /> 41 </field_group> 42 </file> 43 44 <!-- mooring: automatic definition of the file name suffix based on id="0n180wT" --> 45 <!-- include a group of variables. see field_def.xml for mooring variables definition --> 46 <file id="0n180wT"> 47 <field_group group_ref="mooring" domain_ref="0n180wT" /> 48 </file> 49 50 <!-- Equatorial section: automatic definition of the file name suffix based on id="EqT" --> 51 <!-- Zoom over vertical axis. 
def of axis_ref in the axis_definition below --> 52 <file id="EqT" > 53 <field_group id="EqT" domain_ref="EqT" > 54 <field field_ref="toce" name="votemper" axis_ref="deptht_myzoom" /> 55 <field field_ref="sss" /> 56 </field_group> 57 </file> 58 59 <!-- global file with different operations on data --> 60 <file id="file3" > 61 <field field_ref="toce" default_value="-10" /> <!-- redefine the missing value --> 62 <field field_ref="sst" name="sstmooring1" domain_ref="0n180wT" /> <!-- include a mooring --> 63 <field field_ref="sst" name="sst_1d_ave" /> <!-- mean --> 64 <field field_ref="sst" name="sst_1d_inst" operation="instant" /> <!-- instant value --> 65 <field field_ref="sst" name="sst_1d_max" operation="maximum" /> <!-- max --> 66 <field field_ref="suoce" /> <!-- include a U-grid variable in the list --> 67 </file> 68 71 69 </file_group> 72 73 </file_definition>70 71 </file_definition> 74 72 75 73 <!-- … 82 80 <axis_group id="deptht" long_name="Vertical T levels" unit="m" positive="down" > 83 81 <axis id="deptht" /> 84 <axis id="deptht_zoom" zoom_begin="1" zoom_end="10" />82 <axis id="deptht_myzoom" zoom_begin="1" zoom_end="10" /> 85 83 </axis_group> 86 84 <axis id="depthu" long_name="Vertical U levels" unit="m" positive="down" /> 87 85 … 88 86 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 89 87 <axis id="nfloat" long_name="Float number" unit="-" /> 88 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 90 89 </axis_definition> 91 90 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/iodef_oldstyle.xml
- Property svn:mime-type deleted
- Property svn:keywords set to Id
r3771 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1mo" min_digits="4"> 24 24 25 <file_group id="1ts" output_freq="1ts" output_level="10" enabled=".TRUE."/> <!-- 1 time step files --> 26 25 27 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> 26 28 <file_group id="2h" output_freq="2h" output_level="10" enabled=".TRUE."/> <!-- 2h files --> … 33 35 <file_group id="5d" output_freq="5d" output_level="10" enabled=".TRUE."> <!-- 5d files --> 34 36 35 <file id=" 5d_grid_T" name="auto" description="ocean T grid variables" >37 <file id="file1" name_suffix="_grid_T" description="ocean T grid variables" > 36 38 <field field_ref="toce" name="votemper" /> 37 39 <field field_ref="soce" name="vosaline" /> … 52 54 </file> 53 55 54 <file id=" 5d_grid_U" name="auto" description="ocean U grid variables" >56 <file id="file2" name_suffix="_grid_U" description="ocean U grid variables" > 55 57 <field field_ref="uoce" name="vozocrtx" /> 56 58 <field field_ref="uoce_eiv" name="vozoeivu" /> … 58 60 </file> 59 61 60 <file id=" 5d_grid_V" name="auto" description="ocean V grid variables" >62 <file id="file3" name_suffix="_grid_V" description="ocean V grid variables" > 61 63 <field field_ref="voce" name="vomecrty" /> 62 64 <field field_ref="voce_eiv" name="vomeeivv" /> … 64 66 </file> 65 67 66 <file id=" 5d_grid_W" name="auto" description="ocean W grid variables" >68 <file id="file4" name_suffix="_grid_W" description="ocean V grid variables" > 67 69 <field field_ref="woce" name="vovecrtz" /> 68 70 <field field_ref="avt" name="votkeavt" /> … 71 73 </file> 72 74 73 <file id=" 5d_icemod" name="auto" description="ice variables" >75 <file id="file5" name_suffix="_icemod" description="ocean V grid variables" > 74 76 <field field_ref="ice_pres" /> 75 77 <field field_ref="snowthic_cea" name="isnowthi" /> … … 
114 116       <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" />
115 117       <axis id="nfloat" long_name="Float number" unit="-" />
    118       <axis id="icbcla" long_name="Iceberg class" unit="-" />
116 119    </axis_definition>
117 120
-
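The recurring change in these iodef.xml files replaces the per-file `name="auto"` mechanism with a single `name` pattern on `<file_definition>`, from which XIOS composes every output file name; each `<file>` now contributes only a `name_suffix`. A minimal sketch of the new layout (the resolved file name in the comment is illustrative, assuming a 5-day mean of an experiment named ORCA2 starting 1 January 2000):

```xml
<file_definition type="multiple_file"
                 name="@expname@_@freq@_@startdate@_@enddate@"
                 sync_freq="1mo" min_digits="4">
  <file_group id="5d" output_freq="5d" output_level="10" enabled=".TRUE.">
    <!-- would yield e.g. ORCA2_5d_20000101_20000105_grid_T.nc (illustrative) -->
    <file id="file1" name_suffix="_grid_T" description="ocean T grid variables">
      <field field_ref="toce" name="votemper"/>
    </file>
  </file_group>
</file_definition>
```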
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM/EXP00/namelist_ice_lim2
r2580 r4193
 89  89   ! ! ! (if <0 months) ! name ! (logical) ! (T/F) ! 'monthly' ! filename ! pairing !
 90  90   sn_hicif = 'ice_damping', -1. , 'hicif' , .true. , .true. , 'yearly' , '' , ''
 91       sn_cnf   = 'ice_damping', -1. , 'frld'  , .true. , .true. , 'yearly' , '' , ''
     91   sn_frld  = 'ice_damping', -1. , 'frld'  , .true. , .true. , 'yearly' , '' , ''
 92  92   !
 93  93   cn_dir = './'      ! root directory for the location of the runoff files
-
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM_CFC_C14b/EXP00/iodef.xml
r3771 r4193
21 21   -->
22 22
23      <file_definition type="multiple_file" sync_freq="1d" min_digits="4">
   23   <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4">
24 24
25 25     <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files -->
… …
60 60     <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" />
61 61     <axis id="nfloat" long_name="Float number" unit="-" />
   62     <axis id="icbcla" long_name="Iceberg class" unit="-" />
62 63   </axis_definition>
63 64
-
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM_CFC_C14b/EXP00/namelist
r3795 r4193 318 318 rn_alphdi = 0.72 ! (Pyane, 1972) 319 319 / 320 320 !----------------------------------------------------------------------- 321 &namberg ! iceberg parameters 322 !----------------------------------------------------------------------- 323 ln_icebergs = .false. 324 ln_bergdia = .true. ! Calculate budgets 325 nn_verbose_level = 1 ! Turn on more verbose output if level > 0 326 nn_verbose_write = 15 ! Timesteps between verbose messages 327 nn_sample_rate = 1 ! Timesteps between sampling for trajectory storage 328 ! Initial mass required for an iceberg of each class 329 rn_initial_mass = 8.8e7, 4.1e8, 3.3e9, 1.8e10, 3.8e10, 7.5e10, 1.2e11, 2.2e11, 3.9e11, 7.4e11 330 ! Proportion of calving mass to apportion to each class 331 rn_distribution = 0.24, 0.12, 0.15, 0.18, 0.12, 0.07, 0.03, 0.03, 0.03, 0.02 332 ! Ratio between effective and real iceberg mass (non-dim) 333 ! i.e. number of icebergs represented at a point 334 rn_mass_scaling = 2000, 200, 50, 20, 10, 5, 2, 1, 1, 1 335 ! thickness of newly calved bergs (m) 336 rn_initial_thickness = 40., 67., 133., 175., 250., 250., 250., 250., 250., 250. 337 rn_rho_bergs = 850. ! Density of icebergs 338 rn_LoW_ratio = 1.5 ! Initial ratio L/W for newly calved icebergs 339 ln_operator_splitting = .true. ! Use first order operator splitting for thermodynamics 340 rn_bits_erosion_fraction = 0. ! Fraction of erosion melt flux to divert to bergy bits 341 rn_sicn_shift = 0. ! Shift of sea-ice concn in erosion flux (0<sicn_shift<1) 342 ln_passive_mode = .false. ! iceberg - ocean decoupling 343 nn_test_icebergs = 10 ! Create test icebergs of this class (-1 = no) 344 ! Put a test iceberg at each gridpoint in box (lon1,lon2,lat1,lat2) 345 rn_test_box = 108.0, 116.0, -66.0, -58.0 346 rn_speed_limit = 0. ! CFL speed limit for a berg 347 348 ! filename ! freq (hours) ! variable ! time interp. ! clim !'yearly' or ! weights ! rotation ! 349 ! ! (<0 months) ! name ! (logical) ! (T/F) ! 'monthly' ! filename ! pairing ! 
350 sn_icb = 'calving' , -1 , 'calvingmask', .true. , .true., 'yearly' , ' ' , ' ' 351 352 cn_dir = './' 353 / 321 354 !!====================================================================== 322 355 !! *** Lateral boundary condition *** -
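The new &namberg block arrives disabled (`ln_icebergs = .false.`), so none of the iceberg parameters take effect until it is switched on. A minimal activation sketch, reusing values from the block above (untested illustration, not a recommended setup):

```fortran
&namberg             ! iceberg parameters (minimal activation sketch)
   ln_icebergs      = .true.                        ! turn the iceberg model on
   nn_test_icebergs = 10                            ! seed test bergs of class 10 (-1 = none)
   rn_test_box      = 108.0, 116.0, -66.0, -58.0    ! lon1,lon2,lat1,lat2 seeding box
/
```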
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM_PISCES/EXP00/iodef.xml
r3771 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="10d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> … … 31 31 <file_group id="1d" output_freq="1d" output_level="10" enabled=".TRUE."> <!-- 1d files --> 32 32 33 <file id=" 1d_grid_T" name="auto" description="ocean T grid variables" >33 <file id="file1" name_suffix="_grid_T" description="ocean T grid variables" > 34 34 <field field_ref="sst" name="sosstsst" /> 35 35 <field field_ref="sss" name="sosaline" /> … … 37 37 </file> 38 38 39 <file id=" 1d_grid_U" name="auto" description="ocean U grid variables" >39 <file id="file2" name_suffix="_grid_U" description="ocean U grid variables" > 40 40 <field field_ref="suoce" name="vozocrtx" /> 41 41 </file> 42 42 43 <file id=" 1d_grid_V" name="auto" description="ocean V grid variables" >43 <file id="file3" name_suffix="_grid_V" description="ocean V grid variables" > 44 44 <field field_ref="svoce" name="vomecrty" /> 45 45 </file> … … 49 49 <file_group id="5d" output_freq="5d" output_level="10" enabled=".TRUE."> <!-- 5d files --> 50 50 51 <file id=" 5d_grid_T" name="auto" description="ocean T grid variables" >51 <file id="file4" name_suffix="_grid_T" description="ocean T grid variables" > 52 52 <field field_ref="toce" name="votemper" /> 53 53 <field field_ref="soce" name="vosaline" /> … … 58 58 <field field_ref="qsr" name="soshfldo" /> 59 59 <field field_ref="saltflx" name="sosfldow" /> 60 <field field_ref="fmmflx" name="sofmflup" /> 60 61 <field field_ref="qt" name="sohefldo" /> 61 62 <field field_ref="mldr10_1" name="somxl010" /> … … 68 69 </file> 69 70 70 <file id=" 5d_grid_U" name="auto" description="ocean U grid variables" >71 <file id="file5" name_suffix="_grid_U" description="ocean U grid variables" > 71 72 <field field_ref="uoce" name="vozocrtx" /> 72 73 
<field field_ref="uoce_eiv" name="vozoeivu" /> … … 74 75 </file> 75 76 76 <file id=" 5d_grid_V" name="auto" description="ocean V grid variables" >77 <file id="file6" name_suffix="_grid_V" description="ocean V grid variables" > 77 78 <field field_ref="voce" name="vomecrty" /> 78 79 <field field_ref="voce_eiv" name="vomeeivv" /> … … 80 81 </file> 81 82 82 <file id=" 5d_grid_W" name="auto" description="ocean W grid variables" >83 <file id="file7" name_suffix="_grid_W" description="ocean W grid variables" > 83 84 <field field_ref="woce" name="vovecrtz" /> 84 85 <field field_ref="avt" name="votkeavt" /> … … 87 88 </file> 88 89 89 <file id=" 5d_icemod" name="auto" description="ice variables" >90 <file id="file8" name_suffix="_icemod" description="ice variables" > 90 91 <field field_ref="ice_pres" /> 91 92 <field field_ref="snowthic_cea" name="isnowthi" /> … … 107 108 <file_group id="1m" output_freq="1mo" output_level="10" enabled=".TRUE."> <!-- real monthly files --> 108 109 109 <file id=" 1m_ptrc_T" name="auto" description="pisces sms variables" >110 <file id="file9" name_suffix="_ptrc_T" description="pisces sms variables" > 110 111 <field field_ref="DIC" /> 111 112 <field field_ref="Alkalini" /> … … 119 120 </file> 120 121 121 <file id=" 1m_diad_T" name="auto" description="additional pisces diagnostics" >122 <file id="file10" name_suffix="_diad_T" description="additional pisces diagnostics" > 122 123 <field field_ref="Cflx" /> 123 124 <field field_ref="Dpco2" /> … … 132 133 <file_group id="1y" output_freq="1y" output_level="10" enabled=".TRUE."> <!-- real yearly files --> 133 134 134 <file id=" 1y_ptrc_T" name="auto" description="pisces sms variables" >135 <file id="file11" name_suffix="_ptrc_T" description="pisces sms variables" > 135 136 <field field_ref="DIC" /> 136 137 <field field_ref="Alkalini" /> … … 159 160 </file> 160 161 161 <file id=" 1y_diad_T" name="auto" description="additional pisces diagnostics" >162 <file id="file12" name_suffix="_diad_T" 
description="additional pisces diagnostics" > 162 163 <field field_ref="PH" /> 163 164 <field field_ref="CO3" /> … … 229 230 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 230 231 <axis id="nfloat" long_name="Float number" unit="-" /> 232 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 231 233 </axis_definition> 232 234 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_LIM_PISCES/EXP00/namelist_pisces
r3824 r4193 30 30 cn_dir = './' ! root directory for the location of the dynamical files 31 31 ! 32 ln_presatm = . true. ! constant atmopsheric pressure (F) or from a file (T)32 ln_presatm = .false. ! constant atmopsheric pressure (F) or from a file (T) 33 33 / 34 34 !''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' … … 39 39 xkmort = 2.E-7 ! half saturation constant for mortality 40 40 ferat3 = 10.E-6 ! Fe/C in zooplankton 41 wsbio2 = 30. ! Big particles sinking speed41 wsbio2 = 50. ! Big particles sinking speed 42 42 niter1max = 1 ! Maximum number of iterations for POC 43 43 niter2max = 1 ! Maximum number of iterations for GOC … … 47 47 !,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 48 48 concnno3 = 1.e-6 ! Nitrate half saturation of nanophytoplankton 49 concdno3 = 3.E-6 ! Phosphate half saturation for diatoms49 concdno3 = 3.E-6 ! Phosphate half saturation for diatoms 50 50 concnnh4 = 1.E-7 ! NH4 half saturation for phyto 51 concdnh4 = 3.E-7 ! NH4 half saturation for diatoms52 concnfer = 1.E-9 53 concdfer = 3.E-9 ! Iron half saturation for diatoms51 concdnh4 = 3.E-7 ! NH4 half saturation for diatoms 52 concnfer = 1.E-9 ! Iron half saturation for phyto 53 concdfer = 3.E-9 ! Iron half saturation for diatoms 54 54 concbfe = 1.E-11 ! Half-saturation for Fe limitation of Bacteria 55 55 concbnh4 = 2.5E-8 ! NH4 half saturation for phyto … … 60 60 xsizerd = 3.0 ! Size ratio for diatoms 61 61 xksi1 = 2.E-6 ! half saturation constant for Si uptake 62 xksi2 = 20E-6 ! half saturation constant for Si/C62 xksi2 = 20E-6 ! half saturation constant for Si/C 63 63 xkdoc = 417.E-6 ! half-saturation constant of DOC remineralization 64 64 qnfelim = 7.E-6 ! Optimal quota of phyto … … 84 84 excret2 = 0.05 ! excretion ratio of diatoms 85 85 ln_newprod = .true. ! Enable new parame. of production (T/F) 86 bresp = 0.0 0333! Basal respiration rate86 bresp = 0.0333 ! Basal respiration rate 87 87 chlcnm = 0.033 ! 
Minimum Chl/C in nanophytoplankton 88 88 chlcdm = 0.05 ! Minimum Chl/C in diatoms … … 106 106 part2 = 0.75 ! part of calcite not dissolved in mesozoo guts 107 107 grazrat2 = 0.75 ! maximal mesozoo grazing rate 108 resrat2 = 0.0 05! exsudation rate of mesozooplankton108 resrat2 = 0.01 ! exsudation rate of mesozooplankton 109 109 mzrat2 = 0.03 ! mesozooplankton mortality rate 110 110 xprefc = 1. ! zoo preference for phyto … … 118 118 xthresh2 = 3E-7 ! Food threshold for grazing 119 119 xkgraz2 = 20.E-6 ! half sturation constant for meso grazing 120 epsher2 = 0. 3! Efficicency of Mesozoo growth120 epsher2 = 0.4 ! Efficicency of Mesozoo growth 121 121 sigma2 = 0.6 ! Fraction of mesozoo excretion as DOM 122 122 unass2 = 0.3 ! non assimilated fraction of P by mesozoo … … 128 128 part = 0.5 ! part of calcite not dissolved in microzoo gutsa 129 129 grazrat = 3.0 ! maximal zoo grazing rate 130 resrat = 0.0 3! exsudation rate of zooplankton130 resrat = 0.05 ! exsudation rate of zooplankton 131 131 mzrat = 0.004 ! zooplankton mortality rate 132 132 xpref2c = 0.1 ! Microzoo preference for POM … … 138 138 xthresh = 3.E-7 ! Food threshold for feeding 139 139 xkgraz = 20.E-6 ! half sturation constant for grazing 140 epsher = 0. 3! Efficiency of microzoo growth140 epsher = 0.4 ! Efficiency of microzoo growth 141 141 sigma1 = 0.6 ! Fraction of microzoo excretion as DOM 142 142 unass = 0.3 ! non assimilated fraction of phyto by zoo … … 146 146 !,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 147 147 ln_fechem = .false. ! complex iron chemistry ( T/F ) 148 ln_ligvar = . true. ! variable ligand concentration148 ln_ligvar = .false. ! variable ligand concentration 149 149 xlam1 = 0.005 ! scavenging rate of Iron 150 150 xlamdust = 150.0 ! Scavenging rate of dust … … 154 154 &nampisrem ! parameters for remineralization 155 155 !,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 156 xremik = 0.3 5! remineralization rate of DOC156 xremik = 0.3 ! 
remineralization rate of DOC 157 157 xremip = 0.025 ! remineralisation rate of POC 158 158 nitrif = 0.05 ! NH4 nitrification rate … … 261 261 &nampismass ! Mass conservation 262 262 !,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 263 ln_check_mass = . false. ! Check mass conservation264 / 263 ln_check_mass = .true. ! Check mass conservation 264 / -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_OFF_PISCES/EXP00/iodef.xml
r3771 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files --> … … 35 35 <file_group id="1m" output_freq="1mo" output_level="10" enabled=".TRUE."> <!-- real monthly files --> 36 36 37 <file id=" 1m_ptrc_T" name="auto" description="pisces sms variables" >37 <file id="file1" name_suffix="_ptrc_T" description="pisces sms variables" > 38 38 <field field_ref="DIC" /> 39 39 <field field_ref="Alkalini" /> … … 47 47 </file> 48 48 49 <file id=" 1m_diad_T" name="auto" description="additional pisces diagnostics" >49 <file id="file2" name_suffix="_diad_T" description="additional pisces diagnostics" > 50 50 <field field_ref="Cflx" /> 51 51 <field field_ref="Dpco2" /> … … 60 60 <file_group id="1y" output_freq="1y" output_level="10" enabled=".TRUE."> <!-- real yearly files --> 61 61 62 <file id=" 1y_ptrc_T" name="auto" description="pisces sms variables" >62 <file id="file3" name_suffix="_ptrc_T" description="pisces sms variables" > 63 63 <field field_ref="DIC" /> 64 64 <field field_ref="Alkalini" /> … … 87 87 </file> 88 88 89 <file id=" 1y_diad_T" name="auto" description="additional pisces diagnostics" >89 <file id="file4" name_suffix="_diad_T" description="additional pisces diagnostics" > 90 90 <field field_ref="PH" /> 91 91 <field field_ref="CO3" /> … … 157 157 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 158 158 <axis id="nfloat" long_name="Float number" unit="-" /> 159 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 159 160 </axis_definition> 160 161 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_OFF_PISCES/EXP00/namelist
r3795 r4193
665 665   sn_mld = 'dyna_grid_T' , 120 , 'somixhgt' , .true. , .true. , 'yearly' , '' , ''
666 666   sn_emp = 'dyna_grid_T' , 120 , 'sowaflup' , .true. , .true. , 'yearly' , '' , ''
667       sn_sfx = 'dyna_grid_T' , 120 , 'sowaflcd' , .true. , .true. , 'yearly' , '' , ''
    667   sn_fmf = 'dyna_grid_T' , 120 , 'iowaflup' , .true. , .true. , 'yearly' , '' , ''
668 668   sn_ice = 'dyna_grid_T' , 120 , 'soicecov' , .true. , .true. , 'yearly' , '' , ''
669 669   sn_qsr = 'dyna_grid_T' , 120 , 'soshfldo' , .true. , .true. , 'yearly' , '' , ''
-
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_OFF_PISCES/EXP00/namelist_pisces
r3824 r4193 30 30 cn_dir = './' ! root directory for the location of the dynamical files 31 31 ! 32 ln_presatm = . true. ! constant atmopsheric pressure (F) or from a file (T)32 ln_presatm = .false. ! constant atmopsheric pressure (F) or from a file (T) 33 33 / 34 34 !''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''' … … 39 39 xkmort = 2.E-7 ! half saturation constant for mortality 40 40 ferat3 = 10.E-6 ! Fe/C in zooplankton 41 wsbio2 = 30. ! Big particles sinking speed41 wsbio2 = 50. ! Big particles sinking speed 42 42 niter1max = 1 ! Maximum number of iterations for POC 43 43 niter2max = 1 ! Maximum number of iterations for GOC … … 47 47 !,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 48 48 concnno3 = 1.e-6 ! Nitrate half saturation of nanophytoplankton 49 concdno3 = 3.E-6 ! Phosphate half saturation for diatoms49 concdno3 = 3.E-6 ! Phosphate half saturation for diatoms 50 50 concnnh4 = 1.E-7 ! NH4 half saturation for phyto 51 concdnh4 = 3.E-7 ! NH4 half saturation for diatoms52 concnfer = 1.E-9 53 concdfer = 3.E-9 ! Iron half saturation for diatoms51 concdnh4 = 3.E-7 ! NH4 half saturation for diatoms 52 concnfer = 1.E-9 ! Iron half saturation for phyto 53 concdfer = 3.E-9 ! Iron half saturation for diatoms 54 54 concbfe = 1.E-11 ! Half-saturation for Fe limitation of Bacteria 55 55 concbnh4 = 2.5E-8 ! NH4 half saturation for phyto … … 60 60 xsizerd = 3.0 ! Size ratio for diatoms 61 61 xksi1 = 2.E-6 ! half saturation constant for Si uptake 62 xksi2 = 20E-6 ! half saturation constant for Si/C62 xksi2 = 20E-6 ! half saturation constant for Si/C 63 63 xkdoc = 417.E-6 ! half-saturation constant of DOC remineralization 64 64 qnfelim = 7.E-6 ! Optimal quota of phyto … … 84 84 excret2 = 0.05 ! excretion ratio of diatoms 85 85 ln_newprod = .true. ! Enable new parame. of production (T/F) 86 bresp = 0.0 0333! Basal respiration rate86 bresp = 0.0333 ! Basal respiration rate 87 87 chlcnm = 0.033 ! 
Minimum Chl/C in nanophytoplankton 88 88 chlcdm = 0.05 ! Minimum Chl/C in diatoms … … 106 106 part2 = 0.75 ! part of calcite not dissolved in mesozoo guts 107 107 grazrat2 = 0.75 ! maximal mesozoo grazing rate 108 resrat2 = 0.0 05! exsudation rate of mesozooplankton108 resrat2 = 0.01 ! exsudation rate of mesozooplankton 109 109 mzrat2 = 0.03 ! mesozooplankton mortality rate 110 110 xprefc = 1. ! zoo preference for phyto … … 118 118 xthresh2 = 3E-7 ! Food threshold for grazing 119 119 xkgraz2 = 20.E-6 ! half sturation constant for meso grazing 120 epsher2 = 0. 3! Efficicency of Mesozoo growth120 epsher2 = 0.4 ! Efficicency of Mesozoo growth 121 121 sigma2 = 0.6 ! Fraction of mesozoo excretion as DOM 122 122 unass2 = 0.3 ! non assimilated fraction of P by mesozoo … … 128 128 part = 0.5 ! part of calcite not dissolved in microzoo gutsa 129 129 grazrat = 3.0 ! maximal zoo grazing rate 130 resrat = 0.0 3! exsudation rate of zooplankton130 resrat = 0.05 ! exsudation rate of zooplankton 131 131 mzrat = 0.004 ! zooplankton mortality rate 132 132 xpref2c = 0.1 ! Microzoo preference for POM … … 138 138 xthresh = 3.E-7 ! Food threshold for feeding 139 139 xkgraz = 20.E-6 ! half sturation constant for grazing 140 epsher = 0. 3! Efficiency of microzoo growth140 epsher = 0.4 ! Efficiency of microzoo growth 141 141 sigma1 = 0.6 ! Fraction of microzoo excretion as DOM 142 142 unass = 0.3 ! non assimilated fraction of phyto by zoo … … 146 146 !,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 147 147 ln_fechem = .false. ! complex iron chemistry ( T/F ) 148 ln_ligvar = . true. ! variable ligand concentration148 ln_ligvar = .false. ! variable ligand concentration 149 149 xlam1 = 0.005 ! scavenging rate of Iron 150 150 xlamdust = 150.0 ! Scavenging rate of dust … … 154 154 &nampisrem ! parameters for remineralization 155 155 !,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 156 xremik = 0.3 5! remineralization rate of DOC156 xremik = 0.3 ! 
remineralization rate of DOC 157 157 xremip = 0.025 ! remineralisation rate of POC 158 158 nitrif = 0.05 ! NH4 nitrification rate … … 261 261 &nampismass ! Mass conservation 262 262 !,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,,, 263 ln_check_mass = . false. ! Check mass conservation264 / 263 ln_check_mass = .true. ! Check mass conservation 264 / -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_SAS_LIM/EXP00/iodef.xml
r3771 r4193
21 21   -->
22 22
23      <file_definition type="multiple_file" sync_freq="1d" min_digits="4">
   23   <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4">
24 24
25 25     <file_group id="1h" output_freq="1h" output_level="10" enabled=".TRUE."/> <!-- 1h files -->
… …
60 60     <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" />
61 61     <axis id="nfloat" long_name="Float number" unit="-" />
   62     <axis id="icbcla" long_name="Iceberg class" unit="-" />
62 63   </axis_definition>
63 64
-
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/ORCA2_SAS_LIM/EXP00/namelist_ice_lim2
r3331 r4193
 89  89   ! ! ! (if <0 months) ! name ! (logical) ! (T/F) ! 'monthly' ! filename ! pairing !
 90  90   sn_hicif = 'ice_damping', -1. , 'hicif' , .true. , .true. , 'yearly' , '' , ''
 91       sn_cnf   = 'ice_damping', -1. , 'frld'  , .true. , .true. , 'yearly' , '' , ''
     91   sn_frld  = 'ice_damping', -1. , 'frld'  , .true. , .true. , 'yearly' , '' , ''
 92  92   !
 93  93   cn_dir = './'      ! root directory for the location of the runoff files
-
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/SHARED/field_def.xml
r3824 r4193 53 53 54 54 <field id="empmr" long_name="Net Upward Water Flux" unit="kg/m2/s" /> 55 <field id="saltflx" long_name="Downward salt flux" unit="PSU/m2/s" /> 55 <field id="saltflx" long_name="Downward salt flux" unit="PSU/m2/s" /> 56 <field id="fmmflx" long_name="Water flux due to freezing/melting" unit="kg/m2/s" /> 56 57 <field id="snowpre" long_name="Snow precipitation" unit="kg/m2/s" /> 57 58 <field id="runoffs" long_name="River Runoffs" unit="Kg/m2/s" /> … … 218 219 <field id="traj_dens" long_name="floats density" unit="kg/m3" /> 219 220 <field id="traj_group" long_name="floats group" unit="none" /> 221 </field_group> 222 223 <!-- variables available with iceberg trajectories --> 224 <field_group id="icbvar" domain_ref="grid_T" > 225 <field id="berg_melt" long_name="icb melt rate of icebergs" unit="kg/m2/s" /> 226 <field id="berg_buoy_melt" long_name="icb buoyancy component of iceberg melt rate" unit="kg/m2/s" /> 227 <field id="berg_eros_melt" long_name="icb erosion component of iceberg melt rate" unit="kg/m2/s" /> 228 <field id="berg_conv_melt" long_name="icb convective component of iceberg melt rate" unit="kg/m2/s" /> 229 <field id="berg_virtual_area" long_name="icb virtual coverage by icebergs" unit="m2" /> 230 <field id="bits_src" long_name="icb mass source of bergy bits" unit="kg/m2/s" /> 231 <field id="bits_melt" long_name="icb melt rate of bergy bits" unit="kg/m2/s" /> 232 <field id="bits_mass" long_name="icb bergy bit density field" unit="kg/m2" /> 233 <field id="berg_mass" long_name="icb iceberg density field" unit="kg/m2" /> 234 <field id="calving" long_name="icb calving mass input" unit="kg/s" /> 235 <field id="berg_floating_melt" long_name="icb melt rate of icebergs + bits" unit="kg/m2/s" /> 236 <field id="berg_real_calving" long_name="icb calving into iceberg class" unit="kg/s" axis_ref="icbcla" /> 237 <field id="berg_stored_ice" long_name="icb accumulated ice mass by class" unit="kg" axis_ref="icbcla" /> 220 238 </field_group> 221 239 … … 
248 266 <field id="NH4" long_name="Ammonium Concentration" unit="mmol/m3" /> 249 267 268 <!-- PISCES with Kriest parametisation : variables available with key_kriest --> 269 <field id="Num" long_name="Number of organic particles" unit="nbr" /> 270 271 <!-- PISCES light : variables available with key_pisces_reduced --> 250 272 <field id="DET" long_name="Detritus" unit="mmol-N/m3" /> 251 273 <field id="DOM" long_name="Dissolved Organic Matter" unit="mmol-N/m3" /> 252 274 275 <!-- CFC11 : variables available with key_cfc --> 253 276 <field id="CFC11" long_name="CFC-11 Concentration" unit="umol/L" /> 277 <!-- Bomb C14 : variables available with key_c14b --> 254 278 <field id="C14B" long_name="Bomb C14 Concentration" unit="ration" /> 255 279 </field_group> 256 280 257 <!-- diad on T grid : variables available with key_diatrc --> 258 281 <!-- PISCES additional diagnostics on T grid --> 259 282 <field_group id="diad_T" grid_ref="grid_T_2D"> 260 283 <field id="PH" long_name="PH" unit="-" grid_ref="grid_T_3D" /> … … 310 333 <field id="Heup" long_name="Euphotic layer depth" unit="m" /> 311 334 <field id="Irondep" long_name="Iron deposition from dust" unit="mol/m2/s" /> 312 <field id="Ironsed" long_name="Iron deposition from sediment" unit="mol/m2/s" grid_ref="grid_T_3D" /> 313 314 <field id="FNO3PHY" long_name="FNO3PHY" unit="-" grid_ref="grid_T_3D" /> 315 <field id="FNH4PHY" long_name="FNH4PHY" unit="-" grid_ref="grid_T_3D" /> 316 <field id="FNH4NO3" long_name="FNH4NO3" unit="-" grid_ref="grid_T_3D" /> 317 <field id="TNO3PHY" long_name="TNO3PHY" unit="-" /> 318 <field id="TNH4PHY" long_name="TNH4PHY" unit="-" /> 319 <field id="TPHYDOM" long_name="TPHYDOM" unit="-" /> 320 <field id="TPHYNH4" long_name="TPHYNH4" unit="-" /> 321 <field id="TPHYZOO" long_name="TPHYZOO" unit="-" /> 322 <field id="TPHYDET" long_name="TPHYDET" unit="-" /> 323 <field id="TDETZOO" long_name="TDETZOO" unit="-" /> 324 <field id="TZOODET" long_name="TZOODET" unit="-" /> 325 <field id="TZOOBOD" 
long_name="TZOOBOD" unit="-" /> 326 <field id="TZOONH4" long_name="TZOONH4" unit="-" /> 327 <field id="TZOODOM" long_name="TZOODOM" unit="-" /> 328 <field id="TNH4NO3" long_name="TNH4NO3" unit="-" /> 329 <field id="TDOMNH4" long_name="TDOMNH4" unit="-" /> 330 <field id="TDETNH4" long_name="TDETNH4" unit="-" /> 331 <field id="TPHYTOT" long_name="TPHYTOT" unit="-" /> 332 <field id="TZOOTOT" long_name="TZOOTOT" unit="-" /> 333 <field id="SEDPOC" long_name="SEDPOC" unit="-" /> 334 <field id="TDETSED" long_name="TDETSED" unit="-" /> 335 336 <field id="qtrCFC11" long_name="Air-sea flux of CFC-11" unit="mol/m2/s" /> 337 <field id="qintCFC11" long_name="Cumulative air-sea flux of CFC-11" unit="mol/m2" /> 338 <field id="qtrC14b" long_name="Air-sea flux of Bomb C14" unit="mol/m2/s" /> 339 <field id="qintC14b" long_name="Cumulative air-sea flux of Bomb C14" unit="mol/m2" /> 340 <field id="fdecay" long_name="Radiactive decay of Bomb C14" unit="mol/m3" grid_ref="grid_T_3D" /> 335 <field id="Ironsed" long_name="Iron deposition from sediment" unit="mol/m2/s" grid_ref="grid_T_3D "/> 336 337 <!-- PISCES with Kriest parametisation : variables available with key_kriest --> 338 <field id="POCFlx" long_name="Particulate organic C flux" unit="mol/m2/s" grid_ref="grid_T_3D" /> 339 <field id="NumFlx" long_name="Particle number flux" unit="nbr/m2/s" grid_ref="grid_T_3D" /> 340 <field id="SiFlx" long_name="Biogenic Si flux" unit="mol/m2/s" grid_ref="grid_T_3D" /> 341 <field id="CaCO3Flx" long_name="CaCO3 flux" unit="mol/m2/s" grid_ref="grid_T_3D" /> 342 <field id="xnum" long_name="Number of particles in aggregats" unit="-" grid_ref="grid_T_3D" /> 343 <field id="W1" long_name="sinking speed of mass flux" unit="m2/s" grid_ref="grid_T_3D" /> 344 <field id="W2" long_name="sinking speed of number flux" unit="m2/s" grid_ref="grid_T_3D" /> 345 346 <!-- PISCES light : variables available with key_pisces_reduced --> 347 <field id="FNO3PHY" long_name="FNO3PHY" unit="-" grid_ref="grid_T_3D" /> 348 
<field id="FNH4PHY" long_name="FNH4PHY" unit="-" grid_ref="grid_T_3D" /> 349 <field id="FNH4NO3" long_name="FNH4NO3" unit="-" grid_ref="grid_T_3D" /> 350 <field id="TNO3PHY" long_name="TNO3PHY" unit="-" /> 351 <field id="TNH4PHY" long_name="TNH4PHY" unit="-" /> 352 <field id="TPHYDOM" long_name="TPHYDOM" unit="-" /> 353 <field id="TPHYNH4" long_name="TPHYNH4" unit="-" /> 354 <field id="TPHYZOO" long_name="TPHYZOO" unit="-" /> 355 <field id="TPHYDET" long_name="TPHYDET" unit="-" /> 356 <field id="TDETZOO" long_name="TDETZOO" unit="-" /> 357 <field id="TZOODET" long_name="TZOODET" unit="-" /> 358 <field id="TZOOBOD" long_name="TZOOBOD" unit="-" /> 359 <field id="TZOONH4" long_name="TZOONH4" unit="-" /> 360 <field id="TZOODOM" long_name="TZOODOM" unit="-" /> 361 <field id="TNH4NO3" long_name="TNH4NO3" unit="-" /> 362 <field id="TDOMNH4" long_name="TDOMNH4" unit="-" /> 363 <field id="TDETNH4" long_name="TDETNH4" unit="-" /> 364 <field id="TPHYTOT" long_name="TPHYTOT" unit="-" /> 365 <field id="TZOOTOT" long_name="TZOOTOT" unit="-" /> 366 <field id="SEDPOC" long_name="SEDPOC" unit="-" /> 367 <field id="TDETSED" long_name="TDETSED" unit="-" /> 368 369 <!-- CFC11 : variables available with key_cfc --> 370 <field id="qtrCFC11" long_name="Air-sea flux of CFC-11" unit="mol/m2/s" /> 371 <field id="qintCFC11" long_name="Cumulative air-sea flux of CFC-11" unit="mol/m2" /> 372 <!-- Bomb C14 : variables available with key_c14b --> 373 <field id="qtrC14b" long_name="Air-sea flux of Bomb C14" unit="mol/m2/s" /> 374 <field id="qintC14b" long_name="Cumulative air-sea flux of Bomb C14" unit="mol/m2" /> 375 <field id="fdecay" long_name="Radiactive decay of Bomb C14" unit="mol/m3" grid_ref="grid_T_3D" /> 341 376 </field_group> 342 377 -
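The fields declared in the new `icbvar` group of field_def.xml only reach a NetCDF file once a `<file>` element in iodef.xml requests them. A hedged sketch of such an entry (the file id and suffix are invented here; `berg_real_calving` is one of the per-class fields dimensioned on the new `icbcla` axis):

```xml
<file id="file_icb" name_suffix="_icbdia" description="iceberg diagnostics" >
  <field field_ref="berg_melt"         name="berg_melt"   />
  <field field_ref="berg_mass"         name="berg_mass"   />
  <field field_ref="berg_real_calving" name="calving_cla" /> <!-- uses axis icbcla -->
</file>
```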
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/cfg.txt
r3769 r4193
 1      AMM12               OPA_SRC
 2   1  GYRE                OPA_SRC
 3   2  GYRE_BFM            OPA_SRC TOP_SRC
 4   3  GYRE_PISCES         OPA_SRC TOP_SRC
 5   4  ORCA2_LIM3          OPA_SRC LIM_SRC_3
 6      ORCA2_LIM_PISCES    OPA_SRC LIM_SRC_2 NST_SRC TOP_SRC
 7      ORCA2_OFF_PISCES    OPA_SRC OFF_SRC TOP_SRC
 8   5  ORCA2_SAS_LIM       OPA_SRC SAS_SRC LIM_SRC_2 NST_SRC
 9   6  ORCA2_LIM_CFC_C14b  OPA_SRC LIM_SRC_2 NST_SRC TOP_SRC
     7  ORCA2_OFF_PISCES    OPA_SRC OFF_SRC TOP_SRC
     8  ORCA2_LIM_PISCES    OPA_SRC LIM_SRC_2 NST_SRC TOP_SRC
10   9  ORCA2_LIM           OPA_SRC LIM_SRC_2 NST_SRC
    10  AMM12               OPA_SRC
-
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/CONFIG/makenemo
r3764 r4193 106 106 export AGRIFUSE=10 107 107 declare -a TAB 108 list_key=0 109 chk_key=1 108 110 #- 109 111 #- FCM and functions location --- … … 112 114 #- 113 115 #- Choice of the options --- 114 while getopts :hd:n:r:m:j:e:s:v:t: V116 while getopts :hd:n:r:m:j:e:s:v:t:k: V 115 117 do 116 118 case $V in 117 119 (h) x_h=${OPTARG}; 118 120 echo "Usage : "${b_n} \ 119 " [-h] [-n name] [-m arch] [-d "dir1 dir2"] [-r conf] [-s Path] [-e Path] [-j No] [-v No] ";121 " [-h] [-n name] [-m arch] [-d "dir1 dir2"] [-r conf] [-s Path] [-e Path] [-j No] [-v No] [-k 0/1]"; 120 122 echo " -h : help"; 121 123 echo " -h institute : specific help for consortium members"; … … 128 130 echo " -j No : number of processes used to compile (0=nocompilation)"; 129 131 echo " -v No : set verbosity level for compilation [0-3]"; 132 echo " -k 0/1 : used cpp keys check (default = 1 -> check activated)"; 130 133 echo " -t dir : temporary directory for compilation" 131 134 echo ""; … … 141 144 echo "Example to clean "; 142 145 echo "./makenemo clean"; 146 echo ""; 147 echo "Example to list the available keys of a CONFIG "; 148 echo "./makenemo list_key"; 143 149 echo ""; 144 150 echo "Example to add and remove keys"; … … 158 164 (j) x_j=${OPTARG};; 159 165 (t) x_t=${OPTARG};; 160 (e) x_e=${OPTARG};; 161 (s) x_s=${OPTARG};; 162 (v) x_v=${OPTARG};; 166 (e) x_e=${OPTARG};; 167 (s) x_s=${OPTARG};; 168 (v) x_v=${OPTARG};; 169 (k) chk_key=${OPTARG};; 163 170 (:) echo ${b_n}" : -"${OPTARG}" option : missing value" 1>&2; 164 171 exit 2;; … … 188 195 export ${list_del_key} 189 196 shift 197 ;; 198 list_key) 199 list_key=1 190 200 ;; 191 201 *) … … 222 232 [ "${CMP_NAM}" == help ] && . ${COMPIL_DIR}/Flist_archfile.sh all && exit 223 233 224 #- When used for the first time, choose a compiler ---225 . ${COMPIL_DIR}/Fcheck_archfile.sh arch_nemo.fcm ${CMP_NAM} || exit226 227 234 #- 228 235 #- Choose a default configuration if needed --- 229 236 #- ORCA2_LIM or last one used --- 230 237 . 
${COMPIL_DIR}/Fcheck_config.sh cfg.txt ${NEW_CONF} || exit 231 232 238 233 239 if [ ${#NEW_CONF} -eq 0 ] ; then … … 269 275 . ${COMPIL_DIR}/Fmake_bld.sh ${CONFIG_DIR} ${NEW_CONF} ${NEMO_TDIR} || exit 270 276 277 # build the complete list of the cpp keys of this configuration 278 if [ $chk_key -eq 1 ] ; then 279 for i in $( grep "^ *#.* key_" ${NEW_CONF}/WORK/* ) 280 do 281 echo $i | grep key_ | sed -e "s/=.*//" 282 done | sort -d | uniq > ${COMPIL_DIR}/full_key_list.txt 283 if [ $list_key -eq 1 ]; then 284 cat ${COMPIL_DIR}/full_key_list.txt 285 exit 0 286 fi 287 fi 288 271 289 #- At this stage new configuration has been added, 272 290 #- We add or remove keys … … 278 296 . ${COMPIL_DIR}/Fdel_keys.sh ${NEW_CONF} del_key ${list_del_key} 279 297 fi 298 299 #- check that all keys are really existing... 300 if [ $chk_key -eq 1 ] ; then 301 for kk in $( cat ${NEW_CONF}/cpp_${NEW_CONF}.fcm ) 302 do 303 if [ "$( echo $kk | cut -c 1-4 )" == "key_" ]; then 304 kk=${kk/=*/} 305 nb=$( grep -c $kk ${COMPIL_DIR}/full_key_list.txt ) 306 if [ $nb -eq 0 ]; then 307 echo 308 echo "E R R O R : key "$kk" is not found in ${NEW_CONF}/WORK routines..." 309 echo "we stop..." 310 echo 311 exit 1 312 fi 313 fi 314 done 315 fi 316 317 #- At this stage cpp keys have been updated. we can check the arch file 318 #- When used for the first time, choose a compiler --- 319 . ${COMPIL_DIR}/Fcheck_archfile.sh arch_nemo.fcm cpp.fcm ${CMP_NAM} || exit 280 320 281 321 #- At this stage the configuration has beeen chosen … … 317 357 rm -rf ${NEMO_TDIR}/${NEW_CONF}/BLD 318 358 rm -rf ${NEMO_TDIR}/${NEW_CONF}/EXP00/opa 359 rm -f ${COMPIL_DIR}/*history ${COMPIL_DIR}/*fcm ${COMPIL_DIR}/*txt 319 360 echo "cleaning ${NEW_CONF} WORK, BLD" 320 361 fi -
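The cpp-key consistency check added to makenemo above is essentially two small shell pipelines: harvest every `key_*` token referenced in the sources, then verify each key requested by a configuration against that list. A self-contained sketch of the same idea (the demo_src/ tree and demo_keys.fcm file are invented for this example; makenemo itself works on ${NEW_CONF}/WORK and cpp_${NEW_CONF}.fcm):

```shell
#!/bin/sh
# Sketch of makenemo's new cpp-key check, on a made-up mini source tree.
mkdir -p demo_src
cat > demo_src/mod1.F90 <<'EOF'
#if defined key_lim2
   ! LIM2 sea-ice code
#endif
EOF
cat > demo_src/mod2.F90 <<'EOF'
#if defined key_top && defined key_lim2
   ! passive-tracer code
#endif
EOF
echo "key_lim2 key_top" > demo_keys.fcm

# step 1: same grep/sed/sort/uniq chain as makenemo, one token per line
grep -h "^ *#.* key_" demo_src/* \
    | tr ' ' '\n' | grep '^key_' | sed -e 's/=.*//' \
    | sort -d | uniq > full_key_list.txt

# step 2: every key_* entry of the configuration must exist in the sources
for kk in $(cat demo_keys.fcm)
do
    case $kk in
        key_*)
            if ! grep -q "${kk%%=*}" full_key_list.txt ; then
                echo "E R R O R : key $kk is not found in the sources"
                exit 1
            fi ;;
    esac
done
echo "all keys OK"
```

The real script additionally honours the new `-k 0` switch to bypass the check and the `list_key` action to print the harvested list.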
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/EXTERNAL/AGRIF/LIB/SubLoopCreation.c
r2715 r4193 102 102 if ( mark == 1 ) fprintf(fortranout,"!!! aaaaaaaaaaaaaaa \n"); 103 103 WriteLocalParamDeclaration(); 104 if ( mark == 1 ) fprintf(fortranout,"!!! bbbbbbbbbbbbbbb \n"); 104 if ( mark == 1 ) fprintf(fortranout,"!!! bbbbbbbbbbbbbbb \n"); 105 WriteArgumentDeclaration_beforecall(); 106 if ( mark == 1 ) fprintf(fortranout,"!!! bbbbbbccccccccc \n"); 105 107 if ( functiondeclarationisdone == 0 ) WriteFunctionDeclaration(1); 106 if ( mark == 1 ) fprintf(fortranout,"!!! bbbbbbccccccccc \n");107 WriteArgumentDeclaration_beforecall();108 108 /* writesub_loopdeclaration_scalar(List_SubroutineArgument_Var,fortranout); 109 109 writesub_loopdeclaration_tab(List_SubroutineArgument_Var,fortranout);*/ … … 405 405 406 406 AddUseAgrifUtilBeforeCall_0(fortranout); 407 WriteArgumentDeclaration_beforecall(); 407 408 if ( functiondeclarationisdone == 0 ) WriteFunctionDeclaration(0); 408 WriteArgumentDeclaration_beforecall();409 409 if ( !strcasecmp(subofagrifinitgrids,subroutinename) ) 410 410 fprintf(oldfortranout," Call Agrif_Init_Grids () \n"); … … 462 462 " IMPLICIT NONE\n"); 463 463 WriteLocalParamDeclaration(); 464 WriteArgumentDeclaration_beforecall(); 464 465 if ( functiondeclarationisdone == 0 ) WriteFunctionDeclaration(0); 465 WriteArgumentDeclaration_beforecall();466 466 WriteSubroutineDeclaration(0); 467 467 if ( !strcasecmp(subofagrifinitgrids,subroutinename) ) -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/LIM_SRC_2/limsbc_2.F90
r3625 r4193 217 217 zemp_snw = rdm_snw(ji,jj) * r1_rdtice ! snow melting = pure water that enters the ocean 218 218 zfmm = rdm_ice(ji,jj) * r1_rdtice ! Freezing minus Melting (F-M) 219 220 fmmflx(ji,jj) = zfmm ! F/M mass flux saved (at least) for the biogeochemical model 219 221 220 222 ! salt flux at the ice/ocean interface (sea ice fraction) [PSU*kg/m2/s] -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/LIM_SRC_3/limsbc.F90
r3625 r4193 226 226 zemp_snw = rdm_snw(ji,jj) * r1_rdtice ! snow melting = pure water that enters the ocean 227 227 zfmm = rdm_ice(ji,jj) * r1_rdtice ! Freezing minus melting 228 229 fmmflx(ji,jj) = zfmm ! F/M mass flux saved (at least) for the biogeochemical model 228 230 229 231 emp(ji,jj) = zemp + zemp_snw + zfmm ! mass flux + F/M mass flux (always ice/ocean mass exchange) -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/NST_SRC/agrif_opa_sponge.F90
r3698 r4193 185 185 INTEGER :: ji,jj,jk 186 186 INTEGER :: ispongearea, ilci, ilcj 187 REAL(wp) :: z1spongearea 188 REAL(wp), POINTER, DIMENSION(:,:) :: zlocalviscsponge 187 LOGICAL :: ll_spdone 188 REAL(wp) :: z1spongearea, zramp 189 REAL(wp), POINTER, DIMENSION(:,:) :: ztabramp 189 190 190 191 #if defined SPONGE || defined SPONGE_TOP 191 192 CALL wrk_alloc( jpi, jpj, zlocalviscsponge ) 193 194 ispongearea = 2 + 2 * Agrif_irhox() 195 ilci = nlci - ispongearea 196 ilcj = nlcj - ispongearea 197 z1spongearea = 1._wp / REAL( ispongearea - 2 ) 198 spbtr2(:,:) = 1. / ( e1t(:,:) * e2t(:,:) ) 192 ll_spdone=.TRUE. 193 IF (( .NOT. spongedoneT ).OR.( .NOT. spongedoneU )) THEN 194 ! Define ramp from boundaries towards domain interior 195 ! at T-points 196 ! Store it in ztabramp 197 ll_spdone=.FALSE. 198 199 CALL wrk_alloc( jpi, jpj, ztabramp ) 200 201 ispongearea = 2 + 2 * Agrif_irhox() 202 ilci = nlci - ispongearea 203 ilcj = nlcj - ispongearea 204 z1spongearea = 1._wp / REAL( ispongearea - 2 ) 205 spbtr2(:,:) = 1. / ( e1t(:,:) * e2t(:,:) ) 206 207 ztabramp(:,:) = 0. 208 209 IF( (nbondi == -1) .OR. (nbondi == 2) ) THEN 210 DO jj = 1, jpj 211 IF ( umask(2,jj,1) == 1._wp ) THEN 212 DO ji = 2, ispongearea 213 ztabramp(ji,jj) = ( ispongearea-ji ) * z1spongearea 214 END DO 215 ENDIF 216 ENDDO 217 ENDIF 218 219 IF( (nbondi == 1) .OR. (nbondi == 2) ) THEN 220 DO jj = 1, jpj 221 IF ( umask(nlci-2,jj,1) == 1._wp ) THEN 222 DO ji = ilci+1,nlci-1 223 zramp = (ji - (ilci+1) ) * z1spongearea 224 ztabramp(ji,jj) = MAX( ztabramp(ji,jj), zramp ) 225 ENDDO 226 ENDIF 227 ENDDO 228 ENDIF 229 230 IF( (nbondj == -1) .OR. (nbondj == 2) ) THEN 231 DO ji = 1, jpi 232 IF ( vmask(ji,2,1) == 1._wp ) THEN 233 DO jj = 2, ispongearea 234 zramp = ( ispongearea-jj ) * z1spongearea 235 ztabramp(ji,jj) = MAX( ztabramp(ji,jj), zramp ) 236 END DO 237 ENDIF 238 ENDDO 239 ENDIF 240 241 IF( (nbondj == 1) .OR. 
(nbondj == 2) ) THEN 242 DO ji = 1, jpi 243 IF ( vmask(ji,nlcj-2,1) == 1._wp ) THEN 244 DO jj = ilcj+1,nlcj-1 245 zramp = (jj - (ilcj+1) ) * z1spongearea 246 ztabramp(ji,jj) = MAX( ztabramp(ji,jj), zramp ) 247 END DO 248 ENDIF 249 ENDDO 250 ENDIF 251 252 ENDIF 199 253 200 254 ! Tracers 201 255 IF( .NOT. spongedoneT ) THEN 202 zlocalviscsponge(:,:) = 0.203 256 spe1ur(:,:) = 0. 204 257 spe2vr(:,:) = 0. 205 258 206 259 IF( (nbondi == -1) .OR. (nbondi == 2) ) THEN 207 DO ji = 2, ispongearea208 zlocalviscsponge(ji,:) = visc_tra * ( ispongearea-ji ) * z1spongearea209 ENDDO210 spe1ur(2:ispongearea-1,: ) = 0.5 * ( zlocalviscsponge(2:ispongearea-1,: ) &211 & + zlocalviscsponge(3:ispongearea ,: ) ) & 212 & * e2u(2:ispongearea-1,: ) / e1u(2:ispongearea-1,: )213 spe2vr(2:ispongearea ,1:jpjm1) = 0.5 * ( zlocalviscsponge(2:ispongearea ,1:jpjm1) &214 & + zlocalviscsponge(2:ispongearea,2 :jpj ) ) &215 & * e1v(2:ispongearea ,1:jpjm1) / e2v(2:ispongearea,1:jpjm1)260 spe1ur(2:ispongearea-1,: ) = visc_tra & 261 & * 0.5 * ( ztabramp(2:ispongearea-1,: ) & 262 & + ztabramp(3:ispongearea ,: ) ) & 263 & * e2u(2:ispongearea-1,:) / e1u(2:ispongearea-1,:) 264 265 spe2vr(2:ispongearea ,1:jpjm1 ) = visc_tra & 266 & * 0.5 * ( ztabramp(2:ispongearea ,1:jpjm1) & 267 & + ztabramp(2:ispongearea,2 :jpj ) ) & 268 & * e1v(2:ispongearea,1:jpjm1) / e2v(2:ispongearea,1:jpjm1) 216 269 ENDIF 217 270 218 271 IF( (nbondi == 1) .OR. 
(nbondi == 2) ) THEN 219 DO ji = ilci+1,nlci-1 220 zlocalviscsponge(ji,:) = visc_tra * (ji - (ilci+1) ) * z1spongearea 221 ENDDO 222 223 spe1ur(ilci+1:nlci-2,: ) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-2,:) & 224 & + zlocalviscsponge(ilci+2:nlci-1,:) ) & 225 & * e2u(ilci+1:nlci-2,:) / e1u(ilci+1:nlci-2,:) 226 227 spe2vr(ilci+1:nlci-1,1:jpjm1) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-1,1:jpjm1) & 228 & + zlocalviscsponge(ilci+1:nlci-1,2:jpj ) ) & 229 & * e1v(ilci+1:nlci-1,1:jpjm1) / e2v(ilci+1:nlci-1,1:jpjm1) 272 spe1ur(ilci+1:nlci-2,: ) = visc_tra & 273 & * 0.5 * ( ztabramp(ilci+1:nlci-2,: ) & 274 & + ztabramp(ilci+2:nlci-1,: ) ) & 275 & * e2u(ilci+1:nlci-2,:) / e1u(ilci+1:nlci-2,:) 276 277 spe2vr(ilci+1:nlci-1,1:jpjm1 ) = visc_tra & 278 & * 0.5 * ( ztabramp(ilci+1:nlci-1,1:jpjm1) & 279 & + ztabramp(ilci+1:nlci-1,2:jpj ) ) & 280 & * e1v(ilci+1:nlci-1,1:jpjm1) / e2v(ilci+1:nlci-1,1:jpjm1) 230 281 ENDIF 231 282 232 283 IF( (nbondj == -1) .OR. (nbondj == 2) ) THEN 233 DO jj = 2, ispongearea 234 zlocalviscsponge(:,jj) = visc_tra * ( ispongearea-jj ) * z1spongearea 235 ENDDO 236 spe1ur(1:jpim1,2:ispongearea ) = 0.5 * ( zlocalviscsponge(1:jpim1,2:ispongearea ) & 237 & + zlocalviscsponge(2:jpi ,2:ispongearea) ) & 284 spe1ur(1:jpim1,2:ispongearea ) = visc_tra & 285 & * 0.5 * ( ztabramp(1:jpim1,2:ispongearea ) & 286 & + ztabramp(2:jpi ,2:ispongearea ) ) & 238 287 & * e2u(1:jpim1,2:ispongearea) / e1u(1:jpim1,2:ispongearea) 239 288 240 spe2vr(: ,2:ispongearea-1) = 0.5 * ( zlocalviscsponge(:,2:ispongearea-1) & 241 & + zlocalviscsponge(:,3:ispongearea ) ) & 289 spe2vr(: ,2:ispongearea-1) = visc_tra & 290 & * 0.5 * ( ztabramp(: ,2:ispongearea-1) & 291 & + ztabramp(: ,3:ispongearea ) ) & 242 292 & * e1v(:,2:ispongearea-1) / e2v(:,2:ispongearea-1) 243 293 ENDIF 244 294 245 295 IF( (nbondj == 1) .OR. 
(nbondj == 2) ) THEN 246 DO jj = ilcj+1,nlcj-1 247 zlocalviscsponge(:,jj) = visc_tra * (jj - (ilcj+1) ) * z1spongearea 248 ENDDO 249 spe1ur(1:jpim1,ilcj+1:nlcj-1) = 0.5 * ( zlocalviscsponge(1:jpim1,ilcj+1:nlcj-1) & 250 & + zlocalviscsponge(2:jpi ,ilcj+1:nlcj-1) ) & 296 spe1ur(1:jpim1,ilcj+1:nlcj-1) = visc_tra & 297 & * 0.5 * ( ztabramp(1:jpim1,ilcj+1:nlcj-1) & 298 & + ztabramp(2:jpi ,ilcj+1:nlcj-1) ) & 251 299 & * e2u(1:jpim1,ilcj+1:nlcj-1) / e1u(1:jpim1,ilcj+1:nlcj-1) 252 spe2vr(: ,ilcj+1:nlcj-2) = 0.5 * ( zlocalviscsponge(:,ilcj+1:nlcj-2 ) & 253 & + zlocalviscsponge(:,ilcj+2:nlcj-1) ) & 300 301 spe2vr(: ,ilcj+1:nlcj-2) = visc_tra & 302 & * 0.5 * ( ztabramp(: ,ilcj+1:nlcj-2) & 303 & + ztabramp(: ,ilcj+2:nlcj-1) ) & 254 304 & * e1v(:,ilcj+1:nlcj-2) / e2v(:,ilcj+1:nlcj-2) 255 305 ENDIF … … 259 309 ! Dynamics 260 310 IF( .NOT. spongedoneU ) THEN 261 zlocalviscsponge(:,:) = 0.262 311 spe1ur2(:,:) = 0. 263 312 spe2vr2(:,:) = 0. 264 313 265 314 IF( (nbondi == -1) .OR. (nbondi == 2) ) THEN 266 DO ji = 2, ispongearea 267 zlocalviscsponge(ji,:) = visc_dyn * ( ispongearea-ji ) * z1spongearea 268 ENDDO 269 spe1ur2(2:ispongearea-1,: ) = 0.5 * ( zlocalviscsponge(2:ispongearea-1,: ) & 270 & + zlocalviscsponge(3:ispongearea,: ) ) 271 spe2vr2(2:ispongearea ,1:jpjm1) = 0.5 * ( zlocalviscsponge(2:ispongearea ,1:jpjm1) & 272 & + zlocalviscsponge(2:ispongearea,2:jpj) ) 315 spe1ur2(2:ispongearea-1,: ) = visc_dyn & 316 & * 0.5 * ( ztabramp(2:ispongearea-1,: ) & 317 & + ztabramp(3:ispongearea ,: ) ) 318 spe2vr2(2:ispongearea ,1:jpjm1) = visc_dyn & 319 & * 0.5 * ( ztabramp(2:ispongearea ,1:jpjm1) & 320 & + ztabramp(2:ispongearea ,2:jpj ) ) 273 321 ENDIF 274 322 275 323 IF( (nbondi == 1) .OR. 
(nbondi == 2) ) THEN 276 DO ji = ilci+1,nlci-1 277 zlocalviscsponge(ji,:) = visc_dyn * (ji - (ilci+1) ) * z1spongearea 278 ENDDO 279 spe1ur2(ilci+1:nlci-2,: ) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-2,:) & 280 & + zlocalviscsponge(ilci+2:nlci-1,:) ) 281 spe2vr2(ilci+1:nlci-1,1:jpjm1) = 0.5 * ( zlocalviscsponge(ilci+1:nlci-1,1:jpjm1) & 282 & + zlocalviscsponge(ilci+1:nlci-1,2:jpj ) ) 324 spe1ur2(ilci+1:nlci-2 ,: ) = visc_dyn & 325 & * 0.5 * ( ztabramp(ilci+1:nlci-2, : ) & 326 & + ztabramp(ilci+2:nlci-1, : ) ) 327 spe2vr2(ilci+1:nlci-1 ,1:jpjm1) = visc_dyn & 328 & * 0.5 * ( ztabramp(ilci+1:nlci-1,1:jpjm1 ) & 329 & + ztabramp(ilci+1:nlci-1,2:jpj ) ) 283 330 ENDIF 284 331 285 332 IF( (nbondj == -1) .OR. (nbondj == 2) ) THEN 286 DO jj = 2, ispongearea 287 zlocalviscsponge(:,jj) = visc_dyn * ( ispongearea-jj ) * z1spongearea 288 ENDDO 289 spe1ur2(1:jpim1,2:ispongearea ) = 0.5 * ( zlocalviscsponge(1:jpim1,2:ispongearea) & 290 & + zlocalviscsponge(2:jpi,2:ispongearea) ) 291 spe2vr2(: ,2:ispongearea-1) = 0.5 * ( zlocalviscsponge(:,2:ispongearea-1) & 292 & + zlocalviscsponge(:,3:ispongearea) ) 333 spe1ur2(1:jpim1,2:ispongearea ) = visc_dyn & 334 & * 0.5 * ( ztabramp(1:jpim1,2:ispongearea ) & 335 & + ztabramp(2:jpi ,2:ispongearea ) ) 336 spe2vr2(: ,2:ispongearea-1) = visc_dyn & 337 & * 0.5 * ( ztabramp(: ,2:ispongearea-1) & 338 & + ztabramp(: ,3:ispongearea ) ) 293 339 ENDIF 294 340 295 341 IF( (nbondj == 1) .OR. 
(nbondj == 2) ) THEN 296 DO jj = ilcj+1,nlcj-1 297 zlocalviscsponge(:,jj) = visc_dyn * (jj - (ilcj+1) ) * z1spongearea 298 ENDDO 299 spe1ur2(1:jpim1,ilcj+1:nlcj-1) = 0.5 * ( zlocalviscsponge(1:jpim1,ilcj+1:nlcj-1) & 300 & + zlocalviscsponge(2:jpi,ilcj+1:nlcj-1) ) 301 spe2vr2(: ,ilcj+1:nlcj-2) = 0.5 * ( zlocalviscsponge(:,ilcj+1:nlcj-2 ) & 302 & + zlocalviscsponge(:,ilcj+2:nlcj-1) ) 342 spe1ur2(1:jpim1,ilcj+1:nlcj-1 ) = visc_dyn & 343 & * 0.5 * ( ztabramp(1:jpim1,ilcj+1:nlcj-1 ) & 344 & + ztabramp(2:jpi ,ilcj+1:nlcj-1 ) ) 345 spe2vr2(: ,ilcj+1:nlcj-2 ) = visc_dyn & 346 & * 0.5 * ( ztabramp(: ,ilcj+1:nlcj-2 ) & 347 & + ztabramp(: ,ilcj+2:nlcj-1 ) ) 303 348 ENDIF 304 349 spongedoneU = .TRUE. … … 306 351 ENDIF 307 352 ! 308 CALL wrk_dealloc( jpi, jpj, zlocalviscsponge)353 IF (.NOT.ll_spdone) CALL wrk_dealloc( jpi, jpj, ztabramp ) 309 354 ! 310 355 #endif -
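The rewritten sponge above replaces the per-direction zlocalviscsponge arrays by a single T-point ramp ztabramp, computed once and reused for tracers and dynamics: it decreases linearly from 1 next to the open boundary to 0 at the inner edge of the sponge, and overlapping boundaries are merged with MAX. A small numpy sketch of that ramp construction (western and southern boundaries only; land masking and the Agrif index arithmetic are omitted):

```python
import numpy as np

def sponge_ramp(nx, ny, ispongearea):
    """Illustrative T-point ramp like ztabramp: linear decrease from the
    boundary into the interior; boundaries combined with an element-wise
    max, as in the corner treatment of the new code."""
    z1 = 1.0 / (ispongearea - 2)           # z1spongearea
    ramp = np.zeros((nx, ny))
    for i in range(1, ispongearea):        # western boundary (Fortran ji = 2, ispongearea)
        ramp[i, :] = np.maximum(ramp[i, :], (ispongearea - (i + 1)) * z1)
    for j in range(1, ispongearea):        # southern boundary (Fortran jj = 2, ispongearea)
        ramp[:, j] = np.maximum(ramp[:, j], (ispongearea - (j + 1)) * z1)
    return ramp
```

The sponge coefficients spe1ur/spe2vr are then just visc_tra (or visc_dyn) times the ramp averaged onto U/V points.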
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OFF_SRC/dtadyn.F90
r3827 r4193 72 72 INTEGER , SAVE :: jf_eiv ! index of v-eiv 73 73 INTEGER , SAVE :: jf_eiw ! index of w-eiv 74 INTEGER , SAVE :: jf_ sfx! index of downward salt flux74 INTEGER , SAVE :: jf_fmf ! index of downward salt flux 75 75 76 76 TYPE(FLD), ALLOCATABLE, DIMENSION(:) :: sf_dyn ! structure of input fields (file informations, fields read) … … 254 254 wndm(:,:) = sf_dyn(jf_wnd)%fnow(:,:,1) * tmask(:,:,1) ! wind speed - needed for gas exchange 255 255 emp (:,:) = sf_dyn(jf_emp)%fnow(:,:,1) * tmask(:,:,1) ! E-P 256 sfx (:,:) = 0.0_wp ! enable testing with old inputs ! downward salt flux 257 ! sfx (:,:) = sf_dyn(jf_sfx)%fnow(:,:,1) * tmask(:,:,1) ! downward salt flux (v3.5+) 256 fmmflx(:,:) = sf_dyn(jf_fmf)%fnow(:,:,1) * tmask(:,:,1) ! downward salt flux (v3.5+) 258 257 fr_i(:,:) = sf_dyn(jf_ice)%fnow(:,:,1) * tmask(:,:,1) ! Sea-ice fraction 259 258 qsr (:,:) = sf_dyn(jf_qsr)%fnow(:,:,1) * tmask(:,:,1) ! solar radiation … … 302 301 CALL prt_ctl(tab2d_1=fr_i , clinfo1=' fr_i - : ', mask1=tmask, ovlap=1 ) 303 302 CALL prt_ctl(tab2d_1=hmld , clinfo1=' hmld - : ', mask1=tmask, ovlap=1 ) 304 CALL prt_ctl(tab2d_1= sfx , clinfo1=' sfx- : ', mask1=tmask, ovlap=1 )303 CALL prt_ctl(tab2d_1=fmmflx , clinfo1=' fmmflx - : ', mask1=tmask, ovlap=1 ) 305 304 CALL prt_ctl(tab2d_1=emp , clinfo1=' emp - : ', mask1=tmask, ovlap=1 ) 306 305 CALL prt_ctl(tab2d_1=wndm , clinfo1=' wspd - : ', mask1=tmask, ovlap=1 ) … … 331 330 TYPE(FLD_N) :: sn_tem, sn_sal, sn_mld, sn_emp, sn_ice, sn_qsr, sn_wnd ! informations about the fields to be read 332 331 TYPE(FLD_N) :: sn_uwd, sn_vwd, sn_wwd, sn_avt, sn_ubl, sn_vbl ! " " 333 TYPE(FLD_N) :: sn_ahu, sn_ahv, sn_ahw, sn_eiu, sn_eiv, sn_eiw, sn_ sfx! " "332 TYPE(FLD_N) :: sn_ahu, sn_ahv, sn_ahw, sn_eiu, sn_eiv, sn_eiw, sn_fmf ! " " 334 333 ! 
335 334 NAMELIST/namdta_dyn/cn_dir, ln_dynwzv, ln_dynbbl, ln_degrad, & 336 335 & sn_tem, sn_sal, sn_mld, sn_emp, sn_ice, sn_qsr, sn_wnd, & 337 336 & sn_uwd, sn_vwd, sn_wwd, sn_avt, sn_ubl, sn_vbl, & 338 & sn_ahu, sn_ahv, sn_ahw, sn_eiu, sn_eiv, sn_eiw, sn_ sfx337 & sn_ahu, sn_ahv, sn_ahw, sn_eiu, sn_eiv, sn_eiw, sn_fmf 339 338 340 339 !!---------------------------------------------------------------------- … … 349 348 sn_mld = FLD_N( 'dyna_grid_T' , 120 , 'somixght' , .true. , .true. , 'yearly' , '' , '' ) 350 349 sn_emp = FLD_N( 'dyna_grid_T' , 120 , 'sowaflup' , .true. , .true. , 'yearly' , '' , '' ) 351 sn_sfx = FLD_N( 'dyna_grid_T' , 120 , 'sowaflcd' , .true. , .true. , 'yearly' , '' , '' ) 352 !! sn_sfx = FLD_N( 'dyna_grid_T' , 120 , 'sosfldow' , .true. , .true. , 'yearly' , '' , '' ) ! v3.5+ 350 sn_fmf = FLD_N( 'dyna_grid_T' , 120 , 'sofmflup' , .true. , .true. , 'yearly' , '' , '' ) 353 351 sn_ice = FLD_N( 'dyna_grid_T' , 120 , 'soicecov' , .true. , .true. , 'yearly' , '' , '' ) 354 352 sn_qsr = FLD_N( 'dyna_grid_T' , 120 , 'soshfldo' , .true. , .true. , 'yearly' , '' , '' ) … … 391 389 ENDIF 392 390 393 jf_tem = 1 ; jf_sal = 2 ; jf_mld = 3 ; jf_emp = 4 ; jf_ sfx= 5 ; jf_ice = 6 ; jf_qsr = 7391 jf_tem = 1 ; jf_sal = 2 ; jf_mld = 3 ; jf_emp = 4 ; jf_fmf = 5 ; jf_ice = 6 ; jf_qsr = 7 394 392 jf_wnd = 8 ; jf_uwd = 9 ; jf_vwd = 10 ; jf_wwd = 11 ; jf_avt = 12 ; jfld = 12 395 393 ! 396 394 slf_d(jf_tem) = sn_tem ; slf_d(jf_sal) = sn_sal ; slf_d(jf_mld) = sn_mld 397 slf_d(jf_emp) = sn_emp ; slf_d(jf_ sfx ) = sn_sfx; slf_d(jf_ice) = sn_ice395 slf_d(jf_emp) = sn_emp ; slf_d(jf_fmf ) = sn_fmf ; slf_d(jf_ice) = sn_ice 398 396 slf_d(jf_qsr) = sn_qsr ; slf_d(jf_wnd) = sn_wnd ; slf_d(jf_avt) = sn_avt 399 397 slf_d(jf_uwd) = sn_uwd ; slf_d(jf_vwd) = sn_vwd ; slf_d(jf_wwd) = sn_wwd … … 429 427 ENDIF 430 428 ENDIF 431 ! Salt flux and concntration/dilution terms (new from v3.5) !! disabled to allow testing with old input files432 !! jf_sfx = jfld + 1 ; jfld = jfld + 1433 !! 
slf_d(jf_sfx) = sn_sfx 434 429 435 430 ALLOCATE( sf_dyn(jfld), STAT=ierr ) ! set sf structure -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/ASM/asminc.F90
r3785 r4193 682 682 ! used to prevent the applied increments taking the temperature below the local freezing point 683 683 684 #if defined key_cice 685 fzptnz(:,:,:) = -1.8_wp 686 #else 687 DO jk = 1, jpk 688 DO jj = 1, jpj 689 DO ji = 1, jpk 690 fzptnz (ji,jj,jk) = ( -0.0575_wp + 1.710523e-3_wp * SQRT( tsn(ji,jj,jk,jp_sal) ) & 691 - 2.154996e-4_wp * tsn(ji,jj,jk,jp_sal) ) * tsn(ji,jj,jk,jp_sal) & 692 - 7.53e-4_wp * fsdepw(ji,jj,jk) ! (pressure in dbar) 693 END DO 694 END DO 695 END DO 696 #endif 684 DO jk=1, jpkm1 685 fzptnz (:,:,jk) = tfreez( tsn(:,:,jk,jp_sal), fsdept(:,:,jk) ) 686 ENDDO 697 687 698 688 IF ( ln_asmiau ) THEN -
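The asminc change replaces the inline freezing-point polynomial with a level-by-level call to tfreez(). For reference, the deleted inline formula (salinity in psu, depth used as pressure in dbar) can be written as a small Python function; this reproduces the removed code, not necessarily tfreez()'s exact expression:

```python
from math import sqrt

def fzpt(s, z):
    """Freezing point of seawater (deg C) as in the inline formula the
    changeset removes: s = salinity (psu), z = depth (dbar)."""
    return (-0.0575 + 1.710523e-3 * sqrt(s) - 2.154996e-4 * s) * s \
           - 7.53e-4 * z
```

At S = 35 psu and the surface this gives about -1.92 degC, the value the key_cice branch approximated with a constant -1.8.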
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/BDY/bdydyn.F90
r3970 r4193 85 85 pu2d(:,:) = 0.e0 86 86 pv2d(:,:) = 0.e0 87 ! bg jchanut tschanges (not specifically related to ts; this is a bug) 87 88 88 IF (lk_vvl) THEN 89 DO jk = 1, jpkm1 !! Vertically integrated momentum trends89 DO jk = 1, jpkm1 90 90 pu2d(:,:) = pu2d(:,:) + fse3u_a(:,:,jk) * umask(:,:,jk) * ua(:,:,jk) 91 91 pv2d(:,:) = pv2d(:,:) + fse3v_a(:,:,jk) * vmask(:,:,jk) * va(:,:,jk) … … 93 93 pu2d(:,:) = pu2d(:,:) / ( hu_0(:,:) + sshu_a(:,:) + 1._wp - umask(:,:,1) ) 94 94 pv2d(:,:) = pv2d(:,:) / ( hv_0(:,:) + sshv_a(:,:) + 1._wp - vmask(:,:,1) ) 95 ! end jchanut tschanges96 95 ELSE 97 DO jk = 1, jpkm1 !! Vertically integrated momentum trends96 DO jk = 1, jpkm1 98 97 pu2d(:,:) = pu2d(:,:) + fse3u(:,:,jk) * umask(:,:,jk) * ua(:,:,jk) 99 98 pv2d(:,:) = pv2d(:,:) + fse3v(:,:,jk) * vmask(:,:,jk) * va(:,:,jk) -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/BDY/bdyice_lim2.F90
- Property svn:keywords set to Id
r3680 r4193 32 32 !!---------------------------------------------------------------------- 33 33 !! NEMO/OPA 3.3 , NEMO Consortium (2010) 34 !! $Id : bdyice.F90 2715 2011-03-30 15:58:35Z rblod$34 !! $Id$ 35 35 !! Software governed by the CeCILL licence (NEMOGCM/NEMO_CeCILL.txt) 36 36 !!---------------------------------------------------------------------- … … 76 76 INTEGER, INTENT(in) :: ib_bdy ! BDY set index 77 77 !! 78 INTEGER :: jb, j k, jgrd ! dummy loop indices78 INTEGER :: jb, jgrd ! dummy loop indices 79 79 INTEGER :: ii, ij ! local scalar 80 80 REAL(wp) :: zwgt, zwgt1 ! local scalar … … 86 86 ! 87 87 DO jb = 1, idx%nblen(jgrd) 88 DO jk = 1, jpkm189 88 ii = idx%nbi(jb,jgrd) 90 89 ij = idx%nbj(jb,jgrd) … … 94 93 hicif(ii,ij) = ( hicif(ii,ij) * zwgt1 + dta%hicif(jb) * zwgt ) * tmask(ii,ij,1) ! Ice depth 95 94 hsnif(ii,ij) = ( hsnif(ii,ij) * zwgt1 + dta%hsnif(jb) * zwgt ) * tmask(ii,ij,1) ! Snow depth 96 END DO97 95 END DO 98 96 CALL lbc_bdy_lnk( frld, 'T', 1., ib_bdy ) ! lateral boundary conditions -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/BDY/bdyini.F90
r3970 r4193 1067 1067 1068 1068 bdytmask(:,:) = tmask(:,:,1) 1069 IF( .not. ln_mask_file ) THEN 1070 ! If .not. ln_mask_file then we need to derive mask on U and V grid 1071 ! from mask on T grid here. 1072 bdyumask(:,:) = 0.e0 1073 bdyvmask(:,:) = 0.e0 1074 DO ij=1, jpjm1 1075 DO ii=1, jpim1 1076 bdyumask(ii,ij)=bdytmask(ii,ij)*bdytmask(ii+1, ij ) 1077 bdyvmask(ii,ij)=bdytmask(ii,ij)*bdytmask(ii ,ij+1) 1078 END DO 1079 END DO 1080 CALL lbc_lnk( bdyumask(:,:), 'U', 1. ) ; CALL lbc_lnk( bdyvmask(:,:), 'V', 1. ) ! Lateral boundary cond. 1081 ENDIF 1069 1082 1070 1083 ! bdy masks and bmask are now set to zero on boundary points: -
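The bdyini addition derives U- and V-point boundary masks from the T-point mask when no mask file is supplied: a velocity point is open only if both adjacent T points are. A numpy sketch of the same stencil (the lbc_lnk halo exchange is omitted; the last row/column stays zero as in the Fortran loop bounds jpim1/jpjm1):

```python
import numpy as np

def uv_masks(tmask):
    """bdyumask(ii,ij) = t(ii,ij)*t(ii+1,ij),
       bdyvmask(ii,ij) = t(ii,ij)*t(ii,ij+1)  (first index = i)."""
    umask = np.zeros_like(tmask)
    vmask = np.zeros_like(tmask)
    umask[:-1, :-1] = tmask[:-1, :-1] * tmask[1:, :-1]
    vmask[:-1, :-1] = tmask[:-1, :-1] * tmask[:-1, 1:]
    return umask, vmask
```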
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/C1D/step_c1d.F90
r3680 r4193 59 59 60 60 indic = 0 ! reset to no error condition 61 IF( kstp == nit000 ) CALL iom_init ! iom_put initialization (must be done after nemo_init for AGRIF+XIOS+OASIS) 61 62 IF( kstp /= nit000 ) CALL day( kstp ) ! Calendar (day was already called at nit000 in day_init) 62 CALL iom_setkt( kstp ) ! say to iom that we are at time step kstp63 CALL iom_setkt( kstp - nit000 + 1 ) ! say to iom that we are at time step kstp 63 64 64 65 !>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>> … … 106 107 !<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< 107 108 CALL dia_wri( kstp ) ! ocean model: outputs 109 IF( lk_diahth ) CALL dia_hth( kstp ) ! Thermocline depth (20°C) 110 108 111 109 112 #if defined key_top -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/DIA/diadct.F90
r3680 r4193 42 42 #endif 43 43 #if defined key_lim3 44 USE ice_3 44 USE par_ice 45 USE ice 45 46 #endif 46 47 USE domvvl … … 484 485 ijglo = secs(jsec)%listPoint(jpt)%J + jpjzoom - 1 + njmpp - 1 485 486 WRITE(numout,*)' # I J : ',iiglo,ijglo 487 CALL FLUSH(numout) 486 488 ENDDO 487 489 ENDIF … … 606 608 607 609 !! * Local variables 608 INTEGER :: jk, jseg, jclass, &!loop on level/segment/classes610 INTEGER :: jk, jseg, jclass,jl, &!loop on level/segment/classes/ice categories 609 611 isgnu, isgnv ! 610 612 REAL(wp) :: zumid, zvmid, &!U/V velocity on a cell segment … … 771 773 772 774 zTnorm=zumid_ice*e2u(k%I,k%J)+zvmid_ice*e1v(k%I,k%J) 773 775 776 #if defined key_lim2 774 777 transports_2d(1,jsec,jseg) = transports_2d(1,jsec,jseg) + (zTnorm)* & 775 778 (1.0 - frld(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J)) & … … 778 781 transports_2d(2,jsec,jseg) = transports_2d(2,jsec,jseg) + (zTnorm)* & 779 782 (1.0 - frld(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J)) 783 #endif 784 #if defined key_lim3 785 DO jl=1,jpl 786 transports_2d(1,jsec,jseg) = transports_2d(1,jsec,jseg) + (zTnorm)* & 787 a_i(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) * & 788 ( ht_i(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) + & 789 ht_s(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) ) 790 791 transports_2d(2,jsec,jseg) = transports_2d(2,jsec,jseg) + (zTnorm)* & 792 a_i(sec%listPoint(jseg)%I,sec%listPoint(jseg)%J,jl) 793 ENDDO 794 #endif 780 795 781 796 ENDIF !end of ice case -
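The new key_lim3 branch in diadct accumulates the ice transports by summing over the jpl ice categories: the volume transport uses a_i*(ht_i+ht_s), the area transport uses a_i alone, both scaled by the normal transport factor zTnorm. A scalar Python sketch for one section segment (illustrative function, not the module's API):

```python
def ice_transports(tnorm, a_i, ht_i, ht_s):
    """Per-segment ice transports for a multi-category model:
    tnorm = zTnorm, a_i = category concentrations,
    ht_i/ht_s = category ice/snow thicknesses."""
    vol  = sum(tnorm * a * (hi + hs) for a, hi, hs in zip(a_i, ht_i, ht_s))
    area = sum(tnorm * a for a in a_i)
    return vol, area
```

With a single category and a = 1 - frld this reduces to the existing key_lim2 expression.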
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/DIA/diahsb.F90
r3625 r4193 21 21 USE bdy_par ! (for lk_bdy) 22 22 USE timing ! preformance summary 23 USE lib_fortran 24 USE sbcrnf 23 25 24 26 IMPLICIT NONE … … 33 35 REAL(dp) :: surf_tot , vol_tot ! 34 36 REAL(dp) :: frc_t , frc_s , frc_v ! global forcing trends 37 REAL(dp) :: frc_wn_t , frc_wn_s ! global forcing trends 35 38 REAL(dp) :: fact1 ! conversion factors 36 39 REAL(dp) :: fact21 , fact22 ! - - … … 38 41 REAL(dp), DIMENSION(:,:) , ALLOCATABLE :: surf , ssh_ini ! 39 42 REAL(dp), DIMENSION(:,:,:), ALLOCATABLE :: hc_loc_ini, sc_loc_ini, e3t_ini ! 43 REAL(dp), DIMENSION(:,:) , ALLOCATABLE :: ssh_hc_loc_ini, ssh_sc_loc_ini 40 44 41 45 !! * Substitutions … … 67 71 INTEGER :: jk ! dummy loop indice 68 72 REAL(dp) :: zdiff_hc , zdiff_sc ! heat and salt content variations 73 REAL(dp) :: zdiff_hc1 , zdiff_sc1 ! heat and salt content variations of ssh 69 74 REAL(dp) :: zdiff_v1 , zdiff_v2 ! volume variation 75 REAL(dp) :: zerr_hc1 , zerr_sc1 ! Non conservation due to free surface 70 76 REAL(dp) :: z1_rau0 ! local scalars 71 77 REAL(dp) :: zdeltat ! - - 72 78 REAL(dp) :: z_frc_trd_t , z_frc_trd_s ! - - 73 79 REAL(dp) :: z_frc_trd_v ! - - 80 REAL(dp) :: z_wn_trd_t , z_wn_trd_s ! - - 81 REAL(dp) :: z_ssh_hc , z_ssh_sc ! - - 74 82 !!--------------------------------------------------------------------------- 75 83 IF( nn_timing == 1 ) CALL timing_start('dia_hsb') … … 79 87 ! ------------------------- ! 80 88 z1_rau0 = 1.e0 / rau0 81 z_frc_trd_v = z1_rau0 * SUM( - ( emp(:,:) - rnf(:,:) ) * surf(:,:) ) ! volume fluxes 82 z_frc_trd_t = SUM( sbc_tsc(:,:,jp_tem) * surf(:,:) ) ! heat fluxes 83 z_frc_trd_s = SUM( sbc_tsc(:,:,jp_sal) * surf(:,:) ) ! salt fluxes 89 z_frc_trd_v = z1_rau0 * glob_sum( - ( emp(:,:) - rnf(:,:) ) * surf(:,:) ) ! volume fluxes 90 z_frc_trd_t = glob_sum( sbc_tsc(:,:,jp_tem) * surf(:,:) ) ! heat fluxes 91 z_frc_trd_s = glob_sum( sbc_tsc(:,:,jp_sal) * surf(:,:) ) ! salt fluxes 92 ! 
Add runoff heat & salt input 93 IF( ln_rnf ) z_frc_trd_t = z_frc_trd_t + glob_sum( rnf_tsc(:,:,jp_tem) * surf(:,:) ) 94 IF( ln_rnf_sal) z_frc_trd_s = z_frc_trd_s + glob_sum( rnf_tsc(:,:,jp_sal) * surf(:,:) ) 84 95 ! Add penetrative solar radiation 85 IF( ln_traqsr ) z_frc_trd_t = z_frc_trd_t + r1_rau0_rcp * SUM( qsr (:,:) * surf(:,:) )96 IF( ln_traqsr ) z_frc_trd_t = z_frc_trd_t + r1_rau0_rcp * glob_sum( qsr (:,:) * surf(:,:) ) 86 97 ! Add geothermal heat flux 87 IF( ln_trabbc ) z_frc_trd_t = z_frc_trd_t + r1_rau0_rcp * SUM( qgh_trd0(:,:) * surf(:,:) ) 88 IF( lk_mpp ) THEN 89 CALL mpp_sum( z_frc_trd_v ) 90 CALL mpp_sum( z_frc_trd_t ) 91 ENDIF 98 IF( ln_trabbc ) z_frc_trd_t = z_frc_trd_t + glob_sum( qgh_trd0(:,:) * surf(:,:) ) 99 IF( .NOT. lk_vvl ) THEN 100 z_wn_trd_t = - glob_sum( surf(:,:) * wn(:,:,1) * tsb(:,:,1,jp_tem) ) 101 z_wn_trd_s = - glob_sum( surf(:,:) * wn(:,:,1) * tsb(:,:,1,jp_sal) ) 102 ENDIF 103 92 104 frc_v = frc_v + z_frc_trd_v * rdt 93 105 frc_t = frc_t + z_frc_trd_t * rdt 94 106 frc_s = frc_s + z_frc_trd_s * rdt 107 ! ! Advection flux through fixed surface (z=0) 108 IF( .NOT. lk_vvl ) THEN 109 frc_wn_t = frc_wn_t + z_wn_trd_t * rdt 110 frc_wn_s = frc_wn_s + z_wn_trd_s * rdt 111 ENDIF 95 112 96 113 ! ----------------------- ! … … 100 117 zdiff_hc = 0.d0 101 118 zdiff_sc = 0.d0 119 102 120 ! volume variation (calculated with ssh) 103 zdiff_v1 = SUM( surf(:,:) * tmask(:,:,1) * ( sshn(:,:) - ssh_ini(:,:) ) ) 121 zdiff_v1 = glob_sum( surf(:,:) * ( sshn(:,:) - ssh_ini(:,:) ) ) 122 123 ! heat & salt content variation (associated with ssh) 124 IF( .NOT. lk_vvl ) THEN 125 z_ssh_hc = glob_sum( surf(:,:) * ( tsn(:,:,1,jp_tem) * sshn(:,:) - ssh_hc_loc_ini(:,:) ) ) 126 z_ssh_sc = glob_sum( surf(:,:) * ( tsn(:,:,1,jp_sal) * sshn(:,:) - ssh_sc_loc_ini(:,:) ) ) 127 ENDIF 128 104 129 DO jk = 1, jpkm1 105 106 zdiff_v2 = zdiff_v2 + SUM( surf(:,:) * tmask(:,:,jk) &130 ! 
volume variation (calculated with scale factors) 131 zdiff_v2 = zdiff_v2 + glob_sum( surf(:,:) * tmask(:,:,jk) & 107 132 & * ( fse3t_n(:,:,jk) & 108 133 & - e3t_ini(:,:,jk) ) ) 109 134 ! heat content variation 110 zdiff_hc = zdiff_hc + SUM( surf(:,:) * tmask(:,:,jk) &135 zdiff_hc = zdiff_hc + glob_sum( surf(:,:) * tmask(:,:,jk) & 111 136 & * ( fse3t_n(:,:,jk) * tsn(:,:,jk,jp_tem) & 112 137 & - hc_loc_ini(:,:,jk) ) ) 113 138 ! salt content variation 114 zdiff_sc = zdiff_sc + SUM( surf(:,:) * tmask(:,:,jk) &139 zdiff_sc = zdiff_sc + glob_sum( surf(:,:) * tmask(:,:,jk) & 115 140 & * ( fse3t_n(:,:,jk) * tsn(:,:,jk,jp_sal) & 116 141 & - sc_loc_ini(:,:,jk) ) ) 117 142 ENDDO 118 143 119 IF( lk_mpp ) THEN120 CALL mpp_sum( zdiff_hc )121 CALL mpp_sum( zdiff_sc )122 CALL mpp_sum( zdiff_v1 )123 CALL mpp_sum( zdiff_v2 )124 ENDIF125 126 144 ! Substract forcing from heat content, salt content and volume variations 127 145 zdiff_v1 = zdiff_v1 - frc_v 128 zdiff_v2 = zdiff_v2 - frc_v146 IF( lk_vvl ) zdiff_v2 = zdiff_v2 - frc_v 129 147 zdiff_hc = zdiff_hc - frc_t 130 148 zdiff_sc = zdiff_sc - frc_s 149 IF( .NOT. lk_vvl ) THEN 150 zdiff_hc1 = zdiff_hc + z_ssh_hc 151 zdiff_sc1 = zdiff_sc + z_ssh_sc 152 zerr_hc1 = z_ssh_hc - frc_wn_t 153 zerr_sc1 = z_ssh_sc - frc_wn_s 154 ENDIF 131 155 132 156 ! ----------------------- ! … … 134 158 ! ----------------------- ! 
135 159 zdeltat = 1.e0 / ( ( kt - nit000 + 1 ) * rdt ) 136 WRITE(numhsb , 9020) kt , zdiff_hc / vol_tot , zdiff_hc * fact1 * zdeltat, & 137 & zdiff_sc / vol_tot , zdiff_sc * fact21 * zdeltat, zdiff_sc * fact22 * zdeltat, & 138 & zdiff_v1 , zdiff_v1 * fact31 * zdeltat, zdiff_v1 * fact32 * zdeltat, & 139 & zdiff_v2 , zdiff_v2 * fact31 * zdeltat, zdiff_v2 * fact32 * zdeltat 160 IF( lk_vvl ) THEN 161 WRITE(numhsb , 9020) kt , zdiff_hc / vol_tot , zdiff_hc * fact1 * zdeltat, & 162 & zdiff_sc / vol_tot , zdiff_sc * fact21 * zdeltat, zdiff_sc * fact22 * zdeltat, & 163 & zdiff_v1 , zdiff_v1 * fact31 * zdeltat, zdiff_v1 * fact32 * zdeltat, & 164 & zdiff_v2 , zdiff_v2 * fact31 * zdeltat, zdiff_v2 * fact32 * zdeltat 165 ELSE 166 WRITE(numhsb , 9030) kt , zdiff_hc1 / vol_tot , zdiff_hc1 * fact1 * zdeltat, & 167 & zdiff_sc1 / vol_tot , zdiff_sc1 * fact21 * zdeltat, zdiff_sc1 * fact22 * zdeltat, & 168 & zdiff_v1 , zdiff_v1 * fact31 * zdeltat, zdiff_v1 * fact32 * zdeltat, & 169 & zerr_hc1 / vol_tot , zerr_sc1 / vol_tot 170 ENDIF 140 171 141 172 IF ( kt == nitend ) CLOSE( numhsb ) … … 144 175 145 176 9020 FORMAT(I5,11D15.7) 177 9030 FORMAT(I5,10D15.7) 146 178 ! 147 179 END SUBROUTINE dia_hsb … … 179 211 180 212 IF( .NOT. ln_diahsb ) RETURN 213 IF( .NOT. lk_mpp_rep ) & 214 CALL ctl_stop (' Your global mpp_sum if performed in single precision - 64 bits -', & 215 & ' whereas the global sum to be precise must be done in double precision ',& 216 & ' please add key_mpp_rep') 181 217 182 218 ! ------------------- ! 183 219 ! 1 - Allocate memory ! 184 220 ! ------------------- ! 
185 ALLOCATE( hc_loc_ini(jpi,jpj,jpk), STAT=ierror ) 221 ALLOCATE( hc_loc_ini(jpi,jpj,jpk), sc_loc_ini(jpi,jpj,jpk), & 222 & ssh_hc_loc_ini(jpi,jpj), ssh_sc_loc_ini(jpi,jpj), & 223 & e3t_ini(jpi,jpj,jpk) , & 224 & surf(jpi,jpj), ssh_ini(jpi,jpj), STAT=ierror ) 186 225 IF( ierror > 0 ) THEN 187 226 CALL ctl_stop( 'dia_hsb: unable to allocate hc_loc_ini' ) ; RETURN 188 ENDIF189 ALLOCATE( sc_loc_ini(jpi,jpj,jpk), STAT=ierror )190 IF( ierror > 0 ) THEN191 CALL ctl_stop( 'dia_hsb: unable to allocate sc_loc_ini' ) ; RETURN192 ENDIF193 ALLOCATE( e3t_ini(jpi,jpj,jpk) , STAT=ierror )194 IF( ierror > 0 ) THEN195 CALL ctl_stop( 'dia_hsb: unable to allocate e3t_ini' ) ; RETURN196 ENDIF197 ALLOCATE( surf(jpi,jpj) , STAT=ierror )198 IF( ierror > 0 ) THEN199 CALL ctl_stop( 'dia_hsb: unable to allocate surf' ) ; RETURN200 ENDIF201 ALLOCATE( ssh_ini(jpi,jpj) , STAT=ierror )202 IF( ierror > 0 ) THEN203 CALL ctl_stop( 'dia_hsb: unable to allocate ssh_ini' ) ; RETURN204 227 ENDIF 205 228 … … 214 237 cl_name = 'heat_salt_volume_budgets.txt' ! name of output file 215 238 surf(:,:) = e1t(:,:) * e2t(:,:) * tmask(:,:,1) * tmask_i(:,:) ! masked surface grid cell area 216 surf_tot = SUM( surf(:,:) ) ! total ocean surface area239 surf_tot = glob_sum( surf(:,:) ) ! total ocean surface area 217 240 vol_tot = 0.d0 ! total ocean volume 218 241 DO jk = 1, jpkm1 219 vol_tot = vol_tot + SUM( surf(:,:) * tmask(:,:,jk) &220 & * fse3t_n(:,:,jk) )242 vol_tot = vol_tot + glob_sum( surf(:,:) * tmask(:,:,jk) & 243 & * fse3t_n(:,:,jk) ) 221 244 END DO 222 IF( lk_mpp ) THEN223 CALL mpp_sum( vol_tot )224 CALL mpp_sum( surf_tot )225 ENDIF226 245 227 246 CALL ctl_opn( numhsb , cl_name , 'UNKNOWN' , 'FORMATTED' , 'SEQUENTIAL' , 1 , numout , lwp , 1 ) 228 ! 12345678901234567890123456789012345678901234567890123456789012345678901234567890 -> 80 229 WRITE( numhsb, 9010 ) "kt | heat content budget | salt content budget ", & 230 ! 
123456789012345678901234567890123456789012345 -> 45 231 & "| volume budget (ssh) ", & 232 ! 678901234567890123456789012345678901234567890 -> 45 233 & "| volume budget (e3t) " 234 WRITE( numhsb, 9010 ) " | [C] [W/m2] | [psu] [mmm/s] [SV] ", & 235 & "| [m3] [mmm/s] [SV] ", & 236 & "| [m3] [mmm/s] [SV] " 237 247 IF( lk_vvl ) THEN 248 ! 12345678901234567890123456789012345678901234567890123456789012345678901234567890 -> 80 249 WRITE( numhsb, 9010 ) "kt | heat content budget | salt content budget ", & 250 ! 123456789012345678901234567890123456789012345 -> 45 251 & "| volume budget (ssh) ", & 252 ! 678901234567890123456789012345678901234567890 -> 45 253 & "| volume budget (e3t) " 254 WRITE( numhsb, 9010 ) " | [C] [W/m2] | [psu] [mmm/s] [SV] ", & 255 & "| [m3] [mmm/s] [SV] ", & 256 & "| [m3] [mmm/s] [SV] " 257 ELSE 258 ! 12345678901234567890123456789012345678901234567890123456789012345678901234567890 -> 80 259 WRITE( numhsb, 9011 ) "kt | heat content budget | salt content budget ", & 260 ! 123456789012345678901234567890123456789012345 -> 45 261 & "| volume budget (ssh) ", & 262 ! 678901234567890123456789012345678901234567890 -> 45 263 & "| Non conservation due to free surface " 264 WRITE( numhsb, 9011 ) " | [C] [W/m2] | [psu] [mmm/s] [SV] ", & 265 & "| [m3] [mmm/s] [SV] ", & 266 & "| [heat - C] [salt - psu] " 267 ENDIF 238 268 ! --------------- ! 239 269 ! 3 - Conversions ! (factors will be multiplied by duration afterwards) … … 261 291 frc_t = 0.d0 ! heat content - - - - 262 292 frc_s = 0.d0 ! salt content - - - - 293 IF( .NOT. lk_vvl ) THEN 294 ssh_hc_loc_ini(:,:) = tsn(:,:,1,jp_tem) * ssh_ini(:,:) ! initial heat content associated with ssh 295 ssh_sc_loc_ini(:,:) = tsn(:,:,1,jp_sal) * ssh_ini(:,:) ! initial salt content associated with ssh 296 frc_wn_t = 0.d0 297 frc_wn_s = 0.d0 298 ENDIF 263 299 ! 264 300 9010 FORMAT(A80,A45,A45) 301 9011 FORMAT(A80,A45,A45) 265 302 ! 266 303 END SUBROUTINE dia_hsb_init -
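diahsb now computes every global integral with glob_sum and stops unless key_mpp_rep is set, so the heat/salt/volume budgets are accumulated in double precision rather than with a single-precision mpp sum whose result depends on summation order. As a stand-in illustration of the underlying numerical issue (this is not glob_sum's actual algorithm), Neumaier compensated summation recovers low-order bits that naive accumulation drops:

```python
def neumaier_sum(values):
    """Compensated summation: carry the rounding error of each partial
    sum in c and fold it back in at the end."""
    s = 0.0
    c = 0.0                       # running compensation for lost low-order bits
    for v in values:
        t = s + v
        if abs(s) >= abs(v):
            c += (s - t) + v      # low-order bits of v were lost
        else:
            c += (v - t) + s      # low-order bits of s were lost
        s = t
    return s + c
```

A naive sum of [1e16, 1.0, -1e16] returns 0.0 in double precision; the compensated sum returns the exact 1.0.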
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/DIA/diaptr.F90
r3764 r4193 350 350 DO jn = 1, nptr 351 351 tn_jk(:,:,jn) = ptr_tjk( tsn(:,:,:,jp_tem), btmsk(:,:,jn) ) * r1_sjk(:,:,jn) 352 sn_jk(:,:,jn) = ptr_tjk( tsn(:,:,:,jp_sal), btmsk(:,:,jn) ) * r1_sjk(:,:,jn) 352 353 END DO 353 354 ENDIF … … 563 564 !!-------------------------------------------------------------------- 564 565 ! 565 CALL wrk_alloc( jp i, zphi , zfoo )566 CALL wrk_alloc( jp i, jpk, z_1 )566 CALL wrk_alloc( jpj , zphi , zfoo ) 567 CALL wrk_alloc( jpj , jpk, z_1 ) 567 568 568 569 ! define time axis … … 878 879 ENDIF 879 880 ! 880 CALL wrk_dealloc( jp i, zphi , zfoo )881 CALL wrk_dealloc( jp i, jpk, z_1 )881 CALL wrk_dealloc( jpj , zphi , zfoo ) 882 CALL wrk_dealloc( jpj , jpk, z_1 ) 882 883 ! 883 884 END SUBROUTINE dia_ptr_wri -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/DOM/closea.F90
r3632 r4193 108 108 ncsi1(2) = 97 ; ncsj1(2) = 107 109 109 ncsi2(2) = 103 ; ncsj2(2) = 111 110 ncsir(2,1) = 110 ; ncsjr(2,1) = 111 111 ! ! Black Sea 1 : west part of the Black Sea 112 ncsnr(3) = 1 ; ncstt(3) = 2 ! (ie west of the cyclic b.c.) 113 ncsi1(3) = 174 ; ncsj1(3) = 107 ! put in Med Sea 114 ncsi2(3) = 181 ; ncsj2(3) = 112 115 ncsir(3,1) = 171 ; ncsjr(3,1) = 106 116 ! ! Black Sea 2 : est part of the Black Sea 117 ncsnr(4) = 1 ; ncstt(4) = 2 ! (ie est of the cyclic b.c.) 118 ncsi1(4) = 2 ; ncsj1(4) = 107 ! put in Med Sea 119 ncsi2(4) = 6 ; ncsj2(4) = 112 120 ncsir(4,1) = 171 ; ncsjr(4,1) = 106 110 ncsir(2,1) = 110 ; ncsjr(2,1) = 111 111 ! ! Black Sea (crossed by the cyclic boundary condition) 112 ncsnr(3:4) = 4 ; ncstt(3:4) = 2 ! put in Med Sea (north of Aegean Sea) 113 ncsir(3:4,1) = 171; ncsjr(3:4,1) = 106 ! 114 ncsir(3:4,2) = 170; ncsjr(3:4,2) = 106 115 ncsir(3:4,3) = 171; ncsjr(3:4,3) = 105 116 ncsir(3:4,4) = 170; ncsjr(3:4,4) = 105 117 ncsi1(3) = 174 ; ncsj1(3) = 107 ! 1 : west part of the Black Sea 118 ncsi2(3) = 181 ; ncsj2(3) = 112 ! (ie west of the cyclic b.c.) 119 ncsi1(4) = 2 ; ncsj1(4) = 107 ! 2 : east part of the Black Sea 120 ncsi2(4) = 6 ; ncsj2(4) = 112 ! (ie east of the cyclic b.c.) 121 122 123 121 124 ! ! ======================= 122 125 CASE ( 4 ) ! ORCA_R4 configuration … … 372 375 REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) :: p_rnfmsk ! river runoff mask (rnfmsk array) 373 376 ! 374 INTEGER :: jc, jn ! dummy loop indices 375 INTEGER :: ii, ij ! temporary integer 377 INTEGER :: jc, jn, ji, jj ! dummy loop indices 376 378 !!---------------------------------------------------------------------- 377 379 ! … … 379 381 IF( ncstt(jc) >= 1 ) THEN ! 
runoff mask set to 1 at closed sea outflows 380 382 DO jn = 1, 4 381 ii = mi0( ncsir(jc,jn) ) 382 ij = mj0( ncsjr(jc,jn) ) 383 p_rnfmsk(ii,ij) = MAX( p_rnfmsk(ii,ij), 1.0_wp ) 383 DO jj = mj0( ncsjr(jc,jn) ), mj1( ncsjr(jc,jn) ) 384 DO ji = mi0( ncsir(jc,jn) ), mi1( ncsir(jc,jn) ) 385 p_rnfmsk(ji,jj) = MAX( p_rnfmsk(ji,jj), 1.0_wp ) 386 END DO 387 END DO 384 388 END DO 385 389 ENDIF -
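The closea hunk above replaces scalar `mi0`/`mj0` lookups with `mi0..mi1` / `mj0..mj1` loop ranges, so the runoff-mask update runs only on the processor that owns the global point. A Python sketch of that idiom (`nimpp`, `nlci` are illustrative stand-ins for NEMO's decomposition variables; the clamping follows the same pattern as the model's index functions but is a simplified assumption here):

```python
# Sketch of the mi0/mi1 idiom: map a global i-index to a (possibly
# empty) local loop range on each subdomain.

def mi0(ig, nimpp, nlci):
    # first local index covering global index ig (clamped to 1..nlci+1)
    return max(min(ig - nimpp + 1, nlci + 1), 1)

def mi1(ig, nimpp, nlci):
    # last local index covering global index ig (clamped to 0..nlci)
    return max(min(ig - nimpp + 1, nlci), 0)

# subdomain owning global i = 10..19 (nimpp = 10, nlci = 10):
owned = list(range(mi0(12, 10, 10), mi1(12, 10, 10) + 1))   # [3]
# a point outside the subdomain yields an empty range:
not_owned = list(range(mi0(25, 10, 10), mi1(25, 10, 10) + 1))   # []
```

The empty range is what lets every process execute the same loop without an explicit ownership test.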
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/DOM/daymod.F90
r3851 r4193 238 238 nday_year = 1 239 239 nsec_year = ndt05 240 IF( nsec1jan000 >= 2 * (2**30 - nsecd * nyear_len(1) / 2 ) ) THEN ! test integer 4 max value 241 CALL ctl_stop( 'The number of seconds between Jan. 1st 00h of nit000 year and Jan. 1st 00h ', & 242 & 'of the current year is exceeding the INTEGER 4 max VALUE: 2^31-1 -> 68.09 years in seconds', & 243 & 'You must do a restart at higher frequency (or remove this STOP and recompile everything in I8)' ) 244 ENDIF 240 245 nsec1jan000 = nsec1jan000 + nsecd * nyear_len(1) 241 246 IF( nleapy == 1 ) CALL day_mth -
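The daymod hunk adds a guard before `nsec1jan000` is incremented by another year of seconds, stopping the run before the counter exceeds the signed 32-bit maximum (2^31 - 1 seconds, about 68.09 years of 365-day years). A sketch of the same test in Python (variable names follow the diff; values are illustrative):

```python
# Sketch of the INTEGER*4 overflow guard: refuse to add another year
# of seconds once the counter would pass 2**31 - 1.

INT4_MAX  = 2**31 - 1
nsecd     = 86400        # seconds per day
nyear_len = 365          # days in the coming year (no leap year here)

def would_overflow(nsec1jan000):
    # same condition as the Fortran:
    #   nsec1jan000 >= 2 * (2**30 - nsecd * nyear_len / 2)
    # i.e. nsec1jan000 + nsecd*nyear_len would exceed 2**31 - 1
    return nsec1jan000 >= 2 * (2**30 - nsecd * nyear_len // 2)

assert not would_overflow(0)
assert would_overflow(INT4_MAX)            # already at the limit
# INT4_MAX / (nsecd * nyear_len) is ~68.1 years, as the message states
```

Writing the threshold as `2*(2**30 - ...)` keeps the intermediate itself within INTEGER*4 range, which is why the Fortran does not compute `2**31` directly.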
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/DOM/domzgr.F90
r3764 r4193 169 169 !!---------------------------------------------------------------------- 170 170 !! *** ROUTINE zgr_z *** 171 !! 171 !! 172 172 !! ** Purpose : set the depth of model levels and the resulting 173 173 !! vertical scale factors. … … 639 639 END DO 640 640 END DO 641 IF( lk_mpp ) CALL mpp_sum( icompt ) 641 642 IF( icompt == 0 ) THEN 642 643 IF(lwp) WRITE(numout,*)' no isolated ocean grid points' … … 1252 1253 DO jj = 1, jpj 1253 1254 DO ji = 1, jpi 1254 ztaper = EXP( -(gphit(ji,jj)/8._wp)**2 )1255 ztaper = EXP( -(gphit(ji,jj)/8._wp)**2._wp ) 1255 1256 hbatt(ji,jj) = rn_sbot_max * ztaper + hbatt(ji,jj) * ( 1._wp - ztaper ) 1256 1257 END DO … … 1367 1368 fsde3w(:,:,:) = gdep3w(:,:,:) 1368 1369 ! 1369 where (e3t (:,:,:).eq.0.0) e3t(:,:,:) = 1.0 1370 where (e3u (:,:,:).eq.0.0) e3u(:,:,:) = 1.0 1371 where (e3v (:,:,:).eq.0.0) e3v(:,:,:) = 1.0 1372 where (e3f (:,:,:).eq.0.0) e3f(:,:,:) = 1.0 1373 where (e3w (:,:,:).eq.0.0) e3w(:,:,:) = 1.0 1374 where (e3uw (:,:,:).eq.0.0) e3uw(:,:,:) = 1.0 1375 where (e3vw (:,:,:).eq.0.0) e3vw(:,:,:) = 1.0 1376 1370 where (e3t (:,:,:).eq.0.0) e3t(:,:,:) = 1._wp 1371 where (e3u (:,:,:).eq.0.0) e3u(:,:,:) = 1._wp 1372 where (e3v (:,:,:).eq.0.0) e3v(:,:,:) = 1._wp 1373 where (e3f (:,:,:).eq.0.0) e3f(:,:,:) = 1._wp 1374 where (e3w (:,:,:).eq.0.0) e3w(:,:,:) = 1._wp 1375 where (e3uw (:,:,:).eq.0.0) e3uw(:,:,:) = 1._wp 1376 where (e3vw (:,:,:).eq.0.0) e3vw(:,:,:) = 1._wp 1377 1378 #if defined key_agrif 1379 ! Ensure meaningful vertical scale factors in ghost lines/columns 1380 IF( .NOT. Agrif_Root() ) THEN 1381 ! 1382 IF((nbondi == -1).OR.(nbondi == 2)) THEN 1383 e3u(1,:,:) = e3u(2,:,:) 1384 ENDIF 1385 ! 1386 IF((nbondi == 1).OR.(nbondi == 2)) THEN 1387 e3u(nlci-1,:,:) = e3u(nlci-2,:,:) 1388 ENDIF 1389 ! 1390 IF((nbondj == -1).OR.(nbondj == 2)) THEN 1391 e3v(:,1,:) = e3v(:,2,:) 1392 ENDIF 1393 ! 1394 IF((nbondj == 1).OR.(nbondj == 2)) THEN 1395 e3v(:,nlcj-1,:) = e3v(:,nlcj-2,:) 1396 ENDIF 1397 ! 
1398 ENDIF 1399 #endif 1377 1400 1378 1401 fsdept(:,:,:) = gdept (:,:,:) … … 1423 1446 WRITE(numout,"(10x,i4,4f9.2)") ( jk, fsdept(1,1,jk), fsdepw(1,1,jk), & 1424 1447 & fse3t (1,1,jk), fse3w (1,1,jk), jk=1,jpk ) 1425 DO jj = mj0(20), mj1(20) 1426 DO ji = mi0(20), mi1(20) 1448 iip1 = MIN(20, jpiglo-1) ! for config with i smaller than 20 points 1449 ijp1 = MIN(20, jpjglo-1) ! for config with j smaller than 20 points 1450 DO jj = mj0(ijp1), mj1(ijp1) 1451 DO ji = mi0(iip1), mi1(iip1) 1427 1452 WRITE(numout,*) 1428 WRITE(numout,*) ' domzgr: vertical coordinates : point (20,20,k) bathy = ', bathy(ji,jj), hbatt(ji,jj) 1453 WRITE(numout,*) ' domzgr: vertical coordinates : point (',iip1,',',ijp1,',k) bathy = ', & 1454 & bathy(ji,jj), hbatt(ji,jj) 1429 1455 WRITE(numout,*) ' ~~~~~~ --------------------' 1430 1456 WRITE(numout,"(9x,' level gdept gdepw gde3w e3t e3w ')") … … 1433 1459 END DO 1434 1460 END DO 1435 DO jj = mj0(74), mj1(74) 1436 DO ji = mi0(100), mi1(100) 1461 iip1 = MIN( 74, jpiglo-1) 1462 ijp1 = MIN( 100, jpjglo-1) 1463 DO jj = mj0(ijp1), mj1(ijp1) 1464 DO ji = mi0(iip1), mi1(iip1) 1437 1465 WRITE(numout,*) 1438 WRITE(numout,*) ' domzgr: vertical coordinates : point (100,74,k) bathy = ', bathy(ji,jj), hbatt(ji,jj) 1466 WRITE(numout,*) ' domzgr: vertical coordinates : point (',iip1,',',ijp1,',k) bathy = ', & 1467 & bathy(ji,jj), hbatt(ji,jj) 1439 1468 WRITE(numout,*) ' ~~~~~~ --------------------' 1440 1469 WRITE(numout,"(9x,' level gdept gdepw gde3w e3t e3w ')") … … 1723 1752 ENDDO 1724 1753 ! 1725 CALL lbc_lnk(e3t ,'T',1.) ; CALL lbc_lnk(e3u ,'T',1.)1726 CALL lbc_lnk(e3v ,'T',1.) ; CALL lbc_lnk(e3f ,'T',1.)1727 CALL lbc_lnk(e3w ,'T',1.)1728 CALL lbc_lnk(e3uw,'T',1.) ; CALL lbc_lnk(e3vw,'T',1.)1729 !1730 1754 ! ! ============= 1731 1755 … … 1824 1848 !!---------------------------------------------------------------------- 1825 1849 ! 
1826 pf = ( TANH( rn_theta * ( -(pk-0.5_wp) / REAL(jpkm1 ) + rn_thetb ) ) &1850 pf = ( TANH( rn_theta * ( -(pk-0.5_wp) / REAL(jpkm1,wp) + rn_thetb ) ) & 1827 1851 & - TANH( rn_thetb * rn_theta ) ) & 1828 1852 & * ( COSH( rn_theta ) & … … 1850 1874 ! 1851 1875 IF ( rn_theta == 0 ) then ! uniform sigma 1852 pf1 = - ( pk1 - 0.5_wp ) / REAL( jpkm1 )1876 pf1 = - ( pk1 - 0.5_wp ) / REAL( jpkm1,wp ) 1853 1877 ELSE ! stretched sigma 1854 pf1 = ( 1._wp - pbb ) * ( SINH( rn_theta*(-(pk1-0.5_wp)/REAL(jpkm1 )) ) ) / SINH( rn_theta ) &1855 & + pbb * ( (TANH( rn_theta*( (-(pk1-0.5_wp)/REAL(jpkm1 )) + 0.5_wp) ) - TANH( 0.5_wp * rn_theta ) ) &1878 pf1 = ( 1._wp - pbb ) * ( SINH( rn_theta*(-(pk1-0.5_wp)/REAL(jpkm1,wp)) ) ) / SINH( rn_theta ) & 1879 & + pbb * ( (TANH( rn_theta*( (-(pk1-0.5_wp)/REAL(jpkm1,wp)) + 0.5_wp) ) - TANH( 0.5_wp * rn_theta ) ) & 1856 1880 & / ( 2._wp * TANH( 0.5_wp * rn_theta ) ) ) 1857 1881 ENDIF -
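The domzgr changes above touch the stretched-sigma functions (making `REAL(jpkm1)` explicitly `wp`-kind). For reference, the `fssig1` stretching visible in the hunk can be sketched as follows (parameter values `theta`, `bb`, `jpkm1` are illustrative defaults, not the model's namelist values):

```python
import math

# Sketch of the fssig1-style stretched sigma coordinate from the hunk:
# uniform sigma when theta == 0, otherwise a sinh/tanh blend whose
# weighting bb (rn_bb) shifts refinement between surface and bottom.

def fssig1(pk1, jpkm1=30, theta=6.0, bb=0.8):
    s = -(pk1 - 0.5) / jpkm1                      # sigma-like argument
    if theta == 0.0:                              # uniform sigma
        return s
    return ((1.0 - bb) * math.sinh(theta * s) / math.sinh(theta)
            + bb * ((math.tanh(theta * (s + 0.5)) - math.tanh(0.5 * theta))
                    / (2.0 * math.tanh(0.5 * theta))))

# sigma decreases monotonically from ~0 (surface) to ~-1 (bottom)
levels = [fssig1(k) for k in range(1, 31)]
assert all(a > b for a, b in zip(levels, levels[1:]))
```

Since both `sinh` and `tanh` are monotone in `s`, the resulting levels are guaranteed monotone in `k` for any positive `theta`.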
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/DYN/dynspg_flt.F90
r3765 r4193 109 109 INTEGER :: ji, jj, jk ! dummy loop indices 110 110 REAL(wp) :: z2dt, z2dtg, zgcb, zbtd, ztdgu, ztdgv ! local scalars 111 REAL(wp), POINTER, DIMENSION(:,:,:) :: zub, zvb112 111 !!---------------------------------------------------------------------- 113 112 ! 114 113 IF( nn_timing == 1 ) CALL timing_start('dyn_spg_flt') 115 114 ! 116 CALL wrk_alloc( jpi,jpj,jpk, zub, zvb )117 115 ! 118 116 IF( kt == nit000 ) THEN … … 213 211 DO jk = 1, jpkm1 214 212 DO ji = 1, jpij 215 spgu(ji,1) = spgu(ji,1) + fse3u (ji,1,jk) * ua(ji,1,jk)216 spgv(ji,1) = spgv(ji,1) + fse3v (ji,1,jk) * va(ji,1,jk)213 spgu(ji,1) = spgu(ji,1) + fse3u_a(ji,1,jk) * ua(ji,1,jk) 214 spgv(ji,1) = spgv(ji,1) + fse3v_a(ji,1,jk) * va(ji,1,jk) 217 215 END DO 218 216 END DO … … 221 219 DO jj = 2, jpjm1 222 220 DO ji = 2, jpim1 223 spgu(ji,jj) = spgu(ji,jj) + fse3u (ji,jj,jk) * ua(ji,jj,jk)224 spgv(ji,jj) = spgv(ji,jj) + fse3v (ji,jj,jk) * va(ji,jj,jk)221 spgu(ji,jj) = spgu(ji,jj) + fse3u_a(ji,jj,jk) * ua(ji,jj,jk) 222 spgv(ji,jj) = spgv(ji,jj) + fse3v_a(ji,jj,jk) * va(ji,jj,jk) 225 223 END DO 226 224 END DO … … 360 358 IF( lrst_oce ) CALL flt_rst( kt, 'WRITE' ) 361 359 ! 362 CALL wrk_dealloc( jpi,jpj,jpk, zub, zvb )363 360 ! 364 361 IF( nn_timing == 1 ) CALL timing_stop('dyn_spg_flt') -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/ICB/icb_oce.F90
r3614 r4193 37 37 USE par_oce ! ocean parameters 38 38 USE lib_mpp ! MPP library 39 USE fldread ! read input fields (FLD type)40 39 41 40 IMPLICIT NONE … … 151 150 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: griddata !: work array for icbrst 152 151 153 TYPE(FLD), PUBLIC, ALLOCATABLE , DIMENSION(:) :: sf_icb !: structure: file information, fields read154 155 152 !!---------------------------------------------------------------------- 156 153 !! NEMO/OPA 3.3 , NEMO Consortium (2011) … … 168 165 ! 169 166 icb_alloc = 0 170 ALLOCATE( berg_grid , & 171 & berg_grid%calving (jpi,jpj) , berg_grid%calving_hflx (jpi,jpj) , & 167 ALLOCATE( berg_grid%calving (jpi,jpj) , berg_grid%calving_hflx (jpi,jpj) , & 172 168 & berg_grid%stored_heat(jpi,jpj) , berg_grid%floating_melt(jpi,jpj) , & 173 169 & berg_grid%maxclass (jpi,jpj) , berg_grid%stored_ice (jpi,jpj,nclasses) , & -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/ICB/icbini.F90
r3785 r4193 35 35 PUBLIC icb_init ! routine called in nemogcm.F90 module 36 36 37 CHARACTER(len=100) :: cn_dir = './' ! Root directory for location of icb files 38 TYPE(FLD_N) :: sn_icb ! information about the calving file to be read 37 CHARACTER(len=100) :: cn_dir = './' !: Root directory for location of icb files 38 TYPE(FLD_N) :: sn_icb !: information about the calving file to be read 39 TYPE(FLD), PUBLIC, ALLOCATABLE , DIMENSION(:) :: sf_icb !: structure: file information, fields read 40 !: used in icbini and icbstp 39 41 40 42 !!---------------------------------------------------------------------- -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/ICB/icbstp.F90
r3614 r4193 24 24 USE lib_mpp 25 25 USE iom 26 USE fldread 26 27 USE timing ! timing 27 28 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/IOM/iom.F90
r3771 r4193 31 31 USE sbc_oce, ONLY : nn_fsbc ! ocean space and time domain 32 32 USE trc_oce, ONLY : nn_dttrc ! !: frequency of step on passive tracers 33 USE icb_oce, ONLY : class_num ! !: iceberg classes 33 34 USE domngb ! ocean space and time domain 34 35 USE phycst ! physical constants … … 36 37 USE xios 37 38 # endif 39 USE ioipsl, ONLY : ju2ymds ! for calendar 38 40 39 41 IMPLICIT NONE … … 52 54 PRIVATE iom_p1d, iom_p2d, iom_p3d 53 55 #if defined key_iomput 54 PRIVATE iom_set_domain_attr, iom_set_axis_attr, iom_set_field_attr, iom_set_file_attr, iom_ set_grid_attr55 PRIVATE set_grid, set_scalar, set_xmlatt, set_mooring 56 PRIVATE iom_set_domain_attr, iom_set_axis_attr, iom_set_field_attr, iom_set_file_attr, iom_get_file_attr, iom_set_grid_attr 57 PRIVATE set_grid, set_scalar, set_xmlatt, set_mooring, iom_update_file_name, iom_sdate 56 58 # endif 57 59 … … 98 100 clname = "nemo" 99 101 IF( TRIM(Agrif_CFixed()) /= '0' ) clname = TRIM(Agrif_CFixed())//"_"//TRIM(clname) 102 # if defined key_mpp_mpi 100 103 CALL xios_context_initialize(TRIM(clname), mpi_comm_opa) 104 # else 105 CALL xios_context_initialize(TRIM(clname), 0) 106 # endif 101 107 CALL iom_swap 102 108 … … 123 129 CALL iom_set_axis_attr( "depthw", gdepw_0 ) 124 130 # if defined key_floats 125 CALL iom_set_axis_attr( "nfloat", ( ji, ji=1,nfloat) )131 CALL iom_set_axis_attr( "nfloat", (/ (REAL(ji,wp), ji=1,nfloat) /) ) 126 132 # endif 133 CALL iom_set_axis_attr( "icbcla", class_num ) 127 134 128 135 ! automatic definitions of some of the xml attributs … … 130 137 131 138 ! end file definition 132 dtime%second=rdt133 134 135 136 139 dtime%second = rdt 140 CALL xios_set_timestep(dtime) 141 CALL xios_close_context_definition() 142 143 CALL xios_update_calendar(0) 137 144 #endif 138 145 139 146 END SUBROUTINE iom_init 140 147 … … 174 181 LOGICAL , INTENT(in ), OPTIONAL :: ldiof ! Interp On the Fly, needed for AGRIF (default = .FALSE.) 175 182 176 CHARACTER(LEN= 100) :: clname ! 
the name of the file based on cdname [[+clcpu]+clcpu]177 CHARACTER(LEN= 100) :: cltmpn ! tempory name to store clname (in writting mode)183 CHARACTER(LEN=256) :: clname ! the name of the file based on cdname [[+clcpu]+clcpu] 184 CHARACTER(LEN=256) :: cltmpn ! tempory name to store clname (in writting mode) 178 185 CHARACTER(LEN=10) :: clsuffix ! ".nc" or ".dimg" 179 186 CHARACTER(LEN=15) :: clcpu ! the cpu number (max jpmax_digits digits) 180 CHARACTER(LEN= 100) :: clinfo ! info character187 CHARACTER(LEN=256) :: clinfo ! info character 181 188 LOGICAL :: llok ! check the existence 182 189 LOGICAL :: llwrt ! local definition of ldwrt … … 561 568 REAL(wp) :: zscf, zofs ! sacle_factor and add_offset 562 569 INTEGER :: itmp ! temporary integer 563 CHARACTER(LEN= 100) :: clinfo ! info character564 CHARACTER(LEN= 100) :: clname ! file name570 CHARACTER(LEN=256) :: clinfo ! info character 571 CHARACTER(LEN=256) :: clname ! file name 565 572 CHARACTER(LEN=1) :: clrankpv, cldmspc ! 566 573 !--------------------------------------------------------------------- … … 1010 1017 !!---------------------------------------------------------------------- 1011 1018 1012 1013 1019 #if defined key_iomput 1014 1020 1015 SUBROUTINE iom_set_domain_attr( cd name, ni_glo, nj_glo, ibegin, jbegin, ni, nj, zoom_ibegin, zoom_jbegin, zoom_ni, zoom_nj, &1021 SUBROUTINE iom_set_domain_attr( cdid, ni_glo, nj_glo, ibegin, jbegin, ni, nj, zoom_ibegin, zoom_jbegin, zoom_ni, zoom_nj, & 1016 1022 & data_dim, data_ibegin, data_ni, data_jbegin, data_nj, lonvalue, latvalue, mask ) 1017 CHARACTER(LEN=*) , INTENT(in) :: cd name1023 CHARACTER(LEN=*) , INTENT(in) :: cdid 1018 1024 INTEGER , OPTIONAL, INTENT(in) :: ni_glo, nj_glo, ibegin, jbegin, ni, nj 1019 1025 INTEGER , OPTIONAL, INTENT(in) :: data_dim, data_ibegin, data_ni, data_jbegin, data_nj … … 1022 1028 LOGICAL, DIMENSION(:,:), OPTIONAL, INTENT(in) :: mask 1023 1029 1024 IF ( xios_is_valid_domain (cd name) ) THEN1025 CALL xios_set_domain_attr ( cd 
name, ni_glo=ni_glo, nj_glo=nj_glo, ibegin=ibegin, jbegin=jbegin, ni=ni, nj=nj, &1026 & data_dim=data_dim, data_ibegin=data_ibegin, data_ni=data_ni, data_jbegin=data_jbegin, data_nj=data_nj 1027 & zoom_ibegin=zoom_ibegin, zoom_jbegin=zoom_jbegin, zoom_ni=zoom_ni, zoom_nj=zoom_nj, 1030 IF ( xios_is_valid_domain (cdid) ) THEN 1031 CALL xios_set_domain_attr ( cdid, ni_glo=ni_glo, nj_glo=nj_glo, ibegin=ibegin, jbegin=jbegin, ni=ni, nj=nj, & 1032 & data_dim=data_dim, data_ibegin=data_ibegin, data_ni=data_ni, data_jbegin=data_jbegin, data_nj=data_nj , & 1033 & zoom_ibegin=zoom_ibegin, zoom_jbegin=zoom_jbegin, zoom_ni=zoom_ni, zoom_nj=zoom_nj, & 1028 1034 & lonvalue=lonvalue, latvalue=latvalue,mask=mask ) 1029 1035 ENDIF 1030 1036 1031 IF ( xios_is_valid_domaingroup(cd name) ) THEN1032 CALL xios_set_domaingroup_attr( cd name, ni_glo=ni_glo, nj_glo=nj_glo, ibegin=ibegin, jbegin=jbegin, ni=ni, nj=nj, &1033 & data_dim=data_dim, data_ibegin=data_ibegin, data_ni=data_ni, data_jbegin=data_jbegin, data_nj=data_nj 1034 & zoom_ibegin=zoom_ibegin, zoom_jbegin=zoom_jbegin, zoom_ni=zoom_ni, zoom_nj=zoom_nj, 1037 IF ( xios_is_valid_domaingroup(cdid) ) THEN 1038 CALL xios_set_domaingroup_attr( cdid, ni_glo=ni_glo, nj_glo=nj_glo, ibegin=ibegin, jbegin=jbegin, ni=ni, nj=nj, & 1039 & data_dim=data_dim, data_ibegin=data_ibegin, data_ni=data_ni, data_jbegin=data_jbegin, data_nj=data_nj , & 1040 & zoom_ibegin=zoom_ibegin, zoom_jbegin=zoom_jbegin, zoom_ni=zoom_ni, zoom_nj=zoom_nj, & 1035 1041 & lonvalue=lonvalue, latvalue=latvalue,mask=mask ) 1036 1042 ENDIF 1043 CALL xios_solve_inheritance() 1037 1044 1038 1045 END SUBROUTINE iom_set_domain_attr 1039 1046 1040 1047 1041 SUBROUTINE iom_set_axis_attr( cd name, paxis )1042 CHARACTER(LEN=*) , INTENT(in) :: cd name1048 SUBROUTINE iom_set_axis_attr( cdid, paxis ) 1049 CHARACTER(LEN=*) , INTENT(in) :: cdid 1043 1050 REAL(wp), DIMENSION(:), INTENT(in) :: paxis 1044 IF ( xios_is_valid_axis (cdname) ) CALL xios_set_axis_attr ( cdname, 
size=size(paxis),value=paxis ) 1045 IF ( xios_is_valid_axisgroup(cdname) ) CALL xios_set_axisgroup_attr( cdname, size=size(paxis),value=paxis ) 1051 IF ( xios_is_valid_axis (cdid) ) CALL xios_set_axis_attr ( cdid, size=size(paxis),value=paxis ) 1052 IF ( xios_is_valid_axisgroup(cdid) ) CALL xios_set_axisgroup_attr( cdid, size=size(paxis),value=paxis ) 1053 CALL xios_solve_inheritance() 1046 1054 END SUBROUTINE iom_set_axis_attr 1047 1055 1048 1056 1049 SUBROUTINE iom_set_field_attr( cd name, freq_op)1050 CHARACTER(LEN=*) , INTENT(in) :: cd name1057 SUBROUTINE iom_set_field_attr( cdid, freq_op, freq_offset ) 1058 CHARACTER(LEN=*) , INTENT(in) :: cdid 1051 1059 CHARACTER(LEN=*),OPTIONAL , INTENT(in) :: freq_op 1052 IF ( xios_is_valid_field (cdname) ) CALL xios_set_field_attr ( cdname, freq_op=freq_op ) 1053 IF ( xios_is_valid_fieldgroup(cdname) ) CALL xios_set_fieldgroup_attr( cdname, freq_op=freq_op ) 1060 CHARACTER(LEN=*),OPTIONAL , INTENT(in) :: freq_offset 1061 IF ( xios_is_valid_field (cdid) ) CALL xios_set_field_attr ( cdid, freq_op=freq_op, freq_offset=freq_offset ) 1062 IF ( xios_is_valid_fieldgroup(cdid) ) CALL xios_set_fieldgroup_attr( cdid, freq_op=freq_op, freq_offset=freq_offset ) 1063 CALL xios_solve_inheritance() 1054 1064 END SUBROUTINE iom_set_field_attr 1055 1065 1056 1066 1057 SUBROUTINE iom_set_file_attr( cd name, name, name_suffix )1058 CHARACTER(LEN=*) , INTENT(in) :: cd name1067 SUBROUTINE iom_set_file_attr( cdid, name, name_suffix ) 1068 CHARACTER(LEN=*) , INTENT(in) :: cdid 1059 1069 CHARACTER(LEN=*),OPTIONAL , INTENT(in) :: name, name_suffix 1060 IF ( xios_is_valid_file (cdname) ) CALL xios_set_file_attr ( cdname, name=name, name_suffix=name_suffix ) 1061 IF ( xios_is_valid_filegroup(cdname) ) CALL xios_set_filegroup_attr( cdname, name=name, name_suffix=name_suffix ) 1070 IF ( xios_is_valid_file (cdid) ) CALL xios_set_file_attr ( cdid, name=name, name_suffix=name_suffix ) 1071 IF ( xios_is_valid_filegroup(cdid) ) CALL 
xios_set_filegroup_attr( cdid, name=name, name_suffix=name_suffix ) 1072 CALL xios_solve_inheritance() 1062 1073 END SUBROUTINE iom_set_file_attr 1063 1074 1064 1075 1065 SUBROUTINE iom_set_grid_attr( cdname, mask ) 1066 CHARACTER(LEN=*) , INTENT(in) :: cdname 1076 SUBROUTINE iom_get_file_attr( cdid, name, name_suffix, output_freq ) 1077 CHARACTER(LEN=*) , INTENT(in ) :: cdid 1078 CHARACTER(LEN=*),OPTIONAL , INTENT(out) :: name, name_suffix, output_freq 1079 LOGICAL :: llexist1,llexist2,llexist3 1080 !--------------------------------------------------------------------- 1081 IF( PRESENT( name ) ) name = '' ! default values 1082 IF( PRESENT( name_suffix ) ) name_suffix = '' 1083 IF( PRESENT( output_freq ) ) output_freq = '' 1084 IF ( xios_is_valid_file (cdid) ) THEN 1085 CALL xios_solve_inheritance() 1086 CALL xios_is_defined_file_attr ( cdid, name = llexist1, name_suffix = llexist2, output_freq = llexist3) 1087 IF(llexist1) CALL xios_get_file_attr ( cdid, name = name ) 1088 IF(llexist2) CALL xios_get_file_attr ( cdid, name_suffix = name_suffix ) 1089 IF(llexist3) CALL xios_get_file_attr ( cdid, output_freq = output_freq ) 1090 ENDIF 1091 IF ( xios_is_valid_filegroup(cdid) ) THEN 1092 CALL xios_solve_inheritance() 1093 CALL xios_is_defined_filegroup_attr( cdid, name = llexist1, name_suffix = llexist2, output_freq = llexist3) 1094 IF(llexist1) CALL xios_get_filegroup_attr( cdid, name = name ) 1095 IF(llexist2) CALL xios_get_filegroup_attr( cdid, name_suffix = name_suffix ) 1096 IF(llexist3) CALL xios_get_filegroup_attr( cdid, output_freq = output_freq ) 1097 ENDIF 1098 END SUBROUTINE iom_get_file_attr 1099 1100 1101 SUBROUTINE iom_set_grid_attr( cdid, mask ) 1102 CHARACTER(LEN=*) , INTENT(in) :: cdid 1067 1103 LOGICAL, DIMENSION(:,:,:), OPTIONAL, INTENT(in) :: mask 1068 IF ( xios_is_valid_grid (cdname) ) CALL xios_set_grid_attr ( cdname, mask=mask ) 1069 IF ( xios_is_valid_gridgroup(cdname) ) CALL xios_set_gridgroup_attr( cdname, mask=mask ) 1104 IF ( 
xios_is_valid_grid (cdid) ) CALL xios_set_grid_attr ( cdid, mask=mask ) 1105 IF ( xios_is_valid_gridgroup(cdid) ) CALL xios_set_gridgroup_attr( cdid, mask=mask ) 1106 CALL xios_solve_inheritance() 1070 1107 END SUBROUTINE iom_set_grid_attr 1071 1108 … … 1073 1110 SUBROUTINE set_grid( cdgrd, plon, plat ) 1074 1111 !!---------------------------------------------------------------------- 1075 !! *** ROUTINE ***1112 !! *** ROUTINE set_grid *** 1076 1113 !! 1077 1114 !! ** Purpose : define horizontal grids … … 1101 1138 END SELECT 1102 1139 ! 1103 CALL iom_set_domain_attr( "grid_"//cdgrd , mask = zmask(:,:,1) /= 0. )1104 CALL iom_set_grid_attr ( "grid_"//cdgrd//"_3D", mask = zmask(:,:,:) /= 0. )1140 CALL iom_set_domain_attr( "grid_"//cdgrd , mask = RESHAPE(zmask(nldi:nlei,nldj:nlej,1),(/ni,nj /)) /= 0. ) 1141 CALL iom_set_grid_attr ( "grid_"//cdgrd//"_3D", mask = RESHAPE(zmask(nldi:nlei,nldj:nlej,:),(/ni,nj,jpk/)) /= 0. ) 1105 1142 ENDIF 1106 1143 … … 1110 1147 SUBROUTINE set_scalar 1111 1148 !!---------------------------------------------------------------------- 1112 !! *** ROUTINE ***1149 !! *** ROUTINE set_scalar *** 1113 1150 !! 1114 1151 !! ** Purpose : define fake grids for scalar point … … 1126 1163 SUBROUTINE set_xmlatt 1127 1164 !!---------------------------------------------------------------------- 1128 !! *** ROUTINE ***1165 !! *** ROUTINE set_xmlatt *** 1129 1166 !! 1130 1167 !! ** Purpose : automatic definitions of some of the xml attributs... 1131 1168 !! 1132 1169 !!---------------------------------------------------------------------- 1133 CHARACTER(len=6),DIMENSION( 8) :: clsuff ! suffix name1134 1170 CHARACTER(len=1),DIMENSION( 3) :: clgrd ! suffix name 1135 CHARACTER(len= 50) :: clname ! filename1171 CHARACTER(len=256) :: clsuff ! suffix name 1136 1172 CHARACTER(len=1) :: cl1 ! 1 character 1137 1173 CHARACTER(len=2) :: cl2 ! 1 character 1138 CHARACTER(len=255) :: tfo 1139 INTEGER :: idt ! time-step in seconds 1140 INTEGER :: iddss, ihhss ! 
number of seconds in 1 day, 1 hour and 1 year 1141 INTEGER :: iyymo ! number of months in 1 year 1142 INTEGER :: jg, jh, jd, jm, jy ! loop counters 1174 INTEGER :: ji, jg ! loop counters 1143 1175 INTEGER :: ix, iy ! i-,j- index 1144 1176 REAL(wp) ,DIMENSION(11) :: zlontao ! longitudes of tao moorings … … 1150 1182 !!---------------------------------------------------------------------- 1151 1183 ! 1152 idt = NINT( rdttra(1) )1153 iddss = NINT( rday ) ! number of seconds in 1 day1154 ihhss = NINT( rmmss * rhhmm ) ! number of seconds in 1 hour1155 iyymo = NINT( raamo ) ! number of months in 1 year1156 1157 1184 ! frequency of the call of iom_put (attribut: freq_op) 1158 tfo = TRIM(i2str(idt))//'s' 1159 CALL iom_set_field_attr('field_definition', freq_op=tfo) 1160 CALL iom_set_field_attr('SBC' , freq_op=TRIM(i2str(idt* nn_fsbc ))//'s') 1161 CALL iom_set_field_attr('ptrc_T', freq_op=TRIM(i2str(idt* nn_dttrc))//'s') 1162 CALL iom_set_field_attr('diad_T', freq_op=TRIM(i2str(idt* nn_dttrc))//'s') 1185 WRITE(cl1,'(i1)') 1 ; CALL iom_set_field_attr('field_definition', freq_op = cl1//'ts', freq_offset='0ts') 1186 WRITE(cl1,'(i1)') nn_fsbc ; CALL iom_set_field_attr('SBC' , freq_op = cl1//'ts', freq_offset='0ts') 1187 WRITE(cl1,'(i1)') nn_dttrc ; CALL iom_set_field_attr('ptrc_T' , freq_op = cl1//'ts', freq_offset='0ts') 1188 WRITE(cl1,'(i1)') nn_dttrc ; CALL iom_set_field_attr('diad_T' , freq_op = cl1//'ts', freq_offset='0ts') 1163 1189 1164 1190 ! output file names (attribut: name) 1165 clsuff(:) = (/ 'grid_T', 'grid_U', 'grid_V', 'grid_W', 'icemod', 'ptrc_T', 'diad_T', 'scalar' /) 1166 DO jg = 1, SIZE(clsuff) ! grid type 1167 DO jh = 1, 24 ! 1-24 hours 1168 WRITE(cl2,'(i2)') jh 1169 CALL dia_nam( clname, jh * ihhss, clsuff(jg), ldfsec = .TRUE. ) 1170 CALL iom_set_file_attr(TRIM(ADJUSTL(cl2))//'h_'//clsuff(jg), name=TRIM(clname)) 1171 END DO 1172 DO jd = 1, 30 ! 1-30 days 1173 WRITE(cl1,'(i1)') jd 1174 CALL dia_nam( clname, jd * iddss, clsuff(jg), ldfsec = .TRUE. 
) 1175 CALL iom_set_file_attr(cl1//'d_'//clsuff(jg), name=TRIM(clname)) 1176 END DO 1177 DO jm = 1, 11 ! 1-11 months 1178 WRITE(cl1,'(i1)') jm 1179 CALL dia_nam( clname, -jm, clsuff(jg) ) 1180 CALL iom_set_file_attr(cl1//'m_'//clsuff(jg), name=TRIM(clname)) 1181 END DO 1182 DO jy = 1, 50 ! 1-50 years 1183 WRITE(cl2,'(i2)') jy 1184 CALL dia_nam( clname, -jy * iyymo, clsuff(jg) ) 1185 CALL iom_set_file_attr(TRIM(ADJUSTL(cl2))//'y_'//clsuff(jg), name=TRIM(clname)) 1186 END DO 1191 DO ji = 1, 9 1192 WRITE(cl1,'(i1)') ji 1193 CALL iom_update_file_name('file'//cl1) 1194 END DO 1195 DO ji = 1, 99 1196 WRITE(cl2,'(i2.2)') ji 1197 CALL iom_update_file_name('file'//cl2) 1187 1198 END DO 1188 1199 … … 1193 1204 ! Equatorial section (attributs: jbegin, ni, name_suffix) 1194 1205 CALL dom_ngb( 0., 0., ix, iy, cl1 ) 1195 CALL iom_set_domain_attr('Eq'//cl1, zoom_jbegin=iy, zoom_ni=jpiglo) 1196 CALL iom_set_file_attr('Eq'//cl1, name_suffix= '_Eq') 1206 CALL iom_set_domain_attr ('Eq'//cl1, zoom_jbegin=iy, zoom_ni=jpiglo) 1207 CALL iom_get_file_attr ('Eq'//cl1, name_suffix = clsuff ) 1208 CALL iom_set_file_attr ('Eq'//cl1, name_suffix = TRIM(clsuff)//'_Eq') 1209 CALL iom_update_file_name('Eq'//cl1) 1197 1210 END DO 1198 1211 ! TAO moorings (attributs: ibegin, jbegin, name_suffix) … … 1214 1227 SUBROUTINE set_mooring( plon, plat) 1215 1228 !!---------------------------------------------------------------------- 1216 !! *** ROUTINE ***1229 !! *** ROUTINE set_mooring *** 1217 1230 !! 1218 1231 !! ** Purpose : automatic definitions of moorings xml attributs... … … 1223 1236 !!$ CHARACTER(len=1),DIMENSION(4) :: clgrd = (/ 'T', 'U', 'V', 'W' /) ! suffix name 1224 1237 CHARACTER(len=1),DIMENSION(1) :: clgrd = (/ 'T' /) ! suffix name 1225 CHARACTER(len=50) :: clname ! file name 1238 CHARACTER(len=256) :: clname ! file name 1239 CHARACTER(len=256) :: clsuff ! suffix name 1226 1240 CHARACTER(len=1) :: cl1 ! 1 character 1227 1241 CHARACTER(len=6) :: clon,clat ! 
name of longitude, latitude … … 1269 1283 ENDIF 1270 1284 clname = TRIM(ADJUSTL(clat))//TRIM(ADJUSTL(clon)) 1271 CALL iom_set_domain_attr(TRIM(clname)//cl1, zoom_ibegin= ix, zoom_jbegin= iy) 1272 CALL iom_set_file_attr(TRIM(clname)//cl1, name_suffix= '_'//TRIM(clname)) 1285 CALL iom_set_domain_attr (TRIM(clname)//cl1, zoom_ibegin= ix, zoom_jbegin= iy) 1286 CALL iom_get_file_attr (TRIM(clname)//cl1, name_suffix = clsuff ) 1287 CALL iom_set_file_attr (TRIM(clname)//cl1, name_suffix = TRIM(clsuff)//'_'//TRIM(clname)) 1288 CALL iom_update_file_name(TRIM(clname)//cl1) 1273 1289 END DO 1274 1290 END DO … … 1277 1293 END SUBROUTINE set_mooring 1278 1294 1295 1296 SUBROUTINE iom_update_file_name( cdid ) 1297 !!---------------------------------------------------------------------- 1298 !! *** ROUTINE iom_update_file_name *** 1299 !! 1300 !! ** Purpose : 1301 !! 1302 !!---------------------------------------------------------------------- 1303 CHARACTER(LEN=*) , INTENT(in) :: cdid 1304 ! 1305 CHARACTER(LEN=256) :: clname 1306 CHARACTER(LEN=20) :: clfreq 1307 CHARACTER(LEN=20) :: cldate 1308 INTEGER :: idx 1309 INTEGER :: jn 1310 INTEGER :: itrlen 1311 INTEGER :: iyear, imonth, iday, isec 1312 REAL(wp) :: zsec 1313 LOGICAL :: llexist 1314 !!---------------------------------------------------------------------- 1315 1316 DO jn = 1,2 1317 1318 IF( jn == 1 ) CALL iom_get_file_attr( cdid, name = clname, output_freq = clfreq ) 1319 IF( jn == 2 ) CALL iom_get_file_attr( cdid, name_suffix = clname ) 1320 1321 IF ( TRIM(clname) /= '' ) THEN 1322 1323 idx = INDEX(clname,'@expname@') + INDEX(clname,'@EXPNAME@') 1324 DO WHILE ( idx /= 0 ) 1325 clname = clname(1:idx-1)//TRIM(cexper)//clname(idx+9:LEN_TRIM(clname)) 1326 idx = INDEX(clname,'@expname@') + INDEX(clname,'@EXPNAME@') 1327 END DO 1328 1329 idx = INDEX(clname,'@freq@') + INDEX(clname,'@FREQ@') 1330 DO WHILE ( idx /= 0 ) 1331 IF ( TRIM(clfreq) /= '' ) THEN 1332 itrlen = LEN_TRIM(clfreq) 1333 IF ( clfreq(itrlen-1:itrlen) == 'mo' ) 
clfreq = clfreq(1:itrlen-1) 1334 clname = clname(1:idx-1)//TRIM(clfreq)//clname(idx+6:LEN_TRIM(clname)) 1335 ELSE 1336 CALL ctl_stop('error in the name of file id '//TRIM(cdid), & 1337 & ' attribute output_freq is undefined -> cannot replace @freq@ in '//TRIM(clname) ) 1338 ENDIF 1339 idx = INDEX(clname,'@freq@') + INDEX(clname,'@FREQ@') 1340 END DO 1341 1342 idx = INDEX(clname,'@startdate@') + INDEX(clname,'@STARTDATE@') 1343 DO WHILE ( idx /= 0 ) 1344 cldate = iom_sdate( fjulday - rdttra(1) / rday ) 1345 clname = clname(1:idx-1)//TRIM(cldate)//clname(idx+11:LEN_TRIM(clname)) 1346 idx = INDEX(clname,'@startdate@') + INDEX(clname,'@STARTDATE@') 1347 END DO 1348 1349 idx = INDEX(clname,'@startdatefull@') + INDEX(clname,'@STARTDATEFULL@') 1350 DO WHILE ( idx /= 0 ) 1351 cldate = iom_sdate( fjulday - rdttra(1) / rday, ldfull = .TRUE. ) 1352 clname = clname(1:idx-1)//TRIM(cldate)//clname(idx+15:LEN_TRIM(clname)) 1353 idx = INDEX(clname,'@startdatefull@') + INDEX(clname,'@STARTDATEFULL@') 1354 END DO 1355 1356 idx = INDEX(clname,'@enddate@') + INDEX(clname,'@ENDDATE@') 1357 DO WHILE ( idx /= 0 ) 1358 cldate = iom_sdate( fjulday + rdttra(1) / rday * REAL( nitend - nit000, wp ), ld24 = .TRUE. ) 1359 clname = clname(1:idx-1)//TRIM(cldate)//clname(idx+9:LEN_TRIM(clname)) 1360 idx = INDEX(clname,'@enddate@') + INDEX(clname,'@ENDDATE@') 1361 END DO 1362 1363 idx = INDEX(clname,'@enddatefull@') + INDEX(clname,'@ENDDATEFULL@') 1364 DO WHILE ( idx /= 0 ) 1365 cldate = iom_sdate( fjulday + rdttra(1) / rday * REAL( nitend - nit000, wp ), ld24 = .TRUE., ldfull = .TRUE. 
) 1366 clname = clname(1:idx-1)//TRIM(cldate)//clname(idx+13:LEN_TRIM(clname)) 1367 idx = INDEX(clname,'@enddatefull@') + INDEX(clname,'@ENDDATEFULL@') 1368 END DO 1369 1370 IF( jn == 1 ) CALL iom_set_file_attr( cdid, name = clname ) 1371 IF( jn == 2 ) CALL iom_set_file_attr( cdid, name_suffix = clname ) 1372 1373 ENDIF 1374 1375 END DO 1376 1377 END SUBROUTINE iom_update_file_name 1378 1379 1380 FUNCTION iom_sdate( pjday, ld24, ldfull ) 1381 !!---------------------------------------------------------------------- 1382 !! *** ROUTINE iom_sdate *** 1383 !! 1384 !! ** Purpose : send back the date corresponding to the given julian day 1385 !! 1386 !!---------------------------------------------------------------------- 1387 REAL(wp), INTENT(in ) :: pjday ! julian day 1388 LOGICAL , INTENT(in ), OPTIONAL :: ld24 ! true to force 24:00 instead of 00:00 1389 LOGICAL , INTENT(in ), OPTIONAL :: ldfull ! true to get the compleate date: yyyymmdd_hh:mm:ss 1390 ! 1391 CHARACTER(LEN=20) :: iom_sdate 1392 CHARACTER(LEN=50) :: clfmt ! format used to write the date 1393 INTEGER :: iyear, imonth, iday, ihour, iminute, isec 1394 REAL(wp) :: zsec 1395 LOGICAL :: ll24, llfull 1396 ! 1397 IF( PRESENT(ld24) ) THEN ; ll24 = ld24 1398 ELSE ; ll24 = .FALSE. 1399 ENDIF 1400 1401 IF( PRESENT(ldfull) ) THEN ; llfull = ldfull 1402 ELSE ; llfull = .FALSE. 1403 ENDIF 1404 1405 CALL ju2ymds( pjday, iyear, imonth, iday, zsec ) 1406 isec = NINT(zsec) 1407 1408 IF ( ll24 .AND. isec == 0 ) THEN ! 00:00 of the next day -> move to 24:00 of the current day 1409 CALL ju2ymds( pjday - 1., iyear, imonth, iday, zsec ) 1410 isec = 86400 1411 ENDIF 1412 1413 IF( iyear < 10000 ) THEN ; clfmt = "i4.4,2i2.2" ! 
format used to write the date 1414 ELSE ; WRITE(clfmt, "('i',i1,',2i2.2')") INT(LOG10(REAL(iyear,wp))) + 1 1415 ENDIF 1416 1417 IF( llfull ) THEN 1418 clfmt = TRIM(clfmt)//",'_',i2.2,':',i2.2,':',i2.2" 1419 ihour = isec / 3600 1420 isec = MOD(isec, 3600) 1421 iminute = isec / 60 1422 isec = MOD(isec, 60) 1423 WRITE(iom_sdate, '('//TRIM(clfmt)//')') iyear, imonth, iday, ihour, iminute, isec ! date of the end of run 1424 ELSE 1425 WRITE(iom_sdate, '('//TRIM(clfmt)//')') iyear, imonth, iday ! date of the end of run 1426 ENDIF 1427 1428 END FUNCTION iom_sdate 1429 1279 1430 #else 1280 1431 … … 1285 1436 1286 1437 #endif 1287 1288 FUNCTION i2str(int)1289 IMPLICIT NONE1290 INTEGER, INTENT(IN) :: int1291 CHARACTER(LEN=255) :: i2str1292 1293 WRITE(i2str,*) int1294 1295 END FUNCTION i2str1296 1438 1297 1439 !!====================================================================== -
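The new iom_sdate helper above formats a julian day as yyyymmdd, optionally appending _hh:mm:ss, and when ld24 is set it reports 00:00 of the next day as 24:00 of the current day. A minimal Python sketch of the same logic, substituting datetime arithmetic for NEMO's ju2ymds converter and using an arbitrary illustrative epoch:

```python
from datetime import datetime, timedelta

def iom_sdate(jday, ld24=False, ldfull=False):
    """Sketch of iom_sdate: format a (fractional) julian day as a date string."""
    epoch = datetime(1900, 1, 1)            # illustrative epoch, stands in for ju2ymds
    t = epoch + timedelta(days=jday)
    isec = t.hour * 3600 + t.minute * 60 + t.second
    if ld24 and isec == 0:                  # 00:00 of the next day -> 24:00 of current day
        t -= timedelta(days=1)
        isec = 86400
    sdate = t.strftime("%Y%m%d")
    if ldfull:                              # append _hh:mm:ss as the Fortran does
        ih, rem = divmod(isec, 3600)
        im, isec = divmod(rem, 60)
        sdate += f"_{ih:02d}:{im:02d}:{isec:02d}"
    return sdate
```

With ld24 set, the end stamp of a run finishing at midnight reads ..._24:00:00 instead of rolling over into the next day.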
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/LBC/lbclnk.F90
r3768 r4193 283 283 END SUBROUTINE lbc_lnk_3d 284 284 285 SUBROUTINE lbc_bdy_lnk_3d( pt3d, cd_type, psgn, ib_bdy )286 !!---------------------------------------------------------------------287 !! *** ROUTINE lbc_bdy_lnk ***288 !!289 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used290 !! to maintain the same interface with regards to the mpp case291 !!292 !!----------------------------------------------------------------------293 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points294 REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) :: pt3d ! 3D array on which the lbc is applied295 REAL(wp) , INTENT(in ) :: psgn ! control of the sign296 INTEGER :: ib_bdy ! BDY boundary set297 !!298 CALL lbc_lnk_3d( pt3d, cd_type, psgn)299 300 END SUBROUTINE lbc_bdy_lnk_3d301 302 SUBROUTINE lbc_bdy_lnk_2d( pt2d, cd_type, psgn, ib_bdy )303 !!---------------------------------------------------------------------304 !! *** ROUTINE lbc_bdy_lnk ***305 !!306 !! ** Purpose : wrapper rountine to 'lbc_lnk_3d'. This wrapper is used307 !! to maintain the same interface with regards to the mpp case308 !!309 !!----------------------------------------------------------------------310 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points311 REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) :: pt2d ! 3D array on which the lbc is applied312 REAL(wp) , INTENT(in ) :: psgn ! control of the sign313 INTEGER :: ib_bdy ! BDY boundary set314 !!315 CALL lbc_lnk_2d( pt2d, cd_type, psgn)316 317 END SUBROUTINE lbc_bdy_lnk_2d318 319 285 SUBROUTINE lbc_lnk_2d( pt2d, cd_type, psgn, cd_mpp, pval ) 320 286 !!--------------------------------------------------------------------- … … 406 372 END SUBROUTINE lbc_lnk_2d 407 373 374 #endif 375 376 377 SUBROUTINE lbc_bdy_lnk_3d( pt3d, cd_type, psgn, ib_bdy ) 378 !!--------------------------------------------------------------------- 379 !! *** ROUTINE lbc_bdy_lnk *** 380 !! 381 !! 
** Purpose : wrapper routine to 'lbc_lnk_3d'. This wrapper is used 382 !! to maintain the same interface with regards to the mpp 383 !case 384 !! 385 !!---------------------------------------------------------------------- 386 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt3d grid-points 387 REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) :: pt3d ! 3D array on which the lbc is applied 388 REAL(wp) , INTENT(in ) :: psgn ! control of the sign 389 INTEGER :: ib_bdy ! BDY boundary set 390 !! 391 CALL lbc_lnk_3d( pt3d, cd_type, psgn) 392 393 END SUBROUTINE lbc_bdy_lnk_3d 394 395 SUBROUTINE lbc_bdy_lnk_2d( pt2d, cd_type, psgn, ib_bdy ) 396 !!--------------------------------------------------------------------- 397 !! *** ROUTINE lbc_bdy_lnk *** 398 !! 399 !! ** Purpose : wrapper routine to 'lbc_lnk_2d'. This wrapper is used 400 !! to maintain the same interface with regards to the mpp 401 !case 402 !! 403 !!---------------------------------------------------------------------- 404 CHARACTER(len=1) , INTENT(in ) :: cd_type ! nature of pt2d grid-points 405 REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) :: pt2d ! 2D array on which the lbc is applied 406 REAL(wp) , INTENT(in ) :: psgn ! control of the sign 407 INTEGER :: ib_bdy ! BDY boundary set 408 !! 409 CALL lbc_lnk_2d( pt2d, cd_type, psgn) 410 411 END SUBROUTINE lbc_bdy_lnk_2d 412 413 408 414 SUBROUTINE lbc_lnk_2d_e( pt2d, cd_type, psgn, jpri, jprj ) 409 415 !!--------------------------------------------------------------------- … 430 436 END SUBROUTINE lbc_lnk_2d_e 431 437 432 # endif433 438 #endif 434 439 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/LBC/lib_mpp.F90
r3799 r4193 162 162 163 163 ! Arrays used in mpp_lbc_north_3d() 164 REAL(wp), DIMENSION(:,:,:) , ALLOCATABLE, SAVE :: ztab, znorthloc165 REAL(wp), DIMENSION(:,:,:,:), ALLOCATABLE, SAVE :: znorthgloio166 REAL(wp), DIMENSION(:,:,:) , ALLOCATABLE, SAVE :: zfoldwk ! Workspace for message transfers avoiding mpi_allgather164 REAL(wp), DIMENSION(:,:,:) , ALLOCATABLE, SAVE :: tab_3d, xnorthloc 165 REAL(wp), DIMENSION(:,:,:,:), ALLOCATABLE, SAVE :: xnorthgloio 166 REAL(wp), DIMENSION(:,:,:) , ALLOCATABLE, SAVE :: foldwk ! Workspace for message transfers avoiding mpi_allgather 167 167 168 168 ! Arrays used in mpp_lbc_north_2d() 169 REAL(wp), DIMENSION(:,:) , ALLOCATABLE, SAVE :: ztab_2d, znorthloc_2d170 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE, SAVE :: znorthgloio_2d171 REAL(wp), DIMENSION(:,:) , ALLOCATABLE, SAVE :: zfoldwk_2d ! Workspace for message transfers avoiding mpi_allgather169 REAL(wp), DIMENSION(:,:) , ALLOCATABLE, SAVE :: tab_2d, xnorthloc_2d 170 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE, SAVE :: xnorthgloio_2d 171 REAL(wp), DIMENSION(:,:) , ALLOCATABLE, SAVE :: foldwk_2d ! Workspace for message transfers avoiding mpi_allgather 172 172 173 173 ! Arrays used in mpp_lbc_north_e() 174 REAL(wp), DIMENSION(:,:) , ALLOCATABLE, SAVE :: ztab_e, znorthloc_e175 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE, SAVE :: znorthgloio_e174 REAL(wp), DIMENSION(:,:) , ALLOCATABLE, SAVE :: tab_e, xnorthloc_e 175 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE, SAVE :: xnorthgloio_e 176 176 177 177 ! North fold arrays used to minimise the use of allgather operations. Set in nemo_northcomms (nemogcm) so need to be public … … 207 207 & t2p1(jpi,jprecj ,2) , t2p2(jpi,jprecj ,2) , & 208 208 ! 209 & ztab(jpiglo,4,jpk) , znorthloc(jpi,4,jpk) , znorthgloio(jpi,4,jpk,jpni) , &210 & zfoldwk(jpi,4,jpk) , &211 ! 212 & ztab_2d(jpiglo,4) , znorthloc_2d(jpi,4) , znorthgloio_2d(jpi,4,jpni) , &213 & zfoldwk_2d(jpi,4) , &214 ! 
215 & ztab_e(jpiglo,4+2*jpr2dj) , znorthloc_e(jpi,4+2*jpr2dj) , znorthgloio_e(jpi,4+2*jpr2dj,jpni) , &209 & tab_3d(jpiglo,4,jpk) , xnorthloc(jpi,4,jpk) , xnorthgloio(jpi,4,jpk,jpni) , & 210 & foldwk(jpi,4,jpk) , & 211 ! 212 & tab_2d(jpiglo,4) , xnorthloc_2d(jpi,4) , xnorthgloio_2d(jpi,4,jpni) , & 213 & foldwk_2d(jpi,4) , & 214 ! 215 & tab_e(jpiglo,4+2*jpr2dj) , xnorthloc_e(jpi,4+2*jpr2dj) , xnorthgloio_e(jpi,4+2*jpr2dj,jpni) , & 216 216 ! 217 217 & STAT=lib_mpp_alloc ) … … 2179 2179 !!gm Remark : this is very time consumming!!! 2180 2180 ! ! ------------------------ ! 2181 IF( ijpt0 > ijpt1 .OR. iipt0 > iipt1) THEN2181 IF(((nbondi .ne. 0) .AND. (ktype .eq. 2)) .OR. ((nbondj .ne. 0) .AND. (ktype .eq. 1))) THEN 2182 2182 ! there is nothing to be migrated 2183 lmigr = .FALSE.2183 lmigr = .TRUE. 2184 2184 ELSE 2185 lmigr = . TRUE.2185 lmigr = .FALSE. 2186 2186 ENDIF 2187 2187 … … 2598 2598 ityp = -1 2599 2599 ijpjm1 = 3 2600 ztab(:,:,:) = 0.e02601 ! 2602 DO jj = nlcj - ijpj +1, nlcj ! put in znorthloc the last 4 jlines of pt3d2600 tab_3d(:,:,:) = 0.e0 2601 ! 2602 DO jj = nlcj - ijpj +1, nlcj ! put in xnorthloc the last 4 jlines of pt3d 2603 2603 ij = jj - nlcj + ijpj 2604 znorthloc(:,ij,:) = pt3d(:,jj,:)2604 xnorthloc(:,ij,:) = pt3d(:,jj,:) 2605 2605 END DO 2606 2606 ! 2607 ! ! Build in procs of ncomm_north the znorthgloio2607 ! ! 
Build in procs of ncomm_north the xnorthgloio 2608 2608 itaille = jpi * jpk * ijpj 2609 2609 IF ( l_north_nogather ) THEN … … 2615 2615 ij = jj - nlcj + ijpj 2616 2616 DO ji = 1, nlci 2617 ztab(ji+nimpp-1,ij,:) = pt3d(ji,jj,:)2617 tab_3d(ji+nimpp-1,ij,:) = pt3d(ji,jj,:) 2618 2618 END DO 2619 2619 END DO … … 2640 2640 2641 2641 DO jr = 1,nsndto(ityp) 2642 CALL mppsend(5, znorthloc, itaille, isendto(jr,ityp), ml_req_nf(jr) )2642 CALL mppsend(5, xnorthloc, itaille, isendto(jr,ityp), ml_req_nf(jr) ) 2643 2643 END DO 2644 2644 DO jr = 1,nsndto(ityp) 2645 CALL mpprecv(5, zfoldwk, itaille, isendto(jr,ityp))2645 CALL mpprecv(5, foldwk, itaille, isendto(jr,ityp)) 2646 2646 iproc = isendto(jr,ityp) + 1 2647 2647 ildi = nldit (iproc) … … 2650 2650 DO jj = 1, ijpj 2651 2651 DO ji = ildi, ilei 2652 ztab(ji+iilb-1,jj,:) = zfoldwk(ji,jj,:)2652 tab_3d(ji+iilb-1,jj,:) = foldwk(ji,jj,:) 2653 2653 END DO 2654 2654 END DO … … 2665 2665 2666 2666 IF ( ityp .lt. 0 ) THEN 2667 CALL MPI_ALLGATHER( znorthloc , itaille, MPI_DOUBLE_PRECISION, &2668 & znorthgloio, itaille, MPI_DOUBLE_PRECISION, ncomm_north, ierr )2667 CALL MPI_ALLGATHER( xnorthloc , itaille, MPI_DOUBLE_PRECISION, & 2668 & xnorthgloio, itaille, MPI_DOUBLE_PRECISION, ncomm_north, ierr ) 2669 2669 ! 2670 2670 DO jr = 1, ndim_rank_north ! recover the global north array … … 2675 2675 DO jj = 1, ijpj 2676 2676 DO ji = ildi, ilei 2677 ztab(ji+iilb-1,jj,:) = znorthgloio(ji,jj,:,jr)2677 tab_3d(ji+iilb-1,jj,:) = xnorthgloio(ji,jj,:,jr) 2678 2678 END DO 2679 2679 END DO … … 2681 2681 ENDIF 2682 2682 ! 2683 ! The ztabarray has been either:2683 ! The tab_3d array has been either: 2684 2684 ! a. Fully populated by the mpi_allgather operation or 2685 2685 ! b. Had the active points for this domain and northern neighbours populated … … 2688 2688 ! this domain will be identical. 2689 2689 ! 2690 CALL lbc_nfd( ztab, cd_type, psgn ) ! North fold boundary condition2690 CALL lbc_nfd( tab_3d, cd_type, psgn ) ! 
North fold boundary condition 2691 2691 ! 2692 2692 DO jj = nlcj-ijpj+1, nlcj ! Scatter back to pt3d 2693 2693 ij = jj - nlcj + ijpj 2694 2694 DO ji= 1, nlci 2695 pt3d(ji,jj,:) = ztab(ji+nimpp-1,ij,:)2695 pt3d(ji,jj,:) = tab_3d(ji+nimpp-1,ij,:) 2696 2696 END DO 2697 2697 END DO … … 2730 2730 ityp = -1 2731 2731 ijpjm1 = 3 2732 ztab_2d(:,:) = 0.e02733 ! 2734 DO jj = nlcj-ijpj+1, nlcj ! put in znorthloc_2d the last 4 jlines of pt2d2732 tab_2d(:,:) = 0.e0 2733 ! 2734 DO jj = nlcj-ijpj+1, nlcj ! put in xnorthloc_2d the last 4 jlines of pt2d 2735 2735 ij = jj - nlcj + ijpj 2736 znorthloc_2d(:,ij) = pt2d(:,jj)2736 xnorthloc_2d(:,ij) = pt2d(:,jj) 2737 2737 END DO 2738 2738 2739 ! ! Build in procs of ncomm_north the znorthgloio_2d2739 ! ! Build in procs of ncomm_north the xnorthgloio_2d 2740 2740 itaille = jpi * ijpj 2741 2741 IF ( l_north_nogather ) THEN … … 2747 2747 ij = jj - nlcj + ijpj 2748 2748 DO ji = 1, nlci 2749 ztab_2d(ji+nimpp-1,ij) = pt2d(ji,jj)2749 tab_2d(ji+nimpp-1,ij) = pt2d(ji,jj) 2750 2750 END DO 2751 2751 END DO … … 2773 2773 2774 2774 DO jr = 1,nsndto(ityp) 2775 CALL mppsend(5, znorthloc_2d, itaille, isendto(jr,ityp), ml_req_nf(jr) )2775 CALL mppsend(5, xnorthloc_2d, itaille, isendto(jr,ityp), ml_req_nf(jr) ) 2776 2776 END DO 2777 2777 DO jr = 1,nsndto(ityp) 2778 CALL mpprecv(5, zfoldwk_2d, itaille, isendto(jr,ityp))2778 CALL mpprecv(5, foldwk_2d, itaille, isendto(jr,ityp)) 2779 2779 iproc = isendto(jr,ityp) + 1 2780 2780 ildi = nldit (iproc) … … 2783 2783 DO jj = 1, ijpj 2784 2784 DO ji = ildi, ilei 2785 ztab_2d(ji+iilb-1,jj) = zfoldwk_2d(ji,jj)2785 tab_2d(ji+iilb-1,jj) = foldwk_2d(ji,jj) 2786 2786 END DO 2787 2787 END DO … … 2798 2798 2799 2799 IF ( ityp .lt. 
0 ) THEN 2800 CALL MPI_ALLGATHER( znorthloc_2d , itaille, MPI_DOUBLE_PRECISION, &2801 & znorthgloio_2d, itaille, MPI_DOUBLE_PRECISION, ncomm_north, ierr )2800 CALL MPI_ALLGATHER( xnorthloc_2d , itaille, MPI_DOUBLE_PRECISION, & 2801 & xnorthgloio_2d, itaille, MPI_DOUBLE_PRECISION, ncomm_north, ierr ) 2802 2802 ! 2803 2803 DO jr = 1, ndim_rank_north ! recover the global north array … … 2808 2808 DO jj = 1, ijpj 2809 2809 DO ji = ildi, ilei 2810 ztab_2d(ji+iilb-1,jj) = znorthgloio_2d(ji,jj,jr)2810 tab_2d(ji+iilb-1,jj) = xnorthgloio_2d(ji,jj,jr) 2811 2811 END DO 2812 2812 END DO … … 2814 2814 ENDIF 2815 2815 ! 2816 ! The ztab array has been either:2816 ! The tab array has been either: 2817 2817 ! a. Fully populated by the mpi_allgather operation or 2818 2818 ! b. Had the active points for this domain and northern neighbours populated … … 2821 2821 ! this domain will be identical. 2822 2822 ! 2823 CALL lbc_nfd( ztab_2d, cd_type, psgn ) ! North fold boundary condition2823 CALL lbc_nfd( tab_2d, cd_type, psgn ) ! North fold boundary condition 2824 2824 ! 2825 2825 ! … … 2827 2827 ij = jj - nlcj + ijpj 2828 2828 DO ji = 1, nlci 2829 pt2d(ji,jj) = ztab_2d(ji+nimpp-1,ij)2829 pt2d(ji,jj) = tab_2d(ji+nimpp-1,ij) 2830 2830 END DO 2831 2831 END DO … … 2860 2860 ! 2861 2861 ijpj=4 2862 ztab_e(:,:) = 0.e02862 tab_e(:,:) = 0.e0 2863 2863 2864 2864 ij=0 2865 ! put in znorthloc_e the last 4 jlines of pt2d2865 ! put in xnorthloc_e the last 4 jlines of pt2d 2866 2866 DO jj = nlcj - ijpj + 1 - jpr2dj, nlcj +jpr2dj 2867 2867 ij = ij + 1 2868 2868 DO ji = 1, jpi 2869 znorthloc_e(ji,ij)=pt2d(ji,jj)2869 xnorthloc_e(ji,ij)=pt2d(ji,jj) 2870 2870 END DO 2871 2871 END DO 2872 2872 ! 
2873 2873 itaille = jpi * ( ijpj + 2 * jpr2dj ) 2874 CALL MPI_ALLGATHER( znorthloc_e(1,1) , itaille, MPI_DOUBLE_PRECISION, &2875 & znorthgloio_e(1,1,1), itaille, MPI_DOUBLE_PRECISION, ncomm_north, ierr )2874 CALL MPI_ALLGATHER( xnorthloc_e(1,1) , itaille, MPI_DOUBLE_PRECISION, & 2875 & xnorthgloio_e(1,1,1), itaille, MPI_DOUBLE_PRECISION, ncomm_north, ierr ) 2876 2876 ! 2877 2877 DO jr = 1, ndim_rank_north ! recover the global north array … … 2882 2882 DO jj = 1, ijpj+2*jpr2dj 2883 2883 DO ji = ildi, ilei 2884 ztab_e(ji+iilb-1,jj) = znorthgloio_e(ji,jj,jr)2884 tab_e(ji+iilb-1,jj) = xnorthgloio_e(ji,jj,jr) 2885 2885 END DO 2886 2886 END DO … … 2890 2890 ! 2. North-Fold boundary conditions 2891 2891 ! ---------------------------------- 2892 CALL lbc_nfd( ztab_e(:,:), cd_type, psgn, pr2dj = jpr2dj )2892 CALL lbc_nfd( tab_e(:,:), cd_type, psgn, pr2dj = jpr2dj ) 2893 2893 2894 2894 ij = jpr2dj … … 2897 2897 ij = ij +1 2898 2898 DO ji= 1, nlci 2899 pt2d(ji,jj) = ztab_e(ji+nimpp-1,ij)2899 pt2d(ji,jj) = tab_e(ji+nimpp-1,ij) 2900 2900 END DO 2901 2901 END DO -
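The renaming above (ztab to tab_2d, znorthloc to xnorthloc, and so on) does not change the gather pattern: each rank copies its last four j-rows into a local buffer and these are assembled into a global northern band at i-offset nimpp-1. A 0-based NumPy sketch of that assembly step (the function name, rank layout, and overlap values are illustrative assumptions, not NEMO code):

```python
import numpy as np

def gather_north(local_rows, nimpp_list, jpiglo, ijpj=4):
    """Assemble per-rank northern bands into a global (jpiglo, ijpj) array,
    placing each rank's block at global i-offset nimpp-1, as mpp_lbc_north_2d
    does after its allgather / point-to-point exchange."""
    tab = np.zeros((jpiglo, ijpj))
    for loc, nimpp in zip(local_rows, nimpp_list):
        nlci = loc.shape[0]                       # local i-extent
        tab[nimpp - 1:nimpp - 1 + nlci, :] = loc  # overlap columns are overwritten
    return tab
```

Only after the full band is assembled is the north-fold condition (lbc_nfd) applied and the rows scattered back.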
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/LBC/mppini_2.h90
r3818 r4193 122 122 irestj = 1 + MOD( jpjglo - nrecj -1 , jpnj ) 123 123 124 #if defined key_nemocice_decomp 125 ! Change padding to be consistent with CICE 126 ilci(1:jpni-1 ,:) = jpi 127 ilci(jpni ,:) = jpiglo - (jpni - 1) * (jpi - nreci) 128 129 ilcj(:, 1:jpnj-1) = jpj 130 ilcj(:, jpnj) = jpjglo - (jpnj - 1) * (jpj - nrecj) 131 #else 124 132 ilci(1:iresti ,:) = jpi 125 133 ilci(iresti+1:jpni ,:) = jpi-1 … … 127 135 ilcj(:, 1:irestj) = jpj 128 136 ilcj(:, irestj+1:jpnj) = jpj-1 137 #endif 129 138 130 139 IF(lwp) WRITE(numout,*) -
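The #if block above selects between two paddings of the east-west split: in the default scheme the first iresti subdomains get jpi points and the remainder jpi-1, while with key_nemocice_decomp all ranks but the last get jpi and the last absorbs the slack, matching CICE. A Python sketch of both schemes (jpi is taken as NEMO derives it in par_oce, which is an assumption of this sketch; nreci is the overlap width):

```python
def split_default(jpiglo, jpni, nreci):
    """Default NEMO split: first iresti subdomains get jpi, the rest jpi-1."""
    jpi = (jpiglo - nreci + (jpni - 1)) // jpni + nreci   # nominal local width
    iresti = 1 + (jpiglo - nreci - 1) % jpni
    return [jpi] * iresti + [jpi - 1] * (jpni - iresti)

def split_cice(jpiglo, jpni, nreci):
    """key_nemocice_decomp: uniform width, remainder absorbed by the last rank."""
    jpi = (jpiglo - nreci + (jpni - 1)) // jpni + nreci
    return [jpi] * (jpni - 1) + [jpiglo - (jpni - 1) * (jpi - nreci)]

def covers(ilci, jpiglo, nreci):
    # each internal boundary shares nreci overlap columns
    return sum(ilci) - (len(ilci) - 1) * nreci == jpiglo
```

Both schemes tile the same global extent; only where the remainder lands differs.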
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/SBC/geo2ocean.F90
r2715 r4193 187 187 & gsinf(jpi,jpj), gcosf(jpi,jpj), STAT=ierr ) 188 188 IF(lk_mpp) CALL mpp_sum( ierr ) 189 IF( ierr /= 0 ) CALL ctl_stop(' STOP', 'angle_msh_geo: unable to allocate arrays' )189 IF( ierr /= 0 ) CALL ctl_stop('angle: unable to allocate arrays' ) 190 190 191 191 ! ============================= ! … … 361 361 & gsinlat(jpi,jpj,4) , gcoslat(jpi,jpj,4) , STAT=ierr ) 362 362 IF( lk_mpp ) CALL mpp_sum( ierr ) 363 IF( ierr /= 0 ) CALL ctl_stop(' STOP', 'angle_msh_geo: unable to allocate arrays' )363 IF( ierr /= 0 ) CALL ctl_stop('geo2oce: unable to allocate arrays' ) 364 364 ENDIF 365 365 … … 438 438 !!---------------------------------------------------------------------- 439 439 440 IF( ALLOCATED( gsinlon ) ) THEN440 IF( .NOT. ALLOCATED( gsinlon ) ) THEN 441 441 ALLOCATE( gsinlon(jpi,jpj,4) , gcoslon(jpi,jpj,4) , & 442 442 & gsinlat(jpi,jpj,4) , gcoslat(jpi,jpj,4) , STAT=ierr ) 443 443 IF( lk_mpp ) CALL mpp_sum( ierr ) 444 IF( ierr /= 0 ) CALL ctl_stop(' STOP', 'angle_msh_geo: unable to allocate arrays' )444 IF( ierr /= 0 ) CALL ctl_stop('oce2geo: unable to allocate arrays' ) 445 445 ENDIF 446 446 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/SBC/sbc_oce.F90
r3680 r4193 70 70 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: sfx , sfx_b !: salt flux [PSU/m2/s] 71 71 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: emp_tot !: total E-P over ocean and ice [Kg/m2/s] 72 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: fmmflx !: freshwater budget: freezing/melting [Kg/m2/s] 72 73 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: rnf , rnf_b !: river runoff [Kg/m2/s] 73 74 !! … … 115 116 & qsr_tot(jpi,jpj) , qsr (jpi,jpj) , & 116 117 & emp (jpi,jpj) , emp_b(jpi,jpj) , & 117 & sfx (jpi,jpj) , sfx_b(jpi,jpj) , emp_tot(jpi,jpj) 118 & sfx (jpi,jpj) , sfx_b(jpi,jpj) , emp_tot(jpi,jpj), fmmflx(jpi,jpj), STAT=ierr(2) ) 118 119 ! 119 120 ALLOCATE( rnf (jpi,jpj) , sbc_tsc (jpi,jpj,jpts) , qsr_hc (jpi,jpj,jpk) , & -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/SBC/sbccpl.F90
r3680 r4193 407 407 SELECT CASE( TRIM( sn_rcv_emp%cldes ) ) 408 408 CASE( 'oce only' ) ; srcv( jpr_oemp )%laction = .TRUE. 409 CASE( 'conservative' ) ; srcv( (/jpr_rain, jpr_snow, jpr_ievp, jpr_tevp/) )%laction = .TRUE. 409 CASE( 'conservative' ) 410 srcv( (/jpr_rain, jpr_snow, jpr_ievp, jpr_tevp/) )%laction = .TRUE. 411 IF ( k_ice <= 1 ) srcv(jpr_ivep)%laction = .FALSE. 410 412 CASE( 'oce and ice' ) ; srcv( (/jpr_ievp, jpr_sbpr, jpr_semp, jpr_oemp/) )%laction = .TRUE. 411 413 CASE default ; CALL ctl_stop( 'sbc_cpl_init: wrong definition of sn_rcv_emp%cldes' ) … … 465 467 CALL ctl_stop( 'sbc_cpl_init: namsbc_cpl namelist mismatch between sn_rcv_qns%cldes and sn_rcv_dqnsdt%cldes' ) 466 468 ! ! ------------------------- ! 467 ! ! Ice Qsr penetration !468 ! ! ------------------------- !469 ! fraction of net shortwave radiation which is not absorbed in the thin surface layer470 ! and penetrates inside the ice cover ( Maykut and Untersteiner, 1971 ; Elbert anbd Curry, 1993 )471 ! Coupled case: since cloud cover is not received from atmosphere472 ! ===> defined as constant value -> definition done in sbc_cpl_init473 fr1_i0(:,:) = 0.18474 fr2_i0(:,:) = 0.82475 ! ! ------------------------- !476 469 ! ! 10m wind module ! 477 470 ! ! ------------------------- ! … … 508 501 ! Allocate taum part of frcv which is used even when not received as coupling field 509 502 IF ( .NOT. srcv(jpr_taum)%laction ) ALLOCATE( frcv(jpr_taum)%z3(jpi,jpj,srcv(jn)%nct) ) 503 ! Allocate itx1 and ity1 as they are used in sbc_cpl_ice_tau even if srcv(jpr_itx1)%laction = .FALSE. 504 IF( k_ice /= 0 ) THEN 505 IF ( .NOT. srcv(jpr_itx1)%laction ) ALLOCATE( frcv(jpr_itx1)%z3(jpi,jpj,srcv(jn)%nct) ) 506 IF ( .NOT. srcv(jpr_ity1)%laction ) ALLOCATE( frcv(jpr_ity1)%z3(jpi,jpj,srcv(jn)%nct) ) 507 END IF 510 508 511 509 ! ================================ ! … … 911 909 !! third as 2 components on the cp_ice_msh point 912 910 !! 913 !! In 'oce and ice' case, only one vector stress field911 !! 
Except in 'oce and ice' case, only one vector stress field 914 912 !! is received. It has already been processed in sbc_cpl_rcv 915 913 !! so that it is now defined as (i,j) components given at U- 916 !! and V-points, respectively. Therefore, hereonly the third 914 !! and V-points, respectively. Therefore, only the third 917 915 !! transformation is done and only if the ice-grid is a 'I'-grid. 918 916 !! … 1329 1327 END SELECT 1330 1328 1329 ! Ice Qsr penetration used (only?) in lim2 or lim3 1330 ! fraction of net shortwave radiation which is not absorbed in the thin surface layer 1331 ! and penetrates inside the ice cover ( Maykut and Untersteiner, 1971 ; Ebert and Curry, 1993 ) 1332 ! Coupled case: since cloud cover is not received from atmosphere 1333 ! ===> defined as constant value -> definition done in sbc_cpl_init 1334 fr1_i0(:,:) = 0.18 1335 fr2_i0(:,:) = 0.82 1336 1337 1331 1338 CALL wrk_dealloc( jpi,jpj, zcptn, ztmp, zicefr ) 1332 1339 ! -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/SBC/sbcmod.F90
r3764 r4193 146 146 sfx(:,:) = 0.0_wp ! the salt flux due to freezing/melting will be computed (i.e. will be non-zero) 147 147 ! only if sea-ice is present 148 149 fmmflx(:,:) = 0.0_wp ! freezing-melting array initialisation 148 150 149 151 ! ! restartability … … 218 220 IF( nsbc == 6 ) WRITE(numout,*) ' MFS Bulk formulation' 219 221 ENDIF 222 ! 223 CALL sbc_ssm_init ! Sea-surface mean fields initialisation 220 224 ! 221 225 IF( ln_ssr ) CALL sbc_ssr_init ! Sea-Surface Restoring initialisation … … 362 366 ! (includes virtual salt flux beneath ice 363 367 ! in linear free surface case) 368 CALL iom_put( "fmmflx", fmmflx ) ! Freezing-melting water flux 364 369 CALL iom_put( "qt" , qns + qsr ) ! total heat flux 365 370 CALL iom_put( "qns" , qns ) ! solar heat flux -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/SOL/solmat.F90
r3609 r4193 30 30 USE lbclnk ! lateral boudary conditions 31 31 USE lib_mpp ! distributed memory computing 32 USE c1d ! 1D vertical configuration 32 33 USE in_out_manager ! I/O manager 33 34 USE timing ! timing … … 271 272 272 273 ! SOR and PCG solvers 274 IF( lk_c1d ) CALL lbc_lnk( gcdmat, 'T', 1._wp ) ! 1D case bmask =/0 but gcdmat not define everywhere 273 275 DO jj = 1, jpj 274 276 DO ji = 1, jpi -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/TRA/eosbn2.F90
r3625 r4193 675 675 676 676 677 FUNCTION tfreez( psal ) RESULT( ptf )677 FUNCTION tfreez( psal, pdep ) RESULT( ptf ) 678 678 !!---------------------------------------------------------------------- 679 679 !! *** ROUTINE tfreez *** … 688 688 !!---------------------------------------------------------------------- 689 689 REAL(wp), DIMENSION(jpi,jpj), INTENT(in ) :: psal ! salinity [psu] 690 REAL(wp), DIMENSION(jpi,jpj), INTENT(in ), OPTIONAL :: pdep ! depth [decibars] 690 691 ! Leave result array automatic rather than making explicitly allocated 691 692 REAL(wp), DIMENSION(jpi,jpj) :: ptf ! freezing temperature [Celsius] 692 693 … 694 695 ptf(:,:) = ( - 0.0575_wp + 1.710523e-3_wp * SQRT( psal(:,:) ) & 695 696 & - 2.154996e-4_wp * psal(:,:) ) * psal(:,:) 697 IF ( PRESENT( pdep ) ) THEN 698 ptf(:,:) = ptf(:,:) - 7.53e-4_wp * pdep(:,:) 699 ENDIF 696 700 ! 697 701 END FUNCTION tfreez -
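The change above adds an optional depth argument to tfreez: the freezing point is the usual polynomial in salinity, lowered by 7.53e-4 degrees per decibar when pdep is supplied. A scalar Python sketch of the same formula:

```python
import math

def tfreez(psal, pdep=None):
    """Sea-water freezing temperature [deg C] from salinity [psu], with the
    optional pressure correction [decibars] introduced by this changeset."""
    ptf = (-0.0575 + 1.710523e-3 * math.sqrt(psal) - 2.154996e-4 * psal) * psal
    if pdep is not None:
        ptf -= 7.53e-4 * pdep
    return ptf
```

At 35 psu this gives about -1.92 degrees at the surface, slightly colder at depth.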
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/OPA_SRC/step.F90
r3970 r4193 298 298 ! 299 299 #if defined key_iomput 300 IF( kstp == nitend 300 IF( kstp == nitend .OR. indic < 0 ) CALL xios_context_finalize() ! needed for XIOS+AGRIF 301 301 #endif 302 302 ! -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/SAS_SRC/daymod.F90
r3851 r4193 246 246 nday_year = 1 247 247 nsec_year = ndt05 248 IF( nsec1jan000 >= 2 * (2**30 - nsecd * nyear_len(1) / 2 ) ) THEN ! test integer 4 max value 249 CALL ctl_stop( 'The number of seconds between Jan. 1st 00h of nit000 year and Jan. 1st 00h ', & 250 & 'of the current year is exceeding the INTEGER 4 max VALUE: 2^31-1 -> 68.09 years in seconds', & 251 & 'You must do a restart at higher frequency (or remove this STOP and recompile everything in I8)' ) 252 ENDIF 248 253 nsec1jan000 = nsec1jan000 + nsecd * nyear_len(1) 249 254 IF( nleapy == 1 ) CALL day_mth -
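The new guard in the SAS daymod above stops the run before nsec1jan000 overflows a 4-byte integer; 2^31-1 seconds is roughly 68.1 years, which is where the "68.09 years" in the stop message comes from. A Python check of the threshold arithmetic (the comparison is written, as in the Fortran, so that the test itself cannot overflow):

```python
def would_overflow(nsec1jan000, nsecd, nyear_len):
    """Mirror of the daymod guard: true when adding another year of seconds
    would push nsec1jan000 past the INTEGER*4 maximum (2**31 - 1)."""
    return nsec1jan000 >= 2 * (2**30 - nsecd * nyear_len // 2)

# the figure quoted in the stop message, using 365-day years
years_i4 = (2**31 - 1) / (86400 * 365)
```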
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/SAS_SRC/diawri.F90
r3331 r4193 259 259 CALL histdef( nid_T, "sowaflup", "Net Upward Water Flux" , "Kg/m2/s", & ! (emp-rnf) 260 260 & jpi, jpj, nh_T, 1 , 1, 1 , -99 , 32, clop, zsto, zout ) 261 CALL histdef( nid_T, "so waflcd", "concentration/dilution water flux" , "kg/m2/s", & ! (emps-rnf)262 &jpi, jpj, nh_T, 1 , 1, 1 , -99 , 32, clop, zsto, zout )261 CALL histdef( nid_T, "sosfldow", "downward salt flux" , "PSU/m2/s", & ! (sfx) 262 & jpi, jpj, nh_T, 1 , 1, 1 , -99 , 32, clop, zsto, zout ) 263 263 CALL histdef( nid_T, "sohefldo", "Net Downward Heat Flux" , "W/m2" , & ! qns + qsr 264 264 & jpi, jpj, nh_T, 1 , 1, 1 , -99 , 32, clop, zsto, zout ) … … 309 309 CALL histwrite( nid_T, "sst_m", it, sst_m, ndim_hT, ndex_hT ) ! sea surface temperature 310 310 CALL histwrite( nid_T, "sss_m", it, sss_m, ndim_hT, ndex_hT ) ! sea surface salinity 311 CALL histwrite( nid_T, "sowaflup", it, emp , ndim_hT, ndex_hT ) ! upward water flux 312 CALL histwrite( nid_T, "sowaflcd", it, emps , ndim_hT, ndex_hT ) ! c/d water flux 311 CALL histwrite( nid_T, "sowaflup", it, (emp - rnf ) , ndim_hT, ndex_hT ) ! upward water flux 312 CALL histwrite( nid_T, "sosfldow", it, sfx , ndim_hT, ndex_hT ) ! downward salt flux 313 ! (includes virtual salt flux beneath ice 314 ! in linear free surface case) 315 313 316 CALL histwrite( nid_T, "sohefldo", it, qns + qsr , ndim_hT, ndex_hT ) ! total heat flux 314 317 CALL histwrite( nid_T, "soshfldo", it, qsr , ndim_hT, ndex_hT ) ! solar heat flux -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/SAS_SRC/nemogcm.F90
r3769 r4193 382 382 USE diawri , ONLY: dia_wri_alloc 383 383 USE dom_oce , ONLY: dom_oce_alloc 384 ! 385 INTEGER :: ierr 384 USE oce , ONLY : sshn, sshb, snwice_mass, snwice_mass_b, snwice_fmass 385 ! 386 INTEGER :: ierr,ierr4 386 387 !!---------------------------------------------------------------------- 387 388 ! … … 389 390 ierr = ierr + dom_oce_alloc () ! ocean domain 390 391 ierr = ierr + lib_mpp_alloc (numout) ! mpp exchanges 392 ALLOCATE( snwice_mass(jpi,jpj) , snwice_mass_b(jpi,jpj), & 393 & snwice_fmass(jpi,jpj), STAT= ierr4 ) 394 ierr = ierr + ierr4 391 395 ! 392 396 IF( lk_mpp ) CALL mpp_sum( ierr ) -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/SAS_SRC/sbcssm.F90
r3364 r4193 81 81 82 82 ! 83 IF (kt == nn_it000 ) CALL sbc_ssm_init() 84 83 85 IF( nn_timing == 1 ) CALL timing_start( 'sbc_ssm') 84 86 … … 100 102 tsn(:,:,1,jp_tem) = sst_m(:,:) 101 103 tsn(:,:,1,jp_sal) = sss_m(:,:) 104 IF ( nn_ice == 1 ) THEN 105 tsb(:,:,1,jp_tem) = sst_m(:,:) 106 tsb(:,:,1,jp_sal) = sss_m(:,:) 107 ENDIF 102 108 ub (:,:,1 ) = ssu_m(:,:) 103 109 vb (:,:,1 ) = ssv_m(:,:) … … 135 141 TYPE(FLD_N) :: sn_usp, sn_vsp, sn_ssh 136 142 ! 137 NAMELIST/namsbc_s sm/cn_dir, ln_3d_uv, sn_tem, sn_sal, sn_usp, sn_vsp, sn_ssh143 NAMELIST/namsbc_sas/cn_dir, ln_3d_uv, sn_tem, sn_sal, sn_usp, sn_vsp, sn_ssh 138 144 139 145 !!---------------------------------------------------------------------- … … 151 157 ! 152 158 REWIND( numnam ) ! read in namlist namsbc_ssm 153 READ ( numnam, namsbc_s sm)159 READ ( numnam, namsbc_sas ) 154 160 ! ! store namelist information in an array 155 161 ! ! Control print 156 162 IF(lwp) THEN 157 163 WRITE(numout,*) 158 WRITE(numout,*) 'sbc_s sm: standalone surface scheme '164 WRITE(numout,*) 'sbc_sas : standalone surface scheme ' 159 165 WRITE(numout,*) '~~~~~~~~~~~ ' 160 WRITE(numout,*) ' Namelist namsbc_s sm'166 WRITE(numout,*) ' Namelist namsbc_sas' 161 167 WRITE(numout,*) 162 168 ENDIF … … 273 279 ! so allocate enough of arrays to use 274 280 ! 281 ierr3 = 0 275 282 jpm = MAX(jp_tem, jp_sal) 276 283 ALLOCATE( tsn(jpi,jpj,1,jpm), STAT=ierr0 ) 277 284 ALLOCATE( ub(jpi,jpj,1) , STAT=ierr1 ) 278 285 ALLOCATE( vb(jpi,jpj,1) , STAT=ierr2 ) 279 ierr = ierr0 + ierr1 + ierr2 286 IF ( nn_ice == 1 ) ALLOCATE( tsb(jpi,jpj,1,jpm), STAT=ierr3 ) 287 ierr = ierr0 + ierr1 + ierr2 + ierr3 280 288 IF( ierr > 0 ) THEN 281 289 CALL ctl_stop('sbc_ssm_init: unable to allocate surface arrays') -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zfechem.F90
r3780 r4193 129 129 zoxy = trn(ji,jj,jk,jpoxy) * ( rhop(ji,jj,jk) / 1.e3 ) 130 130 ! Fe2+ oxydation rate from Santana-Casiano et al. (2005) 131 zkox = 35.407 - 6.7109 * zph + 0.5342 * zph * zph - 5362.6 / ( tsn(ji,jj, 1,jp_tem) + 273.15 ) &131 zkox = 35.407 - 6.7109 * zph + 0.5342 * zph * zph - 5362.6 / ( tsn(ji,jj,jk,jp_tem) + 273.15 ) & 132 132 & - 0.04406 * SQRT( tsn(ji,jj,jk,jp_sal) ) - 0.002847 * tsn(ji,jj,jk,jp_sal) 133 133 zkox = ( 10.** zkox ) * spd … … 263 263 zdep = MIN( 1., 1000. / fsdept(ji,jj,jk) ) 264 264 zlam1b = xlam1 * MAX( 0.e0, ( trn(ji,jj,jk,jpfer) * 1.e9 - ztotlig(ji,jj,jk) ) ) 265 zcoag = zfeequi * zlam1b * zstep + 1E-4 * ( 1. - zlamfac ) * zdep * zstep * zfecoll265 zcoag = zfeequi * zlam1b * zstep + 1E-4 * ( 1. - zlamfac ) * zdep * zstep * trn(ji,jj,jk,jpfer) 266 266 267 267 ! Compute the coagulation of colloidal iron. This parameterization … … 278 278 tra(ji,jj,jk,jpsfe) = tra(ji,jj,jk,jpsfe) + zscave * zdenom1 + zaggdfea + zaggdfeb 279 279 #else 280 zlam1b = 3.53E3 * trn(ji,jj,jk,jpgoc) * xdiss(ji,jj,jk) + 1E-4 * ( 1. - zlamfac ) * zdep280 zlam1b = 3.53E3 * trn(ji,jj,jk,jpgoc) * xdiss(ji,jj,jk) 281 281 zaggdfeb = zlam1b * zstep * zfecoll 282 282 ! -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zflx.F90
r3481 r4193 109 109 ! then the first atmospheric CO2 record read is at years(1) 110 110 zyr_dec = REAL( nyear + nn_offset, wp ) + REAL( nday_year, wp ) / REAL( nyear_len(1), wp ) 111 jm = 2112 DO WHILE( jm <= nmaxrec .AND. years(jm -1) < zyr_dec .AND. years(jm) >=zyr_dec ) ; jm = jm + 1 ; END DO111 jm = 1 112 DO WHILE( jm <= nmaxrec .AND. years(jm) < zyr_dec ) ; jm = jm + 1 ; END DO 113 113 iind = jm ; iindm1 = jm - 1 114 114 zdco2dt = ( atcco2h(iind) - atcco2h(iindm1) ) / ( years(iind) - years(iindm1) + rtrn ) … … 196 196 END DO 197 197 198 t_oce_co2_flx = t_oce_co2_flx + glob_sum( oce_co2(:,:) ) 199 t_atm_co2_flx = glob_sum( satmco2(:,:) * patm(:,:) * e1e2t(:,:) )! Total atmospheric pCO2198 t_oce_co2_flx = t_oce_co2_flx + glob_sum( oce_co2(:,:) ) ! Cumulative Total Flux of Carbon 199 t_atm_co2_flx = glob_sum( satmco2(:,:) * e1e2t(:,:) ) ! Total atmospheric pCO2 200 200 201 201 IF(ln_ctl) THEN ! print mean trends (used for debugging) -
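The corrected loop above starts from the first atmospheric CO2 record and advances while years(jm) < zyr_dec, so that iind/iindm1 bracket the target decimal year before the linear interpolation of atcco2h (the old jm = 2 variant could mis-index the record pair). A 0-based Python sketch, without the rtrn regularisation and assuming the target year lies inside the record range:

```python
def atm_co2(years, atcco2h, zyr_dec):
    """Find the first record at or after the target year, then interpolate
    the CO2 history linearly between the bracketing records."""
    jm = 0
    while jm < len(years) and years[jm] < zyr_dec:
        jm += 1
    iind, iindm1 = jm, jm - 1
    slope = (atcco2h[iind] - atcco2h[iindm1]) / (years[iind] - years[iindm1])
    return atcco2h[iindm1] + slope * (zyr_dec - years[iindm1])
```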
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zmeso.F90
r3856 r4193 140 140 ! ---------------------------------- 141 141 # if ! defined key_kriest 142 zgrazffeg = grazflux * zstep * wsbio4(ji,jj,jk) & 143 ! & * tgfunc2(ji,jj,jk) * trn(ji,jj,jk,jpgoc) * trn(ji,jj,jk,jpmes) 144 & * 2. * trn(ji,jj,jk,jpgoc) * trn(ji,jj,jk,jpmes) 142 zgrazffeg = grazflux * zstep * wsbio4(ji,jj,jk) * trn(ji,jj,jk,jpgoc) * trn(ji,jj,jk,jpmes) 145 143 zgrazfffg = zgrazffeg * trn(ji,jj,jk,jpbfe) / (trn(ji,jj,jk,jpgoc) + rtrn) 146 144 # endif 147 zgrazffep = grazflux * zstep * wsbio3(ji,jj,jk) & 148 ! & * tgfunc2(ji,jj,jk) * trn(ji,jj,jk,jppoc) * trn(ji,jj,jk,jpmes) 149 & * 2. * trn(ji,jj,jk,jppoc) * trn(ji,jj,jk,jpmes) 145 zgrazffep = grazflux * zstep * wsbio3(ji,jj,jk) * trn(ji,jj,jk,jppoc) * trn(ji,jj,jk,jpmes) 150 146 zgrazfffp = zgrazffep * trn(ji,jj,jk,jpsfe) / (trn(ji,jj,jk,jppoc) + rtrn) 151 147 ! … … 154 150 ! Compute the proportion of filter feeders 155 151 zproport = (zgrazffep + zgrazffeg)/(rtrn + zgraztot) 152 ! Compute fractionation of aggregates. It is assumed that diatoms based aggregates are more prone to fractionation 153 ! since they are more porous (marine snow instead of fecal pellets) 156 154 zratio = trn(ji,jj,jk,jpgsi) / ( trn(ji,jj,jk,jpgoc) + rtrn ) 157 155 zratio2 = zratio * zratio 158 ! zfrac = zproport * 0.15 * zstep * & 159 ! ( 0.2 + 0.8 * zratio2 / ( 1.5**2 + zratio2 ) ) & 160 ! *trn(ji,jj,jk,jpmes)/3E-7 *trn(ji,jj,jk,jpgoc) 161 zfrac = zproport * grazflux * zstep * wsbio4(ji,jj,jk) & 162 & * ( 0.1 + 3.9 * zratio2 / ( 1.**2 + zratio2 ) ) & 163 & * 2. * trn(ji,jj,jk,jpmes) * trn(ji,jj,jk,jpgoc) 156 zfrac = zproport * zgrazffeg * ( 0.1 + 3.9 * zratio2 / ( 1.**2 + zratio2 ) ) 164 157 165 158 zfracfe = zfrac * trn(ji,jj,jk,jpbfe) / (trn(ji,jj,jk,jpgoc) + rtrn) … … 193 186 zepshert = epsher2 * MIN( 1., zncratio ) 194 187 zepsherv = zepshert * MIN( 1., zgrasrat / ferat3 ) 195 zgrarem2 = zgraztot * ( 1. - zepsherv - unass2 ) + zrespz2 &196 & + ( 1. - zepsherv - unass2 ) /( 1. 
- zepsherv + rtrn) * ztortz2197 zgrafer2 = zgraztot * MAX( 0. , ( 1. - unass2 ) * zgrasrat - ferat3 * zepsherv ) &198 & + ferat3 * ( zrespz2 + ( 1. - zepsherv - unass2 ) /( 1. - zepsherv + rtrn) * ztortz2 )188 zgrarem2 = zgraztot * ( 1. - zepsherv - unass2 ) + zrespz2 & 189 & + ( 1. - zepsherv - unass2 ) /( 1. - zepsherv ) * ztortz2 190 zgrafer2 = zgraztot * MAX( 0. , ( 1. - unass2 ) * zgrasrat - ferat3 * zepsherv ) & 191 & + ferat3 * ( zrespz2 + ( 1. - zepsherv - unass2 ) /( 1. - zepsherv ) * ztortz2 ) 199 192 zgrapoc2 = zgraztot * unass2 200 193 … … 208 201 tra(ji,jj,jk,jpdic) = tra(ji,jj,jk,jpdic) + zgrarsig 209 202 tra(ji,jj,jk,jptal) = tra(ji,jj,jk,jptal) + rno3 * zgrarsig 210 #if defined key_kriest 211 tra(ji,jj,jk,jppoc) = tra(ji,jj,jk,jppoc) + zgrapoc2 212 tra(ji,jj,jk,jpnum) = tra(ji,jj,jk,jpnum) + zgrapoc2 * xkr_dmeso 213 tra(ji,jj,jk,jpsfe) = tra(ji,jj,jk,jpsfe) + zgraztotf * unass2 214 #else 215 tra(ji,jj,jk,jpgoc) = tra(ji,jj,jk,jpgoc) + zgrapoc2 - zfrac 216 tra(ji,jj,jk,jppoc) = tra(ji,jj,jk,jppoc) + zfrac 217 tra(ji,jj,jk,jpbfe) = tra(ji,jj,jk,jpbfe) + zgraztotf * unass2 - zfracfe 218 tra(ji,jj,jk,jpsfe) = tra(ji,jj,jk,jpsfe) + zfracfe 219 220 #endif 203 221 204 zmortz2 = ztortz2 + zrespz2 222 zmortzgoc = unass2 / ( 1. - zepsherv + rtrn) * ztortz2205 zmortzgoc = unass2 / ( 1. 
- zepsherv ) * ztortz2 223 206 tra(ji,jj,jk,jpmes) = tra(ji,jj,jk,jpmes) - zmortz2 + zepsherv * zgraztot 224 207 tra(ji,jj,jk,jpdia) = tra(ji,jj,jk,jpdia) - zgrazd … … 242 225 #if defined key_kriest 243 226 znumpoc = trn(ji,jj,jk,jpnum) / ( trn(ji,jj,jk,jppoc) + rtrn ) 244 tra(ji,jj,jk,jppoc) = tra(ji,jj,jk,jppoc) + zmortzgoc - zgrazpoc - zgrazffep 245 tra(ji,jj,jk,jpnum) = tra(ji,jj,jk,jpnum) - zgrazpoc * znumpoc &246 & + zmortzgoc * xkr_dmeso - zgrazffep * znumpoc * wsbio4(ji,jj,jk) / ( wsbio3(ji,jj,jk) + rtrn )247 tra(ji,jj,jk,jpsfe) = tra(ji,jj,jk,jpsfe) + ferat3 * zmortz2 - zgrazfffp - zgrazpof 227 tra(ji,jj,jk,jppoc) = tra(ji,jj,jk,jppoc) + zmortzgoc - zgrazpoc - zgrazffep + zgrapoc2 228 tra(ji,jj,jk,jpnum) = tra(ji,jj,jk,jpnum) - zgrazpoc * znumpoc + zgrapoc2 * xkr_dmeso & 229 & + zmortzgoc * xkr_dmeso - zgrazffep * znumpoc * wsbio4(ji,jj,jk) / ( wsbio3(ji,jj,jk) + rtrn ) 230 tra(ji,jj,jk,jpsfe) = tra(ji,jj,jk,jpsfe) + ferat3 * zmortz2 - zgrazfffp - zgrazpof + zgraztotf * unass2 248 231 #else 249 tra(ji,jj,jk,jppoc) = tra(ji,jj,jk,jppoc) - zgrazpoc - zgrazffep 250 tra(ji,jj,jk,jpgoc) = tra(ji,jj,jk,jpgoc) + zmortzgoc - zgrazffeg 251 tra(ji,jj,jk,jpsfe) = tra(ji,jj,jk,jpsfe) - zgrazpof - zgrazfffp 252 tra(ji,jj,jk,jpbfe) = tra(ji,jj,jk,jpbfe) + ferat3 * zmortzgoc - zgrazfffg 232 tra(ji,jj,jk,jppoc) = tra(ji,jj,jk,jppoc) - zgrazpoc - zgrazffep + zfrac 233 tra(ji,jj,jk,jpgoc) = tra(ji,jj,jk,jpgoc) + zmortzgoc - zgrazffeg + zgrapoc2 - zfrac 234 tra(ji,jj,jk,jpsfe) = tra(ji,jj,jk,jpsfe) - zgrazpof - zgrazfffp + zfracfe 235 tra(ji,jj,jk,jpbfe) = tra(ji,jj,jk,jpbfe) + ferat3 * zmortzgoc - zgrazfffg + zgraztotf * unass2 - zfracfe 253 236 #endif 254 255 237 END DO 256 238 END DO -
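The revised flux-feeding fractionation in p4zmeso.F90 above scales the large-particle grazing flux zgrazffeg by a factor that grows with the squared opal-to-carbon ratio of aggregates. A minimal Python sketch of that factor — the 0.1 and 3.9 constants and the zproport * zgrazffeg structure are taken from the hunk above; everything else (names, sample values) is illustrative:

```python
def frac_factor(zratio):
    """Fractionation factor applied to zgrazffeg as a function of
    zratio = GSi/GOC; rises from 0.1 (no opal) toward 4.0 (opal-rich)."""
    zratio2 = zratio * zratio
    return 0.1 + 3.9 * zratio2 / (1.0**2 + zratio2)

def zfrac(zproport, zgrazffeg, zratio):
    """Fraction of the large-particle (GOC) pool broken into small POC
    by filter feeders, as in the updated p4zmeso hunk."""
    return zproport * zgrazffeg * frac_factor(zratio)
```

The saturating form means diatom-derived, porous aggregates (high GSi/GOC) fragment up to ~40 times more readily than carbonate-ballasted fecal pellets.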
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zprod.F90
r3686 r4193 201 201 zconctemp2 = trn(ji,jj,jk,jpdia) - zconctemp 202 202 ! 203 zpislopead (ji,jj,jk) = pislope * ( 1.+ zadap * EXP( enano(ji,jj,jk) ) )203 zpislopead (ji,jj,jk) = pislope * ( 1.+ zadap * EXP( -0.21 * enano(ji,jj,jk) ) ) 204 204 zpislopead2(ji,jj,jk) = (pislope * zconctemp2 + pislope2 * zconctemp) / ( trn(ji,jj,jk,jpdia) + rtrn ) 205 205 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zsed.F90
r3751 r4193 69 69 REAL(wp) :: zwflux, zfminus, zfplus 70 70 REAL(wp) :: zlim, zfact, zfactcal 71 REAL(wp) :: zo2, zno3, zflx, zpdenit, z1pdenit 71 REAL(wp) :: zo2, zno3, zflx, zpdenit, z1pdenit, zdenitt, zolimit 72 72 REAL(wp) :: zsiloss, zcaloss, zwsbio3, zwsbio4, zwscal, zdep, zwstpoc 73 73 REAL(wp) :: ztrfer, ztrpo4, zwdust … … 82 82 IF( nn_timing == 1 ) CALL timing_start('p4z_sed') 83 83 ! 84 IF( kt == nit 000 .AND. jnt == 1 ) THEN84 IF( kt == nittrc000 .AND. jnt == 1 ) THEN 85 85 ryyss = nyear_len(1) * rday ! number of seconds per year and per month 86 86 rmtss = ryyss / raamo … … 105 105 DO ji = 1, jpi 106 106 zdep = rfact2 / fse3t(ji,jj,1) 107 ! zwflux = ( emps(ji,jj) - emp(ji,jj) ) & 108 ! & * tsn(ji,jj,1,jp_sal) / ( tsn(ji,jj,1,jp_sal) - 6.0 ) / 1000. 109 zwflux = 0. 110 zfminus = MIN( 0., -zwflux ) * trn(ji,jj,1,jpfer) * zdep 111 zfplus = MAX( 0., -zwflux ) * 10E-9 * zdep 107 zwflux = fmmflx(ji,jj) / 1000._wp 108 zfminus = MIN( 0._wp, -zwflux ) * trn(ji,jj,1,jpfer) * zdep 109 zfplus = MAX( 0._wp, -zwflux ) * icefeinput * zdep 112 110 zironice(ji,jj) = zfplus + zfminus 113 111 END DO … … 135 133 ENDIF 136 134 zsidep(:,:) = 8.8 * 0.075 * dust(:,:) * rfact2 / fse3t(:,:,1) / ( 28.1 * rmtss ) 137 zpdep (:,:) = 0.1 * 0.021 * dust(:,:) * rfact2 / fse3t(:,:,1) / ( 31. * rmtss ) 135 zpdep (:,:) = 0.1 * 0.021 * dust(:,:) * rfact2 / fse3t(:,:,1) / ( 31. * rmtss ) / po4r 138 136 ! ! Iron solubilization of particles in the water column 139 137 zwdust = 0.005 / ( wdust * 55.85 * 30.42 ) / ( 45. * rday ) … … 246 244 #endif 247 245 248 ! THEN this loss is scaled at each bottom grid cell for 249 ! equilibrating the total budget of silica in the ocean. 250 ! Thus, the amount of silica lost in the sediments equal 251 ! the supply at the surface (dust+rivers) 246 ! This loss is scaled at each bottom grid cell for equilibrating the total budget of silica in the ocean. 247 ! 
Thus, the amount of silica lost in the sediments equals the supply at the surface (dust+rivers) 252 248 ! ------------------------------------------------------ 253 249 #if ! defined key_sed … 302 298 303 299 #if ! defined key_sed 304 zpdenit = MIN( ( trn(ji,jj,ikt,jpno3) - rtrn ) / rdenit, zdenit2d(ji,jj) * zwstpoc * zrivno3 ) 300 ! The 0.5 factor in zpdenit and zdenitt is to avoid negative NO3 concentration after both denitrification 301 ! in the sediments and just above the sediments. Not very clever, but the simplest option. 302 zpdenit = MIN( 0.5 * ( trn(ji,jj,ikt,jpno3) - rtrn ) / rdenit, zdenit2d(ji,jj) * zwstpoc * zrivno3 ) 305 303 z1pdenit = zwstpoc * zrivno3 - zpdenit 306 trn(ji,jj,ikt,jpdoc) = trn(ji,jj,ikt,jpdoc) + z1pdenit 307 trn(ji,jj,ikt,jppo4) = trn(ji,jj,ikt,jppo4) + zpdenit 308 trn(ji,jj,ikt,jpnh4) = trn(ji,jj,ikt,jpnh4) + zpdenit 309 trn(ji,jj,ikt,jpno3) = trn(ji,jj,ikt,jpno3) - rdenit * zpdenit 310 trn(ji,jj,ikt,jptal) = trn(ji,jj,ikt,jptal) + rno3 * ( 1. + rdenit ) * zpdenit 311 trn(ji,jj,ikt,jpdic) = trn(ji,jj,ikt,jpdic) + zpdenit 304 zolimit = MIN( ( trn(ji,jj,ikt,jpoxy) - rtrn ) / o2ut, z1pdenit * ( 1.- nitrfac(ji,jj,ikt) ) ) 305 zdenitt = MIN( 0.5 * ( trn(ji,jj,ikt,jpno3) - rtrn ) / rdenit, z1pdenit * nitrfac(ji,jj,ikt) ) 306 trn(ji,jj,ikt,jpdoc) = trn(ji,jj,ikt,jpdoc) + z1pdenit - zolimit - zdenitt 307 trn(ji,jj,ikt,jppo4) = trn(ji,jj,ikt,jppo4) + zpdenit + zolimit + zdenitt 308 trn(ji,jj,ikt,jpnh4) = trn(ji,jj,ikt,jpnh4) + zpdenit + zolimit + zdenitt 309 trn(ji,jj,ikt,jpno3) = trn(ji,jj,ikt,jpno3) - rdenit * (zpdenit + zdenitt) 310 trn(ji,jj,ikt,jpoxy) = trn(ji,jj,ikt,jpoxy) - zolimit * o2ut 311 trn(ji,jj,ikt,jptal) = trn(ji,jj,ikt,jptal) + rno3 * (zolimit + (1.+rdenit) * (zpdenit + zdenitt) ) 312 trn(ji,jj,ikt,jpdic) = trn(ji,jj,ikt,jpdic) + zpdenit + zolimit + zdenitt 312 313 zwork4(ji,jj) = rdenit * zpdenit * fse3t(ji,jj,ikt) 313 314 #endif -
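The new benthic limiting logic in p4zsed.F90 partitions the organic-carbon flux reaching the bottom cell between sediment denitrification (zpdenit), oxic remineralisation just above the bed (zolimit) and suboxic denitrification there (zdenitt), with 0.5 caps so that NO3 cannot be driven negative. A standalone Python sketch — poc_flux and denit_frac stand in for the zwstpoc * zrivno3 and zdenit2d terms, and the rdenit default here is illustrative, not the PISCES value:

```python
def benthic_remin(no3, oxy, nitrfac, poc_flux, denit_frac,
                  rdenit=0.8, o2ut=131.0 / 122.0, rtrn=1e-15):
    """Split the bottom POC flux into three remineralisation pathways.
    The 0.5 factors cap total NO3 consumption (sediment + water column)
    at the locally available inventory, as in the updated hunk."""
    zpdenit = min(0.5 * (no3 - rtrn) / rdenit, denit_frac * poc_flux)
    z1pdenit = poc_flux - zpdenit                      # carbon left over
    zolimit = min((oxy - rtrn) / o2ut, z1pdenit * (1.0 - nitrfac))
    zdenitt = min(0.5 * (no3 - rtrn) / rdenit, z1pdenit * nitrfac)
    return zpdenit, zolimit, zdenitt
```

Because each denitrification term is capped at half the NO3 inventory, their combined drawdown rdenit * (zpdenit + zdenitt) stays within what is available even when both pathways are active.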
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zsink.F90
r3829 r4193 156 156 DO ji = 1, jpi 157 157 IF( tmask(ji,jj,jk) == 1 ) THEN 158 zwsmax = 0. 8* fse3t(ji,jj,jk) / xstep158 zwsmax = 0.5 * fse3t(ji,jj,jk) / xstep 159 159 wsbio3(ji,jj,jk) = MIN( wsbio3(ji,jj,jk), zwsmax * FLOAT( iiter1 ) ) 160 160 wsbio4(ji,jj,jk) = MIN( wsbio4(ji,jj,jk), zwsmax * FLOAT( iiter2 ) ) … … 217 217 zaggdoc = ( ( 0.369 * 0.3 * trn(ji,jj,jk,jpdoc) + 102.4 * trn(ji,jj,jk,jppoc) ) * zfact & 218 218 & + 2.4 * zstep * trn(ji,jj,jk,jppoc) ) * 0.3 * trn(ji,jj,jk,jpdoc) 219 ! zaggdoc = ( 0.83 * trn(ji,jj,jk,jpdoc) + 271. * trn(ji,jj,jk,jppoc) ) * zfact * trn(ji,jj,jk,jpdoc)220 219 ! transfer of DOC to GOC : 221 220 ! 1st term is shear aggregation 222 221 ! 2nd term is differential settling 223 222 zaggdoc2 = ( 3.53E3 * zfact + 0.1 * zstep ) * trn(ji,jj,jk,jpgoc) * 0.3 * trn(ji,jj,jk,jpdoc) 224 ! zaggdoc2 = 1.07e4 * zfact * trn(ji,jj,jk,jpgoc) * trn(ji,jj,jk,jpdoc)225 223 ! tranfer of DOC to POC due to brownian motion 226 ! zaggdoc3 = 0.02 * ( 16706. * trn(ji,jj,jk,jppoc) + 231. * trn(ji,jj,jk,jpdoc) ) * zstep * trn(ji,jj,jk,jpdoc)227 224 zaggdoc3 = ( 5095. * trn(ji,jj,jk,jppoc) + 114. * 0.3 * trn(ji,jj,jk,jpdoc) ) *zstep * 0.3 * trn(ji,jj,jk,jpdoc) 228 225 -
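The tightened stability cap in p4zsink.F90 (0.8 → 0.5 grid cells per sub-step) is a CFL-type limiter on the sinking speeds. A hedged Python equivalent of that one statement, with made-up layer thickness and time step:

```python
def cap_sinking_speed(ws, e3t, xstep, niter):
    """Limit the sinking speed so particles cross at most half a grid
    cell (thickness e3t) per sub-time step, given niter sub-steps of
    length xstep, mirroring zwsmax = 0.5 * fse3t / xstep."""
    zwsmax = 0.5 * e3t / xstep
    return min(ws, zwsmax * niter)
```

Fast-sinking pools (wsbio4) use a larger iteration count (iiter2) and so tolerate a proportionally larger capped speed.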
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/PISCES/P4Z/p4zsms.F90
r3780 r4193 76 76 ENDIF 77 77 ! 78 IF( ln_rsttr .AND. kt == nittrc000 ) CALL p4z_rst( nittrc000, 'READ' ) !* read or initialize all required fields 79 IF( ln_rsttr .AND. ln_pisclo ) CALL p4z_clo ! damping on closed seas 78 IF( kt == nittrc000 ) THEN 79 ! 80 CALL p4z_che ! initialize the chemical constants 81 ! 82 IF( .NOT. ln_rsttr ) THEN ; CALL p4z_ph_ini ! set PH at kt=nit000 83 ELSE ; CALL p4z_rst( nittrc000, 'READ' ) !* read or initialize all required fields 84 ENDIF 85 ! 86 ENDIF 87 80 88 IF( ln_pisdmp .AND. MOD( kt - nn_dttrc, nn_pisdmp ) == 0 ) CALL p4z_dmp( kt ) ! Relaxation of some tracers 81 89 ! … … 164 172 NAMELIST/nampiskrp/ xkr_eta, xkr_zeta, xkr_ncontent, xkr_mass_min, xkr_mass_max 165 173 #endif 166 NAMELIST/nampisdmp/ ln_pisdmp, nn_pisdmp , ln_pisclo174 NAMELIST/nampisdmp/ ln_pisdmp, nn_pisdmp 167 175 NAMELIST/nampismass/ ln_check_mass 168 176 !!---------------------------------------------------------------------- … … 215 223 ln_pisdmp = .true. 216 224 nn_pisdmp = 1 217 ln_pisclo = .false.218 225 219 226 REWIND( numnatp ) … … 225 232 WRITE(numout,*) ' Relaxation of tracer to glodap mean value ln_pisdmp =', ln_pisdmp 226 233 WRITE(numout,*) ' Frequency of Relaxation nn_pisdmp =', nn_pisdmp 227 WRITE(numout,*) ' Restoring of tracer to initial value on closed seas ln_pisclo =', ln_pisclo228 234 WRITE(numout,*) ' ' 229 235 ENDIF … … 241 247 END SUBROUTINE p4z_sms_init 242 248 249 SUBROUTINE p4z_ph_ini 250 !!--------------------------------------------------------------------- 251 !! *** ROUTINE p4z_ini_ph *** 252 !! 253 !! ** Purpose : Initialization of chemical variables of the carbon cycle 254 !!--------------------------------------------------------------------- 255 INTEGER :: ji, jj, jk 256 REAL(wp) :: zcaralk, zbicarb, zco3 257 REAL(wp) :: ztmas, ztmas1 258 !!--------------------------------------------------------------------- 259 260 ! Set PH from total alkalinity, borat (???), akb3 (???) and ak23 (???) 261 ! 
-------------------------------------------------------- 262 DO jk = 1, jpk 263 DO jj = 1, jpj 264 DO ji = 1, jpi 265 ztmas = tmask(ji,jj,jk) 266 ztmas1 = 1. - tmask(ji,jj,jk) 267 zcaralk = trn(ji,jj,jk,jptal) - borat(ji,jj,jk) / ( 1. + 1.E-8 / ( rtrn + akb3(ji,jj,jk) ) ) 268 zco3 = ( zcaralk - trn(ji,jj,jk,jpdic) ) * ztmas + 0.5e-3 * ztmas1 269 zbicarb = ( 2. * trn(ji,jj,jk,jpdic) - zcaralk ) 270 hi(ji,jj,jk) = ( ak23(ji,jj,jk) * zbicarb / zco3 ) * ztmas + 1.e-9 * ztmas1 271 END DO 272 END DO 273 END DO 274 ! 275 END SUBROUTINE p4z_ph_ini 276 243 277 SUBROUTINE p4z_rst( kt, cdrw ) 244 278 !!--------------------------------------------------------------------- … … 269 303 ELSE 270 304 ! hi(:,:,:) = 1.e-9 271 ! Set PH from total alkalinity, borat (???), akb3 (???) and ak23 (???) 272 ! -------------------------------------------------------- 273 DO jk = 1, jpk 274 DO jj = 1, jpj 275 DO ji = 1, jpi 276 ztmas = tmask(ji,jj,jk) 277 ztmas1 = 1. - tmask(ji,jj,jk) 278 zcaralk = trn(ji,jj,jk,jptal) - borat(ji,jj,jk) / ( 1. + 1.E-8 / ( rtrn + akb3(ji,jj,jk) ) ) 279 zco3 = ( zcaralk - trn(ji,jj,jk,jpdic) ) * ztmas + 0.5e-3 * ztmas1 280 zbicarb = ( 2. * trn(ji,jj,jk,jpdic) - zcaralk ) 281 hi(ji,jj,jk) = ( ak23(ji,jj,jk) * zbicarb / zco3 ) * ztmas + 1.e-9 * ztmas1 282 END DO 283 END DO 284 END DO 305 CALL p4z_ph_ini 285 306 ENDIF 286 307 CALL iom_get( numrtr, jpdom_autoglo, 'Silicalim', xksi(:,:) ) … … 395 416 #endif 396 417 & + trn(:,:,:,jpsfe) & 397 & + trn(:,:,:,jpzoo) 418 & + trn(:,:,:,jpzoo) * ferat3 & 398 419 & + trn(:,:,:,jpmes) * ferat3 ) * cvol(:,:,:) ) 399 420 … … 421 442 END SUBROUTINE p4z_chk_mass 422 443 423 SUBROUTINE p4z_clo424 !!---------------------------------------------------------------------425 !! *** ROUTINE p4z_clo ***426 !!427 !! ** Purpose : Closed sea domain initialization428 !!429 !! ** Method : if a closed sea is located only in a model grid point430 !! we restore to initial data431 !!432 !! 
** Action : ictsi1(), ictsj1() : south-west closed sea limits (i,j)433 !! ictsi2(), ictsj2() : north-east Closed sea limits (i,j)434 !!----------------------------------------------------------------------435 INTEGER, PARAMETER :: npicts = 4 ! number of closed sea436 INTEGER, DIMENSION(npicts) :: ictsi1, ictsj1 ! south-west closed sea limits (i,j)437 INTEGER, DIMENSION(npicts) :: ictsi2, ictsj2 ! north-east closed sea limits (i,j)438 INTEGER :: ji, jj, jk, jn, jl, jc ! dummy loop indices439 INTEGER :: ierr ! local integer440 REAL(wp), POINTER, DIMENSION(:,:,:,:) :: ztrcdta ! 4D workspace441 !!----------------------------------------------------------------------442 443 IF(lwp) WRITE(numout,*)444 IF(lwp) WRITE(numout,*)' p4z_clo : closed seas '445 IF(lwp) WRITE(numout,*)'~~~~~~~'446 447 ! initial values448 ictsi1(:) = 1 ; ictsi2(:) = 1449 ictsj1(:) = 1 ; ictsj2(:) = 1450 451 ! set the closed seas (in data domain indices)452 ! -------------------453 454 IF( cp_cfg == "orca" ) THEN455 !456 SELECT CASE ( jp_cfg )457 ! ! =======================458 CASE ( 2 ) ! ORCA_R2 configuration459 ! ! =======================460 ! ! Caspian Sea461 ictsi1(1) = 11 ; ictsj1(1) = 103462 ictsi2(1) = 17 ; ictsj2(1) = 112463 ! ! Great North American Lakes464 ictsi1(2) = 97 ; ictsj1(2) = 107465 ictsi2(2) = 103 ; ictsj2(2) = 111466 ! ! Black Sea 1 : west part of the Black Sea467 ictsi1(3) = 174 ; ictsj1(3) = 107468 ictsi2(3) = 181 ; ictsj2(3) = 112469 ! ! Black Sea 2 : est part of the Black Sea470 ictsi1(4) = 2 ; ictsj1(4) = 107471 ictsi2(4) = 6 ; ictsj2(4) = 112472 ! ! =======================473 CASE ( 4 ) ! ORCA_R4 configuration474 ! ! =======================475 ! ! Caspian Sea476 ictsi1(1) = 4 ; ictsj1(1) = 53477 ictsi2(1) = 4 ; ictsj2(1) = 56478 ! ! Great North American Lakes479 ictsi1(2) = 49 ; ictsj1(2) = 55480 ictsi2(2) = 51 ; ictsj2(2) = 56481 ! ! Black Sea482 ictsi1(3) = 88 ; ictsj1(3) = 55483 ictsi2(3) = 91 ; ictsj2(3) = 56484 ! ! 
Baltic Sea485 ictsi1(4) = 75 ; ictsj1(4) = 59486 ictsi2(4) = 76 ; ictsj2(4) = 61487 ! ! =======================488 ! ! =======================489 CASE ( 025 ) ! ORCA_R025 configuration490 ! ! =======================491 ! Caspian + Aral sea492 ictsi1(1) = 1330 ; ictsj1(1) = 645493 ictsi2(1) = 1400 ; ictsj2(1) = 795494 ! ! Azov Sea495 ictsi1(2) = 1284 ; ictsj1(2) = 722496 ictsi2(2) = 1304 ; ictsj2(2) = 747497 !498 END SELECT499 !500 ENDIF501 502 ! convert the position in local domain indices503 ! --------------------------------------------504 DO jc = 1, npicts505 ictsi1(jc) = mi0( ictsi1(jc) )506 ictsj1(jc) = mj0( ictsj1(jc) )507 508 ictsi2(jc) = mi1( ictsi2(jc) )509 ictsj2(jc) = mj1( ictsj2(jc) )510 END DO511 512 ! Restore close seas values to initial data513 IF( ln_trcdta .AND. nb_trcdta > 0 ) THEN ! Initialisation of tracer from a file that may also be used for damping514 !515 CALL wrk_alloc( jpi, jpj, jpk, nb_trcdta, ztrcdta ) ! Memory allocation516 !517 CALL trc_dta( nittrc000, ztrcdta ) ! read tracer data at nittrc000518 !519 DO jn = 1, jptra520 IF( ln_trc_ini(jn) ) THEN ! update passive tracers arrays with input data read from file521 jl = n_trc_index(jn)522 DO jc = 1, npicts523 DO jk = 1, jpkm1524 DO jj = ictsj1(jc), ictsj2(jc)525 DO ji = ictsi1(jc), ictsi2(jc)526 trn(ji,jj,jk,jn) = ztrcdta(ji,jj,jk,jl) * tmask(ji,jj,jk)527 trb(ji,jj,jk,jn) = trn(ji,jj,jk,jn)528 ENDDO529 ENDDO530 ENDDO531 ENDDO532 ENDIF533 ENDDO534 CALL wrk_dealloc( jpi, jpj, jpk, nb_trcdta, ztrcdta )535 ENDIF536 !537 END SUBROUTINE p4z_clo538 444 #else 539 445 !!====================================================================== -
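The p4z_ph_ini routine factored out above estimates an initial [H+] from total alkalinity and DIC via the carbonate-alkalinity approximation: subtract the borate contribution, split the remainder into HCO3- and CO3--, then invert the second dissociation constant ak23. A Python sketch of the masked-point formula; the numeric values for borat, akb3 and ak23 below are plausible open-ocean magnitudes chosen for illustration, not PISCES output:

```python
import math

def ph_first_guess(tal, dic, borat, akb3, ak23, rtrn=1e-15):
    """First-guess hydrogen-ion concentration from alkalinity and DIC,
    mirroring the ocean-point branch (tmask = 1) of p4z_ph_ini."""
    zcaralk = tal - borat / (1.0 + 1.0e-8 / (rtrn + akb3))  # carbonate alkalinity
    zco3 = zcaralk - dic                                    # CO3-- estimate
    zbicarb = 2.0 * dic - zcaralk                           # HCO3- estimate
    return ak23 * zbicarb / zco3

# illustrative surface-ocean values (mol/L-ish units)
hi = ph_first_guess(tal=2.4e-3, dic=2.1e-3, borat=4.2e-4, akb3=2.0e-9, ak23=1.0e-9)
```

With these inputs the first guess lands near pH 8, a sensible starting point for the iterative carbonate chemistry solver when no restart file provides hi.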
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/PISCES/sms_pisces.F90
r3780 r4193 56 56 LOGICAL :: ln_pisdmp !: restoring or not of nutrients to a mean value 57 57 INTEGER :: nn_pisdmp !: frequency of relaxation or not of nutrients to a mean value 58 LOGICAL :: ln_pisclo !: Restoring or not of nutrients to initial value on closed seas59 58 60 59 !!* Mass conservation -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/PISCES/trcini_pisces.F90
r3757 r4193 122 122 rdenita = 3._wp / 5._wp 123 123 o2ut = 131._wp / 122._wp 124 125 CALL p4z_che ! initialize the chemical constants126 124 127 125 ! Initialization of tracer concentration in case of no restart … … 162 160 xksi(:,:) = 2.e-6 163 161 xksimax(:,:) = xksi(:,:) 164 165 ! Initialization of chemical variables of the carbon cycle 166 ! -------------------------------------------------------- 167 DO jk = 1, jpk 168 DO jj = 1, jpj 169 DO ji = 1, jpi 170 ztmas = tmask(ji,jj,jk) 171 ztmas1 = 1. - tmask(ji,jj,jk) 172 zcaralk = trn(ji,jj,jk,jptal) - borat(ji,jj,jk) / ( 1. + 1.E-8 / ( rtrn + akb3(ji,jj,jk) ) ) 173 zco3 = ( zcaralk - trn(ji,jj,jk,jpdic) ) * ztmas + 0.5e-3 * ztmas1 174 zbicarb = ( 2. * trn(ji,jj,jk,jpdic) - zcaralk ) 175 hi(ji,jj,jk) = ( ak23(ji,jj,jk) * zbicarb / zco3 ) * ztmas + 1.e-9 * ztmas1 176 END DO 177 END DO 178 END DO 179 ! 162 ! 180 163 END IF 181 164 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/TRP/trcdmp.F90
r3294 r4193 11 11 !! 3.3 ! 2010-06 (C. Ethe, G. Madec) merge TRA-TRC 12 12 !!---------------------------------------------------------------------- 13 #if defined key_top && defined key_trcdmp 14 !!---------------------------------------------------------------------- 15 !! key_trcdmp internal damping 13 #if defined key_top 16 14 !!---------------------------------------------------------------------- 17 15 !! trc_dmp : update the tracer trend with the internal damping … … 25 23 USE prtctl_trc ! Print control for debbuging 26 24 USE trdtra 25 USE trdmod_oce 27 26 28 27 IMPLICIT NONE … … 30 29 31 30 PUBLIC trc_dmp ! routine called by step.F90 31 PUBLIC trc_dmp_clo ! routine called by step.F90 32 32 PUBLIC trc_dmp_alloc ! routine called by nemogcm.F90 33 33 34 LOGICAL , PUBLIC, PARAMETER :: lk_trcdmp = .TRUE. !: internal damping flag35 36 34 ! !!* Namelist namtrc_dmp : passive tracer newtonian damping * 37 INTEGER :: nn_hdmp_tr = -1 ! = 0/-1/'latitude' for damping over passive tracer38 INTEGER :: nn_zdmp_tr = 0 ! = 0/1/2 flag for damping in the mixed layer39 REAL(wp) :: rn_surf_tr = 50. ! surface time scale for internal damping [days]40 REAL(wp) :: rn_bot_tr = 360. ! bottom time scale for internal damping [days]41 REAL(wp) :: rn_dep_tr = 800. ! depth of transition between rn_surf and rn_bot [meters]42 INTEGER :: nn_file_tr = 2 ! = 1 create a damping.coeff NetCDF file43 44 35 REAL(wp), ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: restotr ! restoring coeff. on tracers (s-1) 36 37 INTEGER, PARAMETER :: npncts = 5 ! number of closed sea 38 INTEGER, DIMENSION(npncts) :: nctsi1, nctsj1 ! south-west closed sea limits (i,j) 39 INTEGER, DIMENSION(npncts) :: nctsi2, nctsj2 ! north-east closed sea limits (i,j) 45 40 46 41 !! * Substitutions … … 86 81 INTEGER, INTENT( in ) :: kt ! ocean time-step index 87 82 !! 88 INTEGER :: ji, jj, jk, jn ! dummy loop indices83 INTEGER :: ji, jj, jk, jn, jl ! dummy loop indices 89 84 REAL(wp) :: ztra ! 
temporary scalars 90 85 CHARACTER (len=22) :: charout 91 86 REAL(wp), POINTER, DIMENSION(:,:,:) :: ztrtrd 87 REAL(wp), POINTER, DIMENSION(:,:,:,:) :: ztrcdta ! 4D workspace 92 88 !!---------------------------------------------------------------------- 93 89 ! … … 99 95 100 96 IF( l_trdtrc ) CALL wrk_alloc( jpi, jpj, jpk, ztrtrd ) ! temporary save of trends 101 102 ! 1. Newtonian damping trends on tracer fields 103 ! -------------------------------------------- 104 ! Initialize the input fields for newtonian damping 105 CALL trc_dta( kt ) 106 ! ! =========== 107 DO jn = 1, jptra ! tracer loop 108 ! ! =========== 109 IF( l_trdtrc ) ztrtrd(:,:,:) = tra(:,:,:,jn) ! save trends 110 111 IF( lutini(jn) ) THEN 97 ! 98 IF( nb_trcdta > 0 ) THEN ! Initialisation of tracer from a file that may also be used for damping 99 ! 100 CALL wrk_alloc( jpi, jpj, jpk, nb_trcdta, ztrcdta ) ! Memory allocation 101 CALL trc_dta( kt, ztrcdta ) ! read tracer data at nit000 102 ! ! =========== 103 DO jn = 1, jptra ! tracer loop 104 ! ! =========== 105 IF( l_trdtrc ) ztrtrd(:,:,:) = tra(:,:,:,jn) ! save trends 112 106 ! 113 SELECT CASE ( nn_zdmp_trc ) 114 ! 115 CASE( 0 ) !== newtonian damping throughout the water column ==! 116 DO jk = 1, jpkm1 117 DO jj = 2, jpjm1 118 DO ji = fs_2, fs_jpim1 ! vector opt. 119 ztra = restotr(ji,jj,jk) * ( trdta(ji,jj,jk,jn) - trb(ji,jj,jk,jn) ) 120 tra(ji,jj,jk,jn) = tra(ji,jj,jk,jn) + ztra 107 IF( ln_trc_ini(jn) ) THEN ! update passive tracers arrays with input data read from file 108 109 jl = n_trc_index(jn) 110 111 SELECT CASE ( nn_zdmp_tr ) 112 ! 113 CASE( 0 ) !== newtonian damping throughout the water column ==! 114 DO jk = 1, jpkm1 115 DO jj = 2, jpjm1 116 DO ji = fs_2, fs_jpim1 ! vector opt. 
117 ztra = restotr(ji,jj,jk) * ( ztrcdta(ji,jj,jk,jl) - trb(ji,jj,jk,jn) ) 118 tra(ji,jj,jk,jn) = tra(ji,jj,jk,jn) + ztra 119 END DO 121 120 END DO 122 121 END DO 123 END DO124 !125 CASE ( 1 ) !== no damping in the turbocline (avt > 5 cm2/s) ==!126 DO jk = 1, jpkm1127 DO jj = 2, jpjm1128 DO ji = fs_2, fs_jpim1 ! vector opt.129 IF( avt(ji,jj,jk) <= 5.e-4 ) THEN130 ztra = restotr(ji,jj,jk) * ( trdta(ji,jj,jk,jn) - trb(ji,jj,jk,jn) )131 tra(ji,jj,jk,jn) = tra(ji,jj,jk,jn) + ztra132 END IF122 ! 123 CASE ( 1 ) !== no damping in the turbocline (avt > 5 cm2/s) ==! 124 DO jk = 1, jpkm1 125 DO jj = 2, jpjm1 126 DO ji = fs_2, fs_jpim1 ! vector opt. 127 IF( avt(ji,jj,jk) <= 5.e-4 ) THEN 128 ztra = restotr(ji,jj,jk) * ( ztrcdta(ji,jj,jk,jl) - trb(ji,jj,jk,jn) ) 129 tra(ji,jj,jk,jn) = tra(ji,jj,jk,jn) + ztra 130 ENDIF 131 END DO 133 132 END DO 134 133 END DO 135 END DO136 !137 CASE ( 2 ) !== no damping in the mixed layer ==!138 DO jk = 1, jpkm1139 DO jj = 2, jpjm1140 DO ji = fs_2, fs_jpim1 ! vector opt.141 IF( fsdept(ji,jj,jk) >= hmlp (ji,jj) ) THEN142 ztra = restotr(ji,jj,jk,jn) * ( trdta(ji,jj,jk,jn) - trb(ji,jj,jk,jn) )143 tra(ji,jj,jk,jn) = tra(ji,jj,jk,jn) + ztra144 END IF134 ! 135 CASE ( 2 ) !== no damping in the mixed layer ==! 136 DO jk = 1, jpkm1 137 DO jj = 2, jpjm1 138 DO ji = fs_2, fs_jpim1 ! vector opt. 139 IF( fsdept(ji,jj,jk) >= hmlp (ji,jj) ) THEN 140 ztra = restotr(ji,jj,jk) * ( ztrcdta(ji,jj,jk,jl) - trb(ji,jj,jk,jn) ) 141 tra(ji,jj,jk,jn) = tra(ji,jj,jk,jn) + ztra 142 END IF 143 END DO 145 144 END DO 146 145 END DO 147 END DO 148 ! 149 END SELECT 150 ! 151 ENDIF 152 ! 153 IF( l_trdtrc ) THEN 154 ztrtrd(:,:,:) = tra(:,:,:,jn) - ztrtrd(:,:,:) 155 CALL trd_tra( kt, 'TRC', jn, jptra_trd_dmp, ztrtrd ) 156 END IF 157 ! ! =========== 158 END DO ! tracer loop 159 ! ! =========== 146 ! 147 END SELECT 148 ! 149 ENDIF 150 ! 151 IF( l_trdtrc ) THEN 152 ztrtrd(:,:,:) = tra(:,:,:,jn) - ztrtrd(:,:,:) 153 CALL trd_tra( kt, 'TRC', jn, jptra_trd_dmp, ztrtrd ) 154 END IF 155 ! 
! =========== 156 END DO ! tracer loop 157 ! ! =========== 158 CALL wrk_dealloc( jpi, jpj, jpk, nb_trcdta, ztrcdta ) 159 ENDIF 160 ! 160 161 IF( l_trdtrc ) CALL wrk_dealloc( jpi, jpj, jpk, ztrtrd ) 161 162 ! ! print mean trends (used for debugging) … … 168 169 ! 169 170 END SUBROUTINE trc_dmp 171 172 SUBROUTINE trc_dmp_clo( kt ) 173 !!--------------------------------------------------------------------- 174 !! *** ROUTINE trc_dmp_clo *** 175 !! 176 !! ** Purpose : Closed sea domain initialization 177 !! 178 !! ** Method : if a closed sea is located only in a model grid point 179 !! we restore to initial data 180 !! 181 !! ** Action : nctsi1(), nctsj1() : south-west closed sea limits (i,j) 182 !! nctsi2(), nctsj2() : north-east Closed sea limits (i,j) 183 !!---------------------------------------------------------------------- 184 INTEGER, INTENT( in ) :: kt ! ocean time-step index 185 ! 186 INTEGER :: ji, jj, jk, jn, jl, jc ! dummy loop indicesa 187 REAL(wp), POINTER, DIMENSION(:,:,:,:) :: ztrcdta ! 4D workspace 188 189 !!---------------------------------------------------------------------- 190 191 IF( kt == nit000 ) THEN 192 ! initial values 193 nctsi1(:) = 1 ; nctsi2(:) = 1 194 nctsj1(:) = 1 ; nctsj2(:) = 1 195 196 ! set the closed seas (in data domain indices) 197 ! ------------------- 198 199 IF( cp_cfg == "orca" ) THEN 200 ! 201 SELECT CASE ( jp_cfg ) 202 ! ! ======================= 203 CASE ( 2 ) ! ORCA_R2 configuration 204 ! ! ======================= 205 ! ! Caspian Sea 206 nctsi1(1) = 11 ; nctsj1(1) = 103 207 nctsi2(1) = 17 ; nctsj2(1) = 112 208 ! ! Great North American Lakes 209 nctsi1(2) = 97 ; nctsj1(2) = 107 210 nctsi2(2) = 103 ; nctsj2(2) = 111 211 ! ! Black Sea 1 : west part of the Black Sea 212 nctsi1(3) = 174 ; nctsj1(3) = 107 213 nctsi2(3) = 181 ; nctsj2(3) = 112 214 ! ! Black Sea 2 : est part of the Black Sea 215 nctsi1(4) = 2 ; nctsj1(4) = 107 216 nctsi2(4) = 6 ; nctsj2(4) = 112 217 ! ! 
Baltic Sea 218 nctsi1(5) = 145 ; nctsj1(5) = 116 219 nctsi2(5) = 150 ; nctsj2(5) = 126 220 ! ! ======================= 221 CASE ( 4 ) ! ORCA_R4 configuration 222 ! ! ======================= 223 ! ! Caspian Sea 224 nctsi1(1) = 4 ; nctsj1(1) = 53 225 nctsi2(1) = 4 ; nctsj2(1) = 56 226 ! ! Great North American Lakes 227 nctsi1(2) = 49 ; nctsj1(2) = 55 228 nctsi2(2) = 51 ; nctsj2(2) = 56 229 ! ! Black Sea 230 nctsi1(3) = 88 ; nctsj1(3) = 55 231 nctsi2(3) = 91 ; nctsj2(3) = 56 232 ! ! Baltic Sea 233 nctsi1(4) = 75 ; nctsj1(4) = 59 234 nctsi2(4) = 76 ; nctsj2(4) = 61 235 ! ! ======================= 236 CASE ( 025 ) ! ORCA_R025 configuration 237 ! ! ======================= 238 ! Caspian + Aral sea 239 nctsi1(1) = 1330 ; nctsj1(1) = 645 240 nctsi2(1) = 1400 ; nctsj2(1) = 795 241 ! ! Azov Sea 242 nctsi1(2) = 1284 ; nctsj1(2) = 722 243 nctsi2(2) = 1304 ; nctsj2(2) = 747 244 ! 245 END SELECT 246 ! 247 ENDIF 248 ! 249 250 ! convert the position in local domain indices 251 ! -------------------------------------------- 252 DO jc = 1, npncts 253 nctsi1(jc) = mi0( nctsi1(jc) ) 254 nctsj1(jc) = mj0( nctsj1(jc) ) 255 256 nctsi2(jc) = mi1( nctsi2(jc) ) 257 nctsj2(jc) = mj1( nctsj2(jc) ) 258 END DO 259 ! 260 ENDIF 261 262 ! Restore close seas values to initial data 263 IF( ln_trcdta .AND. nb_trcdta > 0 ) THEN ! Initialisation of tracer from a file that may also be used for damping 264 ! 265 IF(lwp) WRITE(numout,*) 266 IF(lwp) WRITE(numout,*) ' trc_dmp_clo : Restoring of nutrients on close seas at time-step kt = ', kt 267 IF(lwp) WRITE(numout,*) 268 ! 269 CALL wrk_alloc( jpi, jpj, jpk, nb_trcdta, ztrcdta ) ! Memory allocation 270 ! 271 CALL trc_dta( kt , ztrcdta ) ! read tracer data at nittrc000 272 ! 273 DO jn = 1, jptra 274 IF( ln_trc_ini(jn) ) THEN ! 
update passive tracers arrays with input data read from file 275 jl = n_trc_index(jn) 276 DO jc = 1, npncts 277 DO jk = 1, jpkm1 278 DO jj = nctsj1(jc), nctsj2(jc) 279 DO ji = nctsi1(jc), nctsi2(jc) 280 trn(ji,jj,jk,jn) = ztrcdta(ji,jj,jk,jl) * tmask(ji,jj,jk) 281 trb(ji,jj,jk,jn) = trn(ji,jj,jk,jn) 282 ENDDO 283 ENDDO 284 ENDDO 285 ENDDO 286 ENDIF 287 ENDDO 288 CALL wrk_dealloc( jpi, jpj, jpk, nb_trcdta, ztrcdta ) 289 ENDIF 290 ! 291 END SUBROUTINE trc_dmp_clo 170 292 171 293 … … 199 321 END SELECT 200 322 201 IF( .NOT. lk_dtatrc ) & 202 & CALL ctl_stop( 'no passive tracer data define key_dtatrc' ) 203 204 IF( .NOT. lk_tradmp ) & 323 IF( .NOT. ln_tradmp ) & 205 324 & CALL ctl_stop( 'passive trace damping need key_tradmp to compute damping coef.' ) 206 325 ! … … 214 333 ! 215 334 END SUBROUTINE trc_dmp_init 335 216 336 #else 217 337 !!---------------------------------------------------------------------- 218 !! Default key NO internal damping 219 !!---------------------------------------------------------------------- 220 LOGICAL , PUBLIC, PARAMETER :: lk_trcdmp = .FALSE. !: internal damping flag 338 !! Dummy module : No passive tracer 339 !!---------------------------------------------------------------------- 221 340 CONTAINS 222 341 SUBROUTINE trc_dmp( kt ) ! Empty routine … … 225 344 END SUBROUTINE trc_dmp 226 345 #endif 346 347 227 348 !!====================================================================== 228 349 END MODULE trcdmp -
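Although trcdmp.F90 is heavily restructured above, the Newtonian damping trend itself is unchanged: each tracer is relaxed toward the climatological input field at rate restotr. A minimal single-point sketch (names mirror the Fortran; the nn_zdmp_tr turbocline/mixed-layer masks are omitted):

```python
def damping_trend(restotr, ztrcdta, trb):
    """Newtonian damping increment added to the tracer trend:
    positive when the 'before' field trb is below the data value."""
    return restotr * (ztrcdta - trb)
```

Applied every step, this pulls trb toward ztrcdta on the e-folding timescale 1/restotr, which is exactly what trc_dmp_clo exploits to pin closed-sea points to their initial data.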
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/TRP/trcnam_trp.F90
r3718 r4193 13 13 !! trc_nam_trp : read the passive tracer namelist for transport 14 14 !!---------------------------------------------------------------------- 15 USE trc ! ocean passive tracers variables 15 USE oce_trc ! shared ocean passive tracers variables 16 USE trc ! passive tracers variables 16 17 USE in_out_manager ! ocean dynamics and active tracers variables 17 18 … … 48 49 INTEGER , PUBLIC :: nn_trczdf_exp = 3 !: number of sub-time step (explicit time stepping) 49 50 50 51 #if defined key_trcdmp52 51 ! !!: ** newtonian damping namelist (nam_trcdmp) ** 53 52 INTEGER , PUBLIC :: nn_hdmp_tr = -1 ! = 0/-1/'latitude' for damping over passive tracer … … 57 56 REAL(wp), PUBLIC :: rn_dep_tr = 800. ! depth of transition between rn_surf and rn_bot [meters] 58 57 INTEGER , PUBLIC :: nn_file_tr = 2 ! = 1 create a damping.coeff NetCDF file 59 #endif60 58 61 59 !!---------------------------------------------------------------------- … … 82 80 NAMELIST/namtrc_zdf/ ln_trczdf_exp , nn_trczdf_exp 83 81 NAMELIST/namtrc_rad/ ln_trcrad 84 #if defined key_trcdmp 85 NAMELIST/namtrc_dmp/ ln_trcdmp, nn_hdmp_tr, nn_zdmp_tr, rn_surf_tr, & 82 NAMELIST/namtrc_dmp/ nn_hdmp_tr, nn_zdmp_tr, rn_surf_tr, & 86 83 & rn_bot_tr , rn_dep_tr , nn_file_tr 87 #endif88 84 !!---------------------------------------------------------------------- 89 85 … … 148 144 149 145 150 # if defined key_trcdmp151 146 REWIND ( numnat ) ! Read Namelist namtra_dmp : temperature and salinity damping term 152 147 READ ( numnat, namtrc_dmp ) 153 IF( lzoom ) nn_zdmp_tr c= 0 ! restoring to climatology at closed north or south boundaries148 IF( lzoom ) nn_zdmp_tr = 0 ! restoring to climatology at closed north or south boundaries 154 149 155 150 IF(lwp) THEN ! 
Namelist print … … 158 153 WRITE(numout,*) '~~~~~~~' 159 154 WRITE(numout,*) ' Namelist namtrc_dmp : set damping parameter' 160 WRITE(numout,*) ' add a damping term or not ln_trcdmp = ', ln_trcdmp161 155 WRITE(numout,*) ' tracer damping option nn_hdmp_tr = ', nn_hdmp_tr 162 156 WRITE(numout,*) ' mixed layer damping option nn_zdmp_tr = ', nn_zdmp_tr, '(zoom: forced to 0)' … … 166 160 WRITE(numout,*) ' create a damping.coeff file nn_file_tr = ', nn_file_tr 167 161 ENDIF 168 #endif169 162 ! 170 163 END SUBROUTINE trc_nam_trp -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/TRP/trctrp.F90
r3680 r4193 66 66 CALL trc_sbc( kstp ) ! surface boundary condition 67 67 IF( lk_trabbl ) CALL trc_bbl( kstp ) ! advective (and/or diffusive) bottom boundary layer scheme 68 IF( lk_trcdmp ) CALL trc_dmp( kstp ) ! internal damping trends 68 IF( ln_trcdmp ) CALL trc_dmp( kstp ) ! internal damping trends 69 IF( ln_trcdmp_clo ) CALL trc_dmp_clo( kstp ) ! internal damping trends on closed seas only 69 70 CALL trc_adv( kstp ) ! horizontal & vertical advection 70 71 CALL trc_ldf( kstp ) ! lateral mixing -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/oce_trc.F90
r3680 r4193 99 99 USE sbc_oce , ONLY : emp => emp !: freshwater budget: volume flux [Kg/m2/s] 100 100 USE sbc_oce , ONLY : emp_b => emp_b !: freshwater budget: volume flux [Kg/m2/s] 101 USE sbc_oce , ONLY : sfx => sfx !: downward salt flux [PSU/m2/s]101 USE sbc_oce , ONLY : fmmflx => fmmflx !: freshwater budget: volume flux [Kg/m2/s] 102 102 USE sbc_oce , ONLY : rnf => rnf !: river runoff [Kg/m2/s] 103 103 USE sbc_oce , ONLY : ln_dm2dc => ln_dm2dc !: Daily mean to Diurnal Cycle short wave (qsr) -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/trc.F90
r3770 r4193 26 26 INTEGER, PUBLIC :: numrtr !: logical unit for trc restart (read ) 27 27 INTEGER, PUBLIC :: numrtw !: logical unit for trc restart ( write ) 28 LOGICAL, PUBLIC :: ln_top_euler !: boolean term for euler integration in the first timestep29 28 30 29 !! passive tracers fields (before,now,after) … … 53 52 CHARACTER(len = 80) , PUBLIC :: cn_trcrst_out !: suffix of pass. tracer restart name (output) 54 53 REAL(wp) , PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:) :: rdttrc !: vertical profile of passive tracer time step 54 LOGICAL , PUBLIC :: ln_top_euler !: boolean term for euler integration 55 55 LOGICAL , PUBLIC :: ln_trcdta !: Read inputs data from files 56 56 LOGICAL , PUBLIC :: ln_trcdmp !: internal damping flag 57 LOGICAL , PUBLIC :: ln_trcdmp_clo !: internal damping flag on closed seas 57 58 INTEGER , PUBLIC :: nittrc000 !: first time step of passive tracers model 58 59 … … 140 141 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: fr_i_tm !: average ice fraction [m/s] 141 142 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: emp_tm !: freshwater budget: volume flux [Kg/m2/s] 142 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: sfx_tm !: downward salt flux [PSU/m2/s]143 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: fmmflx_tm !: freshwater budget: freezing/melting [Kg/m2/s] 143 144 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: emp_b_hold !: hold emp from the beginning of each sub-stepping[m] 144 145 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: qsr_tm !: solar radiation average [m] … … 180 181 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:,:) :: hdivb_temp, rotb_temp 181 182 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: hmld_temp, qsr_temp, fr_i_temp,wndm_temp 182 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: emp_temp, sfx_temp, emp_b_temp183 REAL(wp), PUBLIC, ALLOCATABLE, SAVE, DIMENSION(:,:) :: emp_temp, fmmflx_temp, emp_b_temp 183 184 ! 184 185 #if defined key_trabbl -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/trcdta.F90
r3827 r4193 254 254 ENDDO 255 255 ENDIF 256 !257 IF( .NOT.ln_trcdmp ) THEN!== deallocate data structure ==!256 257 IF( .NOT.ln_trcdmp .AND. .NOT.ln_trcdmp_clo ) THEN !== deallocate data structure ==! 258 258 ! (data used only for initialisation) 259 259 IF(lwp) WRITE(numout,*) 'trc_dta: deallocate data arrays as they are only use to initialize the run' -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/trcnam.F90
r3749 r4193 59 59 !! 60 60 NAMELIST/namtrc/ nn_dttrc, nn_writetrc, ln_rsttr, nn_rsttr, & 61 & cn_trcrst_in, cn_trcrst_out, sn_tracer, ln_trcdta, ln_trcdmp,&62 & ln_t op_euler61 & cn_trcrst_in, cn_trcrst_out, sn_tracer, ln_trcdta, & 62 & ln_trcdmp, ln_trcdmp_clo, ln_top_euler 63 63 #if defined key_trdmld_trc || defined key_trdtrc 64 64 NAMELIST/namtrc_trd/ nn_trd_trc, nn_ctls_trc, rn_ucf_trc, & … … 92 92 sn_tracer(jn)%llsave = .TRUE. 93 93 END DO 94 ln_trcdta = .FALSE. 95 ln_trcdmp = .FALSE. 94 ln_trcdta = .FALSE. 95 ln_trcdmp = .FALSE. 96 ln_trcdmp_clo = .FALSE. 96 97 97 98 … … 121 122 WRITE(numout,*) ' Read inputs data from file (y/n) ln_trcdta = ', ln_trcdta 122 123 WRITE(numout,*) ' Damping of passive tracer (y/n) ln_trcdmp = ', ln_trcdmp 124 WRITE(numout,*) ' Restoring of tracer on closed seas ln_trcdmp_clo = ', ln_trcdmp_clo 123 125 WRITE(numout,*) ' Use euler integration for TRC (y/n) ln_top_euler = ', ln_top_euler 124 126 WRITE(numout,*) ' ' … … 181 183 182 184 183 IF( ln_trcdmp .AND. .NOT.ln_trcdta ) THEN 184 CALL ctl_warn( 'trc_nam: passive tracer damping requires data from files we set ln_trcdta to TRUE' ) 185 ln_trcdta = .TRUE. 186 ENDIF 187 ! 188 IF( ln_rsttr .AND. .NOT.ln_trcdmp .AND. ln_trcdta ) THEN 189 CALL ctl_warn( 'trc_nam: passive tracer restart and data intialisation, ', & 190 & 'we keep the restart values and set ln_trcdta to FALSE' ) 191 ln_trcdta = .FALSE. 192 ENDIF 185 IF( ln_rsttr ) ln_trcdta = .FALSE. ! restart : no need of clim data 186 ! 187 IF( ln_trcdmp .OR. ln_trcdmp_clo ) ln_trcdta = .TRUE. ! damping : need to have clim data 193 188 ! 
194 189 IF( .NOT.ln_trcdta ) THEN … … 199 194 IF( ln_rsttr ) THEN 200 195 WRITE(numout,*) 201 WRITE(numout,*) ' read a restart file for passive tracer : ', TRIM( cn_trcrst_in ) 202 WRITE(numout,*) 203 ELSE 204 IF( .NOT.ln_trcdta ) THEN 205 WRITE(numout,*) 206 WRITE(numout,*) ' All the passive tracers are initialised with constant values ' 207 WRITE(numout,*) 208 ENDIF 196 WRITE(numout,*) ' Read a restart file for passive tracer : ', TRIM( cn_trcrst_in ) 197 WRITE(numout,*) 198 ENDIF 199 IF( ln_trcdta .AND. .NOT.ln_rsttr ) THEN 200 WRITE(numout,*) 201 WRITE(numout,*) ' Some of the passive tracers are initialised from climatologies ' 202 WRITE(numout,*) 203 ENDIF 204 IF( .NOT.ln_trcdta ) THEN 205 WRITE(numout,*) 206 WRITE(numout,*) ' All the passive tracers are initialised with constant values ' 207 WRITE(numout,*) 209 208 ENDIF 210 209 ENDIF -
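The rewritten namelist checks in trcnam.F90 above reduce to two overrides applied in order: a restart run suppresses climatological initialisation, then any damping option (ln_trcdmp or ln_trcdmp_clo) forces it back on. A minimal shell sketch of that precedence — illustrative only, with invented flag values mirroring the Fortran names:

```shell
# Illustrative only -- not NEMO code. Reproduces the two-override
# precedence from trcnam.F90 with shell booleans.
ln_rsttr=true        # restarting from a previous run
ln_trcdmp=false
ln_trcdmp_clo=true   # damping on closed seas requested
ln_trcdta=true

$ln_rsttr && ln_trcdta=false                          # restart : no need of clim data
{ $ln_trcdmp || $ln_trcdmp_clo; } && ln_trcdta=true   # damping : need clim data

echo "ln_trcdta=$ln_trcdta"
```

Because the damping test runs last, requesting closed-seas restoring wins even on a restart, which matches the Fortran ordering.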
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/NEMO/TOP_SRC/trcsub.F90
r3680 r4193 124 124 fr_i_tm (:,:) = fr_i_tm (:,:) + fr_i (:,:) 125 125 emp_tm (:,:) = emp_tm (:,:) + emp (:,:) 126 sfx_tm (:,:) = sfx_tm (:,:) + sfx(:,:)126 fmmflx_tm(:,:) = fmmflx_tm(:,:) + fmmflx(:,:) 127 127 qsr_tm (:,:) = qsr_tm (:,:) + qsr (:,:) 128 128 wndm_tm (:,:) = wndm_tm (:,:) + wndm (:,:) … … 212 212 emp_temp (:,:) = emp (:,:) 213 213 emp_b_temp (:,:) = emp_b (:,:) 214 sfx_temp (:,:) = sfx(:,:)214 fmmflx_temp(:,:) = fmmflx(:,:) 215 215 qsr_temp (:,:) = qsr (:,:) 216 216 wndm_temp (:,:) = wndm (:,:) … … 316 316 fr_i_tm (:,:) = fr_i_tm (:,:) + fr_i (:,:) 317 317 emp_tm (:,:) = emp_tm (:,:) + emp (:,:) 318 sfx_tm (:,:) = sfx_tm (:,:) + sfx(:,:)318 fmmflx_tm(:,:) = fmmflx_tm (:,:) + fmmflx(:,:) 319 319 qsr_tm (:,:) = qsr_tm (:,:) + qsr (:,:) 320 320 wndm_tm (:,:) = wndm_tm (:,:) + wndm (:,:) … … 335 335 qsr (:,:) = qsr_tm (:,:) * r1_ndttrc 336 336 emp (:,:) = emp_tm (:,:) * r1_ndttrc 337 sfx (:,:) = sfx_tm(:,:) * r1_ndttrc337 fmmflx(:,:) = fmmflx_tm (:,:) * r1_ndttrc 338 338 fr_i (:,:) = fr_i_tm (:,:) * r1_ndttrc 339 339 # if defined key_trabbl … … 351 351 qsr (:,:) = qsr_tm (:,:) * r1_ndttrcp1 352 352 emp (:,:) = emp_tm (:,:) * r1_ndttrcp1 353 sfx (:,:) = sfx_tm(:,:) * r1_ndttrcp1353 fmmflx(:,:) = fmmflx_tm (:,:) * r1_ndttrcp1 354 354 fr_i (:,:) = fr_i_tm (:,:) * r1_ndttrcp1 355 355 # if defined key_trabbl … … 501 501 CALL lbc_lnk( emp (:,:) , 'T', 1. ) 502 502 CALL lbc_lnk( emp_b (:,:) , 'T', 1. ) 503 CALL lbc_lnk( sfx(:,:) , 'T', 1. )503 CALL lbc_lnk( fmmflx(:,:) , 'T', 1. ) 504 504 CALL lbc_lnk( qsr (:,:) , 'T', 1. ) 505 505 CALL lbc_lnk( wndm (:,:) , 'T', 1. 
) … … 601 601 fr_i_tm(:,:) = 0._wp 602 602 emp_tm (:,:) = 0._wp 603 sfx_tm(:,:) = 0._wp603 fmmflx_tm(:,:) = 0._wp 604 604 qsr_tm (:,:) = 0._wp 605 605 wndm_tm(:,:) = 0._wp … … 708 708 fr_i (:,:) = fr_i_temp (:,:) 709 709 emp (:,:) = emp_temp (:,:) 710 sfx (:,:) = sfx_temp(:,:)710 fmmflx(:,:) = fmmflx_temp(:,:) 711 711 emp_b (:,:) = emp_b_temp (:,:) 712 712 qsr (:,:) = qsr_temp (:,:) … … 827 827 fr_i_tm (:,:) = fr_i (:,:) 828 828 emp_tm (:,:) = emp (:,:) 829 sfx_tm (:,:) = sfx(:,:)829 fmmflx_tm (:,:) = fmmflx(:,:) 830 830 qsr_tm (:,:) = qsr (:,:) 831 831 wndm_tm (:,:) = wndm (:,:) … … 1056 1056 & rnf_temp(jpi,jpj) , h_rnf_temp(jpi,jpj) , & 1057 1057 & tsn_temp(jpi,jpj,jpk,2) , emp_b_temp(jpi,jpj), & 1058 & emp_temp(jpi,jpj) , sfx_temp(jpi,jpj) ,&1058 & emp_temp(jpi,jpj) , fmmflx_temp(jpi,jpj), & 1059 1059 & hmld_temp(jpi,jpj) , qsr_temp(jpi,jpj) , & 1060 1060 & fr_i_temp(jpi,jpj) , fr_i_tm(jpi,jpj) , & … … 1104 1104 & sshv_n_tm(jpi,jpj) , sshv_b_hold(jpi,jpj), & 1105 1105 & tsn_tm(jpi,jpj,jpk,2) , & 1106 & emp_tm(jpi,jpj) , sfx_tm(jpi,jpj) ,&1106 & emp_tm(jpi,jpj) , fmmflx_tm(jpi,jpj) , & 1107 1107 & emp_b_hold(jpi,jpj) , & 1108 1108 & hmld_tm(jpi,jpj) , qsr_tm(jpi,jpj) , & -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/SETTE/iodef_sette.xml
r3764 r4193 21 21 --> 22 22 23 <file_definition type="multiple_file" sync_freq="1d" min_digits="4">23 <file_definition type="multiple_file" name="@expname@_@freq@_@startdate@_@enddate@" sync_freq="1d" min_digits="4"> 24 24 25 25 <file_group id="1h" output_freq="1h" output_level="10" enabled=".FALSE."/> <!-- 1h files --> … … 54 54 55 55 <axis_definition> 56 <axis id="deptht" long_name="Vertical T levels" unit="m" /><!-- positive=".FALSE." -->57 <axis id="depthu" long_name="Vertical U levels" unit="m" /><!-- positive=".FALSE." -->58 <axis id="depthv" long_name="Vertical V levels" unit="m" /><!-- positive=".FALSE." -->59 <axis id="depthw" long_name="Vertical W levels" unit="m" /><!-- positive=".FALSE." -->56 <axis id="deptht" long_name="Vertical T levels" unit="m" positive="down" /> 57 <axis id="depthu" long_name="Vertical U levels" unit="m" positive="down" /> 58 <axis id="depthv" long_name="Vertical V levels" unit="m" positive="down" /> 59 <axis id="depthw" long_name="Vertical W levels" unit="m" positive="down" /> 60 60 <axis id="nfloat" long_name="Float number" unit="-" /> 61 <axis id="icbcla" long_name="Iceberg class" unit="-" /> 61 62 </axis_definition> 62 63 -
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/TOOLS/COMPILE/Fcheck_archfile.sh
r3294 r4193 40 40 # :: 41 41 # 42 # $ ./Fcheck_archfile.sh ARCHFILE C OMPILER42 # $ ./Fcheck_archfile.sh ARCHFILE CPPFILE COMPILER 43 43 # 44 44 # … … 59 59 # 60 60 #- 61 cpeval () 62 { 63 cat > $2 << EOF 61 64 62 if [ ${#2} -eq 0 ]; then 63 if [ ! -f ${COMPIL_DIR}/$1 ]; then 64 echo "Warning !!!" 65 echo "NO compiler chosen" 66 echo "Try makenemo -h for help" 67 echo "EXITING..." 68 exit 1 69 fi 65 #========================================================== 66 # Automatically generated by Fcheck_archfile.sh from 67 # $1 68 #========================================================== 69 70 EOF 71 while read line 72 do 73 eval "echo \"$line\" >> $2" 74 done < $1 75 } 76 # cleaning related to the old version 77 rm -f $( find ${COMPIL_DIR} -type l -name $1 -print ) 78 # 79 if [ ${#3} -eq 0 ]; then # arch not specified 80 if [ ! -f ${COMPIL_DIR}/arch.history ]; then 81 echo "Warning !!!" 82 echo "NO compiler chosen" 83 echo "Try makenemo -h for help" 84 echo "EXITING..." 85 exit 1 86 else # use the arch file defined in arch.history 87 myarch=$( cat ${COMPIL_DIR}/arch.history ) 88 if [ ! -f $myarch ]; then 89 echo "Warning !!!" 90 echo "previously used arch file no more found:" 91 echo $myarch 92 echo "EXITING..." 93 exit 1 94 else 95 if [ -f ${COMPIL_DIR}/$1 ]; then 96 if [ "$2" != "nocpp" ] 97 then 98 # has the cpp keys file been changed since we copied the arch file in ${COMPIL_DIR}? 99 mycpp=$( ls -l ${COMPIL_DIR}/$2 | sed -e "s/.* -> //" ) 100 if [ "$mycpp" != "$( cat ${COMPIL_DIR}/cpp.history )" ]; then 101 echo $mycpp > ${COMPIL_DIR}/cpp.history 102 cpeval ${myarch} ${COMPIL_DIR}/$1 103 fi 104 # has the cpp keys file been updated since we copied the arch file in ${COMPIL_DIR}? 105 mycpp=$( find -L ${COMPIL_DIR} -cnewer ${COMPIL_DIR}/$1 -name $2 -print ) 106 [ ${#mycpp} -ne 0 ] && cpeval ${myarch} ${COMPIL_DIR}/$1 107 fi 108 # has myarch file been updated since we copied it in ${COMPIL_DIR}? 
109 myarchdir=$( dirname ${myarch} ) 110 myarchname=$( basename ${myarch} ) 111 myarch=$( find -L $myarchdir -cnewer ${COMPIL_DIR}/$1 -name $myarchname -print ) 112 [ ${#myarch} -ne 0 ] && cpeval ${myarch} ${COMPIL_DIR}/$1 113 else 114 cpeval ${myarch} ${COMPIL_DIR}/$1 115 fi 116 fi 117 fi 118 else 119 nb=$( find ${MAIN_DIR}/ARCH -name arch-${3}.fcm -print | wc -l ) 120 if [ $nb -eq 0 ]; then # no arch file found 121 echo "Warning !!!" 122 echo "Compiler not existing" 123 echo "Try makenemo -h for help" 124 echo "EXITING..." 125 exit 1 126 fi 127 if [ $nb -gt 1 ]; then # more than 1 arch file found 128 echo "Warning !!!" 129 echo "more than 1 arch file for the same compiler have been found" 130 find ${MAIN_DIR}/ARCH -name arch-${3}.fcm -print 131 echo "keep only 1" 132 echo "EXITING..." 133 exit 1 134 fi 135 myarch=$( find ${MAIN_DIR}/ARCH -name arch-${3}.fcm -print ) 136 # we were already using this arch file ? 137 if [ "$myarch" == "$( cat ${COMPIL_DIR}/arch.history )" ]; then 138 if [ -f ${COMPIL_DIR}/$1 ]; then 139 if [ "$2" != "nocpp" ] 140 then 141 # has the cpp keys file been changed since we copied the arch file in ${COMPIL_DIR}? 142 mycpp=$( ls -l ${COMPIL_DIR}/$2 | sed -e "s/.* -> //" ) 143 if [ "$mycpp" != "$( cat ${COMPIL_DIR}/cpp.history )" ]; then 144 echo $mycpp > ${COMPIL_DIR}/cpp.history 145 cpeval ${myarch} ${COMPIL_DIR}/$1 146 fi 147 # has the cpp keys file been updated since we copied the arch file in ${COMPIL_DIR}? 148 mycpp=$( find -L ${COMPIL_DIR} -cnewer ${COMPIL_DIR}/$1 -name $2 -print ) 149 [ ${#mycpp} -ne 0 ] && cpeval ${myarch} ${COMPIL_DIR}/$1 150 fi 151 # has myarch file been updated since we copied it in ${COMPIL_DIR}? 
152 myarch=$( find -L ${MAIN_DIR}/ARCH -cnewer ${COMPIL_DIR}/$1 -name arch-${3}.fcm -print ) 153 [ ${#myarch} -ne 0 ] && cpeval ${myarch} ${COMPIL_DIR}/$1 154 else 155 cpeval ${myarch} ${COMPIL_DIR}/$1 156 fi 157 else 158 if [ "$2" != "nocpp" ] 159 then 160 ls -l ${COMPIL_DIR}/$2 | sed -e "s/.* -> //" > ${COMPIL_DIR}/cpp.history 161 fi 162 echo ${myarch} > ${COMPIL_DIR}/arch.history 163 cpeval ${myarch} ${COMPIL_DIR}/$1 164 fi 165 fi 166 167 #- do we need xios library? 168 if [ "$2" != "nocpp" ] 169 then 170 use_iom=$( sed -e "s/#.*$//" ${COMPIL_DIR}/$2 | grep -c key_iomput ) 70 171 else 71 myfile=$( find ${MAIN_DIR}/ARCH -name arch-${2}.fcm -print ) 72 if [ ${#myfile} -gt 0 ]; then 73 ln -sf ${myfile} ${COMPIL_DIR}/$1 74 else 75 echo "Warning !!!" 76 echo "Compiler not existing" 77 echo "Try makenemo -h for help" 78 echo "EXITING..." 79 exit 1 80 fi 172 use_iom=0 81 173 fi 174 have_lxios=$( sed -e "s/#.*$//" ${COMPIL_DIR}/$1 | grep -c "\-lxios" ) 175 if [[ ( $use_iom -eq 0 ) && ( $have_lxios -ge 1 ) ]] 176 then 177 sed -e "s/-lxios//g" ${COMPIL_DIR}/$1 > ${COMPIL_DIR}/tmp$$ 178 mv -f ${COMPIL_DIR}/tmp$$ ${COMPIL_DIR}/$1 179 fi 180 181 #- do we need oasis libraries? 182 if [ "$2" != "nocpp" ] 183 then 184 use_oasis=$( sed -e "s/#.*$//" ${COMPIL_DIR}/$2 | grep -c key_oasis3 ) 185 else 186 use_oasis=0 187 fi 188 for liboa in psmile.MPI1 mct mpeu scrip mpp_io 189 do 190 have_liboa=$( sed -e "s/#.*$//" ${COMPIL_DIR}/$1 | grep -c "\-l${liboa}" ) 191 if [[ ( $use_oasis -eq 0 ) && ( $have_liboa -ge 1 ) ]] 192 then 193 sed -e "s/-l${liboa}//g" ${COMPIL_DIR}/$1 > ${COMPIL_DIR}/tmp$$ 194 mv -f ${COMPIL_DIR}/tmp$$ ${COMPIL_DIR}/$1 195 fi 196 done 197 -
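A key part of the reworked Fcheck_archfile.sh above is stripping linker flags from the copied arch file when the matching CPP key is absent (here, `-lxios` without `key_iomput`). The following is a hedged, self-contained sketch of that sed-based stripping, using temporary files in place of the real ${COMPIL_DIR} layout:

```shell
# Hedged sketch of the -lxios stripping in Fcheck_archfile.sh; file
# names and contents are invented for illustration.
archfile=$(mktemp)
cppfile=$(mktemp)
echo 'LIBS="-lnetcdf -lxios"' > "$archfile"
echo 'bld::tool::fppkeys key_mpp_mpi' > "$cppfile"   # key_iomput not set

# strip comments, then count occurrences of the key / the flag
use_iom=$( sed -e "s/#.*$//" "$cppfile" | grep -c key_iomput )
have_lxios=$( sed -e "s/#.*$//" "$archfile" | grep -c -- "-lxios" )
if [ "$use_iom" -eq 0 ] && [ "$have_lxios" -ge 1 ]; then
    sed -e "s/-lxios//g" "$archfile" > "$archfile.tmp"
    mv -f "$archfile.tmp" "$archfile"
fi
cat "$archfile"   # the -lxios reference is gone, -lnetcdf survives
```

The same pattern is applied per-library for the OASIS libraries (psmile.MPI1, mct, mpeu, scrip, mpp_io) keyed on key_oasis3.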
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/TOOLS/COMPILE/Flist_archfile.sh
r3294 r4193 61 61 62 62 echo "Available compilers for -m option :" 63 for file in $(ls ${MAIN_DIR}/ARCH | grep fcm)63 for file in $(ls ${MAIN_DIR}/ARCH | grep "fcm$" ) 64 64 do 65 65 zvar1=${file#arch-} … … 71 71 72 72 if [ "$1" == "all" ]; then 73 for dir in $(ls ${MAIN_DIR}/ARCH | grep -v fcm)73 for dir in $(ls ${MAIN_DIR}/ARCH | grep -v "fcm$" ) 74 74 do 75 75 echo "Available compilers at ${dir} :" 76 for file in $(ls ${MAIN_DIR}/ARCH/${dir} | grep fcm)76 for file in $(ls ${MAIN_DIR}/ARCH/${dir} | grep "fcm$" ) 77 77 do 78 78 zvar1=${file#arch-} … … 84 84 elif [ -d ${MAIN_DIR}/ARCH/${1} ]; then 85 85 echo "Available compilers at $1 :" 86 for file in $(ls ${MAIN_DIR}/ARCH/$1 | grep fcm)86 for file in $(ls ${MAIN_DIR}/ARCH/$1 | grep "fcm$" ) 87 87 do 88 88 zvar1=${file#arch-} … … 93 93 else 94 94 echo "Available consortium member sub-directories :" 95 for dir in $(ls ${MAIN_DIR}/ARCH | grep -v fcm)95 for dir in $(ls ${MAIN_DIR}/ARCH | grep -v "fcm$" ) 96 96 do 97 97 echo ${dir} -
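The repeated change above anchors the grep pattern with a trailing `$`, so only names actually ending in `fcm` are listed as arch files. The difference, shown on invented file names:

```shell
# Illustrative file names only. Anchoring with "$" excludes backups and
# other names that merely contain "fcm" somewhere.
files='arch-ifort.fcm
arch-ifort.fcm.bak
README-fcm.txt'
echo "$files" | grep "fcm"     # unanchored: matches all three
echo "$files" | grep "fcm$"    # anchored: matches only arch-ifort.fcm
```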
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/TOOLS/MISCELLANEOUS/chk_iomput.sh
r2404 r4193 35 35 echo ' --insrc only print all variable definitions found in the source code' 36 36 echo 'Examples' 37 echo ' chk_iomput.sh'38 echo ' chk_iomput.sh --help'39 echo ' chk_iomput.sh ../../CONFIG/ORCA2_LIM/EXP00/iodef.xml "../../NEMO/OPA_SRC/ ../../NEMO/LIM_SRC_2/"'37 echo ' ./chk_iomput.sh' 38 echo ' ./chk_iomput.sh --help' 39 echo ' ./chk_iomput.sh ../../CONFIG/ORCA2_LIM/EXP00/iodef.xml "../../NEMO/OPA_SRC/ ../../NEMO/LIM_SRC_2/"' 40 40 echo 41 41 exit ;; … … 59 59 #------------------------------------------------ 60 60 # 61 [ $inxml -eq 1 ] && grep "< *field * id *=" $xmlfile 61 external=$( grep -c "<field_definition *\([^ ].* \)*src=" $xmlfile ) 62 if [ $external -eq 1 ] 63 then 64 xmlfield_def=$( grep "<field_definition *\([^ ].* \)*src=" $xmlfile | sed -e 's/.*src="\([^"]*\)".*/\1/' ) 65 xmlfield_def=$( dirname $xmlfile )/$xmlfield_def 66 else 67 xmlfield_def=$xmlfile 68 fi 69 [ $inxml -eq 1 ] && grep "< *field *\([^ ].* \)*id *=" $xmlfield_def 62 70 [ $insrc -eq 1 ] && find $srcdir -name "*.[Ffh]90" -exec grep -iH "^[^\!]*call *iom_put *(" {} \; 63 71 [ $(( $insrc + $inxml )) -ge 1 ] && exit … … 71 79 # list of variables used in "CALL iom_put" 72 80 # 73 varlistsrc=$( find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | sort -d ) 81 badvarsrc=$( find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | grep -ic iom_put ) 82 if [ $badvarsrc -ne 0 ] 83 then 84 echo "The following call to iom_put cannot be checked" 85 echo 86 find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | grep -i iom_put | sort -d 87 echo 88 fi 89 varlistsrc=$( find $srcdir -name "*.[Ffh]90" -exec grep -i "^[^\!]*call *iom_put *(" {} \; | sed -e "s/.*iom_put *( *[\"\']\([^\"\']*\)[\"\'] *,.*/\1/" | grep -vi iom_put | sort -d 
) 74 90 # 75 91 # list of variables defined in the xml file 76 92 # 77 varlistxml=$( grep "< *field * id *=" $xmlfile | sed -e "s/^.*< *field *id *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d )93 varlistxml=$( grep "< *field *\([^ ].* \)*id *=" $xmlfield_def | sed -e "s/^.*< *field .*id *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d ) 78 94 # 79 95 # list of variables to be outputed in the xml file 80 96 # 81 varlistout=$( grep "< *field * ref *=" $xmlfile | sed -e "s/^.*< *field *ref *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d )97 varlistout=$( grep "< *field *\([^ ].* \)*field_ref *=" $xmlfile | sed -e "s/^.*< *field .*field_ref *= *[\"\']\([^\"\']*\)[\"\'].*/\1/" | sort -d ) 82 98 # 83 99 echo "--------------------------------------------------" 84 100 echo check if all iom_put found in $srcdir 85 echo have a corresponding variable definition in $xmlfi le101 echo have a corresponding variable definition in $xmlfield_def 86 102 echo "--------------------------------------------------" 87 103 for var in $varlistsrc … … 90 106 if [ $tst -ne 1 ] 91 107 then 92 echo "problem with $var: $tst lines corresponding to its definition in $xmlfi le, but defined in the code in"108 echo "problem with $var: $tst lines corresponding to its definition in $xmlfield_def, but defined in the code in" 93 109 for f in $srclist 94 110 do -
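The updated chk_iomput.sh cross-checks the quoted first argument of every `CALL iom_put(...)` against the `<field id="...">` definitions, now following a `field_definition src=` indirection as well. A stripped-down sketch of the core grep/sed cross-check, on invented sample files (only the extraction and lookup mirror the real script):

```shell
# Invented sample files; the grep/sed cross-check mirrors chk_iomput.sh.
src=$(mktemp); xml=$(mktemp)
cat > "$src" << 'EOF'
      CALL iom_put( "toce", tsn(:,:,:,1) )
      CALL iom_put( "sst_miss", zsst(:,:) )
EOF
cat > "$xml" << 'EOF'
<field_definition>
   <field id="toce" long_name="temperature" unit="degC" />
</field_definition>
EOF

# collect the quoted first argument of each iom_put call
varlistsrc=$( grep -i "call *iom_put *(" "$src" \
              | sed -e 's/.*iom_put *( *"\([^"]*\)" *,.*/\1/' | sort -d )
missing=""
for var in $varlistsrc; do
    n=$( grep -c "< *field .*id *= *\"${var}\"" "$xml" )
    [ "$n" -eq 1 ] || missing="$missing $var"
done
echo "undefined in xml:$missing"
rm -f "$src" "$xml"
```

Here `sst_miss` is reported as undefined because no matching `<field id="sst_miss">` exists in the sample XML.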
branches/2013/dev_r3867_MERCATOR1_DYN/NEMOGCM/TOOLS/maketools
r3294 r4193 146 146 147 147 #- When used for the first time, choose a compiler --- 148 . ${COMPIL_DIR}/Fcheck_archfile.sh arch_tools.fcm ${CMP_NAM} || exit148 . ${COMPIL_DIR}/Fcheck_archfile.sh arch_tools.fcm nocpp ${CMP_NAM} || exit 149 149 150 150 #- Choose a default tool if needed ---