[[PageOutline]]

Last edited [[Timestamp]] [[BR]]

'''Author''' : Daley Calvert [[BR]]
'''Ticket''' : #1332 [[BR]]
'''Branch''' : https://forge.ipsl.jussieu.fr/nemo/browser/branches/2014/dev_r5134_UKMO4_CF_compliance

----

=== Description ===

This development implements compliance with the CF-1.5 metadata standard through a combination of changes to XIOS (made by the XIOS development team) and to the NEMO IOM module.

'''XIOS-1.0 r618''' is required at version r5415 of the trunk and above.

----

=== Testing ===

||NVTK Tested||'''YES'''||
||Other model configurations||'''YES'''||
||Processor configurations tested||'''[ 4x4, 2x2, 2x1, 1x2, 1x1 ] [attached XIOS, 1 XIOS server, 2 XIOS servers, 4 XIOS servers]'''||
||If adding new functionality please confirm that the [[BR]]new code doesn't change results when it is switched off [[BR]]and ''works'' when switched on||'''YES'''||

Testing mainly used a 2x2 decomposition: 10-day runs with ORCA2_LIM and some basic checking of ORCA2_LIM3, at trunk version r5426 with XIOS-1.0 r618. Runs used the standard ORCA2_LIM and ORCA2_LIM3 iodef.xml files, iodef_ar5.xml, and an iodef.xml containing all possible outputs (not all of which were actually produced by the model).

'''Issues encountered:'''

On the Met Office IBM there appears to be an issue with parallel write and detached XIOS: with 2 or more XIOS servers in "one_file" mode, XIOS fails (without producing a core file) after the output files have been completed, perhaps during finalization, and the job hangs. This does not occur with attached XIOS, and it is not known whether it occurs on the Cray machine. Since it has not been reported elsewhere, it is presumed to be a problem with our build, but it is noted here.

The iodef_ar5.xml file causes the job to hang in one_file mode with 2 or more XIOS servers (another parallel write issue) while the scalar "transifs" variable is in the "_icemod.nc" file. When the variable is moved to the scalar file, the job completes normally (a sketch of this workaround is given at the end of this page). This may be an issue with XIOS; it is not known whether anyone else at the institution has used iodef_ar5.xml recently.

----

=== Bit Comparability ===

||Does this change preserve answers in your tested standard configurations (to the last bit)?||'''YES'''||
||Does this change bit compare across various processor configurations? (1xM, Nx1 and MxN are recommended)||'''YES'''||
||Is this change expected to preserve answers in all possible model configurations?||'''YES'''||
||Is this change expected to preserve all diagnostics? [[BR]]''Preserving answers in model runs does not necessarily imply preserved diagnostics.''||'''YES'''||

The changes that significantly affect file size are controlled by the '''ln_cfmeta''' namelist parameter; the equivalent of 5 global 2D arrays is added to each file. At ORCA2 resolution, the T grid file size (using the standard iodef.xml for ORCA2_LIM) increases from 23.12 MB to 24.06 MB.

----

=== System Changes ===

||Does your change alter namelists?||'''YES'''||
||Does your change require a change in compiler options?||'''NO'''||

The logical parameter '''ln_cfmeta''' has been added to the namrun namelist. It controls the output of cell areas and cell vertices to all output files (see the sketches at the end of this page).

----

=== Resources ===

The results below are for ORCA2_LIM on a 2x2 decomposition with 1 XIOS server in multiple_file mode.

There is no notable change in run time. There is a significant increase in high-water memory for NEMO and XIOS combined when ln_cfmeta = .TRUE. (from ~2100 MB to ~2650 MB). This is presumably on the XIOS side, as the only extra NEMO arrays are allocated and deallocated within a single initialization subroutine.
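For reference, a minimal sketch of the '''ln_cfmeta''' entry in the namrun namelist block; the surrounding parameters are elided and the comment text is an assumption, so consult the reference namelist for the authoritative entry:

{{{
&namrun        !   parameters of the run
   ...
   ln_cfmeta = .true.   !  write cell areas and cell vertices (CF-1.5 metadata)
                        !  to all output files; increases file size and memory use
   ...
/
}}}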
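Next, a minimal sketch, in ncdump/CDL form, of what the added metadata looks like under CF-1.5: cell vertices are attached to a coordinate variable via the `bounds` attribute, and cell areas are referenced from each field via `cell_measures`. The variable and dimension names below (area, bounds_nav_lon, nvertex, sst) are illustrative assumptions rather than the exact names written by XIOS:

{{{
netcdf example_grid_T {          // illustrative, not actual ncdump output
dimensions:
    x = 182 ; y = 149 ; nvertex = 4 ; time_counter = UNLIMITED ;
variables:
    float nav_lon(y, x) ;
        nav_lon:bounds = "bounds_nav_lon" ;   // cell vertices (CF "bounds")
    float bounds_nav_lon(y, x, nvertex) ;
    float area(y, x) ;                        // cell areas
        area:units = "m2" ;
    float sst(time_counter, y, x) ;
        sst:cell_measures = "area: area" ;    // links the field to its cell areas
}
}}}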
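Finally, a minimal sketch of the iodef_ar5.xml workaround noted under Testing: the scalar "transifs" field is removed from the icemod file and written to the scalar file instead. The file ids, descriptions and output frequency are illustrative assumptions, not copied from iodef_ar5.xml:

{{{
<file_group id="1m" output_freq="1mo" enabled=".TRUE.">

    <!-- "transifs" removed from the icemod file... -->
    <file id="file_ice" name_suffix="_icemod" description="ice variables" >
        <!-- ice fields only -->
    </file>

    <!-- ...and written to the scalar file instead -->
    <file id="file_scalar" name_suffix="_scalar" description="scalar variables" >
        <field field_ref="transifs" />
    </file>

</file_group>
}}}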
----

=== IPR issues ===

||Has the code been wholly (100%) produced by NEMO developers staff working exclusively on NEMO?||'''YES'''||