Changes between Version 1 and Version 2 of Doc/Models/DYNAMICO


Timestamp: 10/11/19 15:59:41
Author: ymipsl

[[PageOutline(1-2,Table of contents)]]

----
# Introduction #

The DYNAMICO project develops a new dynamical core for LMD-Z, the atmospheric general circulation model (GCM) component of the IPSL-CM Earth System Model. The primary goal of DYNAMICO is to re-formulate in LMD-Z the horizontal advection and dynamics on an icosahedral grid, while preserving or improving their qualities with respect to accuracy, conservation laws and wave dispersion. In turn, a new grid refinement strategy is required. A broader goal is to revisit all fundamental features of the dynamical core, especially the shallow-atmosphere/traditional approximation, the vertical coordinate and the coupling with physics. Efficient implementation on present and future supercomputing architectures is also a key issue addressed by DYNAMICO.

# Getting DYNAMICO #

DYNAMICO is licensed under the [http://www.cecill.info/index.en.html CeCILL] open source license.
The latest version is accessible through svn:
{{{
svn co http://forge.ipsl.jussieu.fr/dynamico/svn/codes/icosagcm/trunk
}}}

Registered IPSL forge users belonging to the DYNAMICO group can do:
{{{
svn co svn+ssh://mylogin@forge.ipsl.jussieu.fr/ipsl/forge/projets/dynamico/svn/codes/icosagcm/trunk DYNAMICO
}}}

Replace 'mylogin' with your forge login. svn will create the DYNAMICO directory and download the source code there.
The source can also be browsed at https://forge.ipsl.jussieu.fr/dynamico/browser/codes/icosagcm/trunk

# Compiling DYNAMICO #

DYNAMICO is written in Fortran 90 with some legacy code in Fortran 77. The build process is based on [http://metomi.github.io/fcm/doc/user_guide/make.html FCM]. DYNAMICO requires the NetCDF library, including the F90 modules. MPI is required for parallel execution, but DYNAMICO can also compile and run without MPI.

In addition to NetCDF, DYNAMICO also depends on the BLAS library.

The compilation process is automated but some information is needed to guide it. This information is contained in text files in https://forge.ipsl.jussieu.fr/dynamico/browser/codes/icosagcm/trunk/arch . Sample files corresponding to a few machines (Jean-Zay at IDRIS, Irène at TGCC) are provided. Assuming you compile on Irène:

{{{
cd DYNAMICO
./make_icosa -arch X64_IRENE -parallel mpi -prod -job 8
}}}

will compile DYNAMICO. The make_icosa script accepts keyword-value pairs which drive its behaviour. The most important option is -arch ARCH (here ARCH=X64_IRENE). It directs make_icosa to use the information contained in:
 * arch/arch-ARCH.env
 * arch/arch-ARCH.fcm
 * arch/arch-ARCH.path

The *.env file is a shell script executed by make_icosa; it sets up the environment for use by *.path. The *.path file defines the paths to the libraries and modules needed for compilation. The *.fcm file defines the commands used to compile and link, as well as the options to be passed to the compiler/linker.
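
As an illustration, an arch-ARCH.path file is essentially a list of shell-style variable assignments pointing the build to the required libraries. The sketch below is hypothetical: the NETCDF_* variable names and paths are assumptions, so copy the exact names and form from one of the sample arch files rather than from here.
{{{
# Hypothetical excerpt of arch/arch-MYMACHINE.path (names and paths are placeholders)
NETCDF_INCDIR="-I/usr/local/netcdf/include"
NETCDF_LIBDIR="-L/usr/local/netcdf/lib"
NETCDF_LIB="-lnetcdff -lnetcdf"
}}}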

The option "-job 8" is similar to "make -j 8" and compiles in parallel for speed. After a successful build, the main executable is found in the bin/ directory.

== Compiling with XIOS output ==

DYNAMICO can direct its output through XIOS, a parallel I/O library and server. See https://forge.ipsl.jussieu.fr/ioserver.
To enable XIOS output:
 * get and compile XIOS in a separate directory
 * set the variables XIOS_INCDIR, XIOS_LIBDIR and XIOS_LIB to appropriate values in your arch-ARCH.path (see the sketch below)
 * use the option "-with_xios" in your "make_icosa" command
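
A minimal sketch of these two steps, assuming XIOS was built in /path/to/XIOS (the paths are placeholders; whether the -I/-L prefixes belong in the values should be checked against the sample arch files):
{{{
# In arch/arch-X64_IRENE.path (placeholder paths)
XIOS_INCDIR="-I/path/to/XIOS/inc"
XIOS_LIBDIR="-L/path/to/XIOS/lib"
XIOS_LIB="-lxios -lstdc++"

# then rebuild with XIOS support:
./make_icosa -arch X64_IRENE -parallel mpi -with_xios -prod -job 8
}}}
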
Why use XIOS:
 * without XIOS, each output field is written to a separate NetCDF file. Post-processing is required to group several fields together.
 * with XIOS, several fields can be written to a few output files. This behavior is controlled by the input file xios.xml (required). See https://forge.ipsl.jussieu.fr/ioserver for the syntax of this XML file; a minimal sketch is given after this list.
 * without XIOS, data to be written is communicated to the main MPI process, which writes to the NetCDF files while the other MPI processes stay idle. This is not expected to scale to a large number of MPI processes.
 * XIOS provides asynchronous, parallel I/O in order to scale to large MPI process counts.
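
Purely as an illustration of the kind of grouping xios.xml allows, a file definition could look like the sketch below. The field ids ("ps", "temp"), the file id and the output frequency are placeholders, not names guaranteed to be defined by DYNAMICO; refer to the XIOS documentation linked above and to the sample XML files shipped with DYNAMICO for the actual syntax and available fields.
{{{
<!-- hypothetical sketch: write two fields to a single daily file -->
<file_definition>
  <file id="daily_output" output_freq="1d">
    <field field_ref="ps" />
    <field field_ref="temp" />
  </file>
</file_definition>
}}}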

# Running DYNAMICO #

DYNAMICO can produce a rather large amount of output. It is therefore recommended to prepare a separate directory for each numerical experiment, on a filesystem of adequate capacity.
In this directory, copy the executable icosa_gcm.exe. You will find it in DYNAMICO/bin/, where DYNAMICO is the main directory containing the source code.
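
For example, assuming the source tree was checked out into $HOME/DYNAMICO and that $SCRATCH points to a filesystem with enough space (both paths are placeholders):
{{{
mkdir -p $SCRATCH/my_experiment
cd $SCRATCH/my_experiment
cp $HOME/DYNAMICO/bin/icosa_gcm.exe .
}}}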

You will also need configuration files that define the resolution, initial condition, etc. Sample files can be found in the subdirectories of [source:codes/icosagcm/trunk/param_sets DYNAMICO/param_sets]. Copy the *.def files from the desired sub-directory. There are typically run.def and earth_const.def; run.def is the main configuration file and includes earth_const.def.
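
As a rough illustration of the format, a run.def is a plain-text list of keyword = value settings. Only nbp, nsplit_i and nsplit_j below are parameter names taken from this page; the values and the comment syntax are assumptions, so start from one of the sample files rather than writing a run.def from scratch.
{{{
# illustrative excerpt of a run.def (values are placeholders)
nbp = 40
nsplit_i = 2
nsplit_j = 2
}}}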

Currently DYNAMICO generates its own grid. It can run idealized test cases which define their own initial conditions. In that case no other input files are needed beyond *.def and, if using XIOS, *.xml files controlling XIOS behavior. It may also restart from a previous run, reading from a restart file.

== Running DYNAMICO on parallel computers ==

DYNAMICO can run in parallel by dividing the icosahedral mesh into tiles. There are at least 10 tiles, corresponding to the 20 faces of the icosahedron joined in pairs to form rhombi. Each of these 10 rhombi is further subdivided into nsplit_i x nsplit_j tiles. nsplit_i and nsplit_j are defined in run.def; they need not divide nbp exactly.
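
As a worked example, assuming nbp is the number of grid points along each side of a rhombus: with nbp = 40, nsplit_i = 2 and nsplit_j = 3, each rhombus is cut into 2 x 3 = 6 tiles of roughly 20 x 13 points, and the mesh as a whole is divided into 10 x 2 x 3 = 60 tiles.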

To run DYNAMICO on a parallel machine, you must first compile it with MPI and/or OpenMP. Then use mpirun or the equivalent command to run it.
There must not be more MPI x OpenMP processes than the 10 x nsplit_i x nsplit_j tiles. There can be more tiles than processes, in which case each process will take care of several tiles.
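
Continuing the worked example above (60 tiles), a generic launch could look like the sketch below; the exact launcher (mpirun, srun, ccc_mprun, ...) and its options depend on your machine and batch system:
{{{
# 60 tiles (10 x 2 x 3): at most 60 MPI processes.
# Here 30 processes are used, so each process handles 2 tiles.
mpirun -np 30 ./icosa_gcm.exe
}}}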