Running DYNAMICO

DYNAMICO can produce a rather large amount of output. It is therefore recommended to prepare a separate directory for each numerical experiment on a filesystem of adequate capacity. In this directory, copy the executable icosagcm.exe. You will find it in DYNAMICO/bin/, where DYNAMICO is the main directory containing the source code.

You will also need configuration files that define the resolution, initial conditions, etc. Sample files can be found in subdirectories of DYNAMICO/param_sets. Copy the *.def files from the desired subdirectory. There are typically two: run.def and earth_const.def. run.def is the main configuration file and includes earth_const.def.

Currently DYNAMICO generates its own grid and only runs idealized test cases which define their own initial conditions, so no input files are needed beyond the *.def files.
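
A minimal setup sketch (the run directory name and the DYNAMICO path are placeholders; the sample set shown is the shallow-water example described below):

  mkdir my_experiment && cd my_experiment
  cp /path/to/DYNAMICO/bin/icosagcm.exe .
  cp /path/to/DYNAMICO/param_sets/shallow_water/williamson91/test6/*.def .
  ./icosagcm.exe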

Running DYNAMICO as a multilayer shallow-water solver

DYNAMICO behaves as a multilayer shallow-water solver if the parameters caldyn_eta and boussinesq are set to eta_lag and .TRUE., respectively. The number of layers is defined by the parameter llm. A one-layer example is provided in shallow_water/williamson91/test6. This example runs test case 6 of Williamson et al. (1992), a Rossby-Haurwitz wave.
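
A minimal sketch of the corresponding run.def lines (parameter names and values as given above; see the sample files for the full parameter set):

  caldyn_eta = eta_lag
  boussinesq = .TRUE.
  llm        = 1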

Running DYNAMICO as a primitive equation solver

By default, DYNAMICO solves the traditional shallow-atmosphere hydrostatic equations. An example is provided in dcmip2012/test4/test4.1/test4.1-0. This example runs a dry baroclinic instability test case (Jablonowski & Williamson, 2006). Sample configuration files for the climate-like Held and Suarez (1994) benchmark can be found in climate/Held_Suarez.
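
For example, to try the Held-Suarez benchmark in a fresh run directory (a sketch, following the setup steps above):

  cp /path/to/DYNAMICO/param_sets/climate/Held_Suarez/*.def .
  ./icosagcm.exe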

Running DYNAMICO with Saturn (LMDZ-GENERIC) physics

This is for those playing with the SATURN_DYNAMICO branch. A quick tutorial on setting up the Saturn test case on Ada:

  • Download the SATURN_DYNAMICO branch:
    svn co svn+ssh://yourlogin@forge.ipsl.jussieu.fr/ipsl/forge/projets/dynamico/svn/codes/icosagcm/branches/SATURN_DYNAMICO
    
  • Compile XIOS using the script provided in the XIOS directory:
    ./compile_ada
    
  • Compile the model using the script provided in the ICOSAGCM directory (current settings are for MPI compilation):
    ./compile_dynlmdz_ada
    
  • Set up the test case in some directory:
    1. Copy over icosa_gcm.exe from ICOSAGCM/bin
    2. Copy over apbp.txt (vertical coordinate specification) and temp_profile.txt (initial temperature profile) from the TEST directory
    3. Copy over all the *.def files from the TEST directory
    4. Copy over all the *.xml files from the TEST directory (these control the XIOS outputs)
    5. Adapt the path "datadir" in callphys.def:
      datadir = /ccc/scratch/cont003/dsm/p86yann/SATURNE_128x96x64/DATAGENERIC/
      
      to point to the TEST/DATAGENERIC directory
    6. Set the run parameters in run_icosa.def (e.g. nqtot=0 since there are no tracers, run_length=..., etc.; see the sketch after this list)
    7. Run the model using a job (see sample script launch.ada in TEST)
    8. With XIOS outputs, the output file xios_diagfi.nc is on the native icosahedral grid, so it is usually better to reinterpolate it onto a lon-lat grid, e.g.:
      % cat mygrid
      gridtype = lonlat
      xsize    = 90
      ysize    = 45
      xfirst   = -180
      xinc     = 4
      yfirst   = -90
      yinc     = 4
      % cdo remapdis,mygrid xios_diagfi.nc xios_latlon.nc
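
The run_icosa.def settings mentioned in step 6 might look as follows (a sketch; nqtot=0 is taken from the step above, while run_length is left for you to fill in from the sample file):

  # no tracers in this setup
  nqtot = 0
  # desired run length; see the sample run_icosa.def
  run_length = ...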
      

Horizontal resolution

Horizontal resolution is controlled by the parameter nbp defined in run.def. The total number of hexagonal cells is about 10 x nbp x nbp, corresponding to subdividing each main triangle of the icosahedron into nbp x nbp sub-triangles (there are about twice as many triangles as hexagons). Note that, all else being equal, the time step dt should be inversely proportional to nbp for numerical stability.
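
For example (an illustrative value, not from the sample files), setting

  nbp = 40

in run.def gives about 10 x 40 x 40 = 16,000 hexagonal cells; doubling nbp to 80 quadruples the cell count and, for stability, requires roughly halving dt.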

Parallel computing with DYNAMICO

DYNAMICO can run in parallel by dividing the icosahedral mesh into tiles. There are at least 10 tiles, corresponding to the 20 faces of the icosahedron joined in pairs to form rhombi. These 10 rhombi are further subdivided into nsplit_i x nsplit_j tiles each. nsplit_i and nsplit_j are defined in run.def and need not divide nbp exactly.
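
For instance (hypothetical values), with the following lines in run.def:

  nsplit_i = 2
  nsplit_j = 2

each rhombus is split into 2 x 2 tiles, i.e. 10 x 2 x 2 = 40 tiles in total.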

To run DYNAMICO on a parallel machine, first compile it with MPI and/or OpenMP support, then launch it with mpirun or the equivalent command. The number of MPI processes (times OpenMP threads, for hybrid runs) must not exceed the number of tiles, 10 x nsplit_i x nsplit_j. There can be more tiles than processes, in which case each process will handle several tiles.
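
For example, with the 40 tiles of the sketch above, a pure-MPI run on 40 processes could be launched as follows (the exact launcher and options depend on your machine):

  mpirun -np 40 ./icosagcm.exe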

Tips and tricks

It may be useful to set

ulimit -s unlimited

before running DYNAMICO in order to avoid stack overflows (segmentation faults) due to large automatic arrays. With OpenMP:

export OMP_STACKSIZE=100M

or a larger value if necessary.