DYNAMICO can produce a rather large amount of output, so it is recommended to prepare a separate directory for each numerical experiment on a filesystem of adequate capacity. Into this directory, copy the executable icosa_gcm.exe, which you will find in DYNAMICO/bin/, where DYNAMICO is the main directory containing the source code.

You will also need configuration files that define the resolution, initial condition, etc. Sample files can be found in subdirectories of DYNAMICO/param_sets; copy the *.def files from the desired sub-directory. There are typically two of them, run.def and earth_const.def: run.def is the main configuration file and includes earth_const.def.
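
The steps above can be sketched as a short shell snippet. All paths here are assumptions to adapt to your installation; the Held_Suarez parameter set is used as an example, and the copies are guarded so the sketch runs even where the source tree is absent:

```shell
#!/bin/sh
# Sketch of run-directory setup; all paths are assumptions, adapt to your install.
DYNAMICO=${DYNAMICO:-$HOME/DYNAMICO}                # main source directory (assumed)
PARAM_SET=$DYNAMICO/param_sets/climate/Held_Suarez  # desired parameter set (example)
EXP=${EXP:-$HOME/runs/held_suarez}                  # fresh experiment directory
mkdir -p "$EXP"
# Copy the executable and configuration files (guarded so the sketch runs standalone):
[ -f "$DYNAMICO/bin/icosa_gcm.exe" ] && cp "$DYNAMICO/bin/icosa_gcm.exe" "$EXP/"
[ -d "$PARAM_SET" ] && cp "$PARAM_SET"/*.def "$EXP/"
echo "prepared $EXP"
```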

Currently DYNAMICO generates its own grid. It can run idealized test cases that define their own initial conditions; in that case no input files are needed beyond the *.def files and, if using XIOS, the *.xml files controlling XIOS behavior. DYNAMICO can also restart from a previous run, reading its state from a restart file.

Tips and tricks

Make sure you set

ulimit -s unlimited

before running DYNAMICO, in order to avoid stack overflows (segmentation faults) caused by large automatic arrays. When using OpenMP, also set:

export OMP_STACKSIZE=100M

or a larger value if necessary.
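
Putting the two settings together, a minimal pre-launch snippet might look like this; the thread count and the commented launch line are assumptions:

```shell
#!/bin/sh
ulimit -s unlimited          # unlimited shell stack: avoids segfaults from large automatic arrays
export OMP_STACKSIZE=100M    # per-thread stack (standard OpenMP name); increase if segfaults persist
export OMP_NUM_THREADS=4     # assumed number of OpenMP threads per MPI process
echo "OMP_STACKSIZE=$OMP_STACKSIZE threads=$OMP_NUM_THREADS"
# mpirun -np 10 ./icosa_gcm.exe   # actual launch line is hypothetical
```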

Running DYNAMICO as a multilayer shallow-water solver

DYNAMICO behaves as a multilayer shallow-water solver if the parameters caldyn_eta and boussinesq are set to eta_lag and .TRUE., respectively. The number of layers is defined by the parameter llm. A one-layer example is provided in shallow_water/williamson91/test6. This example runs test case 6 of Williamson et al. (1992), a Rossby-Haurwitz wave.
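
The parameter names and values come from the text above; a corresponding run.def fragment for a single-layer shallow-water run might read (the key = value layout is a sketch of the .def syntax):

```
caldyn_eta = eta_lag   # Lagrangian vertical coordinate
boussinesq = .TRUE.    # Boussinesq switch: shallow-water behaviour
llm = 1                # a single layer
```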

Running DYNAMICO as a primitive equation solver

By default DYNAMICO solves the traditional, shallow-atmosphere, hydrostatic equations. An example is provided in dcmip2012/test4/test4.1/test4.1-0. This example runs a dry baroclinic instability test case (Jablonowski & Williamson, 2006). Sample configuration files for the climate-like Held and Suarez (1994) benchmark can be found in climate/Held_Suarez.

Running DYNAMICO with Saturn (LMDZ-GENERIC) physics

This is for those playing with the SATURN_DYNAMICO branch. A quick tutorial on setting up the Saturn test case on Ada:

  • Download the SATURN_DYNAMICO branch; make sure you use the FCM bundled with DYNAMICO:
    svn co svn+ssh://
  • Compile XIOS using the script provided in directory XIOS:
    cd XIOS
  • Compile the model using the script provided in directory ICOSAGCM (current settings are for MPI compilation):
    cd ../ICOSAGCM
  • Set up the test case in some directory:
    1. Copy over icosa_gcm.exe from ICOSAGCM/bin
    2. Copy over apbp.txt (vertical coordinate specifications) and temp_profile.txt (initial temperature profile) from the TEST directory
    3. Copy over all the *.def files from the TEST directory
    4. Copy over all the *.xml files from the TEST directory (these control the XIOS outputs)
    5. Copy, or preferably symlink, the directory TEST/DATAGENERIC
    6. Set run parameters in run_icosa.def (e.g. run_length=..., etc.)
    7. Run the model using a job (see the sample script launch.ada in TEST)
    8. With XIOS, the output files are on the native icosahedral grid, so it is usually better to reinterpolate them onto a longitude-latitude grid, e.g. with CDO and a target grid description file:
      % cat mygrid
      gridtype = lonlat
      xsize    = 90
      ysize    = 45
      xfirst   = -180
      xinc     = 4
      yfirst   = -90
      yinc     = 4
      % cdo remapdis,mygrid
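
The remapdis line above lacks its input and output arguments; cdo takes them positionally after the operator. A complete invocation would look like the following, where the file names are hypothetical:

```
# icosa_output.nc (native grid) and output_lonlat.nc are hypothetical names
cdo remapdis,mygrid icosa_output.nc output_lonlat.nc
```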

Horizontal resolution

Horizontal resolution is controlled by the parameter nbp defined in run.def. The total number of hexagonal cells is about 10 x nbp x nbp, corresponding to subdividing each main triangle of the icosahedron into nbp x nbp sub-triangles (there are about twice as many triangles as there are hexagons). Note that, everything else being equal, the time step (dt) should be inversely proportional to nbp for numerical stability.
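
As a worked example of the cell-count rule above (the nbp value is an arbitrary illustration):

```shell
nbp=40
ncells=$((10 * nbp * nbp))   # approximate number of hexagonal cells: 16000
ntri=$((20 * nbp * nbp))     # about twice as many triangles: 32000
echo "nbp=$nbp: ~$ncells hexagons, ~$ntri triangles"
```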

Parallel computing with DYNAMICO

DYNAMICO can run in parallel by dividing the icosahedral mesh into tiles. There are at least 10 tiles, corresponding to the 20 faces of the icosahedron joined in pairs to form rhombi. These 10 rhombi are further subdivided into nsplit_i x nsplit_j tiles each. nsplit_i and nsplit_j are defined in run.def; they need not divide nbp exactly.

To run DYNAMICO on a parallel machine, you must first compile it with MPI and/or OpenMP support, then use mpirun or the equivalent command to launch it. There must be no more than 10 x nsplit_i x nsplit_j MPI x OpenMP processes, i.e. at most one process per tile. There can be more tiles than processes, in which case each process takes care of several tiles.
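
The tile-count arithmetic above can be checked with a short snippet; the nsplit values and the commented launch line are illustrative assumptions:

```shell
nsplit_i=2
nsplit_j=3
ntiles=$((10 * nsplit_i * nsplit_j))   # total number of tiles: 60
echo "tiles: $ntiles (use at most $ntiles MPI x OpenMP processes)"
# mpirun -np $ntiles ./icosa_gcm.exe   # hypothetical launch, one process per tile
```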

Last modified on 09/17/15 09:52:37