Version 13 (modified by jgipsl, 8 years ago)

Working on ciclad

ciclad is an IPSL computing server located on the Jussieu campus in Paris, France.

1. General information

1.1. Documentation

http://ciclad-web.ipsl.jussieu.fr

http://ciclad-web.ipsl.jussieu.fr/ciclad-utilisation.pdf

hotline : svp-ciclad_at_ipsl_dot_jussieu_dot_fr

1.2. The machines and file systems

The front-end machine can be accessed via ssh at ciclad.ipsl.jussieu.fr.

Data files must be placed in /data/yourlogin/ or in the filesystem dedicated to your project.

1.3. Shared account

The repository for shared files is found in /ipslfs/igcmg/IGCM.

Read more: Repository for shared files and shared tools

1.4. Individual account

You must belong to the igcmg users' group. Use the following command to check which groups you belong to:

id -a
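A quick way to test membership directly is to search the group list for igcmg (a sketch using standard id and grep options; the printed messages are illustrative):

```shell
# id -nG prints the names of the groups the current user belongs to;
# grep -qw matches igcmg as a whole word without printing anything.
if id -nG | grep -qw igcmg; then
    echo "igcmg: OK"
else
    echo "igcmg: missing - ask for access"
fi
```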

1.5. How to define your environment

To set up the ferret and FAST tools, add the following line to your login file (e.g. ${HOME}/.bashrc):

. /home/igcmg/.atlas_env_ciclad_ksh

To receive the end-of-job messages returned by the job itself (e.g. end of simulation, error, ...), you must specify your email address in the file ${HOME}/.forward.
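For example, assuming your address is jane.doe@example.org (a hypothetical address; replace it with your own):

```shell
# Write your email address into ~/.forward so job messages are forwarded to you.
# Note: > overwrites any existing ~/.forward file.
echo "jane.doe@example.org" > ${HOME}/.forward
```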

2. Compiling on ciclad

When installing modipsl, the default compiler on ciclad is set to ifort. In modipsl/util/AA_make.gdef this corresponds to the target ifort_CICLAD. The corresponding arch files for compiling with fcm are named arch-ifort_CICLAD.fcm and arch-ifort_CICLAD.path. To compile on ciclad you need LMDZ5/trunk rev 2133 or later and ORCHIDEE/trunk rev 2375 or later. Other compilers exist on ciclad but they have not been tested with all models.

The following forced configurations have been tested on ciclad with the ifort compiler:

  • NEMO forced mode
  • ORCHIDEE offline
  • LMDZ and LMDZOR forced mode (with configuration LMDZOR_v6, LMDZOR_v5.2 or LMDZ_v5)
  • LMDZOR_v6 (LMDZ5/trunk 2449, ORCHIDEE trunk 3171, XIOS branchs/xios-1.0 rev 604, libIGCM_v2.7). Some modifications are required:
    • Compile XIOS using sequential netCDF. To do so, add --netcdf_lib netcdf4_seq to the make_xios line in modipsl/config/AA_make, as follows:
      (cd  ../../modeles/XIOS ; ./make_xios --netcdf_lib netcdf4_seq  --prod --arch ${FCM_ARCH} --job 8 ; cp bin/xios_server.exe ../../bin/. ; )
      
    • To use older versions of LMDZ, add the following two lines at the end of modipsl/models/LMDZ/arch/arch-ifort_CICLAD.path:
      XIOS_INCDIR=$LMDGCM/../XIOS/inc
      XIOS_LIBDIR=$LMDGCM/../XIOS/lib
      

The coupled model IPSLCM5 has not been compiled on ciclad.

3. libIGCM on ciclad

libIGCM can be used starting from tag libIGCM_v2.6. Monitoring and atlas are not produced, and pack is not implemented.

The memory limits need to be adapted, or added, in the job's header section. For LMDZOR at resolution 96x95x39 the following settings seem to be sufficient; increase them if needed:

#PBS -l mem=6gb
#PBS -l vmem=30gb

3.1. MPI only

The LMDZOR_v6 configuration (LMDZ testing 3114, ORCHIDEE trunk 2724, XIOS branchs/xios-1.0 rev 604, libIGCM_v2.7) has been tested successfully using XIOS with one server when running with MPI only. Only the memory needed to be adapted, as described above. Note that the default compilation for the hybrid mpi_omp mode is also used to run with MPI only.

3.2. Mixed MPI-OMP

3.2.1. Attached mode or using one executable

Add the following to the main job, changing OMP_NUM_THREADS to the number of threads used in your case:

module load openmpi/1.4.5-ifort
export OMP_STACKSIZE=200M
export OMP_NUM_THREADS=2
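Putting this together, a hybrid job header could look as follows. This is only a sketch, assuming 4 MPI tasks with 2 OpenMP threads each on one 8-core node; the job name, -np value, memory limits and executable name are illustrative, not prescribed by libIGCM:

```shell
#PBS -S /bin/bash
#PBS -N job_hybrid
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:30:00
#PBS -l mem=6gb
#PBS -l vmem=30gb

module load openmpi/1.4.5-ifort
export OMP_STACKSIZE=200M
export OMP_NUM_THREADS=2      # 2 OpenMP threads per MPI task

cd $PBS_O_WORKDIR
# 4 MPI tasks x 2 OpenMP threads = 8 cores on the node
/usr/lib64/openmpi/1.4.5-ifort/bin/mpirun -np 4 gcm.e > gcm.out 2>&1
```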

3.2.2. Server mode or using two executables

Not yet done

4. Example of job for a MPI executable

#PBS -S  /bin/bash
#PBS -N  job_mpi8
###PBS -q short
#PBS -j eo
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:15:00
#PBS -l mem=6gb
#PBS -l vmem=20gb

ulimit -s unlimited
module load netcdf4/4.2.1.1-ifort

# Go to directory where the job was launched
cd $PBS_O_WORKDIR

/usr/lib64/openmpi/1.4.5-ifort/bin/mpirun gcm.e > gcm.out 2>&1

The job is submitted with qsub. Use "qstat -u login" to check the queue, and qdel to cancel a queued or running job.
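A typical submit/check/cancel sequence on the cluster looks like this (a sketch; the script name job_mpi8.pbs and the job id 12345 are hypothetical):

```shell
qsub job_mpi8.pbs    # submit the job script; prints the job id
qstat -u $USER       # list your jobs in the queue
qdel 12345           # cancel job 12345, whether queued or running
```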