
Working on ciclad and climserv

ciclad is an IPSL computing cluster located on the Jussieu campus in Paris, France. ClimServ is another IPSL computing cluster, located at Polytechnique in Palaiseau. Both clusters have the same software and some file systems are cross-mounted. libIGCM is used in the same way on both clusters, and the shared account, located at ciclad, is also used at climserv. The documentation below is mainly written for ciclad, but the same applies to climserv.

1. General information

1.1. Documentation

http://ciclad-web.ipsl.jussieu.fr

http://ciclad-web.ipsl.jussieu.fr/ciclad-utilisation.pdf

hotline: svp-ciclad_at_ipsl_dot_jussieu_dot_fr

1.2. The machines and file systems

The front-end machine can be accessed at ciclad.ipsl.jussieu.fr.
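A connection from your local machine looks like this (yourlogin is a placeholder for your own account):

ssh yourlogin@ciclad.ipsl.jussieu.fr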

Output files written by libIGCM are stored by default in /data/yourlogin/IGCM_OUT at ciclad and in /homedata/yourlogin/IGCM_OUT at climserv.
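For example, to list the simulations already produced with the default settings on ciclad (yourlogin is again a placeholder):

ls /data/yourlogin/IGCM_OUT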

1.3. Shared account

The repository for shared files is found in /prodigfs/ipslfs/igcmg/IGCM.

Read more: Repository for shared files and shared tools

1.4. Individual account

You must belong to the igcmg users' group. Use the following command to check which groups you belong to:

id -a
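To check the igcmg membership directly, the output can be filtered; if nothing is printed, you do not belong to the group:

id -a | grep igcmg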

1.5. How to define your environment

To set up the ferret and FAST tools, add the following line in your login file (e.g. ${HOME}/.bashrc) :

. /home/igcmg/.atlas_env_ciclad_ksh
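For example, to append it to your bash login file and take it into account in the current session (assuming ${HOME}/.bashrc is your login file):

echo ". /home/igcmg/.atlas_env_ciclad_ksh" >> ${HOME}/.bashrc
source ${HOME}/.bashrc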

To receive the end-of-job messages returned by the job itself (e.g. end of simulation, error,...) you must specify your email address in the file ${HOME}/.forward.
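The .forward file simply contains the address; for example (the address below is a placeholder):

echo "firstname.lastname@example.org" > ${HOME}/.forward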

2. Compiling at ciclad and climserv

When installing modipsl, the default compiler at ciclad and climserv is set to ifort. In modipsl/util/AA_make.gdef this corresponds to the target ifort_CICLAD; the same target is used for both ciclad and climserv. The corresponding arch files for compiling with fcm are named arch-ifort_CICLAD.fcm and arch-ifort_CICLAD.path. Other compilers exist at CICLAD and ClimServ but they have not been tested with all models. Note that the following message from the ins_make script, which installs the makefiles, is correct for both ciclad and climserv:

Installation of makefiles, scripts and data for ifort_CICLAD
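If the makefiles were installed for another target, they can be regenerated for ifort_CICLAD from the util directory; a minimal sketch, assuming a standard modipsl checkout:

cd modipsl/util
./ins_make -t ifort_CICLAD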

The following forced configurations have been tested on ciclad with the ifort compiler:

The coupled models IPSLCM5 and IPSLCM6 have not been tested at CICLAD.

To be checked before compilation

2.1. Older versions

To compile at ciclad/climserv you need LMDZ5/trunk rev 2133 or later, ORCHIDEE/trunk rev 2375 or later, XIOS branchs/xios-1.0 rev 604 or XIOS/trunk, and libIGCM_v2.7 or later. Some modifications might be needed :
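Whatever modifications apply, the revision of each component installed in your modipsl tree can be checked with svn info; a minimal sketch, assuming the usual modipsl directory layout (adjust the paths to your checkout):

cd modipsl/modeles
svn info LMDZ | grep Revision
svn info ORCHIDEE | grep Revision
svn info XIOS | grep Revision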

3. libIGCM at ciclad and climserv

libIGCM can be used since tag libIGCM_v2.6. No monitoring or atlas is produced. No pack is implemented.

The memory requirements need to be adapted or added in the job's header. For LMDZOR at resolution 96x95x39 the following seems to be needed; adjust if more is required:

#PBS -l mem=6gb
#PBS -l vmem=30gb

3.1. Only MPI

The LMDZOR_v6 configuration (LMDZ testing 3114, ORCHIDEE trunk 2724, XIOS branchs/xios-1.0 rev 604, libIGCM_v2.7) has been tested successfully using XIOS with one server when running with MPI only. Only the memory needed to be adapted, as described above. Note that the default compilation for hybrid mpi_omp mode is also used when running with MPI only.

3.2. Mixed MPI-OMP

3.2.1. Attached mode or using one executable

Add the following in the main job, changing OMP_NUM_THREADS to the number of threads used in your case:

module load openmpi/1.4.5-ifort
export OMP_STACKSIZE=200M
export OMP_NUM_THREADS=2
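The PBS resource request in the main job must stay consistent with this decomposition; a minimal sketch, assuming 4 MPI processes with 2 OpenMP threads each on one node (the numbers are illustrative only):

#PBS -l nodes=1:ppn=8     # 4 MPI processes x 2 OpenMP threads = 8 cores on one node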

3.2.2. Server mode or using two executables

Not yet done

4. Example of job for a MPI executable

#PBS -S  /bin/bash
#PBS -N  job_mpi8
###PBS -q short
#PBS -j eo
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:15:00
#PBS -l mem=6gb
#PBS -l vmem=20gb

ulimit -s unlimited
module load netcdf4/4.2.1.1-ifort

# Go to directory where the job was launched
cd $PBS_O_WORKDIR

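# Launch the MPI executable with the mpirun matching the ifort OpenMPI build;
# standard output and error are redirected to gcm.out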
/usr/lib64/openmpi/1.4.5-ifort/bin/mpirun gcm.e > gcm.out 2>&1

The job is launched with qsub. Use qstat -u yourlogin to check the queue. Use qdel to cancel a queued or running job.
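A typical sequence, assuming the script above has been saved as job_mpi8.pbs (the file name is illustrative):

qsub job_mpi8.pbs        # submit; qsub prints the job id
qstat -u yourlogin       # check the state of your jobs in the queue
qdel job_id              # cancel a queued or running job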