ciclad is an IPSL computing cluster located on the Jussieu campus in Paris, France. ClimServ is another IPSL computing cluster, located at the Ecole Polytechnique in Palaiseau. Both clusters run the same software and some file systems are cross-mounted. libIGCM is used in the same way on both clusters, and the shared account hosted at ciclad is also used at climserv. The documentation below is written mainly for ciclad but also applies to climserv.
http://ciclad-web.ipsl.jussieu.fr
http://ciclad-web.ipsl.jussieu.fr/ciclad-utilisation.pdf
hotline : svp-ciclad_at_ipsl_dot_jussieu_dot_fr
The front-end machine can be accessed via ssh at ciclad.ipsl.jussieu.fr.
Output files written by libIGCM are stored by default in /data/yourlogin/IGCM_OUT at ciclad and in /homedata/yourlogin/IGCM_OUT at climserv.
The repository for shared files is found in /prodigfs/ipslfs/igcmg/IGCM.
Read more: Repository for shared files and shared tools
You must belong to the igcmg users' group. Use the following command to check which groups you belong to:
id -a
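As a sketch, the group check can also be scripted; the group name igcmg comes from the text above, everything else is illustrative:

```shell
# Report whether the current user is in the igcmg group.
# On a machine outside the cluster this will usually report the group missing.
if id -nG | grep -qw igcmg; then
    echo "OK: member of igcmg"
else
    echo "Not a member of igcmg: contact the hotline to be added"
fi
```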
To set up the ferret and FAST tools, add the following line to your login file (e.g. ${HOME}/.bashrc) :
. /home/igcmg/.atlas_env_ciclad_ksh
To receive the end-of-job messages sent by the job itself (e.g. end of simulation, error, ...), you must specify your email address in the file ${HOME}/.forward.
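For example, the file can be created like this; the address below is a placeholder, replace it with your own:

```shell
# Put your email address in ${HOME}/.forward so end-of-job mails reach you.
# "firstname.lastname@example.org" is a placeholder.
echo "firstname.lastname@example.org" > "${HOME}/.forward"
cat "${HOME}/.forward"
```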
When installing modipsl, the default compiler at ciclad and climserv is ifort. In modipsl/util/AA_make.gdef this corresponds to the target ifort_CICLAD; the same target is used for both ciclad and climserv. The corresponding arch files for compiling with fcm are named arch-ifort_CICLAD.fcm and arch-ifort_CICLAD.path. Other compilers exist at CICLAD and ClimServ but they have not been tested with all models. Note that the following message from ins_make, the script that installs the makefiles, is correct for both ciclad and climserv:
Installation of makefiles, scripts and data for ifort_CICLAD
The following forced configurations have been tested on ciclad with the ifort compiler:
The coupled models IPSLCM5 or IPSLCM6 have not been tested at CICLAD.
To be checked before compilation
module unload intel openmpi netcdf4
module load intel/15.0.6.233 openmpi/1.4.5-ifort netcdf4/4.2.1.1-ifort
cd modeles/XIOS/arch
svn update -r 1039
xios : (cd ../../modeles/XIOS ; ./make_xios --prod --arch ${FCM_ARCH} --job 8 ; cp bin/xios_server.exe ../../bin/. ; )
To compile at ciclad/climserv you need LMDZ5/trunk rev 2133 or later, ORCHIDEE/trunk rev 2375 or later, XIOS branchs/xios-1.0 rev 604 or XIOS/trunk, and libIGCM_v2.7 or later. Some modifications might be needed:
(cd ../../modeles/XIOS ; ./make_xios --netcdf_lib netcdf4_seq --prod --arch ${FCM_ARCH} --job 8 ; cp bin/xios_server.exe ../../bin/. ; )
XIOS_INCDIR=$LMDGCM/../XIOS/inc
XIOS_LIBDIR=$LMDGCM/../XIOS/lib
libIGCM can be used since tag libIGCM_v2.6. No monitoring or atlas is produced. No pack is implemented.
The memory limits need to be adapted or added in the job's header. For LMDZOR at resolution 96x95x39 the following seems to be sufficient; adjust if more is needed:
#PBS -l mem=6gb
#PBS -l vmem=30gb
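For instance, in the header of the main job these directives sit next to the other PBS options; a sketch, where the job name is a placeholder and the mem/vmem values are those given above:

```shell
#PBS -S /bin/bash
#PBS -N MYSIMU
#PBS -l mem=6gb
#PBS -l vmem=30gb
```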
LMDZOR_v6 configuration (LMDZ testing 3114, ORCHIDEE trunk 2724, XIOS branchs/xios-1.0 rev 604, libIGCM_v2.7) has been tested successfully using XIOS with 1 server when running with MPI only. Only the memory needed to be adapted as described above. Note that the default compilation for the hybrid mpi_omp mode is also used to run with MPI only.
Add the following in the main job, but change OMP_NUM_THREADS to the number of threads used in your case:
module load openmpi/1.4.5-ifort
export OMP_STACKSIZE=200M
export OMP_NUM_THREADS=2
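As a quick sanity check outside the job (the module load is only available on the cluster, so it is left out here; adjust OMP_NUM_THREADS to your case):

```shell
# Set the OpenMP variables as in the job and echo them back to verify.
export OMP_STACKSIZE=200M
export OMP_NUM_THREADS=2
echo "threads=$OMP_NUM_THREADS stacksize=$OMP_STACKSIZE"
```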
Not yet done
#PBS -S /bin/bash
#PBS -N job_mpi8
###PBS -q short
#PBS -j eo
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:15:00
#PBS -l mem=6gb
#PBS -l vmem=20gb

ulimit -s unlimited
module load netcdf4/4.2.1.1-ifort
# Go to directory where the job was launched
cd $PBS_O_WORKDIR
/usr/lib64/openmpi/1.4.5-ifort/bin/mpirun gcm.e > gcm.out 2>&1
The job is launched with qsub. Use "qstat -u login" to check the queue. Use qdel to cancel a queued or running job.