# Working on the ESPRI mesocenter: Ciclad and !ClimServ #

[[PageOutline]]

For several years, the whole IPSL computing infrastructure has been federated within the [https://mesocentre.ipsl.fr ESPRI mesocenter]. This infrastructure includes:
 * Ciclad: an IPSL computing cluster located on the Jussieu campus (Paris, France), which mostly hosts modeling data (such as CORDEX, CMIP, ...)
 * !ClimServ: an IPSL computing cluster located on the École Polytechnique campus (Palaiseau, France), which mostly hosts observation data.

Both clusters have the same software and some file systems are cross-mounted, so [wiki:Doc/Tools#libIGCM libIGCM] is used in the same way on both clusters. The shared account, hosted at Ciclad, is also used at !ClimServ.

The documentation below is mainly written for Ciclad, but the same applies to !ClimServ.

## General information ##

### Documentation ###

A quick-start documentation can be found [https://mesocentre.ipsl.fr/quick-start/ here] (only in French for now). The mesocenter also provides details about [http://mesocentre.ipsl.fr/quick-start/2/ computation on Ciclad and !ClimServ].

Hotline: [http://mesocentre.ipsl.fr/ouverture-de-tickets/ meso-support (at) ipsl.fr]

### The machines and file systems ###

The Ciclad front-end machine can be accessed via {{{ciclad.jussieu.ipsl.fr}}}.

Output files written by [wiki:Doc/Tools#libIGCM libIGCM] are stored by default in `/data/yourlogin/IGCM_OUT` at Ciclad and in `/homedata/yourlogin/IGCM_OUT` at !ClimServ.

### Shared account ###

The repository for shared files is found in `/prodigfs/ipslfs/igcmg/IGCM`. Read more: [wiki:Doc/ComputingCenters/SharedFiles Repository for shared files and shared tools]

### Individual account ###

You must belong to the ''igcmg'' users' group. Use the following command to check which groups you belong to:
{{{
#!sh
id -a
}}}

### How to define your environment ###

To set up the ferret and FAST tools and to load the modules needed to compile LMDZ, ORCHIDEE and XIOS, add the following line to your login file (e.g. `${HOME}/.bashrc`):
{{{
#!sh
. /home/igcmg/.atlas_env_ciclad_ksh
}}}

To receive the end-of-job messages returned by the job itself (e.g. end of simulation, error, ...), you must specify your email address in the file {{{${HOME}/.forward}}}.

## Compiling at Ciclad and !ClimServ ##

When installing [https://forge.ipsl.jussieu.fr/igcmg_doc/wiki/Doc/Tools#modipsl modipsl], the default compiler at Ciclad and !ClimServ is set to ''ifort''. In `modipsl/util/AA_make.gdef` this corresponds to the target `ifort_CICLAD`; the same target is used for both Ciclad and !ClimServ. The corresponding arch files for compiling with fcm are named `arch-ifort_CICLAD.fcm` and `arch-ifort_CICLAD.path`. Other compilers exist at Ciclad and !ClimServ, but they have not been tested with all models.

Note that the following message from the ''ins_make'' script (which installs the makefiles) is correct for both Ciclad and !ClimServ:
{{{
Installation of makefiles, scripts and data for ifort_CICLAD
}}}

The following forced configurations have been tested at Ciclad with the ifort compiler:
 * NEMO forced mode
 * ORCHIDEE offline
 * LMDZOR_v6

The coupled models IPSLCM5 and IPSLCM6 have not been tested at Ciclad.

'''To be checked before compilation'''
 * Make sure the following modules are loaded in your terminal: '''intel/15.0.6.233 openmpi/1.4.5-ifort netcdf4/4.2.1.1-ifort''' (use "module list" to see the loaded modules; a quick check is sketched below).
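 A minimal sketch of such a check (the grep pattern simply filters the output for the three module names listed above; `module list` writes to stderr, hence the redirection):
{{{
#!sh
# Show only the intel / openmpi / netcdf4 entries among the loaded modules
module list 2>&1 | grep -E 'intel|openmpi|netcdf4'
}}}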
 If this is not the case, load them as follows (using one of the two options below):
{{{
#!sh
source /home/igcmg/.atlas_env_ciclad_ksh
# or
module unload intel openmpi netcdf4
module load intel/15.0.6.233 openmpi/1.4.5-ifort netcdf4/4.2.1.1-ifort
}}}
 * If you use a version of the XIOS trunk older than revision 1039, update the compilation options (the arch files) as follows:
{{{
cd modeles/XIOS/arch
svn update -r 1039
}}}
 * Verify that the option '''--netcdf_lib netcdf4_seq''' is set on the `make_xios` line in `config/xxxx/Makefile`. Otherwise, modify it as follows:
{{{
xios :
	(cd ../../modeles/XIOS ; ./make_xios --netcdf_lib netcdf4_seq \
	--prod --arch ${FCM_ARCH} --job 8 ; cp bin/xios_server.exe ../../bin/. ; )
}}}

### Older versions ###

To compile at Ciclad/!ClimServ you need LMDZ5/trunk revision 2133 or later, ORCHIDEE/trunk revision 2375 or later, XIOS branchs/xios-1.0 revision 604 or XIOS/trunk, and libIGCM_v2.7 or later. Some modifications might be needed:
 * Compile XIOS using sequential NetCDF. For this, add `--netcdf_lib netcdf4_seq` to the `make_xios` line in `modipsl/config/AA_make`, as follows:
{{{
(cd ../../modeles/XIOS ; ./make_xios --netcdf_lib netcdf4_seq --prod --arch ${FCM_ARCH} --job 8 ; cp bin/xios_server.exe ../../bin/. ; )
}}}
 * To use older versions of LMDZ, add the following two lines at the end of `modipsl/modeles/LMDZ/arch/arch-ifort_CICLAD.path`:
{{{
XIOS_INCDIR=$LMDGCM/../XIOS/inc
XIOS_LIBDIR=$LMDGCM/../XIOS/lib
}}}
 * LMDZOR_v5, LMDZ_v5, LMDZOR_v5.2: the versions of LMDZ and ORCHIDEE in these configurations are too old. They can still be used, but the arch* files need to be added. Do the following:
   * update for LMDZ: `cd modipsl/modeles/LMDZ/arch ; svn update -r 2449`
   * update for ORCHIDEE (only in LMDZOR_v5.2): `cd modipsl/modeles/ORCHIDEE/arch ; svn update -r 3171`
   * update for XIOS: `cd modipsl/modeles/XIOS/arch ; svn update -r 1039`

## libIGCM at Ciclad and !ClimServ ##

[wiki:Doc/Tools#libIGCM libIGCM] can be used since tag libIGCM_v2.6. The options MONITORING, PACK and ATLAS are not implemented for Ciclad and !ClimServ.

The memory requirements need to be adapted (or added) in the job header. For LMDZOR at resolution 144x142x79, the following seems to be needed (adjust if more is required); this example uses 31 MPI x 1 OMP for the gcm and 1 MPI for the XIOS server:
{{{
#PBS -l nodes=1:ppn=32
#PBS -l mem=60gb
#PBS -l vmem=200gb
}}}

For LMDZOR at resolution 96x95x39, the following is enough:
{{{
#PBS -l mem=6gb
#PBS -l vmem=30gb
}}}

### MPI only ###

The LMDZOR_v6 configuration (LMDZ testing rev 3114, ORCHIDEE trunk rev 2724, XIOS branchs/xios-1.0 rev 604, libIGCM_v2.7) has been tested successfully with one XIOS server when running with MPI only. Only the memory needed to be adapted, as explained above. Note that the default compilation for the hybrid mpi_omp mode is also used to run with MPI only.

### Mixed MPI-OMP ###

#### Attached mode or using one executable ####

Add the following in the main job, but change OMP_NUM_THREADS to the number of threads used in your case:
{{{
module load openmpi/1.4.5-ifort
export OMP_STACKSIZE=200M
export OMP_NUM_THREADS=2
}}}

#### Server mode or using two executables ####

Not yet done.

## Example of a job for an MPI executable ##

{{{
#PBS -S /bin/bash
#PBS -N job_mpi8
###PBS -q short
#PBS -j eo
#PBS -l nodes=1:ppn=8
#PBS -l walltime=00:15:00
#PBS -l mem=6gb
#PBS -l vmem=20gb

# Remove the stack size limit
ulimit -s unlimited

# Load the NetCDF module
module load netcdf4/4.2.1.1-ifort

# Go to the directory where the job was launched
cd $PBS_O_WORKDIR

# Run the MPI executable; standard output and error go to gcm.out
/usr/lib64/openmpi/1.4.5-ifort/bin/mpirun gcm.e > gcm.out 2>&1
}}}

The job is launched with '''qsub'''. Use "'''qstat''' -u login" to check the queue; a typical submit-and-monitor sequence is sketched below.
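As a minimal sketch, assuming the script above has been saved as `job_mpi8.pbs` (the file name and the login are placeholders):
{{{
#!sh
# Submit the job script; qsub prints the id of the new job
qsub job_mpi8.pbs

# List your jobs and their state (Q = queued, R = running)
qstat -u yourlogin
}}}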
Use '''qdel''' to cancel a queued or running job.
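For example, with the job id reported by '''qsub''' or '''qstat''' (the value below is only a placeholder):
{{{
#!sh
# Cancel the job with the given id
qdel 1234567
}}}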