Frequently (and not so frequently) Asked Questions
If you want to relaunch a simulation from the beginning, you need to delete everything created previously. All the output files must be deleted because they cannot be overwritten. There are two ways to do this: use the purge tool from libIGCM, or delete everything manually.
1. Use libIGCM purge
To purge your simulation (i.e. delete all outputs), just run:
path/to/libIGCM/purge_simulation.job
2. Manual purge
To remove all outputs created by a simulation, do the following:
Space | TGCC | IDRIS |
WORK | $CCCWORKDIR | $WORK |
SCRATCH | $CCCSCRATCHDIR | $SCRATCH |
STORE | $CCCSTOREDIR | $STORE |
TIP: If you have already run a simulation before, you can find all the output paths in the Script_output* file. Delete this file before starting a new simulation.
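As a sketch (the file name and the paths below are made up for illustration), the output directories recorded in a Script_Output file can be extracted like this:

```shell
# Hypothetical Script_Output extract; real files log the copies performed by
# libIGCM. We then print every directory under IGCM_OUT mentioned in it.
cat > Script_Output_MYEXP.000001 <<'EOF'
IGCM_sys_Put_Out : ocean.nc /ccc/store/xxx/IGCM_OUT/IPSLCM6/DEVT/pdControl/MYEXP/OCE/Output/ocean.nc
IGCM_sys_Put_Out : atmos.nc /ccc/store/xxx/IGCM_OUT/IPSLCM6/DEVT/pdControl/MYEXP/ATM/Output/atmos.nc
EOF
grep -o '/[^ ]*IGCM_OUT[^ ]*' Script_Output_MYEXP.000001 | xargs -n1 dirname | sort -u
```

Each printed directory is a candidate for deletion before relaunching.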
See here.
During each job execution, a corresponding Script_Output file is created.
Important: if your simulation stops, look for the keyword "IGCM_debug_CallStack" in this file. This keyword will be preceded by a line giving more details on the problem that occurred.
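For instance (the excerpt below is invented for illustration; only the keyword is taken from the text above), grep with a few lines of leading context shows the error message just above the call stack:

```shell
# Invented Script_Output excerpt: the informative line precedes the keyword.
cat > Script_Output_example <<'EOF'
Error in execution of the atmospheric model, error code 134
IGCM_debug_Exit
IGCM_debug_CallStack
EOF
grep -B 2 'IGCM_debug_CallStack' Script_Output_example
```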
Click here for more details.
See here.
It is defined in the config.card file, and the ins_job script will set what is needed in the job header.
If you run your model in hybrid mode (MPI-OpenMP), the number of MPI processes and the number of OpenMP threads are set in config.card in the section "Executable".
For example, for LMDZOR: we choose to run with 71 MPI processes and 8 OpenMP threads for LMDZ, and 1 MPI process for XIOS:
ATM= (gcm.e, lmdz.x, 71MPI, 8OMP)
SRF= ("", "")
SBG= ("", "")
IOS= (xios_server.exe, xios.x, 1MPI)
In this case the job will ask for 71*8 + 1 = 569 cores.
If we don't use OpenMP parallelization:
ATM= (gcm.e, lmdz.x, 71MPI, 1OMP)
SRF= ("", "")
SBG= ("", "")
IOS= (xios_server.exe, xios.x, 1MPI)
In this case the job will ask for 71 + 1 = 72 cores.
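The requested core count can be checked with simple shell arithmetic (values taken from the LMDZOR example above):

```shell
# Total cores = MPI processes x OpenMP threads per process, summed over the
# executables (XIOS runs 1 MPI process with no OpenMP threads).
atm_mpi=71; atm_omp=8; ios_mpi=1
echo "hybrid:   $(( atm_mpi * atm_omp + ios_mpi )) cores"   # prints 569
echo "pure MPI: $(( atm_mpi + ios_mpi )) cores"             # prints 72
```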
The keyword Fatal indicates that something went wrong in your simulation. Below is a list of the most common reasons:
See the corresponding chapter about monitoring and debug for further information.
libIGCM is constantly being updated. We recommend using the latest tag of libIGCM. Here is what to do:
cd modipsl
mv libIGCM libIGCM_old
svn checkout -r number_revision http://forge.ipsl.jussieu.fr/libigcm/svn/trunk/libIGCM libIGCM
where number_revision is specified by someone from the Platform group.
If AA_job has been modified, you must:
cd ...../config/MYCONFIG/MYEXP
mv Job_MYEXP OLDJOB        # save the old job
../../../libIGCM/ins_job
# edit Job_MYEXP: NbPeriod, memory, ... as it was done in OLDJOB
This method shows how to rerun a complete simulation period in a different directory (REDO instead of DEVT/PROD).
As a reminder:
Space | TGCC | IDRIS |
WORK | $CCCWORKDIR | $WORK |
SCRATCH | $CCCSCRATCHDIR | $SCRATCH |
STORE | $CCCSTOREDIR | $STORE |
Example: to rerun v3.historicalAnt1 to recompute a whole year (e.g. 1964), you must:
# Directory
mkdir STORE/....IGCM_OUT/IPSLCM5A/REDO/historicalAnt/v3.historicalAnt1
cd STORE/....IGCM_OUT/IPSLCM5A/REDO/historicalAnt/v3.historicalAnt1
# RESTART
mkdir -p RESTART ; cd RESTART
ln -s ../../../../PROD/historicalAnt/v3.historicalAnt1/RESTART/v3.historicalAnt1_19640831_restart.nc v3.historicalAnt1REDO_19640831_restart.nc
mkdir SCRATCH/....IGCM_OUT/IPSLCM5A/REDO/historicalAnt/v3.historicalAnt1REDO
cd SCRATCH/....IGCM_OUT/IPSLCM5A/REDO/historicalAnt/v3.historicalAnt1REDO
# mesh_mask
mkdir -p OCE/Output
cd OCE/Output
ln -s ../../../../../PROD/historicalAnt/v3.historicalAnt1/OCE/Output/v3.historicalAnt1_mesh_mask.nc v3.historicalAnt1REDO_mesh_mask.nc
cd ../..
cp -pr v3.historicalAnt1 v3.historicalAnt1REDO
OldPrefix= v3.historicalAnt1_19631231
PeriodDateBegin= 1964-01-01
PeriodDateEnd= 1964-01-31
CumulPeriod= xxx        # specify the proper value, i.e. the one for the same month in the run.card of the original run (ARGENT)
PeriodState= OnQueue
JobName=v3.historicalAnt1
...
SpaceName=REDO
...
DateEnd= 1964-12-31
...
RebuildFrequency=1M     # only for ORCHIDEE_OL
PackFrequency=1Y
...
TimeSeriesFrequency=NONE
...
SeasonalFrequency=NONE
vi run.card                 # check one more time
vi Job_v3.historicalAnt1    # check the time parameters and names of the output scripts
ccc_msub Job_v3.historicalAnt1
sdiff OCE/Debug/v3.historicalAnt1REDO_19640901_19640930_solver.stat $DMFDIR/../p86maf/IGCM_OUT/IPSLCM5A/PROD/historicalAnt/v3.historicalAnt1/OCE/Debug/v3.historicalAnt1_19640901_19640930_solver.stat
To do this you have to make some changes in your files.
LMD4848-L79 : libioipsl liborchidee lmdz48x48x79 verif
	echo "noORCAxLMD4848" > .resol_48x48x79
	echo "RESOL_ATM_3D=48x48x79" >> .resol_48x48x79

lmdz48x48x79 :
	$(M_K) lmdz RESOL_LMDZ=48x48x79
(cd ../../modeles/LMDZ; ./makelmdz_fcm -cpp ORCHIDEE_NOOPENMP -d $(RESOL_LMDZ) -cosp true -v true -parallel mpi -arch $(FCM_ARCH) ce0l ; cp bin/ce0l_$(RESOL_LMDZ)_phylmd_para_orch.e ../../bin/create_etat0_limit.e_$(RESOL_LMDZ) ; )
(cd ../../modeles/LMDZ; ./makelmdz_fcm -cpp ORCHIDEE_NOOPENMP -d $(RESOL_LMDZ) -cosp true -v true -mem -parallel mpi -arch $(FCM_ARCH) gcm ; cp bin/gcm_$(RESOL_LMDZ)_phylmd_para_mem_orch.e ../../bin/gcm.e_$(RESOL_LMDZ) ; )
[ -f ${SUBMIT_DIR}/../.resol ] && RESOL=$(head -1 ${SUBMIT_DIR}/../.resol)
becomes
[ -f ${SUBMIT_DIR}/../.resol_myresolution ] && RESOL=$(head -1 ${SUBMIT_DIR}/../.resol_myresolution)
[ -f ${SUBMIT_DIR}/../.resol ] && eval $(grep RESOL_ATM_3D ${SUBMIT_DIR}/../.resol) || RESOL_ATM_3D=96x95x19
becomes
[ -f ${SUBMIT_DIR}/../.resol_myresolution ] && eval $(grep RESOL_ATM_3D ${SUBMIT_DIR}/../.resol_myresolution) || RESOL_ATM_3D=96x95x19
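As a self-contained check of the eval/grep pattern above (the file name .resol_test is invented), you can see how the RESOL_ATM_3D variable gets set from the resolution file:

```shell
# Write a resolution file as the Makefile targets do, then read it back the
# way the job header does: eval turns the grepped line into an assignment.
echo "RESOL_ATM_3D=48x48x79" > .resol_test
[ -f .resol_test ] && eval $(grep RESOL_ATM_3D .resol_test) || RESOL_ATM_3D=96x95x19
echo $RESOL_ATM_3D   # prints 48x48x79 (the fallback 96x95x19 is unused)
```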
Now you can create as many experiments as resolutions you have compiled for your model.
cd modipsl/config/LMDZOR/
cp EXPERIMENT/LMDZOR/clim/config.card .
etc...
Warning: you will need to get the parameter files, and possibly some forcing files, corresponding to the resolution.
For a few configurations such as LMDZOR and LMDZREPR, you must create initial and boundary conditions in advance. This is not necessary for coupled configurations such as IPSLCM6.
For more information, see this chapter.
Atmospheric nudging
This paragraph describes how to perform a nudged run for configurations that include LMDZ.
To do so, you have to:
For example:
[BoundaryFiles]
List= ....\
    (work_subipsl/subipsl/ECMWF{your_resolution}/AN${year}/u_ecmwf_${year}${month}.nc, u.nc)\
    (work_subipsl/subipsl/ECMWF{your_resolution}/AN${year}/v_ecmwf_${year}${month}.nc, v.nc)\
Oceanic nudging
To force the ocean model in salinity or SST, you can find the procedure in the official NEMO documentation (section 7.12.3: Surface restoring to observed SST and/or SSS).
Note that NEMO applies salinity nudging by default when it is used in ocean-only forced configurations.
For various reasons you may want to run simulations with different versions of compiler or libraries (mainly netCDF).
The first thing is to keep a dedicated installation of modipsl for this specific setup since you will have to modify the libIGCM associated with the simulations.
Keep in mind that you need the modules of the libraries you want to use to be properly loaded at both:
Compile time
You can create a shell script that unloads the modules of the default configuration and loads the modules you want to use. Here is an example file modules.sh to use intel/12 and netCDF 3.6.3 (the order in which you unload and load the modules is important):
#!/bin/bash
#set -vx
# unload modules
module unload nco     #/4.1.0
module unload netcdf  #/4.2_hdf5_parallel
module unload hdf5    #/1.8.9_parallel
module unload intel
# load modules
module load intel/12.1.9.293
module load netcdf/3.6.3
module load hdf5/1.8.8
module load nco/4.1.0
Make sure the modules you want your code to use are loaded before each compilation of your configuration. Use module list to view the currently loaded modules. If necessary, source modules.sh before compiling.
Runtime
The proper modules have to be loaded for the dynamic linking to your libraries to succeed.
You can source modules.sh before submitting (ccc_msub), however this is not very convenient.
A better way is to modify libIGCM_sys_irene.ksh in your libIGCM installation ((...)/modipsl/libIGCM/libIGCM_sys/ directory).
Locate the part where the environment tools are set in this file and add module unload and load commands:
#====================================================
# Set environment tools (ferret, nco, cdo)
#====================================================
if [ X${TaskType} = Xcomputing ] ; then
  . $CCCHOME/../../dsm/p86ipsl/.atlas_env_netcdf4_irene_ksh > /dev/null 2>&1
  # to run with netcdf 3.6.3 ie compilation done before 17/2/2014
  # uncomment 2 lines :
  # module unload netcdf
  # module load netcdf/3.6.3
  # set the proper modules
  module unload nco
  module unload netcdf
  module unload hdf5
  module unload intel
  module load intel/12.1.9.293
  module load netcdf/3.6.3_p1
  module load hdf5/1.8.8
  module load nco/4.1.0
  # set the proper modules end
  export PATH=${PATH}:$CCCHOME/../../dsm/p86ipsl/AddNoise/src_X64_IRENE/bin
  export PATH=${PATH}:$CCCHOME/../../dsm/p86ipsl/AddPerturbation/src_X64_IRENE/bin
else
  . $CCCHOME/../../dsm/p86ipsl/.atlas_env_netcdf4_irene_ksh > /dev/null 2>&1
  PCMDI_MP=$CCCHOME/../../dsm/p86ipsl/PCMDI-MP
fi
This way you can launch experiments on IRENE without having to source your modules.sh file.
Keep in mind that the code has to be compiled with the same modules as the ones loaded by libIGCM at runtime.
In case of module mismatch you will have a runtime error stating a library was not found.
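A quick way to diagnose this (a sketch, with /bin/ls standing in for your model executable such as gcm.e) is to ask the dynamic linker which shared libraries fail to resolve:

```shell
# "not found" entries in ldd output mean a required library (e.g. netcdf) is
# not visible at runtime, typically because the matching module is not loaded.
ldd /bin/ls | grep 'not found' || echo "all shared libraries resolved"
```

Run the same check on your own executable after loading the modules you intend to use at runtime.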
To add the min/max/sum values of a field exchanged through OASIS, you have to enable verbose mode (LOGPRT 1), add 2 operations (4 instead of 2: CHECKIN and CHECKOUT) and describe them (INT=1 added for CHECKIN and for CHECKOUT). You will then find the information in the output text files.
Example :
$NLOGPRT
  0
...
O_SSTSST SISUTESW 1 5400 2 sstoc.nc EXPORTED
362 332 144 143 torc tlmd LAG=2700
P 2 P 0
LOCTRANS MAPPING
# LOCTRANS CHECKIN MAPPING CHECKOUT
# LOCTRANS: AVERAGE to average value over coupling period
AVERAGE
# CHECKIN: calculates the global minimum, the maximum and the sum of the field
# INT=1
# Mozaic: 1) mapping filename 2) connected unit 3) dataset rank 4) Maximum
# number of overlapped neighbors
rmp_torc_to_tlmd_MOSAIC.nc src
# CHECKOUT: calculates the global minimum, the maximum and the sum of the field
# INT=1
#
$NLOGPRT
  1
...
O_SSTSST SISUTESW 1 5400 4 sstoc.nc EXPORTED
362 332 144 143 torc tlmd LAG=2700
P 2 P 0
# LOCTRANS MAPPING
LOCTRANS CHECKIN MAPPING CHECKOUT
# LOCTRANS: AVERAGE to average value over coupling period
AVERAGE
# CHECKIN: calculates the global minimum, the maximum and the sum of the field
INT=1
# Mozaic: 1) mapping filename 2) connected unit 3) dataset rank 4) Maximum
# number of overlapped neighbors
rmp_torc_to_tlmd_MOSAIC.nc src
# CHECKOUT: calculates the global minimum, the maximum and the sum of the field
INT=1
#
> egrep 'oasis_advance_run at .*RECV|diags:' debug.root.01 | more
oasis_advance_run at 0 0 RECV: SISUTESW
diags: SISUTESW 0.00000000000 304.540452041 3548934.08936
oasis_advance_run at 0 0 RECV: SIICECOV
oasis_advance_run at 0 0 RECV: SIICEALW
oasis_advance_run at 0 0 RECV: SIICTEMW
oasis_advance_run at 0 0 RECV: CURRENTX
oasis_advance_run at 0 0 RECV: CURRENTY
oasis_advance_run at 0 0 RECV: CURRENTZ
oasis_advance_run at 5400 5400 RECV: SISUTESW
diags: SISUTESW 0.00000000000 304.569482446 3549053.65992
...
> egrep 'oasis_advance_run at.*SEND|diags:' debug.root.02 | more
oasis_advance_run at -2700 0 SEND: O_SSTSST
diags: O_SSTSST 0.271306415433 304.835436600 31678793.3366
oasis_advance_run at -2700 0 SEND: OIceFrc
oasis_advance_run at -2700 0 SEND: O_TepIce
oasis_advance_run at -2700 0 SEND: O_AlbIce
oasis_advance_run at -2700 0 SEND: O_OCurx1
oasis_advance_run at -2700 0 SEND: O_OCury1
oasis_advance_run at -2700 0 SEND: O_OCurz1
oasis_advance_run at 2700 5400 SEND: O_SSTSST
diags: O_SSTSST 0.271306391122 304.852847163 31680753.5627
...
To get output of the fields exchanged by OASIS, you have to set 3 parameters:
Then you will obtain 2 types of files:
One last improvement still to be done: getting the calendar of the simulation and the right number of values.
In lmdz.card, add the CO2.txt file:
ListNonDel= (...),\
    (${R_IN}/ATM/GHG/CMIP6/1pctCO2/CO2_CMIP6_1pctCO2_1850_2100.txt, CO2.txt)
In config.card, change the ExperimentName:
ExperimentName=1pctCO2
Modify the CO2 concentration in the config.def_preind file:
co2_ppm = 1137.28
In config.card, modify the ExperimentName:
ExperimentName=abrupt-4xCO2
You can find some information here
libIGCM allows you to perform post-processing jobs on the same machine as the main job. Starting post-processing jobs on other machines dedicated to post-processing used to be possible, but this is not done anymore.
Currently used machines:
Center | Computing machine | Post processing |
TGCC | Irene | xlarge node, -q standard |
IDRIS | JeanZay | --partition=prepost |
See here.
ciclad : ferret ...
> use "https://esgf.extra.cea.fr/thredds/dodsC/store/yourlogin/.../file.nc"
> use "https://prodn.idris.fr/thredds/dodsC/ipsl_public/yourlogin/.../file.nc"
More information on Monitoring can be found here: Doc/Running
See this section.
The general tool to check simulations and monitor them is Hermes (only accessible from IPSL network).
You can use it to monitor a simulation by clicking on the M button on the right, or select several simulations using the checkboxes and then select the intermonitoring tool to see all of them in the same graphs.
Another way to do it, is to use directly the intermonitoring webservice:
To select simulations from two centers or for two different logins, you must go back to step 1 and click on append directories to add new simulations.
See chapter Run and post-proc, section Monitoring and Intermonitoring here
The answer to this question is here.
In order to compute a seasonal mean over 100 years, first check that all the decadal means are on the file server (using SE_Checker). Then run the job create_multi_se on the post-processing machine.
Note that an atlas for these 100 years will also be created. See the example for the 10-year ATM atlas for CM61-LR-pi-03 here : SE ATM 2000-2009
libIGCM=${libIGCM:=.../POST_CMIP5/libIGCM_v1_10/modipsl/libIGCM}
libIGCM=${libIGCM:=.../POST_CMIP5/libIGCM_v1_10/modipsl/libIGCM}
SpaceName=${SpaceName:=PROD}
ExperimentName=${ExperimentName:=piControl}
JobName=${JobName:=piControlMR1}
CARD_DIR=${CARD_DIR:=${CURRENT_DIR}}
./SE_Checker.job

====================================================
Where do we run ? cesium21
Linux cesium21 2.6.18-194.11.4.el5 #1 SMP Tue Sep 21 05:04:09 EDT 2010 x86_64
====================================================

sys source cesium Intel X-64 lib.

--Debug1--> DefineVariableFromOption : config_UserChoices
--------------Debug3--> config_UserChoices_JobName=piControlMR1
--------------Debug3--> config_UserChoices_CalendarType=noleap
--------------Debug3--> config_UserChoices_DateBegin=1800-01-01
--------------Debug3--> config_UserChoices_DateEnd=2099-12-31

--Debug1--> DateBegin/End for SE : 1800_1809
--Debug1--> ATM
--Debug1--> SRF
--Debug1--> SBG
--Debug1--> OCE
--Debug1--> ICE
--Debug1--> MBG
--Debug1--> CPL
...
--Debug1--> DateBegin/End for SE : 2030_2039
--Debug1--> ATM
--Debug1--> 2 file(s) missing for ATM :
--Debug1--> piControlMR1_SE_2030_2039_1M_histmth.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_histmthNMC.nc
--Debug1--> SRF
--Debug1--> 1 file(s) missing for SRF :
--Debug1--> piControlMR1_SE_2030_2039_1M_sechiba_history.nc
--Debug1--> SBG
--Debug1--> 2 file(s) missing for SBG :
--Debug1--> piControlMR1_SE_2030_2039_1M_stomate_history.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_stomate_ipcc_history.nc
--Debug1--> OCE
--Debug1--> 4 file(s) missing for OCE :
--Debug1--> piControlMR1_SE_2030_2039_1M_grid_T.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_grid_U.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_grid_V.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_grid_W.nc
--Debug1--> ICE
--Debug1--> 1 file(s) missing for ICE :
--Debug1--> piControlMR1_SE_2030_2039_1M_icemod.nc
--Debug1--> MBG
--Debug1--> 3 file(s) missing for MBG :
--Debug1--> piControlMR1_SE_2030_2039_1M_ptrc_T.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_diad_T.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_dbio_T.nc
--Debug1--> CPL
--Debug1--> 2 file(s) missing for CPL :
--Debug1--> piControlMR1_SE_2030_2039_1M_cpl_atm.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_cpl_oce.nc
--------Debug2--> Submit create_se for period 2030-2039
IGCM_sys_MkdirWork : .../POST_CMIP5/piControl/piControlMR1/OutScript
IGCM_sys_QsubPost : create_se
Submitted Batch Session 179472
...
libIGCM=${libIGCM:=.../POST_CMIP5/libIGCM_v1_10/modipsl/libIGCM}
MultiSeasonalFrequency=100Y
The mean values are stored in the "Analyse" directories of each model component in the subdirectory SE_100Y (e.g. ATM/Analyse/SE_100Y).
The thredds space is regularly over quota in number of inodes.
Reminder: normally no file should be stored only in this space: it should contain only hard links to files stored on the workdir of your projects. These hard links are not counted in the volume quota. Here is the command to locate the files that do not follow this rule (files with a single link):
cd $CCCWORKDIR/../../thredds/YOURLOGIN
find . -links 1
Command to remove these files, after having carefully checked the list:
cd $CCCWORKDIR/../../thredds/YOURLOGIN
find . -links 1 -exec rm {} \;
We also recommend reading the find manual.
Examples :
find . -name '*DEMO*' -exec rm -f {} \;
find . \( -name "*DEMO*" -o -name "*TEST*" -o -name "*ENCORE*" \) -print -exec rm -f {} \;
find . -type f | wc -l
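A self-contained demonstration of the -links 1 test used above (all paths below are temporary and made up): a file hard-linked from the workdir has a link count of 2 and is skipped, while a file that exists only under thredds has a link count of 1 and is reported:

```shell
demo=$(mktemp -d)
echo data > "$demo/archived"                # stand-in for a file on the workdir
mkdir "$demo/thredds"
ln "$demo/archived" "$demo/thredds/shared"  # hard link: link count 2, kept
echo orphan > "$demo/thredds/only_here"     # link count 1: counts against quota
find "$demo/thredds" -type f -links 1       # reports only_here, not shared
```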
The chmod -R ugo+rX * command gives access to everybody to all files and subdirectories in the current directory.
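A small demonstration of the capital X (temporary directory and made-up names, assuming GNU stat): directories become traversable by everyone, while plain files only gain execute permission if they already had it for someone:

```shell
tree=$(mktemp -d)
mkdir "$tree/sub"
echo hi > "$tree/sub/file"
chmod 700 "$tree/sub"
chmod 600 "$tree/sub/file"
chmod -R ugo+rX "$tree"
stat -c '%A' "$tree/sub" "$tree/sub/file"   # drwxr-xr-x then -rw-r--r--
```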
Copy or move the target installation:
cp -r OldInstall NewInstall
# or
mv OldInstall NewInstall
Regenerate the makefiles to account for the new path:
cd NewInstall/modipsl/util
./ins_make
Recompile if you've done modifications in the source code:
cd NewInstall/modipsl/config/[YourConfig]
gmake clean
gmake [target]
Update your libIGCM installation:
rm NewInstall/modipsl/libIGCM/*.job
Prepare a new experiment as usual and launch ins_job to generate the .job files in your libIGCM directory and your experiment directory.
Depending on your libIGCM version, you will have to launch NewInstall/modipsl/libIGCM/ins_job, or NewInstall/modipsl/util/ins_job for older versions.
Check that the .job files are properly generated in NewInstall/modipsl/libIGCM/ and you are set.
You have to modify the Makefile to add the debug option for each component:
(cd ../../modeles/ORCHIDEE/ ; ./makeorchidee_fcm -debug -parallel mpi_omp -arch $(FCM_ARCH) -j 8 -xios2)
(cd ../../modeles/LMDZ ; ./makelmdz_fcm -d $(RESOL_LMDZ) -mem -debug ...
cd ../../modeles/XIOS; ./make_xios --arch $(FCM_ARCH) --debug
and in SOURCES/NEMO/arch-X64_IRENE.fcm add traceback :
%FCFLAGS -i4 -r8 -O3 -traceback -fp-model precise
gmake clean
gmake