{{{ #!html

Frequently Asked Questions

}}}

----
[[NoteBox(note,Frequently (and not so frequently) Asked Questions, 600px)]]
[[TOC(heading=Table of contents,depth=1,inline)]]
[[PageOutline(1,Table of contents,pullout)]]
----

# FAQ : Setting up and performing a simulation #

## How do I overwrite an existing simulation? ##

If you want to relaunch a simulation from the beginning, you need to delete everything created previously. All the output files must be deleted because they cannot be overwritten. There are two ways to do it: use the `purge` tool from libIGCM, or delete everything manually.

'''1. Use libIGCM purge'''

To purge your simulation (i.e. delete all outputs), just run:
{{{
#!sh
path/to/libIGCM/purge_simulation.job
}}}

'''2. Manual purge'''

To remove all outputs created by the simulation, do the following:

 1. Delete the `run.card` file in your experiment directory.
 2. Delete all output directories:
   * `STORE/IGCM_OUT/TagName/(...)/JobName`
   * `WORK/IGCM_OUT/TagName/(...)/JobName`
   * `SCRATCH/IGCM_OUT/TagName/(...)/JobName`

|| Space || TGCC || IDRIS ||
|| WORK || $CCCWORKDIR || $WORK ||
|| SCRATCH || $CCCSCRATCHDIR || $SCRATCH ||
|| STORE || $CCCSTOREDIR || $STORE ||

 3. Launch the job.

''TIP'': If you have already run a simulation before, you can find all output paths in the Script_Output* file. Delete it before starting a new simulation.
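For illustration, the manual purge can be done with a few shell commands. This is only a sketch: the `TagName`, `SpaceName`, `ExperimentName` and `JobName` values below are placeholders to be replaced by those of your own simulation (the paths follow the `IGCM_OUT/TagName/SpaceName/ExperimentName/JobName` layout shown above), and the environment variables are the TGCC ones from the table.
{{{
#!sh
# Placeholders: adapt to your own simulation
TagName=IPSLCM6 ; SpaceName=DEVT ; ExperimentName=piControl ; JobName=MyJobTest

# 1. delete run.card in the experiment directory
rm -f run.card

# 2. delete the output directories on the three spaces
#    (TGCC variables shown; use $STORE, $WORK and $SCRATCH at IDRIS)
rm -rf $CCCSTOREDIR/IGCM_OUT/$TagName/$SpaceName/$ExperimentName/$JobName
rm -rf $CCCWORKDIR/IGCM_OUT/$TagName/$SpaceName/$ExperimentName/$JobName
rm -rf $CCCSCRATCHDIR/IGCM_OUT/$TagName/$SpaceName/$ExperimentName/$JobName

# also remove the old Script_Output files before relaunching (step 3)
rm -f Script_Output_*
}}}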
## How do I continue or restart a simulation? ##

See [wiki:Doc/Running#Howtocontinueorrestartasimulation here].

## How do I setup a new experiment? ##

See [wiki:Doc/Setup#Prepareanewexperiment here].

## How can I start from another simulation? ##

See [wiki:Doc/Setup#Examplefordifferentrestart here].

# FAQ : Running the model #

## How do I read the Script_Output file? ##

During each job execution, a corresponding `Script_Output` file is created. [[BR]]
'''Important''': If your simulation stops, you can look for the keyword "IGCM_debug_CallStack" in this file. This word will be preceded by a line giving more details on the problem that occurred.

Click [wiki:Doc/CheckDebug#AnalyzingtheJoboutput:Script_Output here for more details].
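As a quick illustration (a sketch, not a libIGCM tool), the relevant message can be located directly from the command line in the experiment directory; the file name pattern assumes the default `Script_Output_*` naming:
{{{
#!sh
# print the lines just before the call stack marker in the Script_Output files
grep -n -B 10 "IGCM_debug_CallStack" Script_Output_*
}}}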
## The LMDZ parallelism and the Bands files ##

See [wiki:Doc/Models/LMDZ#ParallelismandtheBandsfile here].

## How do I define the number of MPI jobs and the number of OpenMP threads? ##

They are defined in the config.card file, and the ins_job script will set what is needed in the job header. [[BR]]
If you run your model in hybrid mode (MPI-OpenMP), the number of MPI processes and the number of OpenMP threads are set in config.card in the section "Executable".

For example, for LMDZOR we choose to run with 71 MPI processes and 8 OpenMP threads for LMDZ, and 1 MPI process for XIOS:
{{{
ATM= (gcm.e, lmdz.x, 71MPI, 8OMP)
SRF= ("", "")
SBG= ("", "")
IOS= (xios_server.exe, xios.x, 1MPI)
}}}
In this case the job will ask for 71*8 + 1 = 569 cores. [[BR]]

If we don't use OpenMP parallelization:
{{{
ATM= (gcm.e, lmdz.x, 71MPI, 1OMP)
SRF= ("", "")
SBG= ("", "")
IOS= (xios_server.exe, xios.x, 1MPI)
}}}
In this case the job will ask for 71 + 1 = 72 cores.
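A minimal sketch of the arithmetic, using the hybrid LMDZOR example above (the shell variable names are only for this illustration): the total number of cores requested is the sum over the executables of MPI processes × OpenMP threads, and `ins_job` writes the corresponding values into the job header for you.
{{{
#!sh
# 71 MPI x 8 OMP for LMDZ + 1 MPI for the XIOS server
ATM_MPI=71 ; ATM_OMP=8 ; IOS_MPI=1
echo "total cores requested: $(( ATM_MPI * ATM_OMP + IOS_MPI ))"   # -> 569
}}}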
## Why does the `run.card` file contain the keyword `Fatal`? ##

The keyword `Fatal` indicates that something went wrong in your simulation. Below is a list of the most common reasons:

 * a problem was encountered while copying the input files
 * the frequency settings in config.card are erroneous
 * run.card has not been deleted before resubmitting a simulation, or "!OnQueue" has not been specified in run.card when continuing a simulation
 * a problem was encountered during the run
 * the disk quotas have been reached
 * a problem was encountered while copying the output files
 * a post processing job encountered a problem
 * `pack_xxx` has failed and caused the simulation to abort. In this case, you must find `STOP HERE INCLUDING THE COMPUTING JOB` located in the appropriate output pack file.
 * `rebuild` was not completed successfully (for ORCHIDEE_OL)

See the corresponding chapter about [wiki:Doc/CheckDebug monitoring and debug] for further information.
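A quick way to check the state of the last period and, once the cause is fixed, to prepare the simulation for resubmission (a sketch, to be run from the experiment directory):
{{{
#!sh
# show the state of the last attempted period
grep -n "PeriodState" run.card
grep -n "Fatal" run.card
# after fixing the problem, edit run.card and set
#   PeriodState= OnQueue
# before resubmitting the job
}}}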
## How do I use a different version of libIGCM? ##

libIGCM is constantly being updated. We recommend choosing the latest tag of libIGCM. Here is what to do:

 * save the old libIGCM version (just in case)
 * get the new libIGCM
 * reinstall the post processing jobs
 * make sure that there has been no major change in AA_job, otherwise reinstall the main job

{{{
#!sh
cd modipsl
mv libIGCM libIGCM_old
svn checkout -r number_revision http://forge.ipsl.jussieu.fr/libigcm/svn/trunk/libIGCM libIGCM
}}}
where `number_revision` is specified by someone from the Platform group.

If AA_job has been modified, you must:
 * move to the experiment directory,
 * delete or move the old jobs,
 * regenerate the jobs using `ins_job`.

MYCONFIG could be IPSLCM6 or ORCHIDEE_OL, for example:
{{{
cd ...../config/MYCONFIG/MYEXP
mv Job_MYEXP OLDJOB          # save the old job
../../../libIGCM/ins_job     # then modify Job_MYEXP (NbPeriod, memory, ...) as it was done in OLDJOB
}}}
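A related check: to see which libIGCM revision you currently have (for example before replacing it), svn can tell you. A minimal example, assuming your installation lives under modipsl:
{{{
#!sh
cd modipsl/libIGCM
svn info          # shows the repository URL and the revision currently checked out
svn log -l 3      # last few commits, to compare with the revision you were given
}}}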
## How do I restart a simulation to recover missing output files? ##

This method shows how to rerun a complete simulation period in a different directory (REDO instead of DEVT/PROD).

As a reminder:

|| Space || TGCC || IDRIS ||
|| WORK || $CCCWORKDIR || $WORK ||
|| SCRATCH || $CCCSCRATCHDIR || $SCRATCH ||
|| STORE || $CCCSTOREDIR || $STORE ||

Example: to rerun v3.historicalAnt1 to recompute a whole year (e.g. 1964), you must:

 * On the file server (STORE), create the necessary RESTART file.
{{{
## Directory
mkdir STORE/....IGCM_OUT/IPSLCM5A/REDO/historicalAnt/v3.historicalAnt1
cd STORE/....IGCM_OUT/IPSLCM5A/REDO/historicalAnt/v3.historicalAnt1

# RESTART
mkdir -p RESTART ; cd RESTART
ln -s ../../../../PROD/historicalAnt/v3.historicalAnt1/RESTART/v3.historicalAnt1_19640831_restart.nc v3.historicalAnt1REDO_19640831_restart.nc
}}}
 * There is nothing to do for the Bands file: it was saved at the previous period in the PARAM directory, and the simulation knows where to find it.
 * If you are running a coupled model: on the scratch disk ($CCCSCRATCHDIR/IGCM_OUT), create the mesh_mask file.
{{{
mkdir SCRATCH/....IGCM_OUT/IPSLCM5A/REDO/historicalAnt/v3.historicalAnt1REDO
cd SCRATCH/....IGCM_OUT/IPSLCM5A/REDO/historicalAnt/v3.historicalAnt1REDO

# mesh_mask
mkdir -p OCE/Output
cd OCE/Output
ln -s ../../../../../PROD/historicalAnt/v3.historicalAnt1/OCE/Output/v3.historicalAnt1_mesh_mask.nc v3.historicalAnt1REDO_mesh_mask.nc
cd ../..
}}}
 * On the computing machine:
   * create a new directory:
{{{
cp -pr v3.historicalAnt1 v3.historicalAnt1REDO
}}}
   * in this new directory, change the run.card file and set the following parameters to:
{{{
OldPrefix= v3.historicalAnt1_19631231
PeriodDateBegin= 1964-01-01
PeriodDateEnd= 1964-01-31
CumulPeriod= xxx    # Specify the proper value, i.e. the one for the same month in the saved run.card
PeriodState= OnQueue
}}}
   * change the config.card file to one pack period (1 year), do not do any post processing, start rebuild month by month (only for ORCHIDEE_OL) and specify !PackFrequency:
{{{
JobName=v3.historicalAnt1
...
SpaceName=REDO
...
DateEnd= 1964-12-31
...
RebuildFrequency=1M   # only for ORCHIDEE_OL
PackFrequency=1Y
...
TimeSeriesFrequency=NONE
...
SeasonalFrequency=NONE
}}}
   * you don't need to change the name of the simulation
   * restart the simulation:
{{{
vi run.card              # check one more time
vi Job_v3.historicalAnt1 # check the time parameters and names of the output scripts
ccc_msub Job_v3.historicalAnt1
}}}
 * Once the job is finished, if you are running a coupled model, check that the solver.stat files are identical. The solver.stat files are stored in DEBUG:
{{{
sdiff OCE/Debug/v3.historicalAnt1REDO_19640901_19640930_solver.stat $DMFDIR/../p86maf/IGCM_OUT/IPSLCM5A/PROD/historicalAnt/v3.historicalAnt1/OCE/Debug/v3.historicalAnt1_19640901_19640930_solver.stat
}}}

## How can I change the atmosphere horizontal resolutions using the same LMDZOR libIGCM configuration ? ##

To do this you have to make some changes in your files.

 * In the modipsl/config/LMDZOR directory, modify your Makefile to add the resolutions you need. Here is an example for the 48x48x79 resolution:
{{{
LMD4848-L79 : libioipsl liborchidee lmdz48x48x79 verif
	echo "noORCAxLMD4848" >.resol_48x48x79
	echo "RESOL_ATM_3D=48x48x79" >>.resol_48x48x79

lmdz48x48x79 :
	$(M_K) lmdz RESOL_LMDZ=48x48x79
}}}
 * Also add the resolution "$(RESOL_LMDZ)" to the names of the executables:
{{{
(cd ../../modeles/LMDZ; ./makelmdz_fcm -cpp ORCHIDEE_NOOPENMP -d $(RESOL_LMDZ) -cosp true -v true -parallel mpi -arch $(FCM_ARCH) ce0l ; cp bin/ce0l_$(RESOL_LMDZ)_phylmd_para_orch.e ../../bin/create_etat0_limit.e_$(RESOL_LMDZ) ; )
(cd ../../modeles/LMDZ; ./makelmdz_fcm -cpp ORCHIDEE_NOOPENMP -d $(RESOL_LMDZ) -cosp true -v true -mem -parallel mpi -arch $(FCM_ARCH) gcm ; cp bin/gcm_$(RESOL_LMDZ)_phylmd_para_mem_orch.e ../../bin/gcm.e_$(RESOL_LMDZ) ; )
}}}
 * In modipsl/libIGCM/AA_job, replace .resol by .resol_myresolution like this:
{{{
[ -f ${SUBMIT_DIR}/../.resol ] && RESOL=$(head -1 ${SUBMIT_DIR}/../.resol)

becomes

[ -f ${SUBMIT_DIR}/../.resol_myresolution ] && RESOL=$(head -1 ${SUBMIT_DIR}/../.resol_myresolution)
}}}
 * Modify modipsl/config/LMDZOR/GENERAL/DRIVER/lmdz.driver by replacing
{{{
[ -f ${SUBMIT_DIR}/../.resol ] && eval $(grep RESOL_ATM_3D ${SUBMIT_DIR}/../.resol) || RESOL_ATM_3D=96x95x19

by

[ -f ${SUBMIT_DIR}/../.resol_myresolution ] && eval $(grep RESOL_ATM_3D ${SUBMIT_DIR}/../.resol_myresolution) || RESOL_ATM_3D=96x95x19
}}}

Now you can create as many experiments as resolutions you have compiled:
{{{
cd modipsl/config/LMDZOR/
cp EXPERIMENT/LMDZOR/clim/config.card .
etc...
}}}
[[BR]][[BR]]
'''Warning: you will need to get parameter files, and possibly forcing files, corresponding to the new resolution.'''

# FAQ : Special configurations #

## How do I create the initial conditions for LMDZOR? ##

For a few configurations such as LMDZOR and LMDZREPR, you must create initial and boundary conditions in advance. This is not necessary for coupled configurations such as IPSLCM6. [[BR]]
For more information, see [wiki:Doc/Models/LMDZ#Creatinginitialstatesandinterpolatingboundaryconditions this chapter].

## How do I deactivate STOMATE in IPSLCM5 or in LMDZOR? ##

[wiki:Doc/Models/ORCHIDEE#DeactivatestomateinORCHIDEE Here is how to do it.]

## How do I perform a nudged run? ##

'''Atmospheric nudging'''[[br]]
This paragraph describes how to perform a nudged run for configurations that include LMDZ. To do so, you have to:

 * activate the option `ok_guide` in the `lmdz.card` file (this option enables you to activate the corresponding flag in `PARAM/guide.def`)
 * check that the wind fields specified are contained in `BoundaryFiles` ([wiki:Doc/Config/Lmdzorinca#Thenudgedmode several forcings] are available on Irene). For example:
{{{
#!sh
[BoundaryFiles]
List=   ....\
        (work_subipsl/subipsl/ECMWF{your_resolution}/AN${year}/u_ecmwf_${year}${month}.nc, u.nc)\
        (work_subipsl/subipsl/ECMWF{your_resolution}/AN${year}/v_ecmwf_${year}${month}.nc, v.nc)\
}}}
 * choose the proper dates in `config.card` (pay attention to leap years)

'''Oceanic nudging'''[[br]]
To nudge the ocean model in salinity or SST, you can find the procedure in the [https://zenodo.org/record/3248739#.XZ8HapMza1s NEMO official documentation] (section 7.12.3: ''Surface restoring to observed SST and/or SSS''); a sketch of the corresponding namelist is given below. Note that NEMO uses salinity nudging by default when it is run in ocean-forced configurations.
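For reference, a minimal sketch of the NEMO surface-restoring namelist block described in the documentation cited above. The parameter names and values are those of recent NEMO reference namelists and are given as an assumption only: check the section ''Surface restoring to observed SST and/or SSS'' and your own namelist before using them.
{{{
!-----------------------------------------------------------------------
&namsbc_ssr    ! surface restoring to observed SST and/or SSS (sketch only)
!-----------------------------------------------------------------------
   nn_sstr  =    1     ! add a retroaction term to the surface heat flux (0 = no, 1 = yes)
   nn_sssr  =    2     ! damping on SSS (0 = none, 1 = salt flux only, 2 = salt + volume flux)
   rn_dqdt  =  -40.    ! magnitude of the SST retroaction [W/m2/K]
   rn_deds  = -166.67  ! magnitude of the SSS damping [mm/day]
/
}}}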
## How do I run simulations with specific versions of compiler and/or libraries on Irene at the TGCC ? (modules) ##

For various reasons you may want to run simulations with different versions of compiler or libraries (mainly netCDF). The first thing is to keep a dedicated installation of modipsl for this specific setup, since you will have to modify the libIGCM associated with the simulations.

Keep in mind that you need the modules of the libraries you want to use to be properly loaded at both:
 * compile time
 * run time

'''Compile time'''

You can create a shell script that unloads the modules of the default configuration and loads the modules you want to use. Here is an example of the file `modules.sh` to use intel/12 and netCDF 3.6.3 (the order in which you unload and load the modules is important):
{{{
#!/bin/bash
#set -vx

# unload modules
module unload nco     #/4.1.0
module unload netcdf  #/4.2_hdf5_parallel
module unload hdf5    #/1.8.9_parallel
module unload intel

# load modules
module load intel/12.1.9.293
module load netcdf/3.6.3
module load hdf5/1.8.8
module load nco/4.1.0
}}}
You have to make sure the modules you want to be used by your code are loaded before each compilation of your configuration. Use `module list` to view the currently loaded modules. If necessary, source `modules.sh` before compiling.

'''Runtime'''

The proper modules have to be loaded for the dynamic linking to your libraries to succeed. You can source `modules.sh` before submitting (ccc_msub), however this is not very convenient. A better way is to modify `libIGCM_sys_irene.ksh` in your libIGCM installation (`(...)/modipsl/libIGCM/libIGCM_sys/` directory). Locate the part where the environment tools are set in this file and add the module unload and load commands:
{{{
#====================================================
# Set environment tools (ferret, nco, cdo)
#====================================================
if [ X${TaskType} = Xcomputing ] ; then
  . $CCCHOME/../../dsm/p86ipsl/.atlas_env_netcdf4_irene_ksh > /dev/null 2>&1
  # to run with netcdf 3.6.3 ie compilation done before 17/2/2014
  # uncomment 2 lines :
  # module unload netcdf
  # module load netcdf/3.6.3
  # set the proper modules
  module unload nco
  module unload netcdf
  module unload hdf5
  module unload intel
  module load intel/12.1.9.293
  module load netcdf/3.6.3_p1
  module load hdf5/1.8.8
  module load nco/4.1.0
  # set the proper modules end
  export PATH=${PATH}:$CCCHOME/../../dsm/p86ipsl/AddNoise/src_X64_IRENE/bin
  export PATH=${PATH}:$CCCHOME/../../dsm/p86ipsl/AddPerturbation/src_X64_IRENE/bin
else
  . $CCCHOME/../../dsm/p86ipsl/.atlas_env_netcdf4_irene_ksh > /dev/null 2>&1
  PCMDI_MP=$CCCHOME/../../dsm/p86ipsl/PCMDI-MP
fi
}}}
This way you can launch experiments on Irene without having to source your `modules.sh` file. Keep in mind that the code has to be compiled with the same modules as the ones loaded by libIGCM at runtime. In case of module mismatch you will get a runtime error stating that a library was not found.

## How to have min and max values exchanged through OASIS? ##

To get the min, max and sum of a field exchanged through OASIS, you have to switch on verbose mode ($NLOGPRT 1), add 2 operations (4 instead of 2: CHECKIN and CHECKOUT) and describe them (INT=1 added for CHECKIN and for CHECKOUT). You will then find the information in the output text files.

Example:

 * Modification in namcouple:
   * Before:
{{{
 $NLOGPRT
  0
...
O_SSTSST SISUTESW 1 5400  2  sstoc.nc EXPORTED
362 332 144 143 torc tlmd LAG=2700
P 2 P 0
LOCTRANS MAPPING
# LOCTRANS  CHECKIN  MAPPING  CHECKOUT
# LOCTRANS: AVERAGE to average value over coupling period
AVERAGE
# CHECKIN: calculates the global minimum, the maximum and the sum of the field
# INT=1
# Mozaic: 1) mapping filename 2) connected unit 3) dataset rank 4) Maximum
# number of overlapped neighbors
rmp_torc_to_tlmd_MOSAIC.nc src
# CHECKOUT: calculates the global minimum, the maximum and the sum of the field
# INT=1
#
}}}
   * After:
{{{
 $NLOGPRT
  1
...
O_SSTSST SISUTESW 1 5400  4  sstoc.nc EXPORTED
362 332 144 143 torc tlmd LAG=2700
P 2 P 0
# LOCTRANS MAPPING
LOCTRANS  CHECKIN  MAPPING  CHECKOUT
# LOCTRANS: AVERAGE to average value over coupling period
AVERAGE
# CHECKIN: calculates the global minimum, the maximum and the sum of the field
INT=1
# Mozaic: 1) mapping filename 2) connected unit 3) dataset rank 4) Maximum
# number of overlapped neighbors
rmp_torc_to_tlmd_MOSAIC.nc src
# CHECKOUT: calculates the global minimum, the maximum and the sum of the field
INT=1
#
}}}
 * Information:
   * min, max and sum of the field received in component 1 (atmosphere) are in the debug.root.01 file:
{{{
> egrep 'oasis_advance_run at .*RECV|diags:' debug.root.01|more
oasis_advance_run at 0 0 RECV: SISUTESW
diags: SISUTESW 0.00000000000 304.540452041 3548934.08936
oasis_advance_run at 0 0 RECV: SIICECOV
oasis_advance_run at 0 0 RECV: SIICEALW
oasis_advance_run at 0 0 RECV: SIICTEMW
oasis_advance_run at 0 0 RECV: CURRENTX
oasis_advance_run at 0 0 RECV: CURRENTY
oasis_advance_run at 0 0 RECV: CURRENTZ
oasis_advance_run at 5400 5400 RECV: SISUTESW
diags: SISUTESW 0.00000000000 304.569482446 3549053.65992
...
}}}
   * min, max and sum of the field sent from component 2 (ocean) are in the debug.root.02 file:
{{{
> egrep 'oasis_advance_run at.*SEND|diags:' debug.root.02|more
oasis_advance_run at -2700 0 SEND: O_SSTSST
diags: O_SSTSST 0.271306415433 304.835436600 31678793.3366
oasis_advance_run at -2700 0 SEND: OIceFrc
oasis_advance_run at -2700 0 SEND: O_TepIce
oasis_advance_run at -2700 0 SEND: O_AlbIce
oasis_advance_run at -2700 0 SEND: O_OCurx1
oasis_advance_run at -2700 0 SEND: O_OCury1
oasis_advance_run at -2700 0 SEND: O_OCurz1
oasis_advance_run at 2700 5400 SEND: O_SSTSST
diags: O_SSTSST 0.271306391122 304.852847163 31680753.5627
...
}}}

## How to output fields exchanged by OASIS? ##

To output the fields exchanged by OASIS, one has to set 3 parameters:
 * {{{OutputMode=y}}} in COMP/oasis.card
 * {{{WriteFrequency="1M 1D"}}} : add the 1D write frequency in the CPL section of config.card
 * {{{RebuildFrequency=1D}}} : add a post rebuild step, i.e. the frequency for rebuild, in the Post section of config.card

You will then obtain 2 types of files:
 * DA/..._1M_cpl_oce.nc and ..._1M_cpl_atm.nc: the variables received or sent by the ocean (resp. the atmosphere) to/from the other component, for each exchange (17, 16, 3 or 2 values per day; 0, 1, 14 or 15 extra values forced to 0)
 * MO/..._1M_cpl_oce.nc and ..._1M_cpl_atm.nc: the variables received or sent by the ocean (resp. the atmosphere) to/from the other component, averaged per month; result of cdo monavg and ncatted -a axis,time,c,c,T -a long_name,time,c,c,Time axis -a title,time,c,c,Time -a calendar,time,c,c,noleap -a units,time,c,c,seconds since ... -a time_origin,time,c,c,...

One last improvement still to be done: to have the calendar of the simulation and the right number of values.
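To summarise, the three settings listed at the beginning of this section end up in the following places. This is only a sketch: the surrounding lines of the files are not shown, and the `[UserChoices]` section name in oasis.card is an assumption to be checked against your own configuration.
{{{
# COMP/oasis.card (assumed section name)
[UserChoices]
OutputMode=y

# config.card, CPL section
[CPL]
WriteFrequency="1M 1D"

# config.card, Post section
[Post]
RebuildFrequency=1D
}}}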
## How do I create a 1pctCO2 experiment? ##

In lmdz.card, add the CO2.txt file:
{{{
ListNonDel= (...),\
    (${R_IN}/ATM/GHG/CMIP6/1pctCO2/CO2_CMIP6_1pctCO2_1850_2100.txt, CO2.txt)
}}}
In config.card, change the !ExperimentName:
{{{
ExperimentName=1pctCO2
}}}

## How do I create an abrupt-4xCO2 experiment? ##

Modify the CO2 concentration in the config.def_preind file:
{{{
co2_ppm = 1137.28
}}}
In config.card, modify the !ExperimentName:
{{{
ExperimentName=abrupt-4xCO2
}}}
You can find some information [https://forge.ipsl.jussieu.fr/igcmg/wiki/IPSLCM6/Simulations/DECK here].

# FAQ : Post processing #

## Where are post processing jobs run? ##

libIGCM allows you to perform post processing jobs on the same machine as the main job. Post processing jobs used to be started on other machines dedicated to post processing, but this is not done anymore.

Currently used machines:

|| Center || Computing machine || Post processing ||
|| TGCC || Irene || xlarge node, -q standard ||
|| IDRIS || !JeanZay || --partition=prepost ||

## How do I check that the post processing jobs were successful? ##

See [wiki:Doc/CheckDebug#Checkstatusofyoursimulations here].

## How do I read/retrieve/use files on esgf/thredds? ##

 * At IDRIS, visit the following website (will change soon):
   * [http://prodn.idris.fr/thredds] and select ipsl_public, your login, your configuration, your simulation and the ATM component (then the `Analyse` subdirectory) as well as `ATLAS` or `MONITORING`.
 * At TGCC, visit the following website:
   * [https://vesg.ipsl.upmc.fr/thredds/catalog/catalog.html] and select:
     * work, your login, your configuration, your simulation, etc. for ATLAS and MONITORING
     * project (such as CORDEX), your login, your configuration, your simulation, etc. and ATM or another component to access Analyse files (TS or SE)
 * Once you have found a netCDF file (suffix `.nc`), you can download it by clicking on it, or you can analyze it with openDAP functions. To do so, add `thredds/dodsC` to the address right after the server address. For example:
{{{
ciclad : ferret ...
> use "https://esgf.extra.cea.fr/thredds/dodsC/store/yourlogin/.../file.nc"
> use "https://prodn.idris.fr/thredds/dodsC/ipsl_public/yourlogin/.../file.nc"
}}}

More information on Monitoring can be found here: [wiki:Doc/Running#Monitoring]

## How do I add a variable to the Time Series? ##

See this [wiki:Doc/Running#TimeSeries section].

## How do I superimpose monitoring plots (intermonitoring)? ##

The general tool to check simulations and monitor them is [wiki:Doc/CheckDebug#Supervisor:hermes.ipsl.upmc.fr Hermes] (only accessible from the IPSL network). [[BR]]
You can use it to monitor a simulation by clicking on the `M` button on the right, or select several simulations using the checkboxes and choose the intermonitoring tool to see all of them in the same graphs.

Another way to do it is to use the intermonitoring webservice directly: [[BR]]
[https://vesg.ipsl.upmc.fr/thredds/fileServer/IPSLFS/brocksce/screencast/InterMonitoring.html Audio] [[BR]]

Short link:
 * for esgf, type:
{{{
http://webservices2017.ipsl.fr/interMonitoring/
}}}

{{{
#!comment
Visit: http://webservices2017.ipsl.fr/interMonitoring/
In the 1st tab, type: https://esgf.extra.cea.fr/thredds/catalog/store/p86ghatt/OL2/PROD
Click on "List directories".
To add simulations at IDRIS, go back to the 1st tab and type https://prodn.idris.fr/thredds/catalog/ipsl_public/rces061/OL2/DEVT
Then click on "Append directories" to display TGCC and IDRIS simulations on the next tab.
In the 2nd tab, select the simulations 27, 29, 30 and 33 (shift click or control click to choose several simulations).
Then click on "search files".
In the 3rd tab, choose a variable (SBG_BIOMASS) and click on "Validate", then "Validate" in the 4th tab and "Prepare and Run the ferret script".
A page called "http://webservices.ipsl.jussieu.fr/monitoring/script.php" is then displayed with a biomass multi-monitoring.
Click on "Run script on server" to display all figures.
The steps to save the ferret script and run it locally are described in the 'Help'.
}}}

To select simulations from two centers or for two different logins, you must go back to step 1 and click on '''append directories''' to add new simulations.

## What is the Monitoring? ##

See the chapter '''Run and post-proc''', section ''Monitoring and Intermonitoring'': [wiki:Doc/Running#Monitoringandintermonitoring here].

## How do I add a plot to the monitoring? ##

The answer to this question is [wiki:Doc/Running#Addingavariabletothemonitoring here].

## How do I calculate seasonal means over 100 years? ##

In order to compute a seasonal mean over 100 years, check that all decades are on the file server (`SE_checker`). Then run the job `create_multi_se` on the post processing machine. Note that an atlas for these 100 years will also be created. See the example of the 10-year ATM atlas for CM61-LR-pi-03 here: [https://vesg.ipsl.upmc.fr/thredds/fileServer/work/p86maf/IPSLCM6/PROD/piControl/CM61-LR-pi-03/ATLAS/SE_2000_2009/ATM/ATM.html SE ATM 2000-2009]

 1. If not done yet, create a specific post processing directory. See the chapter on how to [wiki:Doc/Running#Lancerourelancerlespost-traitements run or restart post processing jobs] for details.
 1. Copy `create_se.job`, `SE_checker.job` and `create_multi_se.job`.
 1. Check/change the following variables in `create_se.job`:
{{{
#!sh
libIGCM=${libIGCM:=.../POST_CMIP5/libIGCM_v1_10/modipsl/libIGCM}
}}}
 1. Check that all decades exist.
 1. Check/change the variables in `SE_checker.job`:
{{{
#!sh
libIGCM=${libIGCM:=.../POST_CMIP5/libIGCM_v1_10/modipsl/libIGCM}
SpaceName=${SpaceName:=PROD}
ExperimentName=${ExperimentName:=piControl}
JobName=${JobName:=piControlMR1}
CARD_DIR=${CARD_DIR:=${CURRENT_DIR}}
}}}
 1. Start `./SE_checker.job` in interactive mode. All needed `create_se.job` jobs will be started. For example:
{{{
#!sh
./SE_Checker.job

====================================================
Where do we run ? cesium21
Linux cesium21 2.6.18-194.11.4.el5 #1 SMP Tue Sep 21 05:04:09 EDT 2010 x86_64
====================================================

sys source cesium Intel X-64 lib.

--Debug1--> DefineVariableFromOption : config_UserChoices
--------------Debug3--> config_UserChoices_JobName=piControlMR1
--------------Debug3--> config_UserChoices_CalendarType=noleap
--------------Debug3--> config_UserChoices_DateBegin=1800-01-01
--------------Debug3--> config_UserChoices_DateEnd=2099-12-31
--Debug1--> DateBegin/End for SE : 1800_1809
--Debug1--> ATM
--Debug1--> SRF
--Debug1--> SBG
--Debug1--> OCE
--Debug1--> ICE
--Debug1--> MBG
--Debug1--> CPL
...
--Debug1--> DateBegin/End for SE : 2030_2039
--Debug1--> ATM
--Debug1--> 2 file(s) missing for ATM :
--Debug1--> piControlMR1_SE_2030_2039_1M_histmth.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_histmthNMC.nc
--Debug1--> SRF
--Debug1--> 1 file(s) missing for SRF :
--Debug1--> piControlMR1_SE_2030_2039_1M_sechiba_history.nc
--Debug1--> SBG
--Debug1--> 2 file(s) missing for SBG :
--Debug1--> piControlMR1_SE_2030_2039_1M_stomate_history.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_stomate_ipcc_history.nc
--Debug1--> OCE
--Debug1--> 4 file(s) missing for OCE :
--Debug1--> piControlMR1_SE_2030_2039_1M_grid_T.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_grid_U.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_grid_V.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_grid_W.nc
--Debug1--> ICE
--Debug1--> 1 file(s) missing for ICE :
--Debug1--> piControlMR1_SE_2030_2039_1M_icemod.nc
--Debug1--> MBG
--Debug1--> 3 file(s) missing for MBG :
--Debug1--> piControlMR1_SE_2030_2039_1M_ptrc_T.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_diad_T.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_dbio_T.nc
--Debug1--> CPL
--Debug1--> 2 file(s) missing for CPL :
--Debug1--> piControlMR1_SE_2030_2039_1M_cpl_atm.nc
--Debug1--> piControlMR1_SE_2030_2039_1M_cpl_oce.nc
--------Debug2--> Submit create_se for period 2030-2039
IGCM_sys_MkdirWork : .../POST_CMIP5/piControl/piControlMR1/OutScript
IGCM_sys_QsubPost : create_se
Submitted Batch Session 179472
...
}}}
 1. Wait for the `create_se` jobs to be completed.
 1. Copy `create_multi_se.job`.
 1. Check/change the variables:
{{{
#!sh
libIGCM=${libIGCM:=.../POST_CMIP5/libIGCM_v1_10/modipsl/libIGCM}
}}}
 1. If needed, adjust the number of decades in `config.card`: default=`50Y` (i.e. 50 years). Add the following line to the `POST` section, i.e. at the end after the keyword `[POST]`:
{{{
#!sh
MultiSeasonalFrequency=100Y
}}}
 1. Run the `create_multi_se.job` job: `ccc_msub create_multi_se.job`
 1. The years used for the calculations are those between `DateEnd` (set in `config.card` in the local directory) and `DateEnd - MultiSeasonalFrequency`. The mean values are stored in the "Analyse" directories of each model component, in the subdirectory `SE_100Y` (e.g. `ATM/Analyse/SE_100Y`).
## There is over quota on thredds (TGCC), what can I do? ##

The thredds space is regularly over quota in number of inodes. Reminder: normally no file is stored only in this space; there are only hard links to files stored on the workdir of your projects. These hard links are not counted in the volume quota.

Here is the command to locate files that do not follow this rule:
{{{
cd $CCCWORKDIR/../../thredds/YOURLOGIN
find . -links 1
}}}

Command to remove these files, after having carefully checked the list:
{{{
cd $CCCWORKDIR/../../thredds/YOURLOGIN
find . -links 1 -exec rm {} \;
}}}

# FAQ : Unix tricks #

## How to delete a group of files using the find command? ##

[[NoteBox(note, We recommend also reading the find manual.,600px)]]

Examples:
 * command recursively deleting all files in a directory containing DEMO in their name:
{{{
find . -name '*DEMO*' -exec rm -f {} \;
}}}
 * command recursively deleting all files in a directory containing DEMO, TEST or ENCORE in their name:
{{{
find . \( -name "*DEMO*" -o -name "*TEST*" -o -name "*ENCORE*" \) -print -exec rm -f {} \;
}}}
 * command recursively computing the number of files in the current directory:
{{{
find . -type f | wc -l
}}}

## Allowing read-access to everybody ##

The `chmod -R ugo+rX *` command gives everybody read access to all files and subdirectories in the current directory.

# FAQ : Miscellaneous #

## How do I copy a model installation directory instead of downloading from the forge (or move a directory)? ##

Copy or move the target installation:
{{{
cp -r OldInstall NewInstall
or
mv OldInstall NewInstall
}}}
Regenerate the makefiles to account for the new path:
{{{
cd NewInstall/modipsl/util
./ins_make
}}}
Recompile if you have modified the source code:
{{{
cd NewInstall/modipsl/config/[YourConfig]
gmake clean
gmake [target]
}}}
Update your libIGCM installation:[[BR]]
 * install the latest version of libIGCM by following these [wiki:Doc/FAQ#HowdoIuseadifferentversionoflibIGCM explanations]
 * or remove and regenerate the .job files in your libIGCM directory as follows:
{{{
rm NewInstall/modipsl/libIGCM/*.job
}}}
Prepare a new experiment as usual and launch `ins_job` to generate the `.job` files in your `libIGCM` directory and your experiment directory.[[BR]]
Depending on your `libIGCM` version you will have to launch `NewInstall/modipsl/libIGCM/ins_job`, or `NewInstall/modipsl/util/ins_job` for older versions. Check that the `.job` files are properly generated in `NewInstall/modipsl/libIGCM/` and you are set.

## I need to compile the IPSL model in debug mode. How to do that? ##

You have to modify the {{{Makefile}}} to add the debug option for each component:

 * (cd ../../modeles/ORCHIDEE/ ; ./makeorchidee_fcm '''-debug''' -parallel mpi_omp -arch $(FCM_ARCH) -j 8 -xios2)
 * (cd ../../modeles/LMDZ ; ./makelmdz_fcm -d $(RESOL_LMDZ) -mem '''-debug''' ...
 * cd ../../modeles/XIOS; ./make_xios --arch $(FCM_ARCH) '''--debug'''

and in SOURCES/NEMO/arch-X64_IRENE.fcm add traceback:

 * %FCFLAGS -i4 -r8 -O3 '''-traceback''' -fp-model precise

Then recompile:
{{{
gmake clean
gmake
}}}