{{{
#!html
Monitor, debug and relaunch

}}}
----
[[NoteBox(note,This section describes the monitoring tools\, the tools to identify and solve problems\, and the tools to monitor and restart the post processing jobs if needed., 600px)]]

[[TOC(heading=Table of contents,depth=1,inline)]]
[[PageOutline(1,Table of contents,pullout)]]
----

# Check status of your simulations #

## How to verify the status of your simulation ##

{{{
#!comment
Plot generated with graphviz, script available here : https://forge.ipsl.jussieu.fr/igcmg_doc/wiki/DocYgraphvizLibigcmprod
[[Image(libigcm_prod.jpg, 50%)]]        ===> normal image
[[Image(libigcm_prod_rotate.jpg, 50%)]] ===> to print a pdf
}}}

[[Image(libigcm_prod.jpg, 50%)]]

[[NoteBox(note, We strongly encourage you to check your simulation frequently during run time., 600px)]]

### System tools ###

The batch manager at each computing center provides tools to check the status of your jobs, for example to know whether a job is queued, running or suspended.

#### TGCC ####

You can use `ccc_mstat` on Curie. To see the available options and useful scripts, see [wiki:DocBenvBtgccAcurie#Jobmanagercommands Working on Curie].

#### IDRIS ####

You can use `llq` on Ada. To see the available options and useful scripts, see [wiki:DocBenvAidrisAada#Commandstomanagejobsonada Working on Ada].

### run.card ###

When the simulation has started, the file run.card is created by libIGCM from the template run.card.init. run.card contains information about the current run period and about the periods already completed. This file is updated by libIGCM at each run period. You will also find there the time spent computing each period. The status of the job is set to !OnQueue, Running, Completed or Fatal.

### !RunChecker ###

This tool, provided with libIGCM, allows you to find out the status of your simulations.

#### Example of a !RunChecker output for a successful simulation ####

[[Image(RunChecker-OK.jpg, 50%)]]

#### General description ####

Below is an excerpt of the !RunChecker output for a long simulation which failed.

[[Image(RunChecker_extrait.jpg, 50%)]]

 1. In this block you will find information about the simulation directories.
 1. Here is information about the main job; it comes from `config.card` and `run.card` (you can also inspect these fields by hand, see the sketch after this list) :
   * The first line returns the job name and the date of the last time data was saved to disk, as recorded in `run.card`.
   * !DateBegin - !DateEnd : start and end dates of the simulation as defined in `config.card`.
   * !PeriodState : variable coming from `run.card` giving the run's status :
     * !OnQueue, Waiting : the run is queued ;
     * Running : the job is running ;
     * Completed : the run was completed successfully ;
     * Fatal : the run failed.
   * Current Period : this variable from `run.card` shows which integration step (most often which month) is being computed.
   * !CumulPeriod : variable from `run.card` giving the number of the period being computed.
   * Pending Rebuilds, Nb | From | To : number of files waiting to be rebuilt, and the dates of the oldest and the most recent of these files.
 1. The third block contains the status of the latest post processing jobs : the Rebuilds, the Pack, the Monitoring and the Atlas. Only the computed periods are returned for the Monitoring and the Atlas. For the other post processing jobs, the computed periods and the number of successfully transferred files are returned.
 1. Lastly, the current date.
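If you just want a quick look at these fields without launching !RunChecker, you can read them directly from the `run.card` of your experiment. A minimal sketch; the experiment path is illustrative and the exact field names may vary slightly between libIGCM versions:

{{{
#!sh
# Quick status check straight from run.card (path is an example, adapt it to your experiment)
grep -E "PeriodState|CumulPeriod" $WORKDIR/MYCONFIG/MyJobName/run.card
}}}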
#### Usage and options ####

The script can be started from any machine :

{{{
#!sh
path/to/libIGCM/RunChecker.job [-u user] [-q] [-j n] [-s] [-p path] job_name
}}}

 * `-u user` : starts the Checker for the simulation of another user
 * `-q` : quiet mode
 * `-j n` : displays n post processing jobs (10 by default)
 * `-s` : looks for a simulation in $WORKDIR and adds it to the catalog of simulations before displaying the information
 * `-p path` : absolute path of the directory containing the `config.card`, given instead of the job_name

#### Use ####

This listing allows you to detect a few known errors :
 * The job is marked Running but the date of the last write to disk recorded in `run.card` is much older than the current date : the job may be hanging.
 * A date is shown in red in the post processing listing : errors occurred during file transfers in the post processing job.
 * For a given post processing job, the number of successfully transferred files varies according to the date : this might mean that errors occurred. [[NoteBox(warn, In some cases (such as for historical simulations where the COSP outputs are activated starting from 1979 ...) this behavior is normal!, 600px)]]
 * A `PeriodState` set to Fatal indicates that an error occurred either in the main job or in one of the post processing jobs.
 * If the number of rebuilds waiting is above...

#### Good things to know ####

During the first integration of a simulation using IPSLCM5, an additional rebuild file is transferred. This extra file is the NEMO "mesh_mask.nc" file. It is created and transferred only during the first step. It is then used for each "rebuild" of the NEMO output files to mask the variables.

## End of simulation ##

Once your simulation is finished you will receive an email saying that the simulation was "Completed" or that it "Failed", and two files will be created in the working directory of your experiment:
 * [wiki:DocFsimu#run.cardattheendofasimulation run.card]
 * `Script_Output_JobName`

A `Debug/` directory is created if the simulation failed. This directory contains diagnostic text files for each model component.

If the simulation was successfully completed, the output files are stored in the following directory:
 * `$CCCSTORE/IGCM_OUT/TagName/[SpaceName]/[ExperimentName]/JobName` at TGCC
 * `ergon:IGCM_OUT/TagName/[SpaceName]/[ExperimentName]/JobName` at IDRIS

with the following subdirectories:
 * `RESTART` = tar of the restart files of all model components, grouped at the pack frequency
 * `DEBUG` = tar of the debug text files of all model components
 * `ATM`
 * `CPL`
 * `ICE`
 * `OCE`
 * `SRF`
 * `SBG`
 * `Out` = run log files
 * `Exe` = executables used for the run
 * `ATM/Output`, `CPL/Output`, etc... = NetCDF output of the model components

[[NoteBox(note, If !SpaceName was set to TEST the output files will remain in the work directories,600px)]]
 * `$SCRATCHDIR/IGCM_OUT/TagName/[SpaceName]/[ExperimentName]/JobName` at TGCC
 * `$WORKDIR/IGCM_OUT/TagName/[SpaceName]/[ExperimentName]/JobName` at IDRIS (Ada)

## Diagnostic tools : Checker ##

### !TimeSeries_Checker ###

The `TimeSeries_Checker.job` can be used in diagnostic mode to check whether the time series have been created. You must edit `TimeSeries_Checker.job` before starting it in interactive mode (see [#TimeSeries_checker.job-Recommendedmethod TimeSeries_Checker.job]) and answer `n` to the following question (a short command-line sketch is given at the end of this section):
{{{
#!sh
"Run for real (y/n)"
}}}

### SE_Checker ###

See [#SE_Checker.jobrecommendedmethod SE_Checker.job].
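Returning to the !TimeSeries_Checker described above, a diagnostic-mode session could look like the following. This is only a sketch: the libIGCM path is illustrative and the checker is assumed to have been edited beforehand as explained in the recommended method.

{{{
#!sh
# Run the checker interactively (path is an example, adapt it to your installation)
path/to/libIGCM/TimeSeries_Checker.job
# Answer "n" to the question "Run for real (y/n)" so that the checker only
# lists the missing time series without submitting any job.
}}}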
----

# Analyzing the Job output : Script_Output #

Reminder --> This file contains three parts:
 * copying the input files
 * running the model
 * post processing

These three parts are delimited as follows:
{{{
#######################################
#       ANOTHER GREAT SIMULATION      #
#######################################

1st part (copying the input files)

#######################################
#      DIR BEFORE RUN EXECUTION       #
#######################################

2nd part (running the model)

#######################################
#       DIR AFTER RUN EXECUTION       #
#######################################

3rd part (post processing)
}}}

A few common bugs are listed below:
 * If the file ends before the second part, possible reasons are:
   * you didn't delete the existing run.card file although you wanted to overwrite the simulation;
   * you didn't specify !OnQueue in the run.card file although you wanted to continue the simulation;
   * one of the input files was missing (e.g. it doesn't exist, the machine has a problem, ...);
   * the frequencies (!RebuildFrequency, !PackFrequency ...) do not match !PeriodLength.
 * If the file ends in the middle of the second part, it's most likely because you didn't request enough memory or CPU time.
 * If the file ends in the third part, it could be caused by:
   * an error during the execution;
   * a problem while copying the output;
   * a problem when starting the post processing jobs.

If the following message is displayed in the second part of the file, a problem occurred during the execution:
{{{
========================================================================
EXECUTION of : mpirun -f ./run_file > out_run_file 2>&1
Return code of executable : 1
IGCM_debug_Exit : EXECUTABLE
!!!!!!!!!!!!!!!!!!!!!!!!!!
!! IGCM_debug_CallStack !!
!------------------------!

!------------------------!
IGCM_sys_Cp : out_run_file xxxxxxxxxxxx_out_run_file_error
========================================================================
}}}

If the following message is displayed (no return code is reported) :
{{{
========================================================================
EXECUTION of : mpirun -f ./run_file > out_run_file 2>&1
========================================================================
}}}
and there is a message indicating that the "restartphy.nc" file doesn't exist, it means that the model ran but stopped before the end date of your simulation. If this happens, and if your model writes an output log other than the simulation output log, you must refer to that log. For example, the output file of the ocean model is stored on the file server under this name:
{{{
IGCM_sys_Put_Out : ocean.output xxxxxxxx/OCE/Debug/xxxxxxxx_ocean.output
}}}
For LMDZ the output log is the same as the simulation output log and it is not copied to the storage space. If your simulation was run on $SCRATCHDIR (TGCC) you can retrieve it there. Otherwise, you must restart your simulation using $WORKDIR (IDRIS) as the working directory, keeping all needed files. You must also change the RUN_DIR_PATH variable: see [#run_dir_path here] before restarting it.

[[NoteBox(tip,In general\, if your simulation stops you can look for the keyword "IGCM_debug_CallStack" in this file. This keyword will come after a line explaining the error you are experiencing., 600px)]]

{{{
Example :

--Debug1--> IGCM_comp_Update
IGCM_debug_Exit : IGCM_comp_Update missing executable create_etat0_limit.e
!!!!!!!!!!!!!!!!!!!!!!!!!!
!! IGCM_debug_CallStack !!
!------------------------!
}}}
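In practice, a quick way to locate the error message in a long Script_Output file is to search for this keyword and print the lines just before it, since the explanation of the error comes before the call stack. A minimal sketch; the file name is illustrative and should be adapted to your JobName:

{{{
#!sh
# Print the 15 lines preceding the call stack marker in the job output
grep -n -B 15 "IGCM_debug_CallStack" Script_Output_MyJobName
}}}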
# Debug #

## Where does the problem come from ? ##

Your problem could come from a programming error. To find it, you can use the text output of the model components located in the Debug subdirectories. Your problem could also be caused by the computing environment; this is not always easy to identify. It is therefore important to perform benchmark simulations to learn about the usual behavior of a successfully completed simulation.

### The Debug directory ###

If the simulation failed due to an abnormal exit from the executable, a Debug/ directory is created in the working directory. It contains the output text files of all model components of your configuration. You should read them to look for errors. For example :
 * xxx_out_gcm.e_error --> LMDZ text output
 * xxx_out_orchidee    --> ORCHIDEE text output
 * xxx_ocean.output    --> NEMO text output
 * xxx_inca.out        --> INCA text output
 * xxx_run.def         --> LMDZ parameter file
 * xxx_gcm.def         --> LMDZ parameter file
 * xxx_traceur.def     --> LMDZ parameter file
 * xxx_physiq.def      --> LMDZ parameter file
 * xxx_orchidee.def    --> ORCHIDEE parameter file

### Programming error ###

Please take the time to read and analyze the modifications you have made to the code. Nobody codes perfectly.

### Unknown error ###

In this case, it is possible to relaunch the main job so that it runs the last period again. If the simulation stopped before reaching its end date because of an error, the latest period can be relaunched after the necessary modifications. The job will then read run.card to know where to restart, and the simulation will continue until the end (if the problem was solved).

To relaunch manually, you first need to make sure that no files have been stored for the period to be rerun. libIGCM provides two scripts that help with this cleanup (see the example after this list) :
 * the error occurred before the packs were created:
{{{
#!sh
path/to/libIGCM/clean_month.job
}}}
 * the error occurred after the packs were created:
{{{
#!sh
path/to/libIGCM/clean_year.job [yyyy]
# yyyy = year up to which everything is deleted (this year included).
#        By default, it is the current year found in run.card.
}}}
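As an illustration, a manual relaunch after a failure within a pack period could look like the following. This is only a sketch: the experiment path and job name are examples, clean_month.job is assumed to be run from the experiment directory, and the batch command depends on the center (`ccc_msub` at TGCC, `llsubmit` at IDRIS/Ada).

{{{
#!sh
# Sketch of a manual relaunch (names and paths are examples)
cd $WORKDIR/MYCONFIG/MyJobName     # experiment directory containing config.card, run.card and the main job
path/to/libIGCM/clean_month.job    # remove the files already produced for the failed period (before packs)
ccc_msub Job_MyJobName             # resubmit the main job at TGCC (use "llsubmit Job_MyJobName" at IDRIS)
}}}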