
Monitoring, debugging and relaunching


This section describes the monitoring tools, the tools for identifying and solving problems, and the tools for checking and restarting the post processing jobs if needed.


1. Check status of your simulations

1.1. How to verify the status of your simulation

We strongly encourage you to check your simulation frequently during run time.

1.1.1. System tools

The batch manager at each computing center provides tools to check the status of your jobs, for example to know whether a job is queued, running or suspended.

1.1.1.1. TGCC

You can use ccc_mstat on Curie. To see the available options and useful scripts, see Working on Curie.

1.1.1.2. IDRIS

You can use llq on Ada. To see the available options and useful scripts, see Working on Ada.
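
As a quick illustration, the commands below show the kind of status check meant here (the login name is a placeholder; see the pages above for the full list of options):

ccc_mstat                 # TGCC/Curie: status of your batch jobs
llq -u my_login           # IDRIS/Ada: jobs belonging to the user my_login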

1.1.2. run.card

When the simulation starts, libIGCM creates the file run.card from the template run.card.init. run.card contains information about the current run period and about the previous, already completed periods; it is updated by libIGCM at each run period. You will also find there the time consumed by each period. The status of the job is set to OnQueue, Running, Completed or Fatal.
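
As an illustration, the beginning of run.card looks roughly like the sketch below (key names are indicative, dates and counters are placeholders):

[Configuration]
PeriodDateBegin= 20100301
PeriodDateEnd=   20100331
CumulPeriod=     3
PeriodState=     Running

PeriodState is the status mentioned above; the time consumed by each completed period is recorded further down in the same file.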

1.1.3. RunChecker

This tool, provided with libIGCM, allows you to check the status of your simulations.

1.1.3.1. Example of a RunChecker output for a successful simulation

1.1.3.2. General description

Below is a diagram of a long simulation that failed.

  1. In this block, you will find information about the simulation directories.
  2. Here is information about the main job; this information comes from config.card and run.card:
  3. The third block contains the status of the latest post processing jobs: the rebuild, the pack, the monitoring and the atlas. Only the computed periods are returned for the monitoring and the atlas; for the other post processing jobs, the computed periods and the number of successfully transferred files are returned.
  4. Lastly, the current date.

1.1.3.3. Usage and options

The script can be started from any machine:

path/to/libIGCM/RunChecker.job [-u user] [-q] [-j n] [-s] [-p path] job_name
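
For instance, to check the simulation MyJobName for yourself or for another user (the login and job name are placeholders, the options are those listed above):

path/to/libIGCM/RunChecker.job MyJobName
path/to/libIGCM/RunChecker.job -u other_login MyJobName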

1.1.3.4. Use

This listing allows you to detect a few known errors :

In some cases (such as for historical simulations where the COSP outputs are activated starting from 1979 ...) this behavior is normal!

1.1.3.5. Good things to know

During the first integration of a simulation using IPSLCM5, an additional rebuild file is transferred. This extra file is the NEMO "mesh_mask.nc" file. It is created and transferred only during the first step. It is then used for each "rebuild" of the NEMO output files to mask the variables.

1.2. End of simulation

Once your simulation is finished, you will receive an email saying that the simulation was "Completed" or that it "Failed", and two files will be created in the working directory of your experiment:

A Debug/ directory is created if the simulation failed. This directory contains diagnostic text files for each model component.

If the simulation was successfully completed, the output files will be stored in the following directory:

with the following subdirectories:

If SpaceName was set to TEST, the output files will remain in the work directories.

1.3. Diagnostic tools : Checker

1.3.1. TimeSeries_Checker

The TimeSeries_Checker.job can be used in diagnostic mode to check whether the time series have been created. Modify TimeSeries_Checker.job before starting it in interactive mode (see TimeSeries_Checker.job) and answer n to the following question:

"Run for real (y/n)"

1.3.2. SE_Checker

See SE_Checker.job.


2. Analyzing the Job output : Script_Output

Reminder --> This file contains three parts:

These three parts are defined as follows:

#######################################
#       ANOTHER GREAT SIMULATION      #
#######################################

 1st part (copying the input files)

#######################################
#      DIR BEFORE RUN EXECUTION       #
#######################################

 2nd part (running the model)

#######################################
#       DIR AFTER RUN EXECUTION       #
#######################################

 3rd part (post processing)

A few common bugs are listed below:

If the following message is displayed in the second part of the file, it's because there was a problem during the execution:

========================================================================
EXECUTION of : mpirun -f ./run_file > out_run_file 2>&1
Return code of executable : 1
IGCM_debug_Exit :  EXECUTABLE

!!!!!!!!!!!!!!!!!!!!!!!!!!
!! IGCM_debug_CallStack !!
!------------------------!

!------------------------!
IGCM_sys_Cp : out_run_file xxxxxxxxxxxx_out_run_file_error
========================================================================

If the following message is displayed :

========================================================================
EXECUTION of : mpirun -f ./run_file > out_run_file 2>&1
========================================================================

If there is a message indicating that the "restartphy.nc" file doesn't exist, it means that the model stopped before reaching the end date of your simulation. If this happens, and if your model creates an output log other than the simulation output log, you must refer to that log. For example, the output file of the ocean model is stored on the file server under this name:

IGCM_sys_Put_Out : ocean.output xxxxxxxx/OCE/Debug/xxxxxxxx_ocean.output

For LMDZ, the output log is the same as the simulation output log and it has not been copied to the storage space. If your simulation was performed on $SCRATCHDIR (TGCC), you can retrieve it there. Otherwise, you must restart your simulation using $WORKDIR (IDRIS) as the working directory, keeping all needed files. You must also change the RUN_DIR_PATH variable (see the corresponding documentation) before restarting.
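
As a hedged sketch, the run directory is controlled by a line of this kind in the main job; the exact default and path depend on your machine and libIGCM version (the path below is illustrative):

RUN_DIR_PATH=$WORKDIR/RUN_DIR/MyJobName   # keep the run directory on a file system that survives the job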

In general, if your simulation stops you can look for the keyword "IGCM_debug_CallStack" in this file. This keyword will come after a line explaining the error you are experiencing.

Example : 

--Debug1--> IGCM_comp_Update

IGCM_debug_Exit :  IGCM_comp_Update missing executable create_etat0_limit.e

!!!!!!!!!!!!!!!!!!!!!!!!!!
!! IGCM_debug_CallStack !!
!------------------------!

3. Debug

3.1. Where does the problem come from?

Your problem could come from a programming error. To find it, use the text outputs of the model components located in the Debug subdirectories. Your problem could also be caused by the computing environment, which is not always easy to identify. It is therefore important to perform benchmark simulations to learn the usual behavior of a successfully completed simulation.

3.1.1. The Debug directory

If the simulation failed due to an abnormal exit from the executable, a Debug/ directory is created in the working directory. It contains the output text files of all model components of your configuration. You should read them to look for errors. For example :

3.1.2. Programming error

Please take the time to read and analyze the modifications you have made to the code. Nobody codes perfectly.

3.1.3. Unknown error

In this case, it is possible to relaunch the main job to rerun the last period.

If the simulation stopped before reaching the end due to an error, it is possible to relaunch the latest period after making any necessary changes. The simulation will then read run.card to know where to restart, and it will continue until the end (if the problem was solved).
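
Once the problem is fixed and the clean-up described just below has been done, relaunching usually amounts to resubmitting the main job (the job name is illustrative):

ccc_msub Job_MyJobName      # TGCC (Curie)
llsubmit Job_MyJobName      # IDRIS (Ada)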

To relaunch manually, you first need to make sure that no files have been stored for the same period. libIGCM provides two scripts that help you with this clean-up :

3.2. Start or restart post processing jobs

Please see the next section.

4. Start or restart post processing jobs

You can run post processing jobs once the main job is finished (for example if the post processing job was deactivated in config.card or if you encountered a bug).

On TGCC, the machine used for post processing is the same as the computing machine. At IDRIS, the machine to be used for post processing is adapp (since July 2013), with the same file systems available: $WORKDIR, $HOME, etc. You can :

  1. work directly in the experiment directory (which looks like PATH_MODIPSL/config/IPSLCM5A/ST11/) ;
  2. work in a dedicated directory located in the experiment directory (e.g. PATH_MODIPSL/config/IPSLCM5A/ST11/POST_REDO) ;
  3. work in a dedicated directory which is independent of the experiment directory (e.g. $WORKDIR/POST_REDO).

For the last two options you must first:

Before submitting a post processing job at TGCC (rebuild_fromWorkdir.job, pack_debug.job, pack_output.job, pack_restart.job, monitoring.job, create_ts.job, create_se.job), you must make sure that the submission group is present in the job header (#MSUB -A genxxxx). If it isn't, add it.
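
A minimal sketch of this check for pack_output.job, assuming your project group is gen1234 (both names are illustrative):

grep '#MSUB -A' pack_output.job    # check that the submission group is present in the header
#MSUB -A gen1234                   # expected output; add this line to the header if it is missing
ccc_msub pack_output.job           # then submit the job at TGCC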

4.1. Restart REBUILD

The rebuild job submits pack_output.job automatically.

4.2. Restart Pack_output

To restart the pack_output (e.g. in case it was not submitted by the rebuild job):

create_ts.job and create_se.job are submitted automatically.

4.3. Restart Pack_restart or Pack_debug

4.4. Restart the Time series

If you have not done it yet, retrieve config.card, COMP, POST and, if needed, run.card (to post-process only part of the simulation) into the POST_REDO/ directory.
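
A hedged sketch of this preparation, reusing the illustrative paths given earlier:

mkdir -p $WORKDIR/POST_REDO
cd PATH_MODIPSL/config/IPSLCM5A/ST11              # experiment directory (illustrative)
cp config.card run.card $WORKDIR/POST_REDO/       # run.card only to post-process part of the simulation
cp -r COMP POST $WORKDIR/POST_REDO/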

There are two ways:

4.4.1. TimeSeries_Checker.job (recommended method)

4.4.2. Restart create_ts.job

If your time series (TS) are both 2D and 3D, you must run the create_ts job twice, changing the TsTask variable accordingly.
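
A hedged sketch, assuming TsTask takes the dimension as its value in create_ts.job (check the comments in the job itself; the values below are indicative):

TsTask=2D    # first submission of create_ts.job
TsTask=3D    # second submission, after editing the variable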

4.5. Restarting the seasonal mean calculations

If you have not done so yet, transfer config.card, COMP, POST and run.card (to post-process only part of the simulation) into the POST_REDO/ directory.

There are two methods:

4.5.1. SE_Checker.job (recommended method)

4.5.2. Restart create_se.job