Very Important News

See also the platform-users mailing list archive (for registered users only).



2022/10/24 Management of the storedir on TGCC computers





The TGCC has updated the two commands ccc_quota and ccc_tree so that
they can help to monitor the use of the store and to respect the
criteria defined by the TGCC.

As a reminder, the TGCC has a monitoring system that triggers an alert when, on
the storedir of an account, there are more than 500 files AND { 80% of the
files are smaller than 500 MB OR the average file size is smaller than 1 GB }.
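As a rough illustration of that rule (not an official TGCC tool; the directory path is a placeholder and GNU find is assumed), the same statistics can be approximated with find and awk:

```shell
# Approximate the storedir alert check for one directory tree.
# Assumptions: GNU find (for -printf); thresholds taken from the
# criteria above (500 files, 500 MB, 1 GB).
dir=${1:-.}
find "$dir" -type f -printf '%s\n' | awk '
  { n++; total += $1; if ($1 < 500*1024*1024) small++ }
  END {
    if (n == 0) { print "no files"; exit }
    avg = total / n
    printf "files=%d  smaller-than-500MB=%.0f%%  average=%.1f MB\n",
           n, 100 * small / n, avg / 1048576
    # Alert: more than 500 files AND (80% small OR average < 1 GB)
    if (n > 500 && (100 * small / n >= 80 || avg < 1024*1024*1024))
      print "this tree would trigger a TGCC alert"
  }'
```

This is only a sketch for understanding the criteria; ccc_quota and ccc_tree remain the reference tools.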

The ccc_quota command lets you know, among other things, the percentage of
your files smaller than 500 MB (very small files) and the percentage smaller
than 1 GB (small files), and gives you the average size of your files.

The ccc_tree command reports, directory by directory, the proportion of very
small files and of small files, the average file size, and also the number of
inodes that are not files. All of this is summarized as a score out of 20: if
the score is above 10/20 the directory is shown in green, otherwise in red.
Ideally you should have as many green directories as possible.

As a reminder, beyond the alert-triggering conditions, it is strongly
recommended to store only large files on the storedir and to keep small files
on the workdir.
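A common way to follow this recommendation is to bundle many small files into one archive on the workdir and move only the archive to the storedir. A minimal sketch, assuming the usual TGCC environment variables $CCCWORKDIR and $CCCSTOREDIR (with temporary-directory fallbacks so the sketch runs anywhere) and a placeholder directory name run_output/:

```shell
# Bundle small files before moving them to the storedir.
# $CCCWORKDIR and $CCCSTOREDIR are the usual TGCC environment variables;
# here they fall back to temporary directories so the sketch is runnable
# anywhere. run_output/ is a placeholder for your own output directory.
CCCWORKDIR=${CCCWORKDIR:-$(mktemp -d)}
CCCSTOREDIR=${CCCSTOREDIR:-$(mktemp -d)}
mkdir -p "$CCCWORKDIR/run_output"          # stands in for existing model output
cd "$CCCWORKDIR"
tar -czf run_output.tar.gz run_output/     # one large file instead of many small ones
mv run_output.tar.gz "$CCCSTOREDIR/"

# Later, to unpack it back onto the workdir:
# tar -xzf "$CCCSTOREDIR/run_output.tar.gz" -C "$CCCWORKDIR"
```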


2022/09/23 ORCHIDEE at Spirit and Spiritx on IPSL ESPRI MESO center

Dear all,

ORCHIDEE offline configurations using modipsl and libIGCM have now been updated to compile and run on the new Spirit and Spiritx clusters at the IPSL ESPRI MESO center. No specific environment needs to be installed before compiling.

Note that:
- The file system is the same between ciclad and spirit, and between climserv and spiritx. However, you cannot run an executable compiled on ciclad/climserv on the new spirit and spiritx: you need to make a new installation and a new compilation.
- The following configurations have been updated: ORCHIDEE trunk, ORCHIDEE_4_1, ORCHIDEE_3 and ORCHIDEE_2_2.
- For these configurations, there is no need to install or load specific modules before compilation. Everything is included and automatically loaded during compilation and at run time.
- If you use another version or a personal version, you need to update the file ORCHIDEE_OL/ and add ORCHIDEE_OL/ARCH/arch-ifort-MESOIPSL.env, ORCHIDEE/arch/arch-ifort_MESOIPSL.path and arch-ifort_MESOIPSL.fcm.
- Using libIGCM at Spirit(x), creating and launching jobs as well as post-processing of TS and SE have been tested. The pack tools and the monitoring have not been set up at Spirit(x).
- Compilation with the old method using Makefiles based on AA_make has not been adapted to spirit/spiritx.
- Launching is done using Slurm at spirit and spiritx. For those of you who work on Jean-Zay/IDRIS, this is the same system. Launch using sbatch Job_, use squeue to see the queue status and scancel jobid to kill a job in the queue.
- If you encounter problems, use
- LMDZOR configurations will hopefully come soon (let me know if you are waiting for them)

Kind regards,

2022/09/09 Shutdown of the Hermes supervisor





We would like to inform you that the Hermes supervisor service will be
permanently stopped within a few days. As a reminder, the Hermes supervisor
was put into service in 2016 in order to follow the simulations using the IPSL
computing and post-processing chain on the various computing centers. It was
used extensively during the production phase of the CMIP6 exercise and has
not been functional since early 2021.

The PlateForme group

2022/01/03 configuration IPSLCM5A2.2



The IPSLCM5A2.2 configuration is now available when downloading and installing the IPSLCM model with modipsl (./model)

The changes and bug fixes are mostly directed at paleo simulations.

For people using this configuration for preindustrial or present-day simulations, the change from IPSLCM5A2.1 to IPSLCM5A2.2 should be transparent.
It is now possible to use MOSAIX interpolation weights instead of MOSAIC.
If using MOSAIX you should set cpl_old_calving = n in run.def.
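For instance, the run.def fragment in question would read as follows (a fragment only; `#` comment syntax assumed, the rest of run.def is unchanged):

```
# When using MOSAIX interpolation weights instead of MOSAIC:
cpl_old_calving = n
```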

For paleo simulations:
- there are now paleo experiments available for IPSLCM and LMDZOR when copying config.card to create your experiment directory.
- there are also namelist files in PARAM named filesomething_paleo that are to be used for paleo simulations (the driver files have been modified and the .card files may need editing).

A bug in ORCHIDEE that forced the introduction of an ice point at the South Pole has been corrected.

The IPSLCM5A documentation will be updated to account for these modifications.

If you have any problem when trying to use IPSLCM5A2.2, please let me know so that I can correct what is wrong if needed.

Best !
Sébastien Nguyen (Engineer in charge of IPSLCM5A configuration within IPSL)


2021/12/15 [INFORMATION] New THREDDS web-service

We deployed a new THREDDS server into production. It replaces the previous one to provide access to the WORK THREDDS spaces from TGCC,
IDRIS and CICLAD. This implies new URLs to access data on your THREDDS spaces.
For the next 3 months, we will not modify the configuration of the old server, to avoid any disruption of service.

We strongly recommend that you do NOT wait to adapt your scripts and tools.

2021/07/09 Cleanup of old configurations in mod.def

We removed from the file mod.def all the old configurations that are no longer supported.
The removed configurations are:
- IPSLCM6.0.0 => IPSLCM6.0.15 
- IPSLCM6.1.0 => IPSLCM6.1.9 
- LMDZOR_v6.1.2 => LMDZOR_v6.1.9 
- LMDZ5A2.1_ISO 

If you want to check out a mod.def with all the information on old versions, you can use the following three commands:
svn co -r 5879 --depth empty
cd util
svn up mod.def

or you can extract the whole modipsl architecture with the version of mod.def containing the old revisions:
svn co -r 5879 modipsl 


2020/05/20 IPSL-CMC workflow adapted to Irene-amd/TGCC

The IPSL-CMC running environment for computation and post-processing has now been adapted for use on Irene-amd, the new supercomputer at TGCC. The current list of maintained configurations can now be used there, as on Jean-Zay.

The main information related to the use of Irene-amd can be found in the documentation. You first have to set up the working environment as indicated in the documentation. Note that the front-end machines are specific to irene-skl and irene-amd and that the connection methods differ. Be careful to keep separate the model sources compiled on irene-skl and on irene-amd, because the executables are not compatible.

A technical quality control has been performed on the IPSLCM6.1.11-LR and IPSLCM6.2_work configurations. A scientific validation has been done on the coupled piControl experiment over 50 years and on a scenario experiment with the IPSLCM6.1.11 configuration. See information about these tests here:
We strongly encourage you to also validate, personally, the configuration you use.

The way to use this new supercomputer is very close to the Irene-skl machine concerning the filesystems, the use of modules and compilers, as well as the method to submit computing jobs. Nevertheless, we draw your attention to the architecture of the AMD Rome processor (128 cores per node, 2 GB per core), which is different from Irene-skl: comparing core to core, computing performance is better on Irene-skl. See some performance tests here: It is possible to improve computing performance by increasing the number of cores used per process, as well as by dedicating nodes to perform the I/O. The user guide for these functionalities is available here: Beware that using these functionalities may increase the core-hour consumption of your run, which is why their use must be validated by the person in charge of your configuration.

Note for CMIP6 simulations to be run on this machine: you have to modify the config.card as indicated here:

If you have any questions, don't hesitate to send them to the list "" so that more people can learn from the answers. You can also answer questions from your colleagues on this list.

2020/04/28 svn server for LMDZ on TGCC

Dear all,

I'm not sure this information was already given on the plateform_users list.
The LMDZ svn server was changed at the beginning of April. The new one does not work on Irene, so the LMDZ team made a copy of the old one. Consequently, if you want to extract a configuration that includes LMDZ on IRENE or IRENE AMD, you first need to modify the file modipsl/util/mod.def:


#-S- 11 svn

will become

#-S- 11 svn

Have a good day


You can read the archive here



Last modified on 11/08/22 16:33:17