Changes between Version 13 and Version 14 of Doc/ComputingCenters/TGCC/Irene


Timestamp:
06/28/18 19:10:00 (6 years ago)
Author:
jgipsl
Comment:

--

  • Doc/ComputingCenters/TGCC/Irene

    v13 v14  
    11{{{ 
    22#!html 
    3 <h1>Working on the Irène machine </h1> 
     3<h1>Working on the Irene machine </h1> 
    44}}} 
    55---- 
    66[[PageOutline(1-3,Table of contents,,numbered)]] 
    77 
    8 Update 27/06/18 
     8Update 28/06/18 
    99[[NoteBox(warn, Documentation under development , 600px)]] 
    1010 
     
    1919 * {{{ ccc_mpp  -u $(whoami)}}} ->display your jobs. 
    2020 
    21 # Irene environment and files system # 
    22 ## Suggested environment ## 
     21# Suggested environment # 
    2322Before working on Irene you need to prepare your environment. We provide two files which you can copy from the igcmg home. The first one, called '''bashrc''', will source the second, called '''bashrc_irene'''. We suggest that you copy both files to your home, rename them by adding a dot prefix, and modify .bashrc so that your own .bashrc_irene is sourced instead of the one in the igcmg home. Later on you can add personal settings in your .bashrc_irene. Do as follows:  
    2423{{{ 
     
    4241 9) feature/mkl/sequential             18) mpi/openmpi/2.0.2                            27) ferret/7.2(default)           
    4342}}} 
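The copy-and-rename steps described above can be sketched as a small shell function. This is a minimal sketch, not a verified procedure: the igcmg home path shown in the comment is an assumption, and you still need to edit .bashrc by hand afterwards.

```shell
# setup_irene_env SRC DEST: copy the two suggested files from SRC to DEST,
# adding the dot prefix (sketch; paths on Irene are assumptions).
setup_irene_env() {
    src=$1
    dest=$2
    cp "$src/bashrc" "$dest/.bashrc"
    cp "$src/bashrc_irene" "$dest/.bashrc_irene"
    # Then edit $dest/.bashrc by hand so that it sources your own
    # $dest/.bashrc_irene instead of the copy in the igcmg home.
}

# On Irene something like the following, where the source path is a guess
# following the /ccc/cont003/home/<project>/<login> pattern:
# setup_irene_env /ccc/cont003/home/igcmg/igcmg "$HOME"
```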
    44  --> Be careful: this environment can be updated during the next weeks according to TGCC recommendations 
    4543 
    46 ## File system at irene ## 
    47 On Irene you have a storedir, a workdir, a scratchdir by project. For example if you are working with project gen2201 and gen2212 you will have all these directories 
     44--> Be careful: this environment can be updated during the next weeks according to TGCC recommendations 
     45 
     46# File system # 
     47On Irene you have a home, a storedir, a workdir and a scratchdir per project. For example, if you are working with the projects gen2201 and gen2212 you will have all of these directories: 
    4848{{{ 
     49/ccc/store/cont003/gen2201/login 
     50/ccc/store/cont003/gen2212/login 
    4951 
    5052/ccc/work/cont003/gen2201/login 
    5153/ccc/work/cont003/gen2212/login 
    5254 
    53 /ccc/store/cont003/gen2201/login 
    54 /ccc/store/cont003/gen2212/login 
    55  
    5655/ccc/scratch/cont003/gen2201/login 
    5756/ccc/scratch/cont003/gen2212/login 
    5857 
     58/ccc/cont003/home/gen2201/login 
     59/ccc/cont003/home/gen2212/login 
    5960}}} 
    60 Before working you need to know with which project you are, and load its environment (You can put this command in your .barshrc_irene file) 
     61 
     62Before you start working you need to know which project you belong to and load the corresponding environment. For example, if you will work on the project gen2201, do the following (you can put the command into your .bashrc_irene): 
    6163{{{ 
    6264module load datadir/gen2201  
    6365}}} 
    64 You will have new environment variables to access working directories  
     66By loading a datadir module you will get new environment variables to access the working directories:  
    6567{{{ 
    6668GEN2201_ALL_CCCSCRATCHDIR=/ccc/scratch/cont003/gen2201/gen2201 
     
    7476}}} 
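A small usage sketch, assuming the datadir/gen2201 module has been loaded: the variable name follows the example above, and the fallback path is only illustrative (on Irene the module sets the real value).

```shell
# Use the variable set by the datadir module instead of hard-coding paths.
# GEN2201_ALL_CCCSCRATCHDIR is only defined once the module is loaded; the
# fallback after ":-" is illustrative.
scratch=${GEN2201_ALL_CCCSCRATCHDIR:-/ccc/scratch/cont003/gen2201/$USER}
echo "project scratch directory: $scratch"
```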
    7577 
    76 [[NoteBox(warn, If your previous work and cie directories were on /dsm/ directory\, you will find your data in a specific new project file system "dsmipsl". We recommand to copy your data on your genci project file system , 600px)]] 
     78[[NoteBox(warn, If you previously worked on Curie and your directories were in /dsm/\, you will now find your data in a specific new project file system "dsmipsl". We recommend copying your data to your GENCI project file system , 600px)]] 
    7779 
    7880 
    7981 * Computing nodes: the nodes of the skylake partition have 48 cores each, 3 times more than the computing nodes of the standard partition of Curie; 
    80  * Filesystem accesses MUST be explicit: from the login nodes, you will see the WORK, SCRATCH, STORE spaces as you probably are used to. However, when submitting any job through ccc_msub or ccc_mprun, you must specify -m work, -m scratch, -m store, or combine them like in -m work,scratch; this constraint has the advantage that your jobs won't be supended if a filesystem you don't need becomes unavailable; 
     82 * File system access MUST be explicit: from the login nodes, you will see the WORK, SCRATCH, STORE spaces as you probably are used to. However, when submitting any job through ccc_msub or ccc_mprun, you must specify -m work, -m scratch, -m store, or combine them like in -m work,scratch; this constraint has the advantage that your jobs won't be suspended if a file system you don't need becomes unavailable; 
    8183 * Compute nodes are diskless, meaning that /tmp is not hosted on a local hard drive anymore, but on system memory instead. It offers up to 16 GB (compared to 64 GB on Curie). Please note that any data written to it reduces the size of the memory that remains available for computations. In our case this changes the number of cores used for post-processing jobs like pack_output.  
    8284 * The default time limit for a job submission is 2 hours (7200 s), contrary to 24 h (86400 s) on Curie 
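The constraints above can be combined in a submission script for ccc_msub. The following is a hedged sketch, not a definitive header: the job name, task count, project allocation and executable are placeholders, and the option values should be checked against the TGCC documentation; the -m line is the mandatory file-system declaration and -T raises the wall time explicitly rather than relying on the 2-hour default.

```shell
#!/bin/bash
#MSUB -r MyJob               # job name (placeholder)
#MSUB -n 48                  # number of MPI tasks (one skylake node)
#MSUB -T 7200                # wall time in seconds (the 2 h default limit)
#MSUB -q skylake             # partition
#MSUB -A gen2201             # project allocation (placeholder)
#MSUB -m work,scratch        # file systems the job needs (mandatory on Irene)
set -eu
ccc_mprun ./my_model.exe     # my_model.exe is a placeholder executable
```

Submit it with ccc_msub, e.g. `ccc_msub job.sh` if the script is saved as job.sh.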
     
    8587 
    8688 
    87 # How working with last LMDZOR_v6 or last LMDZORINCA_v6 ? # 
    88   * install your environment (see below)  
    89   * XIOS : After download :update arch XIOS for IRENE  
     89# How to work with the latest LMDZOR_v6 or the latest LMDZORINCA_v6? # 
     90  * install your environment (see above)  
     91  * XIOS: after downloading, update the XIOS arch files for Irene  
    9092{{{ 
    9193cd modipsl/modeles/XIOS/arch