Changes between Version 19 and Version 20 of 2013WP/2013Action_institutions_CMCC
Timestamp: 2013-10-30T08:55:15+01:00
2013WP/2013Action_institutions_CMCC
v19	v20
27	27	|| Action || Brief description || Status || Branch name || Trac ticket n° || Wiki page name || Reviewer(s) || Review status || Nb of weeks || Ready for merge / If "NO", reason and status for 2014 ||
28	28	|| CMCC-1a || Scalability || Continuation || https://forge.ipsl.jussieu.fr/nemo/browser/branches/2013/dev_r3948_CMCC_NorthFold_Opt || https://forge.ipsl.jussieu.fr/nemo/ticket/1150 || || A. Coward || DONE || || YES ||
29		|| CMCC-1b || Scalability: MPI-OPENMP || Continuation || [[BR]]https://forge.ipsl.jussieu.fr/nemo/browser/branches/2013/dev_r4017_CMCC_MPI_OpenMP || [[BR]]https://forge.ipsl.jussieu.fr/nemo/ticket/1151 || || || || || NO[[BR]]This is a long-term activity. Currently only the GYRE configuration is fully parallelized with OpenMP+MPI.[[BR]][[BR]]'''See further notes at CMCC.1 below''' ||
	29	|| CMCC-1b || Scalability: MPI-OPENMP || Continuation || [[BR]]https://forge.ipsl.jussieu.fr/nemo/browser/branches/2013/dev_r4017_CMCC_MPI_OpenMP || [[BR]]https://forge.ipsl.jussieu.fr/nemo/ticket/1151 || || || || || NO[[BR]]The hybrid OpenMP/MPI approach has been implemented for the GYRE configuration and scalability studies are ongoing. Development will continue in 2014.[[BR]][[BR]]'''See further notes at CMCC.1 below''' ||
30	30	|| CMCC-2 || Masks || Started || || || || || || || ||
31	31	|| CMCC-3 || Data assimilation || Not Started || || || || || || || ||