'''NEMO HPC subgroup: Tue 05 Dec 2017'''

Attending: Mike Bell (Met Office), Tim Graham (Met Office), Miroslaw Andrejczuk (Met Office), Martin Price (Met Office), Matthew Glover (Met Office), Andy Porter (STFC), Mario Acosta (BSC), Sebastien Masson (CNRS), Martin Schreiber (Uniexe), Silvia Mocavero (CMCC)

Apologies: Claire Levy (CNRS), Marie-Alice Foujols (CNRS)
== 1. CROCO and NEMO performance ==

Laurent could not join the meeting. The presentation may be postponed to the next meeting (possibly the enlarged HPC-WG meeting).

'''Action''': Mike to check Laurent's availability


== 2. Hybrid parallelisation status ==

Silvia updates the group on the coarse-grained tiled parallelisation of the ZDF package. The new version improves on both the previous version and the pure MPI version at the intra-node level: the gain in parallel efficiency is 13% and 9% respectively when a node of the CMCC system is fully populated. Next steps are to test the approach on a new kernel (more representative from the communication point of view) and to compare the hybrid parallel approach with the cache blocking technique in terms of intra-node improvement.
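
To make the approach concrete, here is a minimal sketch of the OpenMP tiling part of such a hybrid scheme, in C rather than NEMO's Fortran; the kernel body, array extents and tile sizes are all illustrative assumptions, not NEMO code. Each thread processes a whole horizontal tile instead of splitting the individual inner loops:

{{{
#!c
/*
 * Minimal sketch of coarse-grained tiling with OpenMP (illustrative C,
 * not NEMO's Fortran; extents, tile sizes and kernel body are assumed).
 */
#define NI 100   /* horizontal extents (assumed) */
#define NJ 100
#define NK 75    /* vertical levels (assumed)    */
#define TI 25    /* tile extents: tuning knobs   */
#define TJ 25

void zdf_like_kernel(double out[NJ][NI][NK], const double in[NJ][NI][NK])
{
    /* One thread per (jt, it) tile: coarse-grained parallelism, rather
     * than fine-grained OpenMP on the inner loops of each routine.     */
    #pragma omp parallel for collapse(2) schedule(static)
    for (int jt = 0; jt < NJ; jt += TJ)
        for (int it = 0; it < NI; it += TI)
            for (int j = jt; j < jt + TJ; j++)
                for (int i = it; i < it + TI; i++)
                    for (int k = 1; k < NK; k++)       /* column sweep */
                        out[j][i][k] = 0.5 * (in[j][i][k] + in[j][i][k-1]);
}
}}}

Because each thread sweeps a compact tile, its working set stays small, which is consistent with the intra-node efficiency gains reported above.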

The library used in XIOS to introduce the OpenMP parallelisation is not yet ready for use.

'''Action''': Silvia to continue working on the proposed activities


== 3. Single-core performance ==

Tests of the correlation between the difference in execution time and the difference in LLC (last-level cache) misses have been performed on the CMCC system for increasing domain sizes (from 10x10 up to 50x50). The trend is confirmed: the increase in execution time when more than one instance is executed within a socket is strictly related to the increase in LLC misses.
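
As a rough illustration of why co-scheduled instances drive up LLC misses, one can compare the aggregate working set against the shared LLC capacity. All figures below (number of 3D fields touched per step, LLC size) are illustrative assumptions, not CMCC measurements:

{{{
#!c
/*
 * Back-of-the-envelope working-set estimate (all numbers are assumed
 * for illustration; they are not CMCC measurements).
 */
#include <stdio.h>

int main(void)
{
    const long ni = 50, nj = 50, nk = 75;  /* largest tested subdomain     */
    const long nfields = 20;               /* 3D fields per step (assumed) */
    const long ws = ni * nj * nk * nfields * (long)sizeof(double);
    const long llc = 30L * 1024 * 1024;    /* shared LLC size (assumed)    */

    for (int inst = 1; inst <= 4; inst++)
        printf("%d instance(s): %6.1f MB aggregate working set %s %ld MB LLC\n",
               inst, inst * ws / 1048576.0,
               inst * ws > llc ? ">" : "<=", llc / (1024 * 1024));
    return 0;
}
}}}

Once the combined working sets exceed the shared LLC, each added instance evicts the others' data, so LLC misses and execution time grow together.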

The implementation of cache blocking for the FCT advection scheme has started. A key point is the choice of the best block size, which depends on the memory hierarchy parameters and on the code being optimised. Cache blocking, like other code transformations, can be automatically integrated into NEMO by using the PSyclone-like parser.
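
A minimal sketch of the technique in C (illustrative only, not NEMO's FCT advection code; extents, block sizes and loop bodies are assumed): two passes over each block reuse the data while it is still cache-resident, and the block extents BI/BJ are exactly the tuning parameters mentioned above:

{{{
#!c
/*
 * Minimal cache-blocking sketch (illustrative C, not NEMO's FCT code).
 * Two passes are applied block by block, so each block is reused while
 * cache-hot instead of streaming the whole domain twice.
 */
#define NI 512
#define NJ 512
#define BI 64    /* block extents: the tuning parameters to match */
#define BJ 64    /* to the cache sizes of the target machine      */

void blocked_two_pass(double t[NJ][NI], double flux[NJ][NI])
{
    for (int jb = 0; jb < NJ; jb += BJ)
        for (int ib = 0; ib < NI; ib += BI) {
            /* pass 1: compute fluxes on this block */
            for (int j = jb; j < jb + BJ; j++)
                for (int i = ib; i < ib + BI; i++)
                    flux[j][i] = 0.5 * t[j][i];
            /* pass 2: apply the update while the block is still cached */
            for (int j = jb; j < jb + BJ; j++)
                for (int i = ib; i < ib + BI; i++)
                    t[j][i] -= flux[j][i];
        }
}
}}}

The best BI/BJ values are those for which the per-block working set fits within the target cache level, which is why the choice depends on the memory hierarchy parameters.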

The vectorization issue can be addressed in parallel with the memory access improvement.

'''Actions''': Silvia to continue the investigation; Tim to test different domain sizes on the MetO system


== 4. ESiWACE GA in December and ESiWACE2 preparation ==

The group broadly agrees with the contents of the short presentation on NEMO for the ESiWACE GA. The talk should give an idea of the resolutions supported by the consortium: 1 km resolution is not supported by NEMO science today, and CMIP global experiments do not require that kind of resolution, so the target resolutions are 1/12° now and 1/36° and 1/48° in the future. Only resolutions that needed some HPC action to improve performance should be mentioned in the presentation.

Tim suggests indicating a gain of 20% (achieved on several GYRE configurations) rather than 60% for the removal of wrk_alloc.
A benchmarking activity on different HPC systems (following Mike’s suggestion) will be added.

'''Action''': Silvia to update the presentation according to the suggestions

There are three proposals in preparation (IS-ENES3, ESiWACE2 and LC-SPACE-03-EO-2018) that include activities on NEMO HPC aspects. It is important for the HPC-WG to coordinate the contributions to the three proposals. It could be useful to have a list of the HPC tasks and to record the interest and contribution of each institution.

'''Actions''': Mike to set up a Google doc listing the activities (from the HPC chapter of the NDS document) by next Monday; all to express their interest in the activities


== 5. Next meeting call ==

The next meeting will be in March, since the enlarged group meeting will take place in January.

'''Action''': Silvia to send the Doodle poll for the next meeting.


== 6. AOB ==

Matthew performed some benchmark tests with NEMO-GYRE on a new ARM-based system. The code runs 1.25x faster than on a Broadwell socket, thanks to the higher memory bandwidth. A presentation of his work will be scheduled for the next meeting.