= UPDATED Actions and notes from NEMO HPC working group meeting =

Attending: Mike Bell, Miguel Castrillo, Marie-Alice Foujols, Tim Graham, Claire Levy, Gurvan Madec, Silvia Mocavero, Oriol Tinto-Prims

Apologies: Mondher Chekki, Martin Schreiber, Julien le Sommer

1. Discuss results from various groups
a. Barcelona (Miguel) – document circulated prior to the meeting. The report describes the performance of ORCA2; Miguel is keen to study higher-resolution models. Section 4 highlights improvements achieved by reducing the number of communications between nodes (needed because of high network latency). The message packing and the reduced frequency of convergence checking have been included in the NEMO trunk at vn 3.6. The other change and an improvement to the message packing for the north-fold are available as branches dev_r5302_CNRS18_HPC_scalability & dev_r5546_CNRS19_HPC_scalability. A schematic sketch of the message-packing idea is given at the end of this item.
The Dimemas simulator has not yet been used to simulate the impact of cache misses on performance. Gurvan noted that work is in progress to couple LIM3 through OASIS so that its domain decomposition can differ from NEMO's.
Action: Miguel to provide some information about the domain decompositions used in his report.
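
Purely as an illustration of the message-packing idea discussed above (and not NEMO's actual lbc_lnk code), the sketch below packs the halos of two fields into a single buffer so that one MPI exchange replaces two, halving the number of latency-bound messages; the routine and variable names are invented for the example.

{{{
! Illustrative sketch only: pack the east-going halos of two fields into one
! buffer so that a single MPI exchange replaces two separate ones.
SUBROUTINE exchange_east_packed( fld1, fld2, jpi, jpj, ieast, iwest, icomm )
   USE mpi
   INTEGER, INTENT(in)    :: jpi, jpj, ieast, iwest, icomm
   REAL(8), INTENT(inout) :: fld1(jpi,jpj), fld2(jpi,jpj)
   REAL(8)                :: zsnd(2*jpj), zrcv(2*jpj)
   INTEGER                :: istat(MPI_STATUS_SIZE), ierr
   !
   zsnd(1:jpj)       = fld1(jpi-1,:)     ! pack last inner column of field 1
   zsnd(jpj+1:2*jpj) = fld2(jpi-1,:)     ! pack last inner column of field 2
   !
   CALL MPI_SENDRECV( zsnd, 2*jpj, MPI_DOUBLE_PRECISION, ieast, 1,   &
      &               zrcv, 2*jpj, MPI_DOUBLE_PRECISION, iwest, 1,   &
      &               icomm, istat, ierr )
   !
   fld1(1,:) = zrcv(1:jpj)               ! unpack into western halo of field 1
   fld2(1,:) = zrcv(jpj+1:2*jpj)         ! unpack into western halo of field 2
END SUBROUTINE exchange_east_packed
}}}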

b. CMCC (Silvia) – document and slides circulated prior to the meeting. The report describes a reduction in the communications used for the north-pole fold, which has been included in the NEMO trunk at vn 3.6. The slides describe investigations, on a number of machines, of the impact of three approaches to implementing OpenMP for the MUSCL advection scheme (a schematic loop-level example is sketched below). Tim mentioned that the Met Office has had more success implementing OpenMP in its Unified Model on its CRAY than on its previous IBM machine.
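
To illustrate the kind of loop-level OpenMP approach under discussion (this is not the actual MUSCL routine; the routine name and the simplified update are invented, while the array names only mimic NEMO conventions), one coarse-grained option is to distribute the vertical levels over threads:

{{{
! Illustrative sketch only: loop-level OpenMP over vertical levels of an
! advection-style tracer update (not the real MUSCL scheme).
SUBROUTINE adv_update_omp( pta, ptb, pun, jpi, jpj, jpk )
   INTEGER, INTENT(in)    :: jpi, jpj, jpk
   REAL(8), INTENT(in)    :: ptb(jpi,jpj,jpk), pun(jpi,jpj,jpk)
   REAL(8), INTENT(inout) :: pta(jpi,jpj,jpk)
   INTEGER :: ji, jj, jk
   !
   !$OMP PARALLEL DO PRIVATE(ji,jj,jk) SCHEDULE(static)
   DO jk = 1, jpk               ! whole vertical levels are shared out over threads
      DO jj = 2, jpj-1
         DO ji = 2, jpi-1
            ! schematic upstream flux divergence in the i-direction only
            pta(ji,jj,jk) = pta(ji,jj,jk)                                 &
               &          - (  pun(ji  ,jj,jk) * ptb(ji  ,jj,jk)          &
               &             - pun(ji-1,jj,jk) * ptb(ji-1,jj,jk)  )
         END DO
      END DO
   END DO
   !$OMP END PARALLEL DO
END SUBROUTINE adv_update_omp
}}}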

c. Met Office (Tim) – preliminary results on our new CRAY; notes on the tests were circulated prior to the meeting. The results suggest that calculations within a node are memory bound. Tim is exploring the CRAY PAT tool options and checking their consistency with the NEMO timing outputs.

2. What is the current level of performance of NEMO on HPCs?
a. Compared with other ocean models – ROMS takes timesteps twice as large as NEMO's. It has OpenMP implemented at the level of its second loop (Gurvan gave a more precise description). Performance is very machine dependent. Gurvan – is there a reference on this?
Action: Gurvan to circulate a message from Steve Griffies describing the domain decomposition and performance of the GFDL 1/10 deg coupled model (complete).
Numbers from the GFDL coupled system, for CM2.6 (50 km atmosphere coupled to 1/10th degree MOM):
- Atmos PEs = 1440
- Ocean PEs = 17820
- Ocean decomposition 160x160
- Land-locked PEs masked out = 7780
- 300 second ocean timestep
- 1200 second atmosphere timestep
- 1200 second coupling step
- 12-13 hours for one year of simulation; 1 TB of data per year.
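
(As a rough consistency check on these numbers, not part of the circulated message: 160 x 160 = 25600 subdomains, and removing the 7780 land-locked ones leaves the 17820 ocean PEs quoted; with a 300 s timestep, one year is about 105000 ocean steps, so 12-13 hours of wallclock corresponds to roughly 2-2.5 ocean steps per second.)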

b. Compared with the best that could reasonably be expected
i. On a single processor / node: Silvia has looked at this and found calculations running at ~10% of peak. It is not clear what could reasonably be expected.
Action: Silvia to circulate her document describing roofline modelling of Glob16 with BFM (the biology model) on BlueGene.
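
(For reference, the roofline model referred to here bounds attainable performance by P = min(P_peak, I x B), where I is the arithmetic intensity of the kernel in flop/byte and B is the memory bandwidth. With illustrative numbers only: a kernel at I = 0.25 flop/byte on a node with B = 100 GB/s and P_peak = 500 Gflop/s is capped at 25 Gflop/s, i.e. 5% of peak, which is why memory-bound codes typically sit well below peak.)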

ii. Through parallelisation – communication bottlenecks have been identified and some have been removed. Running computations in parallel with (i.e. overlapping) communications between nodes has not yet been explored (a schematic of such overlap is sketched below).
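
As a purely illustrative sketch of such overlap (invented routine and variable names; this is not NEMO's lbc_lnk), a non-blocking east-west halo exchange can be started, the interior points updated while the messages are in flight, and only then the halo-dependent points updated:

{{{
! Illustrative sketch only: hide halo-exchange latency behind interior work.
SUBROUTINE step_with_overlap( fld, jpi, jpj, ieast, iwest, icomm )
   USE mpi
   INTEGER, INTENT(in)    :: jpi, jpj, ieast, iwest, icomm
   REAL(8), INTENT(inout) :: fld(jpi,jpj)
   REAL(8) :: zsnd_e(jpj), zsnd_w(jpj), zrcv_e(jpj), zrcv_w(jpj)
   INTEGER :: ireq(4), ierr
   !
   ! start the non-blocking east-west halo exchange
   CALL MPI_IRECV( zrcv_w, jpj, MPI_DOUBLE_PRECISION, iwest, 1, icomm, ireq(1), ierr )
   CALL MPI_IRECV( zrcv_e, jpj, MPI_DOUBLE_PRECISION, ieast, 2, icomm, ireq(2), ierr )
   zsnd_e(:) = fld(jpi-1,:)   ;   zsnd_w(:) = fld(2,:)
   CALL MPI_ISEND( zsnd_e, jpj, MPI_DOUBLE_PRECISION, ieast, 1, icomm, ireq(3), ierr )
   CALL MPI_ISEND( zsnd_w, jpj, MPI_DOUBLE_PRECISION, iwest, 2, icomm, ireq(4), ierr )
   !
   ! ... update interior points (ji = 3 to jpi-2), which need no halo data ...
   !
   CALL MPI_WAITALL( 4, ireq, MPI_STATUSES_IGNORE, ierr )
   fld(1,:) = zrcv_w(:)   ;   fld(jpi,:) = zrcv_e(:)
   ! ... then update the columns next to the halo (ji = 2 and ji = jpi-1) ...
END SUBROUTINE step_with_overlap
}}}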

3. What are the priorities for future work?
- It would be useful to define common benchmark configurations at low resolution and high resolution. The GYRE configuration could also be useful. There is a lot of support for this idea.
A NEMO HPC benchmark already exists, built by Sébastien Masson, and has already been given to a few computer vendors; it would probably be useful to restart from this existing benchmark. It is mainly composed of two pieces: first a GYRE-global 1/12° equivalent, and second a “real ORCA12”. For the GYRE configuration we would need to check/update the physical and numerical choices in its namelist in order to activate the up-to-date options.

- The short-term work-plan for the group proposed by Sebastien Masson describes a number of good practical steps for communications between nodes that would definitely give improvements.

- A longer-term methodology / strategy is not clear. It could be based on continued analysis of bottlenecks. Larger changes to the code organisation would need very careful consideration of the impacts on science users. An approach will need to be developed for nodes supporting 100 or 1000 threads; this is why OpenMP is important.
- It would be useful at some point to write an HPC contribution to the NEMO coding rules, in order to avoid code that will degrade performance (see Gurvan's point A below).

- Gurvan suggests:
A) For me, the short-term target for HPC with NEMO is to be able to run the system efficiently with a local domain of 10 by 10. Currently the limit of scalability is a ~40 by 40 local domain. The gain is to be able to use 16 times more processors, and thus probably to be ~10 times faster in elapsed time (up to 16 times in theory). Reaching this target requires (at least) that all five tasks proposed by Seb are achieved, but also a sixth one:
6) Restrict the computation to the inner domain everywhere:
Description: verify that, wherever possible, the computation is done only in the inner domain (i.e. from 2 to jpi-1 and from 2 to jpj-1). In particular, in many places implicit DO loops using (:,:) have been introduced that should be removed (a schematic before/after example is given at the end of this item). NB: this is particularly true in LIM3, where most DO loops run over the full domain.
Skills: f90, good knowledge of NEMO.
Amount of work: < 4 months.
Easy to verify: results must be unchanged.
Impact for HPC: potentially significant when the local domain is small (for example, with a 10x10 local domain, computing over the full domain rather than the inner domain means 36% more calculation; it is still 10% for a 40x40 local domain).
Impact for the users: none.
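
As a schematic before/after illustration of task 6 (invented variable names, not actual NEMO code):

{{{
! Before: implicit full-domain loop, which also updates the halo points that
! the next halo exchange (lbc_lnk) overwrites anyway
zta(:,:) = ptb(:,:) + rdt * ztrd(:,:)

! After: computation restricted to the inner domain 2:jpi-1, 2:jpj-1
DO jj = 2, jpj-1
   DO ji = 2, jpi-1
      zta(ji,jj) = ptb(ji,jj) + rdt * ztrd(ji,jj)
   END DO
END DO
}}}
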
B) The performance tests should be done separately on the ocean (OPA), the biogeochemistry, and the sea-ice. This can be done by running the ocean alone (GYRE), the off-line TOP (OFF), and the StandAlone Surface module (SAS) with sea-ice activated. Indeed, currently setting nn_components = 1 for the ocean and 2 for SAS in the namsbc namelist allows an ocean-ice system (forced or coupled to the atmosphere) to be run as two separate executables, i.e. in parallel, so that the sea-ice computation is masked by the ocean computation (shorter elapsed time!); a schematic namelist extract is given after this item. Similarly, the same technique could be introduced for the biogeochemistry (especially when not using the on-line coarsening of TOP). In order to make progress on this, two other tasks can be defined:
I. Optimisation of SAS (with ice): design the optimal domain decomposition for the ice (ideas: (1) use a larger jpj size for processors in the 40°N-40°S band, where there is no sea-ice and computation is faster than in icy areas; (2) change the global model domain for SAS so that there is no north-fold communication in SAS, with the Pacific sector of the Arctic Ocean moved north of the Atlantic sector).
II. Introduce the possibility of running TOP in parallel with the ocean, using the same technique as the one set up for SAS.
Both tasks require a good knowledge of NEMO, and the second also requires knowledge of OASIS.
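
A schematic extract only, based on the nn_components values quoted above (the exact contents of namsbc should be checked against the NEMO reference documentation):

{{{
! namsbc in the ocean (OPA) executable
&namsbc
   nn_components = 1    ! this executable runs the ocean component
/

! namsbc in the StandAlone Surface (SAS) executable, which exchanges fields
! with the ocean executable running alongside it
&namsbc
   nn_components = 2    ! this executable runs SAS with sea-ice activated
/
}}}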

C) If the north fold is still a blocking issue (despite the recent great improvement), it is possible to increase the overlap area just at the north fold, restricting its application to the strict minimum per time-step, without increasing the size of all the arrays.

4. What resources are available for this work?
Miguel could spend up to 50% of his time. Oriol Tinto will be doing a PhD dedicated to research on HPC optimisations.

Silvia could spend 20% of her time on HPC optimisation issues.

Tim could only spend ~10-20% of his time on these issues. Martin Schreiber has applied for a PhD student to work on NEMO optimisation issues.

Mondher could spend ~10-20% of his time on these issues.

There could be EC funding available for work of this sort.

5. Future meetings
Mike will call another meeting in 4-6 weeks' time.

Specific agenda items:

- Agree who will own the actions identified in Sebastien Masson's task list
- Discuss which benchmark configurations to use for future tests
- Further discussion of the existing evidence about bottlenecks (Silvia's roofline analysis, further results from Tim, ...)
- Others?