Changes between Version 9 and Version 10 of WorkingGroups/HPC
Timestamp: 2014-11-11T16:50:18+01:00
= '''NEMO HPC''' =

Working group leader (and responsible for wiki pages): Sébastien Masson.[[BR]]

----
== Members of the Working group: ==
* Sébastien Masson
* Italo Epicoco
* Silvia Mocavero
* Marie-Alice Foujols
* Jason Holt
* Gurvan Madec
* Mondher Chekki

----
== Objectives: ==
* make short-term recommendations for improving the performance of the existing system
* propose criteria for taking decisions at Gateway 2025 regarding HPC
* provide more detail on Gung-Ho (esp. regarding its implications for mesh discretization)
* identify other possible strategies and approaches for evolutions in the long term
* define a simple configuration (with IO and complex geometry) that will serve as a proof of concept for validating the proposed approach for the future system

== Some ideas: ==
A strong improvement of NEMO scalability is needed to be able to take advantage of the new machines. This probably means a deep review/rewrite of the NEMO code at some point in the future (beyond 5 years from now?). At the same time, we already know that CMIP7 won't use an ocean model that has not been thoroughly tested and validated, and will stick to a NEMO model not far from the existing one. [[BR]]
This means that we need to:
1) keep improving the current structure of NEMO so that it works efficiently for almost 10 more years (until the end of CMIP7). [[BR]]
2) start to work on a new structure that would be fully tested and validated at least for CMIP8, in about 10 years. [[BR]]

Based on this, we propose to divide the work according to 3 temporal windows. [[BR]]

'''0-3 years''': improvements with the existing code: [[BR]]
0) remove solvers and global sums (to be done in 3.7) [[BR]]
1) reduce the number of communications: do fewer and bigger communications (group communications, use a larger halo);
main priority: communications in the time splitting and sea-ice rheology. [[BR]]
2) reduce the number of communications: remove useless communications (many of them are simply associated with output...) [[BR]]
3) introduce asynchronous communications [[BR]]
4) check code vectorization (SIMD instructions) [[BR]]

'''0-5 years''': improvements through the introduction of OpenMP: [[BR]]
* work initiated by CMCC
* implementations such as tiling may be efficient on many-core processors?
* review lbclnk to be able to deal with both MPI and OpenMP
* OpenMP along the vertical axis? find a way to remove implicit schemes?
* test different ways to find new sources of parallelism, for example with the help of OpenMP4
* test OpenACC (not that far from OpenMP)?

'''beyond 5 years''': [[BR]]
GungHo or not GungHo, that is the question...

== Agenda: ==
For the next 2 years, as a start, a workshop to be organized in 2015 on “NEMO in 2025: routes toward multi-resolution approaches”.

== Comments: ==
'''gurvan''' -- (2014 November 11):

* improving the code efficiency implies using more processors for a given application. This means breaking the current limit of the 35x35 local horizontal domain. The 3-year propositions go in that direction. One point is missing: a target for an ORCA 1/36° is a 10x10 local domain, to be able to use 1 million cores...
In this case, the number of horizontal grid points is the same as the vertical one (about 100 levels is currently what we are running). So, do we have to consider a change in the indexation of arrays from i-j-k to k-j-i?

* Sea-ice running in parallel with the ocean on its own set of processors (with a 1-time-step asynchronous coupling between ice and ocean).

* BGC: obviously, on-line coarsening significantly reduces the cost of BGC models; further improvement can be achieved by considering the SMS term of BGC as a big 1D vector, with computation over only the required area (ocean points only, ocean and euphotic layer only, etc.). Same idea for sea-ice physics...

* Remark: the version of MOM currently under development (MOM5: switch to C-grid, use of a finite-volume approach, ...) is not, to my knowledge, using a GungHo-type approach...
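The i-j-k versus k-j-i question above can be illustrated with a small stride computation (a sketch with illustrative sizes and names, not NEMO code): in Fortran's column-major layout the first index is contiguous in memory, so putting k first would make a vertical sweep at a fixed horizontal point, e.g. for an implicit vertical scheme, walk through adjacent elements instead of jumping by the whole horizontal domain.

```python
ni, nj, nk = 10, 10, 100  # hypothetical 10x10 local domain, ~100 levels

def stride_between_levels(order):
    """Memory stride (in elements) between level k and k+1 at a fixed
    horizontal point, for a given index order (fastest-varying first).
    Mimics Fortran column-major storage, where the first index is
    contiguous in memory."""
    sizes = {"i": ni, "j": nj, "k": nk}
    stride = 1
    for axis in order:
        if axis == "k":
            return stride
        stride *= sizes[axis]

# current i-j-k ordering: consecutive levels are ni*nj elements apart
assert stride_between_levels(("i", "j", "k")) == ni * nj  # 100-element jumps
# k-j-i ordering: consecutive levels are adjacent in memory
assert stride_between_levels(("k", "j", "i")) == 1
```

With a 10x10 horizontal domain and ~100 levels, the vertical dimension is as long as each horizontal one, which is what makes the reordering worth considering.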
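The "big 1D vector" idea for the SMS terms can be sketched as a gather/compute/scatter over a land-sea mask (the mask, tracer values, and the pointwise sms() stand-in below are all invented for illustration; they are not NEMO routines):

```python
# 1 = ocean, 0 = land (hypothetical 3x4 local domain)
mask = [
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
]

tracer = [[float(i + j) for j in range(4)] for i in range(3)]

# gather: collect the indices of ocean points into one flat list
ocean = [(i, j) for i, row in enumerate(mask)
                for j, m in enumerate(row) if m == 1]

packed = [tracer[i][j] for (i, j) in ocean]   # 1D work vector, ocean only

def sms(x):
    # stand-in for a pointwise biogeochemical source-minus-sink term
    return 0.5 * x

packed = [sms(x) for x in packed]             # compute on ocean points only

for (i, j), val in zip(ocean, packed):        # scatter results back
    tracer[i][j] = val

# land points are untouched; work was done on 8 of the 12 points
```

The same gather would use a stricter mask (e.g. euphotic-layer points only) to shrink the vector further, which is the "only the required area" point above.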