
NEMO HPC

Working group leader (and responsible for wiki pages): Sébastien Masson.


Members of the Working group:

  • Sébastien Masson
  • Italo Epicoco
  • Silvia Mocavero
  • Marie-Alice Foujols
  • Jason Holt
  • Gurvan Madec
  • Mondher Chekki

Objectives:

  • make short-term recommendations for improving the performance of the existing system
  • propose criteria for taking decisions at Gateway 2025 regarding HPC
  • provide more detail on Gung-Ho (esp. regarding its implications for mesh discretization)
  • identify other possible strategies and approaches for long-term evolutions
  • define a simple configuration (with I/O and complex geometry) that will serve as a proof of concept for validating the proposed approach for the future system

Some ideas...:

A strong improvement of NEMO scalability is needed to be able to take advantage of the new machines. This probably means a deep review/rewrite of the NEMO code at some point in the future (beyond 5 years from now?). At the same time, we already know that CMIP7 won't use an ocean model that has not been thoroughly tested and validated, and will therefore stick to a NEMO model not so far from the existing one.
This means that we need to:

1) keep improving the current structure of NEMO so it works quite efficiently for almost 10 more years (until the end of CMIP7).
2) start to work on a new structure that would be fully tested and validated at least for CMIP8, in about 10 years.

Based on this, we propose to divide the work into three time windows:

0-3 years: improvements to the existing code:

1) remove solvers and global sums (to be done in 3.7)
2) reduce the number of communications: do fewer and bigger communications (group communications, use a larger halo); main priority: communications in the time splitting and in the sea-ice rheology
3) reduce the number of communications: remove useless communications (a lot of them are simply associated with output...)
4) introduce asynchronous communications (see the halo-exchange sketch below)
5) check code vectorization (SIMD instructions; see the second sketch below)
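
To make items 2-4 concrete, here is a minimal sketch of a grouped, non-blocking halo exchange, written in C with MPI for illustration only (NEMO itself is Fortran, and the names ni, nj, HALO and exchange_halo are hypothetical, not NEMO routines). A halo of width HALO is exchanged in a single message per neighbour, and the requests are posted non-blocking so that halo-independent work can proceed while the messages are in flight:

{{{
/* Sketch only: grouped, asynchronous halo exchange for one field along
 * the east-west direction.  A HALO-wide halo sent in one message allows
 * several time steps between exchanges, and MPI_Isend/MPI_Irecv let the
 * interior computation overlap the communication.  Assumes nj <= 512. */
#include <mpi.h>

#define HALO 3                  /* wider halo -> fewer, bigger messages */

/* field holds nj rows of ni points (ghost columns included), row-major */
void exchange_halo(double *field, int ni, int nj,
                   int west, int east, MPI_Comm comm)
{
    double sw[HALO*512], se[HALO*512], rw[HALO*512], re[HALO*512];
    MPI_Request req[4];
    int count = HALO * nj;

    /* pack the HALO interior columns adjacent to each boundary */
    for (int j = 0; j < nj; j++)
        for (int i = 0; i < HALO; i++) {
            sw[j*HALO + i] = field[j*ni + HALO + i];
            se[j*HALO + i] = field[j*ni + ni - 2*HALO + i];
        }

    /* post all receives and sends at once, non-blocking */
    MPI_Irecv(rw, count, MPI_DOUBLE, west, 0, comm, &req[0]);
    MPI_Irecv(re, count, MPI_DOUBLE, east, 1, comm, &req[1]);
    MPI_Isend(sw, count, MPI_DOUBLE, west, 1, comm, &req[2]);
    MPI_Isend(se, count, MPI_DOUBLE, east, 0, comm, &req[3]);

    /* ... the caller can update halo-independent interior points here,
     *     hiding the communication time ... */

    MPI_Waitall(4, req, MPI_STATUSES_IGNORE);

    /* unpack the received columns into the ghost columns */
    for (int j = 0; j < nj; j++)
        for (int i = 0; i < HALO; i++) {
            field[j*ni + i]             = rw[j*HALO + i];
            field[j*ni + ni - HALO + i] = re[j*HALO + i];
        }
}
}}}

The same pattern extended to the north-south neighbours, and to several fields packed into one buffer, is what "do fewer and bigger communications" amounts to in practice.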
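For item 5, vectorization is mostly a matter of keeping the innermost loop contiguous, unit-stride and dependence-free; OpenMP 4 provides a portable way to state the intent. A tiny, hypothetical C example (NEMO loops are Fortran, but the principle is identical):

{{{
/* Sketch: a Laplacian-like stencil written so the compiler can use SIMD
 * instructions on the innermost loop (unit stride, no dependences).
 * The "omp simd" directive (OpenMP 4) makes the intent explicit. */
void diffuse(const double *t, double *tnew, int ni, int nj, double r)
{
    for (int j = 1; j < nj - 1; j++) {
        #pragma omp simd
        for (int i = 1; i < ni - 1; i++)
            tnew[j*ni + i] = t[j*ni + i]
                + r * (t[j*ni + i-1] + t[j*ni + i+1]
                     + t[(j-1)*ni + i] + t[(j+1)*ni + i]
                     - 4.0 * t[j*ni + i]);
    }
}
}}}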

0-5 years: improvements through the introduction of OpenMP:

  • work initiated by CMCC
  • implementations such as tiling may be efficient on many-core processors? (a tiling sketch follows this list)
  • review lbclnk to be able to deal with both MPI and OpenMP
  • OpenMP along the vertical axis?
  • find a way to remove the implicit schemes?
  • test different ways to find new sources of parallelism, for example with the help of OpenMP 4
  • test OpenACC (not that far from OpenMP)?
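
As a hypothetical sketch of the tiling idea in a hybrid MPI+OpenMP setting (C for illustration; the tile sizes TI/TJ and the routine names are invented, not CMCC code): each MPI subdomain is cut into cache-sized tiles and the tiles are distributed over the OpenMP threads.

{{{
/* Sketch: split the MPI subdomain into small cache-friendly tiles and
 * spread the tiles over the OpenMP threads.  TI/TJ would be tuned per
 * machine; the tile body stands in for the real physics. */
#define TI 32
#define TJ 32

static void update_tile(double *f, int ni, int i0, int i1, int j0, int j1)
{
    for (int j = j0; j < j1; j++)
        for (int i = i0; i < i1; i++)
            f[j*ni + i] *= 0.99;       /* stand-in for the real physics */
}

void step_tiled(double *f, int ni, int nj)
{
    /* collapse(2) exposes all tiles to the thread team at once */
    #pragma omp parallel for collapse(2) schedule(dynamic)
    for (int jt = 0; jt < nj; jt += TJ)
        for (int it = 0; it < ni; it += TI)
            update_tile(f, ni,
                        it, it + TI < ni ? it + TI : ni,
                        jt, jt + TJ < nj ? jt + TJ : nj);
}
}}}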

beyond 5 years:

GungHo or not GungHo, that is the question...

Agenda:

As a start for the next 2 years, a workshop will be organized in 2015 on “NEMO in 2025: routes toward multi-resolution approaches”.

Comments from group members:

gurvan -- (2014 November 11):

  • Improving the code efficiency implies using more processors for a given application. This means breaking the current limit of a 35x35 local horizontal domain. The 3-year propositions go in that direction. One point is missing: a target for an ORCA 1/36° is a 10x10 local domain, to be able to use 1 million cores... In that case, the number of horizontal grid points is about the same as the vertical one (about 100 levels is currently what we are running). So, do we have to consider a change in the indexing of arrays from i-j-k to k-j-i? (see the loop-ordering sketch after this list)
  • Sea-ice running in parallel with the ocean on its own set of processors (with a 1-time-step asynchronous coupling between ice and ocean).
  • BGC: obviously, on-line coarsening significantly reduces the cost of BGC models; further improvement can be achieved by considering the SMS terms of BGC as a big 1D vector and computing over only the required area (ocean points only, ocean and euphotic layer only, etc.). Same idea for the sea-ice physics... (see the packing sketch after this list)
  • Remark: the version of MOM currently under development (MOM5: switch to C-grid, use of a finite-volume approach, ...) is not, to my knowledge, using a GungHo-type approach...
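
On the i-j-k versus k-j-i question, a hypothetical illustration (in C, where the last index is contiguous, whereas in Fortran it is the first): with a 10x10 local domain and about 100 levels, putting k innermost in memory gives the inner loop ~100 contiguous points instead of ~10, which suits SIMD and cache lines much better. Sizes and names are invented.

{{{
#define NI 10     /* hypothetical 10x10 local domain, ~100 levels */
#define NJ 10
#define NK 100

/* "i-j-k": i contiguous (as in today's NEMO Fortran arrays);
 * in C this is f[k][j][i], so the inner loop is only NI = 10 long */
void scale_ijk(double f[NK][NJ][NI], double a)
{
    for (int k = 0; k < NK; k++)
        for (int j = 0; j < NJ; j++)
            for (int i = 0; i < NI; i++)
                f[k][j][i] *= a;
}

/* "k-j-i": k contiguous; the inner loop is NK = 100 contiguous points,
 * a much better unit for vectorization on tiny subdomains */
void scale_kji(double f[NI][NJ][NK], double a)
{
    for (int i = 0; i < NI; i++)
        for (int j = 0; j < NJ; j++)
            for (int k = 0; k < NK; k++)
                f[i][j][k] *= a;
}
}}}

The trade-off is that horizontal operators and halo packing then become strided, so the answer is not obvious; hence the question.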
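And a hypothetical sketch of the "big 1D vector" idea for the SMS terms (C for illustration; wet_index, MAXWET and the SMS expression itself are invented): the wet points are gathered once into a dense vector, the SMS computation runs on that vector only, and the trends are scattered back.

{{{
#define MAXWET (1 << 20)        /* capacity of the packed work vector */

/* tracer3d/trend3d are flattened 3D arrays; wet_index lists the nwet
 * ocean points (precomputed once from the land-sea mask) */
void sms_packed(const double *tracer3d, double *trend3d,
                const int *wet_index, int nwet)
{
    static double work[MAXWET];

    /* gather: ocean (and e.g. euphotic-layer) points only */
    for (int n = 0; n < nwet; n++)
        work[n] = tracer3d[wet_index[n]];

    /* SMS computation on one long dense loop: no land points, no masks */
    for (int n = 0; n < nwet; n++)
        work[n] = -0.05 * work[n];      /* stand-in for the real terms */

    /* scatter the trends back to the 3D array */
    for (int n = 0; n < nwet; n++)
        trend3d[wet_index[n]] += work[n];
}
}}}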
