# Changeset 11528

Timestamp:
2019-09-10T18:28:56+02:00
Message:

dev_r10984_HPC-13 : update mpp doc

File:
1 edited

\section{Boundary condition at the coast (\protect\np{rn\_shlat})}
\label{sec:LBC_coast}
%--------------------------------------------nam_lbc-------------------------------------------------------
%--------------------------------------------namlbc--------------------------------------------------------
\nlst{namlbc}

\label{sec:LBC_mpp}
%-----------------------------------------nammpp--------------------------------------------
\nlst{nammpp}
%-----------------------------------------------------------------------------------------------

For massively parallel processing (mpp), a domain decomposition method is used.
The basic idea of the method is to split the large computation domain of a numerical experiment into several smaller domains and solve the set of equations by addressing independent local problems.
Each processor has its own local memory and computes the model equation over a subdomain of the whole model domain.
The subdomain boundary conditions are specified through communications between processors, which are organized by explicit statements (message passing method).
A big advantage of the method is that it requires few modifications of the initial \fortran\ code:
from the modeller's point of view, each subdomain running on a processor is identical to the "mono-domain" code.
In addition, the programmer manages the communications between subdomains, and the code is faster when the number of processors is increased.
The porting of the OPA code on an iPSC860 was achieved during Guyon's PhD [Guyon et al. 1994, 1995] in collaboration with CETIIS and ONERA.
The implementation in the operational context and the studies of performance on T3D and T3E Cray computers were made in collaboration with IDRIS and CNRS.
The present implementation is largely inspired by Guyon's work [Guyon 1995].

The parallelization strategy is defined by the physical characteristics of the ocean model:
second order finite difference schemes lead to local discrete operators that depend at the very most on one neighbouring point.
The only non-local computations concern the vertical physics (implicit diffusion, turbulent closure scheme, ...), which involve the whole water column, and the solving of the elliptic equation associated with the surface pressure gradient computation, which involves the whole horizontal domain.
Therefore, a pencil strategy is used for the data sub-structuration:
the 3D initial domain is laid out on local processor memories following a 2D horizontal topological splitting.
Each processor sends to its neighbouring processors the updated values of the points corresponding to the interior overlapping area of its neighbouring sub-domains (\ie\ the innermost of the two overlapping rows).
The communication is done through the Message Passing Interface (MPI).
Communications are first done according to the east-west direction and next according to the north-south direction.
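As a minimal sketch of this two-stage exchange (this is not the actual \rou{lbc\_lnk} implementation: the routine name, argument list and the direct use of blocking MPI\_SENDRECV on array sections are illustrative assumptions), a 2D field with a one-point halo could be updated as follows, with the neighbour ranks set to MPI\_PROC\_NULL where no neighbour exists:
\begin{verbatim}
SUBROUTINE halo_exchange( pt, jpi, jpj, nowe, noea, noso, nono, comm )
   USE mpi
   IMPLICIT NONE
   INTEGER , INTENT(in   ) ::   jpi, jpj                 ! local dimensions (interior + 1-point halo)
   INTEGER , INTENT(in   ) ::   nowe, noea, noso, nono   ! west/east/south/north neighbour ranks
   INTEGER , INTENT(in   ) ::   comm                     ! MPI communicator
   REAL(8) , INTENT(inout) ::   pt(jpi,jpj)              ! field whose halo rows/columns are updated
   INTEGER ::   ierr, istatus(MPI_STATUS_SIZE)
   !
   ! 1) east-west direction: send the innermost interior column, receive into the halo column
   CALL MPI_SENDRECV( pt(jpi-1,:), jpj, MPI_DOUBLE_PRECISION, noea, 1,                &
      &               pt(    1,:), jpj, MPI_DOUBLE_PRECISION, nowe, 1, comm, istatus, ierr )
   CALL MPI_SENDRECV( pt(    2,:), jpj, MPI_DOUBLE_PRECISION, nowe, 2,                &
      &               pt(  jpi,:), jpj, MPI_DOUBLE_PRECISION, noea, 2, comm, istatus, ierr )
   !
   ! 2) north-south direction: the rows sent now include the freshly updated east-west halos,
   !    so the corner points are filled without any dedicated corner communication
   CALL MPI_SENDRECV( pt(:,jpj-1), jpi, MPI_DOUBLE_PRECISION, nono, 3,                &
      &               pt(:,    1), jpi, MPI_DOUBLE_PRECISION, noso, 3, comm, istatus, ierr )
   CALL MPI_SENDRECV( pt(:,    2), jpi, MPI_DOUBLE_PRECISION, noso, 4,                &
      &               pt(:,  jpj), jpi, MPI_DOUBLE_PRECISION, nono, 4, comm, istatus, ierr )
END SUBROUTINE halo_exchange
\end{verbatim}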
There are no specific communications for the corners.
The MPI communications require \key{mpp\_mpi}; use also \key{mpi2} if MPI3 is not available on your computer.
The data exchanges between processors are required at the very place where lateral domain boundary conditions are set in the mono-domain computation:
the \rou{lbc\_lnk} routine (found in the \mdl{lbclnk} module), which manages such conditions, is interfaced with routines found in the \mdl{lib\_mpp} module when running on an MPP computer (\ie\ when \key{mpp\_mpi} is defined).
It has to be pointed out that when using the MPP version of the model, the east-west cyclic boundary condition is done implicitly, whilst the south-symmetric boundary condition option is not available.
The output file \textit{communication\_report.txt} lists, for one time step of the model, which routines perform communications and how many.\\

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
%>>>>>>>>>>>>>>>>>>>>>>>>>>>>

In \NEMO, the splitting is regular and arithmetic.
The total number of subdomains corresponds to the number of MPI processes allocated to \NEMO\ when the model is launched (\ie\ mpirun -np x ./nemo will automatically give x subdomains).
The i-axis is divided by \np{jpni} and the j-axis by \np{jpnj}; these parameters are defined in the \nam{mpp} namelist.
If \np{jpni} and \np{jpnj} are $< 1$, they will be automatically redefined in the code to give the best domain decomposition (see below).
Each processor is independent: without message passing or synchronous processes, each program runs alone and accesses just its own local memory.
For this reason, the main model dimensions are now the local dimensions of the subdomain (pencil), which are named \jp{jpi}, \jp{jpj}, \jp{jpk}.
These dimensions include the internal domain and the overlapping rows.
The number of rows to exchange (known as the halo) is usually set to one (nn\_hls=1, in \mdl{par\_oce}, and must be kept to one until further notice).
The whole domain dimensions are named \jp{jpiglo}, \jp{jpjglo} and \jp{jpk}.
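As an illustration, a minimal \nam{mpp} namelist block requesting the automatic domain decomposition could look as follows (a sketch only: the exact set of entries in \nam{mpp} depends on the \NEMO\ version, and only the parameters discussed here are shown):
\begin{verbatim}
!-----------------------------------------------------------------------
&nammpp        !   Massively Parallel Processing
!-----------------------------------------------------------------------
   jpni = 0    ! number of subdomains along i  (< 1: computed automatically)
   jpnj = 0    ! number of subdomains along j  (< 1: computed automatically)
/
\end{verbatim}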
The relationship between the whole domain and a sub-domain is:
\[
  jpi = ( jpiglo - 2\times nn\_hls + (jpni-1) ) / jpni + 2\times nn\_hls
\]
\[
  jpj = ( jpjglo - 2\times nn\_hls + (jpnj-1) ) / jpnj + 2\times nn\_hls
\]
One also defines the variables nldi and nlei, which correspond to the internal domain bounds, and the variables nimpp and njmpp, which give the position of the (1,1) grid-point in the global domain (\autoref{fig:mpp}).
Note that since version 4, there is no more extra-halo area as defined in \autoref{fig:mpp}, so \jp{jpi} is now always equal to nlci and \jp{jpj} equal to nlcj.

An element of $T_{l}$, a local array (subdomain), corresponds to an element of $T_{g}$, a global array (whole domain), by the relationship:
\[
  T_{g} (i+nimpp-1, j+njmpp-1, k) = T_{l} (i,j,k),
\]
with $1 \leq i \leq jpi$, $1 \leq j \leq jpj$, and $1 \leq k \leq jpk$.

The processor number is saved in the variable nproc.
In the standard version, a processor has no more than four neighbouring processors, named nono (for north), noea (east), noso (south) and nowe (west), and two variables, nbondi and nbondj, indicate the relative position of the processor:
\begin{itemize}
\item nbondi = -1: an east neighbour, no west processor,
\item nbondi =  0: an east neighbour, a west neighbour,
\item nbondi =  1: no east processor, a west neighbour,
\item nbondi =  2: no splitting following the i-axis.
\end{itemize}
During the simulation, processors exchange data with their neighbours.
If there is effectively a neighbour, the processor receives variables from this processor on its overlapping row, and sends the data issued from the internal domain corresponding to the overlapping row of the other processor.
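As a purely illustrative example, consider a $182 \times 149$ global grid split with $jpni = 4$, $jpnj = 2$ and $nn\_hls = 1$; with integer division, the relations above give
\[
  jpi = (182 - 2 + 3)/4 + 2 = 47, \qquad jpj = (149 - 2 + 1)/2 + 2 = 76 .
\]
Assuming an even split of the interior points, the second subdomain along the i-axis starts at $nimpp = 46$ and $njmpp = 1$, so its local element $T_{l}(2,2,k)$ corresponds to the global element $T_{g}(47,2,k)$.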
The 1-d arrays $mig(1:\jp{jpi})$ and $mjg(1:\jp{jpj})$, defined in the \rou{dom\_glo} routine (\mdl{domain} module), should be used to get global domain indices from local domain indices.
The 1-d arrays $mi0(1:\jp{jpiglo})$, $mi1(1:\jp{jpiglo})$ and $mj0(1:\jp{jpjglo})$, $mj1(1:\jp{jpjglo})$ have the reverse purpose and should be used to define loop indices expressed in global domain indices (see examples in the \mdl{dtastd} module).\\

The \NEMO\ model computes equation terms with the help of mask arrays (0 on land points and 1 on sea points).
This approach is easily readable and very efficient on computers with a vector architecture; however, on scalar processors, computations over the land regions become more expensive in terms of CPU time, especially for configurations with a realistic bathymetry like the global ocean, where more than 50\% of points are land points.
It is therefore possible that an MPI subdomain contains only land points.
To save resources, we try to suppress as many land subdomains as possible from the computational domain (\autoref{fig:mppini2}).
For example, if $N_{mpi}$ processes are allocated to \NEMO, the domain decomposition will be given by the following equation:
\[
  N_{mpi} = jpni \times jpnj - N_{land} + N_{useless}
\]
$N_{land}$ is the total number of land subdomains in the domain decomposition defined by \np{jpni} and \np{jpnj}.
$N_{useless}$ is the number of land subdomains that are kept in the computational domain in order to make sure that each of the $N_{mpi}$ MPI processes is indeed allocated to a subdomain.
The values of $N_{mpi}$, \np{jpni}, \np{jpnj}, $N_{land}$ and $N_{useless}$ are printed in the output file \texttt{ocean.output}.
$N_{useless}$ must, of course, be as small as possible to limit the waste of resources; a warning is issued in \texttt{ocean.output} if $N_{useless}$ is not zero.
Note that a non-zero value of $N_{useless}$ is usually required when using AGRIF as, up to now, the parent grid and each of the child grids must use all the $N_{mpi}$ processes.

If the domain decomposition is automatically defined (when \np{jpni} and \np{jpnj} are $< 1$), the decomposition chosen by the model will minimise the sub-domain size (defined as $\max_{\text{all domains}}(jpi \times jpj)$) and maximise the number of eliminated land subdomains.
This means that no other domain decomposition (a set of \np{jpni} and \np{jpnj} values) will use fewer processes than $(jpni \times jpnj - N_{land})$ and get a smaller subdomain size.
In order to specify $N_{mpi}$ properly (\ie\ to minimise $N_{useless}$), you must run the model once with \np{ln\_list} activated.
In this case, the model will start the initialisation phase, print the list of optimum decompositions ($N_{mpi}$, \np{jpni} and \np{jpnj}) in \texttt{ocean.output} and directly abort.
The maximum value of $N_{mpi}$ tested in this list is given by $\max(N_{MPI\_tasks}, \np{jpni} \times \np{jpnj})$.
For example, running the model on 40 nodes with \np{ln\_list} activated and $\np{jpni} = 10000$ and $\np{jpnj} = 1$ will print the list of optimum domain decompositions from 1 to about 10000.

Processors are numbered from 0 to $N_{mpi} - 1$.
Subdomains containing some ocean points are numbered first, from 0 to $jpni \times jpnj - N_{land} - 1$.
The remaining $N_{useless}$ land subdomains are numbered next, which means that, for a given (\np{jpni}, \np{jpnj}), the numbers attributed to the ocean subdomains do not vary with $N_{useless}$.
When land processors are eliminated, the value corresponding to these locations in the model output files is undefined.
\np{ln\_mskland} must be activated in order to avoid Not a Number values in output files.
Note that it is better not to eliminate land processors when creating a meshmask file (\ie\ when setting a non-zero value to \np{nn\_msh}), since the meshmask needs to be defined over the whole domain.

%>>>>>>>>>>>>>>>>>>>>>>>>>>>>
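As a worked example of the relation between $N_{mpi}$, $N_{land}$ and $N_{useless}$ (the numbers are purely illustrative): suppose a decomposition with $\np{jpni} = 10$ and $\np{jpnj} = 8$ contains $N_{land} = 25$ land-only subdomains, so that $10 \times 8 - 25 = 55$ MPI processes are sufficient.
Launching the model with $N_{mpi} = 55$ processes gives $N_{useless} = 0$, whereas launching it with $N_{mpi} = 58$ processes keeps $N_{useless} = 58 - 55 = 3$ land subdomains in the computational domain and triggers a warning in \texttt{ocean.output}; in the latter case the ocean subdomains are numbered 0 to 54 and the three retained land subdomains 55 to 57.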