Timestamp:
07/13/18 14:18:28
Author:
yushan
Message:

report update

File:
1 edited

  • XIOS/dev/branch_openmp/Note/rapport ESIWACE.tex

    r1552 r1560  
\usepackage{url}
\usepackage{verbatim}
\usepackage{cprotect}

% Title Page
     
project develops a new dynamical core for LMD-Z, the atmospheric general circulation model (GCM) part of the IPSL-CM Earth System Model.
\url{http://www.lmd.polytechnique.fr/~dubos/DYNAMICO/}} all use XIOS as the output back end. M\'et\'eoFrance and MetOffice have also chosen XIOS
to manage the I/O for their models.
     
Although XIOS copes well with many models, there is one potential optimization that needs to be investigated: making XIOS thread-friendly.

This topic arises from the configuration of the climate models. Take LMDZ as an example: it is designed with a two-level parallelization
scheme. To be more specific, LMDZ uses the domain decomposition method, in which each sub-domain is associated with one MPI process. Inside
each sub-domain, the model also uses OpenMP directives to accelerate the computation. One can think of each sub-domain as being further
divided into sub-sub-domains, each of which is managed by a thread.
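
To picture this two-level scheme, a minimal hybrid MPI + OpenMP sketch is given below (invented names, not the actual LMDZ sources): each MPI process owns one sub-domain, and each OpenMP thread inside it works on a slice of that sub-domain.

\begin{verbatim}
// Simplified hybrid MPI + OpenMP layout: one sub-domain per MPI process,
// one sub-sub-domain per OpenMP thread inside that process.
#include <vector>
#include <mpi.h>
#include <omp.h>

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);
  int rank, nprocs;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

  const int sub_domain_size = 1000;        // cells owned by this process

  #pragma omp parallel
  {
    int tid      = omp_get_thread_num();
    int nthreads = omp_get_num_threads();

    // each thread works on its own slice (sub-sub-domain) of the sub-domain
    int chunk = sub_domain_size / nthreads;
    std::vector<double> sub_sub_domain(chunk, 0.0);
    for (double& cell : sub_sub_domain)
      cell = rank * 1000.0 + tid;           // placeholder computation
  }

  MPI_Finalize();
  return 0;
}
\end{verbatim}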

\begin{figure}[ht]

\end{figure}

Each sub-domain, in other words each MPI process, is an XIOS client. The data exchange between the clients and the XIOS servers is
handled by MPI communications. In order to write an output field, all threads must gather their data on the master thread, which acts as
the MPI process and calls the MPI routines. This method has two disadvantages: first, gathering the data on the master thread takes time,
increases the memory use, and implies an OpenMP barrier; second, while the master thread calls the MPI routine, the other threads sit idle,
which wastes computing resources. What we want to obtain with a thread-friendly XIOS is that all threads can act like MPI processes: they
can call the MPI routines directly, so that neither memory nor computing resources are wasted, as shown in Figure \ref{fig:omp}.
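
The funnelled output scheme criticised above can be sketched as follows (an illustration with invented names, not the actual LMDZ or XIOS code); the copy into a shared buffer, the OpenMP barrier and the master-only MPI call correspond to the costs just described.

\begin{verbatim}
#include <algorithm>
#include <vector>
#include <mpi.h>
#include <omp.h>

// Called by every OpenMP thread of a client process: the data are first
// gathered into a buffer shared by the threads, then only the master
// thread sends them to the server while the others wait.
void write_field_funnelled(const std::vector<double>& local,
                           int server_rank, MPI_Comm comm) {
  static std::vector<double> gathered;       // shared among the threads
  int tid      = omp_get_thread_num();
  int nthreads = omp_get_num_threads();

  #pragma omp single
  gathered.resize(local.size() * nthreads);  // extra memory for the shared copy

  std::copy(local.begin(), local.end(),
            gathered.begin() + tid * local.size());

  #pragma omp barrier                        // implied synchronisation
  #pragma omp master
  MPI_Send(gathered.data(), (int)gathered.size(), MPI_DOUBLE,
           server_rank, /*tag=*/0, comm);
  #pragma omp barrier                        // the other threads sit idle here
}
\end{verbatim}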

\begin{figure}[ht]

\end{figure}

There are two ways to make XIOS thread-friendly. The first is to change the structure of XIOS itself, which demands a lot of modification
inside the XIOS library. Since XIOS is about 100 000 lines of code, this method would be very time consuming. What is more, the modification
would be local to XIOS: if we wanted to make another code thread-friendly, we would have to redo the work. The second choice is to add an
extra interface to MPI in order to manage the threads. When a thread wants to call an MPI routine inside XIOS, it first passes through the
interface, where the communication information is analyzed before the actual MPI routine is invoked. With this method, we only need to
modify a very small part of XIOS to make it work. What is more interesting is that the interface we created can be adapted to suit
other MPI-based libraries.
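
A very simplified sketch of such an interface is given below (hypothetical names; the actual endpoint code in the branch is of course much more complete): the wrapper keeps an MPI-like calling convention, analyzes where the destination endpoint lives, and only falls back to the real MPI routine for inter-process traffic.

\begin{verbatim}
#include <mpi.h>

namespace ep {  // hypothetical namespace, not the actual XIOS sources

// A communicator as seen by the interface: it knows the underlying MPI
// communicator and how the thread-level (endpoint) ranks map to processes.
struct Comm {
  MPI_Comm mpi_comm;        // inter-process communicator
  int threads_per_process;  // endpoints hosted by each MPI process
  int ep_rank;              // endpoint (thread-level) rank of the caller
};

// Same calling convention as MPI_Send: the communication information is
// analyzed first, and the real MPI routine is invoked only when the
// destination endpoint lives in another process.
int Send(const void* buf, int count, MPI_Datatype type,
         int dest, int tag, const Comm& comm) {
  int my_proc  = comm.ep_rank / comm.threads_per_process;
  int dst_proc = dest / comm.threads_per_process;
  if (dst_proc == my_proc) {
    // intra-process case: hand the buffer to the destination thread
    // through shared memory (omitted in this sketch)
    return MPI_SUCCESS;
  }
  // inter-process case: fall back to the real MPI routine
  return MPI_Send(buf, count, type, dst_proc, tag, comm.mpi_comm);
}

}  // namespace ep
\end{verbatim}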
     
data, execution of the MPI function by all master/root threads, and distribution or arrangement of the resulting data among the threads.

For example, if we want to perform a broadcast operation, only 2 steps are needed (\textit{c.f.} Figure \ref{fig:bcast}). Firstly, the root
     
\centering
\includegraphics[scale=0.3]{bcast.png}
\cprotect\caption{\verb|MPI_Bcast|}
\label{fig:bcast}
\end{figure}
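
The idea can be sketched as follows (an invented helper, assuming for simplicity that the root endpoint is the master thread of the root process): the master threads perform the real \verb|MPI_Bcast| between processes, and the other threads of each process then pick up the result through a buffer they share.

\begin{verbatim}
#include <mpi.h>
#include <omp.h>

// Broadcast one value from the master thread of the root process to every
// thread of every process; must be called inside an OpenMP parallel region.
double ep_bcast(double value, int root_process, MPI_Comm comm) {
  static double shared;   // one buffer per MPI process, seen by all its threads
  #pragma omp master
  {
    shared = value;                                        // step 1: the master
    MPI_Bcast(&shared, 1, MPI_DOUBLE, root_process, comm); // threads use MPI
  }
  #pragma omp barrier     // step 2: the other threads pick up the result
  return shared;
}
\end{verbatim}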
     
\centering
\includegraphics[scale=0.3]{allreduce.png}
\cprotect\caption{\verb|MPI_Allreduce|}
\label{fig:allreduce}
\end{figure}

Other MPI routines, such as \verb|MPI_Wait|, \verb|MPI_Intercomm_create|, \textit{etc.}, can be found in the technical report of the
endpoint interface \cite{ep:2018}.

\section{The multi-threaded XIOS and performance results}

The development of the endpoint interface for the thread-friendly XIOS library took about one year and a half. The main difficulty is the
co-existence of MPI processes and OpenMP threads. One essential requirement for using the endpoint interface is that the underlying MPI
implementation must support the highest level of thread support, \verb|MPI_THREAD_MULTIPLE|: if the MPI process is multi-threaded,
multiple threads may call MPI at once with no restrictions. Another important aspect to mention is that XIOS contains variables with the
\verb|static| attribute, which means that inside an MPI process the threads share these variables. In order to use the endpoint interface
correctly, these static variables have to be declared \verb|threadprivate| so that their visibility is limited to a single thread.
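
Both requirements can be illustrated by a short, self-contained example (not taken from XIOS; the variable name is invented): the thread support level is requested at initialization, and a formerly shared static variable is declared \verb|threadprivate|.

\begin{verbatim}
// Request MPI_THREAD_MULTIPLE and keep one copy of a static variable
// per OpenMP thread.
#include <cstdio>
#include <mpi.h>
#include <omp.h>

static int buffer_size = 0;              // formerly shared by all threads
#pragma omp threadprivate(buffer_size)   // now one instance per thread

int main(int argc, char** argv) {
  int provided = MPI_THREAD_SINGLE;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  if (provided < MPI_THREAD_MULTIPLE) {
    std::fprintf(stderr, "MPI_THREAD_MULTIPLE not supported\n");
    MPI_Abort(MPI_COMM_WORLD, 1);
  }

  #pragma omp parallel
  {
    buffer_size = 1024 * (omp_get_thread_num() + 1);  // private to each thread
  }

  MPI_Finalize();
  return 0;
}
\end{verbatim}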

To develop the endpoint interface, we redefined all the MPI classes along with all the MPI routines that are used in the XIOS library. The
current version of the interface consists of about 7000 lines of code and is now available on the forge server:
\url{http://forge.ipsl.jussieu.fr/ioserver/browser/XIOS/dev/branch_openmp}. A technical report is also available in which one can find
more detail about how the endpoint interface works and how the routines are implemented \cite{ep:2018}. We must note that the
thread-friendly XIOS library is still in the optimization phase; it will be released in the future as a stable version.

All the functionalities of XIOS are preserved in its thread-friendly version. Single-threaded code works successfully under the
endpoint interface with the new version of XIOS. For multi-threaded models, some modifications are needed in order to work with the
multi-threaded XIOS library. For example, the MPI initialization has to be modified to request \verb|MPI_THREAD_MULTIPLE|
support, and each thread should have its own data set. Most importantly, the OpenMP master region in which the master thread calls the
XIOS routines should be removed, so that every thread can call the XIOS routines simultaneously. More detail can be found in our technical
report \cite{ep:2018}.
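
This last modification can be sketched as follows (a simplified illustration with an invented stand-in for the XIOS call, not the actual model sources): the commented-out master region corresponds to the old scheme, and the direct call below it to the new one.

\begin{verbatim}
#include <cstdio>
#include <vector>
#include <omp.h>

// Stand-in for the library output routine; in the real model this would be
// the XIOS call used to send a field.
void send_field(const char* name, const std::vector<double>& data) {
  std::printf("thread %d sends %zu values of %s\n",
              omp_get_thread_num(), data.size(), name);
}

int main() {
  #pragma omp parallel
  {
    // each thread owns its data set (its sub-sub-domain)
    std::vector<double> sub_sub_domain(100, 1.0);

    // old scheme: only the master thread talked to the library
    //   #pragma omp master
    //   { send_field("temp", gathered_data); }

    // new scheme with the endpoint interface: the master region is removed
    // and every thread calls the library simultaneously
    send_field("temp", sub_sub_domain);
  }
  return 0;
}
\end{verbatim}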

Even though the multi-threaded XIOS library is not yet fully accomplished and further optimization is ongoing, we have already run some tests
to see the potential of the endpoint framework. We take LMDZ as the target model and have tested it with several work-flow loads.

\subsection{LMDZ work-flow}

In the LMDZ work-flow, we have a daily output file with up to 413 two-dimensional variables and 187 three-dimensional variables. According
to the user's needs, we can change the ``output\_level'' key argument in the \verb|xml| file to select the desired variables to be written.
In our tests, we choose to set ``output\_level=2'' for a light output, and ``output\_level=11'' for a full output. We run the LMDZ code for
one-, two-, and three-month simulations using 12 MPI client processes and 1 server process. Each client process includes 8 OpenMP threads,
which gives us $12 \times 8 = 96$ XIOS clients in total.

\subsection{CMIP6 work-flow}

\begin{comment}