Changeset 1560 for XIOS/dev/branch_openmp/Note/rapport ESIWACE.tex
Timestamp: 07/13/18 14:18:28
XIOS/dev/branch_openmp/Note/rapport ESIWACE.tex
r1552 r1560 7 7 \usepackage{url} 8 8 \usepackage{verbatim} 9 \usepackage{cprotect} 9 10 10 11 % Title Page … … 47 48 project develops a new dynamical core for LMD-Z, the atmospheric general circulation model (GCM) part of IPSL-CM Earth System Model. 48 49 \url{http://www.lmd.polytechnique.fr/~dubos/DYNAMICO/}} all use XIOS as the output back end. M\'et\'eoFrance and MetOffice also choose XIOS 49 to man ege the I/O for their models.50 to manage the I/O for their models. 50 51 51 52 … … 54 55 Although XIOS copes well with many models, there is one potential optimization in XIOS which needs to be investigated: making XIOS thread-friendly. 55 56 56 This topic comes along with the configuration of the climate models. Take LMDZ as example, it is designed with the 2-level parallelization scheme. To be more specific, LMDZ uses the domain decomposition method in which each sub-domain is associated with one MPI process. Inside of the sub-domain, the model also uses OpenMP derivatives to accelerate the computation. We can imagine that the sub-domain be divided into sub-sub-domain and is managed by threads. 57 This topic comes along with the configuration of the climate models. Take LMDZ as example, it is designed with the 2-level parallelization 58 scheme. To be more specific, LMDZ uses the domain decomposition method in which each sub-domain is associated with one MPI process. Inside 59 of the sub-domain, the model also uses OpenMP derivatives to accelerate the computation. We can imagine that the sub-domain be divided into 60 sub-sub-domain and is managed by threads. 57 61 58 62 \begin{figure}[ht] … … 62 66 \end{figure} 63 67 64 As we know, each sub-domain, or in another word, each MPI process is a XIOS client. The data exchange between client and XIOS servers is handled by MPI communications. In order to write an output field, all threads must gather the data to the master thread who acts as MPI process in order to call MPI routines. There are two disadvantages about this method : first, we have to spend time on gathering information to the master thread which not only increases the memory use, but also implies an OpenMP barrier; second, while the master thread calls MPI routine, other threads are in the idle state thus a waster of computing resources. What we want obtain with the thread-friendly XIOS is that all threads can act like MPI processes. They can call directly the MPI routine thus no waste in memory nor in computing resources as shown in Figure \ref{fig:omp}. 68 As we know, each sub-domain, or in another word, each MPI process is a XIOS client. The data exchange between client and XIOS servers is 69 handled by MPI communications. In order to write an output field, all threads must gather the data to the master thread who acts as MPI 70 process in order to call MPI routines. There are two disadvantages about this method : first, we have to spend time on gathering information 71 to the master thread which not only increases the memory use, but also implies an OpenMP barrier; second, while the master thread calls MPI 72 routine, other threads are in the idle state thus a waster of computing resources. What we want obtain with the thread-friendly XIOS is that 73 all threads can act like MPI processes. They can call directly the MPI routine thus no waste in memory nor in computing resources as shown 74 in Figure \ref{fig:omp}. 65 75 66 76 \begin{figure}[ht] … … 71 81 \end{figure} 72 82 73 There are two ways to make XIOS thread-friendly. 
There are two ways to make XIOS thread-friendly. The first is to change the structure of the XIOS library itself, which demands a lot of
modifications. Since XIOS is about 100\,000 lines of code, this method would be very time consuming. Moreover, the modifications would be
local to XIOS: to make another code thread-friendly, we would have to redo them. The second choice is to add an extra interface to MPI in
order to manage the threads. When a thread wants to call an MPI routine inside XIOS, it first passes through this interface, where the
communication information is analyzed before the actual MPI routine is invoked. With this method we only need to modify a very small part
of XIOS to make it work. More interestingly, the interface we created can be adapted to suit other MPI-based libraries.

…

data, execution of the MPI function by all master/root threads, and distribution or arrangement of the resulting data among the threads.

For example, if we want to perform a broadcast operation, only two steps are needed (\textit{c.f.} Figure \ref{fig:bcast}). Firstly, the
root …

\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{bcast.png}
\cprotect\caption{\verb|MPI_Bcast|}
\label{fig:bcast}
\end{figure}
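As a rough sketch of these two steps, an endpoint-style broadcast could be written as follows, assuming for simplicity that the root
endpoint is the master thread of its process. The routine \verb|ep_bcast| and the staging pointer are hypothetical; the actual
implementation in the endpoint interface differs and is described in \cite{ep:2018}.

\begin{verbatim}
#include <mpi.h>
#include <omp.h>

// Deliberately shared by the threads of one process: the staging
// area used for the intra-process share of step 2. (Hypothetical.)
static double* shared_buf = 0;

// Called by every thread of every process, each with its own buffer.
void ep_bcast(double* data, int count, int root_mpi_rank, MPI_Comm comm)
{
  #pragma omp master
  {
    // Step 1: a classical inter-process broadcast, performed by the
    // master thread of each MPI process only.
    MPI_Bcast(data, count, MPI_DOUBLE, root_mpi_rank, comm);
    shared_buf = data;          // expose the result to sibling threads
  }
  #pragma omp barrier           // wait for the inter-process broadcast

  // Step 2: an intra-process share; the remaining threads copy the
  // result from the master thread's buffer without any MPI call.
  if (omp_get_thread_num() != 0)
    for (int i = 0; i < count; ++i)
      data[i] = shared_buf[i];
  #pragma omp barrier           // every thread leaves with its copy
}
\end{verbatim}

Note that several master threads, one per process, may be inside \verb|MPI_Bcast| at the same time as other endpoint calls, which is why
the interface requires full multi-thread support from MPI, as discussed in the next section.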
…

\begin{figure}[ht]
\centering
\includegraphics[scale=0.3]{allreduce.png}
\cprotect\caption{\verb|MPI_Allreduce|}
\label{fig:allreduce}
\end{figure}

Other MPI routines, such as \verb|MPI_Wait|, \verb|MPI_Intercomm_create|, \textit{etc.}, can be found in the technical report on the
endpoint interface \cite{ep:2018}.

\section{The multi-threaded XIOS and performance results}

The development of the endpoint interface for the thread-friendly XIOS library took about one year and a half. The main difficulty is the
co-existence of MPI processes and OpenMP threads. One essential requirement for using the endpoint interface is that the underlying MPI
implementation provide level-3 thread support, \verb|MPI_THREAD_MULTIPLE|: if an MPI process is multi-threaded, multiple threads may call
MPI at once with no restrictions. Another important aspect is that XIOS contains variables with the \verb|static| attribute, which are by
default shared by all the threads of an MPI process. In order to use the endpoint interface correctly, these static variables have to be
declared \verb|threadprivate| so that each thread keeps its own copy.
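Both constraints can be illustrated with a short sketch. This is a minimal example rather than actual XIOS code; the variable
\verb|current_context| is hypothetical.

\begin{verbatim}
#include <mpi.h>
#include <cstdio>

// A static variable is shared by default among the threads of a
// process; declaring it threadprivate gives each thread its own
// copy, as the endpoint interface requires. (Hypothetical variable.)
static int current_context = -1;
#pragma omp threadprivate(current_context)

int main(int argc, char** argv)
{
  // Request level-3 thread support and verify that we obtained it:
  // the endpoint interface cannot work with a lower level.
  int provided = 0;
  MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);
  if (provided < MPI_THREAD_MULTIPLE) {
    std::fprintf(stderr, "MPI_THREAD_MULTIPLE not supported\n");
    MPI_Abort(MPI_COMM_WORLD, 1);
  }

  // ... model and XIOS calls, every thread may now enter MPI ...

  MPI_Finalize();
  return 0;
}
\end{verbatim}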
To develop the endpoint interface, we redefined all the MPI classes along with all the MPI routines used in the XIOS library. The current
version of the interface comprises about 7000 lines of code and is available on the forge server:
\url{http://forge.ipsl.jussieu.fr/ioserver/browser/XIOS/dev/branch_openmp}. A technical report is also available, in which one can find
more detail about how the endpoints work and how the routines are implemented \cite{ep:2018}. We must note that the thread-friendly XIOS
library is still in its optimization phase; a stable version will be released in the future.

All the functionalities of XIOS are preserved in its thread-friendly version. Single-threaded code works unchanged under the endpoint
interface with the new version of XIOS. For multi-threaded models, some modifications are needed in order to work with the multi-threaded
XIOS library. For example, the MPI initialization has to be modified to request \verb|MPI_THREAD_MULTIPLE| support, as sketched above. Each
thread should have its own data set. Most importantly, the OpenMP master region in which the master thread used to call the XIOS routines
must be removed, so that every thread can call the XIOS routines simultaneously. More detail can be found in our technical report
\cite{ep:2018}.

Even though the multi-threaded XIOS library is not yet complete and further optimization is ongoing, we have already run some tests to
assess the potential of the endpoint framework. We take LMDZ as the target model and have tested it with several work-flows.

\subsection{LMDZ work-flow}

In the LMDZ work-flow we produce one daily output file, with up to 413 two-dimensional variables and 187 three-dimensional variables.
According to the user's needs, the ``output\_level'' key argument in the \verb|xml| file selects the variables to be written. In our tests
we set ``output\_level=2'' for a light output and ``output\_level=11'' for a full output. We run the LMDZ code for one-, two-, and
three-month simulations using 12 MPI client processes and 1 server process. Each client process includes 8 OpenMP threads, which gives us
96 XIOS clients in total.

\subsection{CMIP6 work-flow}

\begin{comment}