Custom Query (124 matches)
Results (61 - 63 of 124)
Ticket | Resolution | Summary | Owner | Reporter |
---|---|---|---|---|
#26 | fixed | name of coordinate one-dimensional variable | ymipsl | aclsce |
Description:
As stated in the CF conventions (http://cf-pcmdi.llnl.gov/documents/cf-conventions/1.6/cf-conventions.html): "We use this term precisely as it is defined in section 2.3.1 of the NUG. It is a one-dimensional variable with the same name as its dimension [e.g., time(time)], and it is defined as a numeric data type with values that are ordered monotonically. Missing values are not allowed in coordinate variables." This means we have to rename lon(x) into lon(lon) or x(x) (and likewise for lat and time_counter). Comment: lon(lon) seems better than x(x). Thanks!
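For illustration, here is a minimal sketch of what the requested renaming amounts to, written against the plain netCDF C API rather than XIOS code (the file name and dimension length are made up): the dimension and its one-dimensional coordinate variable share the same name, giving lon(lon) instead of lon(x).

```cpp
// Minimal sketch (not XIOS code): define "lon" as a proper coordinate
// variable in the NUG/CF sense, i.e. a 1-D variable whose name matches
// its dimension, lon(lon), instead of lon(x).
// Error handling omitted for brevity.
#include <netcdf.h>

int main() {
    int ncid, dim_lon, var_lon;

    nc_create("example.nc", NC_CLOBBER, &ncid);   // illustrative file name

    // Dimension and variable share the same name -> coordinate variable lon(lon)
    nc_def_dim(ncid, "lon", 360, &dim_lon);       // illustrative length
    nc_def_var(ncid, "lon", NC_DOUBLE, 1, &dim_lon, &var_lon);

    // With the old naming, the dimension would be "x" and the variable lon(x),
    // which is not a coordinate variable in the NUG/CF sense.

    nc_enddef(ncid);
    nc_close(ncid);
    return 0;
}
```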
#73 | wontfix | `multiple_file` mode cannot always work in `read` mode | rlacroix | |
Description:
XIOS is supposed to be able to open files created in `multiple_file` mode as long as the number of servers is unchanged. However, XIOS tries to auto-complete the XML configuration by reading attributes from the files. Unfortunately, this operation is done on the clients, so it fails because the number of clients does not match the expected number of servers.
#90 | fixed | MPI dead lock in XIOS | ymipsl | mcastril |
Description:
We are experiencing a recurring issue with XIOS 1.0. It appeared when using NEMO 3.6 stable with more than 2600 cores, and it seemed to be solved by using the Intel 16 compiler and IMPI 5. However, after updating to the current NEMO 3.6 stable, the problem appears when using 1920 or more cores. I don't really get how the NEMO revision change could affect this, but there it is.

The problem is in this line of client.cpp:

MPI_Send(buff, buffer.count(), MPI_CHAR, serverLeader, 1, CXios::globalComm) ;

Meanwhile, server.cpp calls MPI_Iprobe continuously in order to receive all the MPI_Send messages. What we have observed is that, with a high number of cores, around 80-100 of them get stuck in the MPI_Send, causing the run to hang and never complete. The fact that, for a given number of cores, the issue appears about 80% of the time but not always made us think it could be related to the IMPI implementation.
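To make the reported pattern concrete, here is a minimal standalone sketch, not the actual XIOS code: every client rank posts a blocking MPI_Send to a server-leader rank, while the server polls with MPI_Iprobe and then receives. MPI_Send is only required to return once the message has been buffered internally or matched by a receive, so with very many simultaneous senders it can block if the implementation's eager buffers run out, which is consistent with the hang described above. The rank numbers, tag, and buffer size are illustrative.

```cpp
// Minimal sketch of the pattern described in the ticket (not XIOS code):
// many clients do a blocking MPI_Send to a server leader, while the server
// polls with MPI_Iprobe and posts a matching MPI_Recv.
#include <mpi.h>
#include <vector>

int main(int argc, char** argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int serverLeader = 0;  // illustrative: rank 0 plays the server role
    const int tag = 1;

    if (rank != serverLeader) {
        std::vector<char> buff(1 << 16, 'x');  // illustrative payload
        // Blocking send, analogous to the client.cpp line quoted above.
        MPI_Send(buff.data(), static_cast<int>(buff.size()), MPI_CHAR,
                 serverLeader, tag, MPI_COMM_WORLD);
    } else {
        // Server side: probe for any incoming message, then receive it.
        for (int received = 0; received < size - 1; ) {
            int flag = 0;
            MPI_Status status;
            MPI_Iprobe(MPI_ANY_SOURCE, tag, MPI_COMM_WORLD, &flag, &status);
            if (flag) {
                int count = 0;
                MPI_Get_count(&status, MPI_CHAR, &count);
                std::vector<char> buff(count);
                MPI_Recv(buff.data(), count, MPI_CHAR, status.MPI_SOURCE, tag,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
                ++received;
            }
        }
    }

    MPI_Finalize();
    return 0;
}
```

A common general remedy for this kind of hang (not necessarily the fix adopted in XIOS) is to use non-blocking MPI_Isend on the sending side, or to pre-post receives on the server, so that completion does not depend on the MPI implementation's internal buffering.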