Changeset 6140 for trunk/NEMOGCM/NEMO/OPA_SRC/LBC
- Timestamp: 2015-12-21T12:35:23+01:00
- Location: trunk/NEMOGCM/NEMO/OPA_SRC/LBC
- Files: 5 edited
trunk/NEMOGCM/NEMO/OPA_SRC/LBC/lbclnk.F90
r5429 → r6140:
- History header completed and tidied: "1997-06 (G. Madec) Original code", "2002-09 (G. Madec) F90: Free form and module"; the 3.5 entry "(S.Mocavero, I. Epicoco)" on BDY/OBC communication is condensed to one line ("optimization of BDY comm. via lbc_bdy_lnk and lbc_obc_lnk"); the 3.4 C1D entry "(R. Bourdalle-Badie, G. Reffray)" is kept.
- MPP branch (key_mpp_mpi): a new generic interface lbc_sum is added, with a developer note kept verbatim in the source:

      !JMM interface not defined if not key_mpp_mpi : likely do not compile without this CPP key !!!!
      INTERFACE lbc_sum
         MODULE PROCEDURE mpp_lnk_sum_3d, mpp_lnk_sum_2d
      END INTERFACE

- lbc_sum is added to the PUBLIC list alongside lbc_lnk, lbc_lnk_multi, lbc_lnk_e, lbc_bdy_lnk and lbc_lnk_icb, and the module contents comment now documents it ("generic interface for mpp_lnk_sum_3d and mpp_lnk_sum_2d routines defined in lib_mpp").
- Shared-memory branch (#else): the same lbc_sum interface block is added (note it still names the mpp_lnk_sum_* procedures), and the header comment documents lbc_lnk_sum_3d / lbc_lnk_sum_2d ("compute sum over the halos on a 3D/2D variable on ocean mesh"; the 2-D entry is mislabelled "lbc_lnk_sum_3d" in this changeset).
- Version banner in the shared-memory branch bumped from "NEMO/OPA 3.3 , NEMO Consortium (2010)" to "NEMO/OPA 3.7 , NEMO Consortium (2015)".
- lbc_bdy_lnk_3d / lbc_bdy_lnk_2d wrappers: ib_bdy is now declared INTEGER, INTENT(in), and the declaration comments are realigned. (The "wrapper rountine" typo and the 2-D wrapper's pt2d being described as a "3D array" survive from the old code.)
- lbc_lnk_2d_e: the jpri / jprj arguments are annotated "size of extra halo (not needed in non-mpp)" with INTENT(in) layout cleaned up.
- Miscellaneous blank-line and continuation-comment clean-up in lbc_lnk_3d and lbc_lnk_2d.
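Because lbc_sum is a generic interface, the compiler selects mpp_lnk_sum_2d or mpp_lnk_sum_3d from the rank of the actual argument. A minimal caller sketch (the work array zwork, the 'T' grid code and the sign value are illustrative assumptions, not part of the changeset):

```fortran
! Sketch only: zwork is a hypothetical per-subdomain array of partial sums;
! 'T' marks T-point values and 1._wp requests no sign change at the north fold.
USE lbclnk                                   ! provides the generic interface lbc_sum
REAL(wp), DIMENSION(jpi,jpj,jpk) :: zwork    ! halo points hold local contributions
! ... fill zwork, including its halos ...
CALL lbc_sum( zwork, 'T', 1._wp )            ! rank-3 argument -> mpp_lnk_sum_3d
```

A rank-2 actual argument would resolve to mpp_lnk_sum_2d through the same interface name.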
trunk/NEMOGCM/NEMO/OPA_SRC/LBC/lbcnfd.F90
r4686 → r6140:
- The interface blocks for lbc_nfd and mpp_lbc_nfd are regrouped, with the PUBLIC statements moved together after them:

      PUBLIC   lbc_nfd       ! north fold conditions
      PUBLIC   mpp_lbc_nfd   ! north fold conditions (parallel case)

- Module variables jpmaxngh, nsndto, nfsloop, nfeloop and isendto are re-annotated with "!:" documentation comments (isendto: "processes to which communicate").
- mpp_lbc_nfd_3d / mpp_lbc_nfd_2d: the declaration comments for pt3dl/pt3dr (and pt2dl/pt2dr) are untangled so each of the left/right arrays carries its own "array on which the boundary condition is applied" description.
- A closing "!!======" banner is added before END MODULE lbcnfd.
trunk/NEMOGCM/NEMO/OPA_SRC/LBC/lib_mpp.F90
r5836 r6140 23 23 !! the mppobc routine to optimize the BDY and OBC communications 24 24 !! 3.5 ! 2013 ( C. Ethe, G. Madec ) message passing arrays as local variables 25 !! 3.5 ! 2013 (S.Mocavero, I.Epicoco - CMCC) north fold optimizations25 !! 3.5 ! 2013 (S.Mocavero, I.Epicoco - CMCC) north fold optimizations 26 26 !!---------------------------------------------------------------------- 27 27 28 28 !!---------------------------------------------------------------------- 29 !! ctl_stop : update momentum and tracer Kz from a tke scheme30 !! ctl_warn : initialization, namelist read, and parameters control31 !! ctl_opn : Open file and check if required file is available.32 !! ctl_nam : Prints informations when an error occurs while reading a namelist33 !! get_unit : give the index of an unused logical unit29 !! ctl_stop : update momentum and tracer Kz from a tke scheme 30 !! ctl_warn : initialization, namelist read, and parameters control 31 !! ctl_opn : Open file and check if required file is available. 32 !! ctl_nam : Prints informations when an error occurs while reading a namelist 33 !! get_unit : give the index of an unused logical unit 34 34 !!---------------------------------------------------------------------- 35 35 #if defined key_mpp_mpi … … 43 43 !! mpp_lnk_e : interface (defined in lbclnk) for message passing of 2d array with extra halo (mpp_lnk_2d_e) 44 44 !! mpp_lnk_icb : interface for message passing of 2d arrays with extra halo for icebergs (mpp_lnk_2d_icb) 45 !! mpprecv 45 !! mpprecv : 46 46 !! mppsend : SUBROUTINE mpp_ini_znl 47 47 !! 
mppscatter : … … 72 72 PUBLIC mpp_lnk_3d, mpp_lnk_3d_gather, mpp_lnk_2d, mpp_lnk_2d_e 73 73 PUBLIC mpp_lnk_2d_9 74 PUBLIC mpp_lnk_sum_3d, mpp_lnk_sum_2d 74 75 PUBLIC mppscatter, mppgather 75 76 PUBLIC mpp_ini_ice, mpp_ini_znl … … 94 95 END INTERFACE 95 96 INTERFACE mpp_sum 96 MODULE PROCEDURE mppsum_a_int, mppsum_int, mppsum_a_real, mppsum_real, &97 MODULE PROCEDURE mppsum_a_int, mppsum_int, mppsum_a_real, mppsum_real, & 97 98 mppsum_realdd, mppsum_a_realdd 98 99 END INTERFACE … … 175 176 !! ** Purpose : Find processor unit 176 177 !!---------------------------------------------------------------------- 177 CHARACTER(len=*),DIMENSION(:), INTENT( out) :: ldtxt 178 CHARACTER(len=*) , INTENT(in ) :: ldname 179 INTEGER , INTENT(in ) :: kumnam_ref 180 INTEGER , INTENT(in ) :: kumnam_cfg 181 INTEGER , INTENT(inout) :: kumond 182 INTEGER , INTENT(inout) :: kstop 183 INTEGER , OPTIONAL , INTENT(in ) :: localComm178 CHARACTER(len=*),DIMENSION(:), INTENT( out) :: ldtxt ! 179 CHARACTER(len=*) , INTENT(in ) :: ldname ! 180 INTEGER , INTENT(in ) :: kumnam_ref ! logical unit for reference namelist 181 INTEGER , INTENT(in ) :: kumnam_cfg ! logical unit for configuration namelist 182 INTEGER , INTENT(inout) :: kumond ! logical unit for namelist output 183 INTEGER , INTENT(inout) :: kstop ! stop indicator 184 INTEGER , OPTIONAL , INTENT(in ) :: localComm ! 184 185 ! 185 186 INTEGER :: mynode, ierr, code, ji, ii, ios … … 190 191 ! 191 192 ii = 1 192 WRITE(ldtxt(ii),*) 193 WRITE(ldtxt(ii),*) 'mynode : mpi initialisation' 194 WRITE(ldtxt(ii),*) '~~~~~~ ' 193 WRITE(ldtxt(ii),*) ; ii = ii + 1 194 WRITE(ldtxt(ii),*) 'mynode : mpi initialisation' ; ii = ii + 1 195 WRITE(ldtxt(ii),*) '~~~~~~ ' ; ii = ii + 1 195 196 ! 196 197 … … 204 205 205 206 ! ! 
control print 206 WRITE(ldtxt(ii),*) ' Namelist nammpp' 207 WRITE(ldtxt(ii),*) ' mpi send type cn_mpi_send = ', cn_mpi_send; ii = ii + 1208 WRITE(ldtxt(ii),*) ' size in bytes of exported buffer nn_buffer = ', nn_buffer; ii = ii + 1207 WRITE(ldtxt(ii),*) ' Namelist nammpp' ; ii = ii + 1 208 WRITE(ldtxt(ii),*) ' mpi send type cn_mpi_send = ', cn_mpi_send ; ii = ii + 1 209 WRITE(ldtxt(ii),*) ' size exported buffer nn_buffer = ', nn_buffer,' bytes'; ii = ii + 1 209 210 210 211 #if defined key_agrif … … 223 224 224 225 IF( (jpni < 1) .OR. (jpnj < 1) )THEN 225 WRITE(ldtxt(ii),*) ' jpni, jpnj and jpnij will be calculated automatically' ;ii = ii + 1226 WRITE(ldtxt(ii),*) ' jpni, jpnj and jpnij will be calculated automatically' ; ii = ii + 1 226 227 ELSE 227 WRITE(ldtxt(ii),*) ' processor grid extent in i jpni = ',jpni ;ii = ii + 1228 WRITE(ldtxt(ii),*) ' processor grid extent in j jpnj = ',jpnj ;ii = ii + 1229 WRITE(ldtxt(ii),*) ' number of local domains jpnij = ',jpnij ; ii = ii +1228 WRITE(ldtxt(ii),*) ' processor grid extent in i jpni = ',jpni ; ii = ii + 1 229 WRITE(ldtxt(ii),*) ' processor grid extent in j jpnj = ',jpnj ; ii = ii + 1 230 WRITE(ldtxt(ii),*) ' number of local domains jpnij = ',jpnij ; ii = ii + 1 230 231 END IF 231 232 … … 246 247 SELECT CASE ( cn_mpi_send ) 247 248 CASE ( 'S' ) ! Standard mpi send (blocking) 248 WRITE(ldtxt(ii),*) ' Standard blocking mpi send (send)' 249 WRITE(ldtxt(ii),*) ' Standard blocking mpi send (send)' ; ii = ii + 1 249 250 CASE ( 'B' ) ! Buffer mpi send (blocking) 250 WRITE(ldtxt(ii),*) ' Buffer blocking mpi send (bsend)' 251 WRITE(ldtxt(ii),*) ' Buffer blocking mpi send (bsend)' ; ii = ii + 1 251 252 IF( Agrif_Root() ) CALL mpi_init_opa( ldtxt, ii, ierr ) 252 253 CASE ( 'I' ) ! Immediate mpi send (non-blocking send) 253 WRITE(ldtxt(ii),*) ' Immediate non-blocking send (isend)' 254 WRITE(ldtxt(ii),*) ' Immediate non-blocking send (isend)' ; ii = ii + 1 254 255 l_isend = .TRUE. 
255 256 CASE DEFAULT 256 WRITE(ldtxt(ii),cform_err) 257 WRITE(ldtxt(ii),*) ' bad value for cn_mpi_send = ', cn_mpi_send 257 WRITE(ldtxt(ii),cform_err) ; ii = ii + 1 258 WRITE(ldtxt(ii),*) ' bad value for cn_mpi_send = ', cn_mpi_send ; ii = ii + 1 258 259 kstop = kstop + 1 259 260 END SELECT 260 261 ELSE IF ( PRESENT(localComm) .and. .not. mpi_was_called ) THEN 261 WRITE(ldtxt(ii),*) ' lib_mpp: You cannot provide a local communicator ' 262 WRITE(ldtxt(ii),*) ' without calling MPI_Init before ! ' 262 WRITE(ldtxt(ii),*) ' lib_mpp: You cannot provide a local communicator ' ; ii = ii + 1 263 WRITE(ldtxt(ii),*) ' without calling MPI_Init before ! ' ; ii = ii + 1 263 264 kstop = kstop + 1 264 265 ELSE 265 266 SELECT CASE ( cn_mpi_send ) 266 267 CASE ( 'S' ) ! Standard mpi send (blocking) 267 WRITE(ldtxt(ii),*) ' Standard blocking mpi send (send)' 268 WRITE(ldtxt(ii),*) ' Standard blocking mpi send (send)' ; ii = ii + 1 268 269 CALL mpi_init( ierr ) 269 270 CASE ( 'B' ) ! Buffer mpi send (blocking) 270 WRITE(ldtxt(ii),*) ' Buffer blocking mpi send (bsend)' 271 WRITE(ldtxt(ii),*) ' Buffer blocking mpi send (bsend)' ; ii = ii + 1 271 272 IF( Agrif_Root() ) CALL mpi_init_opa( ldtxt, ii, ierr ) 272 273 CASE ( 'I' ) ! Immediate mpi send (non-blocking send) 273 WRITE(ldtxt(ii),*) ' Immediate non-blocking send (isend)' 274 WRITE(ldtxt(ii),*) ' Immediate non-blocking send (isend)' ; ii = ii + 1 274 275 l_isend = .TRUE. 
275 276 CALL mpi_init( ierr ) 276 277 CASE DEFAULT 277 WRITE(ldtxt(ii),cform_err) 278 WRITE(ldtxt(ii),*) ' bad value for cn_mpi_send = ', cn_mpi_send 278 WRITE(ldtxt(ii),cform_err) ; ii = ii + 1 279 WRITE(ldtxt(ii),*) ' bad value for cn_mpi_send = ', cn_mpi_send ; ii = ii + 1 279 280 kstop = kstop + 1 280 281 END SELECT … … 319 320 END FUNCTION mynode 320 321 322 321 323 SUBROUTINE mpp_lnk_3d( ptab, cd_type, psgn, cd_mpp, pval ) 322 324 !!---------------------------------------------------------------------- … … 347 349 CHARACTER(len=3), OPTIONAL , INTENT(in ) :: cd_mpp ! fill the overlap area only 348 350 REAL(wp) , OPTIONAL , INTENT(in ) :: pval ! background value (used at closed boundaries) 349 ! !351 ! 350 352 INTEGER :: ji, jj, jk, jl ! dummy loop indices 351 353 INTEGER :: imigr, iihom, ijhom ! temporary integers 352 354 INTEGER :: ml_req1, ml_req2, ml_err ! for key_mpi_isend 353 355 REAL(wp) :: zland 354 INTEGER, DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 355 ! 356 INTEGER , DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 356 357 REAL(wp), DIMENSION(:,:,:,:), ALLOCATABLE :: zt3ns, zt3sn ! 3d for north-south & south-north 357 358 REAL(wp), DIMENSION(:,:,:,:), ALLOCATABLE :: zt3ew, zt3we ! 3d for east-west & west-east 358 359 359 !!---------------------------------------------------------------------- 360 360 … … 364 364 ! 365 365 IF( PRESENT( pval ) ) THEN ; zland = pval ! set land value 366 ELSE ; zland = 0. e0! zero by default366 ELSE ; zland = 0._wp ! zero by default 367 367 ENDIF 368 368 … … 455 455 END SELECT 456 456 457 458 457 ! 3. North and south directions 459 458 ! ----------------------------- … … 508 507 END SELECT 509 508 510 511 509 ! 4. north fold treatment 512 510 ! ----------------------- … … 524 522 ! 525 523 END SUBROUTINE mpp_lnk_3d 524 526 525 527 526 SUBROUTINE mpp_lnk_2d_multiple( pt2d_array , type_array , psgn_array , num_fields , cd_mpp, pval ) … … 542 541 !! 
noso : number for local neighboring processors 543 542 !! nono : number for local neighboring processors 544 !! 545 !!---------------------------------------------------------------------- 546 547 INTEGER :: num_fields 548 TYPE( arrayptr ), DIMENSION(:) :: pt2d_array 543 !!---------------------------------------------------------------------- 549 544 CHARACTER(len=1), DIMENSION(:), INTENT(in ) :: type_array ! define the nature of ptab array grid-points 550 545 ! ! = T , U , V , F , W and I points … … 558 553 INTEGER :: imigr, iihom, ijhom ! temporary integers 559 554 INTEGER :: ml_req1, ml_req2, ml_err ! for key_mpi_isend 560 555 INTEGER :: num_fields 556 TYPE( arrayptr ), DIMENSION(:) :: pt2d_array 561 557 REAL(wp) :: zland 562 INTEGER, DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 563 ! 558 INTEGER , DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 564 559 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: zt2ns, zt2sn ! 2d for north-south & south-north 565 560 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: zt2ew, zt2we ! 2d for east-west & west-east 566 561 567 562 !!---------------------------------------------------------------------- 568 563 ! 569 564 ALLOCATE( zt2ns(jpi,jprecj,2*num_fields), zt2sn(jpi,jprecj,2*num_fields), & 570 565 & zt2ew(jpj,jpreci,2*num_fields), zt2we(jpj,jpreci,2*num_fields) ) 571 572 566 ! 573 567 IF( PRESENT( pval ) ) THEN ; zland = pval ! set land value 574 ELSE ; zland = 0. e0! zero by default568 ELSE ; zland = 0._wp ! zero by default 575 569 ENDIF 576 570 … … 744 738 ! 745 739 END DO 746 740 ! 747 741 DEALLOCATE( zt2ns, zt2sn, zt2ew, zt2we ) 748 742 ! … … 750 744 751 745 752 SUBROUTINE load_array( pt2d,cd_type,psgn,pt2d_array, type_array, psgn_array,num_fields)746 SUBROUTINE load_array( pt2d, cd_type, psgn, pt2d_array, type_array, psgn_array, num_fields ) 753 747 !!--------------------------------------------------------------------- 754 REAL(wp), DIMENSION(jpi,jpj), TARGET ,INTENT(inout) :: pt2d ! 
Second 2D array on which the boundary condition is applied755 CHARACTER(len=1) , INTENT(in ) :: cd_type ! define the nature of ptab array grid-points756 REAL(wp) , INTENT(in ) :: psgn ! =-1 the sign change across the north fold boundary748 REAL(wp), DIMENSION(jpi,jpj), TARGET, INTENT(inout) :: pt2d ! Second 2D array on which the boundary condition is applied 749 CHARACTER(len=1) , INTENT(in ) :: cd_type ! define the nature of ptab array grid-points 750 REAL(wp) , INTENT(in ) :: psgn ! =-1 the sign change across the north fold boundary 757 751 TYPE(arrayptr) , DIMENSION(9) :: pt2d_array 758 752 CHARACTER(len=1) , DIMENSION(9) :: type_array ! define the nature of ptab array grid-points 759 753 REAL(wp) , DIMENSION(9) :: psgn_array ! =-1 the sign change across the north fold boundary 760 INTEGER , INTENT (inout):: num_fields754 INTEGER , INTENT (inout) :: num_fields 761 755 !!--------------------------------------------------------------------- 762 num_fields =num_fields+1763 pt2d_array(num_fields)%pt2d =>pt2d764 type_array(num_fields) =cd_type765 psgn_array(num_fields) =psgn756 num_fields = num_fields + 1 757 pt2d_array(num_fields)%pt2d => pt2d 758 type_array(num_fields) = cd_type 759 psgn_array(num_fields) = psgn 766 760 END SUBROUTINE load_array 767 761 … … 792 786 INTEGER :: num_fields 793 787 !!--------------------------------------------------------------------- 794 788 ! 795 789 num_fields = 0 796 797 !! Load the first array 798 CALL load_array(pt2dA,cd_typeA,psgnA,pt2d_array, type_array, psgn_array,num_fields) 799 800 !! 
Look if more arrays are added 801 IF(PRESENT (psgnB) )CALL load_array(pt2dB,cd_typeB,psgnB,pt2d_array, type_array, psgn_array,num_fields) 802 IF(PRESENT (psgnC) )CALL load_array(pt2dC,cd_typeC,psgnC,pt2d_array, type_array, psgn_array,num_fields) 803 IF(PRESENT (psgnD) )CALL load_array(pt2dD,cd_typeD,psgnD,pt2d_array, type_array, psgn_array,num_fields) 804 IF(PRESENT (psgnE) )CALL load_array(pt2dE,cd_typeE,psgnE,pt2d_array, type_array, psgn_array,num_fields) 805 IF(PRESENT (psgnF) )CALL load_array(pt2dF,cd_typeF,psgnF,pt2d_array, type_array, psgn_array,num_fields) 806 IF(PRESENT (psgnG) )CALL load_array(pt2dG,cd_typeG,psgnG,pt2d_array, type_array, psgn_array,num_fields) 807 IF(PRESENT (psgnH) )CALL load_array(pt2dH,cd_typeH,psgnH,pt2d_array, type_array, psgn_array,num_fields) 808 IF(PRESENT (psgnI) )CALL load_array(pt2dI,cd_typeI,psgnI,pt2d_array, type_array, psgn_array,num_fields) 809 810 CALL mpp_lnk_2d_multiple(pt2d_array,type_array,psgn_array,num_fields,cd_mpp,pval) 790 ! 791 ! Load the first array 792 CALL load_array( pt2dA, cd_typeA, psgnA, pt2d_array, type_array, psgn_array, num_fields ) 793 ! 794 ! 
Look if more arrays are added 795 IF( PRESENT(psgnB) ) CALL load_array(pt2dB,cd_typeB,psgnB,pt2d_array, type_array, psgn_array,num_fields) 796 IF( PRESENT(psgnC) ) CALL load_array(pt2dC,cd_typeC,psgnC,pt2d_array, type_array, psgn_array,num_fields) 797 IF( PRESENT(psgnD) ) CALL load_array(pt2dD,cd_typeD,psgnD,pt2d_array, type_array, psgn_array,num_fields) 798 IF( PRESENT(psgnE) ) CALL load_array(pt2dE,cd_typeE,psgnE,pt2d_array, type_array, psgn_array,num_fields) 799 IF( PRESENT(psgnF) ) CALL load_array(pt2dF,cd_typeF,psgnF,pt2d_array, type_array, psgn_array,num_fields) 800 IF( PRESENT(psgnG) ) CALL load_array(pt2dG,cd_typeG,psgnG,pt2d_array, type_array, psgn_array,num_fields) 801 IF( PRESENT(psgnH) ) CALL load_array(pt2dH,cd_typeH,psgnH,pt2d_array, type_array, psgn_array,num_fields) 802 IF( PRESENT(psgnI) ) CALL load_array(pt2dI,cd_typeI,psgnI,pt2d_array, type_array, psgn_array,num_fields) 803 ! 804 CALL mpp_lnk_2d_multiple( pt2d_array, type_array, psgn_array, num_fields, cd_mpp,pval ) 805 ! 811 806 END SUBROUTINE mpp_lnk_2d_9 812 807 … … 843 838 INTEGER :: ml_req1, ml_req2, ml_err ! for key_mpi_isend 844 839 REAL(wp) :: zland 845 INTEGER, DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 846 ! 840 INTEGER, DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 847 841 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: zt2ns, zt2sn ! 2d for north-south & south-north 848 842 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: zt2ew, zt2we ! 2d for east-west & west-east 849 850 !!---------------------------------------------------------------------- 851 843 !!---------------------------------------------------------------------- 844 ! 852 845 ALLOCATE( zt2ns(jpi,jprecj,2), zt2sn(jpi,jprecj,2), & 853 846 & zt2ew(jpj,jpreci,2), zt2we(jpj,jpreci,2) ) 854 855 847 ! 856 848 IF( PRESENT( pval ) ) THEN ; zland = pval ! set land value 857 ELSE ; zland = 0. e0! zero by default849 ELSE ; zland = 0._wp ! 
zero by default 858 850 ENDIF 859 851 … … 1046 1038 INTEGER :: imigr, iihom, ijhom ! temporary integers 1047 1039 INTEGER :: ml_req1, ml_req2, ml_err ! for key_mpi_isend 1048 INTEGER, DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 1049 ! 1040 INTEGER , DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 1050 1041 REAL(wp), DIMENSION(:,:,:,:,:), ALLOCATABLE :: zt4ns, zt4sn ! 2 x 3d for north-south & south-north 1051 1042 REAL(wp), DIMENSION(:,:,:,:,:), ALLOCATABLE :: zt4ew, zt4we ! 2 x 3d for east-west & west-east 1052 1053 ! !----------------------------------------------------------------------1043 !!---------------------------------------------------------------------- 1044 ! 1054 1045 ALLOCATE( zt4ns(jpi,jprecj,jpk,2,2), zt4sn(jpi,jprecj,jpk,2,2) , & 1055 1046 & zt4ew(jpj,jpreci,jpk,2,2), zt4we(jpj,jpreci,jpk,2,2) ) 1056 1057 1047 ! 1058 1048 ! 1. standard boundary treatment 1059 1049 ! ------------------------------ … … 1399 1389 END DO 1400 1390 END SELECT 1401 1391 ! 1402 1392 END SUBROUTINE mpp_lnk_2d_e 1403 1393 1394 SUBROUTINE mpp_lnk_sum_3d( ptab, cd_type, psgn, cd_mpp, pval ) 1395 !!---------------------------------------------------------------------- 1396 !! *** routine mpp_lnk_sum_3d *** 1397 !! 1398 !! ** Purpose : Message passing manadgement (sum the overlap region) 1399 !! 1400 !! ** Method : Use mppsend and mpprecv function for passing mask 1401 !! between processors following neighboring subdomains. 1402 !! domain parameters 1403 !! nlci : first dimension of the local subdomain 1404 !! nlcj : second dimension of the local subdomain 1405 !! nbondi : mark for "east-west local boundary" 1406 !! nbondj : mark for "north-south local boundary" 1407 !! noea : number for local neighboring processors 1408 !! nowe : number for local neighboring processors 1409 !! noso : number for local neighboring processors 1410 !! nono : number for local neighboring processors 1411 !! 1412 !! ** Action : ptab with update value at its periphery 1413 !! 
1414 !!---------------------------------------------------------------------- 1415 REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) :: ptab ! 3D array on which the boundary condition is applied 1416 CHARACTER(len=1) , INTENT(in ) :: cd_type ! define the nature of ptab array grid-points 1417 ! ! = T , U , V , F , W points 1418 REAL(wp) , INTENT(in ) :: psgn ! =-1 the sign change across the north fold boundary 1419 ! ! = 1. , the sign is kept 1420 CHARACTER(len=3), OPTIONAL , INTENT(in ) :: cd_mpp ! fill the overlap area only 1421 REAL(wp) , OPTIONAL , INTENT(in ) :: pval ! background value (used at closed boundaries) 1422 !! 1423 INTEGER :: ji, jj, jk, jl ! dummy loop indices 1424 INTEGER :: imigr, iihom, ijhom ! temporary integers 1425 INTEGER :: ml_req1, ml_req2, ml_err ! for key_mpi_isend 1426 REAL(wp) :: zland 1427 INTEGER, DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 1428 ! 1429 REAL(wp), DIMENSION(:,:,:,:), ALLOCATABLE :: zt3ns, zt3sn ! 3d for north-south & south-north 1430 REAL(wp), DIMENSION(:,:,:,:), ALLOCATABLE :: zt3ew, zt3we ! 3d for east-west & west-east 1431 1432 !!---------------------------------------------------------------------- 1433 1434 ALLOCATE( zt3ns(jpi,jprecj,jpk,2), zt3sn(jpi,jprecj,jpk,2), & 1435 & zt3ew(jpj,jpreci,jpk,2), zt3we(jpj,jpreci,jpk,2) ) 1436 1437 ! 1438 IF( PRESENT( pval ) ) THEN ; zland = pval ! set land value 1439 ELSE ; zland = 0.e0 ! zero by default 1440 ENDIF 1441 1442 ! 1. standard boundary treatment 1443 ! ------------------------------ 1444 ! 2. East and west directions exchange 1445 ! ------------------------------------ 1446 ! we play with the neigbours AND the row number because of the periodicity 1447 ! 1448 SELECT CASE ( nbondi ) ! Read lateral conditions 1449 CASE ( -1, 0, 1 ) ! all exept 2 (i.e. 
close case) 1450 iihom = nlci-jpreci 1451 DO jl = 1, jpreci 1452 zt3ew(:,jl,:,1) = ptab(jl ,:,:) ; ptab(jl ,:,:) = 0.0_wp 1453 zt3we(:,jl,:,1) = ptab(iihom+jl,:,:) ; ptab(iihom+jl,:,:) = 0.0_wp 1454 END DO 1455 END SELECT 1456 ! 1457 ! ! Migrations 1458 imigr = jpreci * jpj * jpk 1459 ! 1460 SELECT CASE ( nbondi ) 1461 CASE ( -1 ) 1462 CALL mppsend( 2, zt3we(1,1,1,1), imigr, noea, ml_req1 ) 1463 CALL mpprecv( 1, zt3ew(1,1,1,2), imigr, noea ) 1464 IF(l_isend) CALL mpi_wait(ml_req1, ml_stat, ml_err) 1465 CASE ( 0 ) 1466 CALL mppsend( 1, zt3ew(1,1,1,1), imigr, nowe, ml_req1 ) 1467 CALL mppsend( 2, zt3we(1,1,1,1), imigr, noea, ml_req2 ) 1468 CALL mpprecv( 1, zt3ew(1,1,1,2), imigr, noea ) 1469 CALL mpprecv( 2, zt3we(1,1,1,2), imigr, nowe ) 1470 IF(l_isend) CALL mpi_wait(ml_req1, ml_stat, ml_err) 1471 IF(l_isend) CALL mpi_wait(ml_req2, ml_stat, ml_err) 1472 CASE ( 1 ) 1473 CALL mppsend( 1, zt3ew(1,1,1,1), imigr, nowe, ml_req1 ) 1474 CALL mpprecv( 2, zt3we(1,1,1,2), imigr, nowe ) 1475 IF(l_isend) CALL mpi_wait(ml_req1, ml_stat, ml_err) 1476 END SELECT 1477 ! 1478 ! ! Write lateral conditions 1479 iihom = nlci-nreci 1480 ! 1481 SELECT CASE ( nbondi ) 1482 CASE ( -1 ) 1483 DO jl = 1, jpreci 1484 ptab(iihom+jl,:,:) = ptab(iihom+jl,:,:) + zt3ew(:,jl,:,2) 1485 END DO 1486 CASE ( 0 ) 1487 DO jl = 1, jpreci 1488 ptab(jpreci+jl,:,:) = ptab(jpreci+jl,:,:) + zt3we(:,jl,:,2) 1489 ptab(iihom +jl,:,:) = ptab(iihom +jl,:,:) + zt3ew(:,jl,:,2) 1490 END DO 1491 CASE ( 1 ) 1492 DO jl = 1, jpreci 1493 ptab(jpreci+jl,:,:) = ptab(jpreci+jl,:,:) + zt3we(:,jl,:,2) 1494 END DO 1495 END SELECT 1496 1497 1498 ! 3. North and south directions 1499 ! ----------------------------- 1500 ! always closed : we play only with the neigbours 1501 ! 1502 IF( nbondj /= 2 ) THEN ! 
Read lateral conditions 1503 ijhom = nlcj-jprecj 1504 DO jl = 1, jprecj 1505 zt3sn(:,jl,:,1) = ptab(:,ijhom+jl,:) ; ptab(:,ijhom+jl,:) = 0.0_wp 1506 zt3ns(:,jl,:,1) = ptab(:,jl ,:) ; ptab(:,jl ,:) = 0.0_wp 1507 END DO 1508 ENDIF 1509 ! 1510 ! ! Migrations 1511 imigr = jprecj * jpi * jpk 1512 ! 1513 SELECT CASE ( nbondj ) 1514 CASE ( -1 ) 1515 CALL mppsend( 4, zt3sn(1,1,1,1), imigr, nono, ml_req1 ) 1516 CALL mpprecv( 3, zt3ns(1,1,1,2), imigr, nono ) 1517 IF(l_isend) CALL mpi_wait(ml_req1, ml_stat, ml_err) 1518 CASE ( 0 ) 1519 CALL mppsend( 3, zt3ns(1,1,1,1), imigr, noso, ml_req1 ) 1520 CALL mppsend( 4, zt3sn(1,1,1,1), imigr, nono, ml_req2 ) 1521 CALL mpprecv( 3, zt3ns(1,1,1,2), imigr, nono ) 1522 CALL mpprecv( 4, zt3sn(1,1,1,2), imigr, noso ) 1523 IF(l_isend) CALL mpi_wait(ml_req1, ml_stat, ml_err) 1524 IF(l_isend) CALL mpi_wait(ml_req2, ml_stat, ml_err) 1525 CASE ( 1 ) 1526 CALL mppsend( 3, zt3ns(1,1,1,1), imigr, noso, ml_req1 ) 1527 CALL mpprecv( 4, zt3sn(1,1,1,2), imigr, noso ) 1528 IF(l_isend) CALL mpi_wait(ml_req1, ml_stat, ml_err) 1529 END SELECT 1530 ! 1531 ! ! Write lateral conditions 1532 ijhom = nlcj-nrecj 1533 ! 1534 SELECT CASE ( nbondj ) 1535 CASE ( -1 ) 1536 DO jl = 1, jprecj 1537 ptab(:,ijhom+jl,:) = ptab(:,ijhom+jl,:) + zt3ns(:,jl,:,2) 1538 END DO 1539 CASE ( 0 ) 1540 DO jl = 1, jprecj 1541 ptab(:,jprecj+jl,:) = ptab(:,jprecj+jl,:) + zt3sn(:,jl,:,2) 1542 ptab(:,ijhom +jl,:) = ptab(:,ijhom +jl,:) + zt3ns(:,jl,:,2) 1543 END DO 1544 CASE ( 1 ) 1545 DO jl = 1, jprecj 1546 ptab(:,jprecj+jl,:) = ptab(:,jprecj+jl,:) + zt3sn(:,jl ,:,2) 1547 END DO 1548 END SELECT 1549 1550 1551 ! 4. north fold treatment 1552 ! ----------------------- 1553 ! 1554 IF( npolj /= 0 .AND. .NOT. PRESENT(cd_mpp) ) THEN 1555 ! 1556 SELECT CASE ( jpni ) 1557 CASE ( 1 ) ; CALL lbc_nfd ( ptab, cd_type, psgn ) ! only 1 northern proc, no mpp 1558 CASE DEFAULT ; CALL mpp_lbc_north( ptab, cd_type, psgn ) ! for all northern procs. 1559 END SELECT 1560 ! 1561 ENDIF 1562 ! 
1563       DEALLOCATE( zt3ns, zt3sn, zt3ew, zt3we )
1564       !
1565    END SUBROUTINE mpp_lnk_sum_3d
1566
1567    SUBROUTINE mpp_lnk_sum_2d( pt2d, cd_type, psgn, cd_mpp, pval )
1568       !!----------------------------------------------------------------------
1569       !!                  ***  routine mpp_lnk_sum_2d  ***
1570       !!
1571       !! ** Purpose :   Message passing management for 2d array (sum the overlap region)
1572       !!
1573       !! ** Method  :   Use mppsend and mpprecv function for passing mask
1574       !!      between processors following neighboring subdomains.
1575       !!            domain parameters
1576       !!                    nlci   : first dimension of the local subdomain
1577       !!                    nlcj   : second dimension of the local subdomain
1578       !!                    nbondi : mark for "east-west local boundary"
1579       !!                    nbondj : mark for "north-south local boundary"
1580       !!                    noea   : number for local neighboring processors
1581       !!                    nowe   : number for local neighboring processors
1582       !!                    noso   : number for local neighboring processors
1583       !!                    nono   : number for local neighboring processors
1584       !!
1585       !!----------------------------------------------------------------------
1586       REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) ::   pt2d      ! 2D array on which the boundary condition is applied
1587       CHARACTER(len=1)            , INTENT(in   ) ::   cd_type   ! define the nature of ptab array grid-points
1588       !                                                          ! = T , U , V , F , W and I points
1589       REAL(wp)                    , INTENT(in   ) ::   psgn      ! =-1 the sign change across the north fold boundary
1590       !                                                          ! = 1. , the sign is kept
1591       CHARACTER(len=3), OPTIONAL  , INTENT(in   ) ::   cd_mpp    ! fill the overlap area only
1592       REAL(wp)        , OPTIONAL  , INTENT(in   ) ::   pval      ! background value (used at closed boundaries)
1593       !!
1594       INTEGER  ::   ji, jj, jl                 ! dummy loop indices
1595       INTEGER  ::   imigr, iihom, ijhom        ! temporary integers
1596       INTEGER  ::   ml_req1, ml_req2, ml_err   ! for key_mpi_isend
1597       REAL(wp) ::   zland
1598       INTEGER, DIMENSION(MPI_STATUS_SIZE) ::   ml_stat   ! for key_mpi_isend
1599       !
1600       REAL(wp), DIMENSION(:,:,:), ALLOCATABLE ::   zt2ns, zt2sn   !
2d for north-south & south-north
1601       REAL(wp), DIMENSION(:,:,:), ALLOCATABLE ::   zt2ew, zt2we   ! 2d for east-west & west-east
1602
1603       !!----------------------------------------------------------------------
1604
1605       ALLOCATE( zt2ns(jpi,jprecj,2), zt2sn(jpi,jprecj,2),  &
1606          &      zt2ew(jpj,jpreci,2), zt2we(jpj,jpreci,2) )
1607
1608       !
1609       IF( PRESENT( pval ) ) THEN   ;   zland = pval      ! set land value
1610       ELSE                         ;   zland = 0.e0      ! zero by default
1611       ENDIF
1612
1613       ! 1. standard boundary treatment
1614       ! ------------------------------
1615       ! 2. East and west directions exchange
1616       ! ------------------------------------
1617       ! we play with the neighbours AND the row number because of the periodicity
1618       !
1619       SELECT CASE ( nbondi )      ! Read lateral conditions
1620       CASE ( -1, 0, 1 )           ! all except 2 (i.e. close case)
1621          iihom = nlci - jpreci
1622          DO jl = 1, jpreci
1623             zt2ew(:,jl,1) = pt2d(jl      ,:)   ;   pt2d(jl      ,:) = 0.0_wp
1624             zt2we(:,jl,1) = pt2d(iihom+jl,:)   ;   pt2d(iihom+jl,:) = 0.0_wp
1625          END DO
1626       END SELECT
1627       !
1628       !                           ! Migrations
1629       imigr = jpreci * jpj
1630       !
1631       SELECT CASE ( nbondi )
1632       CASE ( -1 )
1633          CALL mppsend( 2, zt2we(1,1,1), imigr, noea, ml_req1 )
1634          CALL mpprecv( 1, zt2ew(1,1,2), imigr, noea )
1635          IF(l_isend) CALL mpi_wait(ml_req1,ml_stat,ml_err)
1636       CASE ( 0 )
1637          CALL mppsend( 1, zt2ew(1,1,1), imigr, nowe, ml_req1 )
1638          CALL mppsend( 2, zt2we(1,1,1), imigr, noea, ml_req2 )
1639          CALL mpprecv( 1, zt2ew(1,1,2), imigr, noea )
1640          CALL mpprecv( 2, zt2we(1,1,2), imigr, nowe )
1641          IF(l_isend) CALL mpi_wait(ml_req1,ml_stat,ml_err)
1642          IF(l_isend) CALL mpi_wait(ml_req2,ml_stat,ml_err)
1643       CASE ( 1 )
1644          CALL mppsend( 1, zt2ew(1,1,1), imigr, nowe, ml_req1 )
1645          CALL mpprecv( 2, zt2we(1,1,2), imigr, nowe )
1646          IF(l_isend) CALL mpi_wait(ml_req1,ml_stat,ml_err)
1647       END SELECT
1648       !
1649       !                           ! Write lateral conditions
1650       iihom = nlci-nreci
1651       !
1652 SELECT CASE ( nbondi ) 1653 CASE ( -1 ) 1654 DO jl = 1, jpreci 1655 pt2d(iihom+jl,:) = pt2d(iihom+jl,:) + zt2ew(:,jl,2) 1656 END DO 1657 CASE ( 0 ) 1658 DO jl = 1, jpreci 1659 pt2d(jpreci+jl,:) = pt2d(jpreci+jl,:) + zt2we(:,jl,2) 1660 pt2d(iihom +jl,:) = pt2d(iihom +jl,:) + zt2ew(:,jl,2) 1661 END DO 1662 CASE ( 1 ) 1663 DO jl = 1, jpreci 1664 pt2d(jpreci+jl,:) = pt2d(jpreci+jl,:) + zt2we(:,jl,2) 1665 END DO 1666 END SELECT 1667 1668 1669 ! 3. North and south directions 1670 ! ----------------------------- 1671 ! always closed : we play only with the neigbours 1672 ! 1673 IF( nbondj /= 2 ) THEN ! Read lateral conditions 1674 ijhom = nlcj - jprecj 1675 DO jl = 1, jprecj 1676 zt2sn(:,jl,1) = pt2d(:,ijhom +jl) ; pt2d(:,ijhom +jl) = 0.0_wp 1677 zt2ns(:,jl,1) = pt2d(:,jl ) ; pt2d(:,jl ) = 0.0_wp 1678 END DO 1679 ENDIF 1680 ! 1681 ! ! Migrations 1682 imigr = jprecj * jpi 1683 ! 1684 SELECT CASE ( nbondj ) 1685 CASE ( -1 ) 1686 CALL mppsend( 4, zt2sn(1,1,1), imigr, nono, ml_req1 ) 1687 CALL mpprecv( 3, zt2ns(1,1,2), imigr, nono ) 1688 IF(l_isend) CALL mpi_wait(ml_req1,ml_stat,ml_err) 1689 CASE ( 0 ) 1690 CALL mppsend( 3, zt2ns(1,1,1), imigr, noso, ml_req1 ) 1691 CALL mppsend( 4, zt2sn(1,1,1), imigr, nono, ml_req2 ) 1692 CALL mpprecv( 3, zt2ns(1,1,2), imigr, nono ) 1693 CALL mpprecv( 4, zt2sn(1,1,2), imigr, noso ) 1694 IF(l_isend) CALL mpi_wait(ml_req1,ml_stat,ml_err) 1695 IF(l_isend) CALL mpi_wait(ml_req2,ml_stat,ml_err) 1696 CASE ( 1 ) 1697 CALL mppsend( 3, zt2ns(1,1,1), imigr, noso, ml_req1 ) 1698 CALL mpprecv( 4, zt2sn(1,1,2), imigr, noso ) 1699 IF(l_isend) CALL mpi_wait(ml_req1,ml_stat,ml_err) 1700 END SELECT 1701 ! 1702 ! ! Write lateral conditions 1703 ijhom = nlcj-nrecj 1704 ! 
1705 SELECT CASE ( nbondj ) 1706 CASE ( -1 ) 1707 DO jl = 1, jprecj 1708 pt2d(:,ijhom+jl) = pt2d(:,ijhom+jl) + zt2ns(:,jl,2) 1709 END DO 1710 CASE ( 0 ) 1711 DO jl = 1, jprecj 1712 pt2d(:,jprecj+jl) = pt2d(:,jprecj+jl) + zt2sn(:,jl,2) 1713 pt2d(:,ijhom +jl) = pt2d(:,ijhom +jl) + zt2ns(:,jl,2) 1714 END DO 1715 CASE ( 1 ) 1716 DO jl = 1, jprecj 1717 pt2d(:,jprecj+jl) = pt2d(:,jprecj+jl) + zt2sn(:,jl,2) 1718 END DO 1719 END SELECT 1720 1721 1722 ! 4. north fold treatment 1723 ! ----------------------- 1724 ! 1725 IF( npolj /= 0 .AND. .NOT. PRESENT(cd_mpp) ) THEN 1726 ! 1727 SELECT CASE ( jpni ) 1728 CASE ( 1 ) ; CALL lbc_nfd ( pt2d, cd_type, psgn ) ! only 1 northern proc, no mpp 1729 CASE DEFAULT ; CALL mpp_lbc_north( pt2d, cd_type, psgn ) ! for all northern procs. 1730 END SELECT 1731 ! 1732 ENDIF 1733 ! 1734 DEALLOCATE( zt2ns, zt2sn, zt2ew, zt2we ) 1735 ! 1736 END SUBROUTINE mpp_lnk_sum_2d 1404 1737 1405 1738 SUBROUTINE mppsend( ktyp, pmess, kbytes, kdest, md_req ) … … 1449 1782 !!---------------------------------------------------------------------- 1450 1783 ! 1451 1452 1784 ! If a specific process number has been passed to the receive call, 1453 1785 ! use that one. Default is to use mpi_any_source 1454 use_source=mpi_any_source 1455 if(present(ksource)) then 1456 use_source=ksource 1457 end if 1458 1786 use_source = mpi_any_source 1787 IF( PRESENT(ksource) ) use_source = ksource 1788 ! 1459 1789 CALL mpi_recv( pmess, kbytes, mpi_double_precision, use_source, ktyp, mpi_comm_opa, istatus, iflag ) 1460 1790 ! … … 1470 1800 !! 1471 1801 !!---------------------------------------------------------------------- 1472 REAL(wp), DIMENSION(jpi,jpj) ,INTENT(in ) :: ptab ! subdomain input array1473 INTEGER ,INTENT(in ) :: kp ! record length1802 REAL(wp), DIMENSION(jpi,jpj) , INTENT(in ) :: ptab ! subdomain input array 1803 INTEGER , INTENT(in ) :: kp ! record length 1474 1804 REAL(wp), DIMENSION(jpi,jpj,jpnij), INTENT( out) :: pio ! subdomain input array 1475 1805 !! 
… … 1492 1822 !! 1493 1823 !!---------------------------------------------------------------------- 1494 REAL(wp), DIMENSION(jpi,jpj,jpnij) :: pio! output array1495 INTEGER :: kp 1496 REAL(wp), DIMENSION(jpi,jpj) :: ptab! subdomain array input1824 REAL(wp), DIMENSION(jpi,jpj,jpnij) :: pio ! output array 1825 INTEGER :: kp ! Tag (not used with MPI 1826 REAL(wp), DIMENSION(jpi,jpj) :: ptab ! subdomain array input 1497 1827 !! 1498 1828 INTEGER :: itaille, ierror ! temporary integer 1499 1829 !!--------------------------------------------------------------------- 1500 1830 ! 1501 itaille =jpi*jpj1831 itaille = jpi * jpj 1502 1832 ! 1503 1833 CALL mpi_scatter( pio, itaille, mpi_double_precision, ptab, itaille , & … … 1517 1847 INTEGER , INTENT(inout), DIMENSION(kdim) :: ktab ! input array 1518 1848 INTEGER , INTENT(in ), OPTIONAL :: kcom ! 1519 ! !1849 ! 1520 1850 INTEGER :: ierror, localcomm ! temporary integer 1521 1851 INTEGER, DIMENSION(kdim) :: iwork … … 1539 1869 !! 1540 1870 !!---------------------------------------------------------------------- 1541 INTEGER, INTENT(inout) :: ktab 1542 INTEGER, INTENT(in ), OPTIONAL :: kcom 1543 ! !1871 INTEGER, INTENT(inout) :: ktab ! ??? 1872 INTEGER, INTENT(in ), OPTIONAL :: kcom ! ??? 1873 ! 1544 1874 INTEGER :: ierror, iwork, localcomm ! temporary integer 1545 1875 !!---------------------------------------------------------------------- … … 1548 1878 IF( PRESENT(kcom) ) localcomm = kcom 1549 1879 ! 1550 CALL mpi_allreduce( ktab, iwork, 1, mpi_integer, mpi_max, localcomm, ierror )1880 CALL mpi_allreduce( ktab, iwork, 1, mpi_integer, mpi_max, localcomm, ierror ) 1551 1881 ! 1552 1882 ktab = iwork … … 1562 1892 !! 1563 1893 !!---------------------------------------------------------------------- 1564 INTEGER , INTENT( in ) :: kdim 1565 INTEGER , INTENT(inout), DIMENSION(kdim) :: ktab 1566 INTEGER , INTENT( in ), OPTIONAL :: kcom 1894 INTEGER , INTENT( in ) :: kdim ! 
size of array 1895 INTEGER , INTENT(inout), DIMENSION(kdim) :: ktab ! input array 1896 INTEGER , INTENT( in ), OPTIONAL :: kcom ! input array 1567 1897 !! 1568 1898 INTEGER :: ierror, localcomm ! temporary integer … … 1596 1926 IF( PRESENT(kcom) ) localcomm = kcom 1597 1927 ! 1598 CALL mpi_allreduce( ktab, iwork, 1, mpi_integer, mpi_min, localcomm, ierror )1928 CALL mpi_allreduce( ktab, iwork, 1, mpi_integer, mpi_min, localcomm, ierror ) 1599 1929 ! 1600 1930 ktab = iwork … … 1610 1940 !! 1611 1941 !!---------------------------------------------------------------------- 1612 INTEGER, INTENT(in ) :: kdim 1613 INTEGER, INTENT(inout), DIMENSION (kdim) :: ktab 1614 ! !1942 INTEGER, INTENT(in ) :: kdim ! ??? 1943 INTEGER, INTENT(inout), DIMENSION (kdim) :: ktab ! ??? 1944 ! 1615 1945 INTEGER :: ierror 1616 1946 INTEGER, DIMENSION (kdim) :: iwork … … 1653 1983 REAL(wp), INTENT(inout), DIMENSION(kdim) :: ptab 1654 1984 INTEGER , INTENT(in ), OPTIONAL :: kcom 1655 ! !1985 ! 1656 1986 INTEGER :: ierror, localcomm 1657 1987 REAL(wp), DIMENSION(kdim) :: zwork … … 1785 2115 END SUBROUTINE mppsum_real 1786 2116 2117 1787 2118 SUBROUTINE mppsum_realdd( ytab, kcom ) 1788 2119 !!---------------------------------------------------------------------- … … 1793 2124 !! 1794 2125 !!----------------------------------------------------------------------- 1795 COMPLEX(wp), INTENT(inout) ::ytab ! input scalar1796 INTEGER , INTENT( in ), OPTIONAL ::kcom1797 1798 !! * Local variables (MPI version)1799 INTEGER :: ierror1800 INTEGER :: localcomm1801 COMPLEX(wp) :: zwork1802 2126 COMPLEX(wp), INTENT(inout) :: ytab ! input scalar 2127 INTEGER , INTENT(in ), OPTIONAL :: kcom 2128 ! 2129 INTEGER :: ierror 2130 INTEGER :: localcomm 2131 COMPLEX(wp) :: zwork 2132 !!----------------------------------------------------------------------- 2133 ! 1803 2134 localcomm = mpi_comm_opa 1804 IF( PRESENT(kcom) ) localcomm = kcom1805 2135 IF( PRESENT(kcom) ) localcomm = kcom 2136 ! 1806 2137 ! 
reduce local sums into global sum 1807 CALL MPI_ALLREDUCE (ytab, zwork, 1, MPI_DOUBLE_COMPLEX, & 1808 MPI_SUMDD,localcomm,ierror) 2138 CALL MPI_ALLREDUCE (ytab, zwork, 1, MPI_DOUBLE_COMPLEX, MPI_SUMDD, localcomm, ierror ) 1809 2139 ytab = zwork 1810 2140 ! 1811 2141 END SUBROUTINE mppsum_realdd 1812 2142 … … 1820 2150 !! 1821 2151 !!----------------------------------------------------------------------- 1822 INTEGER , INTENT( in ) :: kdim ! size of ytab 1823 COMPLEX(wp), DIMENSION(kdim), INTENT( inout ) :: ytab ! input array 1824 INTEGER , INTENT( in ), OPTIONAL :: kcom 1825 1826 !! * Local variables (MPI version) 1827 INTEGER :: ierror ! temporary integer 1828 INTEGER :: localcomm 2152 INTEGER , INTENT(in ) :: kdim ! size of ytab 2153 COMPLEX(wp), DIMENSION(kdim), INTENT(inout) :: ytab ! input array 2154 INTEGER , OPTIONAL , INTENT(in ) :: kcom 2155 ! 2156 INTEGER:: ierror, localcomm ! local integer 1829 2157 COMPLEX(wp), DIMENSION(kdim) :: zwork ! temporary workspace 1830 2158 !!----------------------------------------------------------------------- 2159 ! 1831 2160 localcomm = mpi_comm_opa 1832 IF( PRESENT(kcom) ) localcomm = kcom 1833 1834 CALL MPI_ALLREDUCE (ytab, zwork, kdim, MPI_DOUBLE_COMPLEX, & 1835 MPI_SUMDD,localcomm,ierror) 2161 IF( PRESENT(kcom) ) localcomm = kcom 2162 ! 2163 CALL MPI_ALLREDUCE( ytab, zwork, kdim, MPI_DOUBLE_COMPLEX, MPI_SUMDD, localcomm, ierror ) 1836 2164 ytab(:) = zwork(:) 1837 2165 ! 1838 2166 END SUBROUTINE mppsum_a_realdd 2167 1839 2168 1840 2169 SUBROUTINE mpp_minloc2d( ptab, pmask, pmin, ki,kj ) … … 1852 2181 REAL(wp) , INTENT( out) :: pmin ! Global minimum of ptab 1853 2182 INTEGER , INTENT( out) :: ki, kj ! index of minimum in global frame 1854 !! 2183 ! 2184 INTEGER :: ierror 1855 2185 INTEGER , DIMENSION(2) :: ilocs 1856 INTEGER :: ierror1857 2186 REAL(wp) :: zmin ! 
local minimum 1858 2187 REAL(wp), DIMENSION(2,1) :: zain, zaout … … 2662 2991 END SUBROUTINE mpp_lbc_north_e 2663 2992 2664 SUBROUTINE mpp_lnk_bdy_3d( ptab, cd_type, psgn, ib_bdy ) 2993 2994 SUBROUTINE mpp_lnk_bdy_3d( ptab, cd_type, psgn, ib_bdy ) 2665 2995 !!---------------------------------------------------------------------- 2666 2996 !! *** routine mpp_lnk_bdy_3d *** … … 2683 3013 !! 2684 3014 !!---------------------------------------------------------------------- 2685 2686 USE lbcnfd ! north fold2687 2688 INCLUDE 'mpif.h'2689 2690 3015 REAL(wp), DIMENSION(jpi,jpj,jpk), INTENT(inout) :: ptab ! 3D array on which the boundary condition is applied 2691 3016 CHARACTER(len=1) , INTENT(in ) :: cd_type ! define the nature of ptab array grid-points … … 2694 3019 ! ! = 1. , the sign is kept 2695 3020 INTEGER , INTENT(in ) :: ib_bdy ! BDY boundary set 3021 ! 2696 3022 INTEGER :: ji, jj, jk, jl ! dummy loop indices 2697 INTEGER :: imigr, iihom, ijhom ! temporaryintegers3023 INTEGER :: imigr, iihom, ijhom ! local integers 2698 3024 INTEGER :: ml_req1, ml_req2, ml_err ! for key_mpi_isend 2699 REAL(wp) :: zland 3025 REAL(wp) :: zland ! local scalar 2700 3026 INTEGER, DIMENSION(MPI_STATUS_SIZE) :: ml_stat ! for key_mpi_isend 2701 3027 ! 2702 3028 REAL(wp), DIMENSION(:,:,:,:), ALLOCATABLE :: zt3ns, zt3sn ! 3d for north-south & south-north 2703 3029 REAL(wp), DIMENSION(:,:,:,:), ALLOCATABLE :: zt3ew, zt3we ! 3d for east-west & west-east 2704 2705 !!---------------------------------------------------------------------- 2706 3030 !!---------------------------------------------------------------------- 3031 ! 2707 3032 ALLOCATE( zt3ns(jpi,jprecj,jpk,2), zt3sn(jpi,jprecj,jpk,2), & 2708 3033 & zt3ew(jpj,jpreci,jpk,2), zt3we(jpj,jpreci,jpk,2) ) 2709 3034 2710 zland = 0. e03035 zland = 0._wp 2711 3036 2712 3037 ! 1. standard boundary treatment 2713 3038 ! ------------------------------ 2714 2715 3039 ! ! East-West boundaries 2716 3040 ! 
!* Cyclic east-west 2717 2718 3041 IF( nbondi == 2) THEN 2719 IF (nperio == 1 .OR. nperio == 4 .OR. nperio == 6) THEN2720 ptab( 1 ,:,:) = ptab(jpim1,:,:)2721 ptab(jpi,:,:) = ptab( 2 ,:,:)2722 ELSE2723 IF( .NOT. cd_type == 'F' ) ptab( 1:jpreci,:,:) = zland ! south except F-point2724 ptab(nlci-jpreci+1:jpi,:,:) = zland ! north2725 ENDIF3042 IF( nperio == 1 .OR. nperio == 4 .OR. nperio == 6 ) THEN 3043 ptab( 1 ,:,:) = ptab(jpim1,:,:) 3044 ptab(jpi,:,:) = ptab( 2 ,:,:) 3045 ELSE 3046 IF( .NOT. cd_type == 'F' ) ptab(1:jpreci,:,:) = zland ! south except F-point 3047 ptab(nlci-jpreci+1:jpi,:,:) = zland ! north 3048 ENDIF 2726 3049 ELSEIF(nbondi == -1) THEN 2727 IF( .NOT. cd_type == 'F' ) ptab( 1:jpreci,:,:) = zland ! south except F-point3050 IF( .NOT. cd_type == 'F' ) ptab(1:jpreci,:,:) = zland ! south except F-point 2728 3051 ELSEIF(nbondi == 1) THEN 2729 ptab(nlci-jpreci+1:jpi,:,:) = zland ! north3052 ptab(nlci-jpreci+1:jpi,:,:) = zland ! north 2730 3053 ENDIF !* closed 2731 3054 2732 3055 IF (nbondj == 2 .OR. nbondj == -1) THEN 2733 IF( .NOT. cd_type == 'F' ) ptab(:, 1:jprecj,:) = zland ! south except F-point3056 IF( .NOT. cd_type == 'F' ) ptab(:,1:jprecj,:) = zland ! south except F-point 2734 3057 ELSEIF (nbondj == 2 .OR. nbondj == 1) THEN 2735 ptab(:,nlcj-jprecj+1:jpj ,:) = zland ! north 2736 ENDIF 2737 2738 ! 2739 3058 ptab(:,nlcj-jprecj+1:jpj,:) = zland ! north 3059 ENDIF 3060 ! 2740 3061 ! 2. East and west directions exchange 2741 3062 ! 
------------------------------------ … … 2794 3115 CASE ( 0 ) 2795 3116 DO jl = 1, jpreci 2796 ptab( jl,:,:) = zt3we(:,jl,:,2)3117 ptab( jl,:,:) = zt3we(:,jl,:,2) 2797 3118 ptab(iihom+jl,:,:) = zt3ew(:,jl,:,2) 2798 3119 END DO 2799 3120 CASE ( 1 ) 2800 3121 DO jl = 1, jpreci 2801 ptab( jl,:,:) = zt3we(:,jl,:,2)3122 ptab( jl,:,:) = zt3we(:,jl,:,2) 2802 3123 END DO 2803 3124 END SELECT … … 2885 3206 END SUBROUTINE mpp_lnk_bdy_3d 2886 3207 2887 SUBROUTINE mpp_lnk_bdy_2d( ptab, cd_type, psgn, ib_bdy ) 3208 3209 SUBROUTINE mpp_lnk_bdy_2d( ptab, cd_type, psgn, ib_bdy ) 2888 3210 !!---------------------------------------------------------------------- 2889 3211 !! *** routine mpp_lnk_bdy_2d *** … … 2906 3228 !! 2907 3229 !!---------------------------------------------------------------------- 2908 2909 USE lbcnfd ! north fold 2910 2911 INCLUDE 'mpif.h' 2912 2913 REAL(wp), DIMENSION(jpi,jpj) , INTENT(inout) :: ptab ! 3D array on which the boundary condition is applied 2914 CHARACTER(len=1) , INTENT(in ) :: cd_type ! define the nature of ptab array grid-points 2915 ! ! = T , U , V , F , W points 2916 REAL(wp) , INTENT(in ) :: psgn ! =-1 the sign change across the north fold boundary 2917 ! ! = 1. , the sign is kept 2918 INTEGER , INTENT(in ) :: ib_bdy ! BDY boundary set 3230 REAL(wp), DIMENSION(jpi,jpj), INTENT(inout) :: ptab ! 3D array on which the boundary condition is applied 3231 CHARACTER(len=1) , INTENT(in ) :: cd_type ! define the nature of ptab array grid-points 3232 ! ! = T , U , V , F , W points 3233 REAL(wp) , INTENT(in ) :: psgn ! =-1 the sign change across the north fold boundary 3234 ! ! = 1. , the sign is kept 3235 INTEGER , INTENT(in ) :: ib_bdy ! BDY boundary set 3236 ! 2919 3237 INTEGER :: ji, jj, jl ! dummy loop indices 2920 INTEGER :: imigr, iihom, ijhom ! temporaryintegers3238 INTEGER :: imigr, iihom, ijhom ! local integers 2921 3239 INTEGER :: ml_req1, ml_req2, ml_err ! 
for key_mpi_isend 2922 3240 REAL(wp) :: zland … … 2925 3243 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: zt2ns, zt2sn ! 2d for north-south & south-north 2926 3244 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: zt2ew, zt2we ! 2d for east-west & west-east 2927 2928 3245 !!---------------------------------------------------------------------- 2929 3246 … … 2931 3248 & zt2ew(jpj,jpreci,2), zt2we(jpj,jpreci,2) ) 2932 3249 2933 zland = 0. e03250 zland = 0._wp 2934 3251 2935 3252 ! 1. standard boundary treatment 2936 3253 ! ------------------------------ 2937 2938 3254 ! ! East-West boundaries 2939 ! !* Cyclic east-west 2940 2941 IF( nbondi == 2) THEN 2942 IF (nperio == 1 .OR. nperio == 4 .OR. nperio == 6) THEN 2943 ptab( 1 ,:) = ptab(jpim1,:) 2944 ptab(jpi,:) = ptab( 2 ,:) 2945 ELSE 2946 IF( .NOT. cd_type == 'F' ) ptab( 1 :jpreci,:) = zland ! south except F-point 2947 ptab(nlci-jpreci+1:jpi ,:) = zland ! north 2948 ENDIF 3255 ! !* Cyclic east-west 3256 IF( nbondi == 2 ) THEN 3257 IF (nperio == 1 .OR. nperio == 4 .OR. nperio == 6) THEN 3258 ptab( 1 ,:) = ptab(jpim1,:) 3259 ptab(jpi,:) = ptab( 2 ,:) 3260 ELSE 3261 IF(.NOT.cd_type == 'F' ) ptab( 1 :jpreci,:) = zland ! south except F-point 3262 ptab(nlci-jpreci+1:jpi ,:) = zland ! north 3263 ENDIF 2949 3264 ELSEIF(nbondi == -1) THEN 2950 IF( .NOT. cd_type == 'F' )ptab( 1 :jpreci,:) = zland ! south except F-point3265 IF( .NOT.cd_type == 'F' ) ptab( 1 :jpreci,:) = zland ! south except F-point 2951 3266 ELSEIF(nbondi == 1) THEN 2952 ptab(nlci-jpreci+1:jpi ,:) = zland ! north2953 ENDIF !* closed2954 2955 IF (nbondj == 2 .OR. nbondj == -1) THEN2956 IF( .NOT. cd_type == 'F' ) ptab(:, 1 :jprecj) = zland! south except F-point3267 ptab(nlci-jpreci+1:jpi ,:) = zland ! north 3268 ENDIF 3269 ! !* closed 3270 IF( nbondj == 2 .OR. nbondj == -1 ) THEN 3271 IF( .NOT.cd_type == 'F' ) ptab(:, 1 :jprecj) = zland ! south except F-point 2957 3272 ELSEIF (nbondj == 2 .OR. nbondj == 1) THEN 2958 ptab(:,nlcj-jprecj+1:jpj) = zland ! 
north 2959 ENDIF 2960 2961 ! 2962 3273 ptab(:,nlcj-jprecj+1:jpj ) = zland ! north 3274 ENDIF 3275 ! 2963 3276 ! 2. East and west directions exchange 2964 3277 ! ------------------------------------ … … 3107 3420 ! 3108 3421 END SUBROUTINE mpp_lnk_bdy_2d 3422 3109 3423 3110 3424 SUBROUTINE mpi_init_opa( ldtxt, ksft, code ) … … 3196 3510 END SUBROUTINE DDPDD_MPI 3197 3511 3512 3198 3513 SUBROUTINE mpp_lbc_north_icb( pt2d, cd_type, psgn, pr2dj) 3199 3514 !!--------------------------------------------------------------------- … … 3218 3533 !! ! north fold, = 1. otherwise 3219 3534 INTEGER, OPTIONAL , INTENT(in ) :: pr2dj 3535 ! 3220 3536 INTEGER :: ji, jj, jr 3221 3537 INTEGER :: ierr, itaille, ildi, ilei, iilb … … 3224 3540 REAL(wp), DIMENSION(:,:) , ALLOCATABLE :: ztab_e, znorthloc_e 3225 3541 REAL(wp), DIMENSION(:,:,:), ALLOCATABLE :: znorthgloio_e 3226 3227 3542 !!---------------------------------------------------------------------- 3228 3543 ! … … 3234 3549 ENDIF 3235 3550 ALLOCATE( ztab_e(jpiglo,4+2*ipr2dj), znorthloc_e(jpi,4+2*ipr2dj), znorthgloio_e(jpi,4+2*ipr2dj,jpni) ) 3236 3237 ! 3238 ztab_e(:,:) = 0.e0 3239 3240 ij=0 3551 ! 3552 ztab_e(:,:) = 0._wp 3553 ! 3554 ij = 0 3241 3555 ! put in znorthloc_e the last 4 jlines of pt2d 3242 3556 DO jj = nlcj - ijpj + 1 - ipr2dj, nlcj +ipr2dj … … 3280 3594 ! 3281 3595 END SUBROUTINE mpp_lbc_north_icb 3596 3282 3597 3283 3598 SUBROUTINE mpp_lnk_2d_icb( pt2d, cd_type, psgn, jpri, jprj ) … … 3300 3615 !! noso : number for local neighboring processors 3301 3616 !! nono : number for local neighboring processors 3302 !!3303 3617 !!---------------------------------------------------------------------- 3304 3618 INTEGER , INTENT(in ) :: jpri … … 3459 3773 3460 3774 END SUBROUTINE mpp_lnk_2d_icb 3775 3461 3776 #else 3462 3777 !!---------------------------------------------------------------------- -
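The new `mpp_lnk_sum_2d`/`mpp_lnk_sum_3d` routines added above differ from the ordinary `mpp_lnk_*` halo exchange in one respect: instead of overwriting the halo with the neighbour's values, they read the halo into a buffer, zero it in place, "migrate" the buffer, and *accumulate* it onto the neighbour's last interior points. The following serial Python sketch illustrates that overlap-sum pattern only; the names (`exchange_sum`, list-based 1-D "subdomains", halo width `HALO`) are invented simplifications, not NEMO code, and the migration step is a plain copy rather than `mppsend`/`mpprecv`:

```python
# Serial sketch of the overlap-sum pattern of mpp_lnk_sum_2d/3d
# (hypothetical simplification: two 1-D subdomains, halo width 1,
# shared address space instead of MPI messages).

HALO = 1  # plays the role of jpreci


def exchange_sum(left, right):
    """Sum-exchange the east halo of `left` with the west halo of `right`.

    Mirrors the three steps of mpp_lnk_sum_*: read the halo into a
    buffer and zero it, 'migrate' the buffer, then add it onto the
    neighbour's last interior points.
    """
    # 1. read lateral conditions: copy halos out, zero them in place
    east_buf = left[-HALO:];   left[-HALO:] = [0.0] * HALO
    west_buf = right[:HALO];   right[:HALO] = [0.0] * HALO
    # 2. migration step is implicit here (same address space)
    # 3. write lateral conditions: accumulate into the interior edge
    for jl in range(HALO):
        left[-2 * HALO + jl] += west_buf[jl]   # analogue of iihom+jl indexing
        right[HALO + jl]     += east_buf[jl]   # analogue of jpreci+jl indexing
    return left, right


left  = [1.0, 2.0, 3.0, 9.0]   # last entry: halo copy of right's first interior point
right = [9.0, 3.0, 2.0, 1.0]   # first entry: halo copy of left's last interior point
exchange_sum(left, right)
print(left, right)             # halo contributions summed into the interior, halos zeroed
```

After the exchange each partial contribution that was sitting in a halo point has been folded into the owning process's interior value, which is the behaviour the comment "sum the overlap region" describes.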
trunk/NEMOGCM/NEMO/OPA_SRC/LBC/mppini.F90
r4679 r6140 11 11 !! mpp_init_ioispl: IOIPSL initialization in mpp 12 12 !!---------------------------------------------------------------------- 13 !! * Modules used14 13 USE dom_oce ! ocean space and time domain 15 14 USE in_out_manager ! I/O Manager … … 23 22 PUBLIC mpp_init2 ! called by opa.F90 24 23 25 !! * Substitutions26 # include "domzgr_substitute.h90"27 24 !!---------------------------------------------------------------------- 28 25 !! NEMO/OPA 3.3 , NEMO Consortium (2010) … … 457 454 # include "mppini_2.h90" 458 455 459 # if defined key_dimgout460 !!----------------------------------------------------------------------461 !! 'key_dimgout' NO use of NetCDF files462 !!----------------------------------------------------------------------463 SUBROUTINE mpp_init_ioipsl ! Dummy routine464 END SUBROUTINE mpp_init_ioipsl465 # else466 456 SUBROUTINE mpp_init_ioipsl 467 457 !!---------------------------------------------------------------------- … … 509 499 END SUBROUTINE mpp_init_ioipsl 510 500 511 # endif512 501 #endif 513 502 -
trunk/NEMOGCM/NEMO/OPA_SRC/LBC/mppini_2.h90
r5130 r6140
136 136
137 137       imask(:,:)=1
138           WHERE ( zdta(:,:) - zdtaisf(:,:) <= 0.) imask = 0
    138       WHERE ( zdta(:,:) - zdtaisf(:,:) <= rn_isfhmin ) imask = 0
139 139
140 140       ! 1. Dimension arrays for subdomains
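The one-line change to mppini_2.h90 replaces the hard-coded zero threshold with the parameter `rn_isfhmin`: a point now counts as land whenever the water column under the ice shelf (bathymetry `zdta` minus ice-shelf draft `zdtaisf`) is at or below that minimum thickness, not only when it vanishes. A minimal Python sketch of the rule, with invented sample values:

```python
# Sketch of the masking rule changed in mppini_2.h90: keep a point as
# ocean only if the water column under the ice shelf (bathymetry minus
# ice-shelf draft) exceeds the minimum thickness rn_isfhmin.
# All numbers below are invented for illustration.

def compute_imask(zdta, zdtaisf, rn_isfhmin):
    # WHERE( zdta - zdtaisf <= rn_isfhmin ) imask = 0
    return [[0 if d - isf <= rn_isfhmin else 1
             for d, isf in zip(row_d, row_i)]
            for row_d, row_i in zip(zdta, zdtaisf)]

zdta    = [[500.0, 120.0, 0.0]]    # bathymetry (m)
zdtaisf = [[400.0, 115.0, 0.0]]    # ice-shelf draft (m)

# old behaviour (threshold 0.): the 5 m column at the middle point survives;
# new behaviour (e.g. rn_isfhmin = 10 m): that thin column is masked too.
print(compute_imask(zdta, zdtaisf, 0.0))    # [[1, 1, 0]]
print(compute_imask(zdta, zdtaisf, 10.0))   # [[1, 0, 0]]
```

The practical effect is that very thin ice-shelf cavities can be excluded from the computational domain before the subdomain decomposition is dimensioned.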
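A closing note on `mppsum_realdd`/`mppsum_a_realdd`, which this changeset tidies: they reduce `COMPLEX(wp)` values with the custom MPI operator `MPI_SUMDD` (implemented by `DDPDD_MPI`), carrying each partial sum as a (sum, correction) pair so round-off is not lost in the `mpi_allreduce`. Below is a serial Python analogue of that compensated (double-double style) accumulation; the function name `ddpdd_pair` and the sample values are invented for illustration, and the exact Fortran in `DDPDD_MPI` differs in detail:

```python
# Serial sketch of the compensated summation behind MPI_SUMDD / DDPDD_MPI:
# each value is carried as a (hi, lo) pair and pairs are combined with
# Knuth's branch-free two-sum, so the correction term preserves round-off
# that a naive running sum would discard.

def ddpdd_pair(ya, yb):
    """Combine two (hi, lo) partial sums, as the reduction does per element."""
    t1 = ya[0] + yb[0]                                   # straight sum
    e  = t1 - ya[0]
    t2 = ((yb[0] - e) + (ya[0] - (t1 - e))) \
         + ya[1] + yb[1]                                 # exact round-off + carried corrections
    hi = t1 + t2                                         # renormalize into (hi, lo)
    lo = t2 - (hi - t1)
    return (hi, lo)

values = [1.0, 1e16, 1.0, -1e16]     # a naive sum loses both 1.0 contributions
acc = (0.0, 0.0)
for v in values:
    acc = ddpdd_pair(acc, (v, 0.0))

print(sum(values))        # 0.0  -- naive sum, round-off swallowed the small terms
print(acc[0] + acc[1])    # 2.0  -- compensated sum recovers them
```

This is why the routines pass complex numbers through MPI: the real part holds the running sum and the imaginary part the accumulated correction, making global sums reproducible independent of the reduction order.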