mpp_domains.F90
1 !***********************************************************************
2 !* GNU Lesser General Public License
3 !*
4 !* This file is part of the GFDL Flexible Modeling System (FMS).
5 !*
6 !* FMS is free software: you can redistribute it and/or modify it under
7 !* the terms of the GNU Lesser General Public License as published by
8 !* the Free Software Foundation, either version 3 of the License, or (at
9 !* your option) any later version.
10 !*
11 !* FMS is distributed in the hope that it will be useful, but WITHOUT
12 !* ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
13 !* FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License
14 !* for more details.
15 !*
16 !* You should have received a copy of the GNU Lesser General Public
17 !* License along with FMS. If not, see <http://www.gnu.org/licenses/>.
18 !***********************************************************************
19 !-----------------------------------------------------------------------
20 ! Domain decomposition and domain update for message-passing codes
21 !
22 ! AUTHOR: V. Balaji (vb@gfdl.gov)
23 ! SGI/GFDL Princeton University
24 !
25 ! This program is free software; you can redistribute it and/or modify
26 ! it under the terms of the GNU General Public License as published by
27 ! the Free Software Foundation; either version 2 of the License, or
28 ! (at your option) any later version.
29 !
30 ! This program is distributed in the hope that it will be useful,
31 ! but WITHOUT ANY WARRANTY; without even the implied warranty of
32 ! MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
33 ! GNU General Public License for more details.
34 !
35 ! For the full text of the GNU General Public License,
36 ! write to: Free Software Foundation, Inc.,
37 ! 675 Mass Ave, Cambridge, MA 02139, USA.
38 !-----------------------------------------------------------------------
39 
40 ! <CONTACT EMAIL="V.Balaji@noaa.gov">
41 ! V. Balaji
42 ! </CONTACT>
43 ! <CONTACT EMAIL="Zhi.Liang@noaa.gov">
44 ! Zhi Liang
45 ! </CONTACT>
46 ! <HISTORY SRC="http://www.gfdl.noaa.gov/fms-cgi-bin/cvsweb.cgi/FMS/"/>
47 ! <RCSLOG SRC="http://www.gfdl.noaa.gov/~vb/changes_mpp_domains.html"/>
48 
49 ! <OVERVIEW>
50 ! <TT>mpp_domains_mod</TT> is a set of simple calls for domain
51 ! decomposition and domain updates on rectilinear grids. It requires the
52 ! module <LINK SRC="mpp.html">mpp_mod</LINK>, upon which it is built.
53 ! </OVERVIEW>
54 
55 ! <DESCRIPTION>
56 ! Scalable implementations of finite-difference codes are generally
57 ! based on decomposing the model domain into subdomains that are
58 ! distributed among processors. These domains will then be obliged to
59 ! exchange data at their boundaries if data dependencies are merely
60 ! nearest-neighbour, or may need to acquire information from the global
61 ! domain if there are extended data dependencies, as in the spectral
62 ! transform. The domain decomposition is a key operation in the
63 ! development of parallel codes.
64 !
65 ! <TT>mpp_domains_mod</TT> provides a domain decomposition and domain
66 ! update API for <I>rectilinear</I> grids, built on top of the <LINK
67 ! SRC="mpp.html">mpp_mod</LINK> API for message passing. Features
68 ! of <TT>mpp_domains_mod</TT> include:
69 !
70 ! Simple, minimal API, with free access to underlying API for more complicated stuff.
71 !
72 ! Design toward typical use in climate/weather CFD codes.
73 !
74 ! <H4>Domains</H4>
75 !
76 ! I have assumed that domain decomposition will mainly be in 2
77 ! horizontal dimensions, which will in general be the two
78 ! fastest-varying indices. There is a separate implementation of 1D
79 ! decomposition on the fastest-varying index, and 1D decomposition on
80 ! the second index, treated as a special case of 2D decomposition, is
81 ! also possible. We define <I>domain</I> as the grid associated with a <I>task</I>.
82 ! We define the <I>compute domain</I> as the set of gridpoints that are
83 ! computed by a task, and the <I>data domain</I> as the set of points
84 ! that are required by the task for the calculation. There can in
85 ! general be more than 1 task per PE, though often
86 ! the number of domains is the same as the processor count. We define
87 ! the <I>global domain</I> as the global computational domain of the
 88 ! entire model (i.e., the same as the computational domain if run on a
89 ! single processor). 2D domains are defined using a derived type <TT>domain2D</TT>,
90 ! constructed as follows (see comments in code for more details):
91 !
92 ! <PRE>
93 ! type, public :: domain_axis_spec
94 ! private
95 ! integer :: begin, end, size, max_size
96 ! logical :: is_global
97 ! end type domain_axis_spec
98 ! type, public :: domain1D
99 ! private
100 ! type(domain_axis_spec) :: compute, data, global, active
101 ! logical :: mustputb, mustgetb, mustputf, mustgetf, folded
102 ! type(domain1D), pointer, dimension(:) :: list
103 ! integer :: pe !PE to which this domain is assigned
104 ! integer :: pos
105 ! end type domain1D
106 !domaintypes of higher rank can be constructed from type domain1D
107 !typically we only need 1 and 2D, but could need higher (e.g 3D LES)
108 !some elements are repeated below if they are needed once per domain
109 ! type, public :: domain2D
110 ! private
111 ! type(domain1D) :: x
112 ! type(domain1D) :: y
113 ! type(domain2D), pointer, dimension(:) :: list
114 ! integer :: pe !PE to which this domain is assigned
115 ! integer :: pos
116 ! end type domain2D
117 ! type(domain1D), public :: NULL_DOMAIN1D
118 ! type(domain2D), public :: NULL_DOMAIN2D
119 ! </PRE>
120 
121 ! The <TT>domain2D</TT> type contains all the necessary information to
122 ! define the global, compute and data domains of each task, as well as the PE
123 ! associated with the task. The PEs from which remote data may be
124 ! acquired to update the data domain are also contained in a linked list
125 ! of neighbours.
126 ! </DESCRIPTION>
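! For example, the types above are normally exercised only through the public
! interfaces documented later in this file. The sketch below assumes a 360x180
! global grid, a 2-point halo and the name 'demo' purely for illustration, and
! assumes the 2D forms of mpp_get_compute_domain and mpp_get_data_domain accept
! (domain, is, ie, js, je); treat it as a sketch, not a prescription:
! <PRE>
!     use mpp_mod,         only : mpp_npes
!     use mpp_domains_mod, only : domain2D, mpp_domains_init, mpp_define_layout, &
!                                 mpp_define_domains, mpp_get_compute_domain, mpp_get_data_domain
!     type(domain2D) :: domain
!     integer        :: layout(2), is, ie, js, je, isd, ied, jsd, jed
!
!     call mpp_domains_init()
!     call mpp_define_layout( (/1,360,1,180/), mpp_npes(), layout )
!     call mpp_define_domains( (/1,360,1,180/), layout, domain, xhalo=2, yhalo=2, name='demo' )
!     call mpp_get_compute_domain( domain, is, ie, js, je )     !points computed by this task
!     call mpp_get_data_domain( domain, isd, ied, jsd, jed )    !compute domain plus halo
! </PRE>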
127 
 128 module mpp_domains_mod
129 !a generalized domain decomposition package for use with mpp_mod
130 !Balaji (vb@gfdl.gov) 15 March 1999
131 #include <fms_platform.h>
132 
133 #if defined(use_libMPI) && defined(sgi_mipspro)
134  use mpi
135 #endif
136 
151  use mpp_data_mod, only : mpp_domains_stack, ptr_domains_stack
152  use mpp_data_mod, only : mpp_domains_stack_nonblock, ptr_domains_stack_nonblock
153  use mpp_mod, only : mpp_pe, mpp_root_pe, mpp_npes, mpp_error, fatal, warning, note
154  use mpp_mod, only : stdout, stderr, stdlog, mpp_send, mpp_recv, mpp_transmit, mpp_sync_self
155  use mpp_mod, only : mpp_clock_id, mpp_clock_begin, mpp_clock_end
156  use mpp_mod, only : mpp_max, mpp_min, mpp_sum, mpp_get_current_pelist, mpp_broadcast
157  use mpp_mod, only : mpp_sum_ad
158  use mpp_mod, only : mpp_sync, mpp_init, mpp_malloc, lowercase
159  use mpp_mod, only : input_nml_file, mpp_alltoall
160  use mpp_mod, only : mpp_type, mpp_byte
161  use mpp_mod, only : mpp_type_create, mpp_type_free
162  use mpp_mod, only : comm_tag_1, comm_tag_2, comm_tag_3, comm_tag_4
164  use mpp_pset_mod, only : mpp_pset_init
165  use mpp_efp_mod, only : mpp_reproducing_sum
166  implicit none
167  private
168 
169 #if defined(use_libMPI) && !defined(sgi_mipspro)
170 #include <mpif.h>
171 !sgi_mipspro gets this from 'use mpi'
172 #endif
173 
 174  !--- public parameters imported from mpp_domains_parameter_mod
175  public :: global_data_domain, cyclic_global_domain, bgrid_ne, bgrid_sw, cgrid_ne, cgrid_sw, agrid
176  public :: dgrid_ne, dgrid_sw, fold_west_edge, fold_east_edge, fold_south_edge, fold_north_edge
177  public :: wupdate, eupdate, supdate, nupdate, xupdate, yupdate
178  public :: non_bitwise_exact_sum, bitwise_exact_sum, mpp_domain_time, bitwise_efp_sum
179  public :: center, corner, scalar_pair
180  public :: north, north_east, east, south_east
181  public :: south, south_west, west, north_west
182  public :: zero, ninety, minus_ninety, one_hundred_eighty
183  public :: edgeupdate, nonsymedgeupdate
184 
185  !--- public data imported from mpp_data_mod
186  public :: null_domain1d, null_domain2d
187 
190 
191  !--- public interface from mpp_domains_util.h
192  public :: mpp_domains_set_stack_size, mpp_get_compute_domain, mpp_get_compute_domains
193  public :: mpp_get_data_domain, mpp_get_global_domain, mpp_get_domain_components
194  public :: mpp_get_layout, mpp_get_pelist, operator(.EQ.), operator(.NE.)
195  public :: mpp_domain_is_symmetry, mpp_domain_is_initialized
198  public :: mpp_get_memory_domain, mpp_get_domain_shift, mpp_domain_is_tile_root_pe
199  public :: mpp_get_tile_id, mpp_get_domain_extents, mpp_get_current_ntile, mpp_get_ntile_count
200  public :: mpp_get_tile_list
201  public :: mpp_get_tile_npes, mpp_get_domain_root_pe, mpp_get_tile_pelist, mpp_get_tile_compute_domains
202  public :: mpp_get_num_overlap, mpp_get_overlap
203  public :: mpp_get_io_domain, mpp_get_domain_pe, mpp_get_domain_tile_root_pe
204  public :: mpp_get_domain_name, mpp_get_io_domain_layout
205  public :: mpp_copy_domain, mpp_set_domain_symmetry
206  public :: mpp_get_update_pelist, mpp_get_update_size
207  public :: mpp_get_domain_npes, mpp_get_domain_pelist
208  public :: mpp_clear_group_update
209  public :: mpp_group_update_initialized, mpp_group_update_is_set
210 
211  !--- public interface from mpp_domains_reduce.h
214  !--- public interface from mpp_domains_misc.h
215  public :: mpp_broadcast_domain, mpp_domains_init, mpp_domains_exit, mpp_redistribute
222  public :: mpp_get_boundary
223  public :: mpp_update_domains_ad
224  public :: mpp_get_boundary_ad
226  !--- public interface from mpp_domains_define.h
227  public :: mpp_define_layout, mpp_define_domains, mpp_modify_domain, mpp_define_mosaic
228  public :: mpp_define_mosaic_pelist, mpp_define_null_domain, mpp_mosaic_defined
229  public :: mpp_define_io_domain, mpp_deallocate_domain
230  public :: mpp_compute_extent, mpp_compute_block_extent
231 
232  !--- public interface for unstruct domain
233  public :: mpp_define_unstruct_domain, domainug, mpp_get_ug_io_domain
234  public :: mpp_get_ug_domain_npes, mpp_get_ug_compute_domain, mpp_get_ug_domain_tile_id
235  public :: mpp_get_ug_domain_pelist, mpp_get_ug_domain_grid_index
236  public :: mpp_get_ug_domain_ntiles, mpp_get_ug_global_domain
237  public :: mpp_global_field_ug, mpp_get_ug_domain_tile_list, mpp_get_ug_compute_domains
238  public :: mpp_define_null_ug_domain, null_domainug, mpp_get_ug_domains_index
239  public :: mpp_get_ug_sg_domain, mpp_get_ug_domain_tile_pe_inf
240 
241  !--- public interface from mpp_define_domains.inc
242  public :: mpp_define_nest_domains, mpp_get_c2f_index, mpp_get_f2c_index
243 
244 !----------
245 !ug support
246  public :: mpp_domain_ug_is_tile_root_pe
247  public :: mpp_deallocate_domainug
248  public :: mpp_get_io_domain_ug_layout
249 !----------
250 
251  integer, parameter :: name_length = 64
252  integer, parameter :: maxlist = 100
253  integer, parameter :: maxoverlap = 200
254  integer, parameter :: field_s = 0
255  integer, parameter :: field_x = 1
256  integer, parameter :: field_y = 2
257 
258 
 259  !--- data types used in mpp_domains_mod.
 260  type unstruct_axis_spec
 261  private
262  integer :: begin, end, size, max_size
263  integer :: begin_index, end_index
264  end type unstruct_axis_spec
265 
 266  type unstruct_domain_spec
267  private
268  type(unstruct_axis_spec) :: compute
269  integer :: pe
270  integer :: pos
271  integer :: tile_id
272  end type unstruct_domain_spec
273 
 274  type unstruct_overlap_type
275  private
276  integer :: count = 0
277  integer :: pe
278  integer, pointer :: i(:)=>null()
279  integer, pointer :: j(:)=>null()
280  end type unstruct_overlap_type
281 
 282  type unstruct_pass_type
283  private
284  integer :: nsend, nrecv
285  type(unstruct_overlap_type), pointer :: recv(:)=>null()
286  type(unstruct_overlap_type), pointer :: send(:)=>null()
287  end type unstruct_pass_type
288 
289  type domainug
290  private
291  type(unstruct_axis_spec) :: compute, global
292  type(unstruct_domain_spec), pointer :: list(:)=>null()
293  type(domainug), pointer :: io_domain=>null()
294  type(unstruct_pass_type) :: sg2ug
295  type(unstruct_pass_type) :: ug2sg
296  integer, pointer :: grid_index(:) => null() ! on current pe
297  type(domain2d), pointer :: sg_domain => null()
298  integer :: pe
299  integer :: pos
300  integer :: ntiles
301  integer :: tile_id
302  integer :: tile_root_pe
303  integer :: tile_npes
304  integer :: npes_io_group
305  integer(INT_KIND) :: io_layout
306  end type domainug
307 
308  type domain_axis_spec !type used to specify index limits along an axis of a domain
309  private
310  integer :: begin, end, size, max_size !start, end of domain axis, size, max size in set
311  logical :: is_global !TRUE if domain axis extent covers global domain
312  end type domain_axis_spec
313 
314  type domain1d
315  private
316  type(domain_axis_spec) :: compute, data, global, memory
317  logical :: cyclic
318  type(domain1d), pointer :: list(:) =>null()
319  integer :: pe !PE to which this domain is assigned
320  integer :: pos !position of this PE within link list, i.e domain%list(pos)%pe = pe
321  integer :: goffset, loffset !needed for global sum
322  end type domain1d
323 
 324  type domain1d_spec
325  private
326  type(domain_axis_spec) :: compute
327  integer :: pos
328  end type domain1d_spec
329 
 330  type domain2d_spec
331  private
332  type(domain1d_spec), pointer :: x(:) => null() ! x-direction domain decomposition
 333  type(domain1d_spec), pointer :: y(:) => null() ! y-direction domain decomposition
334  integer, pointer :: tile_id(:) => null() ! tile id of each tile
335  integer :: pe ! PE to which this domain is assigned
336  integer :: pos ! position of this PE within link list
337  integer :: tile_root_pe ! root pe of tile.
338  end type domain2d_spec
339 
 340  type overlap_type
341  private
 342  integer :: count = 0 ! number of overlaps
343  integer :: pe
344  integer :: start_pos ! start position in the buffer
345  integer :: totsize ! all message size
346  integer , pointer :: msgsize(:) => null() ! overlapping msgsize to be sent or received
347  integer, pointer :: tileme(:) => null() ! my tile id for this overlap
348  integer, pointer :: tilenbr(:) => null() ! neighbor tile id for this overlap
349  integer, pointer :: is(:) => null() ! starting i-index
350  integer, pointer :: ie(:) => null() ! ending i-index
351  integer, pointer :: js(:) => null() ! starting j-index
352  integer, pointer :: je(:) => null() ! ending j-index
353  integer, pointer :: dir(:) => null() ! direction ( value 1,2,3,4 = E,S,W,N)
354  integer, pointer :: rotation(:) => null() ! rotation angle.
355  integer, pointer :: index(:) => null() ! for refinement
356  logical, pointer :: from_contact(:) => null() ! indicate if the overlap is computed from define_contact_overlap
357  end type overlap_type
358 
 359  type overlapspec
360  private
361  integer :: whalo, ehalo, shalo, nhalo ! halo size
362  integer :: xbegin, xend, ybegin, yend
363  integer :: nsend, nrecv
364  integer :: sendsize, recvsize
365  type(overlap_type), pointer :: send(:) => null()
366  type(overlap_type), pointer :: recv(:) => null()
367  type(overlapspec), pointer :: next => null()
368  end type overlapspec
369 
 370  type tile_type
371  integer :: xbegin, xend, ybegin, yend
372  end type tile_type
373 
374 !domaintypes of higher rank can be constructed from type domain1D
375 !typically we only need 1 and 2D, but could need higher (e.g 3D LES)
376 !some elements are repeated below if they are needed once per domain, not once per axis
377 
378  type domain2d
379  private
 380  character(len=NAME_LENGTH) :: name='unnamed' ! name of the domain, default is 'unnamed'
381  integer(LONG_KIND) :: id
382  integer :: pe ! PE to which this domain is assigned
383  integer :: fold
384  integer :: pos ! position of this PE within link list
385  logical :: symmetry ! indicate the domain is symmetric or non-symmetric.
386  integer :: whalo, ehalo ! halo size in x-direction
387  integer :: shalo, nhalo ! halo size in y-direction
388  integer :: ntiles ! number of tiles within mosaic
 389  integer :: max_ntile_pe ! maximum number of tiles on any pe in the pelist.
 390  integer :: ncontacts ! number of contact regions within the mosaic.
391  logical :: rotated_ninety ! indicate if any contact rotate NINETY or MINUS_NINETY
392  logical :: initialized=.false. ! indicate if the overlapping is computed or not.
393  integer :: tile_root_pe ! root pe of current tile.
394  integer :: io_layout(2) ! io_layout, will be set through mpp_define_io_domain
395  ! default = domain layout
396  integer, pointer :: pearray(:,:) => null() ! pe of each layout position
397  integer, pointer :: tile_id(:) => null() ! tile id of each tile
398  type(domain1d), pointer :: x(:) => null() ! x-direction domain decomposition
399  type(domain1d), pointer :: y(:) => null() ! y-direction domain decomposition
400  type(domain2d_spec),pointer :: list(:) => null() ! domain decomposition on pe list
401  type(tile_type), pointer :: tilelist(:) => null() ! store tile information
402  type(overlapspec), pointer :: check_c => null() ! send and recv information for boundary consistency check of C-cell
403  type(overlapspec), pointer :: check_e => null() ! send and recv information for boundary consistency check of E-cell
404  type(overlapspec), pointer :: check_n => null() ! send and recv information for boundary consistency check of N-cell
405  type(overlapspec), pointer :: bound_c => null() ! send information for getting boundary value for symmetry domain.
406  type(overlapspec), pointer :: bound_e => null() ! send information for getting boundary value for symmetry domain.
407  type(overlapspec), pointer :: bound_n => null() ! send information for getting boundary value for symmetry domain.
408  type(overlapspec), pointer :: update_t => null() ! send and recv information for halo update of T-cell.
409  type(overlapspec), pointer :: update_e => null() ! send and recv information for halo update of E-cell.
410  type(overlapspec), pointer :: update_c => null() ! send and recv information for halo update of C-cell.
411  type(overlapspec), pointer :: update_n => null() ! send and recv information for halo update of N-cell.
 412  type(domain2d), pointer :: io_domain => null() ! domain for IO, will be set through calling mpp_define_io_domain ( this will be changed).
413  end type domain2d
414 
 415  !--- the following type is used to represent the contact between tiles.
416  !--- this type will only be used in mpp_domains_define.inc
 417  type contact_type
418  private
419  integer :: ncontact ! number of neighbor tile.
420  integer, pointer :: tile(:) =>null() ! neighbor tile
421  integer, pointer :: align1(:)=>null(), align2(:)=>null() ! alignment of me and neighbor
422  real, pointer :: refine1(:)=>null(), refine2(:)=>null() !
 423  integer, pointer :: is1(:)=>null(), ie1(:)=>null() ! i-index of current tile representing contact
 424  integer, pointer :: js1(:)=>null(), je1(:)=>null() ! j-index of current tile representing contact
 425  integer, pointer :: is2(:)=>null(), ie2(:)=>null() ! i-index of neighbor tile representing contact
 426  integer, pointer :: js2(:)=>null(), je2(:)=>null() ! j-index of neighbor tile representing contact
427  end type contact_type
428 
429 
 430  type index_type
431  integer :: is_me, ie_me, js_me, je_me
432  integer :: is_you, ie_you, js_you, je_you
433  end type index_type
434 
435  type nestspec
436  private
437  integer :: xbegin, xend, ybegin, yend
438  type(index_type) :: west, east, south, north, center
439  integer :: nsend, nrecv
440  integer :: extra_halo
441  type(overlap_type), pointer :: send(:) => null()
442  type(overlap_type), pointer :: recv(:) => null()
443  type(nestspec), pointer :: next => null()
444 
445  end type nestspec
446 
447 
448 
 449  type nest_domain_type
450  private
451  integer :: tile_fine, tile_coarse
452  integer :: istart_fine, iend_fine, jstart_fine, jend_fine
453  integer :: istart_coarse, iend_coarse, jstart_coarse, jend_coarse
454  integer :: x_refine, y_refine
455  logical :: is_fine_pe, is_coarse_pe
456  integer, pointer :: pelist_fine(:) => null()
457  integer, pointer :: pelist_coarse(:) => null()
458  character(len=NAME_LENGTH) :: name
459  type(nestspec), pointer :: c2f_t => null()
460  type(nestspec), pointer :: c2f_c => null()
461  type(nestspec), pointer :: c2f_e => null()
462  type(nestspec), pointer :: c2f_n => null()
463  type(nestspec), pointer :: f2c_t => null()
464  type(nestspec), pointer :: f2c_c => null()
465  type(nestspec), pointer :: f2c_e => null()
466  type(nestspec), pointer :: f2c_n => null()
467  type(domain2d), pointer :: domain_fine => null()
468  type(domain2d), pointer :: domain_coarse => null()
469  end type nest_domain_type
470 
471 
472 
 473  type domaincommunicator2d
474  private
475  logical :: initialized=.false.
476  integer(LONG_KIND) :: id=-9999
477  integer(LONG_KIND) :: l_addr =-9999
478  integer(LONG_KIND) :: l_addrx =-9999
479  integer(LONG_KIND) :: l_addry =-9999
480  type(domain2d), pointer :: domain =>null()
481  type(domain2d), pointer :: domain_in =>null()
482  type(domain2d), pointer :: domain_out =>null()
483  type(overlapspec), pointer :: send(:,:,:,:) => null()
484  type(overlapspec), pointer :: recv(:,:,:,:) => null()
485  integer, dimension(:,:), _allocatable :: sendis _null
486  integer, dimension(:,:), _allocatable :: sendie _null
487  integer, dimension(:,:), _allocatable :: sendjs _null
488  integer, dimension(:,:), _allocatable :: sendje _null
489  integer, dimension(:,:), _allocatable :: recvis _null
490  integer, dimension(:,:), _allocatable :: recvie _null
491  integer, dimension(:,:), _allocatable :: recvjs _null
492  integer, dimension(:,:), _allocatable :: recvje _null
493  logical, dimension(:), _allocatable :: s_do_buf _null
494  logical, dimension(:), _allocatable :: r_do_buf _null
495  integer, dimension(:), _allocatable :: cto_pe _null
496  integer, dimension(:), _allocatable :: cfrom_pe _null
497  integer, dimension(:), _allocatable :: s_msize _null
498  integer, dimension(:), _allocatable :: r_msize _null
499  integer :: slist_size=0, rlist_size=0
500  integer :: isize=0, jsize=0, ke=0
501  integer :: isize_in=0, jsize_in=0
502  integer :: isize_out=0, jsize_out=0
503  integer :: isize_max=0, jsize_max=0
504  integer :: gf_ioff=0, gf_joff=0
505  ! Remote data
506  integer, dimension(:) , _allocatable :: isizer _null
507  integer, dimension(:) , _allocatable :: jsizer _null
508  integer, dimension(:,:), _allocatable :: sendisr _null
509  integer, dimension(:,:), _allocatable :: sendjsr _null
510  integer(LONG_KIND), dimension(:), _allocatable :: rem_addr _null
511  integer(LONG_KIND), dimension(:), _allocatable :: rem_addrx _null
512  integer(LONG_KIND), dimension(:), _allocatable :: rem_addry _null
513  integer(LONG_KIND), dimension(:,:), _allocatable :: rem_addrl _null
514  integer(LONG_KIND), dimension(:,:), _allocatable :: rem_addrlx _null
515  integer(LONG_KIND), dimension(:,:), _allocatable :: rem_addrly _null
516  integer :: position ! data location. T, E, C, or N.
517  end type domaincommunicator2d
518 
519  integer, parameter :: max_request = 100
520 
 521  type nonblock_type
522  integer :: recv_pos
523  integer :: send_pos
524  integer :: recv_msgsize
525  integer :: send_msgsize
526  integer :: update_flags
527  integer :: update_position
528  integer :: update_gridtype
529  integer :: update_whalo
530  integer :: update_ehalo
531  integer :: update_shalo
532  integer :: update_nhalo
533  integer :: request_send_count
534  integer :: request_recv_count
535  integer, dimension(MAX_REQUEST) :: request_send
536  integer, dimension(MAX_REQUEST) :: request_recv
537  integer, dimension(MAX_REQUEST) :: size_recv
538  integer, dimension(MAX_REQUEST) :: type_recv
539  integer, dimension(MAX_REQUEST) :: buffer_pos_send
540  integer, dimension(MAX_REQUEST) :: buffer_pos_recv
541  integer(LONG_KIND) :: field_addrs(max_domain_fields)
542  integer(LONG_KIND) :: field_addrs2(max_domain_fields)
543  integer :: nfields
544  end type nonblock_type
545 
 546  type mpp_group_update_type
547  private
548  logical :: initialized = .false.
549  logical :: k_loop_inside = .true.
550  logical :: nonsym_edge = .false.
551  integer :: nscalar = 0
552  integer :: nvector = 0
553  integer :: flags_s=0, flags_v=0
554  integer :: whalo_s=0, ehalo_s=0, shalo_s=0, nhalo_s=0
555  integer :: isize_s=0, jsize_s=0, ksize_s=1
556  integer :: whalo_v=0, ehalo_v=0, shalo_v=0, nhalo_v=0
557  integer :: isize_x=0, jsize_x=0, ksize_v=1
558  integer :: isize_y=0, jsize_y=0
559  integer :: position=0, gridtype=0
560  logical :: recv_s(8), recv_x(8), recv_y(8)
561  integer :: is_s=0, ie_s=0, js_s=0, je_s=0
562  integer :: is_x=0, ie_x=0, js_x=0, je_x=0
563  integer :: is_y=0, ie_y=0, js_y=0, je_y=0
564  integer :: nrecv=0, nsend=0
565  integer :: npack=0, nunpack=0
566  integer :: reset_index_s = 0
567  integer :: reset_index_v = 0
568  integer :: tot_msgsize = 0
569  integer :: from_pe(maxoverlap)
570  integer :: to_pe(maxoverlap)
571  integer :: recv_size(maxoverlap)
572  integer :: send_size(maxoverlap)
573  integer :: buffer_pos_recv(maxoverlap)
574  integer :: buffer_pos_send(maxoverlap)
575  integer :: pack_type(maxoverlap)
576  integer :: pack_buffer_pos(maxoverlap)
577  integer :: pack_rotation(maxoverlap)
578  integer :: pack_size(maxoverlap)
579  integer :: pack_is(maxoverlap)
580  integer :: pack_ie(maxoverlap)
581  integer :: pack_js(maxoverlap)
582  integer :: pack_je(maxoverlap)
583  integer :: unpack_type(maxoverlap)
584  integer :: unpack_buffer_pos(maxoverlap)
585  integer :: unpack_rotation(maxoverlap)
586  integer :: unpack_size(maxoverlap)
587  integer :: unpack_is(maxoverlap)
588  integer :: unpack_ie(maxoverlap)
589  integer :: unpack_js(maxoverlap)
590  integer :: unpack_je(maxoverlap)
591  integer(LONG_KIND) :: addrs_s(max_domain_fields)
592  integer(LONG_KIND) :: addrs_x(max_domain_fields)
593  integer(LONG_KIND) :: addrs_y(max_domain_fields)
594  integer :: buffer_start_pos = -1
595  integer :: request_send(max_request)
596  integer :: request_recv(max_request)
597  integer :: type_recv(max_request)
598  end type mpp_group_update_type
599 
600 !#######################################################################
601 
602 !***********************************************************************
603 !
604 ! module variables
605 !
606 !***********************************************************************
607  integer :: pe
608  logical :: module_is_initialized = .false.
609  logical :: debug = .false.
610  logical :: verbose=.false.
611  logical :: mosaic_defined = .false.
614  type(domain1d),save :: null_domain1d
615  type(domain2d),save :: null_domain2d
616  type(domainug),save :: null_domainug
617  integer :: current_id_update = 0
618  integer :: num_update = 0
620  integer :: nonblock_buffer_pos = 0
622  logical :: start_update = .true.
623  logical :: complete_update = .false.
624  type(nonblock_type), allocatable :: nonblock_data(:)
625  integer, parameter :: max_nonblock_update = 100
626 
627  integer :: group_update_buffer_pos = 0
628  logical :: complete_group_update_on = .false.
629  !-------- The following variables are used in mpp_domains_comm.h
630 
631  integer, parameter :: max_addrs=512
632  integer(LONG_KIND),dimension(MAX_ADDRS),save :: addrs_sorted=-9999 ! list of sorted local addrs
 633  integer, dimension(-1:MAX_ADDRS),save :: addrs_idx=-9999 ! idx of addr associated w/ d_comm
634  integer, dimension(MAX_ADDRS),save :: a_salvage=-9999 ! freed idx list of addr
635  integer, save :: a_sort_len=0 ! len sorted memory list
636  integer, save :: n_addrs=0 ! num memory addresses used
637 
638  integer(LONG_KIND), parameter :: addr2_base=z'0000000000010000'
639  integer, parameter :: max_addrs2=128
640  integer(LONG_KIND),dimension(MAX_ADDRS2),save :: addrs2_sorted=-9999 ! list of sorted local addrs
 641  integer, dimension(-1:MAX_ADDRS2),save :: addrs2_idx=-9999 ! idx of addr2 associated w/ d_comm
642  integer, dimension(MAX_ADDRS2),save :: a2_salvage=-9999 ! freed indices of addr2
643  integer, save :: a2_sort_len=0 ! len sorted memory list
644  integer, save :: n_addrs2=0 ! num memory addresses used
645 
646  integer, parameter :: max_dom_ids=128
647  integer(LONG_KIND),dimension(MAX_DOM_IDS),save :: ids_sorted=-9999 ! list of sorted domain identifiers
648  integer, dimension(-1:MAX_DOM_IDS),save :: ids_idx=-9999 ! idx of d_comm associated w/ sorted addr
649  integer, save :: i_sort_len=0 ! len sorted domain ids list
650  integer, save :: n_ids=0 ! num domain ids used (=i_sort_len; dom ids never removed)
651 
652  integer, parameter :: max_fields=1024
653  integer(LONG_KIND), dimension(MAX_FIELDS),save :: dckey_sorted=-9999 ! list of sorted local addrs
654  ! Not sure why static d_comm fails during deallocation of derived type members; allocatable works
655  ! type(DomainCommunicator2D),dimension(MAX_FIELDS),save,target :: d_comm ! domain communicators
656  type(domaincommunicator2d),dimension(:),allocatable,save,target :: d_comm ! domain communicators
657  integer, dimension(-1:MAX_FIELDS),save :: d_comm_idx=-9999 ! idx of d_comm associated w/ sorted addr
658  integer, dimension(MAX_FIELDS),save :: dc_salvage=-9999 ! freed indices of d_comm
659  integer, save :: dc_sort_len=0 ! len sorted comm keys (=num active communicators)
660  integer, save :: n_comm=0 ! num communicators used
661 
662  ! integer(LONG_KIND), parameter :: GT_BASE=2**8
663  integer(LONG_KIND), parameter :: gt_base=z'0000000000000100' ! Workaround for 64bit int init problem
664 
665  ! integer(LONG_KIND), parameter :: KE_BASE=2**48
666  integer(LONG_KIND), parameter :: ke_base=z'0001000000000000' ! Workaround for 64bit int init problem
667 
668  integer(LONG_KIND) :: domain_cnt=0
669 
670  !--- the following variables are used in mpp_domains_misc.h
671  logical :: domain_clocks_on=.false.
672  integer :: send_clock=0, recv_clock=0, unpk_clock=0
673  integer :: wait_clock=0, pack_clock=0
675  integer :: wait_clock_nonblock=0
681 
682  !--- namelist interface
683 ! <NAMELIST NAME="mpp_domains_nml">
684 ! <DATA NAME="debug_update_domain" TYPE="character(len=32)" DEFAULT="none">
 685 ! When debug_update_domain = none, no debugging is done. When debug_update_domain
 686 ! is set to fatal, the run exits with a fatal error message. When it is set to
 687 ! warning, the run prints a warning message. When it is set to note, the run prints
 688 ! a note message. The debugging checks the consistency of the boundary between
 689 ! processors/tiles when updating the domain for a symmetric domain, and the
 690 ! consistency on the north folded edge.
691 ! </DATA>
692 ! <DATA NAME="efp_sum_overflow_check" TYPE="logical" DEFAULT=".FALSE.">
 693 ! Set to true to always perform an overflow check when doing an EFP bitwise mpp_global_sum.
694 ! </DATA>
 695 ! <DATA NAME="nthread_control_loop" TYPE="integer" DEFAULT="8">
 696 ! Determines the loop order for packing and unpacking. When the number of threads is greater than nthread_control_loop,
 697 ! the k-loop is moved outside and combined with the pack and unpack loops. When the number of threads is less
 698 ! than or equal to nthread_control_loop, the k-loop is moved inside, but still outside of the j,i loops.
699 ! </DATA>
700 ! </NAMELIST>
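! An illustrative &mpp_domains_nml entry; the values shown are examples only,
! not a recommendation, and use the variables documented above:
! <PRE>
!     &mpp_domains_nml
!         debug_update_domain    = 'none'
!         efp_sum_overflow_check = .false.
!         nthread_control_loop   = 8
!     /
! </PRE>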
701  character(len=32) :: debug_update_domain = "none"
702  logical :: debug_message_passing = .false.
703  integer :: nthread_control_loop = 8
704  logical :: efp_sum_overflow_check = .false.
705  logical :: use_alltoallw = .false.
708 
709  !***********************************************************************
710 
711  integer, parameter :: no_check = -1
713 !***********************************************************************
714 !
715 ! public interface from mpp_domains_define.h
716 !
717 !***********************************************************************
718 
719  ! <INTERFACE NAME="mpp_define_layout">
720  ! <OVERVIEW>
721  ! Retrieve layout associated with a domain decomposition.
722  ! </OVERVIEW>
723  ! <DESCRIPTION>
724  ! Given a global 2D domain and the number of divisions in the
725  ! decomposition (<TT>ndivs</TT>: usually the PE count unless some
 726  ! domains are masked) this call returns a 2D domain layout.
727  !
728  ! By default, <TT>mpp_define_layout</TT> will attempt to divide the
729  ! 2D index space into domains that maintain the aspect ratio of the
730  ! global domain. If this cannot be done, the algorithm favours domains
731  ! that are longer in <TT>x</TT> than <TT>y</TT>, a preference that could
732  ! improve vector performance.
733  ! </DESCRIPTION>
734  ! <TEMPLATE>
735  ! call mpp_define_layout( global_indices, ndivs, layout )
736  ! </TEMPLATE>
737  ! <IN NAME="global_indices"></IN>
738  ! <IN NAME="ndivs"></IN>
739  ! <OUT NAME="layout"></OUT>
740  ! </INTERFACE>
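 ! For example, a sketch dividing an assumed ni-by-nj global index space
 ! across all PEs (ni and nj are placeholders for the model's grid size):
 ! <PRE>
 !     integer :: layout(2)
 !     call mpp_define_layout( (/1,ni,1,nj/), mpp_npes(), layout )
 !     !layout(1)*layout(2) now equals mpp_npes(); pass layout to mpp_define_domains
 ! </PRE>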
741 
 742  interface mpp_define_layout
743  module procedure mpp_define_layout2d
744  end interface
745 
746 
747  ! <INTERFACE NAME="mpp_define_domains">
748 
749  ! <OVERVIEW>
750  ! Set up a domain decomposition.
751  ! </OVERVIEW>
752  ! <DESCRIPTION>
753  ! There are two forms for the <TT>mpp_define_domains</TT> call. The 2D
754  ! version is generally to be used but is built by repeated calls to the
755  ! 1D version, also provided.
756  ! </DESCRIPTION>
757  ! <TEMPLATE>
758  ! call mpp_define_domains( global_indices, ndivs, domain, &
759  ! pelist, flags, halo, extent, maskmap )
760  ! </TEMPLATE>
761  ! <TEMPLATE>
762  ! call mpp_define_domains( global_indices, layout, domain, pelist, &
763  ! xflags, yflags, xhalo, yhalo, &
764  ! xextent, yextent, maskmap, name )
765  ! </TEMPLATE>
766  ! <IN NAME="global_indices" >
767  ! Defines the global domain.
768  ! </IN>
769  ! <IN NAME="ndivs">
770  ! Is the number of domain divisions required.
771  ! </IN>
772  ! <INOUT NAME="domain">
773  ! Holds the resulting domain decomposition.
774  ! </INOUT>
775  ! <IN NAME="pelist">
776  ! List of PEs to which the domains are to be assigned.
777  ! </IN>
778  ! <IN NAME="flags">
779  ! An optional flag to pass additional information
780  ! about the desired domain topology. Useful flags in a 1D decomposition
781  ! include <TT>GLOBAL_DATA_DOMAIN</TT> and
782  ! <TT>CYCLIC_GLOBAL_DOMAIN</TT>. Flags are integers: multiple flags may
783  ! be added together. The flag values are public parameters available by
784  ! use association.
785  ! </IN>
786  ! <IN NAME="halo">
787  ! Width of the halo.
788  ! </IN>
789  ! <IN NAME="extent">
790  ! Normally <TT>mpp_define_domains</TT> attempts
791  ! an even division of the global domain across <TT>ndivs</TT>
792  ! domains. The <TT>extent</TT> array can be used by the user to pass a
793  ! custom domain division. The <TT>extent</TT> array has <TT>ndivs</TT>
794  ! elements and holds the compute domain widths, which should add up to
795  ! cover the global domain exactly.
796  ! </IN>
797  ! <IN NAME="maskmap">
798  ! Some divisions may be masked
799  ! (<TT>maskmap=.FALSE.</TT>) to exclude them from the computation (e.g
800  ! for ocean model domains that are all land). The <TT>maskmap</TT> array
801  ! is dimensioned <TT>ndivs</TT> and contains <TT>.TRUE.</TT> values for
802  ! any domain that must be <I>included</I> in the computation (default
803  ! all). The <TT>pelist</TT> array length should match the number of
804  ! domains included in the computation.
805  ! </IN>
806 
807  ! <IN NAME="layout"></IN>
808  ! <IN NAME="xflags, yflags"></IN>
809  ! <IN NAME="xhalo, yhalo"></IN>
810  ! <IN NAME="xextent, yextent"></IN>
811  ! <IN NAME="name" ></IN>
812 
813  ! <NOTE>
814  ! For example:
815  !
816  ! <PRE>
817  ! call mpp_define_domains( (/1,100/), 10, domain, &
818  ! flags=GLOBAL_DATA_DOMAIN+CYCLIC_GLOBAL_DOMAIN, halo=2 )
819  ! </PRE>
820  !
821  ! defines 10 compute domains spanning the range [1,100] of the global
822  ! domain. The compute domains are non-overlapping blocks of 10. All the data
823  ! domains are global, and with a halo of 2 span the range [-1:102]. And
824  ! since the global domain has been declared to be cyclic,
825  ! <TT>domain(9)%next => domain(0)</TT> and <TT>domain(0)%prev =>
826  ! domain(9)</TT>. A field is allocated on the data domain, and computations proceed on
827  ! the compute domain. A call to <LINK
828  ! SRC="#mpp_update_domains"><TT>mpp_update_domains</TT></LINK> would fill in
829  ! the values in the halo region:
830 
831  ! <PRE>
832  ! call mpp_get_data_domain( domain, isd, ied ) !returns -1 and 102
833  ! call mpp_get_compute_domain( domain, is, ie ) !returns (1,10) on PE 0 ...
834  ! allocate( a(isd:ied) )
835  ! do i = is,ie
836  ! a(i) = &lt;perform computations&gt;
837  ! end do
838  ! call mpp_update_domains( a, domain )
839  ! </PRE>
840 
841  ! The call to <TT>mpp_update_domains</TT> fills in the regions outside
842  ! the compute domain. Since the global domain is cyclic, the values at
843  ! <TT>i=(-1,0)</TT> are the same as at <TT>i=(99,100)</TT>; and
844  ! <TT>i=(101,102)</TT> are the same as <TT>i=(1,2)</TT>.
845  !
846  ! The 2D version is just an extension of this syntax to two
847  ! dimensions.
848  !
849  ! The 2D version of the above should generally be used in
850  ! codes, including 1D-decomposed ones, if there is a possibility of
851  ! future evolution toward 2D decomposition. The arguments are similar to
852  ! the 1D case, except that now we have optional arguments
853  ! <TT>flags</TT>, <TT>halo</TT>, <TT>extent</TT> and <TT>maskmap</TT>
854  ! along two axes.
855  !
856  ! <TT>flags</TT> can now take an additional possible value to fold
857  ! one or more edges. This is done by using flags
858  ! <TT>FOLD_WEST_EDGE</TT>, <TT>FOLD_EAST_EDGE</TT>,
859  ! <TT>FOLD_SOUTH_EDGE</TT> or <TT>FOLD_NORTH_EDGE</TT>. When a fold
860  ! exists (e.g cylindrical domain), vector fields reverse sign upon
861  ! crossing the fold. This parity reversal is performed only in the
862  ! vector version of <LINK
863  ! SRC="#mpp_update_domains"><TT>mpp_update_domains</TT></LINK>. In
864  ! addition, shift operations may need to be applied to vector fields on
865  ! staggered grids, also described in the vector interface to
866  ! <TT>mpp_update_domains</TT>.
867  !
868  ! <TT>name</TT> is the name associated with the decomposition,
869  ! e.g <TT>'Ocean model'</TT>. If this argument is present,
870  ! <TT>mpp_define_domains</TT> will print the domain decomposition
871  ! generated to <TT>stdlog</TT>.
872  !
873  ! Examples:
874  !
875  ! <PRE>
876  ! call mpp_define_domains( (/1,100,1,100/), (/2,2/), domain, xhalo=1 )
877  ! </PRE>
878  !
879  ! will create the following domain layout:
880  ! <PRE>
 881  !                   |---------|-----------|-----------|-------------|
 882  !                   |domain(1)|domain(2)  |domain(3)  |domain(4)    |
 883  !    |--------------|---------|-----------|-----------|-------------|
 884  !    |Compute domain|1,50,1,50|51,100,1,50|1,50,51,100|51,100,51,100|
 885  !    |--------------|---------|-----------|-----------|-------------|
 886  !    |Data domain   |0,51,1,50|50,101,1,50|0,51,51,100|50,101,51,100|
 887  !    |--------------|---------|-----------|-----------|-------------|
888  ! </PRE>
889  !
890  ! Again, we allocate arrays on the data domain, perform computations
891  ! on the compute domain, and call <TT>mpp_update_domains</TT> to update
892  ! the halo region.
893  !
 894  ! If we wished to perform a 1D decomposition along <TT>Y</TT>
895  ! on the same global domain, we could use:
896 
897  ! <PRE>
 898  ! call mpp_define_domains( (/1,100,1,100/), (/1,4/), domain, xhalo=1 )
899  ! </PRE>
900 
901  ! This will create the following domain layout:
902  ! <PRE>
 903  !                   |----------|-----------|-----------|------------|
 904  !                   |domain(1) |domain(2)  |domain(3)  |domain(4)   |
 905  !    |--------------|----------|-----------|-----------|------------|
 906  !    |Compute domain|1,100,1,25|1,100,26,50|1,100,51,75|1,100,76,100|
 907  !    |--------------|----------|-----------|-----------|------------|
 908  !    |Data domain   |0,101,1,25|0,101,26,50|0,101,51,75|0,101,76,100|
 909  !    |--------------|----------|-----------|-----------|------------|
910  ! </PRE>
911  ! </NOTE>
912  ! </INTERFACE>
 913  interface mpp_define_domains
914  module procedure mpp_define_domains1d
915  module procedure mpp_define_domains2d
916  end interface
917 
 918  interface mpp_define_null_domain
919  module procedure mpp_define_null_domain1d
920  module procedure mpp_define_null_domain2d
921  end interface
922 
923  interface mpp_copy_domain
924  module procedure mpp_copy_domain1d
925  module procedure mpp_copy_domain2d
926  end interface mpp_copy_domain
927 
 928  interface mpp_deallocate_domain
929  module procedure mpp_deallocate_domain1d
930  module procedure mpp_deallocate_domain2d
931  end interface
932 
933 ! <INTERFACE NAME="mpp_modify_domain">
934 ! <OVERVIEW>
 935 ! Modifies the extents (compute, data and global) of a domain.
936 ! </OVERVIEW>
937 ! <IN NAME="domain_in">
938 ! The source domain.
939 ! </IN>
940 ! <IN NAME="halo">
 941 ! Halo size of the returned 1D domain. Default value is 0.
942 ! </IN>
943 ! <IN NAME="cbegin,cend">
944 ! Axis specifications associated with the compute domain of the returned 1D domain.
945 ! </IN>
946 ! <IN NAME="gbegin,gend">
947 ! Axis specifications associated with the global domain of the returned 1D domain.
948 ! </IN>
949 ! <IN NAME="isc,iec">
950 ! Zonal axis specifications associated with the compute domain of the returned 2D domain.
951 ! </IN>
952 ! <IN NAME="jsc,jec">
 953 ! Meridional axis specifications associated with the compute domain of the returned 2D domain.
954 ! </IN>
955 ! <IN NAME="isg,ieg">
956 ! Zonal axis specifications associated with the global domain of the returned 2D domain.
957 ! </IN>
958 ! <IN NAME="jsg,jeg">
 959 ! Meridional axis specifications associated with the global domain of the returned 2D domain.
960 ! </IN>
961 ! <IN NAME="xhalo,yhalo">
 962 ! Halo size of the returned 2D domain. Default value is 0.
963 ! </IN>
964 ! <INOUT NAME="domain_out">
965 ! The returned domain.
966 ! </INOUT>
967 
968 ! </INTERFACE>
969 
 970  interface mpp_modify_domain
971  module procedure mpp_modify_domain1d
972  module procedure mpp_modify_domain2d
973  end interface
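! A sketch of the 2D form, assuming the keyword names follow the argument
! list documented above (xhalo, yhalo):
! <PRE>
!     type(domain2D) :: domain_in, domain_out
!     !copy the decomposition of domain_in but give the new domain a 2-point halo
!     call mpp_modify_domain( domain_in, domain_out, xhalo=2, yhalo=2 )
! </PRE>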
974 
975 
976 !***********************************************************************
977 !
978 ! public interface from mpp_domains_misc.h
979 !
980 !***********************************************************************
981 
982 ! <INTERFACE NAME="mpp_update_domains">
983 ! <OVERVIEW>
984 ! Halo updates.
985 ! </OVERVIEW>
986 ! <DESCRIPTION>
987 ! <TT>mpp_update_domains</TT> is used to perform a halo update of a
988 ! domain-decomposed array on each PE. <TT>MPP_TYPE_</TT> can be of type
989 ! <TT>complex</TT>, <TT>integer</TT>, <TT>logical</TT> or <TT>real</TT>;
990 ! of 4-byte or 8-byte kind; of rank up to 5. The vector version (with
991 ! two input data fields) is only present for <TT>real</TT> types.
992 !
993 ! For 2D domain updates, if there are halos present along both
994 ! <TT>x</TT> and <TT>y</TT>, we can choose to update one only, by
995 ! specifying <TT>flags=XUPDATE</TT> or <TT>flags=YUPDATE</TT>. In
996 ! addition, one-sided updates can be performed by setting <TT>flags</TT>
997 ! to any combination of <TT>WUPDATE</TT>, <TT>EUPDATE</TT>,
998 ! <TT>SUPDATE</TT> and <TT>NUPDATE</TT>, to update the west, east, north
999 ! and south halos respectively. Any combination of halos may be used by
1000 ! adding the requisite flags, e.g: <TT>flags=XUPDATE+SUPDATE</TT> or
1001 ! <TT>flags=EUPDATE+WUPDATE+SUPDATE</TT> will update the east, west and
1002 ! south halos.
1003 !
1004 ! If a call to <TT>mpp_update_domains</TT> involves at least one E-W
 1005 ! halo and one N-S halo, the corners involved will also be updated, i.e.,
1006 ! in the example above, the SE and SW corners will be updated.
1007 !
1008 ! If <TT>flags</TT> is not supplied, that is
1009 ! equivalent to <TT>flags=XUPDATE+YUPDATE</TT>.
1010 !
1011 ! The vector version is passed the <TT>x</TT> and <TT>y</TT>
1012 ! components of a vector field in tandem, and both are updated upon
1013 ! return. They are passed together to treat parity issues on various
1014 ! grids. For example, on a cubic sphere projection, the <TT>x</TT> and
1015 ! <TT>y</TT> components may be interchanged when passing from an
1016 ! equatorial cube face to a polar face. For grids with folds, vector
1017 ! components change sign on crossing the fold. Paired scalar quantities
1018 ! can also be passed with the vector version if flags=SCALAR_PAIR, in which
1019 ! case components are appropriately interchanged, but signs are not.
1020 !
1021 ! Special treatment at boundaries such as folds is also required for
1022 ! staggered grids. The following types of staggered grids are
1023 ! recognized:
1024 !
1025 ! 1) <TT>AGRID</TT>: values are at grid centers.<BR/>
1026 ! 2) <TT>BGRID_NE</TT>: vector fields are at the NE vertex of a grid
1027 ! cell, i.e: the array elements <TT>u(i,j)</TT> and <TT>v(i,j)</TT> are
1028 ! actually at (i+&#189;,j+&#189;) with respect to the grid centers.<BR/>
1029 ! 3) <TT>BGRID_SW</TT>: vector fields are at the SW vertex of a grid
1030 ! cell, i.e: the array elements <TT>u(i,j)</TT> and <TT>v(i,j)</TT> are
1031 ! actually at (i-&#189;,j-&#189;) with respect to the grid centers.<BR/>
1032 ! 4) <TT>CGRID_NE</TT>: vector fields are at the N and E faces of a
1033 ! grid cell, i.e: the array elements <TT>u(i,j)</TT> and <TT>v(i,j)</TT>
1034 ! are actually at (i+&#189;,j) and (i,j+&#189;) with respect to the
1035 ! grid centers.<BR/>
1036 ! 5) <TT>CGRID_SW</TT>: vector fields are at the S and W faces of a
1037 ! grid cell, i.e: the array elements <TT>u(i,j)</TT> and <TT>v(i,j)</TT>
1038 ! are actually at (i-&#189;,j) and (i,j-&#189;) with respect to the
1039 ! grid centers.
1040 !
1041 ! The gridtypes listed above are all available by use association as
1042 ! integer parameters. The scalar version of <TT>mpp_update_domains</TT>
1043 ! assumes that the values of a scalar field are always at <TT>AGRID</TT>
1044 ! locations, and no special boundary treatment is required. If vector
1045 ! fields are at staggered locations, the optional argument
1046 ! <TT>gridtype</TT> must be appropriately set for correct treatment at
1047 ! boundaries.
1048 !
1049 ! It is safe to apply vector field updates to the appropriate arrays
1050 ! irrespective of the domain topology: if the topology requires no
1051 ! special treatment of vector fields, specifying <TT>gridtype</TT> will
1052 ! do no harm.
1053 !
 1054 ! <TT>mpp_update_domains</TT> internally buffers the data being sent
 1055 ! and received into single messages for efficiency. A tunable internal
1056 ! buffer area in memory is provided for this purpose by
1057 ! <TT>mpp_domains_mod</TT>. The size of this buffer area can be set by
1058 ! the user by calling <LINK SRC="mpp_domains.html#mpp_domains_set_stack_size">
1059 ! <TT>mpp_domains_set_stack_size</TT></LINK>.
1060 ! </DESCRIPTION>
1061 ! <TEMPLATE>
1062 ! call mpp_update_domains( field, domain, flags )
1063 ! </TEMPLATE>
1064 ! <TEMPLATE>
1065 ! call mpp_update_domains( fieldx, fieldy, domain, flags, gridtype )
1066 ! </TEMPLATE>
1067 ! </INTERFACE>
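! For example, a sketch following the templates above; temp, u and v are
! assumed to be arrays allocated on the data domain of domain:
! <PRE>
!     call mpp_update_domains( temp, domain )                        !full halo update
!     call mpp_update_domains( temp, domain, flags=EUPDATE+WUPDATE ) !east and west halos only
!     call mpp_update_domains( u, v, domain, gridtype=CGRID_NE )     !vector update, C-grid staggering
! </PRE>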
 1068  interface mpp_update_domains
 1069  module procedure mpp_update_domain2d_r8_2d
1070  module procedure mpp_update_domain2d_r8_3d
1071  module procedure mpp_update_domain2d_r8_4d
1072  module procedure mpp_update_domain2d_r8_5d
1073  module procedure mpp_update_domain2d_r8_2dv
1074  module procedure mpp_update_domain2d_r8_3dv
1075  module procedure mpp_update_domain2d_r8_4dv
1076  module procedure mpp_update_domain2d_r8_5dv
1077 #ifdef OVERLOAD_C8
1078  module procedure mpp_update_domain2d_c8_2d
1079  module procedure mpp_update_domain2d_c8_3d
1080  module procedure mpp_update_domain2d_c8_4d
1081  module procedure mpp_update_domain2d_c8_5d
1082 #endif
1083 #ifndef no_8byte_integers
1084  module procedure mpp_update_domain2d_i8_2d
1085  module procedure mpp_update_domain2d_i8_3d
1086  module procedure mpp_update_domain2d_i8_4d
1087  module procedure mpp_update_domain2d_i8_5d
1088 #endif
1089 #ifdef OVERLOAD_R4
1090  module procedure mpp_update_domain2d_r4_2d
1091  module procedure mpp_update_domain2d_r4_3d
1092  module procedure mpp_update_domain2d_r4_4d
1093  module procedure mpp_update_domain2d_r4_5d
1094  module procedure mpp_update_domain2d_r4_2dv
1095  module procedure mpp_update_domain2d_r4_3dv
1096  module procedure mpp_update_domain2d_r4_4dv
1097  module procedure mpp_update_domain2d_r4_5dv
1098 #endif
1099 #ifdef OVERLOAD_C4
1100  module procedure mpp_update_domain2d_c4_2d
1101  module procedure mpp_update_domain2d_c4_3d
1102  module procedure mpp_update_domain2d_c4_4d
1103  module procedure mpp_update_domain2d_c4_5d
1104 #endif
1105  module procedure mpp_update_domain2d_i4_2d
1106  module procedure mpp_update_domain2d_i4_3d
1107  module procedure mpp_update_domain2d_i4_4d
1108  module procedure mpp_update_domain2d_i4_5d
1109  end interface
1110 
1111 ! <INTERFACE NAME="mpp_start_update_domains/mpp_complete_update_domains">
1112 ! <OVERVIEW>
1113 ! Interface to start halo updates.
1114 ! </OVERVIEW>
1115 ! <DESCRIPTION>
1116 ! <TT>mpp_start_update_domains</TT> is used to start a halo update of a
1117 ! domain-decomposed array on each PE. <TT>MPP_TYPE_</TT> can be of type
1118 ! <TT>complex</TT>, <TT>integer</TT>, <TT>logical</TT> or <TT>real</TT>;
1119 ! of 4-byte or 8-byte kind; of rank up to 5. The vector version (with
1120 ! two input data fields) is only present for <TT>real</TT> types.
1121 !
1122 ! <TT>mpp_start_update_domains</TT> must be paired together with
1123 ! <TT>mpp_complete_update_domains</TT>. In <TT>mpp_start_update_domains</TT>,
 1124 ! a receive buffer is pre-posted (non-blocking receive) and the data on the
 1125 ! computational domain is packed and sent (non-blocking send)
 1126 ! to the other processors. In <TT>mpp_complete_update_domains</TT>, the buffer is
 1127 ! unpacked to fill the halo, and mpp_sync_self is called at the last call of
 1128 ! mpp_complete_update_domains to ensure the communication is safe.
1129 !
1130 ! Each mpp_update_domains can be replaced by the combination of mpp_start_update_domains
1131 ! and mpp_complete_update_domains. The arguments in mpp_start_update_domains
 1132 ! and mpp_complete_update_domains should be exactly the same as in the
 1133 ! mpp_update_domains call being replaced, except that there is no optional argument "complete".
 1134 ! The following are examples of how to replace mpp_update_domains with
1135 ! mpp_start_update_domains/mpp_complete_update_domains
1136 !
1137 ! Example 1: Replace one scalar mpp_update_domains.
1138 !
1139 ! Replace
1140 !
1141 ! call mpp_update_domains(data, domain, flags=update_flags)
1142 !
1143 ! with
1144 !
1145 ! id_update = mpp_start_update_domains(data, domain, flags=update_flags)<BR/>
1146 ! ...( doing some computation )<BR/>
1147 ! call mpp_complete_update_domains(id_update, data, domain, flags=update_flags)<BR/>
1148 !<BR/>
1149 ! Example 2: Replace group scalar mpp_update_domains,
1150 !
1151 ! Replace
1152 !
1153 ! call mpp_update_domains(data_1, domain, flags=update_flags, complete=.false.)<BR/>
1154 ! .... ( other n-2 call mpp_update_domains with complete = .false. )<BR/>
1155 ! call mpp_update_domains(data_n, domain, flags=update_flags, complete=.true. )<BR/>
1156 !<BR/>
1157 ! With
1158 !
1159 ! id_up_1 = mpp_start_update_domains(data_1, domain, flags=update_flags)<BR/>
1160 ! .... ( other n-2 call mpp_start_update_domains )<BR/>
1161 ! id_up_n = mpp_start_update_domains(data_n, domain, flags=update_flags)<BR/>
1162 !
1163 ! ..... ( doing some computation )
1164 !
1165 ! call mpp_complete_update_domains(id_up_1, data_1, domain, flags=update_flags)<BR/>
1166 ! .... ( other n-2 call mpp_complete_update_domains )<BR/>
1167 ! call mpp_complete_update_domains(id_up_n, data_n, domain, flags=update_flags)<BR/>
1168 !<BR/>
1169 ! Example 3: Replace group CGRID_NE vector, mpp_update_domains
1170 !
1171 ! Replace
1172 !
 1173 ! call mpp_update_domains(u_1, v_1, domain, flags=update_flags, gridtype=CGRID_NE, complete=.false.)<BR/>
1174 ! .... ( other n-2 call mpp_update_domains with complete = .false. )<BR/>
 1175 ! call mpp_update_domains(u_n, v_n, domain, flags=update_flags, gridtype=CGRID_NE, complete=.true. )<BR/>
1176 !<BR/>
1177 ! with
1178 !
1179 ! id_up_1 = mpp_start_update_domains(u_1, v_1, domain, flags=update_flags, gridtype=CGRID_NE)<BR/>
1180 ! .... ( other n-2 call mpp_start_update_domains )<BR/>
1181 ! id_up_n = mpp_start_update_domains(u_n, v_n, domain, flags=update_flags, gridtype=CGRID_NE)<BR/>
1182 !<BR/>
1183 ! ..... ( doing some computation )
1184 !
1185 ! call mpp_complete_update_domains(id_up_1, u_1, v_1, domain, flags=update_flags, gridtype=CGRID_NE)<BR/>
1186 ! .... ( other n-2 call mpp_complete_update_domains )<BR/>
1187 ! call mpp_complete_update_domains(id_up_n, u_n, v_n, domain, flags=update_flags, gridtype=CGRID_NE)<BR/>
1188 ! <BR/>
1189 ! For 2D domain updates, if there are halos present along both
1190 ! <TT>x</TT> and <TT>y</TT>, we can choose to update one only, by
1191 ! specifying <TT>flags=XUPDATE</TT> or <TT>flags=YUPDATE</TT>. In
1192 ! addition, one-sided updates can be performed by setting <TT>flags</TT>
1193 ! to any combination of <TT>WUPDATE</TT>, <TT>EUPDATE</TT>,
1194 ! <TT>SUPDATE</TT> and <TT>NUPDATE</TT>, to update the west, east, north
1195 ! and south halos respectively. Any combination of halos may be used by
1196 ! adding the requisite flags, e.g: <TT>flags=XUPDATE+SUPDATE</TT> or
1197 ! <TT>flags=EUPDATE+WUPDATE+SUPDATE</TT> will update the east, west and
1198 ! south halos.
1199 !
1200 ! If a call to <TT>mpp_start_update_domains/mpp_complete_update_domains</TT> involves at least one E-W
 1201 ! halo and one N-S halo, the corners involved will also be updated, i.e.,
1202 ! in the example above, the SE and SW corners will be updated.
1203 !
1204 ! If <TT>flags</TT> is not supplied, that is
1205 ! equivalent to <TT>flags=XUPDATE+YUPDATE</TT>.
1206 !
1207 ! The vector version is passed the <TT>x</TT> and <TT>y</TT>
1208 ! components of a vector field in tandem, and both are updated upon
1209 ! return. They are passed together to treat parity issues on various
1210 ! grids. For example, on a cubic sphere projection, the <TT>x</TT> and
1211 ! <TT>y</TT> components may be interchanged when passing from an
1212 ! equatorial cube face to a polar face. For grids with folds, vector
1213 ! components change sign on crossing the fold. Paired scalar quantities
1214 ! can also be passed with the vector version if flags=SCALAR_PAIR, in which
1215 ! case components are appropriately interchanged, but signs are not.
1216 !
1217 ! Special treatment at boundaries such as folds is also required for
1218 ! staggered grids. The following types of staggered grids are
1219 ! recognized:
1220 !
1221 ! 1) <TT>AGRID</TT>: values are at grid centers.<BR/>
1222 ! 2) <TT>BGRID_NE</TT>: vector fields are at the NE vertex of a grid
1223 ! cell, i.e: the array elements <TT>u(i,j)</TT> and <TT>v(i,j)</TT> are
1224 ! actually at (i+&#189;,j+&#189;) with respect to the grid centers.<BR/>
1225 ! 3) <TT>BGRID_SW</TT>: vector fields are at the SW vertex of a grid
1226 ! cell, i.e: the array elements <TT>u(i,j)</TT> and <TT>v(i,j)</TT> are
1227 ! actually at (i-&#189;,j-&#189;) with respect to the grid centers.<BR/>
1228 ! 4) <TT>CGRID_NE</TT>: vector fields are at the N and E faces of a
1229 ! grid cell, i.e: the array elements <TT>u(i,j)</TT> and <TT>v(i,j)</TT>
1230 ! are actually at (i+&#189;,j) and (i,j+&#189;) with respect to the
1231 ! grid centers.<BR/>
1232 ! 5) <TT>CGRID_SW</TT>: vector fields are at the S and W faces of a
1233 ! grid cell, i.e: the array elements <TT>u(i,j)</TT> and <TT>v(i,j)</TT>
1234 ! are actually at (i-&#189;,j) and (i,j-&#189;) with respect to the
1235 ! grid centers.
1236 !
1237 ! The gridtypes listed above are all available by use association as
1238 ! integer parameters. If vector fields are at staggered locations, the
1239 ! optional argument <TT>gridtype</TT> must be appropriately set for
1240 ! correct treatment at boundaries.
1241 !
1242 ! It is safe to apply vector field updates to the appropriate arrays
1243 ! irrespective of the domain topology: if the topology requires no
1244 ! special treatment of vector fields, specifying <TT>gridtype</TT> will
1245 ! do no harm.
1246 !
1247 ! <TT>mpp_start_update_domains/mpp_complete_update_domains</TT> internally
1248 ! buffers the data being sent and received into single messages for efficiency.
 1249 ! A tunable internal buffer area in memory is provided for this purpose by
1250 ! <TT>mpp_domains_mod</TT>. The size of this buffer area can be set by
1251 ! the user by calling <LINK SRC="mpp_domains.html#mpp_domains_set_stack_size">
1252 ! <TT>mpp_domains_set_stack_size</TT></LINK>.
1253 ! </DESCRIPTION>
1254 ! <TEMPLATE>
1255 ! call mpp_start_update_domains( field, domain, flags )
1256 ! call mpp_complete_update_domains( field, domain, flags )
1257 ! </TEMPLATE>
1258 
1259 ! </INTERFACE>
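! A sketch of the non-blocking pair, following Example 1 above; data and
! update_flags are assumed to be defined by the caller:
! <PRE>
!     integer :: id_update
!     id_update = mpp_start_update_domains( data, domain, flags=update_flags )
!     !... computation that does not touch the halo region ...
!     call mpp_complete_update_domains( id_update, data, domain, flags=update_flags )
! </PRE>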
1260 
 1261  interface mpp_start_update_domains
1262  module procedure mpp_start_update_domain2d_r8_2d
1263  module procedure mpp_start_update_domain2d_r8_3d
1264  module procedure mpp_start_update_domain2d_r8_4d
1265  module procedure mpp_start_update_domain2d_r8_5d
1266  module procedure mpp_start_update_domain2d_r8_2dv
1267  module procedure mpp_start_update_domain2d_r8_3dv
1268  module procedure mpp_start_update_domain2d_r8_4dv
1269  module procedure mpp_start_update_domain2d_r8_5dv
1270 #ifdef OVERLOAD_C8
1271  module procedure mpp_start_update_domain2d_c8_2d
1272  module procedure mpp_start_update_domain2d_c8_3d
1273  module procedure mpp_start_update_domain2d_c8_4d
1274  module procedure mpp_start_update_domain2d_c8_5d
1275 #endif
1276 #ifndef no_8byte_integers
1277  module procedure mpp_start_update_domain2d_i8_2d
1278  module procedure mpp_start_update_domain2d_i8_3d
1279  module procedure mpp_start_update_domain2d_i8_4d
1280  module procedure mpp_start_update_domain2d_i8_5d
1281 #endif
1282 #ifdef OVERLOAD_R4
1283  module procedure mpp_start_update_domain2d_r4_2d
1284  module procedure mpp_start_update_domain2d_r4_3d
1285  module procedure mpp_start_update_domain2d_r4_4d
1286  module procedure mpp_start_update_domain2d_r4_5d
1287  module procedure mpp_start_update_domain2d_r4_2dv
1288  module procedure mpp_start_update_domain2d_r4_3dv
1289  module procedure mpp_start_update_domain2d_r4_4dv
1290  module procedure mpp_start_update_domain2d_r4_5dv
1291 #endif
1292 #ifdef OVERLOAD_C4
1293  module procedure mpp_start_update_domain2d_c4_2d
1294  module procedure mpp_start_update_domain2d_c4_3d
1295  module procedure mpp_start_update_domain2d_c4_4d
1296  module procedure mpp_start_update_domain2d_c4_5d
1297 #endif
1298  module procedure mpp_start_update_domain2d_i4_2d
1299  module procedure mpp_start_update_domain2d_i4_3d
1300  module procedure mpp_start_update_domain2d_i4_4d
1301  module procedure mpp_start_update_domain2d_i4_5d
1302  end interface
1303 
1304  interface mpp_complete_update_domains
1305  module procedure mpp_complete_update_domain2d_r8_2d
1306  module procedure mpp_complete_update_domain2d_r8_3d
1307  module procedure mpp_complete_update_domain2d_r8_4d
1308  module procedure mpp_complete_update_domain2d_r8_5d
1309  module procedure mpp_complete_update_domain2d_r8_2dv
1310  module procedure mpp_complete_update_domain2d_r8_3dv
1311  module procedure mpp_complete_update_domain2d_r8_4dv
1312  module procedure mpp_complete_update_domain2d_r8_5dv
1313 #ifdef OVERLOAD_C8
1314  module procedure mpp_complete_update_domain2d_c8_2d
1315  module procedure mpp_complete_update_domain2d_c8_3d
1316  module procedure mpp_complete_update_domain2d_c8_4d
1317  module procedure mpp_complete_update_domain2d_c8_5d
1318 #endif
1319 #ifndef no_8byte_integers
1320  module procedure mpp_complete_update_domain2d_i8_2d
1321  module procedure mpp_complete_update_domain2d_i8_3d
1322  module procedure mpp_complete_update_domain2d_i8_4d
1323  module procedure mpp_complete_update_domain2d_i8_5d
1324 #endif
1325 #ifdef OVERLOAD_R4
1326  module procedure mpp_complete_update_domain2d_r4_2d
1327  module procedure mpp_complete_update_domain2d_r4_3d
1328  module procedure mpp_complete_update_domain2d_r4_4d
1329  module procedure mpp_complete_update_domain2d_r4_5d
1330  module procedure mpp_complete_update_domain2d_r4_2dv
1331  module procedure mpp_complete_update_domain2d_r4_3dv
1332  module procedure mpp_complete_update_domain2d_r4_4dv
1333  module procedure mpp_complete_update_domain2d_r4_5dv
1334 #endif
1335 #ifdef OVERLOAD_C4
1336  module procedure mpp_complete_update_domain2d_c4_2d
1337  module procedure mpp_complete_update_domain2d_c4_3d
1338  module procedure mpp_complete_update_domain2d_c4_4d
1339  module procedure mpp_complete_update_domain2d_c4_5d
1340 #endif
1341  module procedure mpp_complete_update_domain2d_i4_2d
1342  module procedure mpp_complete_update_domain2d_i4_3d
1343  module procedure mpp_complete_update_domain2d_i4_4d
1344  module procedure mpp_complete_update_domain2d_i4_5d
1345  end interface
1346 
1348  module procedure mpp_start_do_update_r8_3d
1349  module procedure mpp_start_do_update_r8_3dv
1350 #ifdef OVERLOAD_C8
1351  module procedure mpp_start_do_update_c8_3d
1352 #endif
1353 #ifndef no_8byte_integers
1354  module procedure mpp_start_do_update_i8_3d
1355 #endif
1356 #ifdef OVERLOAD_R4
1357  module procedure mpp_start_do_update_r4_3d
1358  module procedure mpp_start_do_update_r4_3dv
1359 #endif
1360 #ifdef OVERLOAD_C4
1361  module procedure mpp_start_do_update_c4_3d
1362 #endif
1363  module procedure mpp_start_do_update_i4_3d
1364  end interface
1365 
1367  module procedure mpp_complete_do_update_r8_3d
1368  module procedure mpp_complete_do_update_r8_3dv
1369 #ifdef OVERLOAD_C8
1370  module procedure mpp_complete_do_update_c8_3d
1371 #endif
1372 #ifndef no_8byte_integers
1373  module procedure mpp_complete_do_update_i8_3d
1374 #endif
1375 #ifdef OVERLOAD_R4
1376  module procedure mpp_complete_do_update_r4_3d
1377  module procedure mpp_complete_do_update_r4_3dv
1378 #endif
1379 #ifdef OVERLOAD_C4
1380  module procedure mpp_complete_do_update_c4_3d
1381 #endif
1382  module procedure mpp_complete_do_update_i4_3d
1383  end interface
1384 
1385 
1386  interface mpp_create_group_update
1387  module procedure mpp_create_group_update_r4_2d
1388  module procedure mpp_create_group_update_r4_3d
1389  module procedure mpp_create_group_update_r4_4d
1390  module procedure mpp_create_group_update_r4_2dv
1391  module procedure mpp_create_group_update_r4_3dv
1392  module procedure mpp_create_group_update_r4_4dv
1393  module procedure mpp_create_group_update_r8_2d
1394  module procedure mpp_create_group_update_r8_3d
1395  module procedure mpp_create_group_update_r8_4d
1396  module procedure mpp_create_group_update_r8_2dv
1397  module procedure mpp_create_group_update_r8_3dv
1398  module procedure mpp_create_group_update_r8_4dv
1399  end interface mpp_create_group_update
1400 
1401  interface mpp_do_group_update
1402  module procedure mpp_do_group_update_r4
1403  module procedure mpp_do_group_update_r8
1404  end interface mpp_do_group_update
1405 
1406  interface mpp_start_group_update
1407  module procedure mpp_start_group_update_r4
1408  module procedure mpp_start_group_update_r8
1409  end interface mpp_start_group_update
1410 
1411  interface mpp_complete_group_update
1412  module procedure mpp_complete_group_update_r4
1413  module procedure mpp_complete_group_update_r8
1414  end interface mpp_complete_group_update
1415 
1416  interface mpp_reset_group_update_field
1417  module procedure mpp_reset_group_update_field_r4_2d
1418  module procedure mpp_reset_group_update_field_r4_3d
1419  module procedure mpp_reset_group_update_field_r4_4d
1420  module procedure mpp_reset_group_update_field_r4_2dv
1421  module procedure mpp_reset_group_update_field_r4_3dv
1422  module procedure mpp_reset_group_update_field_r4_4dv
1423  module procedure mpp_reset_group_update_field_r8_2d
1424  module procedure mpp_reset_group_update_field_r8_3d
1425  module procedure mpp_reset_group_update_field_r8_4d
1426  module procedure mpp_reset_group_update_field_r8_2dv
1427  module procedure mpp_reset_group_update_field_r8_3dv
1428  module procedure mpp_reset_group_update_field_r8_4dv
1429  end interface mpp_reset_group_update_field
1430 
1431  ! <INTERFACE NAME="mpp_define_nest_domains">
1432  ! <OVERVIEW>
1433  ! Set up a domain to pass data between the coarse and fine grids of a nested model.
1434  ! </OVERVIEW>
1435  ! <DESCRIPTION>
1436  ! Set up a domain to pass data between the coarse and fine grids of a nested model.
1437  ! Currently it only supports one fine nest region over the coarse grid region.
1438  ! It supports both serial and concurrent nesting. In serial nesting, the coarse
1439  ! and fine grids are on exactly the same processor list. In concurrent nesting,
1440  ! the coarse and fine grids are on separate, non-overlapping processor lists.
1441  ! The coarse and fine grid domains need to be defined before
1442  ! calling mpp_define_nest_domains. For concurrent nesting, mpp_broadcast_domain
1443  ! needs to be called to broadcast both the fine and coarse grid domains onto
1444  ! all the processors.
1445  ! <BR>
1446  ! </BR>
1447  ! <BR>
1448  ! </BR>
1449  ! mpp_update_nest_coarse is used to pass data from the fine grid to the coarse grid compute domain.
1450  ! mpp_update_nest_fine is used to pass data from coarse grid to fine grid halo.
1451  ! You may call mpp_get_C2F_index before calling mpp_update_nest_fine to get the index for
1452  ! passing data from coarse to fine. You may call mpp_get_F2C_index before calling
1453  ! mpp_update_nest_coarse to get the index for passing data from fine to coarse.
1454  ! <BR>
1455  ! </BR>
1456  ! <BR>
1457  ! </BR>
1458 
1459  ! NOTE: The following tests are done in test_mpp_domains: the coarse grid is a cubic sphere
1460  ! grid and the fine grid is a regular lat-lon grid (symmetric domain) nested inside
1461  ! face 3 of the cubic sphere grid. Tests are done for data at T, E, C, N-cell center.
1462  !
1463  ! Below is an example of passing data between the fine and coarse grids (more details on how to
1464  ! use the nesting domain update are available in routine test_update_nest_domain of
1465  ! shared/mpp/test_mpp_domains.F90).
1466  !
1467  ! <PRE>
1468  ! if( concurrent ) then
1469  ! call mpp_broadcast_domain(domain_fine)
1470  ! call mpp_broadcast_domain(domain_coarse)
1471  ! endif
1472  !
1473  ! call mpp_define_nest_domains(nest_domain, domain_fine, domain_coarse, tile_fine, tile_coarse, &
1474  ! istart_fine, iend_fine, jstart_fine, jend_fine, &
1475  ! istart_coarse, iend_coarse, jstart_coarse, jend_coarse, &
1476  ! pelist, extra_halo, name="nest_domain")
1477  ! call mpp_get_C2F_index(nest_domain, isw_f, iew_f, jsw_f, jew_f, isw_c, iew_c, jsw_c, jew_c, WEST)
1478  ! call mpp_get_C2F_index(nest_domain, ise_f, iee_f, jse_f, jee_f, ise_c, iee_c, jse_c, jee_c, EAST)
1479  ! call mpp_get_C2F_index(nest_domain, iss_f, ies_f, jss_f, jes_f, iss_c, ies_c, jss_c, jes_c, SOUTH)
1480  ! call mpp_get_C2F_index(nest_domain, isn_f, ien_f, jsn_f, jen_f, isn_c, ien_c, jsn_c, jen_c, NORTH)
1481  !
1482  ! allocate(wbuffer(isw_c:iew_c, jsw_c:jew_c,nz))
1483  ! allocate(ebuffer(ise_c:iee_c, jse_c:jee_c,nz))
1484  ! allocate(sbuffer(iss_c:ies_c, jss_c:jes_c,nz))
1485  ! allocate(nbuffer(isn_c:ien_c, jsn_c:jen_c,nz))
1486  ! call mpp_update_nest_fine(x, nest_domain, wbuffer, sbuffer, ebuffer, nbuffer)
1487  !
1488  ! call mpp_get_F2C_index(nest_domain, is_c, ie_c, js_c, je_c, is_f, ie_f, js_f, je_f)
1489  ! allocate(buffer (is_f:ie_f, js_f:je_f,nz))
1490  ! call mpp_update_nest_coarse(x, nest_domain, buffer)
1491  ! </PRE>
1492 
1493  ! </DESCRIPTION>
1494  ! <TEMPLATE>
1495  ! call mpp_define_nest_domains(nest_domain, domain_fine, domain_coarse, tile_fine, tile_coarse,
1496  ! istart_fine, iend_fine, jstart_fine, jend_fine,
1497  ! istart_coarse, iend_coarse, jstart_coarse, jend_coarse,
1498  ! pelist, extra_halo, name)
1499  ! </TEMPLATE>
1500  !
1501  ! <INOUT NAME="nest_domain">
1502  ! Holds the information to pass data between fine and coarse grid.
1503  ! </INOUT>
1504  ! <IN NAME="domain_fine">
1505  ! domain for fine grid.
1506  ! </IN>
1507  ! <IN NAME="domain_coarse">
1508  ! domain for coarse grid.
1509  ! </IN>
1510  ! <IN NAME="tile_fine">
1511  ! tile number of the fine grid. Currently this value should be 1.
1512  ! </IN>
1513  ! <IN NAME="tile_coarse">
1514  ! tile number of the coarse grid.
1515  ! </IN>
1516  ! <IN NAME="istart_fine, iend_fine, jstart_fine, jend_fine">
1517  ! index in the fine grid of the nested region
1518  ! </IN>
1519  ! <IN NAME="istart_coarse, iend_coarse, jstart_coarse, jend_coarse">
1520  ! index in the coarse grid of the nested region
1521  ! </IN>
1522  ! <IN NAME="pelist">
1523  ! List of PEs to which the domains are to be assigned.
1524  ! </IN>
1525  ! <IN NAME="extra_halo">
1526  ! optional argument. extra halo for passing data from coarse grid to fine grid.
1527  ! Default is 0 and currently only extra_halo = 0 is supported.
1528  ! </IN>
1529  ! <IN NAME="name">
1530  ! optional argument. Name of the nest domain.
1531  ! </IN>
1532  ! </INTERFACE>
1533 
1534  ! <INTERFACE NAME="mpp_get_C2F_index">
1535  ! <OVERVIEW>
1536  ! Get the index of the data passed from coarse grid to fine grid.
1537  ! </OVERVIEW>
1538  ! <DESCRIPTION>
1539  ! Get the index of the data passed from coarse grid to fine grid.
1540  ! </DESCRIPTION>
1541  ! <TEMPLATE>
1542  ! call mpp_get_C2F_index(nest_domain, is_fine, ie_fine, js_fine, je_fine,
1543  ! is_coarse, ie_coarse, js_coarse, je_coarse, dir, position)
1544  ! </TEMPLATE>
1545  !
1546  ! <IN NAME="nest_domain">
1547  ! Holds the information to pass data between fine and coarse grid.
1548  ! </IN>
1549  ! <OUT NAME="istart_fine, iend_fine, jstart_fine, jend_fine">
1550  ! index in the fine grid of the nested region
1551  ! </OUT>
1552  ! <OUT NAME="istart_coarse, iend_coarse, jstart_coarse, jend_coarse">
1553  ! index in the coarse grid of the nested region
1554  ! </OUT>
1555  ! <IN NAME="dir">
1556  ! direction of the halo update. Its value should be WEST, EAST, SOUTH or NORTH.
1557  ! </IN>
1558  ! <IN NAME="position">
1559  ! Cell position. Its value should be CENTER, EAST, NORTH or SOUTH.
1560  ! </IN>
1561  ! </INTERFACE>
1562 
1563  ! <INTERFACE NAME="mpp_get_F2C_index">
1564  ! <OVERVIEW>
1565  ! Get the index of the data passed from fine grid to coarse grid.
1566  ! </OVERVIEW>
1567  ! <DESCRIPTION>
1568  ! Get the index of the data passed from fine grid to coarse grid.
1569  ! </DESCRIPTION>
1570  ! <TEMPLATE>
1571  ! call mpp_get_F2C_index(nest_domain, is_coarse, ie_coarse, js_coarse, je_coarse,
1572  ! is_fine, ie_fine, js_fine, je_fine, position)
1573  ! </TEMPLATE>
1574  !
1575  ! <IN NAME="nest_domain">
1576  ! Holds the information to pass data between fine and coarse grid.
1577  ! </IN>
1578  ! <OUT NAME="istart_fine, iend_fine, jstart_fine, jend_fine">
1579  ! index in the fine grid of the nested region
1580  ! </OUT>
1581  ! <OUT NAME="istart_coarse, iend_coarse, jstart_coarse, jend_coarse">
1582  ! index in the coarse grid of the nested region
1583  ! </OUT>
1584  ! <IN NAME="position">
1585  ! Cell position. Its value should be CENTER, EAST, NORTH or SOUTH.
1586  ! </IN>
1587  ! </INTERFACE>
1588 
1589  ! <INTERFACE NAME="mpp_update_nest_fine">
1590  ! <OVERVIEW>
1591  ! Pass the data from coarse grid to fill the buffer to be ready to be interpolated
1592  ! onto fine grid.
1593  ! </OVERVIEW>
1594  ! <DESCRIPTION>
1595  ! Pass the data from coarse grid to fill the buffer to be ready to be interpolated
1596  ! onto fine grid.
1597  ! </DESCRIPTION>
1598  ! <TEMPLATE>
1599  ! call mpp_update_nest_fine(field, nest_domain, wbuffer, ebuffer, sbuffer, nbuffer,
1600  ! flags, complete, position, extra_halo, name, tile_count)
1601  ! </TEMPLATE>
1602  !
1603  ! <IN NAME="field">
1604  ! field on the model grid.
1605  ! </IN>
1606  ! <INOUT NAME="nest_domain">
1607  ! Holds the information to pass data between fine and coarse grid.
1608  ! </INOUT>
1609  ! <OUT NAME="wbuffer">
1610  ! west side buffer to be filled with data on coarse grid.
1611  ! </OUT>
1612  ! <OUT NAME="ebuffer">
1613  ! east side buffer to be filled with data on coarse grid.
1614  ! </OUT>
1615  ! <OUT NAME="sbuffer">
1616  ! south side buffer to be filled with data on coarse grid.
1617  ! </OUT>
1618  ! <OUT NAME="nbuffer">
1619  ! north side buffer to be filled with data on coarse grid.
1620  ! </OUT>
1621  ! <IN NAME="flags">
1622  ! optional argument. Specifies the direction of the fine grid halo buffer to be filled.
1623  ! Default value is XUPDATE+YUPDATE.
1624  ! </IN>
1625  ! <IN NAME="complete">
1626  ! optional argument. When true, do the buffer filling. Default value is true.
1627  ! </IN>
1628  ! <IN NAME="position">
1629  ! Cell position. Its value should be CENTER, EAST, NORTH or SOUTH. Default is CENTER.
1630  ! </IN>
1631  ! <IN NAME="extra_halo">
1632  ! optional argument. extra halo for passing data from coarse grid to fine grid.
1633  ! Default is 0 and currently only extra_halo = 0 is supported.
1634  ! </IN>
1635  ! <IN NAME="name">
1636  ! optional argument. Name of the nest domain.
1637  ! </IN>
1638  ! <IN NAME="tile_count">
1639  ! optional argument. Used to support multiple-tile-per-pe. Default is 1 and currently
1640  ! only tile_count = 1 is supported.
1641  ! </IN>
1642  ! </INTERFACE>
1643 
1644  ! <INTERFACE NAME="mpp_update_nest_coarse">
1645  ! <OVERVIEW>
1646  ! Pass the data from fine grid to fill the buffer to be ready to be interpolated
1647  ! onto coarse grid.
1648  ! </OVERVIEW>
1649  ! <DESCRIPTION>
1650  ! Pass the data from fine grid to fill the buffer to be ready to be interpolated
1651  ! onto coarse grid.
1652  ! </DESCRIPTION>
1653  ! <TEMPLATE>
1654  ! call mpp_update_nest_coarse(field, nest_domain, buffer, complete, position, name, tile_count)
1655  ! </TEMPLATE>
1656  !
1657  ! <IN NAME="field">
1658  ! field on the model grid.
1659  ! </IN>
1660  ! <INOUT NAME="nest_domain">
1661  ! Holds the information to pass data between fine and coarse grid.
1662  ! </INOUT>
1663  ! <OUT NAME="buffer">
1664  ! buffer to be filled with data on coarse grid.
1665  ! </OUT>
1666  ! <IN NAME="complete">
1667  ! optional argument. When true, do the buffer filling. Default value is true.
1668  ! </IN>
1669  ! Cell position. Its value should be CENTER, EAST, NORTH or SOUTH. Default is CENTER.
1670  ! Cell position. It value should be CENTER, EAST, NORTH or SOUTH. Default is CENTER.
1671  ! </IN>
1672  ! <IN NAME="name">
1673  ! optional argument. Name of the nest domain.
1674  ! </IN>
1675  ! <IN NAME="tile_count">
1676  ! optional argument. Used to support multiple-tile-per-pe. Default is 1 and currently
1677  ! only tile_count = 1 is supported.
1678  ! </IN>
1679  ! </INTERFACE>
1680 
1681  interface mpp_update_nest_fine
1682  module procedure mpp_update_nest_fine_r8_2d
1683  module procedure mpp_update_nest_fine_r8_3d
1684  module procedure mpp_update_nest_fine_r8_4d
1685 #ifdef OVERLOAD_C8
1686  module procedure mpp_update_nest_fine_c8_2d
1687  module procedure mpp_update_nest_fine_c8_3d
1688  module procedure mpp_update_nest_fine_c8_4d
1689 #endif
1690 #ifndef no_8byte_integers
1691  module procedure mpp_update_nest_fine_i8_2d
1692  module procedure mpp_update_nest_fine_i8_3d
1693  module procedure mpp_update_nest_fine_i8_4d
1694 #endif
1695 #ifdef OVERLOAD_R4
1696  module procedure mpp_update_nest_fine_r4_2d
1697  module procedure mpp_update_nest_fine_r4_3d
1698  module procedure mpp_update_nest_fine_r4_4d
1699 #endif
1700 #ifdef OVERLOAD_C4
1701  module procedure mpp_update_nest_fine_c4_2d
1702  module procedure mpp_update_nest_fine_c4_3d
1703  module procedure mpp_update_nest_fine_c4_4d
1704 #endif
1705  module procedure mpp_update_nest_fine_i4_2d
1706  module procedure mpp_update_nest_fine_i4_3d
1707  module procedure mpp_update_nest_fine_i4_4d
1708  end interface
1709 
1711  module procedure mpp_do_update_nest_fine_r8_3d
1712 #ifdef OVERLOAD_C8
1713  module procedure mpp_do_update_nest_fine_c8_3d
1714 #endif
1715 #ifndef no_8byte_integers
1716  module procedure mpp_do_update_nest_fine_i8_3d
1717 #endif
1718 #ifdef OVERLOAD_R4
1719  module procedure mpp_do_update_nest_fine_r4_3d
1720 #endif
1721 #ifdef OVERLOAD_C4
1722  module procedure mpp_do_update_nest_fine_c4_3d
1723 #endif
1724  module procedure mpp_do_update_nest_fine_i4_3d
1725  end interface
1726 
1727  interface mpp_update_nest_coarse
1728  module procedure mpp_update_nest_coarse_r8_2d
1729  module procedure mpp_update_nest_coarse_r8_3d
1730  module procedure mpp_update_nest_coarse_r8_4d
1731 #ifdef OVERLOAD_C8
1732  module procedure mpp_update_nest_coarse_c8_2d
1733  module procedure mpp_update_nest_coarse_c8_3d
1734  module procedure mpp_update_nest_coarse_c8_4d
1735 #endif
1736 #ifndef no_8byte_integers
1737  module procedure mpp_update_nest_coarse_i8_2d
1738  module procedure mpp_update_nest_coarse_i8_3d
1739  module procedure mpp_update_nest_coarse_i8_4d
1740 #endif
1741 #ifdef OVERLOAD_R4
1742  module procedure mpp_update_nest_coarse_r4_2d
1743  module procedure mpp_update_nest_coarse_r4_3d
1744  module procedure mpp_update_nest_coarse_r4_4d
1745 #endif
1746 #ifdef OVERLOAD_C4
1747  module procedure mpp_update_nest_coarse_c4_2d
1748  module procedure mpp_update_nest_coarse_c4_3d
1749  module procedure mpp_update_nest_coarse_c4_4d
1750 #endif
1751  module procedure mpp_update_nest_coarse_i4_2d
1752  module procedure mpp_update_nest_coarse_i4_3d
1753  module procedure mpp_update_nest_coarse_i4_4d
1754  end interface
1755 
1757  module procedure mpp_do_update_nest_coarse_r8_3d
1758 #ifdef OVERLOAD_C8
1759  module procedure mpp_do_update_nest_coarse_c8_3d
1760 #endif
1761 #ifndef no_8byte_integers
1762  module procedure mpp_do_update_nest_coarse_i8_3d
1763 #endif
1764 #ifdef OVERLOAD_R4
1765  module procedure mpp_do_update_nest_coarse_r4_3d
1766 #endif
1767 #ifdef OVERLOAD_C4
1768  module procedure mpp_do_update_nest_coarse_c4_3d
1769 #endif
1770  module procedure mpp_do_update_nest_coarse_i4_3d
1771  end interface
1772 
1773 
1775  module procedure mpp_broadcast_domain_1
1776  module procedure mpp_broadcast_domain_2
1777  module procedure mpp_broadcast_domain_ug
1778 end interface
1779 
1780 !--------------------------------------------------------------
1781 ! for adjoint update
1782 !--------------------------------------------------------------
1784  module procedure mpp_update_domains_ad_2d_r8_2d
1785  module procedure mpp_update_domains_ad_2d_r8_3d
1786  module procedure mpp_update_domains_ad_2d_r8_4d
1787  module procedure mpp_update_domains_ad_2d_r8_5d
1788  module procedure mpp_update_domains_ad_2d_r8_2dv
1789  module procedure mpp_update_domains_ad_2d_r8_3dv
1790  module procedure mpp_update_domains_ad_2d_r8_4dv
1791  module procedure mpp_update_domains_ad_2d_r8_5dv
1792 #ifdef OVERLOAD_R4
1793  module procedure mpp_update_domains_ad_2d_r4_2d
1794  module procedure mpp_update_domains_ad_2d_r4_3d
1795  module procedure mpp_update_domains_ad_2d_r4_4d
1796  module procedure mpp_update_domains_ad_2d_r4_5d
1797  module procedure mpp_update_domains_ad_2d_r4_2dv
1798  module procedure mpp_update_domains_ad_2d_r4_3dv
1799  module procedure mpp_update_domains_ad_2d_r4_4dv
1800  module procedure mpp_update_domains_ad_2d_r4_5dv
1801 #endif
1802  end interface
1803 !
1804 
1805  interface mpp_do_update
1806  module procedure mpp_do_update_r8_3d
1807  module procedure mpp_do_update_r8_3dv
1808 #ifdef OVERLOAD_C8
1809  module procedure mpp_do_update_c8_3d
1810 #endif
1811 #ifndef no_8byte_integers
1812  module procedure mpp_do_update_i8_3d
1813 #endif
1814 #ifdef OVERLOAD_R4
1815  module procedure mpp_do_update_r4_3d
1816  module procedure mpp_do_update_r4_3dv
1817 #endif
1818 #ifdef OVERLOAD_C4
1819  module procedure mpp_do_update_c4_3d
1820 #endif
1821  module procedure mpp_do_update_i4_3d
1822  end interface
1823 
1824  interface mpp_do_check
1825  module procedure mpp_do_check_r8_3d
1826  module procedure mpp_do_check_r8_3dv
1827 #ifdef OVERLOAD_C8
1828  module procedure mpp_do_check_c8_3d
1829 #endif
1830 #ifndef no_8byte_integers
1831  module procedure mpp_do_check_i8_3d
1832 #endif
1833 #ifdef OVERLOAD_R4
1834  module procedure mpp_do_check_r4_3d
1835  module procedure mpp_do_check_r4_3dv
1836 #endif
1837 #ifdef OVERLOAD_C4
1838  module procedure mpp_do_check_c4_3d
1839 #endif
1840  module procedure mpp_do_check_i4_3d
1841  end interface
1842 
1843 
1845  module procedure mpp_pass_sg_to_ug_r8_2d
1846  module procedure mpp_pass_sg_to_ug_r8_3d
1847 #ifdef OVERLOAD_R4
1848  module procedure mpp_pass_sg_to_ug_r4_2d
1849  module procedure mpp_pass_sg_to_ug_r4_3d
1850 #endif
1851  module procedure mpp_pass_sg_to_ug_i4_2d
1852  module procedure mpp_pass_sg_to_ug_i4_3d
1853  module procedure mpp_pass_sg_to_ug_l4_2d
1854  module procedure mpp_pass_sg_to_ug_l4_3d
1855  end interface
1856 
1858  module procedure mpp_pass_ug_to_sg_r8_2d
1859  module procedure mpp_pass_ug_to_sg_r8_3d
1860 #ifdef OVERLOAD_R4
1861  module procedure mpp_pass_ug_to_sg_r4_2d
1862  module procedure mpp_pass_ug_to_sg_r4_3d
1863 #endif
1864  module procedure mpp_pass_ug_to_sg_i4_2d
1865  module procedure mpp_pass_ug_to_sg_i4_3d
1866  module procedure mpp_pass_ug_to_sg_l4_2d
1867  module procedure mpp_pass_ug_to_sg_l4_3d
1868  end interface
1869 
1870 
1871 !!$ module procedure mpp_do_update_ad_i4_3d
1872 !!$ end interface
1873 !
1875  module procedure mpp_do_update_ad_r8_3d
1876  module procedure mpp_do_update_ad_r8_3dv
1877 #ifdef OVERLOAD_R4
1878  module procedure mpp_do_update_ad_r4_3d
1879  module procedure mpp_do_update_ad_r4_3dv
1880 #endif
1881  end interface
1882 !
1883 
1884 ! <INTERFACE NAME="mpp_get_boundary">
1885 ! <OVERVIEW>
1886 ! Get the boundary data for symmetric domain when the data is at C, E, or N-cell center
1887 ! </OVERVIEW>
1888 ! <DESCRIPTION>
1889 ! <TT>mpp_get_boundary</TT> is used to get the boundary data for symmetric domain
1890 ! when the data is at C, E, or N-cell center. For the cubic grid, the data should
1891 ! always be at the C-cell center.
1892 ! </DESCRIPTION>
1893 ! <TEMPLATE>
1894 ! call mpp_get_boundary
1895 ! </TEMPLATE>
1899 ! </INTERFACE>
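! A minimal usage sketch (added for illustration; the field x, the buffer extents
! and the keyword names ebuffer/sbuffer are assumptions to be checked against the
! interface bodies below):
! <PRE>
! real :: x(isd:ied+1, jsd:jed+1)              ! C-cell (corner) data, symmetric domain
! real :: ebuffer(jsc:jec+1), sbuffer(isc:iec+1)
! call mpp_get_boundary(x, domain, ebuffer=ebuffer, sbuffer=sbuffer, position=CORNER)
! </PRE>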
1900  interface mpp_get_boundary
1901  module procedure mpp_get_boundary_r8_2d
1902  module procedure mpp_get_boundary_r8_3d
1903 ! module procedure mpp_get_boundary_r8_4d
1904 ! module procedure mpp_get_boundary_r8_5d
1905  module procedure mpp_get_boundary_r8_2dv
1906  module procedure mpp_get_boundary_r8_3dv
1907 ! module procedure mpp_get_boundary_r8_4dv
1908 ! module procedure mpp_get_boundary_r8_5dv
1909 #ifdef OVERLOAD_R4
1910  module procedure mpp_get_boundary_r4_2d
1911  module procedure mpp_get_boundary_r4_3d
1912 ! module procedure mpp_get_boundary_r4_4d
1913 ! module procedure mpp_get_boundary_r4_5d
1914  module procedure mpp_get_boundary_r4_2dv
1915  module procedure mpp_get_boundary_r4_3dv
1916 ! module procedure mpp_get_boundary_r4_4dv
1917 ! module procedure mpp_get_boundary_r4_5dv
1918 #endif
1919  end interface
1920 
1922  module procedure mpp_get_boundary_ad_r8_2d
1923  module procedure mpp_get_boundary_ad_r8_3d
1924  module procedure mpp_get_boundary_ad_r8_2dv
1925  module procedure mpp_get_boundary_ad_r8_3dv
1926 #ifdef OVERLOAD_R4
1927  module procedure mpp_get_boundary_ad_r4_2d
1928  module procedure mpp_get_boundary_ad_r4_3d
1929  module procedure mpp_get_boundary_ad_r4_2dv
1930  module procedure mpp_get_boundary_ad_r4_3dv
1931 #endif
1932  end interface
1933 
1935  module procedure mpp_do_get_boundary_r8_3d
1936  module procedure mpp_do_get_boundary_r8_3dv
1937 #ifdef OVERLOAD_R4
1938  module procedure mpp_do_get_boundary_r4_3d
1939  module procedure mpp_do_get_boundary_r4_3dv
1940 #endif
1941  end interface
1942 
1944  module procedure mpp_do_get_boundary_ad_r8_3d
1945  module procedure mpp_do_get_boundary_ad_r8_3dv
1946 #ifdef OVERLOAD_R4
1947  module procedure mpp_do_get_boundary_ad_r4_3d
1948  module procedure mpp_do_get_boundary_ad_r4_3dv
1949 #endif
1950  end interface
1951 
1952 ! <INTERFACE NAME="mpp_redistribute">
1953 ! <OVERVIEW>
1954 ! Reorganization of distributed global arrays.
1955 ! </OVERVIEW>
1956 ! <DESCRIPTION>
1957 ! <TT>mpp_redistribute</TT> is used to reorganize a distributed
1958 ! array. <TT>MPP_TYPE_</TT> can be of type <TT>integer</TT>,
1959 ! <TT>complex</TT>, or <TT>real</TT>; of 4-byte or 8-byte kind; of rank
1960 ! up to 5.
1961 ! </DESCRIPTION>
1962 ! <TEMPLATE>
1963 ! call mpp_redistribute( domain_in, field_in, domain_out, field_out )
1964 ! </TEMPLATE>
1965 ! <IN NAME="field_in" TYPE="MPP_TYPE_">
1966 ! <TT>field_in</TT> is dimensioned on the data domain of <TT>domain_in</TT>.
1967 ! </IN>
1968 ! <OUT NAME="field_out" TYPE="MPP_TYPE_">
1969 ! <TT>field_out</TT> on the data domain of <TT>domain_out</TT>.
1970 ! </OUT>
1971 ! </INTERFACE>
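! A minimal usage sketch (added for illustration; the domain and field names are
! placeholders, and the fields are sized on the data domains as required by the
! interface):
! <PRE>
! integer :: isd_in, ied_in, jsd_in, jed_in, isd_out, ied_out, jsd_out, jed_out
! real, allocatable :: field_in(:,:), field_out(:,:)
! call mpp_get_data_domain(domain_in,  isd_in,  ied_in,  jsd_in,  jed_in )
! call mpp_get_data_domain(domain_out, isd_out, ied_out, jsd_out, jed_out)
! allocate(field_in (isd_in :ied_in , jsd_in :jed_in ))
! allocate(field_out(isd_out:ied_out, jsd_out:jed_out))
! call mpp_redistribute(domain_in, field_in, domain_out, field_out)
! </PRE>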
1972  interface mpp_redistribute
1973  module procedure mpp_redistribute_r8_2d
1974  module procedure mpp_redistribute_r8_3d
1975  module procedure mpp_redistribute_r8_4d
1976  module procedure mpp_redistribute_r8_5d
1977 #ifdef OVERLOAD_C8
1978  module procedure mpp_redistribute_c8_2d
1979  module procedure mpp_redistribute_c8_3d
1980  module procedure mpp_redistribute_c8_4d
1981  module procedure mpp_redistribute_c8_5d
1982 #endif
1983 #ifndef no_8byte_integers
1984  module procedure mpp_redistribute_i8_2d
1985  module procedure mpp_redistribute_i8_3d
1986  module procedure mpp_redistribute_i8_4d
1987  module procedure mpp_redistribute_i8_5d
1988 !!$ module procedure mpp_redistribute_l8_2D
1989 !!$ module procedure mpp_redistribute_l8_3D
1990 !!$ module procedure mpp_redistribute_l8_4D
1991 !!$ module procedure mpp_redistribute_l8_5D
1992 #endif
1993 #ifdef OVERLOAD_R4
1994  module procedure mpp_redistribute_r4_2d
1995  module procedure mpp_redistribute_r4_3d
1996  module procedure mpp_redistribute_r4_4d
1997  module procedure mpp_redistribute_r4_5d
1998 #endif
1999 #ifdef OVERLOAD_C4
2000  module procedure mpp_redistribute_c4_2d
2001  module procedure mpp_redistribute_c4_3d
2002  module procedure mpp_redistribute_c4_4d
2003  module procedure mpp_redistribute_c4_5d
2004 #endif
2005  module procedure mpp_redistribute_i4_2d
2006  module procedure mpp_redistribute_i4_3d
2007  module procedure mpp_redistribute_i4_4d
2008  module procedure mpp_redistribute_i4_5d
2009 !!$ module procedure mpp_redistribute_l4_2D
2010 !!$ module procedure mpp_redistribute_l4_3D
2011 !!$ module procedure mpp_redistribute_l4_4D
2012 !!$ module procedure mpp_redistribute_l4_5D
2013  end interface
2014 
2016  module procedure mpp_do_redistribute_r8_3d
2017 #ifdef OVERLOAD_C8
2018  module procedure mpp_do_redistribute_c8_3d
2019 #endif
2020 #ifndef no_8byte_integers
2021  module procedure mpp_do_redistribute_i8_3d
2022  module procedure mpp_do_redistribute_l8_3d
2023 #endif
2024 #ifdef OVERLOAD_R4
2025  module procedure mpp_do_redistribute_r4_3d
2026 #endif
2027 #ifdef OVERLOAD_C4
2028  module procedure mpp_do_redistribute_c4_3d
2029 #endif
2030  module procedure mpp_do_redistribute_i4_3d
2031  module procedure mpp_do_redistribute_l4_3d
2032  end interface
2033 
2034 
2035 ! <INTERFACE NAME="mpp_check_field">
2036 ! <OVERVIEW>
2037 ! Parallel checking between two ensembles which run
2038 ! on different sets of PEs at the same time.
2039 ! </OVERVIEW>
2040 ! <DESCRIPTION>
2041 ! There are two forms for the <TT>mpp_check_field</TT> call. The 2D
2042 ! version is generally the one to use; the 3D version is built by repeated calls to the
2043 ! 2D version.
2044 ! </DESCRIPTION>
2045 ! <TEMPLATE>
2046 ! call mpp_check_field(field_in, pelist1, pelist2, domain, mesg, &
2047 ! w_halo, s_halo, e_halo, n_halo, force_abort )
2048 ! </TEMPLATE>
2049 ! <IN NAME="field_in" >
2050 ! Field to be checked
2051 ! </IN>
2052 ! <IN NAME="pelist1, pelist2">
2053 ! Pelist of the two ensembles to be compared
2054 ! </IN>
2055 ! <IN NAME="domain">
2056 ! Domain of current pe
2057 ! </IN>
2058 ! <IN NAME="mesg" >
2059 ! Message to be printed out
2060 ! </IN>
2061 ! <IN NAME="w_halo, s_halo, e_halo, n_halo">
2062 ! Halo size to be checked. Default value is 0.
2063 ! </IN>
2064 ! <IN NAME="force_abort">
2065 ! When true, abort program when any difference found. Default value is false.
2066 ! </IN>
2067 ! </INTERFACE>
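! A minimal usage sketch (added for illustration; sst, pelist1 and pelist2 are
! placeholders for a field and the two ensemble pelists):
! <PRE>
! call mpp_check_field(sst, pelist1, pelist2, domain, 'SST comparison', &
!                      w_halo=2, s_halo=2, e_halo=2, n_halo=2, force_abort=.true.)
! </PRE>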
2068 
2069  interface mpp_check_field
2070  module procedure mpp_check_field_2d
2071  module procedure mpp_check_field_3d
2072  end interface
2073 
2074 !***********************************************************************
2075 !
2076 ! public interface from mpp_domains_reduce.h
2077 !
2078 !***********************************************************************
2079 
2080 ! <INTERFACE NAME="mpp_global_field">
2081 ! <OVERVIEW>
2082 ! Fill in a global array from domain-decomposed arrays.
2083 ! </OVERVIEW>
2084 ! <DESCRIPTION>
2085 ! <TT>mpp_global_field</TT> is used to get an entire
2086 ! domain-decomposed array on each PE. <TT>MPP_TYPE_</TT> can be of type
2087 ! <TT>complex</TT>, <TT>integer</TT>, <TT>logical</TT> or <TT>real</TT>;
2088 ! of 4-byte or 8-byte kind; of rank up to 5.
2089 !
2090 ! All PEs in a domain decomposition must call
2091 ! <TT>mpp_global_field</TT>, and each will have a complete global field
2092 ! at the end. Please note that a global array of rank 3 or higher could
2093 ! occupy a lot of memory.
2094 ! </DESCRIPTION>
2095 ! <TEMPLATE>
2096 ! call mpp_global_field( domain, local, global, flags )
2097 ! </TEMPLATE>
2098 ! <IN NAME="domain" TYPE="type(domain2D)"></IN>
2099 ! <IN NAME="local" TYPE="MPP_TYPE_">
2100 ! <TT>local</TT> is dimensioned on either the compute domain or the
2101 ! data domain of <TT>domain</TT>.
2102 ! </IN>
2103 ! <OUT NAME="global" TYPE="MPP_TYPE_">
2104 ! <TT>global</TT> is dimensioned on the corresponding global domain.
2105 ! </OUT>
2106 ! <IN NAME="flags" TYPE="integer">
2107 ! <TT>flags</TT> can be given the value <TT>XONLY</TT> or
2108 ! <TT>YONLY</TT>, to specify a globalization on one axis only.
2109 ! </IN>
2110 ! </INTERFACE>
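! A minimal usage sketch (added for illustration; local is a placeholder for a
! field dimensioned on the compute or data domain):
! <PRE>
! integer :: isg, ieg, jsg, jeg
! real, allocatable :: global(:,:)
! call mpp_get_global_domain(domain, isg, ieg, jsg, jeg)
! allocate(global(isg:ieg, jsg:jeg))
! call mpp_global_field(domain, local, global)
! </PRE>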
2111  interface mpp_global_field
2112  module procedure mpp_global_field2d_r8_2d
2113  module procedure mpp_global_field2d_r8_3d
2114  module procedure mpp_global_field2d_r8_4d
2115  module procedure mpp_global_field2d_r8_5d
2116 #ifdef OVERLOAD_C8
2117  module procedure mpp_global_field2d_c8_2d
2118  module procedure mpp_global_field2d_c8_3d
2119  module procedure mpp_global_field2d_c8_4d
2120  module procedure mpp_global_field2d_c8_5d
2121 #endif
2122 #ifndef no_8byte_integers
2123  module procedure mpp_global_field2d_i8_2d
2124  module procedure mpp_global_field2d_i8_3d
2125  module procedure mpp_global_field2d_i8_4d
2126  module procedure mpp_global_field2d_i8_5d
2127  module procedure mpp_global_field2d_l8_2d
2128  module procedure mpp_global_field2d_l8_3d
2129  module procedure mpp_global_field2d_l8_4d
2130  module procedure mpp_global_field2d_l8_5d
2131 #endif
2132 #ifdef OVERLOAD_R4
2133  module procedure mpp_global_field2d_r4_2d
2134  module procedure mpp_global_field2d_r4_3d
2135  module procedure mpp_global_field2d_r4_4d
2136  module procedure mpp_global_field2d_r4_5d
2137 #endif
2138 #ifdef OVERLOAD_C4
2139  module procedure mpp_global_field2d_c4_2d
2140  module procedure mpp_global_field2d_c4_3d
2141  module procedure mpp_global_field2d_c4_4d
2142  module procedure mpp_global_field2d_c4_5d
2143 #endif
2144  module procedure mpp_global_field2d_i4_2d
2145  module procedure mpp_global_field2d_i4_3d
2146  module procedure mpp_global_field2d_i4_4d
2147  module procedure mpp_global_field2d_i4_5d
2148  module procedure mpp_global_field2d_l4_2d
2149  module procedure mpp_global_field2d_l4_3d
2150  module procedure mpp_global_field2d_l4_4d
2151  module procedure mpp_global_field2d_l4_5d
2152  end interface
2153 
2155  module procedure mpp_global_field2d_r8_2d_ad
2156  module procedure mpp_global_field2d_r8_3d_ad
2157  module procedure mpp_global_field2d_r8_4d_ad
2158  module procedure mpp_global_field2d_r8_5d_ad
2159 #ifdef OVERLOAD_C8
2160  module procedure mpp_global_field2d_c8_2d_ad
2161  module procedure mpp_global_field2d_c8_3d_ad
2162  module procedure mpp_global_field2d_c8_4d_ad
2163  module procedure mpp_global_field2d_c8_5d_ad
2164 #endif
2165 #ifndef no_8byte_integers
2166  module procedure mpp_global_field2d_i8_2d_ad
2167  module procedure mpp_global_field2d_i8_3d_ad
2168  module procedure mpp_global_field2d_i8_4d_ad
2169  module procedure mpp_global_field2d_i8_5d_ad
2170  module procedure mpp_global_field2d_l8_2d_ad
2171  module procedure mpp_global_field2d_l8_3d_ad
2172  module procedure mpp_global_field2d_l8_4d_ad
2173  module procedure mpp_global_field2d_l8_5d_ad
2174 #endif
2175 #ifdef OVERLOAD_R4
2176  module procedure mpp_global_field2d_r4_2d_ad
2177  module procedure mpp_global_field2d_r4_3d_ad
2178  module procedure mpp_global_field2d_r4_4d_ad
2179  module procedure mpp_global_field2d_r4_5d_ad
2180 #endif
2181 #ifdef OVERLOAD_C4
2182  module procedure mpp_global_field2d_c4_2d_ad
2183  module procedure mpp_global_field2d_c4_3d_ad
2184  module procedure mpp_global_field2d_c4_4d_ad
2185  module procedure mpp_global_field2d_c4_5d_ad
2186 #endif
2187  module procedure mpp_global_field2d_i4_2d_ad
2188  module procedure mpp_global_field2d_i4_3d_ad
2189  module procedure mpp_global_field2d_i4_4d_ad
2190  module procedure mpp_global_field2d_i4_5d_ad
2191  module procedure mpp_global_field2d_l4_2d_ad
2192  module procedure mpp_global_field2d_l4_3d_ad
2193  module procedure mpp_global_field2d_l4_4d_ad
2194  module procedure mpp_global_field2d_l4_5d_ad
2195  end interface
2196 
2198  module procedure mpp_do_global_field2d_r8_3d
2199 #ifdef OVERLOAD_C8
2200  module procedure mpp_do_global_field2d_c8_3d
2201 #endif
2202 #ifndef no_8byte_integers
2203  module procedure mpp_do_global_field2d_i8_3d
2204  module procedure mpp_do_global_field2d_l8_3d
2205 #endif
2206 #ifdef OVERLOAD_R4
2207  module procedure mpp_do_global_field2d_r4_3d
2208 #endif
2209 #ifdef OVERLOAD_C4
2210  module procedure mpp_do_global_field2d_c4_3d
2211 #endif
2212  module procedure mpp_do_global_field2d_i4_3d
2213  module procedure mpp_do_global_field2d_l4_3d
2214  end interface
2215 
2217  module procedure mpp_do_global_field2d_a2a_r8_3d
2218 #ifdef OVERLOAD_C8
2219  module procedure mpp_do_global_field2d_a2a_c8_3d
2220 #endif
2221 #ifndef no_8byte_integers
2222  module procedure mpp_do_global_field2d_a2a_i8_3d
2223  module procedure mpp_do_global_field2d_a2a_l8_3d
2224 #endif
2225 #ifdef OVERLOAD_R4
2226  module procedure mpp_do_global_field2d_a2a_r4_3d
2227 #endif
2228 #ifdef OVERLOAD_C4
2229  module procedure mpp_do_global_field2d_a2a_c4_3d
2230 #endif
2231  module procedure mpp_do_global_field2d_a2a_i4_3d
2232  module procedure mpp_do_global_field2d_a2a_l4_3d
2233  end interface
2234 
2236  module procedure mpp_global_field2d_ug_r8_2d
2237  module procedure mpp_global_field2d_ug_r8_3d
2238  module procedure mpp_global_field2d_ug_r8_4d
2239  module procedure mpp_global_field2d_ug_r8_5d
2240 #ifndef no_8byte_integers
2241  module procedure mpp_global_field2d_ug_i8_2d
2242  module procedure mpp_global_field2d_ug_i8_3d
2243  module procedure mpp_global_field2d_ug_i8_4d
2244  module procedure mpp_global_field2d_ug_i8_5d
2245 #endif
2246 #ifdef OVERLOAD_R4
2247  module procedure mpp_global_field2d_ug_r4_2d
2248  module procedure mpp_global_field2d_ug_r4_3d
2249  module procedure mpp_global_field2d_ug_r4_4d
2250  module procedure mpp_global_field2d_ug_r4_5d
2251 #endif
2252  module procedure mpp_global_field2d_ug_i4_2d
2253  module procedure mpp_global_field2d_ug_i4_3d
2254  module procedure mpp_global_field2d_ug_i4_4d
2255  module procedure mpp_global_field2d_ug_i4_5d
2256  end interface
2257 
2259  module procedure mpp_do_global_field2d_r8_3d_ad
2260 #ifdef OVERLOAD_C8
2261  module procedure mpp_do_global_field2d_c8_3d_ad
2262 #endif
2263 #ifndef no_8byte_integers
2264  module procedure mpp_do_global_field2d_i8_3d_ad
2265  module procedure mpp_do_global_field2d_l8_3d_ad
2266 #endif
2267 #ifdef OVERLOAD_R4
2268  module procedure mpp_do_global_field2d_r4_3d_ad
2269 #endif
2270 #ifdef OVERLOAD_C4
2271  module procedure mpp_do_global_field2d_c4_3d_ad
2272 #endif
2273  module procedure mpp_do_global_field2d_i4_3d_ad
2274  module procedure mpp_do_global_field2d_l4_3d_ad
2275  end interface
2276 
2277 ! <INTERFACE NAME="mpp_global_max">
2278 ! <OVERVIEW>
2279 ! Global max/min of domain-decomposed arrays.
2280 ! </OVERVIEW>
2281 ! <DESCRIPTION>
2282 ! <TT>mpp_global_max</TT> is used to get the maximum value of a
2283 ! domain-decomposed array on each PE. <TT>MPP_TYPE_</TT> can be of type
2284 ! <TT>integer</TT> or <TT>real</TT>; of 4-byte or 8-byte kind; of rank
2285 ! up to 5. The dimension of <TT>locus</TT> must equal the rank of
2286 ! <TT>field</TT>.
2287 !
2288 ! All PEs in a domain decomposition must call
2289 ! <TT>mpp_global_max</TT>, and each will have the result upon exit.
2290 !
2291 ! The function <TT>mpp_global_min</TT>, with an identical syntax, is
2292 ! also available.
2293 ! </DESCRIPTION>
2294 ! <TEMPLATE>
2295 ! mpp_global_max( domain, field, locus )
2296 ! </TEMPLATE>
2297 ! <IN NAME="domain" TYPE="type(domain2D)"></IN>
2298 ! <IN NAME="field" TYPE="MPP_TYPE_">
2299 ! <TT>field</TT> is dimensioned on either the compute domain or the
2300 ! data domain of <TT>domain</TT>.
2301 ! </IN>
2302 ! <OUT NAME="locus" TYPE="integer" DIM="(:)">
2303 ! <TT>locus</TT>, if present, can be used to retrieve the location of
2304 ! the maximum (as in the <TT>MAXLOC</TT> intrinsic of f90).
2305 ! </OUT>
2306 ! </INTERFACE>
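! A minimal usage sketch (added for illustration; field is a placeholder, and
! mpp_global_max/mpp_global_min are used as functions, consistent with the module
! procedures below; locus has the same rank as field):
! <PRE>
! real    :: gmax, gmin
! integer :: locus(2)                       ! for a 2-d field
! gmax = mpp_global_max(domain, field, locus)
! gmin = mpp_global_min(domain, field)
! </PRE>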
2307 
2308  interface mpp_global_max
2309  module procedure mpp_global_max_r8_2d
2310  module procedure mpp_global_max_r8_3d
2311  module procedure mpp_global_max_r8_4d
2312  module procedure mpp_global_max_r8_5d
2313 #ifdef OVERLOAD_R4
2314  module procedure mpp_global_max_r4_2d
2315  module procedure mpp_global_max_r4_3d
2316  module procedure mpp_global_max_r4_4d
2317  module procedure mpp_global_max_r4_5d
2318 #endif
2319 #ifndef no_8byte_integers
2320  module procedure mpp_global_max_i8_2d
2321  module procedure mpp_global_max_i8_3d
2322  module procedure mpp_global_max_i8_4d
2323  module procedure mpp_global_max_i8_5d
2324 #endif
2325  module procedure mpp_global_max_i4_2d
2326  module procedure mpp_global_max_i4_3d
2327  module procedure mpp_global_max_i4_4d
2328  module procedure mpp_global_max_i4_5d
2329  end interface
2330 
2331  interface mpp_global_min
2332  module procedure mpp_global_min_r8_2d
2333  module procedure mpp_global_min_r8_3d
2334  module procedure mpp_global_min_r8_4d
2335  module procedure mpp_global_min_r8_5d
2336 #ifdef OVERLOAD_R4
2337  module procedure mpp_global_min_r4_2d
2338  module procedure mpp_global_min_r4_3d
2339  module procedure mpp_global_min_r4_4d
2340  module procedure mpp_global_min_r4_5d
2341 #endif
2342 #ifndef no_8byte_integers
2343  module procedure mpp_global_min_i8_2d
2344  module procedure mpp_global_min_i8_3d
2345  module procedure mpp_global_min_i8_4d
2346  module procedure mpp_global_min_i8_5d
2347 #endif
2348  module procedure mpp_global_min_i4_2d
2349  module procedure mpp_global_min_i4_3d
2350  module procedure mpp_global_min_i4_4d
2351  module procedure mpp_global_min_i4_5d
2352  end interface
2353 
2354 ! <INTERFACE NAME="mpp_global_sum">
2355 ! <OVERVIEW>
2356 ! Global sum of domain-decomposed arrays.
2357 ! </OVERVIEW>
2358 ! <DESCRIPTION>
2359 ! <TT>mpp_global_sum</TT> is used to get the sum of a
2360 ! domain-decomposed array on each PE. <TT>MPP_TYPE_</TT> can be of type
2361 ! <TT>integer</TT>, <TT>complex</TT>, or <TT>real</TT>; of 4-byte or
2362 ! 8-byte kind; of rank up to 5.
2363 ! </DESCRIPTION>
2364 ! <TEMPLATE>
2365 ! call mpp_global_sum( domain, field, flags )
2366 ! </TEMPLATE>
2367 ! <IN NAME="domain" TYPE="type(domain2D)"></IN>
2368 ! <IN NAME="field" TYPE="MPP_TYPE_">
2369 ! <TT>field</TT> is dimensioned on either the compute domain or the
2370 ! data domain of <TT>domain</TT>.
2371 ! </IN>
2372 ! <IN NAME="flags" TYPE="integer">
2373 ! <TT>flags</TT>, if present, must have the value
2374 ! <TT>BITWISE_EXACT_SUM</TT>. This produces a sum that is guaranteed to
2375 ! produce the identical result irrespective of how the domain is
2376 ! decomposed. This method does the sum first along the ranks beyond 2,
2377 ! and then calls <LINK
2378 ! SRC="#mpp_global_field"><TT>mpp_global_field</TT></LINK> to produce a
2379 ! global 2D array which is then summed. The default method, which is
2380 ! considerably faster, does a local sum followed by <LINK
2381 ! SRC="mpp.html#mpp_sum"><TT>mpp_sum</TT></LINK> across the domain
2382 ! decomposition.
2383 ! </IN>
2384 ! <NOTE>
2385 ! All PEs in a domain decomposition must call
2386 ! <TT>mpp_global_sum</TT>, and each will have the result upon exit.
2387 ! </NOTE>
2388 ! </INTERFACE>
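! A minimal usage sketch (added for illustration; field is a placeholder, and
! mpp_global_sum is used as a function, consistent with the module procedures
! below):
! <PRE>
! real :: gsum, gsum_exact
! gsum       = mpp_global_sum(domain, field)
! gsum_exact = mpp_global_sum(domain, field, flags=BITWISE_EXACT_SUM)
! </PRE>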
2389 
2390  interface mpp_global_sum
2391  module procedure mpp_global_sum_r8_2d
2392  module procedure mpp_global_sum_r8_3d
2393  module procedure mpp_global_sum_r8_4d
2394  module procedure mpp_global_sum_r8_5d
2395 #ifdef OVERLOAD_C8
2396  module procedure mpp_global_sum_c8_2d
2397  module procedure mpp_global_sum_c8_3d
2398  module procedure mpp_global_sum_c8_4d
2399  module procedure mpp_global_sum_c8_5d
2400 #endif
2401 #ifdef OVERLOAD_R4
2402  module procedure mpp_global_sum_r4_2d
2403  module procedure mpp_global_sum_r4_3d
2404  module procedure mpp_global_sum_r4_4d
2405  module procedure mpp_global_sum_r4_5d
2406 #endif
2407 #ifdef OVERLOAD_C4
2408  module procedure mpp_global_sum_c4_2d
2409  module procedure mpp_global_sum_c4_3d
2410  module procedure mpp_global_sum_c4_4d
2411  module procedure mpp_global_sum_c4_5d
2412 #endif
2413 #ifndef no_8byte_integers
2414  module procedure mpp_global_sum_i8_2d
2415  module procedure mpp_global_sum_i8_3d
2416  module procedure mpp_global_sum_i8_4d
2417  module procedure mpp_global_sum_i8_5d
2418 #endif
2419  module procedure mpp_global_sum_i4_2d
2420  module procedure mpp_global_sum_i4_3d
2421  module procedure mpp_global_sum_i4_4d
2422  module procedure mpp_global_sum_i4_5d
2423  end interface
2424 
2425 !gag
2427  module procedure mpp_global_sum_tl_r8_2d
2428  module procedure mpp_global_sum_tl_r8_3d
2429  module procedure mpp_global_sum_tl_r8_4d
2430  module procedure mpp_global_sum_tl_r8_5d
2431 #ifdef OVERLOAD_C8
2432  module procedure mpp_global_sum_tl_c8_2d
2433  module procedure mpp_global_sum_tl_c8_3d
2434  module procedure mpp_global_sum_tl_c8_4d
2435  module procedure mpp_global_sum_tl_c8_5d
2436 #endif
2437 #ifdef OVERLOAD_R4
2438  module procedure mpp_global_sum_tl_r4_2d
2439  module procedure mpp_global_sum_tl_r4_3d
2440  module procedure mpp_global_sum_tl_r4_4d
2441  module procedure mpp_global_sum_tl_r4_5d
2442 #endif
2443 #ifdef OVERLOAD_C4
2444  module procedure mpp_global_sum_tl_c4_2d
2445  module procedure mpp_global_sum_tl_c4_3d
2446  module procedure mpp_global_sum_tl_c4_4d
2447  module procedure mpp_global_sum_tl_c4_5d
2448 #endif
2449 #ifndef no_8byte_integers
2450  module procedure mpp_global_sum_tl_i8_2d
2451  module procedure mpp_global_sum_tl_i8_3d
2452  module procedure mpp_global_sum_tl_i8_4d
2453  module procedure mpp_global_sum_tl_i8_5d
2454 #endif
2455  module procedure mpp_global_sum_tl_i4_2d
2456  module procedure mpp_global_sum_tl_i4_3d
2457  module procedure mpp_global_sum_tl_i4_4d
2458  module procedure mpp_global_sum_tl_i4_5d
2459  end interface
2460 !gag
2461 
2462 !bnc
2464  module procedure mpp_global_sum_ad_r8_2d
2465  module procedure mpp_global_sum_ad_r8_3d
2466  module procedure mpp_global_sum_ad_r8_4d
2467  module procedure mpp_global_sum_ad_r8_5d
2468 #ifdef OVERLOAD_C8
2469  module procedure mpp_global_sum_ad_c8_2d
2470  module procedure mpp_global_sum_ad_c8_3d
2471  module procedure mpp_global_sum_ad_c8_4d
2472  module procedure mpp_global_sum_ad_c8_5d
2473 #endif
2474 #ifdef OVERLOAD_R4
2475  module procedure mpp_global_sum_ad_r4_2d
2476  module procedure mpp_global_sum_ad_r4_3d
2477  module procedure mpp_global_sum_ad_r4_4d
2478  module procedure mpp_global_sum_ad_r4_5d
2479 #endif
2480 #ifdef OVERLOAD_C4
2481  module procedure mpp_global_sum_ad_c4_2d
2482  module procedure mpp_global_sum_ad_c4_3d
2483  module procedure mpp_global_sum_ad_c4_4d
2484  module procedure mpp_global_sum_ad_c4_5d
2485 #endif
2486 #ifndef no_8byte_integers
2487  module procedure mpp_global_sum_ad_i8_2d
2488  module procedure mpp_global_sum_ad_i8_3d
2489  module procedure mpp_global_sum_ad_i8_4d
2490  module procedure mpp_global_sum_ad_i8_5d
2491 #endif
2492  module procedure mpp_global_sum_ad_i4_2d
2493  module procedure mpp_global_sum_ad_i4_3d
2494  module procedure mpp_global_sum_ad_i4_4d
2495  module procedure mpp_global_sum_ad_i4_5d
2496  end interface
2497 !bnc
2498 
2499 !***********************************************************************
2500 !
2501 ! public interface from mpp_domain_util.h
2502 !
2503 !***********************************************************************
2504 
2505  ! <INTERFACE NAME="mpp_get_neighbor_pe">
2506  ! <OVERVIEW>
2507  ! Retrieve PE number of a neighboring domain.
2508  ! </OVERVIEW>
2509  ! <DESCRIPTION>
2510  ! Given a 1-D or 2-D domain decomposition, this call allows users to retrieve
2511  ! the PE number of an adjacent PE-domain while taking into account that the
2512  ! domain may have holes (masked) and/or have cyclic boundary conditions and/or a
2513  ! folded edge. Which PE-domain will be retrieved will depend on "direction":
2514  ! +1 (right) or -1 (left) for a 1-D domain decomposition and either NORTH, SOUTH,
2515  ! EAST, WEST, NORTH_EAST, SOUTH_EAST, SOUTH_WEST, or NORTH_WEST for a 2-D
2516  ! decomposition. If no neighboring domain exists (masked domain), then the
2517  ! returned "pe" value will be set to NULL_PE.
2518  ! </DESCRIPTION>
2519  ! <TEMPLATE>
2520  ! call mpp_get_neighbor_pe( domain1d, direction=+1 , pe)
2521  ! call mpp_get_neighbor_pe( domain2d, direction=NORTH, pe)
2522  ! </TEMPLATE>
2523  ! </INTERFACE>
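! A minimal usage sketch (added for illustration):
! <PRE>
! integer :: pe_north
! call mpp_get_neighbor_pe(domain, NORTH, pe_north)
! if( pe_north .NE. NULL_PE ) then
!    ! a neighbour exists to the north; it is safe to communicate with pe_north
! endif
! </PRE>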
2524  interface mpp_get_neighbor_pe
2525  module procedure mpp_get_neighbor_pe_1d
2526  module procedure mpp_get_neighbor_pe_2d
2527  end interface
2528  ! <INTERFACE NAME="operator">
2529  ! <OVERVIEW>
2530  ! Equality/inequality operators for domaintypes.
2531  ! </OVERVIEW>
2532  ! <DESCRIPTION>
2533  ! The module provides public operators to check for
2534  ! equality/inequality of domaintypes, e.g:
2535  !
2536  ! <PRE>
2537  ! type(domain1D) :: a, b
2538  ! type(domain2D) :: c, d
2539  ! ...
2540  ! if( a.NE.b )then
2541  ! ...
2542  ! end if
2543  ! if( c==d )then
2544  ! ...
2545  ! end if
2546  ! </PRE>
2547  !
2548  ! Domains are considered equal if and only if the start and end
2549  ! indices of each of their component global, data and compute domains
2550  ! are equal.
2551  ! </DESCRIPTION>
2552  ! </INTERFACE>
2553  interface operator(.EQ.)
2554  module procedure mpp_domain1d_eq
2555  module procedure mpp_domain2d_eq
2556  module procedure mpp_domainug_eq
2557  end interface
2558 
2559  interface operator(.NE.)
2560  module procedure mpp_domain1d_ne
2561  module procedure mpp_domain2d_ne
2562  module procedure mpp_domainug_ne
2563  end interface
2564 
2565  ! <INTERFACE NAME="mpp_get_compute_domain">
2566  ! <OVERVIEW>
2567  ! These routines retrieve the axis specifications associated with the compute domains.
2568  ! </OVERVIEW>
2569  ! <DESCRIPTION>
2570  ! The domain is a derived type with private elements. These routines
2571  ! retrieve the axis specifications associated with the compute domains.
2572  ! The 2D version of these is a simple extension of 1D.
2573  ! </DESCRIPTION>
2574  ! <TEMPLATE>
2575  ! call mpp_get_compute_domain
2576  ! </TEMPLATE>
2577  ! </INTERFACE>
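! A minimal usage sketch (added for illustration; a common pattern is to size
! local arrays from the retrieved compute domain extents):
! <PRE>
! integer :: isc, iec, jsc, jec
! real, allocatable :: t(:,:)
! call mpp_get_compute_domain(domain, isc, iec, jsc, jec)
! allocate(t(isc:iec, jsc:jec))
! </PRE>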
2578  interface mpp_get_compute_domain
2579  module procedure mpp_get_compute_domain1d
2580  module procedure mpp_get_compute_domain2d
2581  end interface
2582 
2583  ! <INTERFACE NAME="mpp_get_compute_domains">
2584  ! <OVERVIEW>
2585  ! Retrieve the entire array of compute domain extents associated with a decomposition.
2586  ! </OVERVIEW>
2587  ! <DESCRIPTION>
2588  ! Retrieve the entire array of compute domain extents associated with a decomposition.
2589  ! </DESCRIPTION>
2590  ! <TEMPLATE>
2591  ! call mpp_get_compute_domains( domain, xbegin, xend, xsize, &
2592  ! ybegin, yend, ysize )
2593  ! </TEMPLATE>
2594  ! <IN NAME="domain" TYPE="type(domain2D)"></IN>
2595  ! <OUT NAME="xbegin,ybegin" TYPE="integer" DIM="(:)"></OUT>
2596  ! <OUT NAME="xend,yend" TYPE="integer" DIM="(:)"></OUT>
2597  ! <OUT NAME="xsize,ysize" TYPE="integer" DIM="(:)"></OUT>
2598  ! </INTERFACE>
2599  interface mpp_get_compute_domains
2600  module procedure mpp_get_compute_domains1d
2601  module procedure mpp_get_compute_domains2d
2602  end interface
2603 
2604  ! <INTERFACE NAME="mpp_get_data_domain">
2605  ! <OVERVIEW>
2606  ! These routines retrieve the axis specifications associated with the data domains.
2607  ! </OVERVIEW>
2608  ! <DESCRIPTION>
2609  ! The domain is a derived type with private elements. These routines
2610  ! retrieve the axis specifications associated with the data domains.
2611  ! The 2D version of these is a simple extension of 1D.
2612  ! </DESCRIPTION>
2613  ! <TEMPLATE>
2614  ! call mpp_get_data_domain
2615  ! </TEMPLATE>
2616  ! </INTERFACE>
2617  interface mpp_get_data_domain
2618  module procedure mpp_get_data_domain1d
2619  module procedure mpp_get_data_domain2d
2620  end interface
2621 
2622  ! <INTERFACE NAME="mpp_get_global_domain">
2623  ! <OVERVIEW>
2624  ! These routines retrieve the axis specifications associated with the global domains.
2625  ! </OVERVIEW>
2626  ! <DESCRIPTION>
2627  ! The domain is a derived type with private elements. These routines
2628  ! retrieve the axis specifications associated with the global domains.
2629  ! The 2D version of these is a simple extension of 1D.
2630  ! </DESCRIPTION>
2631  ! <TEMPLATE>
2632  ! call mpp_get_global_domain
2633  ! </TEMPLATE>
2634  ! </INTERFACE>
2635  interface mpp_get_global_domain
2636  module procedure mpp_get_global_domain1d
2637  module procedure mpp_get_global_domain2d
2638  end interface
2639 
2640  ! <INTERFACE NAME="mpp_get_memory_domain">
2641  ! <OVERVIEW>
2642  ! These routines retrieve the axis specifications associated with the memory domains.
2643  ! </OVERVIEW>
2644  ! <DESCRIPTION>
2645  ! The domain is a derived type with private elements. These routines
2646  ! retrieve the axis specifications associated with the memory domains.
2647  ! The 2D version of these is a simple extension of 1D.
2648  ! </DESCRIPTION>
2649  ! <TEMPLATE>
2650  ! call mpp_get_memory_domain
2651  ! </TEMPLATE>
2652  ! </INTERFACE>
2653  interface mpp_get_memory_domain
2654  module procedure mpp_get_memory_domain1d
2655  module procedure mpp_get_memory_domain2d
2656  end interface
2657 
2658  interface mpp_get_domain_extents
2659  module procedure mpp_get_domain_extents1d
2660  module procedure mpp_get_domain_extents2d
2661  end interface
2662 
2663  ! <INTERFACE NAME="mpp_set_compute_domain">
2664  ! <OVERVIEW>
2665  ! These routines set the axis specifications associated with the compute domains.
2666  ! </OVERVIEW>
2667  ! <DESCRIPTION>
2668  ! The domain is a derived type with private elements. These routines
2669  ! set the axis specifications associated with the compute domains.
2670  ! The 2D version of these is a simple extension of 1D.
2671  ! </DESCRIPTION>
2672  ! <TEMPLATE>
2673  ! call mpp_set_compute_domain
2674  ! </TEMPLATE>
2675  ! </INTERFACE>
2676  interface mpp_set_compute_domain
2677  module procedure mpp_set_compute_domain1d
2678  module procedure mpp_set_compute_domain2d
2679  end interface
2680 
2681  ! <INTERFACE NAME="mpp_set_data_domain">
2682  ! <OVERVIEW>
2683  ! These routines set the axis specifications associated with the data domains.
2684  ! </OVERVIEW>
2685  ! <DESCRIPTION>
2686  ! The domain is a derived type with private elements. These routines
2687  ! set the axis specifications associated with the data domains.
2688  ! The 2D version of these is a simple extension of 1D.
2689  ! </DESCRIPTION>
2690  ! <TEMPLATE>
2691  ! call mpp_set_data_domain
2692  ! </TEMPLATE>
2693  ! </INTERFACE>
2694  interface mpp_set_data_domain
2695  module procedure mpp_set_data_domain1d
2696  module procedure mpp_set_data_domain2d
2697  end interface
2698 
2699  ! <INTERFACE NAME="mpp_set_global_domain">
2700  ! <OVERVIEW>
2701  ! These routines set the axis specifications associated with the global domains.
2702  ! </OVERVIEW>
2703  ! <DESCRIPTION>
2704  ! The domain is a derived type with private elements. These routines
2705  ! set the axis specifications associated with the global domains.
2706  ! The 2D version of these is a simple extension of 1D.
2707  ! </DESCRIPTION>
2708  ! <TEMPLATE>
2709  ! call mpp_set_global_domain
2710  ! </TEMPLATE>
2711  ! </INTERFACE>
2712  interface mpp_set_global_domain
2713  module procedure mpp_set_global_domain1d
2714  module procedure mpp_set_global_domain2d
2715  end interface
2716 
2717 
2718  ! <INTERFACE NAME="mpp_get_pelist">
2719  ! <OVERVIEW>
2720  ! Retrieve list of PEs associated with a domain decomposition.
2721  ! </OVERVIEW>
2722  ! <DESCRIPTION>
2723  ! The 1D version of this call returns an array of the PEs assigned to this 1D domain
2724  ! decomposition. In addition the optional argument <TT>pos</TT> may be
2725  ! used to retrieve the 0-based position of the domain local to the
2726  ! calling PE, i.e. <TT>domain%list(pos)%pe</TT> is the local PE,
2727  ! as returned by <LINK SRC="mpp.html#mpp_pe"><TT>mpp_pe()</TT></LINK>.
2728  ! The 2D version of this call is identical to 1D version.
2729  ! </DESCRIPTION>
2730  ! <IN NAME="domain"></IN>
2731  ! <OUT NAME="pelist"></OUT>
2732  ! <OUT NAME="pos"></OUT>
2733  ! </INTERFACE>
2734  interface mpp_get_pelist
2735  module procedure mpp_get_pelist1d
2736  module procedure mpp_get_pelist2d
2737  end interface
2738 
2739  ! <INTERFACE NAME="mpp_get_layout">
2740  ! <OVERVIEW>
2741  ! Retrieve layout associated with a domain decomposition.
2742  ! </OVERVIEW>
2743  ! <DESCRIPTION>
2744  ! The 1D version of this call returns the number of divisions that was assigned to this
2745  ! decomposition axis. The 2D version of this call returns an array of
2746  ! dimension 2 holding the results on two axes.
2747  ! </DESCRIPTION>
2748  ! <TEMPLATE>
2749  ! call mpp_get_layout( domain, layout )
2750  ! </TEMPLATE>
2751  ! <IN NAME="domain"></IN>
2752  ! <OUT NAME="layout"></OUT>
2753  ! </INTERFACE>
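! A minimal usage sketch (added for illustration):
! <PRE>
! integer :: layout(2)
! call mpp_get_layout(domain, layout)   ! layout(1), layout(2): divisions along x and y
! </PRE>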
2754  interface mpp_get_layout
2755  module procedure mpp_get_layout1d
2756  module procedure mpp_get_layout2d
2757  end interface
2758 
2759  ! <INTERFACE NAME="mpp_nullify_domain_list">
2760  ! <OVERVIEW>
2761  ! Nullify the domain list.
2762  ! </OVERVIEW>
2763  ! <DESCRIPTION>
2764  ! Nullify domain list. This interface is needed in mpp_domains_test.
2765  ! 1-D case can be added in if needed.
2766  ! </DESCRIPTION>
2767  ! <TEMPLATE>
2768  ! call mpp_nullify_domain_list( domain)
2769  ! </TEMPLATE>
2770  ! <INOUT NAME="domain"></INOUT>
2771  ! </INTERFACE>
2772  interface mpp_nullify_domain_list
2773  module procedure nullify_domain2d_list
2774  end interface
2775 
2776  ! Include variable "version" to be written to log file.
2777 #include<file_version.h>
2778  public version
2779 
2780 
2781 contains
2782 
2783 #include <mpp_define_nest_domains.inc>
2784 #include <mpp_domains_util.inc>
2785 #include <mpp_domains_comm.inc>
2786 #include <mpp_domains_define.inc>
2787 #include <mpp_domains_misc.inc>
2788 #include <mpp_domains_reduce.inc>
2789 #include <mpp_unstruct_domain.inc>
2790 
2791 end module mpp_domains_mod
2792 
2793 ! <INFO>
2794 
2795 ! <COMPILER NAME="">
2796 ! Any module or program unit using <TT>mpp_domains_mod</TT>
2797 ! must contain the line
2798 
2799 ! <PRE>
2800 ! use mpp_domains_mod
2801 ! </PRE>
2802 
2803 ! <TT>mpp_domains_mod</TT> <TT>use</TT>s <LINK
2804 ! SRC="mpp.html">mpp_mod</LINK>, and therefore is subject to the <LINK
2805 ! SRC="mpp.html#COMPILING AND LINKING SOURCE">compiling and linking requirements of that module.</LINK>
2806 ! </COMPILER>
2807 ! <PRECOMP FLAG="">
2808 ! <TT>mpp_domains_mod</TT> uses standard f90, and has no special
2809 ! requirements. There are some OS-dependent
2810 ! pre-processor directives that you might need to modify on
2811 ! non-SGI/Cray systems and compilers. The <LINK
2812 ! SRC="mpp.html#PORTABILITY">portability of mpp_mod</LINK>
2813 ! obviously is a constraint, since this module is built on top of
2814 ! it. Contact me, Balaji, SGI/GFDL, with questions.
2815 ! </PRECOMP>
2816 ! <LOADER FLAG="">
2817 ! The <TT>mpp_domains</TT> source consists of the main source file
2818 ! <TT>mpp_domains.F90</TT> and also requires the following include files:
2819 ! <PRE>
2820 ! <TT>fms_platform.h</TT>
2821 ! <TT>mpp_update_domains2D.h</TT>
2822 ! <TT>mpp_global_reduce.h</TT>
2823 ! <TT>mpp_global_sum.h</TT>
2824 ! <TT>mpp_global_field.h</TT>
2825 ! </PRE>
2826 ! GFDL users can check it out of the main CVS repository as part of
2827 ! the <TT>mpp</TT> CVS module. The current public tag is <TT>galway</TT>.
2828 ! External users can download the latest <TT>mpp</TT> package <LINK SRC=
2829 ! "ftp://ftp.gfdl.gov/pub/vb/mpp/mpp.tar.Z">here</LINK>. Public access
2830 ! to the GFDL CVS repository will soon be made available.
2831 
2832 ! </LOADER>
2833 
2834 ! </INFO>