6.4.1 Basic and advanced distribution interfaces
------------------------------------------------
As with the planner interface, the 'fftw_mpi_local_size' distribution
interface is broken into basic and advanced ('_many') interfaces, where
the latter allows you to specify the block size manually and also to
request block sizes when computing multiple transforms simultaneously.
These functions are documented more exhaustively by the FFTW MPI
Reference, but we summarize the basic ideas here using a couple of
two-dimensional examples.
For the 100 x 200 complex-DFT example, above, we would find the
distribution by calling the following function in the basic interface:
     ptrdiff_t fftw_mpi_local_size_2d(ptrdiff_t n0, ptrdiff_t n1, MPI_Comm comm,
                                      ptrdiff_t *local_n0, ptrdiff_t *local_0_start);
Given the total size of the data to be transformed (here, 'n0 = 100'
and 'n1 = 200') and an MPI communicator ('comm'), this function provides
three numbers.
First, it describes the shape of the local data: the current process
should store a 'local_n0' by 'n1' slice of the overall dataset, in
row-major order ('n1' dimension contiguous), starting at index
'local_0_start'. That is, if the total dataset is viewed as an 'n0' by
'n1' matrix, the current process should store the rows 'local_0_start'
to 'local_0_start+local_n0-1'. Obviously, if you are running with only
a single MPI process, that process will store the entire array:
'local_0_start' will be zero and 'local_n0' will be 'n0'.
(See Row-major Format.)
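In code, the mapping from a global matrix element to a position in the
local array is simply the row-major offset within the local slab. A
minimal sketch of this convention follows; the helper name is ours for
illustration, not part of FFTW's API:

     #include <stddef.h>  /* ptrdiff_t */

     /* Local offset of global element (i, j), valid when row i is
        stored on this process, i.e. when
        local_0_start <= i < local_0_start + local_n0. */
     static ptrdiff_t local_offset(ptrdiff_t i, ptrdiff_t j,
                                   ptrdiff_t n1, ptrdiff_t local_0_start)
     {
         return (i - local_0_start) * n1 + j;
     }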
Second, the return value is the total number of data elements (e.g.,
complex numbers for a complex DFT) that should be allocated for the
input and output arrays on the current process (ideally with
'fftw_malloc' or an 'fftw_alloc' function, to ensure optimal alignment).
It might seem that this should always be equal to 'local_n0 * n1', but
this is _not_ the case. FFTW's distributed FFT algorithms require data
redistributions at intermediate stages of the transform, and in some
circumstances this may require slightly larger local storage. This is
discussed in more detail below, under Load balancing.
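Putting these pieces together, a minimal sketch of the distribution
and allocation steps for the 100 x 200 example might look as follows
(error handling omitted):

     #include <fftw3-mpi.h>

     int main(int argc, char **argv)
     {
         const ptrdiff_t n0 = 100, n1 = 200;
         ptrdiff_t alloc_local, local_n0, local_0_start;
         fftw_complex *data;

         MPI_Init(&argc, &argv);
         fftw_mpi_init();

         /* rows owned by this process, their starting index, and the
            number of complex elements to allocate (which may exceed
            local_n0 * n1, as explained above) */
         alloc_local = fftw_mpi_local_size_2d(n0, n1, MPI_COMM_WORLD,
                                              &local_n0, &local_0_start);
         data = fftw_alloc_complex(alloc_local);

         /* ... create a plan, initialize the local rows, execute ... */

         fftw_free(data);
         MPI_Finalize();
         return 0;
     }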
The advanced-interface 'local_size' function for multidimensional
transforms returns the same three things ('local_n0', 'local_0_start',
and the total number of elements to allocate), but takes more inputs:
     ptrdiff_t fftw_mpi_local_size_many(int rnk, const ptrdiff_t *n,
                                        ptrdiff_t howmany,
                                        ptrdiff_t block0,
                                        MPI_Comm comm,
                                        ptrdiff_t *local_n0,
                                        ptrdiff_t *local_0_start);
The two-dimensional case above corresponds to 'rnk = 2' and an array
'n' of length 2 with 'n[0] = n0' and 'n[1] = n1'. This routine is for
any 'rnk > 1'; one-dimensional transforms have their own interface
because they work slightly differently, as discussed below.
First, the advanced interface allows you to perform multiple
transforms at once, of interleaved data, as specified by the 'howmany'
parameter. ('howmany' is 1 for a single transform.)
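For instance, continuing inside a program set up as in the sketch
above, the distribution for three interleaved 100 x 200 transforms
could be obtained like this ('FFTW_MPI_DEFAULT_BLOCK' is described
next):

     const ptrdiff_t n[2] = {100, 200};
     ptrdiff_t local_n0, local_0_start;
     ptrdiff_t alloc_local =
         fftw_mpi_local_size_many(2, n, 3,  /* howmany = 3 */
                                  FFTW_MPI_DEFAULT_BLOCK,
                                  MPI_COMM_WORLD,
                                  &local_n0, &local_0_start);
     fftw_complex *data = fftw_alloc_complex(alloc_local);

With 'howmany = 1' and the default block size, this call is equivalent
to the 'fftw_mpi_local_size_2d' call above.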
Second, here you can specify your desired block size in the 'n0'
dimension, 'block0'. To use FFTW's default block size, pass
'FFTW_MPI_DEFAULT_BLOCK' (0) for 'block0'. Otherwise, on 'P' processes,
FFTW will return 'local_n0' equal to 'block0' on the first 'n0 /
block0' processes (rounded down), return 'local_n0' equal to 'n0 -
block0 * (n0 / block0)' on the next process, and 'local_n0' equal to
zero on any remaining processes. In general, we recommend using the
default block size (which corresponds to 'n0 / P', rounded up).
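Stated as code, this rule amounts to the following (a hypothetical
helper for illustration, not part of FFTW's API):

     /* Rows assigned to process p for total size n0 and block size
        block, following the rule described above. */
     ptrdiff_t rows_for_process(ptrdiff_t n0, ptrdiff_t block, int p)
     {
         ptrdiff_t full = n0 / block;      /* processes with a full block */
         if (p < full)  return block;
         if (p == full) return n0 - block * full;  /* the remainder */
         return 0;
     }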
For example, suppose you have 'P = 4' processes and 'n0 = 21'. The
default will be a block size of '6', which will give 'local_n0 = 6' on
the first three processes and 'local_n0 = 3' on the last process.
Instead, however, you could specify 'block0 = 7' if you wanted, which
would give 'local_n0 = 7' on processes 0 to 2 and 'local_n0 = 0' on
process 3, which then holds no data. (This choice lengthens the
critical path from 6 rows of the transform to 7 and leaves one process
idle, so the default is normally preferable.) Note that a smaller
block size such as 'block0 = 5' is not valid here, since 4 blocks of
5 rows cover only 20 of the 21 rows.
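In terms of the advanced interface, that alternative distribution
would be requested by passing the block size explicitly (a sketch;
'n1 = 32' is an arbitrary choice for illustration):

     const ptrdiff_t n[2] = {21, 32};
     ptrdiff_t local_n0, local_0_start;
     ptrdiff_t alloc_local =
         fftw_mpi_local_size_many(2, n, 1, 7 /* block0 */,
                                  MPI_COMM_WORLD,
                                  &local_n0, &local_0_start);
     /* on P = 4 processes: local_n0 = 7, 7, 7, 0 */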