Collective Communication

Collective Communication
  • Collective communication is defined as communication that involves a group of processes.
  • It is more restrictive than point-to-point communication:
    – The data sent must match the data received in type and amount.
    – All processes involved make the same call; there is no tag to match the operation.
    – Processes involved can return only when the operation completes (blocking communication only).
    – Standard mode only.

Collective Functions
  • Barrier synchronization across all group members
  • Broadcast from one member to all members of a group
  • Gather data from all group members to one member
  • Scatter data from one member to all members of a group
  • A variation on Gather where all members of the group receive the result (allgather)
  • Scatter/Gather data from all members to all members of a group, also called complete exchange or all-to-all (alltoall)
  • Global reduction operations such as sum, max, min, or user-defined functions, where the result is returned to all group members, and a variation where the result is returned to only one member
  • A combined reduction and scatter operation
  • Scan across all members of a group (also called prefix)

Collective Functions
  [Diagram slides illustrating the collective communication patterns]
Collective Functions – MPI_BARRIER
  • Blocks the caller until all group members have called it.
  • Returns at any process only after all group members have entered the call.
  • C
    – int MPI_Barrier(MPI_Comm comm)
    – Input parameter:
        comm: communicator (handle)
  • Fortran
    – MPI_BARRIER(COMM, IERROR)
    – INTEGER COMM, IERROR
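
A minimal C sketch (added for illustration, not from the original slides; variable names and values are illustrative) of a typical MPI_Barrier pattern — lining up all ranks before and after a timed phase:

    /* Minimal sketch: use MPI_Barrier so all ranks start the timed phase together. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Every rank must reach this point before any rank may continue. */
        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();

        /* ... work to be timed would go here ... */

        MPI_Barrier(MPI_COMM_WORLD);          /* wait for the slowest rank */
        if (rank == 0)
            printf("elapsed: %f s\n", MPI_Wtime() - t0);

        MPI_Finalize();
        return 0;
    }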

Collective Functions – MPI_BCAST
  • Broadcasts a message from the process with rank root to all processes of the group, itself included.
  • C
    – int MPI_Bcast(void* buffer, int count, MPI_Datatype datatype, int root, MPI_Comm comm)
    – Input parameters:
        count: number of entries in buffer (integer)
        datatype: data type of buffer (handle)
        root: rank of broadcast root (integer)
        comm: communicator (handle)
    – Input/output parameter:
        buffer: starting address of buffer (choice)
  • Fortran
    – MPI_BCAST(BUFFER, COUNT, DATATYPE, ROOT, COMM, IERROR)
    – <type> BUFFER(*)
    – INTEGER COUNT, DATATYPE, ROOT, COMM, IERROR

Collective Functions – MPI_BCAST
  [Diagram: before the call only the root holds the data A; after MPI_BCAST every process in the group holds A.]

Collective Functions – MPI_GATHER
  • Each process (root process included) sends the contents of its send buffer to the root process.
  • The root process receives the messages and stores them in rank order.
  • C
    – int MPI_Gather(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
    – Input parameters:
        sendbuf: starting address of send buffer (choice)
        sendcount: number of elements in send buffer (integer)
        sendtype: data type of send buffer elements (handle)
        recvcount: number of elements for any single receive (integer, significant only at root)
        recvtype: data type of recv buffer elements (significant only at root) (handle)
        root: rank of receiving process (integer)
        comm: communicator (handle)

Collective Functions – MPI_GATHER (cont.)
  • Output parameter:
      recvbuf: address of receive buffer (choice, significant only at root)
  • Fortran
    – MPI_GATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
    – <type> SENDBUF(*), RECVBUF(*)
    – INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR

Collective Functions – MPI_GATHER
  [Diagram: processes 0–3 hold A, B, C, D respectively; after MPI_GATHER the root's receive buffer holds A B C D in rank order.]
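
A minimal C sketch (added for illustration, not from the original slides; values are illustrative): each rank contributes one integer and rank 0 collects them in rank order:

    /* Minimal sketch: gather one int per rank at root (rank 0). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int value = 100 + rank;                 /* each rank's contribution */
        int *all = NULL;
        if (rank == 0)
            all = malloc(size * sizeof(int));   /* recvbuf is significant only at root */

        MPI_Gather(&value, 1, MPI_INT, all, 1, MPI_INT, 0, MPI_COMM_WORLD);

        if (rank == 0) {
            for (int i = 0; i < size; i++)
                printf("from rank %d: %d\n", i, all[i]);
            free(all);
        }
        MPI_Finalize();
        return 0;
    }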

Collective Functions – MPI_SCATTER
  • MPI_SCATTER is the inverse operation to MPI_GATHER.
  • C
    – int MPI_Scatter(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, int root, MPI_Comm comm)
    – Input parameters:
        sendbuf: address of send buffer (choice, significant only at root)
        sendcount: number of elements sent to each process (integer, significant only at root)
        sendtype: data type of send buffer elements (significant only at root) (handle)
        recvcount: number of elements in receive buffer (integer)
        recvtype: data type of receive buffer elements (handle)
        root: rank of sending process (integer)
        comm: communicator (handle)

Collective Functions – MPI_SCATTER (cont.)
  • Output parameter:
      recvbuf: address of receive buffer (choice)
  • Fortran
    – MPI_SCATTER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR)
    – <type> SENDBUF(*), RECVBUF(*)
    – INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, ROOT, COMM, IERROR

Collective Functions – MPI_SCATTER
  [Diagram: the root's send buffer holds A B C D; after MPI_SCATTER process i receives the ith block, so processes 0–3 hold A, B, C, D respectively.]
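
A minimal C sketch (added for illustration, not from the original slides; values are illustrative): rank 0 hands one integer to each rank:

    /* Minimal sketch: root scatters one int to each rank. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *data = NULL;
        if (rank == 0) {                        /* sendbuf is significant only at root */
            data = malloc(size * sizeof(int));
            for (int i = 0; i < size; i++)
                data[i] = 10 * (i + 1);
        }

        int mine;
        MPI_Scatter(data, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
        printf("rank %d received %d\n", rank, mine);

        if (rank == 0) free(data);
        MPI_Finalize();
        return 0;
    }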

Collective Functions – MPI_ALLGATHER
  • MPI_ALLGATHER can be thought of as MPI_GATHER where all processes receive the result, instead of just the root.
  • The jth block of data sent from each process is received by every process and placed in the jth block of the buffer recvbuf.
  • C
    – int MPI_Allgather(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
    – Input parameters:
        sendbuf: starting address of send buffer (choice)
        sendcount: number of elements in send buffer (integer)
        sendtype: data type of send buffer elements (handle)
        recvcount: number of elements received from any process (integer)
        recvtype: data type of receive buffer elements (handle)
        comm: communicator (handle)

Collective Functions – MPI_ALLGATHER (cont.)
  • Output parameter:
      recvbuf: address of receive buffer (choice)
  • Fortran
    – MPI_ALLGATHER(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)
    – <type> SENDBUF(*), RECVBUF(*)
    – INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

Collective Functions – MPI_ALLGATHER
  [Diagram: processes 0–3 hold A, B, C, D respectively; after MPI_ALLGATHER every process holds A B C D.]
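
A minimal C sketch (added for illustration, not from the original slides; values are illustrative): every rank contributes one integer and every rank ends up with the full array in rank order:

    /* Minimal sketch: allgather one int from every rank to every rank. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int value = rank * rank;
        int *all = malloc(size * sizeof(int));  /* every rank needs a recvbuf */

        MPI_Allgather(&value, 1, MPI_INT, all, 1, MPI_INT, MPI_COMM_WORLD);

        printf("rank %d sees:", rank);
        for (int i = 0; i < size; i++)
            printf(" %d", all[i]);
        printf("\n");

        free(all);
        MPI_Finalize();
        return 0;
    }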

Collective Functions – MPI_ALLTOALL
  • Extension of MPI_ALLGATHER to the case where each process sends distinct data to each of the receivers.
  • The jth block sent from process i is received by process j and placed in the ith block of recvbuf.
  • C
    – int MPI_Alltoall(void* sendbuf, int sendcount, MPI_Datatype sendtype, void* recvbuf, int recvcount, MPI_Datatype recvtype, MPI_Comm comm)
    – Input parameters:
        sendbuf: starting address of send buffer (choice)
        sendcount: number of elements sent to each process (integer)
        sendtype: data type of send buffer elements (handle)
        recvcount: number of elements received from any process (integer)
        recvtype: data type of receive buffer elements (handle)
        comm: communicator (handle)

Collective Functions – MPI_ALLTOALL (cont.)
  • Output parameter:
      recvbuf: address of receive buffer (choice)
  • Fortran
    – MPI_ALLTOALL(SENDBUF, SENDCOUNT, SENDTYPE, RECVBUF, RECVCOUNT, RECVTYPE, COMM, IERROR)
    – <type> SENDBUF(*), RECVBUF(*)
    – INTEGER SENDCOUNT, SENDTYPE, RECVCOUNT, RECVTYPE, COMM, IERROR

Collective Functions – MPI_ALLTOALL
  [Diagram]
    Before:  Rank 0: A B C D    Rank 1: E F G H    Rank 2: I J K L    Rank 3: M N O P
    After:   Rank 0: A E I M    Rank 1: B F J N    Rank 2: C G K O    Rank 3: D H L P
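
A minimal C sketch (added for illustration, not from the original slides; values are illustrative), matching the "transpose" pattern in the diagram above: rank i sends the value 100*i + j to rank j:

    /* Minimal sketch: each rank sends one distinct int to every rank. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sendbuf = malloc(size * sizeof(int));
        int *recvbuf = malloc(size * sizeof(int));
        for (int j = 0; j < size; j++)
            sendbuf[j] = 100 * rank + j;        /* block j goes to rank j */

        MPI_Alltoall(sendbuf, 1, MPI_INT, recvbuf, 1, MPI_INT, MPI_COMM_WORLD);

        /* recvbuf[i] now holds the block that rank i sent to this rank */
        printf("rank %d received:", rank);
        for (int i = 0; i < size; i++)
            printf(" %d", recvbuf[i]);
        printf("\n");

        free(sendbuf);
        free(recvbuf);
        MPI_Finalize();
        return 0;
    }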

Collective Functions – MPI_REDUCE
  • MPI_REDUCE combines the elements provided in the input buffer (sendbuf) of each process in the group, using the operation op, and returns the combined value in the output buffer (recvbuf) of the process with rank root.
  • C
    – int MPI_Reduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm)
    – Input parameters:
        sendbuf: address of send buffer (choice)
        count: number of elements in send buffer (integer)
        datatype: data type of elements of send buffer (handle)
        op: reduce operation (handle)
        root: rank of root process (integer)
        comm: communicator (handle)
    – Output parameter:
        recvbuf: address of receive buffer (choice, significant only at root)

Collective Functions – MPI_REDUCE (cont.)
  • Fortran
    – MPI_REDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, ROOT, COMM, IERROR)
    – <type> SENDBUF(*), RECVBUF(*)
    – INTEGER COUNT, DATATYPE, OP, ROOT, COMM, IERROR
  • Predefined reduce operations
    – [MPI_MAX] maximum
    – [MPI_MIN] minimum
    – [MPI_SUM] sum
    – [MPI_PROD] product
    – [MPI_LAND] logical and
    – [MPI_BAND] bit-wise and
    – [MPI_LOR] logical or
    – [MPI_BOR] bit-wise or
    – [MPI_LXOR] logical xor
    – [MPI_BXOR] bit-wise xor
    – [MPI_MAXLOC] max value and location (returns the max and an integer giving the rank that stores the max value)
    – [MPI_MINLOC] min value and location

Collective Functions – MPI_REDUCE
  [Diagram: ranks 0–3 hold the arrays A B C D, E F G H, I J K L, M N O P. With root = 1, the result AoEoIoM (where 'o' denotes the reduce operation op) is stored at rank 1; if count = 2, the second element of the result array is BoFoJoN.]
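
A minimal C sketch (added for illustration, not from the original slides; values are illustrative): an element-wise sum of a 2-element array, with the combined result available only at the root:

    /* Minimal sketch: reduce (sum) a 2-element array onto rank 0. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int local[2]  = { rank, 10 * rank };
        int global[2] = { 0, 0 };               /* significant only at root */

        MPI_Reduce(local, global, 2, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sums: %d %d\n", global[0], global[1]);

        MPI_Finalize();
        return 0;
    }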

Collective Functions – MPI_ALLREDUCE
  • A variant of the reduce operations where the result is returned to all processes in the group.
  • The all-reduce operation can be implemented as a reduce followed by a broadcast; however, a direct implementation can lead to better performance.
  • C
    – int MPI_Allreduce(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

Collective Functions – MPI_ALLREDUCE (cont.)
  • Input parameters:
      sendbuf: starting address of send buffer (choice)
      count: number of elements in send buffer (integer)
      datatype: data type of elements of send buffer (handle)
      op: operation (handle)
      comm: communicator (handle)
  • Output parameter:
      recvbuf: starting address of receive buffer (choice)
  • Fortran
    – MPI_ALLREDUCE(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
    – <type> SENDBUF(*), RECVBUF(*)
    – INTEGER COUNT, DATATYPE, OP, COMM, IERROR

Collective Functions – MPI_ALLREDUCE
  [Diagram: ranks 0–3 hold the arrays A B C D, E F G H, I J K L, M N O P; after MPI_ALLREDUCE every rank holds the result AoEoIoM (where 'o' denotes op) in the first element.]
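
A minimal C sketch (added for illustration, not from the original slides; values are illustrative): every rank obtains the global maximum of a per-rank value:

    /* Minimal sketch: allreduce with MPI_MAX, result available on every rank. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int local = (rank * 37) % 11;            /* some per-rank value */
        int global_max;

        MPI_Allreduce(&local, &global_max, 1, MPI_INT, MPI_MAX, MPI_COMM_WORLD);

        printf("rank %d: local %d, global max %d\n", rank, local, global_max);

        MPI_Finalize();
        return 0;
    }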

Collective Functions – MPI_REDUCE_SCATTER
  • A variant of the reduce operations where the result is scattered to all processes in the group on return.
  • MPI_REDUCE_SCATTER first does an element-wise reduction on a vector of count = Σi recvcounts[i] elements in the send buffer defined by sendbuf, count and datatype. Next, the resulting vector is split into n disjoint segments, where n is the number of members in the group; segment i contains recvcounts[i] elements. The ith segment is sent to process i and stored in the receive buffer defined by recvbuf, recvcounts[i] and datatype.
  • The MPI_REDUCE_SCATTER routine is functionally equivalent to an MPI_REDUCE operation with count equal to the sum of recvcounts[i], followed by MPI_SCATTERV with sendcounts equal to recvcounts. However, a direct implementation may run faster.

Collective Functions – MPI_REDUCE_SCATTER (cont.)
  • C
    – int MPI_Reduce_scatter(void* sendbuf, void* recvbuf, int *recvcounts, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)
    – Input parameters:
        sendbuf: starting address of send buffer (choice)
        recvcounts: integer array specifying the number of elements in the result distributed to each process; the array must be identical on all calling processes
        datatype: data type of elements of input buffer (handle)
        op: operation (handle)
        comm: communicator (handle)
    – Output parameter:
        recvbuf: starting address of receive buffer (choice)
  • Fortran
    – MPI_REDUCE_SCATTER(SENDBUF, RECVBUF, RECVCOUNTS, DATATYPE, OP, COMM, IERROR)
    – <type> SENDBUF(*), RECVBUF(*)
    – INTEGER RECVCOUNTS(*), DATATYPE, OP, COMM, IERROR

Collective Functions – MPI_REDUCE_SCATTER
  [Diagram: ranks 0–3 hold the arrays A B C D, E F G H, I J K L, M N O P. The element-wise reduction gives (AoEoIoM, BoFoJoN, CoGoKoO, DoHoLoP), where 'o' denotes op. With recvcounts = {1, 2, 0, 1}: rank 0 receives AoEoIoM, rank 1 receives BoFoJoN and CoGoKoO, rank 2 receives nothing, and rank 3 receives DoHoLoP.]
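
A minimal C sketch (added for illustration, not from the original slides; values are illustrative): an element-wise sum of a size-element vector, with element i of the result delivered to rank i (recvcounts[i] = 1 for every rank):

    /* Minimal sketch: reduce-scatter with one result element per rank. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        int *sendbuf = malloc(size * sizeof(int));
        int *recvcounts = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) {
            sendbuf[i] = rank + i;               /* contribution to element i */
            recvcounts[i] = 1;                   /* each rank gets one element */
        }

        int my_part;
        MPI_Reduce_scatter(sendbuf, &my_part, recvcounts, MPI_INT, MPI_SUM,
                           MPI_COMM_WORLD);

        printf("rank %d holds reduced element %d = %d\n", rank, rank, my_part);

        free(sendbuf);
        free(recvcounts);
        MPI_Finalize();
        return 0;
    }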

Collective Functions – MPI_SCAN
  • MPI_SCAN is used to perform a prefix reduction on data distributed across the group.
  • The operation returns, in the receive buffer of the process with rank i, the reduction of the values in the send buffers of processes with ranks 0, ..., i (inclusive).
  • The types of operations supported, their semantics, and the constraints on send and receive buffers are the same as for MPI_REDUCE.
  • C
    – int MPI_Scan(void* sendbuf, void* recvbuf, int count, MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

Collective Functions – MPI_SCAN (cont.)
  • Input parameters:
      sendbuf: starting address of send buffer (choice)
      count: number of elements in input buffer (integer)
      datatype: data type of elements of input buffer (handle)
      op: operation (handle)
      comm: communicator (handle)
  • Output parameter:
      recvbuf: starting address of receive buffer (choice)
  • Fortran
    – MPI_SCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
    – <type> SENDBUF(*), RECVBUF(*)
    – INTEGER COUNT, DATATYPE, OP, COMM, IERROR

Collective Functions – MPI_SCAN
  [Diagram: ranks 0–3 hold the arrays A B C D, E F G H, I J K L, M N O P; after MPI_SCAN the first result element is A at rank 0, AoE at rank 1, AoEoI at rank 2, and AoEoIoM at rank 3 (where 'o' denotes op).]
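
A minimal C sketch (added for illustration, not from the original slides; values are illustrative): a prefix sum over ranks, so rank i receives the sum of the values held by ranks 0..i:

    /* Minimal sketch: inclusive prefix sum with MPI_Scan. */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        int value = rank + 1;                    /* rank i contributes i + 1 */
        int prefix;

        MPI_Scan(&value, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

        /* e.g. with 4 ranks the results are 1, 3, 6, 10 */
        printf("rank %d: prefix sum = %d\n", rank, prefix);

        MPI_Finalize();
        return 0;
    }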

Example – MPI_BCAST
  • Demonstrates how to use MPI_BCAST to distribute an array from the root to all other processes.

Example – MPI_BCAST (C)

    /*
     * root broadcasts the array to all processes
     */
    #include <stdio.h>
    #include <mpi.h>

    #define SIZE 10

    int main(int argc, char** argv)
    {
        int my_rank;                   /* the rank of each proc */
        int array[SIZE];
        int root = 0;                  /* the rank of root */
        int i;
        MPI_Comm comm = MPI_COMM_WORLD;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(comm, &my_rank);

        if (my_rank == 0) {
            for (i = 0; i < SIZE; i++) {
                array[i] = i;
            }
        }

Example – MPI_BCAST (C) (cont.)

        else {
            for (i = 0; i < SIZE; i++) {
                array[i] = 0;
            }
        }

        printf("Proc %d: (Before Broadcast) ", my_rank);
        for (i = 0; i < SIZE; i++) {
            printf("%d ", array[i]);
        }
        printf("\n");

        MPI_Bcast(array, SIZE, MPI_INT, root, comm);

        printf("Proc %d: (After Broadcast) ", my_rank);
        for (i = 0; i < SIZE; i++) {
            printf("%d ", array[i]);
        }
        printf("\n");

        MPI_Finalize();
        return 0;
    }

Example – MPI_BCAST (Fortran)

    C
    C     root broadcasts the array to all processes
    C
          PROGRAM main
          INCLUDE 'mpif.h'
          PARAMETER (SIZE = 10)
          INTEGER my_rank, ierr, root, i
          INTEGER array(SIZE)
          INTEGER comm
          INTEGER arraysize

          root = 0
          comm = MPI_COMM_WORLD
          arraysize = SIZE

Example – MPI_BCAST (Fortran) (cont.)

          CALL MPI_INIT(ierr)
          CALL MPI_COMM_RANK(comm, my_rank, ierr)

          IF (my_rank .EQ. 0) THEN
             DO i = 1, SIZE
                array(i) = i
             END DO
          ELSE
             DO i = 1, SIZE
                array(i) = 0
             END DO
          END IF

          WRITE(6,*) "Proc ", my_rank, ": (Before Broadcast)",
         &           (array(i), i = 1, SIZE)

          CALL MPI_BCAST(array, arraysize, MPI_INTEGER, root, comm, ierr)

          WRITE(6,*) "Proc ", my_rank, ": (After Broadcast)",
         &           (array(i), i = 1, SIZE)

          CALL MPI_FINALIZE(ierr)
          END

Case Study 1 – MPI_SCATTER and MPI_REDUCE
  • The master distributes (scatters) an array across the processes. Each process adds up its own elements, and the partial sums are then combined at the master through a reduction operation.
  • Step 1
    – Proc 0 initializes an array of 16 integers
    – Proc 0: {1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16}

Case Study 1 – MPI_SCATTER and MPI_REDUCE
  • Step 2: scatter the array among all processes
    – Proc 0: {1, 2, 3, 4}
    – Proc 1: {5, 6, 7, 8}
    – Proc 2: {9, 10, 11, 12}
    – Proc 3: {13, 14, 15, 16}
  • Step 3
    – Each process does its local calculation (adds up its elements)

Case Study 1 – MPI_SCATTER and MPI_REDUCE
  • Step 4
    – Reduce to Proc 0: total sum (see the sketch below)
  • C
    – mpi_scatter_reduce01.c
    – Compilation: mpicc mpi_scatter_reduce01.c -o mpi_scatter_reduce01
    – Run: mpirun -np 4 mpi_scatter_reduce01
  • Fortran
    – mpi_scatter_reduce01.f
    – Compilation: mpif77 mpi_scatter_reduce01.f -o mpi_scatter_reduce01
    – Run: mpirun -np 4 mpi_scatter_reduce01
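
A possible C sketch of these four steps (an assumed reconstruction for illustration, not the mpi_scatter_reduce01.c file referenced above), intended to be run with 4 processes:

    /* Sketch of the case study: scatter 16 ints, sum locally, reduce to root. */
    #include <stdio.h>
    #include <mpi.h>

    #define N 16

    int main(int argc, char **argv)
    {
        int rank, size;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);   /* expected to be 4 */

        int data[N];
        if (rank == 0)                           /* Step 1: root fills 1..16 */
            for (int i = 0; i < N; i++)
                data[i] = i + 1;

        int chunk = N / size;
        int part[N];                             /* large enough for any chunk */
        MPI_Scatter(data, chunk, MPI_INT,        /* Step 2: scatter the array */
                    part, chunk, MPI_INT, 0, MPI_COMM_WORLD);

        int local_sum = 0;                       /* Step 3: local work */
        for (int i = 0; i < chunk; i++)
            local_sum += part[i];

        int total = 0;                           /* Step 4: reduce to root */
        MPI_Reduce(&local_sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("total sum = %d\n", total);   /* 136 for 1..16 */

        MPI_Finalize();
        return 0;
    }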

Case Study 2 – MPI_GATHER Matrix Multiplication
  • Algorithm:
    – {4x4 matrix A} x {4x1 vector x} = product
    – Each process stores one row of A and a single entry of x
    – Use 4 gather operations to place a full copy of x in each process, then perform the multiplications

Case Study 2 – MPI_GATHER Matrix Multiplication
  • Step 1: initialization
    – Proc 0: {1 5 9 13}, {17}
    – Proc 1: {2 6 10 14}, {18}
    – Proc 2: {3 7 11 15}, {19}
    – Proc 3: {4 8 12 16}, {20}
  • Step 2: perform MPI_GATHER 4 times to gather the column vector into each process
    – Proc 0: {1 5 9 13}, {17 18 19 20}
    – Proc 1: {2 6 10 14}, {17 18 19 20}
    – Proc 2: {3 7 11 15}, {17 18 19 20}
    – Proc 3: {4 8 12 16}, {17 18 19 20}

Case Study 2 – MPI_GATHER Matrix Multiplication
  • Step 3: perform the multiplication
    – Proc 0: 1x17 + 5x18 + 9x19 + 13x20 = 538
    – Proc 1: 2x17 + 6x18 + 10x19 + 14x20 = 612
    – Proc 2: 3x17 + 7x18 + 11x19 + 15x20 = 686
    – Proc 3: 4x17 + 8x18 + 12x19 + 16x20 = 760
  • Step 4
    – Gather all processes' inner products into the master process and display the result (see the sketch below)
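
A possible C sketch of these steps (an assumed reconstruction for illustration, not the mpi_gather01.c file referenced on the next slide), intended to be run with 4 processes:

    /* Sketch of the case study: row-distributed matrix-vector multiply. */
    #include <stdio.h>
    #include <mpi.h>

    #define N 4

    int main(int argc, char **argv)
    {
        int rank;
        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* Step 1: rank i holds row i of A and entry i of x */
        int row[N];
        for (int j = 0; j < N; j++)
            row[j] = 1 + rank + 4 * j;           /* rows {1 5 9 13}, {2 6 10 14}, ... */
        int my_x = 17 + rank;                    /* x = {17, 18, 19, 20} */

        /* Step 2: four gathers, one rooted at each rank, so every rank gets x */
        int x[N];
        for (int root = 0; root < N; root++)
            MPI_Gather(&my_x, 1, MPI_INT, x, 1, MPI_INT, root, MPI_COMM_WORLD);

        /* Step 3: local inner product */
        int dot = 0;
        for (int j = 0; j < N; j++)
            dot += row[j] * x[j];

        /* Step 4: gather all inner products at rank 0 and print */
        int product[N];
        MPI_Gather(&dot, 1, MPI_INT, product, 1, MPI_INT, 0, MPI_COMM_WORLD);
        if (rank == 0)
            for (int i = 0; i < N; i++)
                printf("product[%d] = %d\n", i, product[i]);  /* 538 612 686 760 */

        MPI_Finalize();
        return 0;
    }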

Case Study 2 – MPI_GATHER Matrix Multiplication
  • C
    – mpi_gather01.c
    – Compilation: mpicc mpi_gather01.c -o mpi_gather01
    – Run: mpirun -np 4 mpi_gather01
  • Fortran
    – mpi_gather01.f
    – Compilation: mpif77 mpi_gather01.f -o mpi_gather01
    – Run: mpirun -np 4 mpi_gather01

END