MPI Message Passing Interface Portable Parallel Programs
Message Passing Interface
• Derived from several previous libraries – PVM, p4, Express
• Standard message-passing library – includes the best of several previous libraries
• Versions for C/C++ and FORTRAN
• Available for free
• Can be installed on
– Networks of Workstations
– Parallel Computers (Cray T3E, IBM SP2, Parsytec PowerXplorer, others)
MPI Services
• Hide details of architecture
• Hide details of message passing, buffering
• Provide message management services
– packaging
– send, receive
– broadcast, reduce, scatter, gather
– message modes
MPI Program Organization
• MIMD (Multiple Instruction, Multiple Data)
– Every processor runs a different program
• SPMD (Single Program, Multiple Data)
– Every processor runs the same program
– Each processor computes with different data
– Computation varies across processors through if or switch statements
MPI Program Organization
• MIMD in an SPMD framework
– Different processors can follow different computation paths
– Branch on if or switch based on processor identity
MPI Basics
• Starting and finishing
• Identifying yourself
• Sending and receiving messages
MPI Starting and Finishing
• Statement needed in every program before any other MPI code:
MPI_Init(&argc, &argv);
• Last statement of MPI code must be:
MPI_Finalize();
• Program will not terminate without this statement
MPI Messages
• Message content: a sequence of bytes
• Message needs a wrapper – analogous to an envelope for a letter

Letter                    Message
Address                   Destination
Return Address            Source
Type of Mailing (class)   Message type
Letter Weight             Size (count)
Country                   Communicator
Magazine                  Broadcast
MPI Basics
• Communicator
– Collection of processes
– Determines the scope to which messages are relative
– Identity of a process (rank) is relative to a communicator
– Scope of global communications (broadcast, etc.)
MPI Message Protocol, Send
• message contents: block of memory
• count: number of items in message
• message type: type of each item
• destination: rank of processor to receive the message
• tag: integer designator for the message
• communicator: the communicator within which the message is sent
MPI Message Protocol, Receive
• message contents: buffer in memory to store the received message
• count: size of the buffer
• message type: type of each item
• source: rank of processor sending the message
• tag: integer designator for the message
• communicator: the communicator within which the message is sent
• status: information about the message received
Message Passing Example

#include <stdio.h>
#include <string.h>
#include "mpi.h"         /* includes MPI library code specs */

#define MAXSIZE 100

int main(int argc, char* argv[])
{
    int myRank;          /* rank (identity) of process */
    int numProc;         /* number of processors */
    int source;          /* rank of sender */
    int dest;            /* rank of destination */
    int tag = 0;         /* tag to distinguish messages */
    char mess[MAXSIZE];  /* message (other types possible) */
    int count;           /* number of items in message */
    MPI_Status status;   /* status of message received */
Message Passing Example

    MPI_Init(&argc, &argv);                   /* start MPI */

    /* get number of processes */
    MPI_Comm_size(MPI_COMM_WORLD, &numProc);

    /* get rank of this process */
    MPI_Comm_rank(MPI_COMM_WORLD, &myRank);

    /************************/
    /* code to send, receive and process messages */
    /************************/

    MPI_Finalize();                           /* shut down MPI */
}
Message Passing Example

    if (myRank != 0) {  /* all processes send to root */
        /* create message */
        sprintf(mess, "Hello from %d", myRank);
        dest = 0;                    /* destination is root */
        count = strlen(mess) + 1;    /* include '\0' in count */