Parallel Processing (CS 676), Lecture 7: Message Passing
Parallel Processing (CS 676)
Lecture 7: Message Passing using MPI*
Jeremy R. Johnson
*Parts of this lecture were derived from chapters 3-5 and 11 in Pacheco.
Parallel Processing 1
Introduction
• Objective: To introduce distributed-memory parallel programming using message passing, and the MPI standard for message passing.
• Topics
– Introduction to MPI
• hello.c
• hello.f
– Example problem (numeric integration)
– Collective communication
– Performance model
MPI
• Message Passing Interface
• Distributed Memory Model
– Single Program Multiple Data (SPMD)
– Communication using message passing
• Send/Recv
– Collective Communication
• Broadcast
• Reduce (Allreduce)
• Gather (Allgather)
• Scatter
• Alltoall
Benefits/Disadvantages
• No new language is required
• Portable
• Good performance
• Explicitly forces programmer to deal with local/global access
• Harder to program than shared memory – requires larger program/algorithm changes
Further Information
• http://www-unix.mcs.anl.gov/mpi/
• en.wikipedia.org/wiki/Message_Passing_Interface
• www.mpi-forum.org
• www.open-mpi.org
• www.mcs.anl.gov/research/projects/mpich2
• Textbook
– Peter S. Pacheco, Parallel Programming with MPI, Morgan Kaufmann, 1997.
Basic MPI Functions

int MPI_Init(
    int*    argc /* in/out */,
    char*** argv /* in/out */)

int MPI_Finalize(void)

int MPI_Comm_size(
    MPI_Comm communicator         /* in  */,
    int*     number_of_processors /* out */)

int MPI_Comm_rank(
    MPI_Comm communicator /* in  */,
    int*     my_rank      /* out */)
Send
• A message must be packaged in an envelope containing the destination, size, an identifying tag, and the set of processes participating in the communication (the communicator).

int MPI_Send(
    void*        message      /* in */,
    int          count        /* in */,
    MPI_Datatype datatype     /* in */,
    int          dest         /* in */,
    int          tag          /* in */,
    MPI_Comm     communicator /* in */)
Receive

int MPI_Recv(
    void*        message      /* out */,
    int          count        /* in  */,
    MPI_Datatype datatype     /* in  */,
    int          source       /* in  */,
    int          tag          /* in  */,
    MPI_Comm     communicator /* in  */,
    MPI_Status*  status       /* out */)
Status
• status->MPI_SOURCE
• status->MPI_TAG
• status->MPI_ERROR

int MPI_Get_count(
    MPI_Status*  status    /* in  */,
    MPI_Datatype datatype  /* in  */,
    int*         count_ptr /* out */)
hello.c

#include <stdio.h>
#include <string.h>
#include "mpi.h"

int main(int argc, char* argv[]) {
    int        my_rank;      /* rank of process           */
    int        p;            /* number of processes       */
    int        source;       /* rank of sender            */
    int        dest;         /* rank of receiver          */
    int        tag = 0;      /* tag for messages          */
    char       message[100]; /* storage for message       */
    MPI_Status status;       /* return status for receive */

    /* Start up MPI */
    MPI_Init(&argc, &argv);

    /* Find out process rank */
    MPI_Comm_rank(MPI_COMM_WORLD, &my_rank);

    /* Find out number of processes */
    MPI_Comm_size(MPI_COMM_WORLD, &p);
hello.c (continued)

    if (my_rank != 0) {
        /* Create message */
        sprintf(message, "Greetings from process %d!\n", my_rank);
        dest = 0;
        /* Use strlen + 1 so that '\0' gets transmitted */
        MPI_Send(message, strlen(message) + 1, MPI_CHAR,
                 dest, tag, MPI_COMM_WORLD);
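The excerpt stops mid-branch; in the standard textbook version of this program, process 0 takes the else-branch, looping over the other ranks to receive and print each greeting before everyone shuts down. A sketch of that continuation (a reconstruction, not part of this excerpt; it reuses the variables declared above and is a fragment rather than a standalone program):

```c
    /* ...continuing hello.c: the branch executed by process 0 */
    } else { /* my_rank == 0 */
        for (source = 1; source < p; source++) {
            /* Receive one greeting per process, in rank order. */
            MPI_Recv(message, 100, MPI_CHAR, source, tag,
                     MPI_COMM_WORLD, &status);
            printf("%s", message);
        }
    }

    /* Shut down MPI */
    MPI_Finalize();
    return 0;
}
```

Because rank 0 receives from source = 1, 2, ..., p-1 in turn rather than using MPI_ANY_SOURCE, the greetings always print in rank order regardless of the order in which they arrive.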