MPI-2 Features Overview
MPI Implementations
University of North Carolina - Chapel Hill, ITS Research Computing
Instructor: Mark Reed
Email: markreed@unc.edu

§ MPI-2.0 standard since July 1997
§ Extends rather than replaces MPI-1.2
§ http://www.mpi-forum.org/docs/mpi-20-html/mpi2-report.html
§ Implementations were slow to follow
§ Reference: http://www.hlrs.de/organization/par_prog_ws/pdf/mpi_2_short.pdf

§ Process Creation and Management
§ One-Sided Communications
§ I/O
§ C++ Language bindings
§ Extended Collective Operations
§ External Interfaces
§ Miscellany

Dynamic Process Management
§ MPI-1 is static
§ Goal is to start new MPI processes
§ Spawn interface at the initiators (parents):
  • spawning new processes is collective, returning an intercommunicator
  • local group is the group of spawning processes
  • remote group is the group of spawned processes
§ Spawn interface at the spawned processes (children):
  • new processes have their own MPI_COMM_WORLD
  • MPI_Comm_get_parent() returns an intercommunicator to the parent processes (see the sketch below)
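
A minimal sketch (not from the original slides) of how both sides of a spawn might look; the process count of 4 and the use of argv[0] as the spawned command are illustrative assumptions:

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    MPI_Comm parent, children;

    MPI_Init(&argc, &argv);
    MPI_Comm_get_parent(&parent);

    if (parent == MPI_COMM_NULL) {
        /* No parent: we were started by mpiexec, so act as the parent side
         * and collectively spawn 4 more copies of this binary. The call
         * returns an intercommunicator: the local group is the spawning
         * processes, the remote group is the spawned processes. */
        MPI_Comm_spawn(argv[0], MPI_ARGV_NULL, 4, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &children, MPI_ERRCODES_IGNORE);
    } else {
        /* Spawned child: it has its own MPI_COMM_WORLD, and 'parent' is
         * the intercommunicator back to the spawning group. */
        printf("child connected to parent group\n");
    }

    MPI_Finalize();
    return 0;
}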

One-sided Communication
§ PUT and GET data relative to the memory of another process
§ Inherently more "dangerous" because of the lack of synchronization
§ Subtle memory effects:
  • cache coherence
  • contiguous layout
§ Implemented with special synchronization calls surrounding the one-sided calls (sketch below)
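
A small sketch of fence-synchronized one-sided communication, assuming at least two ranks; the window size and the value transferred are illustrative:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, value = 42, buf[10] = {0};
    MPI_Win win;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Expose 'buf' as a window that other processes may access directly. */
    MPI_Win_create(buf, 10 * sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    /* The one-sided call is bracketed by synchronization (fence) calls. */
    MPI_Win_fence(0, win);
    if (rank == 0)
        /* Write one int into element 0 of rank 1's window; rank 1 does
         * not post a matching receive. */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    MPI_Win_fence(0, win);   /* the transfer is complete after this fence */

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}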

Parallel I/O
§ Reading and writing files in parallel
§ Rich set of features:
  • basic operations: open, close, read, write, seek
  • noncontiguous access in both memory and file
  • logical view via filetype and element-type
  • physical view addressed by hints, e.g. "striping_unit"
  • explicit offsets / individual file pointers / shared file pointer
  • collective / non-collective
  • blocking / non-blocking or split collective
  • non-atomic / explicit sync
  • "native" / "internal" / "external32" data representation
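
As a hedged illustration (the filename and block size are assumptions, not from the slides), each rank could write its own block of a shared file at an explicit offset with a collective call:

#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, i, buf[100];
    MPI_File fh;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    for (i = 0; i < 100; i++)
        buf[i] = rank;

    /* Collective open of one shared file. */
    MPI_File_open(MPI_COMM_WORLD, "data.out",
                  MPI_MODE_CREATE | MPI_MODE_WRONLY,
                  MPI_INFO_NULL, &fh);

    /* Collective write at an explicit offset: rank r writes its 100 ints
     * starting at byte offset r * 100 * sizeof(int). */
    MPI_File_write_at_all(fh, (MPI_Offset)rank * 100 * sizeof(int),
                          buf, 100, MPI_INT, MPI_STATUS_IGNORE);

    MPI_File_close(&fh);
    MPI_Finalize();
    return 0;
}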

C++ Language bindings
§ C++ bindings match the new C bindings
§ MPI objects are C++ objects
§ MPI functions are methods of C++ classes
§ User must use MPI create and free functions instead of default constructors and destructors
§ Uses shallow copy semantics (except MPI::Status objects)
§ C++ exceptions used instead of returning error codes
§ Declared within an MPI namespace (MPI::...)
§ C++/C mixed-language interoperability

Extended Collective Operations
§ In MPI-1, collective operations are restricted to ordinary (intra) communicators.
§ In MPI-2, most collective operations are extended with additional functionality for intercommunicators
  • e.g., Bcast on a parents-children intercommunicator sends data from one parent process to all children.
§ Provision to specify "in place" buffers for collective operations on intracommunicators.
§ Two new collective routines:
  • generalized all-to-all
  • exclusive scan
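
A brief sketch of the two intracommunicator additions mentioned above, the "in place" buffer option and the exclusive scan (the values reduced are illustrative):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank, sum, prefix = 0;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* "In place": MPI_IN_PLACE replaces the send buffer and the result
     * overwrites the receive buffer on every rank. */
    sum = rank;
    MPI_Allreduce(MPI_IN_PLACE, &sum, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    /* Exclusive scan: rank r receives the sum of the values on
     * ranks 0..r-1 (the result on rank 0 is undefined). */
    MPI_Exscan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

    printf("rank %d: allreduce sum = %d, exclusive prefix = %d\n",
           rank, sum, prefix);
    MPI_Finalize();
    return 0;
}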

External Interfaces
§ Generalized requests
  • users can create new non-blocking operations
§ Naming objects for debuggers and profilers
  • label communicators, windows, datatypes
§ Allows users to add error codes, classes, and strings
§ Specifies how threads are to be handled if the implementation chooses to provide them
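
A hedged sketch of two of these features, labeling a communicator for tools and requesting a thread support level at startup (the communicator name and the requested level are assumptions):

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int provided;
    MPI_Comm work_comm;

    /* Request a thread support level; the implementation reports the
     * level it actually provides. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED)
        printf("requested thread level not available\n");

    /* Give a duplicated communicator a human-readable label that
     * debuggers and profilers can display. */
    MPI_Comm_dup(MPI_COMM_WORLD, &work_comm);
    MPI_Comm_set_name(work_comm, "work_comm");

    MPI_Comm_free(&work_comm);
    MPI_Finalize();
    return 0;
}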

Miscellany
§ Standard startup with mpiexec
  • recommended but not required
§ Implementations are allowed to pass NULL to MPI_Init rather than argc, argv
§ MPI_Finalized(flag) added for library writers
§ New predefined datatypes
  • MPI_WCHAR
  • MPI_SIGNED_CHAR
  • MPI_UNSIGNED_LONG_LONG
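
A short sketch of the NULL-argument initialization and the MPI_Finalized query (the library_cleanup routine is a made-up example of where a library writer might use it):

#include <mpi.h>
#include <stdio.h>

void library_cleanup(void)
{
    int finalized;

    /* A library can check whether MPI has already been shut down before
     * making any MPI calls of its own. */
    MPI_Finalized(&finalized);
    if (!finalized)
        printf("MPI is still active\n");
}

int main(void)
{
    /* MPI-2 allows NULL in place of &argc and &argv. */
    MPI_Init(NULL, NULL);

    library_cleanup();

    MPI_Finalize();
    return 0;
}

Startup with the recommended launcher would then look like, e.g., "mpiexec -n 4 ./a.out".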

MPI committee reconvened
§ MPI 2.1 done mid-year 2008
  • provides a simple clarification of the current MPI 2.0 standard and corrections to the MPI 2.0 document, with no API changes
§ MPI 2.2 planned for early 2009
  • addresses clear errors and omissions in the standard

MPI 3.0
§ MPI 3.0 targeted for early 2010
  • will involve a more thorough rethinking of the standard to effectively support current and future applications
  • issues that have already been raised include improved one-sided communications as well as support for generalized requests, remote memory access, non-blocking collectives, new language support, and fault tolerance

A few free MPI Variations
§ MPICH, MPICH2
§ MPICH-G2
§ LAM/MPI
§ Open MPI
§ MVAPICH

MPICH flavors
§ MPICH is a freely available, portable implementation of MPI from ANL and MSU
  • http://www-unix.mcs.anl.gov/mpich/index.htm
§ MPICH-G2, the Globus version of MPICH
  • MPICH-G2 is a grid-enabled implementation
  • MPI v1.1 standard
  • used to couple multiple (heterogeneous) machines
§ MPICH2 is an all-new implementation of MPI
§ MPICH2 includes support for one-sided communication, dynamic processes, intercommunicator collective operations, and expanded MPI-IO functionality.

LAM/MPI
§ http://www.lam-mpi.org
§ High-quality open-source implementation of MPI-1.2 and much of MPI-2
§ Designed for heterogeneous clusters
§ MPI-2 support (partial list):
  • Process Creation and Management
  • One-sided Communication (partial implementation)
  • MPI I/O (using ROMIO)
  • C++ Bindings
  • MPI-2 Miscellany:
    - mpiexec
    - thread support (MPI_THREAD_SINGLE - MPI_THREAD_SERIALIZED)
    - user functions at termination
    - language interoperability

Open MPI
§ Open MPI is a project combining technologies and resources from several other projects (FT-MPI, LA-MPI, LAM/MPI, and PACX-MPI) in order to build the best MPI library available.
§ Goal: free, open source, peer-reviewed, production-quality, completely new MPI-2 compliant implementation
§ Fault tolerance is a growing concern
§ First release: November 2005
  • 1.2.x out now
§ http://www.open-mpi.org

MVAPICH
§ Implementation based on MPICH and MVICH
§ Pronounced "em-vah-pich"
§ Designed for high performance for MPI-1 and MPI-2 on InfiniBand as well as other RDMA-enabled interconnects
§ MVAPICH is a high performance implementation of MPI-1 over InfiniBand based on MPICH1
  • MVAPICH2 is a high performance MPI-2 implementation based on MPICH2
§ Name chosen to reflect the fact that this is an MPI implementation based on MPICH (also MVICH) over the InfiniBand VAPI interface
  • They also support other underlying transport interfaces for portability (uDAPL, OpenIB/Gen2, TCP/IP)
§ http://mvapich.cse.ohio-state.edu/index.shtml

MPI Opinion … for discussion :)
§ "Without fear of contradiction, the MPI standard has been the most significant advancement in practical parallel programming in over a decade, and it is the foundation of the vast majority of modern parallel programs."
§ Thom H. Dunning, Jr. (NCSA), Robert J. Harrison and Jeffrey A. Nichols (ORNL)
  • "NWChem: Development of a Modern Quantum Chemistry Program," CTWatch Quarterly, Volume 2, Number 2, May 2006.

Opinion Cont.
§ "A completely consistent (and deliberately provocative) viewpoint is that MPI is evil. The emergence of MPI coincided with an almost complete cessation of parallel programming tool/paradigm research. This was due to many factors, but in particular to the very public and very expensive failure of HPF. The downsides of MPI are that it standardized (in order to be successful itself) only the primitive and already old communicating sequential process (CSP) programming model, and MPI's success further stifled adoption of advanced parallel programming techniques since any new method was by definition not going to be as portable."
§ Thom H. Dunning, Jr. (NCSA), Robert J. Harrison and Jeffrey A. Nichols (ORNL)
  • "NWChem: Development of a Modern Quantum Chemistry Program," CTWatch Quarterly, Volume 2, Number 2, May 2006.

More discussion … some MPI bashing
§ "But it's hard to find a real fan of MPI today. Most either tolerate it or hate it. Although it provides a widely portable and standardized programming interface for parallel computing, its shortfalls are numerous:
  • hard to learn,
  • difficult to program,
  • no allowance for incremental parallelization,
  • doesn't scale easily, and so on.
§ It's widely acknowledged that MPI's limitations must be overcome to make parallel programming more accessible." (from HPCwire)
§ Hence this class! :)