Abstract
The first MPI standard specified a powerful and general message-passing model, including both point-to-point and collective communications. MPI-2 took MPI beyond simple message-passing, adding support for remote memory operations and parallel I/O. Implementations of MPI-1 appeared with the MPI standard; implementations of MPI-2 are continuing to appear. Many implementations, however, are built on top of a point-to-point communication base, which limits the performance of the MPI implementation. Even for MPI-1, many MPI implementations base their collective operations on relatively simple algorithms, built on top of MPI point-to-point (or a simple lower-level communication layer). These implementations achieve the functionality but not the scalable performance that is possible in MPI. In MPI-2, providing a high-performance implementation of the remote-memory operations requires great care and attention to the opportunities for performance that are contained in the MPI standard.
One of the goals of the MPICH2 project is to provide an easily extended example of an implementation of MPI that goes beyond a simple point-to-point communication model. This talk will discuss some of the challenges in implementing collective, remote-memory, and I/O operations in MPI. For example, many of the best algorithms for collective operations involve the use of message subdivision (possibly involving less than one instance of an MPI derived datatype) and multisend or store-and-forward operations. As another example, the remote memory operations in MPI-2 define semantics designed to ensure precise behavior, excluding ambiguities and race conditions. These clean (if somewhat complex) semantics are sometimes seen as a barrier to performance. This talk will discuss some of the methods that can be used to exploit the RMA semantics to provide higher performance for typical application codes. The approaches taken in MPICH2, along with current results from the MPICH2 project, will be discussed.
This work was supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38.
© 2001 Springer-Verlag Berlin Heidelberg
Cite this paper
Gropp, W.D. (2001). Challenges and Successes in Achieving the Potential of MPI. In: Cotronis, Y., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2001. Lecture Notes in Computer Science, vol 2131. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45417-9_3
Print ISBN: 978-3-540-42609-7
Online ISBN: 978-3-540-45417-5