
Challenges and Successes in Achieving the Potential of MPI

  • Conference paper
Recent Advances in Parallel Virtual Machine and Message Passing Interface (EuroPVM/MPI 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2131)


Abstract

The first MPI standard specified a powerful and general message-passing model, including both point-to-point and collective communications. MPI-2 took MPI beyond simple message-passing, adding support for remote memory operations and parallel I/O. Implementations of MPI-1 appeared with the MPI standard; implementations of MPI-2 are continuing to appear. But many implementations build on top of a point-to-point communication base, leading to inefficiencies in the performance of the MPI implementation. Even for MPI-1, many MPI implementations base their collective operations on relatively simple algorithms, built on top of MPI point-to-point (or a simple lower-level communication layer). These implementations achieve the functionality but not the scalable performance that is possible in MPI. In MPI-2, providing a high-performance implementation of the remote-memory operations requires great care and attention to the opportunities for performance that are contained in the MPI standard.
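As a concrete illustration of that point (my own sketch, not material from the paper), the C fragment below builds a "flat" broadcast directly on MPI point-to-point calls: the root issues p-1 sequential sends, so its cost grows linearly with the number of processes, whereas a tree-based MPI_Bcast in a good implementation completes the same operation in O(log p) communication steps. The helper flat_bcast is hypothetical, introduced only for contrast.

/* Hedged sketch: a naive broadcast layered on point-to-point, the kind of
 * simple algorithm that achieves functionality but not scalable performance. */
#include <mpi.h>
#include <stdio.h>

static void flat_bcast(void *buf, int count, MPI_Datatype type,
                       int root, MPI_Comm comm)
{
    int rank, size;
    MPI_Comm_rank(comm, &rank);
    MPI_Comm_size(comm, &size);

    if (rank == root) {
        /* p-1 sequential sends: the root becomes a serial bottleneck. */
        for (int r = 0; r < size; r++)
            if (r != root)
                MPI_Send(buf, count, type, r, 0, comm);
    } else {
        MPI_Recv(buf, count, type, root, 0, comm, MPI_STATUS_IGNORE);
    }
}

int main(int argc, char **argv)
{
    int rank, value = 0;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    if (rank == 0) value = 42;
    flat_bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD); /* linear-cost version */
    MPI_Bcast(&value, 1, MPI_INT, 0, MPI_COMM_WORLD);  /* library collective, shown for contrast */

    printf("rank %d has value %d\n", rank, value);
    MPI_Finalize();
    return 0;
}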

One of the goals of the MPICH2 project is to provide an easily extended example of an implementation of MPI that goes beyond a simple point-to-point communication model. This talk will discuss some of the challenges in implementing collective, remote-memory, and I/O operations in MPI. For example, many of the best algorithms for collective operations involve message subdivision (possibly operating on less than one instance of an MPI derived datatype) and multisend or store-and-forward operations. As another example, the remote memory operations in MPI-2 specify semantics that are designed to define precise behavior, excluding ambiguities and race conditions. These clean (if somewhat complex) semantics are sometimes seen as a barrier to performance. This talk will discuss some of the methods that can be used to exploit the RMA semantics to provide higher performance for typical application codes. The approaches taken in MPICH2, along with current results from the MPICH2 project, will be discussed.
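For readers unfamiliar with the MPI-2 RMA model the abstract refers to, the following sketch (again my own minimal illustration, not code from the talk) shows a fence-synchronized access epoch: the MPI_Put calls issued between two MPI_Win_fence calls need only complete at the closing fence, which is exactly the latitude an implementation can exploit, for example by queueing the puts and delivering them in one batch.

/* Hedged sketch: each rank writes its rank into the right neighbour's window
 * inside one fence epoch; completion is only required at the closing fence. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int local = rank;        /* value each rank contributes */
    int target_buf = -1;     /* window memory written by the left neighbour */
    MPI_Win win;
    MPI_Win_create(&target_buf, (MPI_Aint)sizeof(int), (int)sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    MPI_Win_fence(0, win);                 /* open the access epoch */
    int right = (rank + 1) % size;
    MPI_Put(&local, 1, MPI_INT, right, 0, 1, MPI_INT, win);
    /* No per-operation completion here: the put need not be delivered yet. */
    MPI_Win_fence(0, win);                 /* close the epoch; data now visible */

    printf("rank %d received %d from its left neighbour\n", rank, target_buf);
    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}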

This work was supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing, U.S. Department of Energy, under Contract W-31-109-Eng-38.

Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gropp, W.D. (2001). Challenges and Successes in Achieving the Potential of MPI. In: Cotronis, Y., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2001. Lecture Notes in Computer Science, vol 2131. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45417-9_3

  • DOI: https://doi.org/10.1007/3-540-45417-9_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42609-7

  • Online ISBN: 978-3-540-45417-5

  • eBook Packages: Springer Book Archive
