Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2840)

Abstract

The Message Passing Interface (MPI) has been very successful at providing a programming model for computers ranging from small PC clusters to the world’s fastest machines. MPI has succeeded because it addresses many of the requirements of an effective parallel programming model, including portability, performance, modularity, and completeness. But much remains to be done with MPI, both in terms of its performance and in supporting its use in applications. This talk will look at three areas: programming models, implementations, and scalability.

The MPI programming model is often described as supporting “only” basic message passing (point-to-point and collective) and, in MPI-2, simple one-sided communication. Such a description ignores MPI’s support for the creation of effective libraries built using MPI routines. This support has encouraged the development of powerful libraries that, working with MPI, provide a rich high-level programming environment. This will be illustrated with two examples drawn from computational simulation.
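
As a concrete illustration of the library support mentioned above, the sketch below (in C, not taken from the talk) shows one common pattern: a library duplicates the caller’s communicator with MPI_Comm_dup so that its internal messages and collectives cannot interfere with the application’s own communication. The lib_* names and structure are purely illustrative assumptions.

```c
/* Minimal sketch: a library isolating its communication via MPI_Comm_dup.
 * The lib_* names are hypothetical, for illustration only. */
#include <mpi.h>
#include <stdio.h>

/* Hypothetical library "context": owns a private communicator. */
typedef struct {
    MPI_Comm comm;   /* private copy, isolates library messages */
} lib_context;

void lib_init(lib_context *ctx, MPI_Comm user_comm)
{
    /* Duplication gives the library its own communication space,
     * so its tags and collectives cannot collide with the caller's. */
    MPI_Comm_dup(user_comm, &ctx->comm);
}

double lib_global_sum(lib_context *ctx, double local)
{
    double global;
    MPI_Allreduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, ctx->comm);
    return global;
}

void lib_free(lib_context *ctx)
{
    MPI_Comm_free(&ctx->comm);
}

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    lib_context ctx;
    lib_init(&ctx, MPI_COMM_WORLD);

    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    double sum = lib_global_sum(&ctx, (double)rank);
    if (rank == 0)
        printf("sum of ranks = %g\n", sum);

    lib_free(&ctx);
    MPI_Finalize();
    return 0;
}
```

Duplicating the communicator, rather than reusing MPI_COMM_WORLD directly, is what lets independently written libraries compose safely within a single MPI program.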

MPI was designed to allow implementations to fully exploit the available hardware. It provides many features that support high performance, including a relaxed memory consistency model. While many MPI implementations take advantage of some of these opportunities, much remains to be done. This talk will describe some of the opportunities for improving the performance of MPI implementations, with particular emphasis on the relaxed memory model and on MPI’s one-sided and parallel I/O operations.
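
For readers unfamiliar with MPI-2 one-sided communication, the following minimal sketch (illustrative, not from the talk) shows the fence-synchronized style: a value deposited with MPI_Put is only guaranteed visible at the target after the closing MPI_Win_fence, which is one place the relaxed consistency model gives implementations room to optimize.

```c
/* Minimal sketch of MPI-2 one-sided communication with fence synchronization. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    int target_buf = -1;          /* memory exposed to remote puts */
    MPI_Win win;
    MPI_Win_create(&target_buf, sizeof(int), sizeof(int),
                   MPI_INFO_NULL, MPI_COMM_WORLD, &win);

    int value = 42;               /* origin buffer: must stay valid until the fence */

    MPI_Win_fence(0, win);        /* open access/exposure epoch */
    if (rank == 0 && size > 1) {
        /* Deposit 'value' into rank 1's window; rank 1 posts no receive. */
        MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
    }
    MPI_Win_fence(0, win);        /* close epoch: the put is now complete and visible */

    if (rank == 1)
        printf("rank 1 received %d via MPI_Put\n", target_buf);

    MPI_Win_free(&win);
    MPI_Finalize();
    return 0;
}
```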

Scalability is another goal of the MPI design, and many applications have demonstrated scaling to thousands of processors. In the near future, computers with more than 64,000 processors will be built. Barriers to scalability in both the definition and the implementation of MPI will be discussed, along with possible future directions for MPI development. By avoiding a few very low-usage routines and with the proper implementation, MPI should scale effectively to the next generation of massively parallel computers.

This work was supported by the Mathematical, Information, and Computational Sciences Division subprogram of the Office of Advanced Scientific Computing Research, Office of Science, U.S. Department of Energy, under Contract W-31-109-ENG-38.

Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gropp, W.D. (2003). Future Developments in MPI. In: Dongarra, J., Laforenza, D., Orlando, S. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2003. Lecture Notes in Computer Science, vol 2840. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-39924-7_3

  • DOI: https://doi.org/10.1007/978-3-540-39924-7_3

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-20149-6

  • Online ISBN: 978-3-540-39924-7
