Analysis of the Component Architecture Overhead in Open MPI

Conference paper
Recent Advances in Parallel Virtual Machine and Message Passing Interface (EuroPVM/MPI 2005)

Part of the book series: Lecture Notes in Computer Science (volume 3666)

Abstract

Component architectures provide a useful framework for developing an extensible and maintainable code base upon which large-scale software projects can be built. Component methodologies have only recently been incorporated into applications by the High Performance Computing community, in part because of the perception that component architectures necessarily incur an unacceptable performance penalty. The Open MPI project is creating a new implementation of the Message Passing Interface standard, based on a custom component architecture, the Modular Component Architecture (MCA), to enable straightforward customization of a high-performance MPI implementation. This paper reports on a detailed analysis of the performance overhead in Open MPI introduced by the MCA. We compare the MCA-based implementation of Open MPI with a modified version that bypasses the component infrastructure. The overhead of the MCA is shown to be low, on the order of 1%, for both latency and bandwidth microbenchmarks as well as for the NAS Parallel Benchmark suite.
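To make concrete what kind of cost is being measured, the following is a minimal sketch in C, using hypothetical type and function names rather than the actual MCA interfaces: the component path reaches the selected implementation through a function pointer stored in a module structure, whereas the bypass variant used for comparison calls the implementation directly.

    /* Hypothetical component interface; the real MCA frameworks differ. */
    #include <stdio.h>

    typedef struct {
        const char *name;                                   /* component name, e.g. "tcp" */
        int (*send)(const void *buf, int count, int dest);  /* point-to-point send entry */
    } pml_module_t;

    /* One concrete component implementation. */
    static int tcp_send(const void *buf, int count, int dest)
    {
        (void)buf;
        printf("tcp: sending %d units to rank %d\n", count, dest);
        return 0;
    }

    static pml_module_t tcp_module = { "tcp", tcp_send };

    /* In a component architecture, a module is selected at run time from
       the set of available components; here the choice is hard-wired. */
    static pml_module_t *selected_pml = &tcp_module;

    int main(void)
    {
        char payload[64] = { 0 };

        /* Component path: pointer dereference plus indirect call. */
        selected_pml->send(payload, (int)sizeof payload, 1);

        /* Bypass path: the direct call a component-free build would make. */
        tcp_send(payload, (int)sizeof payload, 1);
        return 0;
    }

The difference between the two paths, an extra pointer dereference and an indirect call per operation, is the kind of per-call cost the paper's microbenchmarks isolate.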



Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Barrett, B., Squyres, J.M., Lumsdaine, A., Graham, R.L., Bosilca, G. (2005). Analysis of the Component Architecture Overhead in Open MPI. In: Di Martino, B., Kranzlmüller, D., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2005. Lecture Notes in Computer Science, vol 3666. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11557265_25

  • DOI: https://doi.org/10.1007/11557265_25

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-29009-4

  • Online ISBN: 978-3-540-31943-6

  • eBook Packages: Computer Science, Computer Science (R0)
