
Design Alternatives and Performance Trade-Offs for Implementing MPI-2 over InfiniBand

  • Conference paper
In: Recent Advances in Parallel Virtual Machine and Message Passing Interface (EuroPVM/MPI 2005)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 3666)

Abstract

MPICH2 provides a layered architecture to achieve both portability and performance. For implementations of MPI-2 over InfiniBand, it gives researchers the flexibility to implement at the RDMA channel, CH3, or ADI3 layer. In this paper we analyze the performance and complexity trade-offs associated with implementations at these layers. We describe our designs and implementations, as well as optimizations, at each layer. To show the performance impact of these design schemes and optimizations, we evaluate our implementations with different micro-benchmarks and with the HPCC and NAS benchmark suites. Our experiments show that although the ADI3 layer adds implementation complexity, the benefits achieved through optimizations justify moving to the ADI layer to extract the best performance.

This research is supported in part by Department of Energy’s Grant #DE-FC02-01ER25506, National Science Foundation’s grants #CNS-0204429, and #CUR-0311542, and a grant from Intel.





Copyright information

© 2005 Springer-Verlag Berlin Heidelberg


Cite this paper

Huang, W., Santhanaraman, G., Jin, HW., Panda, D.K. (2005). Design Alternatives and Performance Trade-Offs for Implementing MPI-2 over InfiniBand. In: Di Martino, B., Kranzlmüller, D., Dongarra, J. (eds) Recent Advances in Parallel Virtual Machine and Message Passing Interface. EuroPVM/MPI 2005. Lecture Notes in Computer Science, vol 3666. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11557265_27


  • DOI: https://doi.org/10.1007/11557265_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-29009-4

  • Online ISBN: 978-3-540-31943-6

  • eBook Packages: Computer Science (R0)
