pupyMPI - MPI Implemented in Pure Python

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6960)

Abstract

Distributed memory systems have become commonplace, and the Message Passing Interface (MPI) remains the de facto standard for communication on them. pupyMPI is a pure Python implementation of a broad subset of the MPI 1.3 specification that allows Python programmers to utilize multiple CPUs, with datatypes and memory handled transparently. pupyMPI also implements a few non-standard extensions, such as non-blocking collectives and the option of suspending, migrating and resuming the distributed computation of a pupyMPI program. This paper introduces pupyMPI and presents benchmarks against C implementations of MPI, which show acceptable performance.
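To make the programming model concrete, the sketch below shows what a small pupyMPI program might look like. It is a minimal illustration only: the module name, the MPI() constructor, the MPI_COMM_WORLD handle and the rank/size/send/recv/finalize methods are assumptions inferred from the description above and from standard MPI terminology, not a verified rendering of the pupyMPI API.

    # Sketch of a pupyMPI-style program (assumed API, see note above).
    from mpi import MPI              # assumed pupyMPI entry point

    mpi = MPI()                      # initialize the runtime for this process
    world = mpi.MPI_COMM_WORLD       # assumed handle to the global communicator

    rank = world.rank()              # rank of this process
    size = world.size()              # total number of processes in the run

    if rank == 0:
        # Arbitrary picklable Python objects can be sent; datatypes and
        # memory are handled transparently, as stated in the abstract.
        for dest in range(1, size):
            world.send({"payload": list(range(10))}, dest, 1)   # (data, dest, tag)
    else:
        data = world.recv(0, 1)      # (source, tag)
        print("rank %d received %r" % (rank, data))

    mpi.finalize()                   # shut the runtime down cleanly

Such a program would be started with pupyMPI's own launcher across a set of hosts, analogous to mpirun for C MPI implementations; the exact launcher invocation is not shown here.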





Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

Cite this paper

Bromer, R., Hantho, F., Vinter, B. (2011). pupyMPI - MPI Implemented in Pure Python. In: Cotronis, Y., Danalis, A., Nikolopoulos, D.S., Dongarra, J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2011. Lecture Notes in Computer Science, vol 6960. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24449-0_16

  • DOI: https://doi.org/10.1007/978-3-642-24449-0_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24448-3

  • Online ISBN: 978-3-642-24449-0
