Abstract
Although distributed-memory systems have become common, the de facto standard for communication between them remains the Message Passing Interface (MPI). pupyMPI is a pure-Python implementation of a broad subset of the MPI 1.3 specification that lets Python programmers utilize multiple CPUs, with datatypes and memory handled transparently. pupyMPI also implements a few non-standard extensions, such as non-blocking collective operations and the option of suspending, migrating, and resuming the distributed computation of a pupyMPI program. This paper introduces pupyMPI and presents benchmarks against C implementations of MPI, which show acceptable performance.
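The "datatypes and memory handled transparently" claim means a pure-Python MPI can ship arbitrary Python objects without the explicit datatype declarations C MPI requires; in Python this is conventionally done with object serialization (pickle). The sketch below illustrates that idea only — the `send`/`recv` names and the in-process queue are hypothetical stand-ins, not pupyMPI's actual API or transport.

```python
import pickle
import queue
import threading

# Illustrative sketch (assumed names, not pupyMPI's real API): message
# passing where any picklable Python object can be sent without the
# sender or receiver declaring an MPI datatype.

channel = queue.Queue()  # stands in for a socket between two ranks


def send(obj):
    # Serialize the object to bytes; no MPI_INT/MPI_DOUBLE-style typing.
    channel.put(pickle.dumps(obj))


def recv():
    # Reconstruct the original Python object on the receiving side.
    return pickle.loads(channel.get())


def rank0():
    # "Rank 0" sends a heterogeneous object, something C MPI would need
    # a derived datatype for.
    send({"msg": "ping", "payload": [1, 2, 3]})


t = threading.Thread(target=rank0)
t.start()
received = recv()  # "rank 1" receives the fully typed object back
t.join()
print(received["payload"])
```

The cost of this convenience is serialization overhead on every message, which is one reason a pure-Python MPI benchmarks slower than C implementations.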
Copyright information
© 2011 Springer-Verlag Berlin Heidelberg
Cite this paper
Bromer, R., Hantho, F., Vinter, B. (2011). pupyMPI - MPI Implemented in Pure Python. In: Cotronis, Y., Danalis, A., Nikolopoulos, D.S., Dongarra, J. (eds) Recent Advances in the Message Passing Interface. EuroMPI 2011. Lecture Notes in Computer Science, vol 6960. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24449-0_16
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-24448-3
Online ISBN: 978-3-642-24449-0