Abstract
This chapter explains the basics of parallel programming and MPI (Message Passing Interface). To provide the background needed to understand MPI, parallel computer architectures and parallel programming models are explained first. MPI APIs (Application Programming Interfaces) are then presented together with MPI programming examples. Several key topics for MPI programming, such as data-distribution methods and communication algorithms, are also explained.
Notes
- 1.
On the other hand, the need for a large amount of memory is another reason to parallelize a program on distributed-memory machines.
- 2.
The communication itself, however, can be parallelized at the algorithm level; this kind of parallelization is explained in this chapter.
Copyright information
© 2019 Springer Nature Singapore Pte Ltd.
Cite this chapter
Katagiri, T. (2019). Basics of MPI Programming. In: Geshi, M. (eds) The Art of High Performance Computing for Computational Science, Vol. 1. Springer, Singapore. https://doi.org/10.1007/978-981-13-6194-4_2
Publisher Name: Springer, Singapore
Print ISBN: 978-981-13-6193-7
Online ISBN: 978-981-13-6194-4