Abstract
Advances in microprocessor technology, power management, and network communication have redirected the development of multiprocessor architectures toward higher levels of processing. Multi-core technology has boosted the computing power delivered by high-speed networks of workstations and SMPs, providing large computational capacity at affordable cost using only commodity components. This paper presents a tool for integrating several clusters into a single high-performance system based on the MPI standard. A Gateway Process controls the communication channels between MPI processes and forwards messages between clusters, using a protocol that guarantees message ordering and sender/receiver synchronization. The tool is designed for scalability and supports both point-to-point and collective operations. Experimental results show that the proposed tool is practical and efficient.
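The ordering guarantee described above can be illustrated with a minimal sketch. This is not the authors' implementation: it is a hypothetical gateway that simulates MPI forwarding in plain Python, buffering per-sender messages that arrive out of order and releasing them only in sequence-number order, as the abstract's protocol requires.

```python
import heapq
from collections import defaultdict

class Gateway:
    """Forwards inter-cluster messages, delivering each sender's
    messages in sequence order even if they arrive out of order.
    (Illustrative sketch; real MPI transport is omitted.)"""

    def __init__(self):
        self._next_seq = defaultdict(int)   # per-sender next expected sequence number
        self._pending = defaultdict(list)   # per-sender min-heap of early arrivals
        self.delivered = []                 # messages handed on to the destination

    def receive(self, sender, seq, payload):
        # Buffer the arrival, then flush every message that is now in order.
        heap = self._pending[sender]
        heapq.heappush(heap, (seq, payload))
        while heap and heap[0][0] == self._next_seq[sender]:
            _, msg = heapq.heappop(heap)
            self.delivered.append((sender, msg))
            self._next_seq[sender] += 1

gw = Gateway()
gw.receive("clusterA", 1, "b")   # arrives early: held back by the gateway
gw.receive("clusterA", 0, "a")   # fills the gap: both are flushed in order
print(gw.delivered)              # [('clusterA', 'a'), ('clusterA', 'b')]
```

A real gateway would additionally block senders for synchronization and fan messages out for collective operations; the buffering discipline above is only the ordering half of the protocol.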
Copyright information
© 2010 Springer-Verlag Berlin Heidelberg
Cite this paper
Massetto, F.I. et al. (2010). A Message Forward Tool for Integration of Clusters of Clusters Based on MPI Architecture. In: Hsu, CH., Malyshkin, V. (eds) Methods and Tools of Parallel Programming Multicomputers. MTPP 2010. Lecture Notes in Computer Science, vol 6083. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-14822-4_12
DOI: https://doi.org/10.1007/978-3-642-14822-4_12
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-14821-7
Online ISBN: 978-3-642-14822-4