A common messaging layer for MPI and PVM over SCI

  • Conference paper
High-Performance Computing and Networking (HPCN-Europe 1998)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1401)

Abstract

This paper describes the design of a common message passing layer for implementing both MPI and PVM over the SCI interconnect in a workstation or PC cluster. The design focuses on achieving low latency. The message layer encapsulates all necessary knowledge of the underlying interconnect and operating system. Yet, we claim that it can be used to implement message passing libraries as different as MPI and PVM without sacrificing efficiency. Initial results obtained from using the message layer in SCI clusters are presented.

This work is supported by the European Commission in the Fourth Framework Programme under ESPRIT HPCN Project EP23174 (SISCI).
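
The abstract describes a layer that hides SCI- and OS-specific details behind a small send/receive interface shared by the MPI and PVM implementations. The following C fragment is a hypothetical illustration of that idea, not the paper's actual interface: it models a polled, fixed-size-slot mailbox of the kind often placed in a remote SCI segment, with an ordinary heap buffer standing in for the mapped segment so that the example is self-contained and runnable. All identifiers (ml_open, ml_send, ml_recv, ML_SLOT_SIZE, and so on) are invented for illustration.

/*
 * Hypothetical sketch of a minimal common messaging layer that both an MPI
 * and a PVM front-end could be built on. A real implementation would map a
 * remote SCI segment; here a malloc'd buffer stands in for that mapping.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define ML_SLOTS      8           /* slots in the receive ring buffer         */
#define ML_SLOT_SIZE  64          /* small fixed-size slots keep latency low  */

typedef struct {                  /* one slot of the mailbox ("SCI segment")  */
    volatile int full;            /* written last, so the receiver only sees  */
    int          len;             /* a message once it is complete            */
    char         data[ML_SLOT_SIZE];
} ml_slot;

typedef struct {                  /* per-connection state                     */
    ml_slot *ring;                /* would be a mapped remote segment in SCI  */
    int      head, tail;
} ml_channel;

/* create a channel; a real layer would import and map an SCI segment here */
static ml_channel *ml_open(void) {
    ml_channel *ch = calloc(1, sizeof *ch);
    ch->ring = calloc(ML_SLOTS, sizeof *ch->ring);
    return ch;
}

/* copy a short message into the next free slot; returns 0 on success */
static int ml_send(ml_channel *ch, const void *buf, int len) {
    ml_slot *s = &ch->ring[ch->head];
    if (len > ML_SLOT_SIZE || s->full) return -1;   /* back-pressure          */
    memcpy(s->data, buf, len);
    s->len  = len;
    s->full = 1;                  /* flag set last: message becomes visible   */
    ch->head = (ch->head + 1) % ML_SLOTS;
    return 0;
}

/* poll for the next message; the MPI/PVM progress engines would call this */
static int ml_recv(ml_channel *ch, void *buf, int maxlen) {
    ml_slot *s = &ch->ring[ch->tail];
    if (!s->full) return -1;      /* nothing there yet                        */
    int len = s->len < maxlen ? s->len : maxlen;
    memcpy(buf, s->data, len);
    s->full = 0;
    ch->tail = (ch->tail + 1) % ML_SLOTS;
    return len;
}

int main(void) {
    ml_channel *ch = ml_open();
    char in[ML_SLOT_SIZE];
    ml_send(ch, "hello over SCI", 15);
    int n = ml_recv(ch, in, sizeof in);
    printf("received %d bytes: %s\n", n, in);
    free(ch->ring);
    free(ch);
    return 0;
}

Writing the full flag only after the payload is what lets the receiver poll without locks; an actual SCI messaging layer would additionally have to handle segment setup and teardown, transmission errors, and messages larger than one slot.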

Author information

B. G. Herland, M. Eberl, H. Hellwagner

Editor information

Peter Sloot, Marian Bubak, Bob Hertzberger

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Herland, B.G., Eberl, M., Hellwagner, H. (1998). A common messaging layer for MPI and PVM over SCI. In: Sloot, P., Bubak, M., Hertzberger, B. (eds) High-Performance Computing and Networking. HPCN-Europe 1998. Lecture Notes in Computer Science, vol 1401. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0037185

  • DOI: https://doi.org/10.1007/BFb0037185

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64443-9

  • Online ISBN: 978-3-540-69783-1
