
MPIT — Communication/Computation Paradigm for Networks of SMP Workstations

  • Conference paper
Applied Parallel Computing (PARA 2002)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2367)

Abstract

The introduction of networks of symmetric multiprocessor (SMP) workstations has prompted the need for a more efficient programming paradigm. This paper presents such a paradigm for networks of SMP workstations, called MPIT, which integrates the Message Passing Interface (MPI) with POSIX threads. MPIT uses MPI for communication among the workstations and POSIX threads to process the data within each workstation. Communication among the workstations is handled by a dedicated communication thread that runs on each workstation, while communication among the threads within a workstation takes place through shared memory. Several theoretical and practical benefits of the MPIT paradigm are identified, including communication/computation overlap and increased resource utilization and performance.
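The abstract describes a structure in which each workstation runs one MPI process, all inter-workstation communication is funnelled through a dedicated communication thread, and worker threads compute on data kept in shared memory. The sketch below is not the authors' implementation; it is a minimal C illustration of that general pattern, assuming the main thread of each process plays the role of the communication thread (so MPI_THREAD_FUNNELED support suffices) and using invented names (worker, N_WORKERS, the buffer layout) purely for the example.

/* Minimal sketch of the MPI + POSIX threads pattern outlined above.
 * Names and data layout are illustrative, not taken from the paper. */
#include <mpi.h>
#include <pthread.h>
#include <stdio.h>

#define N_WORKERS 4
#define CHUNK 1024

/* Shared memory through which the communication thread hands data to workers. */
static double buffer[CHUNK];
static int data_ready = 0;                       /* guarded by lock/cond below */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  cond = PTHREAD_COND_INITIALIZER;

/* Worker threads compute on data placed in shared memory; they make no MPI calls. */
static void *worker(void *arg)
{
    long id = (long)arg;

    pthread_mutex_lock(&lock);
    while (!data_ready)                          /* wait for the communication thread */
        pthread_cond_wait(&cond, &lock);
    pthread_mutex_unlock(&lock);

    double partial = 0.0;
    for (int i = (int)id; i < CHUNK; i += N_WORKERS)   /* each worker takes a slice */
        partial += buffer[i] * buffer[i];
    printf("worker %ld partial sum: %f\n", id, partial);
    return NULL;
}

int main(int argc, char **argv)
{
    int provided, rank;

    /* Only the main thread (acting as the dedicated communication thread)
     * issues MPI calls, so MPI_THREAD_FUNNELED is enough. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    if (provided < MPI_THREAD_FUNNELED) {
        fprintf(stderr, "insufficient MPI thread support\n");
        MPI_Abort(MPI_COMM_WORLD, 1);
    }
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    pthread_t workers[N_WORKERS];
    for (long i = 0; i < N_WORKERS; i++)
        pthread_create(&workers[i], NULL, worker, (void *)i);

    if (rank == 0)                               /* rank 0 produces the data */
        for (int i = 0; i < CHUNK; i++)
            buffer[i] = (double)i;

    /* Inter-workstation communication, performed only by this thread. */
    MPI_Bcast(buffer, CHUNK, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    /* Hand the data to the worker threads through shared memory. */
    pthread_mutex_lock(&lock);
    data_ready = 1;
    pthread_cond_broadcast(&cond);
    pthread_mutex_unlock(&lock);

    for (int i = 0; i < N_WORKERS; i++)
        pthread_join(workers[i], NULL);

    MPI_Finalize();
    return 0;
}

Built with, for example, mpicc -pthread, each rank broadcasts one buffer and lets its worker threads consume it from shared memory; the MPIT runtime described in the paper is naturally more elaborate, with a persistent communication thread overlapping communication and computation.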

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Huttunen, P., Ikonen, J., Porras, J. (2002). MPIT — Communication/Computation Paradigm for Networks of SMP Workstations. In: Fagerholm, J., Haataja, J., Järvinen, J., Lyly, M., Råback, P., Savolainen, V. (eds) Applied Parallel Computing. PARA 2002. Lecture Notes in Computer Science, vol 2367. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48051-X_17

  • DOI: https://doi.org/10.1007/3-540-48051-X_17

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-43786-4

  • Online ISBN: 978-3-540-48051-8
