
Forward dependence folding as a method of communication optimization in SPMD Programs

  • Conference paper
Applied Parallel Computing Large Scale Scientific and Industrial Problems (PARA 1998)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1541))


Abstract

In this paper, a method is proposed for optimizing communication in SPMD programs executed in distributed-memory environments. The programs in question result from parallelizing single loops whose dependence graphs are acyclic. After an introduction to the basics of data dependence theory, the idea of forward dependence folding is presented. Next, it is shown how dependence folding may be coupled with message aggregation to reduce the number of time-costly interprocessor message transfers. The theoretical considerations are accompanied by experimental results from applying the method to programs executed in a network of workstations.
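As an illustration of the kind of saving message aggregation targets (a hypothetical sketch, not code from the paper), consider a loop of the form `a[i+d] = f(a[i])` with a forward dependence of distance `d`, block-distributed over `p` processes. Each element whose writer lives on a different process than its reader would naively require its own transfer; aggregating all elements exchanged between the same pair of processes into one combined message collapses the count to one message per communicating pair. All names below (`owner`, `count_transfers`) are invented for this sketch.

```python
# Hypothetical sketch: count interprocessor transfers for a loop
# a[i+d] = f(a[i]) under a block distribution, with and without
# message aggregation. Not the paper's algorithm, just the idea.

def owner(i, block):
    # Block distribution: element i is owned by process i // block.
    return i // block

def count_transfers(n, d, p, aggregate):
    block = n // p          # assume p divides n for simplicity
    naive = 0               # one message per boundary-crossing element
    pairs = set()           # (sender, receiver) pairs actually communicating
    for i in range(n - d):
        src, dst = owner(i, block), owner(i + d, block)
        if src != dst:      # dependence crosses a block boundary
            naive += 1
            pairs.add((src, dst))
    # Without aggregation: one message per crossing element.
    # With aggregation: one combined message per communicating pair.
    return len(pairs) if aggregate else naive

# e.g. n=1000, d=3, p=4: 9 element-wise messages collapse to 3 aggregated ones
```

For a distance-`d` dependence, each block boundary is crossed by `d` elements, so aggregation reduces the traffic between neighbouring processes by roughly a factor of `d`, at the cost of buffering the values before sending.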




Editor information

Bo Kågström Jack Dongarra Erik Elmroth Jerzy Waśniewski


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Szczerbinski, Z. (1998). Forward dependence folding as a method of communication optimization in SPMD Programs. In: Kågström, B., Dongarra, J., Elmroth, E., Waśniewski, J. (eds) Applied Parallel Computing Large Scale Scientific and Industrial Problems. PARA 1998. Lecture Notes in Computer Science, vol 1541. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0095380

  • DOI: https://doi.org/10.1007/BFb0095380

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-65414-8

  • Online ISBN: 978-3-540-49261-0

  • eBook Packages: Springer Book Archive
