
New program restructuring technology

  • Conference paper
Parallel Computation (ACPC 1991)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 591))

Abstract

Compilers for vector and parallel computers advertise the ability to automatically detect the parallelism in sequential loops. Sometimes these compilers change the structure of nested loops, for instance by interchanging loops, to uncover more parallelism. The technology used for parallelism discovery can also be used to optimize nested loop algorithms for many other architectural characteristics, such as memory hierarchies or limited interprocessor connection networks. While much of this technology has been successfully commercialized, recent research has yielded new techniques and transformations. This paper discusses the potential of these new optimizations and how they will interface with users.

This work was supported by NSF Grant CCR-8906909 and DARPA Grant MDA972-88-J-1004.




Editor information

Hans P. Zima

Copyright information

© 1992 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Wolfe, M. (1992). New program restructuring technology. In: Zima, H.P. (eds) Parallel Computation. ACPC 1991. Lecture Notes in Computer Science, vol 591. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-55437-8_68

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-55437-0

  • Online ISBN: 978-3-540-47073-1
