
Run-Time Parallelization Optimization Techniques

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1863))

Abstract

In this paper we first present several compiler techniques to reduce the overhead of run-time parallelization. We show how static control flow information can be used to reduce the number of memory references that must be traced at run time. We then introduce several methods designed specifically for parallelizing sparse applications, detailing heuristics for speculating on the types of data structures used by the original code, which reduce the memory requirements for tracing the sparse access patterns without performing any additional work. Optimization techniques for sparse reduction parallelization and speculative loop distribution conclude the paper.
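To make concrete the kind of run-time test whose overhead these techniques target, the sketch below performs a marking phase for a loop with irregular accesses: it records which elements each iteration reads and writes, then declares the loop fully parallel only if no element written in one iteration is touched in another. This is a minimal illustration only; the function name `loop_is_parallel` and the per-iteration access-list representation are assumptions for exposition, not the paper's actual shadow-array data structures (which exist precisely to cut the memory cost this naive version incurs).

```python
def loop_is_parallel(accesses):
    """Marking-phase sketch of a run-time dependence test.

    accesses: one entry per loop iteration; each entry is a list of
    (element_index, is_write) pairs describing that iteration's
    irregular (e.g., A[idx[i]]) memory accesses.

    Returns True if no cross-iteration flow, anti, or output
    dependence is observed, i.e., the loop may run fully parallel.
    """
    writers = {}  # element -> set of iterations that write it
    readers = {}  # element -> set of iterations that read it
    for it, ops in enumerate(accesses):
        for elem, is_write in ops:
            table = writers if is_write else readers
            table.setdefault(elem, set()).add(it)

    # Any element with a write is a dependence source if more than
    # one distinct iteration touches it.
    for elem, w in writers.items():
        touching = w | readers.get(elem, set())
        if len(touching) > 1:
            return False  # cross-iteration dependence detected
    return True
```

In this formulation, an element read and written only within a single iteration is harmless, which is why the test counts distinct iterations rather than raw accesses; real implementations fold this bookkeeping into compact per-element shadow structures updated on the fly.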

A full version of this paper is available as Technical Report TR99-025, Dept. of Computer Science, Texas A&M University.

Research supported in part by NSF CAREER Award CCR-9734471, NSF Grant ACI-9872126, DOE ASCI ASAP Level 2 Grant B347886, and a Hewlett-Packard Equipment Grant.





Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yu, H., Rauchwerger, L. (2000). Run-Time Parallelization Optimization Techniques. In: Carter, L., Ferrante, J. (eds) Languages and Compilers for Parallel Computing. LCPC 1999. Lecture Notes in Computer Science, vol 1863. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-44905-1_36

  • DOI: https://doi.org/10.1007/3-540-44905-1_36

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-67858-8

  • Online ISBN: 978-3-540-44905-8

  • eBook Packages: Springer Book Archive
