Abstract
Thus far, parallelism at the loop level (or data parallelism) has been almost exclusively the main target of parallelizing compilers. The variety of new parallel architectures and recent progress in interprocedural dependence analysis suggest new directions for exploiting parallelism across loop and procedure boundaries (functional parallelism). This paper presents an intermediate parallel program representation that encapsulates minimal data and control dependences and that can be used to extract and exploit functional, or task-level, parallelism. We focus on deriving the execution conditions of tasks so as to maximize task-level parallelism, and on optimizing these conditions to reduce the synchronization overhead imposed by data and control dependences.
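The idea of execution conditions and their optimization can be illustrated with a small sketch. This is a hypothetical illustration, not the paper's actual representation or algorithm: each task carries a set of control predicates (e.g. branch outcomes) that must hold for it to execute, plus its data-dependence predecessors, and the "optimization" drops predicates that are already implied by the fact that a data predecessor has executed. The `Task` class and function names are inventions for this example.

```python
# Hypothetical sketch of tasks with execution conditions (not the authors'
# actual intermediate representation). A task's full execution condition is
# its own control predicates plus those inherited transitively from its
# data-dependence predecessors; predicates guaranteed by a predecessor
# having executed need not be re-tested at run time.

from dataclasses import dataclass


@dataclass
class Task:
    name: str
    # Control predicates (e.g. branch outcomes) guarding this task.
    control_cond: frozenset = frozenset()
    # Tasks whose results this task consumes (data dependences).
    data_preds: tuple = ()


def execution_cond(task):
    """Full condition: own predicates plus those inherited from
    data-dependence predecessors."""
    cond = set(task.control_cond)
    for p in task.data_preds:
        cond |= execution_cond(p)
    return frozenset(cond)


def optimized_cond(task):
    """Optimized condition: keep only predicates not already implied
    by some data predecessor having executed."""
    implied = set()
    for p in task.data_preds:
        implied |= execution_cond(p)
    return frozenset(task.control_cond) - implied
```

For example, if `t2` depends on the data produced by `t1`, which executes only under predicate `p`, then `t2`'s guard `{p, q}` can be reduced to `{q}`: once `t1` has run, `p` is known to hold, so only `q` needs testing before `t2` starts.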
Copyright information
© 1992 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Girkar, M., Polychronopoulos, C. (1992). Optimization of data/control conditions in task graphs. In: Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1991. Lecture Notes in Computer Science, vol 589. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0038663
Print ISBN: 978-3-540-55422-6
Online ISBN: 978-3-540-47063-2