
Extending conventional flow analysis to deal with array references

  • VI. Analysis Techniques
  • Conference paper

Languages and Compilers for Parallel Computing (LCPC 1991)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 589)


Abstract

Traditional optimization-oriented flow analysis provides methods for solving a wide assortment of problems (e.g., forward and backward problems; problems with confluence operators of union, intersection, etc.). These methods handle scalar variables extremely well because it is easy to determine whether two scalar variable references refer to the same memory location(s). References to array variables, on the other hand, are handled by ignoring the fact that they are array references, i.e., by treating them as though they were references to scalar variables; the reason, of course, is that it is harder to determine whether two references to the same array variable refer to the same memory location(s). Using methods derived from the field of array subscript analysis, we have developed enhancements to the flow analysis of code containing array references. In the present paper we present some elementary results useful in solving flow problems that require must-kill information, such as ud-chaining, du-chaining, and live-variable analysis. In a later paper we will show how the principles underlying these results can be extended to problems requiring must-not-kill information, such as global common subexpression elimination.
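The gap the abstract describes can be made concrete with a small sketch. A scalar-style analysis, upon seeing a write to A, can only record that it *may* kill earlier definitions of A; must-kill information requires comparing subscripts. The following is a minimal illustration, not the paper's algorithm: it assumes subscripts are linear in a single loop index (a·i + b), and the `Ref` class and function names are invented for this example.

```python
# Hedged sketch: a toy comparison of "must overwrite" vs. "may overlap" for
# array references with linear subscripts a*i + b in loop index i.
# Assumptions (not from the paper): single loop, integer subscripts.
from dataclasses import dataclass

@dataclass(frozen=True)
class Ref:
    a: int  # coefficient of the loop index i
    b: int  # constant term

def must_overwrite(defn: Ref, target: Ref) -> bool:
    """True iff A[defn] writes the same element as A[target] on EVERY
    iteration, i.e. a1*i + b1 == a2*i + b2 for all i; this holds exactly
    when the coefficients and constants match.  This is the must-kill
    information a scalar-style analysis cannot supply."""
    return defn.a == target.a and defn.b == target.b

def may_overlap(r1: Ref, r2: Ref, n: int) -> bool:
    """True iff some pair of iterations (i, j) in [0, n) makes the two
    subscripts equal -- the weaker 'may' fact that treating an array as
    a scalar forces the analysis to assume."""
    return any(r1.a * i + r1.b == r2.a * j + r2.b
               for i in range(n) for j in range(n))
```

For example, `A[i] = ...` must-kills a prior `A[i]` (`must_overwrite(Ref(1, 0), Ref(1, 0))` is true), while `A[i]` and `A[i+1]` may touch the same element across iterations yet never do so on the same iteration, so the definition cannot be treated as a must-kill.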



Editor information

Utpal Banerjee, David Gelernter, Alex Nicolau, David Padua


Copyright information

© 1992 Springer-Verlag Berlin Heidelberg

Cite this paper

Kallis, A., Klappholz, D. (1992). Extending conventional flow analysis to deal with array references. In: Banerjee, U., Gelernter, D., Nicolau, A., Padua, D. (eds) Languages and Compilers for Parallel Computing. LCPC 1991. Lecture Notes in Computer Science, vol 589. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0038669

  • DOI: https://doi.org/10.1007/BFb0038669

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-55422-6

  • Online ISBN: 978-3-540-47063-2

