Abstract

Memory expansion is a classical means of extracting parallelism from imperative programs. However, current techniques require a runtime mechanism to restore the data flow when the expansion maps two definitions reaching the same use to two different memory locations (e.g., φ functions in the SSA framework). This paper presents an expansion framework for any type of data structure in any imperative program, without the need for dynamic data-flow restoration. The key idea is to group together definitions that reach a common use; we show that such an expansion then boils down to mapping each group to a single memory cell.
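The grouping idea can be sketched as follows: treat "may reach a common use" as a relation over definitions, take its transitive closure with a union-find structure, and assign one expanded memory cell per resulting class. The sketch below is a minimal illustration of that principle only, not the paper's algorithm (which operates on affine relations over loop iteration vectors); the reaching-definitions relation and all names here are invented for the example.

```python
# Illustrative sketch: group definitions that may reach a common use,
# so that each group can be mapped to a single expanded memory cell
# and no runtime phi function is needed to restore the data flow.
# The toy reaching-definitions relation below is invented, not from the paper.

def group_definitions(reaching):
    """reaching: dict mapping each use to the set of definitions
    that may reach it. Returns a dict mapping each definition to a
    group representative (definitions sharing a use share a group)."""
    parent = {}

    def find(x):
        # Union-find "find" with path halving.
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra

    for defs in reaching.values():
        ds = sorted(defs)
        for d in ds:
            find(d)            # register every definition
        for d in ds[1:]:
            union(ds[0], d)    # defs reaching the same use: one group
    return {d: find(d) for d in list(parent)}

# Toy program: use u1 may be reached by d1 or d2 (a conditionally
# executed definition), while u2 is reached only by d3.
reaching = {"u1": {"d1", "d2"}, "u2": {"d3"}}
groups = group_definitions(reaching)
# d1 and d2 end up in one group (one memory cell); d3 gets its own cell.
```

Because the same cell holds whichever of d1 or d2 executed, the use u1 reads the correct value without any runtime test, which is exactly what a φ function would otherwise have to decide.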


Cite this article

Barthou, D., Cohen, A. & Collard, JF. Maximal Static Expansion. International Journal of Parallel Programming 28, 213–243 (2000). https://doi.org/10.1023/A:1007500431910
