Abstract
Data dependences are known to hamper efficient parallelization of programs. Memory expansion is a general method for removing dependences by assigning distinct memory locations to dependent writes. Parallelization via memory expansion requires both moderation in the expansion degree and efficiency at run-time. We present a general storage mapping optimization framework for imperative programs, applicable to most loop nest parallelization techniques.
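To make the idea concrete, here is a minimal sketch of memory expansion in its simplest form, scalar expansion. This is an illustration of the general technique the abstract describes, not the paper's own framework; the function and variable names (`compute_serial`, `compute_expanded`, `t_exp`) are hypothetical.

```c
#include <assert.h>

#define N 8

/* Original loop: every iteration writes the scalar t, so the loop
   carries output and anti dependences on t and cannot run in parallel. */
void compute_serial(const int *a, int *b) {
    int t;
    for (int i = 0; i < N; i++) {
        t = a[i] * 2;        /* all iterations write the same location */
        b[i] = t + 1;
    }
}

/* After expansion: t becomes the array t_exp, giving each iteration a
   distinct memory location. The dependences on t disappear and the
   iterations become independent. */
void compute_expanded(const int *a, int *b) {
    int t_exp[N];
    for (int i = 0; i < N; i++) {
        t_exp[i] = a[i] * 2; /* distinct write location per iteration */
        b[i] = t_exp[i] + 1;
    }
}
```

The cost of expansion is the extra storage (`t_exp` uses N cells where `t` used one); bounding that cost is exactly the "moderation in the expansion degree" the abstract refers to.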
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Cohen, A., Lefebvre, V. (1999). Storage Mapping Optimization for Parallel Programs. In: Amestoy, P., et al. Euro-Par’99 Parallel Processing. Euro-Par 1999. Lecture Notes in Computer Science, vol 1685. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-48311-X_49
DOI: https://doi.org/10.1007/3-540-48311-X_49
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-66443-7
Online ISBN: 978-3-540-48311-3