ABSTRACT
Parallel loops account for the greatest amount of parallelism in scientific and numerical codes. For example, most of the DO loops in SPEC CFP2000 and SPEC OMPM2001 are of DOALL type and account for a large percentage of the total execution time. One way to exploit this parallelism is to partition the iteration space of a DOALL loop among the different processors of a parallel processor system. Naturally, a good partitioning is key to achieving high performance and efficient use of multiprocessor systems. Although a significant amount of work has been done on partitioning and scheduling loops with both rectangular and non-rectangular iteration spaces, the problem of partitioning loops with conditionals has, to the best of our knowledge, not been addressed so far. In this paper, we present a mathematical model for partitioning parallel nested loops, both perfect and non-perfect, with conditionals, where the expressions in a conditional are affine functions of the outer loop indices. We present a loop transformation based on elimination of redundant constraints bounding the iteration space of a nested loop. The transformation plays a critical role during the (static) partitioning process, as it helps to capture the "exact" lower and upper bounds (which may be either constant or symbolic) of the loop indices. We generate a canonical form of the loop nest using the transformation and employ the geometric approach we proposed earlier (in [1, 2]) for partitioning the iteration space along an axis corresponding to the outermost loop. For cases in which such a transformation does not exist, we propose a general approach for loop canonicalization. We present several examples from the literature and numerical packages to illustrate the effectiveness of our approach.
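The abstract's core idea — folding an affine conditional into exact loop bounds and then statically splitting the outermost loop so that each processor receives a balanced share of a non-rectangular iteration space — can be illustrated with a small sketch. The loop shape, helper names, and greedy balancing heuristic below are illustrative assumptions for exposition, not the paper's actual algorithm:

```python
# Illustrative loop nest (assumed, not from the paper):
#   for i in 1..n:
#     for j in i..n:
#       if i + j <= n: body
# Folding the conditional into the bounds gives the exact j-range
# [i, n - i], so the iteration count per outer index i is known
# statically and the outer axis can be partitioned by "density".

def iterations_in_column(i, n):
    """Exact number of inner iterations for fixed outer index i,
    after the conditional i + j <= n is absorbed into the j-bounds:
    j ranges over [i, n - i] when that interval is non-empty."""
    return max(0, (n - i) - i + 1)

def partition_outer_axis(n, p):
    """Greedily split the outer index range [1, n] into p contiguous
    chunks with approximately equal total iteration counts."""
    total = sum(iterations_in_column(i, n) for i in range(1, n + 1))
    target = total / p
    chunks, start, acc = [], 1, 0
    for i in range(1, n + 1):
        acc += iterations_in_column(i, n)
        if acc >= target and len(chunks) < p - 1:
            chunks.append((start, i))
            start, acc = i + 1, 0
    chunks.append((start, n))
    return chunks

print(partition_outer_axis(100, 4))
# → [(1, 7), (8, 16), (17, 27), (28, 100)]
```

Note how a naive equal-width split of `i` into [1, 25], [26, 50], [51, 75], [76, 100] would be badly imbalanced here: columns above i = 50 execute zero iterations, so two of the four processors would sit idle. Accounting for the variable density of the triangular space yields chunks of nearly equal work instead.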
REFERENCES
1. A. Kejariwal, A. Nicolau, U. Banerjee, and C. D. Polychronopoulos. A novel approach for partitioning iteration spaces with variable densities. In Proceedings of the 10th ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 120--131, Chicago, IL, 2005.
2. A. Kejariwal, P. D'Alberto, A. Nicolau, and C. D. Polychronopoulos. A geometric approach for partitioning N-dimensional non-rectangular iteration spaces. In Proceedings of the 17th International Workshop on Languages and Compilers for Parallel Computing, pages 102--116, West Lafayette, IN, 2004.
3. M. R. Haghighat and C. D. Polychronopoulos. Symbolic analysis for parallelizing compilers. ACM Transactions on Programming Languages and Systems, 18(4):477--518, July 1996.
4. R. Sakellariou. On the Quest for Perfect Load Balance in Loop-Based Parallel Computations. PhD thesis, Department of Computer Science, University of Manchester, October 1996.
5. C. Polychronopoulos, D. J. Kuck, and D. A. Padua. Execution of parallel loops on parallel processor systems. In Proceedings of the 1986 International Conference on Parallel Processing, pages 519--527, August 1986.
6. E. H. D'Hollander. Partitioning and labeling of loops by unimodular transformations. IEEE Transactions on Parallel and Distributed Systems, 3(4):465--476, 1992.
7. S. Lundstrom and G. Barnes. A controllable MIMD architecture. In Proceedings of the 1980 International Conference on Parallel Processing, St. Charles, IL, August 1980.
8. C. Polychronopoulos. Loop coalescing: A compiler transformation for parallel machines. In Proceedings of the 1987 International Conference on Parallel Processing, pages 235--242, August 1987.
9. J. Foley, A. van Dam, S. Feiner, and J. Hughes. Computer Graphics: Principles and Practice. Addison-Wesley, 2nd edition in C, 1990.
10. P. Anninos. Computational cosmology: From the early universe to the large scale structure. Living Reviews in Relativity, 4, 2001.
11. M. O'Boyle and G. A. Hedayat. Load balancing of parallel affine loops by unimodular transformations. Technical Report UMCS-92-1-1, Department of Computer Science, University of Manchester, January 1992.
12. R. Blumofe, C. Joerg, B. Kuszmaul, C. Leiserson, K. Randall, and Y. Zhou. Cilk: An efficient multithreaded runtime system. In Proceedings of the 5th Symposium on Principles and Practice of Parallel Programming, 1995.
13. H. Saito, N. Stavrakos, and C. D. Polychronopoulos. Multithreading runtime support for loop and functional parallelism. In Proceedings of the Second International Symposium on High Performance Computing, pages 133--144, 1999.
14. J. B. J. Fourier. Solution d'une question particulière du calcul des inégalités. In Oeuvres II, pages 317--328. 1826.
15. L. L. Dines. Systems of linear inequalities. Annals of Mathematics, 20:191--199, 1919.
16. L. L. Dines and N. H. McCoy. On linear inequalities. Transactions of the Royal Society of Canada, 27:217--232, 1933.
17. T. S. Motzkin. Beiträge zur Theorie der linearen Ungleichungen. PhD thesis, University of Basel, 1936.
18. H. W. Kuhn. Solvability and consistency for linear equations and inequalities. American Mathematical Monthly, 63:217--232, 1956.
19. S. N. Chernikov. The solution of linear programming problems by elimination of unknowns. Doklady Akademii Nauk SSSR, 139:1314--1317, 1961.
20. G. Dantzig. Linear Programming and Extensions. Princeton University Press, Princeton, NJ, 1963.
21. G. B. Dantzig and B. C. Eaves. Fourier-Motzkin elimination and its dual. Journal of Combinatorial Theory (A), 14(3):288--297, 1973.
22. R. J. Duffin. On Fourier's analysis of linear inequality systems. In Mathematical Programming Study 1, pages 71--95. North-Holland, 1974.
23. H. P. Williams. Fourier-Motzkin elimination extension to integer programming problems. Journal of Combinatorial Theory (A), 21(1):118--123, 1976.
24. U. Banerjee. Loop Transformations for Restructuring Compilers. Kluwer Academic Publishers, Boston, MA, 1993.
25. A. Bik and H. Wijshoff. Implementation of Fourier-Motzkin elimination. Technical Report TR94-42, Department of Computer Science, University of Leiden, The Netherlands, 1994.
26. W. Pugh. A practical algorithm for exact array dependence analysis. Communications of the ACM, 35(8):102--114, August 1992.
27. LEDAS Geometric Solver. http://lgs.ledas.com/features.php.
28. D. Kuck, A. H. Sameh, R. Cytron, A. Veidenbaum, C. D. Polychronopoulos, G. Lee, T. McDaniel, B. R. Leasure, C. Beckman, J. R. B. Davies, and C. P. Kruskal. The effects of program restructuring, algorithm change and architecture choice on program performance. In Proceedings of the 1984 International Conference on Parallel Processing, pages 129--138, August 1984.
29. M. J. Wolfe. Optimizing Supercompilers for Supercomputers. The MIT Press, Cambridge, MA, 1989.
30. D. A. Padua and M. J. Wolfe. Advanced compiler optimizations for supercomputers. Communications of the ACM, 29(12):1184--1201, December 1986.
31. F. Irigoin and R. Triolet. Supernode partitioning. In Proceedings of the Fifteenth Annual ACM Symposium on Principles of Programming Languages, San Diego, CA, January 1988.
32. C. P. Kruskal and A. Weiss. Allocating independent subtasks on parallel processors. IEEE Transactions on Software Engineering, 11(10):1001--1016, 1985.
33. M. Haghighat and C. Polychronopoulos. Symbolic program analysis and optimization for parallelizing compilers. In Proceedings of the Fifth Workshop on Languages and Compilers for Parallel Computing, New Haven, CT, August 1992.
34. M. J. Wolfe and C. W. Tseng. The Power Test for data dependence. IEEE Transactions on Parallel and Distributed Systems, 3(5):591--601, September 1992.
35. U. Banerjee. Dependence Analysis for Supercomputing. Kluwer Academic Publishers, Boston, MA, 1988.
36. C. Ancourt and F. Irigoin. Scanning polyhedra with DO loops. In Proceedings of the Third ACM SIGPLAN Symposium on Principles and Practice of Parallel Programming, pages 39--50, Williamsburg, VA, April 1991.
37. R. Triolet. Interprocedural analysis for program restructuring with Parafrase. CSRD Report No. 538, Department of Computer Science, University of Illinois at Urbana-Champaign, December 1985.
38. D. Maydan, J. Hennessy, and M. Lam. Efficient and exact data dependence analysis. In Proceedings of the SIGPLAN '91 Conference on Programming Language Design and Implementation, Toronto, Canada, June 1991.
39. F. Irigoin, P. Jouvelot, and R. Triolet. Semantical interprocedural parallelization: An overview of the PIPS project. In Proceedings of the 1991 ACM International Conference on Supercomputing, Cologne, Germany, June 1991.
40. W. Pugh. Counting solutions to Presburger formulas: How and why. ACM SIGPLAN Notices, 29(6):121--134, 1994.
Index Terms
- A general approach for partitioning N-dimensional parallel nested loops with conditionals