Definition
Autoparallelization is the translation of a sequential program by a compiler into a parallel form that outputs the same values as the original program. Some authors restrict the term to translation for multiprocessors; the definition used here is more general and includes translation for instruction-level, vector, or any other form of parallelism.
Discussion
Introduction
The compilers of most parallel machines are autoparallelizers, and this has been the case since the earliest parallel supercomputers, the Illiac IV and the TI ASC, which were designed in the 1960s and deployed in the early 1970s. Today, there are autoparallelizers for vector processors, VLIW processors, microprocessor vector extensions, and multiprocessors.
Autoparallelization is for productivity. Autoparallelizers, when they succeed, enable the programming of parallel machines with conventional languages such as Fortran or C. In this programming paradigm, code is not complicated by parallel constructs...
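As a concrete illustration of this paradigm, the loop below is written in plain sequential C with no parallel constructs. Because each iteration writes a distinct element of `y` and carries no dependence on other iterations, an autoparallelizing compiler (for instance, GCC invoked with flags such as `-O2 -ftree-vectorize` or `-ftree-parallelize-loops`) may translate it into vector or multithreaded code while producing the same values as the sequential version. The function name `daxpy` and the build flags are illustrative choices, not taken from the entry itself.

```c
#include <stddef.h>

/* Plain sequential loop: y = a*x + y.
 * Each iteration reads x[i] and y[i] and writes only y[i], so there is
 * no loop-carried dependence. Dependence analysis lets an
 * autoparallelizer run the iterations in parallel (or as vector
 * operations) without changing the values the program outputs. */
void daxpy(size_t n, double a, const double *x, double *y) {
    for (size_t i = 0; i < n; i++)
        y[i] = a * x[i] + y[i];
}
```

The programmer writes only the sequential loop; exploiting parallelism is left entirely to the compiler, which is exactly the productivity argument made above.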
Copyright information
© 2011 Springer Science+Business Media, LLC
Cite this entry
Padua, D. (2011). Parallelization, Automatic. In: Padua, D. (eds) Encyclopedia of Parallel Computing. Springer, Boston, MA. https://doi.org/10.1007/978-0-387-09766-4_197
DOI: https://doi.org/10.1007/978-0-387-09766-4_197
Publisher Name: Springer, Boston, MA
Print ISBN: 978-0-387-09765-7
Online ISBN: 978-0-387-09766-4
eBook Packages: Computer Science, Reference Module Computer Science and Engineering