ABSTRACT
Automatic differentiation (AD) is a family of techniques for generating derivative code from a mathematical model expressed in a programming language. AD computes a partial derivative for each operation in the input code and combines these partial derivatives via the chain rule to produce the desired derivative. Activity analysis is a compiler analysis that identifies the active variables in automatic differentiation; by eliminating the computation of partial derivatives for passive variables, it can reduce both the memory requirement and the run time of the generated derivative code. This paper compares a new context-sensitive flow-insensitive (CSFI) activity analysis with an existing context-insensitive flow-sensitive (CIFS) activity analysis in terms of execution time and quality of the analysis results. Our experiments with eight benchmarks show that the new CSFI activity analysis runs up to 583 times faster and overestimates up to 18.5 times fewer active variables than the existing CIFS activity analysis does.
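To make the active/passive distinction concrete, here is a minimal sketch (not from the paper; all names are illustrative) of forward-mode AD with dual numbers in Python. Each operation applies the chain rule to a (value, derivative) pair; the argument `c` below is passive, since it does not depend on the independent variable, so an activity analysis could skip allocating and propagating a derivative for it:

```python
# Minimal forward-mode AD via dual numbers: each active value carries
# (value, derivative); the chain rule is applied at every operation.
class Dual:
    def __init__(self, val, dot=0.0):
        self.val = val   # primal value
        self.dot = dot   # derivative w.r.t. the chosen independent variable

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # sum rule: d(u + v) = u' + v'
        return Dual(self.val + other.val, self.dot + other.dot)

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: d(uv) = u'v + uv'
        return Dual(self.val * other.val,
                    self.dot * other.val + self.val * other.dot)

    # reflected operands so plain floats combine with Duals
    __radd__ = __add__
    __rmul__ = __mul__


def f(x, c):
    # c is passive: its derivative is identically zero, so a tool with
    # activity analysis would generate no derivative arithmetic for it.
    return x * x + c * x


x = Dual(3.0, 1.0)   # independent (active) variable, dx/dx = 1
y = f(x, 2.0)        # f(x) = x^2 + 2x, so f'(x) = 2x + 2
print(y.val, y.dot)  # 15.0 8.0
```

In this sketch the passive `c` still gets wrapped in a zero-derivative `Dual` at each operation; a source-transformation AD tool with activity analysis would instead emit plain floating-point code for such variables, which is exactly the memory and run-time saving the abstract refers to.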
Index Terms
- Comparison of two activity analyses for automatic differentiation: context-sensitive flow-insensitive vs. context-insensitive flow-sensitive