Abstract
This paper is concerned with automatic differentiation methods for computing the reduced gradient MᵀG and the reduced Hessian matrix MᵀHM. Here G is the gradient and H is the Hessian matrix of a real-valued function F of n variables, and M is a matrix with n rows and k columns, where k ≤ n. The reduced quantities are of particular interest in constrained optimization with objective function F. Two automatic differentiation methods are described: a standard method that produces G and H as intermediate results, and an economical method that takes a shortcut directly to the reduced quantities. The two methods are compared on the basis of the required computing time and storage. It is shown that the cost of the economical method is less than (k²+3k+2)/(n²+3n+2) times the expense of the standard method.
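The shortcut behind the economical method can be illustrated with a small forward-mode sketch (this is an illustration under assumed conventions, not the paper's implementation; all names below are invented). Each input variable xᵢ is seeded with row i of M as its vector of first-order directional derivatives, and first- and second-order directional derivatives are then propagated through every elementary operation. Evaluating F on such objects yields F(x), MᵀG, and MᵀHM directly, without ever forming the full n-vector G or the full n×n matrix H:

```python
import numpy as np

class Taylor2:
    """Value plus first- and second-order directional derivatives along
    k directions (the columns of M).  Propagating these through F gives
    M^T G and M^T H M without forming G or H."""

    def __init__(self, v, g, h):
        self.v = v   # function value
        self.g = g   # length-k vector of first directional derivatives
        self.h = h   # k-by-k matrix of second directional derivatives

    def __add__(self, other):
        other = _lift(other, len(self.g))
        return Taylor2(self.v + other.v, self.g + other.g, self.h + other.h)
    __radd__ = __add__

    def __mul__(self, other):
        # Product rule: d2(uw) = u d2w + w d2u + du (x) dw + dw (x) du
        other = _lift(other, len(self.g))
        return Taylor2(self.v * other.v,
                       self.v * other.g + other.v * self.g,
                       self.v * other.h + other.v * self.h
                       + np.outer(self.g, other.g)
                       + np.outer(other.g, self.g))
    __rmul__ = __mul__

def _lift(x, k):
    """Embed a constant as a Taylor2 object with zero derivatives."""
    if isinstance(x, Taylor2):
        return x
    return Taylor2(float(x), np.zeros(k), np.zeros((k, k)))

def reduced_derivatives(F, x, M):
    """Return F(x), M^T G, and M^T H M for an n-by-k matrix M.
    Input i is seeded with row i of M as its directional gradient."""
    n, k = M.shape
    xs = [Taylor2(float(x[i]), M[i, :].astype(float).copy(),
                  np.zeros((k, k))) for i in range(n)]
    y = F(xs)
    return y.v, y.g, y.h
```

Each elementary operation now costs O(k²) arithmetic for the second-order part instead of the O(n²) a full Hessian recurrence would need, which is the source of the (k²+3k+2)/(n²+3n+2) bound. Taking M as the n×n identity recovers the standard quantities G and H.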
Fischer, H. Automatic differentiation: Reduced gradient and reduced Hessian matrix. Comput Optim Applic 1, 327–344 (1992). https://doi.org/10.1007/BF00249641