
Automatic differentiation: Reduced gradient and reduced Hessian matrix

Published in: Computational Optimization and Applications

Abstract

This paper is concerned with automatic differentiation methods for computing the reduced gradient $M^t G$ and the reduced Hessian matrix $M^t H M$. Here $G$ is the gradient and $H$ the Hessian matrix of a real function $F$ of $n$ variables, and $M$ is a matrix with $n$ rows and $k$ columns, where $k \le n$. The reduced quantities are of particular interest in constrained optimization with objective function $F$. Two automatic differentiation methods are described: a standard method that produces $G$ and $H$ as intermediate results, and an economical method that takes a shortcut directly to the reduced quantities. The two methods are compared on the basis of the required computing time and storage. It is shown that the cost of the economical method is less than $(k^2+3k+2)/(n^2+3n+2)$ times the expense of the standard method.
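To make the distinction concrete, here is a minimal sketch in JAX (a modern AD framework, not the paper's 1992 implementation); the objective F, the point x, and the matrix M below are illustrative assumptions. The standard route materializes the full gradient G and Hessian H and then projects; the economical route instead restricts F to the k-dimensional subspace spanned by the columns of M before differentiating, so only k-dimensional derivative objects are ever formed.

```python
# Hedged sketch contrasting the two routes from the abstract.
# F, n, k, x, and M are illustrative, not taken from the paper.
import jax
import jax.numpy as jnp

def F(x):
    # example objective: any smooth real function of n variables works here
    return jnp.sum(x**4) + jnp.dot(x, x)

n, k = 5, 2
x = jnp.linspace(0.1, 0.5, n)   # evaluation point in R^n
M = jnp.eye(n)[:, :k]           # n-by-k matrix with k <= n

# Standard method: form the full G (n entries) and H (n*n entries), then project.
G = jax.grad(F)(x)
H = jax.hessian(F)(x)
red_grad_std = M.T @ G          # reduced gradient  M^t G    (k entries)
red_hess_std = M.T @ H @ M      # reduced Hessian   M^t H M  (k*k entries)

# Economical method: differentiate the restriction phi(y) = F(x + M y).
# By the chain rule, grad phi at y = 0 is M^t G and its Hessian is M^t H M,
# so the full G and H are never materialized.
def phi(y):
    return F(x + M @ y)

y0 = jnp.zeros(k)
red_grad_eco = jax.grad(phi)(y0)
red_hess_eco = jax.hessian(phi)(y0)

assert jnp.allclose(red_grad_std, red_grad_eco)
assert jnp.allclose(red_hess_std, red_hess_eco)
```

The composition trick is why the economical method scales with $k$ rather than $n$: every intermediate derivative object lives in $\mathbb{R}^k$, which is consistent with the abstract's cost ratio of $(k^2+3k+2)/(n^2+3n+2)$, roughly $k^2/n^2$ for the dominant Hessian term.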




Cite this article

Fischer, H. Automatic differentiation: Reduced gradient and reduced Hessian matrix. Comput Optim Applic 1, 327–344 (1992). https://doi.org/10.1007/BF00249641
