
Improving neural network training based on Jacobian rank deficiency

  • Poster Presentations 1
  • Conference paper
Artificial Neural Networks — ICANN 96 (ICANN 1996)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1112))

Abstract

Analysis and experimental results obtained in [1] have revealed that many network training problems are ill-conditioned and may not be solved efficiently by the Gauss-Newton method. The Levenberg-Marquardt algorithm has been used successfully to solve nonlinear least squares problems, but only for problems of moderate size, because of its significant per-iteration computation and memory requirements. In the present paper we develop a new algorithm, a modified Gauss-Newton method, which on one hand exploits the rank deficiency of the Jacobian to reduce computation and memory requirements, and on the other hand retains features of the Levenberg-Marquardt algorithm, giving better convergence properties than first-order methods.
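
The sketch below is a minimal NumPy illustration of the general idea only, not the authors' algorithm: a standard Levenberg-Marquardt step solves an n x n damped normal-equation system, while a step restricted to the numerical row space of a rank-deficient Jacobian works in a k-dimensional subspace, where k is the numerical rank. The function names, the damping parameter mu, and the rank tolerance tol are illustrative assumptions.

```python
# Hypothetical sketch: a full Levenberg-Marquardt update versus a reduced-rank
# Gauss-Newton update that exploits Jacobian rank deficiency. This is not the
# algorithm developed in the paper, only an illustration of the idea.
import numpy as np

def lm_step(J, r, mu):
    """Levenberg-Marquardt step: solve (J^T J + mu*I) dw = -J^T r (n x n system)."""
    n = J.shape[1]
    return np.linalg.solve(J.T @ J + mu * np.eye(n), -(J.T @ r))

def reduced_rank_gn_step(J, r, mu, tol=1e-8):
    """Damped Gauss-Newton step confined to the numerical row space of J.

    Singular values below tol * sigma_max are discarded, so the update is
    computed in a k-dimensional subspace (k = numerical rank of J) instead
    of the full n-dimensional parameter space.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    k = int(np.sum(s > tol * s[0]))               # numerical rank of J
    Uk, sk, Vk = U[:, :k], s[:k], Vt[:k, :].T
    coeff = (sk / (sk**2 + mu)) * (Uk.T @ (-r))   # damped pseudo-inverse weights
    return Vk @ coeff

# Toy example with a deliberately rank-deficient Jacobian (rank 3, 20 parameters).
rng = np.random.default_rng(0)
J = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 20))
r = rng.standard_normal(50)
print(lm_step(J, r, mu=1e-2))
print(reduced_rank_gn_step(J, r, mu=1e-2))
```

With all singular values kept, the reduced-rank step coincides with the Levenberg-Marquardt step, since (J^T J + mu*I)^{-1} J^T = V diag(s_i / (s_i^2 + mu)) U^T; discarding the small singular values is what cuts the per-iteration computation and memory when J is rank deficient.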


References

  1. S. Saarinen, R.B. Bramley, and G. Cybenko, “The Numerical Solution of Neural Network Training Problems”, CRSD Report No. 1089, Center for Supercomputing Research and Development, University of Illinois, Urbana, 1991.

  2. J.E. Dennis and R.B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, Prentice Hall, Englewood Cliffs, NJ, 1983.

  3. M.T. Hagan et al., “Training Feedforward Networks with the Marquardt Algorithm”, IEEE Trans. on Neural Networks, vol. 5, no. 6, 1994, pp. 989–993.

  4. R. Battiti, “First- and Second-Order Methods for Learning: Between Steepest Descent and Newton's Method”, Neural Computation, vol. 4, 1992, pp. 141–166.

  5. J.J. Dongarra, C.B. Moler, J.R. Bunch, and G.W. Stewart, LINPACK Users' Guide, SIAM, Philadelphia, 1979.

  6. K.S. Narendra and K. Parthasarathy, “Identification and Control of Dynamical Systems Using Neural Networks”, IEEE Trans. on Neural Networks, vol. 1, no. 1, 1990, pp. 4–27.

  7. W.L. Luyben, Process Modeling, Simulation and Control for Chemical Engineers, McGraw-Hill, 1990.


Editor information

Christoph von der Malsburg, Werner von Seelen, Jan C. Vorbrüggen, Bernhard Sendhoff


Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Zhou, G., Si, J. (1996). Improving neural network training based on Jacobian rank deficiency. In: von der Malsburg, C., von Seelen, W., Vorbrüggen, J.C., Sendhoff, B. (eds) Artificial Neural Networks — ICANN 96. ICANN 1996. Lecture Notes in Computer Science, vol 1112. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61510-5_91


  • DOI: https://doi.org/10.1007/3-540-61510-5_91

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61510-1

  • Online ISBN: 978-3-540-68684-2

  • eBook Packages: Springer Book Archive
