
Theoretical Analysis of Function of Derivative Term in On-Line Gradient Descent Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7553)

Abstract

In on-line gradient descent learning, the local property of the derivative term of the output function can slow convergence. Improving the derivative term, for example by using the natural gradient, has been proposed to speed up convergence. Besides such sophisticated methods, a "simple method" that replaces the derivative term with a constant has been proposed and empirically shown to greatly increase convergence speed. Although this phenomenon has been analyzed empirically, theoretical analysis is required to establish its generality. In this paper, we theoretically analyze the effect of using the simple method. Our results show that, with the simple method, the generalization error decreases faster than with the true gradient descent method when the learning step is smaller than the optimum value η_opt. When the learning step is larger than η_opt, the error decreases more slowly with the simple method, and the residual error is larger than with the true gradient descent method. Moreover, when there is output noise, η_opt is no longer optimum; thus, the simple method is not robust in noisy circumstances.
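The abstract contrasts on-line gradient descent, whose weight update carries the local derivative term of the output function, with the "simple method", which replaces that derivative by a constant. The following Python sketch is a minimal illustration of the two update rules only, not the authors' code: it assumes a teacher–student perceptron with tanh output and Gaussian inputs (a standard setting in this literature), uses the constant 1 in place of the derivative, and picks the input dimension, learning step, and step counts purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100                                  # input dimension (illustrative)
eta = 0.1                                # learning step; eta_opt depends on the setup
B = rng.standard_normal(N)               # teacher weights
w_true = np.zeros(N)                     # student trained with the true gradient
w_simple = np.zeros(N)                   # student trained with the simple method

g = np.tanh                              # sigmoidal output function

def g_prime(u):
    """Derivative of tanh."""
    return 1.0 - np.tanh(u) ** 2

for step in range(10_000):
    x = rng.standard_normal(N) / np.sqrt(N)   # fresh example each step (on-line)
    t = g(B @ x)                              # teacher output

    # True gradient descent: the update keeps the local derivative term,
    # which is nearly zero wherever the unit saturates.
    u = w_true @ x
    w_true += eta * (t - g(u)) * g_prime(u) * x

    # Simple method: the derivative term is replaced by a constant (here 1),
    # so the update size does not collapse in the saturated regions.
    v = w_simple @ x
    w_simple += eta * (t - g(v)) * 1.0 * x

    if step % 2000 == 0:
        xs = rng.standard_normal((5000, N)) / np.sqrt(N)   # held-out test inputs
        ts = g(xs @ B)
        eg_true = np.mean((ts - g(xs @ w_true)) ** 2) / 2
        eg_simple = np.mean((ts - g(xs @ w_simple)) ** 2) / 2
        print(f"step {step}: eg_true={eg_true:.4f}  eg_simple={eg_simple:.4f}")
```

The sketch only demonstrates the two update rules; the paper's claims are analytical. According to the abstract, the generalization error should fall faster under the simple method for learning steps below η_opt and more slowly (with a larger residual error) above it, and output noise moves η_opt away from the optimum.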





Copyright information

© 2012 Springer-Verlag Berlin Heidelberg


Cite this paper

Hara, K., Katahira, K., Okanoya, K., Okada, M. (2012). Theoretical Analysis of Function of Derivative Term in On-Line Gradient Descent Learning. In: Villa, A.E.P., Duch, W., Érdi, P., Masulli, F., Palm, G. (eds) Artificial Neural Networks and Machine Learning – ICANN 2012. ICANN 2012. Lecture Notes in Computer Science, vol 7553. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-33266-1_2


  • DOI: https://doi.org/10.1007/978-3-642-33266-1_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-33265-4

  • Online ISBN: 978-3-642-33266-1

