
Practical Consideration on Generalization Property of Natural Gradient Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2084)

Abstract

Natural gradient learning is known to resolve the plateau problem, which is the main cause of the slow learning speed of neural networks. Adaptive natural gradient learning, an adaptive method of realizing natural gradient learning for neural networks, has also been developed, and its practical advantage has been confirmed. In this paper, we consider the generalization property of the natural gradient method. Theoretically, the standard gradient method and the natural gradient method have the same minima on the error surface, so their generalization performance should also be the same. In practice, however, the natural gradient method can reach a smaller training error in cases where the standard method stalls on a plateau. In such cases, the solutions actually obtained differ from each other, and so do their generalization performances. Since these situations arise very often in practical problems, it is necessary to compare the generalization property of natural gradient learning with that of the standard method. In this paper, we show a case in which the practical generalization performance of natural gradient learning is poorer than that of the standard gradient method, and we attempt to solve this problem by including a regularization term in natural gradient learning.
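The combination described in the abstract can be illustrated with a small sketch (not the paper's exact algorithm, which uses an adaptive online estimate of the inverse Fisher matrix for multilayer perceptrons). Here we use Gaussian linear regression, where the Fisher information matrix has the closed form X^T X / n, and add an L2 (weight-decay) regularization term to the gradient before preconditioning; the data, step size, and regularization strength are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: natural gradient descent with an L2 regularization
# term, on Gaussian linear regression y = X w + noise. For this model the
# Fisher information matrix is proportional to X^T X / n.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=200)

F = X.T @ X / len(y)  # Fisher information matrix for this model

def natural_gradient_step(w, lr=0.5, lam=1e-3):
    # Gradient of the regularized squared error: data term + weight decay.
    grad = X.T @ (X @ w - y) / len(y) + lam * w
    # Natural gradient update: precondition the gradient by F^{-1}.
    return w - lr * np.linalg.solve(F, grad)

w_est = np.zeros(3)
for _ in range(50):
    w_est = natural_gradient_step(w_est)
```

With lam = 0 and lr = 1 a single step lands exactly on the least-squares solution, which is why natural gradient descent does not stall on plateaus here; the small lam biases the fixed point toward the ridge solution, mirroring the regularization proposed in the paper.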





Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Park, H. (2001). Practical Consideration on Generalization Property of Natural Gradient Learning. In: Mira, J., Prieto, A. (eds) Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. IWANN 2001. Lecture Notes in Computer Science, vol 2084. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45720-8_47


  • DOI: https://doi.org/10.1007/3-540-45720-8_47


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42235-8

  • Online ISBN: 978-3-540-45720-6

