Adaptive Natural Gradient Learning Algorithms for Unnormalized Statistical Models

  • Conference paper in: Artificial Neural Networks and Machine Learning – ICANN 2016 (ICANN 2016)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9886)

Included in the conference series: International Conference on Artificial Neural Networks (ICANN)

Abstract

The natural gradient is a powerful method for improving the transient dynamics of learning by exploiting the geometric structure of the parameter space. Many natural gradient methods have been developed for maximum likelihood learning, which is based on the Kullback-Leibler (KL) divergence and its Fisher metric. However, these methods require computing the normalization constant and are therefore not applicable to statistical models whose normalization constant is analytically intractable. In this study, we extend the natural gradient framework to divergences for unnormalized statistical models: score matching and ratio matching. In addition, we derive novel adaptive natural gradient algorithms that avoid the computationally demanding inversion of the metric, and we demonstrate their effectiveness in numerical experiments. In particular, results for a multi-layer neural network model show that the proposed method escapes from plateaus much faster than conventional stochastic gradient descent.
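To make the abstract's setting concrete, the following sketch (our notation, not taken from the paper itself) spells out the two ingredients. For an unnormalized model q(x; θ), i.e. p(x; θ) = q(x; θ)/Z(θ) with Z intractable, Hyvärinen's score matching objective involves only derivatives of log q with respect to the data x, so Z(θ) drops out entirely; natural gradient descent on such an objective then preconditions the ordinary gradient with the inverse of a Riemannian metric G(θ):

\[
J(\theta) = \mathbb{E}_{x \sim p_{\mathrm{data}}} \sum_{i=1}^{d} \left[ \frac{\partial \psi_i(x;\theta)}{\partial x_i} + \frac{1}{2}\,\psi_i(x;\theta)^2 \right],
\qquad
\psi_i(x;\theta) = \frac{\partial \log q(x;\theta)}{\partial x_i},
\]
\[
\theta_{t+1} = \theta_t - \eta_t\, G(\theta_t)^{-1} \nabla_\theta J(\theta_t).
\]

The "adaptive" part of the title refers to avoiding an explicit inversion of G. One standard device, used in the adaptive natural gradient of Amari, Park, and Fukumizu and plausibly what is meant here (this is an assumption, not the paper's verbatim algorithm), is to maintain a running rank-one estimate of the metric and update its inverse in closed form via the Sherman-Morrison identity. A minimal NumPy sketch, in which grad_fn, eps, and lr are all illustrative names rather than anything from the paper:

    import numpy as np

    def sherman_morrison_update(G_inv, grad, eps):
        # Closed-form inverse of the rank-one metric update
        #   G_new = (1 - eps) * G + eps * grad grad^T
        # via the Sherman-Morrison identity: no explicit matrix inversion.
        v = G_inv @ grad
        denom = (1.0 - eps) + eps * float(grad @ v)
        return (G_inv - (eps / denom) * np.outer(v, v)) / (1.0 - eps)

    def adaptive_natural_gradient(grad_fn, theta0, lr=1e-2, eps=1e-2, n_steps=1000):
        # Generic adaptive natural-gradient loop. grad_fn(theta) returns the
        # (stochastic) gradient of the chosen loss -- e.g. a score matching
        # objective -- at theta. Names and defaults are illustrative only.
        theta = np.array(theta0, dtype=float)
        G_inv = np.eye(theta.size)              # start from the identity metric
        for _ in range(n_steps):
            g = grad_fn(theta)
            G_inv = sherman_morrison_update(G_inv, g, eps)
            theta = theta - lr * (G_inv @ g)    # preconditioned update
        return theta

    # Example: a badly conditioned quadratic, where preconditioning helps.
    A = np.diag([1.0, 100.0])
    theta_min = adaptive_natural_gradient(lambda th: A @ th, [1.0, 1.0])

Each step of this recursion costs O(n^2) in the number of parameters n, rather than the O(n^3) of a direct inversion, which is the sense in which the computationally demanding inversion of the metric can be avoided.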

Acknowledgments

This work was supported by a Grant-in-Aid for JSPS Fellows (No. 14J08282) from the Japan Society for the Promotion of Science (JSPS).

Author information

Corresponding author: Ryo Karakida.

Copyright information

© 2016 Springer International Publishing Switzerland

About this paper

Cite this paper

Karakida, R., Okada, M., Amari, S.-I. (2016). Adaptive Natural Gradient Learning Algorithms for Unnormalized Statistical Models. In: Villa, A., Masulli, P., Pons Rivero, A. (eds) Artificial Neural Networks and Machine Learning – ICANN 2016. ICANN 2016. Lecture Notes in Computer Science, vol 9886. Springer, Cham. https://doi.org/10.1007/978-3-319-44778-0_50

  • DOI: https://doi.org/10.1007/978-3-319-44778-0_50

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-44777-3

  • Online ISBN: 978-3-319-44778-0

  • eBook Packages: Computer Science, Computer Science (R0)
