
Natural Gradient and Multiclass NLDA Networks

  • Conference paper
  • In: Artificial Neural Networks — ICANN 2002 (ICANN 2002)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2415)

Abstract

Natural gradient has recently been introduced as a method to improve the convergence of Multilayer Perceptron (MLP) training [1], as well as that of other neural network type algorithms. The key idea is to recast the training process as a problem in quasi maximum log-likelihood estimation of a certain semiparametric probabilistic model. This allows the natural introduction of a Riemannian metric tensor G in the probabilistic model space. Once G is computed, the “natural” gradient in this setting is \( c G\left( W \right)^{ - 1} \nabla _W e\left( {X,y;W} \right) \), rather than the ordinary Euclidean gradient \( \nabla _W e\left( {X,y;W} \right) \). Here e(X, y; W) denotes an error function associated to a concrete pattern (X, y) and weight set W. For instance, in MLP training, \( e(X,y;W) = (y - F(X,W))^2/2 \), with F the MLP transfer function. Viewing \( (y - F(X,W))^2/2 \) as the negative log-likelihood of a probability density, the metric tensor is

$$ G\left( W \right) = \int\!\!\int \frac{\partial \log p}{\partial W}\left( \frac{\partial \log p}{\partial W} \right)^t p\left( X,y;W \right)\,dX\,dy. \tag{1} $$

G(W) is also known as the Fisher information matrix; its inverse gives the Cramér-Rao lower bound on the variance of an optimal estimator of W. In this work we shall consider a natural gradient-like training for Non Linear Discriminant Analysis (NLDA) networks, a non-linear extension of Fisher's well-known Linear Discriminant Analysis introduced in [6] (more details below). Instead of following an approach along the previous lines, we observe that (1) can be viewed as the covariance \( G\left( W \right) = E\left[ {\nabla _W e\left( {X,y;W} \right)\nabla _W e\left( {X,y;W} \right)^t } \right] \) of the random vector \( \nabla _W e\left( {X,y;W} \right) \).
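The covariance view of G(W) suggests a simple recipe: average the outer products of the per-pattern gradients to estimate G, then precondition the ordinary gradient with its inverse. A minimal NumPy sketch on a toy logistic model illustrates this; the model, data, step size `eta`, and ridge term `reg` are illustrative assumptions, not the paper's NLDA setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary problem (illustrative stand-in for an actual network).
n, d = 400, 2
X = rng.normal(size=(n, d))
w_true = np.array([2.0, -1.0])
y = (rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-(X @ w_true)))).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

w = np.zeros(d)
eta, reg = 0.5, 1e-3   # step size and ridge term (assumed values)

for _ in range(100):
    # Per-pattern gradients of the negative log-likelihood e(X, y; W).
    g = (sigmoid(X @ w) - y)[:, None] * X
    grad = g.mean(axis=0)                  # ordinary (Euclidean) gradient
    # G(W) estimated as the second moment E[grad grad^t] of the
    # per-pattern gradients, regularized to keep it invertible.
    G = (g.T @ g) / n + reg * np.eye(d)
    # Natural-gradient step: G(W)^{-1} times the ordinary gradient.
    w -= eta * np.linalg.solve(G, grad)
```

Because the estimated G approximates the curvature of the log-likelihood near the optimum, the preconditioned step behaves like a damped Newton update and converges in far fewer iterations than plain gradient descent on the same problem.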

With partial support of grants TIC 01-572 and CAM 02-18.


References

  1. H. Park, S. Amari, K. Fukumizu, “Adaptive Natural Gradient Learning Algorithms for Various Stochastic Models”, Neural Networks 13 (2000), 755–764.

  2. P.A. Devijver, J. Kittler, Pattern Recognition: A Statistical Approach, Prentice Hall, 1982.

  3. J.R. Dorronsoro, A. González, C. Santa Cruz, “Natural gradient learning in NLDA networks”, Lecture Notes in Computer Science 2084, Springer-Verlag, 2001, 427–434.

  4. K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, 1972.

  5. B.D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, 1996.

  6. C. Santa Cruz, J.R. Dorronsoro, “A nonlinear discriminant algorithm for feature extraction and data classification”, IEEE Transactions on Neural Networks 9 (1998), 1370–1376.


Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dorronsoro, J.R., González, A. (2002). Natural Gradient and Multiclass NLDA Networks. In: Dorronsoro, J.R. (eds) Artificial Neural Networks — ICANN 2002. ICANN 2002. Lecture Notes in Computer Science, vol 2415. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-46084-5_110


  • DOI: https://doi.org/10.1007/3-540-46084-5_110

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-44074-1

  • Online ISBN: 978-3-540-46084-8

  • eBook Packages: Springer Book Archive
