Abstract
Natural gradient has recently been introduced as a method to improve the convergence of Multilayer Perceptron (MLP) training [1], as well as that of other neural network type algorithms. The key idea is to recast the training process as a problem in quasi maximum log-likelihood estimation of a certain semiparametric probabilistic model. This allows the natural introduction of a Riemannian metric tensor G in the probabilistic model space. Once G is computed, the "natural" gradient in this setting is \( c\,G(W)^{-1} \nabla_W e(X,y;W) \), rather than the ordinary Euclidean gradient \( \nabla_W e(X,y;W) \). Here \( e(X,y;W) \) denotes an error function associated with a concrete pattern (X, y) and weight set W. For instance, in MLP training, \( e(X,y;W) = (y - F(X,W))^2/2 \), with F the MLP transfer function. Viewing \( (y - F(X,W))^2/2 \) as the log-likelihood of a probability density, the metric tensor is
\( G(W) = E\left[ \nabla_W e(X,y;W)\, \nabla_W e(X,y;W)^t \right]. \)  (1)
G(W) is also known as the Fisher Information matrix, as its inverse gives the Cramér-Rao lower bound on the variance of the optimal W estimator. In this work we shall consider a natural gradient-like training for Non Linear Discriminant Analysis (NLDA) networks, a non-linear extension of Fisher's well-known Linear Discriminant Analysis introduced in [6] (more details below). Instead of following an approach along the previous lines, we observe that (1) can be viewed as the covariance of the random vector \( \nabla_W e(X,y;W) \).
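To make the construction concrete, the following minimal sketch (not from the paper; a toy linear model stands in for the MLP transfer function F, and the data, learning rate and damping term are illustrative assumptions) estimates G(W) as the empirical second moment of the per-pattern gradients, as in (1), and then follows the natural direction \( G(W)^{-1} \nabla_W e \):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data; the linear "network" below is a deliberately
# simple stand-in for the MLP transfer function F(X, W) of the abstract.
N, d = 200, 5
X = rng.normal(size=(N, d))
w_true = rng.normal(size=d)
y = X @ w_true + 0.1 * rng.normal(size=N)

def model(X, W):
    return X @ W

def per_pattern_grads(X, y, W):
    """Rows are grad_W e(x, y; W) with e = (y - F(x, W))^2 / 2,
    i.e. grad_W e = -(y - F(x, W)) x for the linear stand-in."""
    r = y - model(X, W)                  # residuals, shape (N,)
    return -r[:, None] * X               # one gradient per row, shape (N, d)

def natural_gradient_step(X, y, W, lr=0.1, damping=0.1):
    g = per_pattern_grads(X, y, W)
    G = g.T @ g / len(X)                 # empirical version of (1)
    grad = g.mean(axis=0)                # ordinary Euclidean gradient
    # Natural direction G(W)^{-1} grad; damping keeps G invertible.
    nat = np.linalg.solve(G + damping * np.eye(len(W)), grad)
    return W - lr * nat

W = np.zeros(d)
print("initial MSE:", np.mean((y - model(X, W)) ** 2))
for _ in range(200):
    W = natural_gradient_step(X, y, W)
print("final MSE:  ", np.mean((y - model(X, W)) ** 2))
```

Solving the damped linear system rather than explicitly inverting G(W) is the usual numerically safer choice, since the empirical G(W) can be nearly singular when the per-pattern gradients become small.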
With partial support from grants TIC 01-572 and CAM 02-18.
References
[1] H. Park, S. Amari, K. Fukumizu, "Adaptive Natural Gradient Learning Algorithms for Various Stochastic Models", Neural Networks 13 (2000), 755–764.
[2] P.A. Devijver, J. Kittler, Pattern Recognition: A Statistical Approach, Prentice Hall, 1982.
[3] J.R. Dorronsoro, A. González, C. Santa Cruz, "Natural gradient learning in NLDA networks", Lecture Notes in Computer Science 2084, Springer-Verlag, 2001, 427–434.
[4] K. Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, 1972.
[5] B.D. Ripley, Pattern Recognition and Neural Networks, Cambridge University Press, 1996.
[6] C. Santa Cruz, J.R. Dorronsoro, "A nonlinear discriminant algorithm for feature extraction and data classification", IEEE Transactions on Neural Networks 9 (1998), 1370–1376.