Abstract
Several neural network learning rules for linear Principal Component Analysis (PCA) have been shown to be closely related to classical PCA optimization criteria. These learning rules and the corresponding criteria are extended to versions containing nonlinear functions. It can be shown that the criteria and the learning rules solve the blind source separation (BSS) problem for the linear memoryless mixture model, based on the statistical independence of the source signals. This bottom-up approach to the BSS and Independent Component Analysis (ICA) problems allows the nonlinear functions to be chosen so that the learning rules not only produce independent components but also have other desirable properties, such as robustness, unlike the polynomial functions often obtained from cumulant expansions. Fast batch versions of the learning rules are also reviewed.
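The fast batch versions mentioned above can be illustrated with a minimal sketch in the spirit of the one-unit fixed-point iteration of Hyvärinen and Oja: after whitening the mixtures, a weight vector is updated as w ← E{z g(wᵀz)} − E{g′(wᵀz)} w and renormalized. The mixing matrix, sample size, and tanh nonlinearity below are illustrative assumptions, not the paper's exact experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent sub-Gaussian sources (uniform, unit variance), linearly mixed.
n = 5000
S = rng.uniform(-np.sqrt(3), np.sqrt(3), size=(2, n))
A = np.array([[1.0, 0.6], [0.4, 1.0]])  # hypothetical mixing matrix
X = A @ S

# Whiten the mixtures: Z = V X with E{Z Z^T} = I.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
V = E @ np.diag(d ** -0.5) @ E.T
Z = V @ X

# One-unit fixed-point iteration with g = tanh, g'(u) = 1 - tanh(u)^2:
# w <- E{z g(w^T z)} - E{g'(w^T z)} w, then normalize to unit length.
w = rng.standard_normal(2)
w /= np.linalg.norm(w)
for _ in range(50):
    y = w @ Z
    gy = np.tanh(y)
    w_new = (Z * gy).mean(axis=1) - (1.0 - gy ** 2).mean() * w
    w_new /= np.linalg.norm(w_new)
    if abs(abs(w_new @ w) - 1.0) < 1e-9:  # converged up to sign
        w = w_new
        break
    w = w_new

# The estimated component should match one source up to sign and scale.
y = w @ Z
corr = [abs(np.corrcoef(y, s)[0, 1]) for s in S]
print(max(corr))
```

With whitened data the fixed point is reached in a handful of iterations, and the recovered component is strongly correlated with one of the original sources; which source is found depends on the random initialization.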
Copyright information
© 1997 Springer-Verlag Berlin Heidelberg
Cite this paper
Oja, E., Karhunen, J., Hyvärinen, A. (1997). From neural principal components to neural independent components. In: Gerstner, W., Germond, A., Hasler, M., Nicoud, J.-D. (eds) Artificial Neural Networks — ICANN'97. Lecture Notes in Computer Science, vol 1327. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0020207
Print ISBN: 978-3-540-63631-1
Online ISBN: 978-3-540-69620-9