
A Neural Network for PCA and Beyond


Abstract

Principal Component Analysis (PCA) has been implemented by several neural methods. We discuss a network that has previously been shown to find the Principal Component subspace, though not the actual Principal Components themselves. By introducing a constraint on the learning rule (we do not allow the weights to become negative), we cause the same network to find the actual Principal Components. We then use the network to identify individual independent sources when the signals from such sources are ORed together.
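The abstract names the learning rule only by its constraint. As a rough illustration, here is a minimal sketch assuming an Oja-style negative-feedback subspace rule, ΔW = η(x − Wy)yᵀ, with the weights clipped at zero after each update; the function name `subspace_pca` and its parameters are illustrative, not taken from the paper.

```python
import numpy as np

def subspace_pca(X, n_components, lr=0.001, epochs=50, nonneg=True, seed=0):
    """Subspace learning on zero-mean data X of shape (n_samples, n_features).

    With nonneg=False this is plain Oja-style subspace learning, which
    converges to a basis of the principal subspace but not to the
    individual Principal Components; nonneg=True adds the non-negativity
    constraint described in the abstract.
    """
    rng = np.random.default_rng(seed)
    W = rng.uniform(0.0, 0.1, size=(X.shape[1], n_components))
    for _ in range(epochs):
        for x in X:
            y = W.T @ x                    # feedforward activations
            e = x - W @ y                  # residual after negative feedback
            W += lr * np.outer(e, y)       # subspace learning rule
            if nonneg:
                np.maximum(W, 0.0, out=W)  # constraint: weights stay non-negative
    return W

# Hypothetical usage: data dominated by two axis-aligned variance directions.
rng = np.random.default_rng(1)
X = rng.normal(size=(2000, 5)) * np.array([3.0, 2.0, 0.5, 0.3, 0.1])
X -= X.mean(axis=0)
W = subspace_pca(X, n_components=2)
```

Without the constraint, the columns of W span the leading subspace only up to an arbitrary rotation; clipping the weights at zero breaks that rotational symmetry, which is how the constrained network can settle on the actual Principal Components.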

Cite this article

Fyfe, C. A Neural Network for PCA and Beyond. Neural Processing Letters 6, 33–41 (1997). https://doi.org/10.1023/A:1009606706736
