
A globally convergent learning algorithm for PCA neural networks

  • Original Article
  • Published in: Neural Computing & Applications

Abstract

Principal component analysis (PCA) by neural networks is one of the most frequently used feature extraction methods. To process large data sets, many neural-network-based learning algorithms for PCA have been proposed. However, traditional algorithms are not globally convergent. In this paper, a new PCA learning algorithm based on the cascade recursive least squares (CRLS) neural network is proposed. The algorithm guarantees that the network weight vector converges globally to an eigenvector associated with the largest eigenvalue of the input covariance matrix. A rigorous mathematical proof is given, and simulation results show the effectiveness of the algorithm.
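The single-neuron core of recursive-least-squares-style PCA learning can be sketched as follows. This is a minimal illustrative sketch of a standard RLS-type Hebbian rule, not the paper's CRLS algorithm: the output energy accumulated in `d` acts as an adaptive learning gain, so the step size shrinks as more samples are seen and the weight vector settles onto the principal eigenvector.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic zero-mean data with a known dominant direction (first axis).
dim = 4
cov = np.diag([10.0, 3.0, 1.0, 0.5])
X = rng.multivariate_normal(np.zeros(dim), cov, size=5000)

# RLS-type single-neuron PCA update (illustrative, not the paper's CRLS):
#   y = w.x,  d <- d + y^2,  w <- w + (y/d) * (x - y*w)
# The gain y/d decays roughly as 1/k, which stabilizes the iteration.
w = rng.normal(size=dim)
w /= np.linalg.norm(w)
d = 1.0
for x in X:
    y = w @ x
    d += y * y
    w += (y / d) * (x - y * w)

# Compare the learned weight vector with the true principal eigenvector.
eigvals, eigvecs = np.linalg.eigh(cov)
v1 = eigvecs[:, -1]  # eigenvector of the largest eigenvalue
alignment = abs(w @ v1) / np.linalg.norm(w)
print(round(alignment, 3))
```

At equilibrium the update implies C w = (wᵀC w) w, so w must be an eigenvector of the covariance matrix C with unit norm; the decaying gain drives it to the eigenvector of the largest eigenvalue in practice, though (as the abstract notes for traditional rules) such schemes lack a global convergence guarantee.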



Acknowledgements

This work was supported in part by the National Science Foundation of China under grant number A0324638 and by the Youth Science and Technology Foundation of UESTC under grant YF020801.

Author information


Corresponding author

Correspondence to Mao Ye.


Cite this article

Ye, M., Yi, Z. & Lv, J. A globally convergent learning algorithm for PCA neural networks. Neural Comput & Applic 14, 18–24 (2005). https://doi.org/10.1007/s00521-004-0435-y
