SVD Algorithms: APEX-like versus Subspace Methods

Abstract

We compare several new SVD learning algorithms, which are based on the subspace method of principal component analysis, with the APEX-like algorithm proposed by Diamantaras. Experiments show that these algorithms converge as fast as the APEX-like algorithm.
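
For intuition about the subspace idea behind these algorithms, the sketch below (Python; the function name and parameters are illustrative, and this is classical batch subspace iteration rather than the learning rules compared in the paper) applies subspace (orthogonal) iteration to a cross-correlation matrix C: the left and right subspace estimates are alternately multiplied by C and its transpose and re-orthonormalised, which drives them toward the dominant singular subspaces. The online SVD learning algorithms studied here can be viewed as stochastic, sample-by-sample counterparts of such a batch recursion.

    import numpy as np

    def svd_subspace_iteration(C, k, n_iter=200, seed=0):
        """Estimate the top-k singular triplets of C by subspace
        (orthogonal) iteration; illustrative only."""
        rng = np.random.default_rng(seed)
        m, n = C.shape
        # Random orthonormal start for the right subspace estimate.
        V, _ = np.linalg.qr(rng.standard_normal((n, k)))
        for _ in range(n_iter):
            U, _ = np.linalg.qr(C @ V)     # update left subspace, re-orthonormalise
            V, _ = np.linalg.qr(C.T @ U)   # update right subspace, re-orthonormalise
        # Since C v_i ~ sigma_i u_i at convergence, the column norms of C V
        # estimate the singular values (free of sign ambiguity).
        s = np.linalg.norm(C @ V, axis=0)
        return U, s, V

    # Sanity check against an exact SVD on a small random matrix.
    C = np.random.default_rng(1).standard_normal((8, 5))
    U, s, V = svd_subspace_iteration(C, k=2)
    print(np.sort(s)[::-1])                        # approximate top-2 singular values
    print(np.linalg.svd(C, compute_uv=False)[:2])  # exact top-2 singular values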

References

  1. Pierre F. Baldi and Kurt Hornik, “Learning in Linear Neural Networks: A Survey”, IEEE Transactions on Neural Networks, Vol. 6, No. 4, pp. 837–858, July 1995.

  2. K.I. Diamantaras and S.Y. Kung, Principal Component Neural Networks: Theory and Applications. Adaptive and Learning Systems for Signal Processing, Communications, and Control, John Wiley & Sons, Inc., 1996.

  3. Jim Kay, “Feature discovery under contextual supervision using mutual information”, Proc. of the International Joint Conference on Neural Networks, Vol. 4, pp. 79–84, 1992.

  4. Gene H. Golub and Charles F. Van Loan, Matrix Computations, The Johns Hopkins University Press, second edition, 1989.

  5. Konstantinos I. Diamantaras, “Principal Component Learning Networks and Applications”, PhD thesis, Department of Electrical Engineering, Princeton University, October 1992.

  6. Terence D. Sanger, “Two iterative algorithms for computing the singular value decomposition from input/output samples”, in J.D. Cowan, G. Tesauro and J. Alspector (eds.), Advances in Neural Information Processing Systems, Vol. 6, pp. 144–151, Morgan Kaufmann Publishers, Inc., 1994.

  7. Erkki Oja, “A simplified neuron model as principal component analyzer”, Journal of Mathematical Biology, Vol. 15, pp. 267–273, 1982.

  8. S.Y. Kung and K.I. Diamantaras, “A neural network learning algorithm for adaptive principal component extraction (APEX)”, Proc. of the International Conference on Acoustics, Speech and Signal Processing, pp. 861–864, IEEE, 1990.

  9. R.W. Brockett, “Dynamical systems that sort lists, diagonalize matrices, and solve linear programming problems”, Linear Algebra and its Applications, Vol. 146, pp. 79–91, 1991.

  10. Erkki Oja, “Neural networks, principal components, and subspaces”, International Journal of Neural Systems, Vol. 1, pp. 61–68, 1989.

  11. Wei-Yong Yan, Uwe Helmke and John B. Moore, “Global analysis of Oja's flow for neural networks”, IEEE Transactions on Neural Networks, Vol. 5, No. 5, pp. 674–683, 1994.

  12. Andreas Weingessel, “Convergence analysis of SVD algorithms”, Technical report, Institut für Statistik und Wahrscheinlichkeitstheorie, Technische Universität Wien, 1996.

  13. Mark D. Plumbley, “Lyapunov functions for convergence of principal component algorithms”, Neural Networks, Vol. 8, No. 1, pp. 11–23, 1995.

Cite this article

Weingessel, A., Hornik, K. SVD Algorithms: APEX-like versus Subspace Methods. Neural Processing Letters 5, 177–184 (1997). https://doi.org/10.1023/A:1009642710601
