Abstract
We compare several new SVD learning algorithms, which are based on the subspace method of principal component analysis, with the APEX-like algorithm proposed by Diamantaras. Experiments show that these algorithms converge as fast as the APEX-like algorithm.
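The paper itself does not include code, but the family of subspace-based SVD learning rules it studies can be illustrated with a small sketch. A standard construction (see Golub and Van Loan) embeds a matrix A in the symmetric matrix M = [[0, A], [A^T, 0]], whose dominant eigenvectors stack the left and right singular vectors of A; running Oja's subspace learning rule on M then recovers the leading singular triplets. The matrix sizes, learning rate, and iteration count below are illustrative choices, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 5x4 matrix whose dominant singular subspace we want to learn.
A = rng.standard_normal((5, 4))

# Symmetric embedding: the eigenvector of M for eigenvalue +sigma_i is
# (u_i, v_i)/sqrt(2), so a PCA subspace rule applied to M extracts left
# and right singular vectors jointly.
M = np.block([[np.zeros((5, 5)), A],
              [A.T, np.zeros((4, 4))]])

k = 2          # number of singular triplets to extract
eta = 0.05     # learning rate (small enough for stability here)
W = 0.1 * rng.standard_normal((9, k))

for _ in range(3000):
    MW = M @ W
    # Oja's subspace learning rule: Hebbian term minus a
    # self-normalizing correction that keeps W near orthonormal.
    W += eta * (MW - W @ (W.T @ MW))

# Reference answer: top-k singular triplets of A via numpy.
U, s, Vt = np.linalg.svd(A)
E = np.vstack([U[:, :k], Vt[:k].T]) / np.sqrt(2)

# Cosines of the principal angles between the learned and true
# subspaces; values near 1 mean the subspaces coincide.
overlap = np.linalg.svd(E.T @ W, compute_uv=False)
print(np.round(overlap, 3))
```

Note that, as with all subspace methods, only the span of the top-k eigenvectors is recovered; individual singular vectors would require a deflation or APEX-style lateral-inhibition scheme on top of this rule.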
References
Pierre F. Baldi and Kurt Hornik, “Learning in Linear Neural Networks: A Survey”, IEEE Transactions on Neural Networks, Vol. 6, No. 4, pp. 837–858, July 1995.
K.I. Diamantaras and S.Y. Kung, Principal Component Neural Networks: Theory and Applications. Adaptive and Learning Systems for Signal Processing, Communications, and Control, John Wiley & Sons, Inc., 1996.
Jim Kay, “Feature discovery under contextual supervision using mutual information”, Proc. of the International Joint Conference on Neural Networks, Vol. 4, pp. 79–84, 1992.
Gene H. Golub and Charles F. Van Loan, Matrix Computations, The Johns Hopkins University Press, second edition, 1989.
Konstantinos I. Diamantaras, “Principal Component Learning Networks and Applications”, PhD thesis, Department of Electrical Engineering, Princeton University, October 1992.
Terence D. Sanger, “Two iterative algorithms for computing the singular value decomposition from input/output samples”, in J.D. Cowan, G. Tesauro and J. Alspector (eds.) Advances in Neural Information Processing Systems, Vol. 6, pp. 144–151, Morgan Kaufmann Publishers, Inc., 1994.
Erkki Oja, “A simplified neuron model as principal component analyzer”, Journal of Mathematical Biology, Vol. 15, pp. 267–273, 1982.
S.Y. Kung and K.I. Diamantaras, “A neural network learning algorithm for adaptive principal component extraction (APEX)”, Proc. of the International Conference on Acoustics, Speech and Signal Processing, pp. 861–864, IEEE, 1990.
R.W. Brockett, “Dynamical systems that sort lists, diagonalize matrices, and solve linear programming problems”, Linear Algebra and its Applications, Vol. 146, pp. 79–91, 1991.
Erkki Oja, “Neural networks, principal components, and subspaces”, International Journal of Neural Systems, Vol. 1, pp. 61–68, 1989.
Wei-Yong Yan, Uwe Helmke and John B. Moore, “Global analysis of Oja's flow for neural networks”, IEEE Transactions on Neural Networks, Vol. 5, No. 5, pp. 674–683, 1994.
Andreas Weingessel, “Convergence analysis of SVD algorithms”, Technical report, Institut für Statistik und Wahrscheinlichkeitstheorie, Technische Universität Wien, 1996.
Mark D. Plumbley, “Lyapunov functions for convergence of principal component algorithms”, Neural Networks, Vol. 8, No. 1, pp. 11–23, 1995.
Cite this article
Weingessel, A., Hornik, K. SVD Algorithms: APEX-like versus Subspace Methods. Neural Processing Letters 5, 177–184 (1997). https://doi.org/10.1023/A:1009642710601