A Concise Functional Neural Network for Computing the Extremum Eigenpairs of Real Symmetric Matrices

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3971)

Abstract

Fast extraction of the extremum eigenpairs of a real symmetric matrix is important in many engineering applications, and neural networks can perform this computation in parallel with high performance. This paper therefore proposes a very concise functional neural network (FNN) for computing the largest (or smallest) eigenvalue and a corresponding eigenvector. After the FNN is transformed into a differential equation and its analytic solution is obtained, the convergence properties are analyzed completely. Based on this FNN, a method is designed that computes the extremum eigenpairs whether the matrix is indefinite, positive definite, or negative definite. Finally, three examples demonstrate its validity. Compared with other networks used in the same field, the proposed FNN is very simple and concise, and therefore easy to realize.
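The paper's exact FNN dynamics are not reproduced on this page, but the general idea it describes (evolving a state vector under a matrix-driven differential equation until it settles on an extremum eigenpair) can be illustrated with a minimal sketch. The sketch below Euler-integrates a Rayleigh-quotient gradient flow, which is a standard continuous-time scheme for this task and is assumed here purely for illustration; the function name, step size, and stopping rule are hypothetical, not the authors' design. Negating the matrix turns the largest-eigenvalue flow into a smallest-eigenvalue flow, mirroring the paper's handling of indefinite, positive definite, and negative definite cases.

```python
import numpy as np

def extremum_eigenpair(A, largest=True, dt=0.01, steps=20000, tol=1e-10):
    """Approximate an extremum eigenpair of a real symmetric matrix A by
    Euler-integrating the Rayleigh-quotient gradient flow on the unit sphere:
        dx/dt = (B - (x^T B x) I) x,  with B = A (largest) or B = -A (smallest).
    Generic initial states converge to a dominant eigenvector of B."""
    B = A if largest else -A
    rng = np.random.default_rng(0)
    x = rng.standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    for _ in range(steps):
        r = x @ B @ x                 # Rayleigh quotient of the current state
        dx = B @ x - r * x            # flow direction; zero at an eigenvector
        x = x + dt * dx               # explicit Euler step
        x /= np.linalg.norm(x)        # re-normalize against numerical drift
        if np.linalg.norm(dx) < tol:  # equilibrium reached
            break
    return x @ A @ x, x               # (eigenvalue estimate, unit eigenvector)
```

For example, on the symmetric matrix [[2, 1], [1, 2]] (eigenvalues 3 and 1), the flow converges to the eigenpair (3, [1, 1]/√2) with `largest=True` and to (1, [1, -1]/√2) with `largest=False`.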




Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, Y., You, Z. (2006). A Concise Functional Neural Network for Computing the Extremum Eigenpairs of Real Symmetric Matrices. In: Wang, J., Yi, Z., Zurada, J.M., Lu, B.-L., Yin, H. (eds) Advances in Neural Networks - ISNN 2006. Lecture Notes in Computer Science, vol 3971. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11759966_61


  • DOI: https://doi.org/10.1007/11759966_61

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-34439-1

  • Online ISBN: 978-3-540-34440-7

  • eBook Packages: Computer Science (R0)
