A functional neural network computing some eigenvalues and eigenvectors of a special real matrix
Introduction
Fast extraction of the eigenvalues and eigenvectors of a matrix, especially a general real matrix, is very important in engineering applications such as real-time signal processing (Luo et al., 1997, Ziegaus and Lang, 2004) and principal component analysis (Luo, Unbehauen, & Li, 1995). Among rapid computing methods, the neural network based approach is one of the most important, and much work on this technique has been reported (Cichocki, 1992, Cichocki and Unbehauen, 1992, Helmke and Moore, 1994, Kakeya and Kindo, 1997, Kobayashi et al., 2001, Li, 1997, Liu et al., 2005a, Liu et al., 2005b, Luo and Li, 1995, Perfetti and Massarelli, 1997, Reddy et al., 1995, Samardzija and Waterland, 1991, Song and Yam, 1998, Zhang et al., 2004). However, these works concentrate mainly on extracting the eigenvalues and eigenvectors of a real symmetric or anti-symmetric matrix, which is a very strict restriction. This paper therefore proposes a new functional neural network (FNN) that performs the computation under looser restrictions on the matrix. The FNN is expressed as formula (1), where
I denotes an n×n identity matrix, I0 denotes an n×n zero matrix, and
A is the real matrix whose eigenvalues and eigenvectors are to be extracted. When v(t) is regarded as the states of the neurons, [A′−B(t)] as the synaptic connection weights, and the activation functions as pure linear functions, formula (1) describes a continuous-time functional neural network.
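The continuous-time dynamics described above can be sketched numerically. Since formula (1) itself is not reproduced in this excerpt, the matrices `A_prime` and `B_of_t` below are placeholders chosen purely for illustration; the sketch only shows forward-Euler integration of a linear network of the stated shape, dv/dt = [A′ − B(t)]v.

```python
import numpy as np

def simulate_fnn(A_prime, B_of_t, v0, dt=1e-3, steps=5000):
    """Forward-Euler integration of dv/dt = (A' - B(t)) v.

    A_prime : constant weight matrix (placeholder for the paper's A').
    B_of_t  : callable t -> matrix (placeholder for the paper's B(t)).
    """
    v = np.asarray(v0, dtype=complex)
    for k in range(steps):
        v = v + dt * (A_prime - B_of_t(k * dt)) @ v
    return v

# Illustrative inputs: a rotation generator and B(t) identically zero.
A_prime = np.array([[0.0, -1.0], [1.0, 0.0]])
B_zero = lambda t: np.zeros((2, 2))
v_final = simulate_fnn(A_prime, B_zero, [1.0, 0.0])
```

With this placeholder `A_prime` (purely imaginary eigenvalues ±i) the trajectory rotates in the plane and its norm is nearly preserved, up to the small growth introduced by the Euler discretization.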
Formula (1) can be rewritten equivalently as formula (2). With the notation introduced there, where i denotes the imaginary unit, formula (2) is equivalent to formula (3), i.e., formula (4).
Since formula (4) is obtained from formula (1) by identical transformations, analyzing the convergence properties of formula (4) is equivalent to studying those of FNN (1).
Section snippets
Some preliminaries
All eigenvalues of A are denoted as λ1,…,λn; the corresponding eigenvectors and eigensubspaces are denoted as μ1,…,μn and V1,…,Vn.
Let ξ∈Cn denote the equilibrium vector. If ξ exists, the following relation must hold:
When the FNN reaches its equilibrium state, formula (6) follows from formula (4). When ξ is an eigenvector of A and λ is the corresponding eigenvalue, formula (7) holds.
From formulas (6) and (7), it follows that
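The eigen-relation at equilibrium can be checked numerically. Given a candidate equilibrium vector ξ, the corresponding eigenvalue estimate is the Rayleigh quotient ξᴴAξ / ξᴴξ, which is exact whenever ξ is a true eigenvector; the matrix and vector below are illustrative, not the paper's.

```python
import numpy as np

def rayleigh_quotient(A, xi):
    """Eigenvalue estimate lambda = xi^H A xi / (xi^H xi); exact when
    xi is a true eigenvector of A."""
    xi = np.asarray(xi, dtype=complex)
    return (xi.conj() @ A @ xi) / (xi.conj() @ xi)

# Check with a matrix whose eigenpairs are known analytically:
# A = [[0, -1], [1, 0]] has eigenvalue i with eigenvector (1, -i).
A = np.array([[0.0, -1.0], [1.0, 0.0]])
xi = np.array([1.0, -1.0j])
lam = rayleigh_quotient(A, xi)   # equals 1j for this exact eigenvector
```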
Analytic solution of FNN (1)
Theorem 1 Let Sj denote μj/|μj|, and let zj(t) denote the projection of z(t) onto Sj. Then the analytic solution of FNN (1) is given by formula (9) for t≥0.
Proof Obviously, S1, S2,…,Sn constitute a basis of the n-dimensional complex vector space Cn, so z(t) can be expanded over this basis as in formula (10). Since Sk is an eigenvector of A and λk is the corresponding eigenvalue, substituting formula (10) into formula (4) gives that
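The eigenbasis expansion behind Theorem 1 can be illustrated numerically. Assuming (as a simplification, since formula (4) is not reproduced in this excerpt) that the dynamics reduce to the linear system dz/dt = Az, the expansion z(t) = Σₖ cₖ e^{λₖt} Sₖ must agree with the matrix-exponential solution for a diagonalizable A:

```python
import numpy as np
from scipy.linalg import expm

# Expand z(0) over the eigenvectors S_k, then propagate each mode by
# exp(lambda_k * t) and compare with z(t) = expm(A t) z(0).
rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))      # random real matrix, a.s. diagonalizable
lam, S = np.linalg.eig(A)            # columns of S are eigenvectors
z0 = rng.standard_normal(4).astype(complex)
c = np.linalg.solve(S, z0)           # expansion coefficients of z(0)

t = 0.7
z_series = S @ (c * np.exp(lam * t))   # eigenbasis formula
z_expm = expm(A * t) @ z0              # reference solution
# z_series and z_expm agree to machine precision
```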
Convergence analysis
Let the index set be denoted (k1, k2,…,kN), and let the sign ‘⊕’ denote the direct sum operator.
Theorem 2 If the stated condition holds, the FNN cannot obtain ξ when the initial complex vector z(0)≠0.
Proof When the condition of Theorem 2 holds, applying Theorem 1 gives the corresponding expression for z(t). Depending upon the values involved, there are two cases. If the relation holds for j=1, 2,…,n, we will show that z(t) falls into a periodic cycle. Assume the period is T; then
Examples and discussion
In this section, we give three examples to illustrate the performance of the proposed FNN. The simulation platform is MATLAB.
Example 1 A is randomly generated as shown. When A is input into the FNN, the equilibrium vector and corresponding eigenvalue are obtained. The true …
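Results of this kind can be cross-checked against a standard eigensolver. Per the Conclusions, the FNN targets the unique eigenvalue with the largest imaginary part; that eigenpair can be read off with NumPy. The matrix below is illustrative, since the paper's example matrix is not reproduced in this excerpt.

```python
import numpy as np

# Illustrative real matrix: eigenvalues are {2i, -2i, 1}, so the unique
# eigenvalue with the largest imaginary part is 2i.
A = np.array([[0.0, -2.0, 0.0],
              [2.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])

lam, V = np.linalg.eig(A)
k = np.argmax(lam.imag)              # index of largest imaginary part
lam_target, v_target = lam[k], V[:, k]

# Sanity check: (lam_target, v_target) is a genuine eigenpair of A.
assert np.allclose(A @ v_target, lam_target * v_target)
```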
Conclusions
This paper proposes a functional neural network model adapted to computing some eigenvalues of a special real matrix. In complex vector space, the analytic solution of the model is obtained, and from this solution the convergence properties are analyzed. The network is applicable to a real matrix with the following features: its eigenvalues are not all real numbers, there is a unique eigenvalue whose imaginary part is the largest, and the algebraic multiplicity of that eigenvalue is not restricted.
Acknowledgements
The authors would like to thank the referees for their helpful comments.
References (17)
- et al. (1997). Eigenspace separation of autocorrelation memory matrices for capacity expansion. Neural Networks.
- et al. (2001). Estimation of singular values of very large matrices using random sampling. Computers and Mathematics with Applications.
- (1997). A matrix inverse eigenvalue problem and its application. Linear Algebra and Its Applications.
- et al. (2005). A simple functional neural network for computing the largest and smallest eigenvalues and corresponding eigenvectors of a real symmetric matrix. Neurocomputing.
- et al. (1997). A minor component analysis algorithm. Neural Networks.
- et al. (1995). A principal component analysis algorithm with invariant norm. Neurocomputing.
- et al. (1997). Training spatially homogeneous fully recurrent neural networks in eigenvalue space. Neural Networks.
- et al. (1995). Some algorithms for eigensubspace estimation. Digital Signal Processing.