Abstract
Minor component analysis (MCA) deals with recovering the eigenvector associated with the smallest eigenvalue of the autocorrelation matrix of the input data, and the Möller algorithm is a well-known self-stabilizing MCA method. In this paper, we present a convergence analysis of the Möller algorithm for estimating the minor component of an input signal via a deterministic discrete-time (DDT) method. Sufficient conditions are obtained that guarantee the convergence of the Möller algorithm. Simulations are carried out to further illustrate the theoretical results.
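To make the setting concrete, the following minimal Python sketch simulates the DDT system studied here. The update used below, \({\varvec{w}}(k+1)={\varvec{w}}(k)+\eta [({\varvec{w}}(k)^{T}{\varvec{Rw}}(k)){\varvec{w}}(k)-(2\left\| {{\varvec{w}}(k)} \right\| ^{2}-1){\varvec{Rw}}(k)]\), is our reading of Möller's rule as analyzed in the appendices (Eqs. (6)–(8)); the matrix \({\varvec{R}}\), step size \(\eta \), and initialization are illustrative choices, not the paper's simulation setup.

```python
import numpy as np

# Sketch of the assumed DDT system (6):
#   w(k+1) = w(k) + eta * [ (w^T R w) w - (2*||w||^2 - 1) R w ].
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
R = A @ A.T                              # synthetic autocorrelation matrix

eigvals, V = np.linalg.eigh(R)           # eigenvalues in ascending order
v_n = V[:, 0]                            # minor component (smallest eigenvalue)
eta = 0.05 / eigvals[-1]                 # small step: eta * lambda_1 = 0.05

w = rng.standard_normal(5)
w /= np.linalg.norm(w)                   # ||w(0)|| = 1
for _ in range(20000):
    Rw = R @ w
    w = w + eta * ((w @ Rw) * w - (2 * (w @ w) - 1) * Rw)

print(abs(w @ v_n) / np.linalg.norm(w))  # direction cosine -> ~1
print(np.linalg.norm(w))                 # norm -> ~1 (self-stabilizing)
```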
Acknowledgments
This work was supported by the National Science Fund for Distinguished Youth Scholars of China (61025014) and the National Natural Science Foundation of China under Grants 61074072 and 61374120.
Appendices
Appendix 1: Proof of Theorem 1
Proof
So, we obtain that

\(\left\| {{\varvec{w}}(k+1)} \right\| ^{2}\le (1+\eta \lambda _1 +\eta \lambda _1 \left\| {{\varvec{w}}(k)} \right\| ^{2})^{2}\left\| {{\varvec{w}}(k)} \right\| ^{2}.\)
Define a differentiable function \(f(s)=(1+\eta \lambda _1 +\eta \lambda _1 s)^{2}s\) on the interval \([0,1]\), where \(s=\left\| {{\varvec{w}}(k)} \right\| ^{2}\) and \(f(s)=\left\| {{\varvec{w}}(k+1)} \right\| ^{2}\). It follows that

\(\dot{f}(s)=(1+\eta \lambda _1 +\eta \lambda _1 s)(1+\eta \lambda _1 +3\eta \lambda _1 s)\)

for all \(0\le s\le 1\). Clearly, the roots of the equation \(\dot{f}(s)=0\) are

\(s_1 =-\frac{1+\eta \lambda _1 }{\eta \lambda _1 },\quad s_2 =-\frac{1+\eta \lambda _1 }{3\eta \lambda _1 }.\)

By using \(\eta >0\) and \(\lambda _1 >0\), we have that \(s_1 <s_2 <0\). So it holds that \(\dot{f}(s)>0\) for all \(0\le s\le 1\), i.e., \(f(s)\) is monotonically increasing on the interval \([0,1]\). Thus, for all \(0\le s\le 1\), it follows that

\(f(s)\le f(1)=(1+2\eta \lambda _1 )^{2}.\)
So, we have \(\left\| {{\varvec{w}}(k)} \right\| <1+2\eta \lambda _1\), for all \(k\ge 0\). This completes the proof of Theorem 1.
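As a quick numerical check of the argument above (a sketch with arbitrary test values of \(\eta \) and \(\lambda _1 \), not part of the original proof), one can verify that \(f\) is increasing on \([0,1]\) and that \(f(1)\) equals the stated bound:

```python
import numpy as np

eta, lam1 = 0.05, 3.0                    # arbitrary test values
f = lambda s: (1 + eta * lam1 + eta * lam1 * s) ** 2 * s

s = np.linspace(0.0, 1.0, 1001)
assert np.all(np.diff(f(s)) > 0)         # f is monotonically increasing on [0, 1]
assert np.isclose(f(1.0), (1 + 2 * eta * lam1) ** 2)
print("f(1) =", f(1.0), "= (1 + 2*eta*lam1)^2 =", (1 + 2 * eta * lam1) ** 2)
```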
Appendix 2: Proof of Theorem 2
Proof
Denote \(c=\{1-\eta \lambda _1 (2(1+2\eta \lambda _1 )^{2}-1)\}\). By using condition (a), we have

\(\left\| {{\varvec{w}}(k+1)} \right\| \ge \{1-\eta \lambda _1 (2(1+2\eta \lambda _1 )^{2}-1)\}\left\| {{\varvec{w}}(k)} \right\| =c\left\| {{\varvec{w}}(k)} \right\| >0.\)

So, we have \(\left\| {{\varvec{w}}(k)} \right\| >c^{k}\left\| {{\varvec{w}}(0)} \right\| \) for all \(k\ge 0\). This completes the proof of Theorem 2.
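The lower bound of Theorem 2 can likewise be checked numerically along a trajectory of the assumed update (6); the matrix, step size, and initialization below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
R = A @ A.T
lam1 = np.linalg.eigvalsh(R)[-1]
eta = 0.05 / lam1                        # keep eta * lambda_1 small
c = 1 - eta * lam1 * (2 * (1 + 2 * eta * lam1) ** 2 - 1)   # here c > 0

w = rng.standard_normal(4)
w /= np.linalg.norm(w)
n0 = np.linalg.norm(w)
for k in range(1, 201):
    Rw = R @ w
    w = w + eta * ((w @ Rw) * w - (2 * (w @ w) - 1) * Rw)
    assert np.linalg.norm(w) > c ** k * n0   # Theorem 2's lower bound
print("lower bound held for 200 steps; c =", c)
```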
Appendix 3: Proof of Lemma 1
Proof
This completes the proof of Lemma 1.
Appendix 4: Proof of Lemma 2
Proof
By using (6)–(8), we have for all \(k\ge 0\) and \(i=1,2,\ldots , n\) that

\(\frac{z_i (k+1)}{z_n (k+1)}=\left\{ {1-\frac{\eta (\lambda _i -\lambda _n )(2\left\| {{\varvec{w}}(k)} \right\| ^{2}-1)}{1+\eta {\varvec{w}}(k)^{T}{\varvec{Rw}}(k)-\eta \lambda _n (2\left\| {{\varvec{w}}(k)} \right\| ^{2}-1)}} \right\} \frac{z_i (k)}{z_n (k)}.\)
By using Theorem 1, it holds that \(\left\| {{\varvec{w}}(k)} \right\| <1+2\eta \lambda _1 \) for all \(k\ge 0\); then we have

\(1+\eta {\varvec{w}}(k)^{T}{\varvec{Rw}}(k)-\eta \lambda _n (2\left\| {{\varvec{w}}(k)} \right\| ^{2}-1)\ge 1-\eta \lambda _n (2(1+2\eta \lambda _1 )^{2}-1)>0.\)
Denote \(\beta _k =\left\{ {1-\frac{\eta (\lambda _i -\lambda _n )(2\left\| {{\varvec{w}}(k)} \right\| ^{2}-1)}{1+\eta {\varvec{w}}(k)^{T}{\varvec{Rw}}(k)-\eta \lambda _n (2\left\| {{\varvec{w}}(k)} \right\| ^{2}-1)}} \right\} ^{2}\). We now prove that \(\beta _k <1\), which is established by the following two bounds. For brevity, denote \({\beta }'_k =\frac{\eta (\lambda _i -\lambda _n )(2\left\| {{\varvec{w}}(k)} \right\| ^{2}-1)}{1+\eta {\varvec{w}}(k)^{T}{\varvec{Rw}}(k)-\eta \lambda _n (2\left\| {{\varvec{w}}(k)} \right\| ^{2}-1)}\); then we have \(\beta _k =(1-{\beta }'_k )^{2}\).
Denote \({\beta }''_k =\frac{1+\eta {\varvec{w}}(k)^{T}{\varvec{Rw}}(k)}{2\left\| {{\varvec{w}}(k)} \right\| ^{2}-1}-\eta \lambda _n \); then \({\beta }'_k =\eta (\lambda _i -\lambda _n ){\beta }''_k \). By using condition (a), we have

\(0<{\beta }''_k <\frac{2}{\eta (\lambda _1 -\lambda _n )}. \quad (27)\)

By using \(\lambda _i -\lambda _n >0\) and (27), we have \(0<{\beta }'_k <2\); that is to say, \(\beta _k =(1-{\beta }'_k )^{2}<1\).
Denote \(\beta =\max (\beta _0, \beta _1, \ldots , \beta _k, \ldots )\) and \(\theta _1 =-\ln \beta \); then we have \(\theta _1 >0\) and

\(\left( {\frac{z_i (k)}{z_n (k)}} \right) ^{2}\le \beta ^{k}\left( {\frac{z_i (0)}{z_n (0)}} \right) ^{2}=e^{-\theta _1 k}\left( {\frac{z_i (0)}{z_n (0)}} \right) ^{2}. \quad (28)\)

By using \(\theta _1 >0\) and (28), we have for all \(i=1,2,\ldots , n-1\) that

\(z_i^{2} (k)\le e^{-\theta _1 k}\left( {\frac{z_i (0)}{z_n (0)}} \right) ^{2}z_n^{2} (k). \quad (29)\)
From Theorems 1 and 2, we obtain that \(z_n (k)\) is bounded; hence \(\mathop {\lim }\nolimits _{k\rightarrow \infty } z_i (k)=0\) for \(i=1,2,\ldots , n-1\). This completes the proof of Lemma 2.
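A short sketch of Lemma 2's conclusion: writing \({\varvec{w}}(k)\) in the eigenbasis of \({\varvec{R}}\), the coordinates \(z_i (k)\) paired with the larger eigenvalues vanish while the minor coordinate does not. The setup below (matrix, step size, initialization) is again an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((5, 5))
R = A @ A.T
eigvals, V = np.linalg.eigh(R)           # ascending: eigvals[0] is the smallest
eta = 0.05 / eigvals[-1]

w = rng.standard_normal(5)
w /= np.linalg.norm(w)
for _ in range(50000):
    Rw = R @ w
    w = w + eta * ((w @ Rw) * w - (2 * (w @ w) - 1) * Rw)

z = V.T @ w                              # coordinates in the eigenbasis of R
print("coordinates for larger eigenvalues:", z[1:])  # -> ~0
print("minor coordinate z_n:", z[0])                 # -> ~ +/- 1
```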
Appendix 5: Proof of Lemma 3
Proof
By using Lemma 2, we obtain that \({\varvec{w}}(k)\) converges to the direction of the minor component \({\varvec{v}}_{{\varvec{n}}} \) as \(k\rightarrow \infty \). Suppose that \({\varvec{w}}(k)\) has converged to the direction of \({\varvec{v}}_{{\varvec{n}}} \) at time \(k_0 \), i.e., \({\varvec{w}}(k_0 )=z_n (k_0 ){\varvec{v}}_{{\varvec{n}}} \).
From (8), we have for all \(k\ge k_0 \) that

\(z_n (k+1)=\{1+\eta \lambda _n z_n^{2} (k)-\eta \lambda _n (2z_n^{2} (k)-1)\}z_n (k)=\{1-\eta \lambda _n (z_n^{2} (k)-1)\}z_n (k). \quad (30)\)
By using (30), we have for all \(k>k_0 \) that

\(z_n (k+1)-1=\{1-\eta \lambda _n z_n (k)(z_n (k)+1)\}(z_n (k)-1). \quad (31)\)
By using \(\left\| {{\varvec{w}}(k)} \right\| <1+2\eta \lambda _1\), we have for all \(k>k_0 \) that

\(z_n (k)(z_n (k)+1)<(1+2\eta \lambda _1 )(2+2\eta \lambda _1 ). \quad (32)\)
Denote \(\delta _k =1-\eta \lambda _n z_n (k)(z_n (k)+1)\); since \(z_n (k)>0\), we can conclude that \(0<\delta _k <1\). By using (30)–(32), we have for all \(k>k_0 \) that

\(\left| {z_n (k+1)-1} \right| =\delta _k \left| {z_n (k)-1} \right| . \quad (33)\)
Denote \(\delta =\max (\delta _0, \delta _1, \ldots , \delta _k, \ldots )\); clearly \(0<\delta <1\). By using (33), we have for all \(k>k_0 \) that

\(\left| {z_n (k)-1} \right| \le e^{-\theta k}\Pi _1 , \quad (34)\)

where \(\theta =-\ln \delta \) and \(\Pi _1 =\left| {z_n (0)-1} \right| \).
Denote \(\Pi _2 =\eta \lambda _1 \left( {1+\eta \lambda _1 } \right) \left( {2+\eta \lambda _1 } \right) \left| {z_n (0)-1} \right| \). Given any \(\varepsilon >0\), there exists a \(K\ge 1\) such that

\(\sum \limits _{k=K}^\infty {\Pi _2 e^{-\theta k}} =\frac{\Pi _2 e^{-\theta K}}{1-e^{-\theta }}<\varepsilon .\)
For any \(k_1 >k_2 >K\), it follows from (34) that

\(\left| {z_n (k_1 )-z_n (k_2 )} \right| \le \sum \limits _{k=k_2 }^{k_1 -1} {\left| {z_n (k+1)-z_n (k)} \right| } \le \sum \limits _{k=K}^\infty {\Pi _2 e^{-\theta k}} <\varepsilon .\)
This implies that the sequence \(\left\{ {z_n (k)} \right\} \) is a Cauchy sequence. By using the Cauchy convergence principle, there must exist a constant \(z^{{*}}\) such that \(\mathop {\lim }\nolimits _{k\rightarrow \infty } z_n (k)=z^{{*}}\).
By using (7), we have that \(\mathop {\lim }\nolimits _{k\rightarrow \infty } {\varvec{w}}(k)=z^{{*}}{\varvec{v}}_{{\varvec{n}}}\). Since (6) has the self-stabilizing property, we have that \(\mathop {\lim }\nolimits _{k\rightarrow \infty } {\left\| {{\varvec{w}}(k+1)} \right\| }/{\left\| {{\varvec{w}}(k)} \right\| }=1\). From (8), we have that \(z^{{*}}=\{1-\eta \lambda _n [2(z^{{*}})^{2}-1]+\eta \lambda _n (z^{{*}})^{2}\}z^{{*}}\), and hence \(z^{{*}}=\pm 1\). This completes the proof of Lemma 3.
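The scalar dynamics (30) along \({\varvec{v}}_{{\varvec{n}}}\), as reconstructed above, can be iterated directly to observe the fixed points \(z^{*}=\pm 1\); the values of \(\eta \) and \(\lambda _n \) below are illustrative:

```python
# Scalar dynamics along v_n (recursion (30) as reconstructed above):
#   z <- (1 - eta * lam_n * (z**2 - 1)) * z,  fixed points z* = 0, +1, -1.
eta, lam_n = 0.05, 1.0                   # illustrative values
for z0 in (0.2, 0.9, 1.5, -0.7):
    z = z0
    for _ in range(2000):
        z = (1 - eta * lam_n * (z * z - 1)) * z
    print(f"z(0) = {z0:+.1f}  ->  z* = {z:+.6f}")    # converges to +/- 1
```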