Abstract
This paper addresses the separation of complex-valued signals in the independent component analysis (ICA) framework, where the sources are mixed linearly and instantaneously. Inspired by the recently proposed reference-based contrast criteria built on kurtosis, a new contrast function is put forward by introducing the reference-based scheme into negentropy, and a novel fast fixed-point (FastICA) algorithm is derived from it. The method is similar in spirit to the classical negentropy-based FastICA algorithm, but is considerably faster, and the gain in computational speed becomes especially pronounced for large sample sizes. Furthermore, compared with the kurtosis-based FastICA algorithms, the proposed method is more robust against unexpected outliers, which is particularly evident when the sample size is small. The local consistency of the new negentropy-based contrast function is analyzed in detail, and the derivation of the algorithm is presented. Performance is analyzed and compared through computer simulations and real-world experiments, for which a simple wireless communication system with two transmitting and two receiving antennas is constructed.
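To make the idea concrete, the sketch below shows a one-unit, complex FastICA-style fixed-point iteration in which the argument of the nonlinearity is a cross-moment with a reference projection \(\mathbf{v}^H\mathbf{x}\) rather than \(|\mathbf{w}^H\mathbf{x}|^2\). It is only an illustration of the reference-based scheme, not the paper's exact update rule: the nonlinearity \(G(u)=\log(a+u)\), the use of the modulus of the cross term, and the rule of resetting \(\mathbf{v}\) to the previous \(\mathbf{w}\) after each iteration are assumptions made here for the sketch.

```python
import numpy as np


def whiten(x):
    """Spatially whiten complex observations x (shape: sensors x samples)."""
    x = x - x.mean(axis=1, keepdims=True)
    cov = (x @ x.conj().T) / x.shape[1]
    d, e = np.linalg.eigh(cov)
    return (e @ np.diag(d ** -0.5) @ e.conj().T) @ x


def ref_fastica_unit(x, n_iter=100, a=0.1, seed=0):
    """Extract one source from whitened complex mixtures x (N x T).

    Illustrative reference-based fixed-point sweep: the usual FastICA
    argument |w^H x|^2 is replaced by the modulus of the cross term
    u = |(w^H x)(v^H x)^*|, with the reference vector v reset to the
    previous estimate of w after each iteration (assumed scheme).
    """
    rng = np.random.default_rng(seed)
    n, _ = x.shape
    w = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    w /= np.linalg.norm(w)
    v = w.copy()                                   # initial reference vector
    for _ in range(n_iter):
        y = w.conj() @ x                           # current output, shape (T,)
        z = v.conj() @ x                           # reference signal, fixed this sweep
        u = np.abs(y * z.conj())                   # reference-based argument of g
        g = 1.0 / (a + u)                          # G'(u) for G(u) = log(a + u)
        dg = -1.0 / (a + u) ** 2                   # G''(u)
        w_new = (x * (z.conj() * g)).mean(axis=1) - (g + u * dg).mean() * w
        w_new /= np.linalg.norm(w_new)
        v, w = w.copy(), w_new                     # reference follows previous w
    return w                                       # w.conj() @ x estimates one source
```

Once \(\mathbf{v}\) tracks \(\mathbf{w}\), the argument of the nonlinearity approaches \(|\mathbf{w}^H\mathbf{x}|^2\) and the iteration behaves like the classical negentropy-based rule; keeping the reference projection fixed within each sweep is the ingredient that reference-based criteria exploit to simplify each one-dimensional optimization. Separating several sources would additionally require a deflation or symmetric decorrelation step, which is omitted here.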
Acknowledgments
This work is supported by the NSF of Jiangsu Province of China under Grant Nos. BK2011117 and BK2012057, and by the National Natural Science Foundation of China under Grant Nos. 61172061 and 61201242.
Appendix
First, we denote the gradient operator by \(\nabla\) and the partial gradient operators with respect to the first and second arguments by \({\nabla _1}\) and \({\nabla _2}\), respectively. More precisely, \(\nabla J(\mathbf{w})\) is the vector of all partial derivatives of \(J(\mathbf{w})\), whereas \({\nabla _1}I(\mathbf{w},\mathbf{v})\) (respectively, \({\nabla _2}I(\mathbf{w},\mathbf{v})\)) is the vector of partial derivatives of \(I(\mathbf{w},\mathbf{v})\) with respect to \(\mathbf{w}\) (respectively, \(\mathbf{v}\)).
We make the orthogonal change of coordinates \(\mathbf{p} = \mathbf{A}^H\mathbf{w}\) and \(\mathbf{q} = \mathbf{A}^H\mathbf{v}\), giving \(I(\mathbf{p},\mathbf{q}) = E\left\{ G\bigl((\mathbf{p}^H\mathbf{s})(\mathbf{q}^H\mathbf{s})^*\bigr)\right\}\). When \(\mathbf{w}\) coincides with one of the rows of \(\mathbf{A}^{-1}\), we have \(\mathbf{p} = (0,\ldots,p,0,\ldots,0)^T\) and, since \(\mathbf{v}\) is updated to follow \(\mathbf{w}\), the corresponding \(\mathbf{q} = (0,\ldots,q,0,\ldots,0)^T\). Here \(\mathbf{p}\) and \(\mathbf{q}\) are N-dimensional complex-valued column vectors, denoted by \([p_1,p_2,\ldots,p_N]^T\) and \([q_1,q_2,\ldots,q_N]^T\), respectively. In the following, we analyze the stability of such points \(\mathbf{p}\).
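For instance, substituting such a candidate point \(\mathbf{p} = p\,\mathbf{e}_k\), \(\mathbf{q} = q\,\mathbf{e}_k\) into the expression for \(I(\mathbf{p},\mathbf{q})\) above gives

\[
I(p\,\mathbf{e}_k,\; q\,\mathbf{e}_k)
= E\left\{ G\bigl((p^{*} s_k)(q^{*} s_k)^{*}\bigr)\right\}
= E\left\{ G\bigl(p^{*} q\,|s_k|^{2}\bigr)\right\},
\]

so that at such a point the contrast depends on the single source \(s_k\) only, and reduces to \(E\{G(|s_k|^2)\}\) when \(p^{*}q = 1\).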
We now derive a Taylor expansion of \(I\) around the extrema. The partial gradient of \(I\) with respect to \(\mathbf{p}\) is
where
The Hessian of \(I\) with respect to \(\mathbf{{p}}\) is
where we define
Then in (39), we have
where \(n = 1,2,\ldots,N\) and \(j = 1,2,\ldots,N\) in (42)–(45).
Without loss of generality, we analyze the stability of the point \(\mathbf{p} = p\mathbf{e}_1 = (p,0,\ldots,0)^T\) and the corresponding \(\mathbf{q} = q\mathbf{e}_1 = (q,0,\ldots,0)^T\), which corresponds to \(\mathbf{w} = p\mathbf{a}_1\). Due to the normalization of \(\mathbf{w}\) and \(\mathbf{v}\) after each one-dimensional optimization, we have \(\left| \mathbf{p}^H\mathbf{s} \right|^2 = \left| \mathbf{q}^H\mathbf{s} \right|^2 = \left| s_1 \right|^2\) and \(\left\| \mathbf{p} \right\|^2 = \left\| \mathbf{q} \right\|^2 = 1\). Evaluating the gradient in (36) at \(\mathbf{p} = p\mathbf{e}_1\) and \(\mathbf{q} = q\mathbf{e}_1\), under Assumptions A1 and A2, we have
Similarly, considering (39), we have
where \(\alpha = E\left\{ \left| s_1 \right|^2 g'\!\bigl((\mathbf{p}^H\mathbf{s})(\mathbf{q}^H\mathbf{s})^*\bigr)\right\} = E\left\{ \left| s_1 \right|^2 g'\!\bigl(\left| s_1 \right|^2\bigr)\right\} \).
Now we apply a small perturbation \(\boldsymbol{\varepsilon} = (\varepsilon_{1r},\varepsilon_{1i},\ldots,\varepsilon_{Nr},\varepsilon_{Ni})^T\) and evaluate the Taylor expansion of \(I\) around \(p\mathbf{e}_1\).
Due to the constraint \(\left\| \mathbf{w} \right\| = 1\), we have \(\left\| p\mathbf{e}_1 + \boldsymbol{\varepsilon} \right\| = 1\).
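Reading the perturbation as the complex vector with entries \(\varepsilon_n = \varepsilon_{nr} + \mathrm{i}\,\varepsilon_{ni}\), and using \(|p| = 1\) from the normalization above, this constraint expands as

\[
\left\| p\,\mathbf{e}_1 + \boldsymbol{\varepsilon} \right\|^2
= |p + \varepsilon_1|^2 + \sum_{n=2}^{N} |\varepsilon_n|^2
= 1 + 2\,\mathrm{Re}\{p^{*}\varepsilon_1\} + \sum_{n=1}^{N} |\varepsilon_n|^2 = 1,
\]

so that \(2\,\mathrm{Re}\{p^{*}\varepsilon_1\} = -\left\| \boldsymbol{\varepsilon} \right\|^2\); the component of the perturbation along \(\mathbf{e}_1\) therefore contributes only at second order in \(\left\| \boldsymbol{\varepsilon} \right\|\).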
Using this, we have
This completes the proof of Theorem 1.
Cite this article
Zhao, W., Shen, Y., Yuan, Z. et al. A Novel Method for Complex-Valued Signals in Independent Component Analysis Framework. Circuits Syst Signal Process 34, 1893–1913 (2015). https://doi.org/10.1007/s00034-014-9929-8