
A Novel Method for Complex-Valued Signals in Independent Component Analysis Framework


Abstract

This paper deals with the separation of complex-valued signals in the independent component analysis (ICA) framework, where the sources are linearly and instantaneously mixed. Inspired by the recently proposed reference-based contrast criteria based on kurtosis, a new contrast function is put forward by introducing the reference-based scheme into negentropy, and a novel fast fixed-point (FastICA) algorithm is developed on this basis. The method is similar in spirit to the classical negentropy-based FastICA algorithm but is considerably faster, an advantage that becomes especially pronounced for large sample sizes. Furthermore, compared with the kurtosis-based FastICA algorithms, the method is more robust against unexpected outliers, which is particularly evident when the sample size is small. The local consistency of the new negentropy-based contrast function is analyzed in detail, and the derivation of the novel algorithm is presented. Performance is analyzed and compared through computer simulations and real-world experiments, for which a simple wireless communication system with two transmitting and two receiving antennas is constructed.
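To make the scheme concrete, below is a minimal Python/NumPy sketch of a reference-based fixed-point update on a synthetic two-antenna mixture. It is an illustration, not the authors' exact algorithm: the nonlinearity (g = tanh, i.e., G = log cosh applied to the real part of the reference product), the update form (borrowed from the classical complex FastICA iteration of Bingham and Hyvärinen), and the five-iteration refresh schedule for the reference vector v are all assumptions made for this sketch.

```python
# Minimal sketch (not the paper's exact method) of a reference-based
# fixed-point ICA update for a linear, instantaneous complex mixture.
import numpy as np

rng = np.random.default_rng(0)

# Two independent QPSK-like unit-variance sources, mixed by a random A.
T = 5000
s = (rng.choice([-1.0, 1.0], (2, T)) + 1j * rng.choice([-1.0, 1.0], (2, T))) / np.sqrt(2)
A = rng.normal(size=(2, 2)) + 1j * rng.normal(size=(2, 2))
x = A @ s

# Whiten the observations (standard ICA preprocessing).
C = (x @ x.conj().T) / T
d, E = np.linalg.eigh(C)
z = (E * d ** -0.5) @ E.conj().T @ x

g = np.tanh                              # assumed nonlinearity, g = G'
dg = lambda u: 1.0 - np.tanh(u) ** 2     # g'

w = rng.normal(size=2) + 1j * rng.normal(size=2)
w /= np.linalg.norm(w)
v = w.copy()                             # reference vector

for it in range(200):
    y = w.conj() @ z                     # w^H z
    r = (v.conj() @ z).conj()            # (v^H z)^*
    u = np.real(y * r)                   # real part of reference product (assumed)
    # Fixed-point step, in the spirit of the complex FastICA update:
    w_new = (z * (r * g(u))).mean(axis=1) - (g(u) + np.abs(r) ** 2 * dg(u)).mean() * w
    w_new /= np.linalg.norm(w_new)
    converged = np.abs(np.vdot(w_new, w)) > 1.0 - 1e-12
    w = w_new
    if it % 5 == 0:                      # assumed refresh schedule for v
        v = w.copy()
    if converged:
        break

# Magnitudes of the correlations with the two sources; ideally one is
# near 1 and the other near 0 (separation up to phase and permutation).
y = w.conj() @ z
print([abs(np.vdot(s[k], y)) / T for k in range(2)])
```

Freezing the reference vector between refreshes is the characteristic ingredient of the reference-based scheme, and it is the intuition behind the speed advantage claimed above; the refresh period and stopping tolerance here are arbitrary illustrative choices.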



Acknowledgments

This work was supported by the Natural Science Foundation of Jiangsu Province of China under Grant Nos. BK2011117 and BK2012057, and by the National Natural Science Foundation of China under Grant Nos. 61172061 and 61201242.

Author information


Corresponding author

Correspondence to Wei Zhao.

Appendix

First, we denote by \(\nabla \) the gradient operator and by \({\nabla _1}\) (respectively, \({\nabla _2}\)) the partial gradient operator with respect to the first (respectively, second) argument. More precisely, \(\nabla J(\mathbf{{w}})\) is the vector composed of all partial derivatives of \(J(\mathbf{{w}})\), whereas \({\nabla _1}I(\mathbf{{w}},\mathbf{{v}})\) (respectively, \({\nabla _2}I(\mathbf{{w}},\mathbf{{v}})\)) is the vector of partial derivatives of \(I(\mathbf{{w}},\mathbf{{v}})\) with respect to \(\mathbf{{w}}\) (respectively, \(\mathbf{{v}}\)).

We make the orthogonal change of coordinates \(\mathbf{{p}} = {\mathbf{{\mathrm{A}}}^H}\mathbf{{w}}\) and \(\mathbf{{q}} = {\mathbf{{\mathrm{A}}}^H}\mathbf{{v}}\), giving \(I(\mathbf{{p}},\mathbf{{q}}) = E\left\{ {G(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })} \right\} \). When \(\mathbf{{w}}\) coincides with one of the rows of \({\mathbf{{\mathrm{A}}}^{ - 1}}\), we have \(\mathbf{{p}} = {\left( {0, \ldots ,p,0, \ldots ,0} \right) ^T}\) and the corresponding \(\mathbf{{q}} = {\left( {0, \ldots ,q,0, \ldots ,0} \right) ^T}\), which is updated along with \(\mathbf{{w}}\). Here \(\mathbf{{p}}\) and \(\mathbf{{q}}\) are N-dimensional complex-valued column vectors, denoted by \({\left[ {{p_1},{p_2}, \ldots ,{p_N}} \right] ^T}\) and \({\left[ {{q_1},{q_2}, \ldots ,{q_N}} \right] ^T}\), respectively. In the following, we analyze the stability of such \(\mathbf{{p}}\).
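As a concrete reading of this contrast, a sample estimator can be written as follows. Taking the real part of the product before applying G, and the choice G = log cosh, are illustrative assumptions (the paper's G is a general nonquadratic function); the function name contrast is ours.

```python
import numpy as np

def contrast(p, q, s, G=lambda u: np.log(np.cosh(u))):
    """Sample estimate of I(p, q) = E{ G( (p^H s)(q^H s)^* ) }.

    s: (N, T) array of source (or whitened observation) samples.
    The product is projected onto its real part so that the assumed
    real-valued G = log cosh applies; both choices are illustrative.
    """
    u = np.real((p.conj() @ s) * (q.conj() @ s).conj())
    return G(u).mean()
```

With \(\mathbf{{p}} = \mathbf{{q}} = {\mathbf{{e}}_\mathbf{{1}}}\), this reduces to a sample estimate of \(E\left\{ G({\left| {{s_1}} \right| ^2})\right\} \), the quantity that appears in the evaluation at the separating point below.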

We now derive a Taylor expansion of \(I\) around the extrema. The partial gradient of \(I\) with respect to \(\mathbf{{p}}\) is

$$\begin{aligned} {\nabla _1}I(\mathbf{{p}},\mathbf{{q}}) = \left( \begin{array}{c} {\frac{{\partial I(\mathbf{{p}},\mathbf{{q}})}}{{\partial {p_{1r}}}}} \\ {\frac{{\partial I(\mathbf{{p}},\mathbf{{q}})}}{{\partial {p_{1i}}}}} \\ \vdots \\ {\frac{{\partial I(\mathbf{{p}},\mathbf{{q}})}}{{\partial {p_{Nr}}}}} \\ {\frac{{\partial I(\mathbf{{p}},\mathbf{{q}})}}{{\partial {p_{Ni}}}}} \\ \end{array} \right) = \left( \begin{array}{l} {E\left\{ ({\Delta _1}{s_{1r}} + {\Delta _2}{s_{1i}})g(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } \\ {E\left\{ ({\Delta _1}{s_{1i}} - {\Delta _2}{s_{1r}})g(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } \\ \vdots \\ {E\left\{ ({\Delta _1}{s_{Nr}} + {\Delta _2}{s_{Ni}})g(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } \\ {E\left\{ ({\Delta _1}{s_{Ni}} - {\Delta _2}{s_{Nr}})g(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } \\ \end{array} \right) \end{aligned}$$
(36)

where

$$\begin{aligned}&{\Delta _1} = {q_{1r}}{s_{1r}} + {q_{1i}}{s_{1i}} + \cdots + {q_{Nr}}{s_{Nr}} + {q_{Ni}}{s_{Ni}}\end{aligned}$$
(37)
$$\begin{aligned}&{\Delta _2} = {q_{1r}}{s_{1i}} - {q_{1i}}{s_{1r}} + \cdots + {q_{Nr}}{s_{Ni}} - {q_{Ni}}{s_{Nr}} \end{aligned}$$
(38)
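
A compact way to read (37) and (38), which the numerical check further below also uses: \({\Delta _1}\) and \({\Delta _2}\) are exactly the real and imaginary parts of \({\mathbf{{q}}^H}\mathbf{{s}}\),

$$\begin{aligned} {\Delta _1} + i{\Delta _2} = {\mathbf{{q}}^H}\mathbf{{s}}, \qquad \text {i.e.,} \quad {\Delta _1} = \mathrm{Re} ({\mathbf{{q}}^H}\mathbf{{s}}), \quad {\Delta _2} = \mathrm{Im} ({\mathbf{{q}}^H}\mathbf{{s}}) \end{aligned}$$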

The Hessian of \(I\) with respect to \(\mathbf{{p}}\) is

$$\begin{aligned} {\nabla _1}^2I(\mathbf{{p}},\mathbf{{q}}) = \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }l} {\frac{{\partial {h_{R1}}}}{{\partial {p_{1r}}}}} &{} {\frac{{\partial {h_{R1}}}}{{\partial {p_{1i}}}}} &{} \cdots &{} {\frac{{\partial {h_{R1}}}}{{\partial {p_{Nr}}}}} &{} {\frac{{\partial {h_{R1}}}}{{\partial {p_{Ni}}}}} \\ {\frac{{\partial {h_{I1}}}}{{\partial {p_{1r}}}}} &{} {\frac{{\partial {h_{I1}}}}{{\partial {p_{1i}}}}} &{} \cdots &{} {\frac{{\partial {h_{I1}}}}{{\partial {p_{Nr}}}}} &{} {\frac{{\partial {h_{I1}}}}{{\partial {p_{Ni}}}}} \\ \vdots &{} \vdots &{} \ddots &{} \vdots &{} \vdots \\ {\frac{{\partial {h_{RN}}}}{{\partial {p_{1r}}}}} &{} {\frac{{\partial {h_{RN}}}}{{\partial {p_{1i}}}}} &{} \cdots &{} {\frac{{\partial {h_{RN}}}}{{\partial {p_{Nr}}}}} &{} {\frac{{\partial {h_{RN}}}}{{\partial {p_{Ni}}}}} \\ {\frac{{\partial {h_{IN}}}}{{\partial {p_{1r}}}}} &{} {\frac{{\partial {h_{IN}}}}{{\partial {p_{1i}}}}} &{} \cdots &{} {\frac{{\partial {h_{IN}}}}{{\partial {p_{Nr}}}}} &{} {\frac{{\partial {h_{IN}}}}{{\partial {p_{Ni}}}}} \\ \end{array} \right) \end{aligned}$$
(39)

where we define

$$\begin{aligned} {h_{Rj}}&= E\left\{ ({\Delta _1}{s_{jr}} + {\Delta _2}{s_{ji}})g(({\mathbf{{p}}^H}\mathbf{{s}}){({\mathbf{{q}}^H}\mathbf{{s}})^ * })\right\} ,\quad j = 1,2, \ldots ,N\end{aligned}$$
(40)
$$\begin{aligned} {h_{Ij}}&= E\left\{ ({\Delta _1}{s_{ji}} - {\Delta _2}{s_{jr}})g(({\mathbf{{p}}^H}\mathbf{{s}}){({\mathbf{{q}}^H}\mathbf{{s}})^ * })\right\} ,\quad j = 1,2, \ldots ,N \end{aligned}$$
(41)

Then in (39), we have

$$\begin{aligned}&\frac{{\partial {h_{Rj}}}}{{\partial {p_{nr}}}} = E\left\{ ({\Delta _1}{s_{jr}} + {\Delta _2}{s_{ji}})({\Delta _1}{s_{nr}} + {\Delta _2}{s_{ni}})g'(({\mathbf{{p}}^H}\mathbf{{s}}){({\mathbf{{q}}^H}\mathbf{{s}})^ * })\right\} \end{aligned}$$
(42)
$$\begin{aligned}&\frac{{\partial {h_{Rj}}}}{{\partial {p_{ni}}}} = E\left\{ ({\Delta _1}{s_{jr}} + {\Delta _2}{s_{ji}})({\Delta _1}{s_{ni}} - {\Delta _2}{s_{nr}})g'(({\mathbf{{p}}^H}\mathbf{{s}}){({\mathbf{{q}}^H}\mathbf{{s}})^ * })\right\} \end{aligned}$$
(43)
$$\begin{aligned}&\frac{{\partial {h_{Ij}}}}{{\partial {p_{nr}}}} = E\left\{ ({\Delta _1}{s_{ji}} - {\Delta _2}{s_{jr}})({\Delta _1}{s_{nr}} + {\Delta _2}{s_{ni}})g'(({\mathbf{{p}}^H}\mathbf{{s}}){({\mathbf{{q}}^H}\mathbf{{s}})^ * })\right\} \end{aligned}$$
(44)
$$\begin{aligned}&\frac{{\partial {h_{Ij}}}}{{\partial {p_{ni}}}} = E\left\{ ({\Delta _1}{s_{ji}} - {\Delta _2}{s_{jr}})({\Delta _1}{s_{ni}} - {\Delta _2}{s_{nr}})g'(({\mathbf{{p}}^H}\mathbf{{s}}){({\mathbf{{q}}^H}\mathbf{{s}})^ * })\right\} \end{aligned}$$
(45)

where \(n = 1,2, \ldots ,N\) and \(j = 1,2, \ldots ,N\) in (42)–(45).
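Since \({\Delta _1}\) and \({\Delta _2}\) are the real and imaginary parts of \({\mathbf{{q}}^H}\mathbf{{s}}\) (see the note after (38)), the gradient (36) can be checked numerically against central finite differences of the sample contrast. The sketch below does this under the same illustrative assumptions as before (real-part convention, G = log cosh, hence g = tanh); the circular Gaussian test samples and the helper names I, grad1, grad1_fd are ours.

```python
# Finite-difference check of the gradient (36), under illustrative
# assumptions: u = Re((p^H s)(q^H s)^*) and G = log cosh, so g = tanh.
import numpy as np

rng = np.random.default_rng(1)
N, T = 3, 20000
# Circular complex Gaussian test samples with unit variance.
s = (rng.normal(size=(N, T)) + 1j * rng.normal(size=(N, T))) / np.sqrt(2)

g = np.tanh  # g = G'

def I(p, q):
    u = np.real((p.conj() @ s) * (q.conj() @ s).conj())
    return np.log(np.cosh(u)).mean()

def grad1(p, q):
    """Entries of (36), ordered (d/dp_1r, d/dp_1i, ..., d/dp_Nr, d/dp_Ni)."""
    u = np.real((p.conj() @ s) * (q.conj() @ s).conj())
    D = q.conj() @ s  # q^H s, so Delta_1 = D.real and Delta_2 = D.imag
    out = np.empty(2 * N)
    for j in range(N):
        out[2 * j] = ((D.real * s[j].real + D.imag * s[j].imag) * g(u)).mean()
        out[2 * j + 1] = ((D.real * s[j].imag - D.imag * s[j].real) * g(u)).mean()
    return out

def grad1_fd(p, q, h=1e-6):
    """Central finite differences w.r.t. the real/imaginary parts of p."""
    out = np.empty(2 * N)
    for k in range(2 * N):
        e = np.zeros(N, dtype=complex)
        e[k // 2] = h if k % 2 == 0 else 1j * h
        out[k] = (I(p + e, q) - I(p - e, q)) / (2 * h)
    return out

p = rng.normal(size=N) + 1j * rng.normal(size=N); p /= np.linalg.norm(p)
q = rng.normal(size=N) + 1j * rng.normal(size=N); q /= np.linalg.norm(q)
print(np.max(np.abs(grad1(p, q) - grad1_fd(p, q))))  # agreement up to FD error
```

Because the same samples are used for the analytic and numerical gradients, the sampling error cancels and the two should agree to roughly the finite-difference accuracy.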

Without loss of generality, we analyze the stability of the points \(\mathbf{{p}} = p{\mathbf{{e}}_\mathbf{{1}}} = {(p,0, \ldots ,0)^T}\) and the corresponding \(\mathbf{{q}} = q{\mathbf{{e}}_\mathbf{{1}}} = {(q,0, \ldots ,0)^T}\), which correspond to \(\mathbf{{w}} = p{\mathbf{{a}}_\mathbf{{1}}}\). Due to the normalization of \(\mathbf{{w}}\) and \(\mathbf{{v}}\) after each one-dimensional optimization, we have \({\left| {{\mathbf{{p}}^H}\mathbf{{s}}} \right| ^2} = {\left| {{\mathbf{{q}}^H}\mathbf{{s}}} \right| ^2} = {\left| {{s_1}} \right| ^2}\) and \({\left| \mathbf{{p}} \right| ^2} = {\left| \mathbf{{q}} \right| ^2} = 1\). Evaluating the gradient in (36) at the points \(\mathbf{{p}} = p{\mathbf{{e}}_\mathbf{{1}}}\) and \(\mathbf{{q}} = q{\mathbf{{e}}_\mathbf{{1}}}\), under assumptions A1 and A2, we have

$$\begin{aligned} {\nabla _1}I(p{\mathbf{{e}}_\mathbf{{1}}},q{\mathbf{{e}}_\mathbf{{1}}}) = \left( {\begin{array}{c} {{p_r}E\left\{ {{\left| {{s_1}} \right| }^2}g(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } \\ {{p_i}E\left\{ {{\left| {{s_1}} \right| }^2}g(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } \\ 0 \\ \vdots \\ 0 \\ \end{array}} \right) = \left( \begin{array}{c} {{p_r}E\left\{ {{\left| {{s_1}} \right| }^2}g({{\left| {{s_1}} \right| }^2})\right\} } \\ {{p_i}E\left\{ {{\left| {{s_1}} \right| }^2}g({{\left| {{s_1}} \right| }^2})\right\} } \\ 0 \\ \vdots \\ 0 \\ \end{array} \right) \nonumber \\ \end{aligned}$$
(46)

Similarly, considering (39), we have

$$\begin{aligned}&{\nabla _1}^2I(p{\mathbf{{e}}_\mathbf{{1}}},q{\mathbf{{e}}_\mathbf{{1}}}) \nonumber \\&= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }l} {{p_r}^2E\left\{ {{\left| {{s_1}} \right| }^4}g'(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } &{} {{p_r}{p_i}E\left\{ {{\left| {{s_1}} \right| }^4}g'(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } &{} 0 &{} \cdots &{} 0 \\ {{p_r}{p_i}E\left\{ {{\left| {{s_1}} \right| }^4}g'(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } &{} {{p_i}^2E\left\{ {{\left| {{s_1}} \right| }^4}g'(({\mathbf{{p}}^H}\mathbf{{s}}){{({\mathbf{{q}}^H}\mathbf{{s}})}^ * })\right\} } &{} 0 &{} \cdots &{} 0 \\ 0 &{} 0 &{} \alpha &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} \alpha \\ \end{array} \right) \nonumber \\&= \left( \begin{array}{c@{\quad }c@{\quad }c@{\quad }c@{\quad }l} {{p_r}^2E\left\{ {{\left| {{s_1}} \right| }^4}g'({{\left| {{s_1}} \right| }^2})\right\} } &{} {{p_r}{p_i}E\left\{ {{\left| {{s_1}} \right| }^4}g'({{\left| {{s_1}} \right| }^2})\right\} } &{} 0 &{} \cdots &{} 0 \\ {{p_r}{p_i}E\left\{ {{\left| {{s_1}} \right| }^4}g'({{\left| {{s_1}} \right| }^2})\right\} } &{} {{p_i}^2E\left\{ {{\left| {{s_1}} \right| }^4}g'({{\left| {{s_1}} \right| }^2})\right\} } &{} 0 &{} \cdots &{} 0 \\ 0 &{} 0 &{} \alpha &{} \cdots &{} 0 \\ \vdots &{} \vdots &{} \vdots &{} \ddots &{} \vdots \\ 0 &{} 0 &{} 0 &{} \cdots &{} \alpha \\ \end{array} \right) \end{aligned}$$
(47)

where \(\alpha = E\left\{ {\left| {{s_1}} \right| ^2}g'(({\mathbf{{p}}^H}\mathbf{{s}}){({\mathbf{{q}}^H}\mathbf{{s}})^ * })\right\} = E\left\{ {\left| {{s_1}} \right| ^2}g'({\left| {{s_1}} \right| ^2})\right\} \).

Now we make a small perturbation \(\mathbf{{\varepsilon }} = {({\varepsilon _{1r}},{\varepsilon _{1i}}, \ldots ,{\varepsilon _{Nr}},{\varepsilon _{Ni}})^T}\) and evaluate the Taylor expansion of \(I\):

$$\begin{aligned}&I(p{\mathbf{{e}}_\mathbf{{1}}} + \mathbf{{\varepsilon }},q{\mathbf{{e}}_\mathbf{{1}}}) \nonumber \\&= I(p{\mathbf{{e}}_\mathbf{{1}}},q{\mathbf{{e}}_\mathbf{{1}}}) + {\mathbf{{\varepsilon }}^T}{\nabla _1}I(p{\mathbf{{e}}_\mathbf{{1}}},q{\mathbf{{e}}_\mathbf{{1}}}) + \frac{1}{2}{\mathbf{{\varepsilon }}^T}{\nabla _1}^2I(p{\mathbf{{e}}_\mathbf{{1}}},q{\mathbf{{e}}_\mathbf{{1}}})\mathbf{{\varepsilon }} + o({\left\| \mathbf{{\varepsilon }} \right\| ^2}) \nonumber \\&= I(p{\mathbf{{e}}_\mathbf{{1}}},q{\mathbf{{e}}_\mathbf{{1}}}) + ({\varepsilon _{1r}}{p_r} + {\varepsilon _{1i}}{p_i})E\left\{ {\left| {{s_1}} \right| ^2}g({\left| {{s_1}} \right| ^2})\right\} + \frac{1}{2}\Bigg ({p_r}^2{\varepsilon _{1r}}^2E\left\{ {\left| {{s_1}} \right| ^4}g'({\left| {{s_1}} \right| ^2})\right\} \nonumber \\&\quad +\, 2{p_r}{p_i}{\varepsilon _{1r}}{\varepsilon _{1i}}E\left\{ {\left| {{s_1}} \right| ^4}g'({\left| {{s_1}} \right| ^2})\right\} + {p_i}^2{\varepsilon _{1i}}^2E\left\{ {\left| {{s_1}} \right| ^4}g'({\left| {{s_1}} \right| ^2})\right\} \Bigg ) \nonumber \\&\quad +\, \frac{1}{2}E\left\{ {\left| {{s_1}} \right| ^2}g'({\left| {{s_1}} \right| ^2})\right\} \sum \limits _{j = 2}^N {({\varepsilon _{jr}}^2 + {\varepsilon _{ji}}^2)} + o({\left\| \mathbf{{\varepsilon }} \right\| ^2}) \end{aligned}$$
(48)

Due to the constraint \(\left\| \mathbf{{w}} \right\| = 1\), we have \(\left\| {p{\mathbf{{e}}_\mathbf{{1}}} + \mathbf{{\varepsilon }}} \right\| = 1\), which gives

$$\begin{aligned}&{({\varepsilon _{1r}} + {p_r})^2} + {({\varepsilon _{1i}} + {p_i})^2} + \sum \limits _{j = 2}^N {\left( {\varepsilon _{jr}}^2 + {\varepsilon _{ji}}^2\right) } = 1 \nonumber \\&2({\varepsilon _{1r}}{p_r} + {\varepsilon _{1i}}{p_i}) = - \sum \limits _{j = 1}^N {\left( {\varepsilon _{jr}}^2 + {\varepsilon _{ji}}^2\right) } \end{aligned}$$
(49)

Using this, we have

$$\begin{aligned}&I(p{\mathbf{{e}}_\mathbf{{1}}} + \mathbf{{\varepsilon }},q{\mathbf{{e}}_\mathbf{{1}}}) \nonumber \\&\quad = I(p{\mathbf{{e}}_\mathbf{{1}}},q{\mathbf{{e}}_\mathbf{{1}}}) - \frac{1}{2}E\left\{ {\left| {{s_1}} \right| ^2}g({\left| {{s_1}} \right| ^2})\right\} \sum \limits _{j = 1}^N {({\varepsilon _{jr}}^2 + {\varepsilon _{ji}}^2)} + \frac{1}{2}{\Bigg ( - \frac{1}{2}\sum \limits _{j = 1}^N {({\varepsilon _{jr}}^2 + {\varepsilon _{ji}}^2)} \Bigg )^2} \nonumber \\&\qquad \times E\left\{ {\left| {{s_1}} \right| ^4}g'({\left| {{s_1}} \right| ^2})\right\} + \frac{1}{2}E\left\{ {\left| {{s_1}} \right| ^2}g'({\left| {{s_1}} \right| ^2})\right\} \sum \limits _{j = 2}^N {({\varepsilon _{jr}}^2 + {\varepsilon _{ji}}^2)} + o({\left\| \mathbf{{\varepsilon }} \right\| ^2}) \nonumber \\&\quad = I(p{\mathbf{{e}}_\mathbf{{1}}},q{\mathbf{{e}}_\mathbf{{1}}}) - \frac{1}{2}E\left\{ {\left| {{s_1}} \right| ^2}g({\left| {{s_1}} \right| ^2})\right\} \sum \limits _{j = 1}^N {({\varepsilon _{jr}}^2 + {\varepsilon _{ji}}^2)}\nonumber \\&\qquad +\, \frac{1}{2}E\left\{ {\left| {{s_1}} \right| ^2}g'({\left| {{s_1}} \right| ^2})\right\} \sum \limits _{j = 2}^N {({\varepsilon _{jr}}^2 + {\varepsilon _{ji}}^2)} + o({\left\| \mathbf{{\varepsilon }} \right\| ^2}) \end{aligned}$$
(50)
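
Grouping the quadratic terms in (50) (our regrouping; the formal statement is Theorem 1 in the main text) makes the local behavior at the separating point explicit:

$$\begin{aligned} I(p{\mathbf{{e}}_\mathbf{{1}}} + \mathbf{{\varepsilon }},q{\mathbf{{e}}_\mathbf{{1}}}) - I(p{\mathbf{{e}}_\mathbf{{1}}},q{\mathbf{{e}}_\mathbf{{1}}}) =&- \frac{1}{2}E\left\{ {\left| {{s_1}} \right| ^2}g({\left| {{s_1}} \right| ^2})\right\} ({\varepsilon _{1r}}^2 + {\varepsilon _{1i}}^2) \nonumber \\&+ \frac{1}{2}\left( E\left\{ {\left| {{s_1}} \right| ^2}g'({\left| {{s_1}} \right| ^2})\right\} - E\left\{ {\left| {{s_1}} \right| ^2}g({\left| {{s_1}} \right| ^2})\right\} \right) \sum \limits _{j = 2}^N {({\varepsilon _{jr}}^2 + {\varepsilon _{ji}}^2)} + o({\left\| \mathbf{{\varepsilon }} \right\| ^2}) \end{aligned}$$

Thus \(p{\mathbf{{e}}_\mathbf{{1}}}\) is a local maximum of \(I\) when \(E\left\{ {\left| {{s_1}} \right| ^2}g({\left| {{s_1}} \right| ^2})\right\} > 0\) and \(E\left\{ {\left| {{s_1}} \right| ^2}g'({\left| {{s_1}} \right| ^2})\right\} < E\left\{ {\left| {{s_1}} \right| ^2}g({\left| {{s_1}} \right| ^2})\right\} \), and a local minimum when both inequalities are reversed.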

This completes the proof of Theorem 1.


Cite this article

Zhao, W., Shen, Y., Yuan, Z. et al. A Novel Method for Complex-Valued Signals in Independent Component Analysis Framework. Circuits Syst Signal Process 34, 1893–1913 (2015). https://doi.org/10.1007/s00034-014-9929-8
