
Performance Analysis of the nc-FastICA Algorithm

Published in Circuits, Systems, and Signal Processing

Abstract

The noncircular complex fast independent component analysis (nc-FastICA) algorithm is one of the most popular algorithms for solving ICA problems with circular and noncircular complex-valued data. However, its performance has not been comprehensively studied. This paper addresses that gap. Based on a global analysis of the fixed-point iteration and the cost function, we show that undesirable fixed points exist for the nc-FastICA algorithm. To avoid these undesirable fixed points, a simple check based on the kurtosis of the estimated sources is proposed. Furthermore, the theoretical analysis shows that the nc-FastICA algorithm can work well even when the sources fall into the area of instability. Computer simulations validate the theoretical findings of the paper.
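A sketch of such a kurtosis check, written from sample statistics, is given below. It is a minimal illustration rather than the paper's exact procedure: the complex kurtosis definition \(\mathrm{kurt}(y)=E(|y|^{4})-2E^{2}(|y|^{2})-|E(y^{2})|^{2}\) is the standard one, but the helper names and the threshold eps are assumptions made here for concreteness.

```python
import numpy as np

def complex_kurtosis(y):
    """Sample estimate of kurt(y) = E(|y|^4) - 2E^2(|y|^2) - |E(y^2)|^2
    for a zero-mean complex-valued signal y (standard complex kurtosis)."""
    y = np.asarray(y, dtype=complex)
    return (np.mean(np.abs(y) ** 4)
            - 2.0 * np.mean(np.abs(y) ** 2) ** 2
            - np.abs(np.mean(y ** 2)) ** 2)

def flag_undesirable(estimates, eps=0.05):
    """Flag estimated sources whose kurtosis magnitude is near zero.

    At an undesirable fixed point the extracted signal remains a mixture
    of sources, which is closer to Gaussian and hence has kurtosis near 0,
    whereas a correctly extracted non-Gaussian source has clearly nonzero
    kurtosis. The threshold eps is illustrative only, not a value taken
    from the paper.
    """
    return [abs(complex_kurtosis(y)) < eps for y in estimates]
```

A component flagged this way would then be re-estimated from a different initialization rather than accepted as a separated source.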


References

  1. S. Amari, A. Cichocki, H.H. Yang, A new learning algorithm for blind signal separation, in Advances in Neural Information Processing Systems, vol. 8 (MIT Press, Cambridge, 1996), pp. 757–763

  2. E. Bingham, A. Hyvarinen, A fast fixed-point algorithm for independent component analysis of complex valued signals. Int. J. Neural Syst. 10(1), 1–8 (2000)

  3. O. Diene, A. Bhaya, Conjugate gradient and steepest descent constant modulus algorithms applied to a blind adaptive array. Signal Process. 90(10), 2835–2841 (2010)

  4. A. Dermoune, T. Wei, FastICA algorithm: five criteria for the optimal choice of the nonlinearity function. IEEE Trans. Signal Process. 61(8), 2078–2087 (2013)

  5. A. Hyvarinen, One-unit contrast functions for independent component analysis: a statistical analysis, in Proceedings of the IEEE Workshop on Neural Networks for Signal Processing (Amelia Island, FL, 1997), pp. 388–397

  6. A. Hyvarinen, Fast and robust fixed-point algorithms for independent component analysis. IEEE Trans. Neural Netw. 10(3), 626–634 (1999)

  7. A. Hyvarinen, E. Oja, A fast fixed-point algorithm for independent component analysis. Neural Comput. 9, 1483–1492 (1997)

  8. A. Hyvarinen, J. Karhunen, E. Oja, Independent Component Analysis (Wiley, New York, 2001)

  9. A. Kawamura, E. Nada, Y. Iiguni, Single channel blind source separation of deterministic sinusoidal signals with independent component analysis. IEICE Commun. Express 2(3), 104–110 (2013)

  10. K.J. Kim, S. Zhang, S.W. Nam, Improved FastICA algorithm using a sixth-order Newton’s method. IEICE Electron. Express 6(13), 904–909 (2009)

  11. Z. Koldovsky, P. Tichavsky, E. Oja, Efficient variant of algorithm FastICA for independent component analysis attaining the Cramér–Rao lower bound. IEEE Trans. Neural Netw. 17(5), 1265–1277 (2006)

  12. W.B. Mikhael, R. Ranganathan, T. Yang, Complex adaptive ICA employing the conjugate gradient technique for signal separation in time-varying flat fading channels. Circuits Syst. Signal Process. 29, 469–480 (2010)

  13. M. Novey, T. Adali, On extending the complex FastICA algorithm to noncircular sources. IEEE Trans. Signal Process. 56(5), 2148–2154 (2008)

  14. E. Oja, Z. Yuan, The FastICA algorithm revisited: convergence analysis. IEEE Trans. Neural Netw. 17(6), 1370–1381 (2006)

  15. D. Smith, J. Lukasiak, I. Burnett, An analysis of the limitations of blind signal separation application with speech. Signal Process. 86, 353–359 (2006)

  16. J. Stewart, Calculus (Brooks Cole, Pacific Grove, 2007)

  17. P. Tichavsky, Z. Koldovsky, E. Oja, Performance analysis of the FastICA algorithm and Cramér–Rao bounds for linear independent component analysis. IEEE Trans. Signal Process. 54(4), 1189–1203 (2006)


Acknowledgments

This work was supported in part by the Science and Technology on Electronic Information Control Laboratory and in part by the Science and Technology on Communication Information Security Control Laboratory under Grant 9140C130304120C13064.

Author information

Correspondence to Guobing Qian.

Appendix: Analysis of Stationary Points

The detailed analysis of stationary points is presented as follows:

(1) If \(E({{s}^{2}})=0\), then the stationary points are \((\theta =0,\forall \alpha )\), \((\theta =\pi /2,\forall \alpha )\), and \((\theta =\pi /4,\forall \alpha )\):

(i) Since \(T(\theta =0,\forall \alpha )=0\), the second derivatives test gives no information. We therefore apply a small perturbation \(({{\varepsilon }_{\theta }},{{\varepsilon }_{\alpha }})\) around the point \((\theta =0,\forall \alpha )\) to obtain

      $$\begin{aligned} J({\varepsilon _\theta },\alpha + {\varepsilon _\alpha }) - J(0,\alpha ) = \frac{1}{4}{\sin ^2}(2{\varepsilon _\theta }) [ - E({\left| s \right| ^4}) + 2{E^2}({\left| s \right| ^2})] \end{aligned}$$
      (35)

If \(-E({{\left| s \right| }^{4}})+2{{E}^{2}}({{\left| s \right| }^{2}})>0\), then \(J({{\varepsilon }_{\theta }},\alpha +{{\varepsilon }_{\alpha }})-J(0,\alpha )\ge 0\); in this case, the point \((\theta =0,\forall \alpha )\) is a local minimum of the cost function. If \(-E({{\left| s \right| }^{4}})+2{{E}^{2}}({{\left| s \right| }^{2}})<0\), then \(J({{\varepsilon }_{\theta }},\alpha +{{\varepsilon }_{\alpha }})-J(0,\alpha )\le 0\); in this case, the point \((\theta =0,\forall \alpha )\) is a local maximum of the cost function. If \(-E({{\left| s \right| }^{4}})+2{{E}^{2}}({{\left| s \right| }^{2}})=0\), then \(J(\theta ,\alpha )\) is a constant function, and the point \((\theta =0,\forall \alpha )\) is an ordinary point of the cost function.

(ii) The detailed analysis of the stationary point \((\theta =\pi /2,\forall \alpha )\) is the same as that of \((\theta =0,\forall \alpha )\) and is omitted here.

(iii) Similarly, since \(T(\theta =\pi /4,\forall \alpha )=0\), the second derivatives test gives no information. We therefore apply a small perturbation \(({{\varepsilon }_{\theta }},{{\varepsilon }_{\alpha }})\) around the point \((\theta =\pi /4,\forall \alpha )\) to obtain

      $$\begin{aligned} \begin{array}{l} J\Big (\frac{\pi }{4} + {\varepsilon _\theta },\alpha + {\varepsilon _\alpha }\Big ) - J\Big (\frac{\pi }{4},\alpha \Big )= \frac{1}{4}\Big [ - E\Big ({\left| s \right| ^4}\Big ) + 2{E^2}\Big ({\left| s \right| ^2}\Big )\Big ]\\ \quad \Big [{\sin ^2}\Big (\frac{\pi }{2} + 2{\varepsilon _\theta }\Big ) - 1\Big ] \end{array} \end{aligned}$$
      (36)

If \( - E({\left| s \right| ^4}) + 2{E^2}({\left| s \right| ^2}) > 0\), then \(J(\frac{\pi }{4} + {\varepsilon _\theta },\alpha + {\varepsilon _\alpha }) - J(\frac{\pi }{4},\alpha ) \le 0\), since \({\sin ^2}(\frac{\pi }{2} + 2{\varepsilon _\theta }) - 1 \le 0\); in this case, the point \((\theta = \pi /4,\forall \alpha )\) is a local maximum of the cost function. If \( - E({\left| s \right| ^4}) + 2{E^2}({\left| s \right| ^2}) < 0\), then \(J(\frac{\pi }{4} + {\varepsilon _\theta },\alpha + {\varepsilon _\alpha }) - J(\frac{\pi }{4},\alpha ) \ge 0\); in this case, the point \((\theta = \pi /4,\forall \alpha )\) is a local minimum of the cost function. If \( - E({\left| s \right| ^4}) + 2{E^2}({\left| s \right| ^2}) = 0\), then \(J(\theta ,\alpha )\) is a constant function, and the point \((\theta = \pi /4,\forall \alpha )\) is an ordinary point of the cost function. (A numerical sketch of this classification is given after this appendix.)

(2) If \(E({{s}^{2}})\ne 0\), then the stationary points are \((\theta =0,\forall \alpha )\), \((\theta =\pi /2,\forall \alpha )\), \((\theta =\pi /4,\alpha =0)\), \((\theta =\pi /4,\alpha =\pi /2)\), and \((\theta =\pi /4,\alpha =\pi )\):

(i) Since \(T(\theta = 0,\forall \alpha ) = 0\), the second derivatives test gives no information. We therefore apply a small perturbation \(({\varepsilon _\theta },{\varepsilon _\alpha })\) around the point \((\theta = 0,\forall \alpha )\) to obtain

      $$\begin{aligned} \begin{array}{l} J({\varepsilon _\theta },\alpha + {\varepsilon _\alpha }) - J(0,\alpha ) = \frac{1}{4}{\sin ^2}(2{\varepsilon _\theta }) \left[ {\cos (2\alpha + 2{\varepsilon _\alpha })E({s^2})E({s^{*2}})} \right. \\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \left. { - E({{\left| s \right| }^4}) + 2{E^2}({{\left| s \right| }^2})} \right] \end{array} \end{aligned}$$
      (37)

If \(-E({{\left| s \right| }^{4}})+2{{E}^{2}}({{\left| s \right| }^{2}})\pm E({{s}^{2}})E({{s}^{*2}})>0\), then \(J({{\varepsilon }_{\theta }},\alpha +{{\varepsilon }_{\alpha }})-J(0,\alpha )\ge 0\); in this case, the point \((\theta =0,\forall \alpha )\) is a local minimum of the cost function. If \(-E({{\left| s \right| }^{4}})+2{{E}^{2}}({{\left| s \right| }^{2}})\pm E({{s}^{2}})E({{s}^{*2}})<0\), then \(J({{\varepsilon }_{\theta }},\alpha +{{\varepsilon }_{\alpha }})-J(0,\alpha )\le 0\); in this case, the point \((\theta =0,\forall \alpha )\) is a local maximum of the cost function.

If \(\left| 2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}}) \right| \le \left| E({{s}^{2}})E({{s}^{*2}}) \right| \), the sources fall into the area of instability defined in [13]. In this case, whether the point \((\theta =0,\forall \alpha )\) is an extremum of the cost function depends on \(\alpha \). If \(\cos (2{{\alpha }_{0}})>\frac{2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})}{E({{s}^{2}})E({{s}^{*2}})}\), there exists a deleted disk neighborhood \({{D}_{0}}\) centered at \((\theta =0,\alpha ={{\alpha }_{0}})\) such that \(J({{\varepsilon }_{\theta }},{{\alpha }_{0}}+{{\varepsilon }_{\alpha }})-J(0,{{\alpha }_{0}})>0\) for all \(({{\varepsilon }_{\theta }},{{\varepsilon }_{\alpha }})\in {{D}_{0}}\); thus, the point \((\theta =0,\alpha ={{\alpha }_{0}})\) is a local minimum of the cost function. If \(\cos (2{{\alpha }_{0}})<\frac{2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})}{E({{s}^{2}})E({{s}^{*2}})}\), there exists a deleted disk neighborhood \({{D}_{0}}\) centered at \((\theta =0,\alpha ={{\alpha }_{0}})\) such that \(J({{\varepsilon }_{\theta }},{{\alpha }_{0}}+{{\varepsilon }_{\alpha }})-J(0,{{\alpha }_{0}})<0\) for all \(({{\varepsilon }_{\theta }},{{\varepsilon }_{\alpha }})\in {{D}_{0}}\); thus, the point \((\theta =0,\alpha ={{\alpha }_{0}})\) is a local maximum of the cost function. If \(\cos (2{{\alpha }_{0}})=\frac{2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})}{E({{s}^{2}})E({{s}^{*2}})}\), every deleted disk neighborhood centered at \((\theta =0,\alpha ={{\alpha }_{0}})\) contains points where \(J({{\varepsilon }_{\theta }},{{\alpha }_{0}}+{{\varepsilon }_{\alpha }})-J(0,{{\alpha }_{0}})\) takes positive values as well as points where it takes negative values; thus, the point \((\theta =0,{{\alpha }_{0}})\) is a saddle point of the cost function.

Based on the above analysis, the point \((\theta =0,{{\alpha }_{0}}\ne \frac{1}{2}\arccos ( \frac{2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})}{E({{s}^{2}})E({{s}^{*2}})} ))\) is a local extremum of the cost function even when the sources fall into the area of instability. (A numerical version of this check appears in the second sketch following this appendix.)

(ii) The detailed analysis of the stationary point \((\theta =\pi /2,\forall \alpha )\) is the same as that of \((\theta =0,\forall \alpha )\) and is omitted here.

(iii) At the point \((\theta =\pi /4,\alpha =0)\),

$$\begin{aligned} T(\theta =\pi /4,\alpha =0)=2E({{s}^{2}})E({{s}^{*2}})\left[ 2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})+E({{s}^{2}})E({{s}^{*2}}) \right] . \end{aligned}$$

If \(2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})+E({{s}^{2}})E({{s}^{*2}})>0\), then \(\frac{{{\partial }^{2}}J(\theta ,\alpha )}{\partial {{\theta }^{2}}}{{|}_{\theta =\pi /4,\alpha =0}}<0\) and \(T(\theta =\pi /4,\alpha =0)>0\); in this case, the point \((\theta =\pi /4,\alpha =0)\) is a local maximum of the cost function. If \(2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})+E({{s}^{2}})E({{s}^{*2}})<0\), then \(T(\theta =\pi /4,\alpha =0)<0\); in this case, the point \((\theta =\pi /4,\alpha =0)\) is a saddle point of the cost function. If \(2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})+E({{s}^{2}})E({{s}^{*2}})=0\), then

$$\begin{aligned} J\Big (\frac{\pi }{4} + {\varepsilon _\theta },{\varepsilon _\alpha }\Big ) - J\Big (\frac{\pi }{4},0\Big ) = \frac{1}{4}\,{\sin ^2}\Big (\frac{\pi }{2} + 2{\varepsilon _\theta }\Big )\left[ \cos (2{\varepsilon _\alpha }) - 1 \right] E({s^2})E({s^{*2}}) \end{aligned}$$
(38)

In this case, the point \((\theta =\pi /4,\alpha =0)\) is a local maximum of the cost function.

(iv) At the point \((\theta =\pi /4,\alpha =\pi /2)\),

$$\begin{aligned} T(\theta =\pi /4,\alpha =\pi /2)=-2E({{s}^{2}})E({{s}^{*2}})\left[ 2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})-E({{s}^{2}})E({{s}^{*2}}) \right] . \end{aligned}$$

Similarly, it is easily verified that if \(2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})-E({{s}^{2}})E({{s}^{*2}})>0\), the point \((\theta =\pi /4,\alpha =\pi /2)\) is a saddle point of the cost function; if \(2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})-E({{s}^{2}})E({{s}^{*2}})<0\), it is a local minimum; and if \(2{{E}^{2}}({{\left| s \right| }^{2}})-E({{\left| s \right| }^{4}})-E({{s}^{2}})E({{s}^{*2}})=0\), it is likewise a local minimum.

(v) The detailed analysis of the stationary point \((\theta =\pi /4,\alpha =\pi )\) is the same as that of \((\theta =\pi /4,\alpha =0)\) and is omitted here.
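To make case (1) concrete, the first sketch below estimates the decisive quantity \(2E^{2}(|s|^{2})-E(|s|^{4})\) from samples of a circular source and reports the resulting classification of the stationary points, per Eqs. (35) and (36). It is a numerical illustration under the assumption that sample moments stand in for the exact expectations; the function name and the 8-PSK example are choices made here, not taken from the paper.

```python
import numpy as np

def classify_circular_case(s):
    """Classify the stationary points for a circular source (E[s^2] = 0).

    gamma = 2*E^2(|s|^2) - E(|s|^4) drives the classification in
    Eqs. (35)-(36):
      gamma > 0: theta = 0 and theta = pi/2 are local minima,
                 theta = pi/4 is a local maximum;
      gamma < 0: the roles are reversed;
      gamma = 0: the cost is constant (ordinary points).
    """
    m2 = np.mean(np.abs(s) ** 2)
    m4 = np.mean(np.abs(s) ** 4)
    gamma = 2.0 * m2 ** 2 - m4
    if gamma > 0:
        return {"theta=0, pi/2": "local minima", "theta=pi/4": "local maximum"}
    if gamma < 0:
        return {"theta=0, pi/2": "local maxima", "theta=pi/4": "local minimum"}
    return {"all stationary points": "ordinary (cost is constant)"}

# Example: an 8-PSK source, which is circular (E[s^2] = 0) with |s| = 1,
# so gamma = 2 - 1 = 1 > 0 and the separating points are local minima.
rng = np.random.default_rng(0)
s = np.exp(1j * np.pi / 4 * rng.integers(0, 8, size=100_000))
print(classify_circular_case(s))
```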
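Similarly for case (2), the second sketch checks whether a noncircular source falls into the area of instability, computes the critical angle \(\alpha_{0}=\frac{1}{2}\arccos \big( \frac{2E^{2}(|s|^{2})-E(|s|^{4})}{E(s^{2})E(s^{*2})} \big)\), and applies the sign tests on \(T\) at \((\theta =\pi /4,\alpha =0)\) and \((\theta =\pi /4,\alpha =\pi /2)\). Again, sample moments replace the exact expectations, and the function name and BPSK example are illustrative assumptions.

```python
import numpy as np

def analyze_noncircular_case(s):
    """Numerical version of the appendix analysis for E[s^2] != 0."""
    s = np.asarray(s, dtype=complex)
    m2 = np.mean(np.abs(s) ** 2)
    m4 = np.mean(np.abs(s) ** 4)
    gamma = 2.0 * m2 ** 2 - m4            # 2E^2(|s|^2) - E(|s|^4)
    c = np.abs(np.mean(s ** 2)) ** 2      # E(s^2)E(s*^2), real and >= 0

    report = {}
    # Area of instability: |2E^2(|s|^2) - E(|s|^4)| <= |E(s^2)E(s*^2)|
    report["instability area"] = bool(abs(gamma) <= c)
    if c > 0 and abs(gamma) <= c:
        # Critical angle at which (theta = 0, alpha0) switches between
        # extremum and saddle point (clip guards against round-off).
        report["alpha0"] = 0.5 * np.arccos(np.clip(gamma / c, -1.0, 1.0))

    # Sign tests on T at theta = pi/4 (cases (iii) and (iv)):
    # T(pi/4, 0) = 2c(gamma + c);  T(pi/4, pi/2) = -2c(gamma - c)
    report["(pi/4, alpha=0)"] = ("local maximum" if gamma + c >= 0
                                 else "saddle point")
    report["(pi/4, alpha=pi/2)"] = ("saddle point" if gamma - c > 0
                                    else "local minimum")
    return report

# Example: BPSK, maximally noncircular (E[s^2] = 1), so gamma = c = 1 and
# the source lies on the boundary of the instability area.
rng = np.random.default_rng(1)
s = rng.choice([-1.0, 1.0], size=100_000)
print(analyze_noncircular_case(s))
```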

About this article

Cite this article

Qian, G., Li, L. & Liao, H. Performance Analysis of the nc-FastICA Algorithm. Circuits Syst Signal Process 34, 441–457 (2015). https://doi.org/10.1007/s00034-014-9861-y
