
On extending the Noisy Independent Component Analysis to Impulsive Components

Abstract

Noise is an important factor in the fast fixed-point algorithm for independent component analysis (ICA) and has a significant influence on its separation performance. Unfortunately, the traditional noisy ICA algorithm does not address the influence of impulsive components: when the sources are mixed with impulsive noise, the Gaussian noisy algorithm fails to separate them. In general, measurements that deviate significantly from the normal pattern of the sensed data are considered impulses. In this paper, we introduce a non-linear function based on the S-estimator to identify the impulsive components in the observed data, which guarantees that impulse noise can be detected in the observed signal. Furthermore, we propose a threshold for the impulsive components, together with methods to remove the impulse noise and reconstruct the signal. The proposed technique improves the separation performance of the traditional Gaussian noisy ICA algorithm and makes the fast fixed-point ICA algorithm more reliable in noisy situations. Simulation results show the effectiveness of the proposed method.
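
To make the detection-and-reconstruction step concrete, the following is a minimal illustrative sketch, not the paper's exact algorithm: it flags samples that deviate strongly from a robust location/scale estimate, replaces them by interpolation, and would hand the cleaned observations to a noisy FastICA stage. The median absolute deviation stands in here for the S-estimator-based scale of the paper, and the threshold factor k is a hypothetical choice.

```python
import numpy as np

def detect_impulses(x, k=5.0):
    """Flag samples deviating from the median by more than k robust scales.

    Illustrative sketch: the median absolute deviation (MAD) stands in for the
    S-estimator-based scale of the paper; k is a hypothetical threshold factor.
    """
    med = np.median(x)
    mad = np.median(np.abs(x - med)) / 0.6745  # consistent with sigma for Gaussian data
    return np.abs(x - med) > k * mad

def remove_impulses(x, mask):
    """Replace flagged samples by linear interpolation from the clean samples."""
    x_clean = x.copy()
    good = ~mask
    x_clean[mask] = np.interp(np.flatnonzero(mask), np.flatnonzero(good), x[good])
    return x_clean

# Example: a sinusoid corrupted by sparse, large impulses
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 5 * t) + 0.05 * rng.standard_normal(t.size)
idx = rng.choice(t.size, 10, replace=False)
x[idx] += 15 * rng.choice([-1.0, 1.0], size=10)   # impulsive outliers

mask = detect_impulses(x)
x_rec = remove_impulses(x, mask)   # reconstructed signal, to be fed to noisy FastICA
print(mask.sum(), "samples flagged as impulsive")
```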



Author information

Corresponding author

Correspondence to Pingxing Feng.

Appendix

1.1 Proof of Lemma 1

We first consider the case where l is odd (l = 2q + 1). Writing \(S = s(r_{1}, \ldots, r_{l})\) for ease of notation, we obtain

$$\frac{\operatorname{med}_{i}\left| r_{i} \right|}{c} \le S \le \frac{\operatorname{med}_{i}\left| r_{i} \right|}{g^{-1}\!\left( \frac{g(c)}{l + 1} \right)}$$
(31)
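
Throughout the proof we use the fact that S is defined implicitly by Eq. (10) of the main text, which in the present notation is assumed to read

$$\frac{1}{l}\sum_{i = 1}^{l} g\!\left( \frac{\left| r_{i} \right|}{S} \right) = K, \qquad K = \frac{g(c)}{2},$$

so any value of S for which the left-hand side is strictly larger or strictly smaller than K cannot satisfy the definition; this is the contradiction invoked after (32), (33) and (36).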

Suppose that \(\operatorname{med}_{i}\left| r_{i} \right| > cS\). Because \(\operatorname{med}_{i}\left| r_{i} \right| = \left| r \right|_{q+1:l}\), at least q + 1 of the \(\left| r_{i} \right|/S\) are larger than c. Consequently,

$$\frac{1}{l}\sum_{i = 1}^{l} g\!\left( \frac{\left| r_{i} \right|}{S} \right) \ge \frac{q + 1}{l}\, g(c) > \frac{g(c)}{2} = K$$
(32)

Since S satisfies Eq. (10), this is a contradiction; therefore, \(\operatorname{med}_{i}\left| r_{i} \right| \le cS\).

Now suppose that \(g^{-1}\!\left( \frac{g(c)}{l + 1} \right) S > \operatorname{med}_{i}\left| r_{i} \right|\). This would imply that the first q + 1 of the \(\left| r_{i} \right|/S\) are strictly smaller than \(g^{-1}\!\left( \frac{g(c)}{l + 1} \right)\). Substituting this into \(\frac{1}{l}\sum_{i = 1}^{l} g\!\left( \frac{\left| r_{i} \right|}{S} \right)\), we find that

$$\begin{aligned} \frac{1}{l}\sum_{i = 1}^{l} g\!\left( \frac{\left| r_{i} \right|}{S} \right) & < \frac{q + 1}{l}\, g\!\left( g^{-1}\!\left( \frac{g(c)}{l + 1} \right) \right) + \frac{q\, g(c)}{l} \\ & \le \frac{q + 1}{l}\left( \frac{g(c)}{l + 1} \right) + \frac{q\, g(c)}{l} = \frac{g(c)}{2} = K \end{aligned}$$
(33)
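
The final equality in (33) is elementary arithmetic: since l = 2q + 1,

$$\frac{q + 1}{l}\left( \frac{g(c)}{l + 1} \right) + \frac{q\, g(c)}{l} = \frac{g(c)}{l}\left( \frac{q + 1}{2q + 2} + q \right) = \frac{g(c)}{l}\left( \frac{1}{2} + q \right) = \frac{(2q + 1)\, g(c)}{2l} = \frac{g(c)}{2}.$$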

This again contradicts Eq. (10); therefore, \(g^{-1}\!\left( \frac{g(c)}{l + 1} \right) S \le \operatorname{med}_{i}\left| r_{i} \right|\), which proves (31).

Now suppose that l is even (l = 2q). We prove that

$$\frac{\operatorname{med}_{i}\left| r_{i} \right|}{c} \le S \le \frac{\operatorname{med}_{i}\left| r_{i} \right|}{\frac{1}{2}\, g^{-1}\!\left( \frac{2\, g(c)}{l + 2} \right)}$$
(34)

Suppose first that \(\operatorname{med}_{i}\left| r_{i} \right| > cS\). Because \(\operatorname{med}_{i}\left| r_{i} \right| = \frac{1}{2}\left( \left| r \right|_{q:l} + \left| r \right|_{q+1:l} \right)\), at least q of the \(\left| r_{i} \right|/S\) are strictly larger than c, and

$$\frac{1}{l}\sum_{i = 1}^{l} g\!\left( \frac{\left| r_{i} \right|}{S} \right) \ge \frac{q}{l}\, g(c) = \frac{g(c)}{2} = K$$
(35)

If \(\operatorname{med}_{i}\left| r_{i} \right| > cS\), equality must therefore hold in (35), which is possible only when all of the remaining \(\left| r_{i} \right|\) are zero. In that exceptional case, the set of solutions of Eq. (10) is the interval \(\left( 0, \frac{2\operatorname{med}_{i}\left| r_{i} \right|}{c} \right]\), and we take \(S = \frac{2\operatorname{med}_{i}\left| r_{i} \right|}{c}\), so that \(\operatorname{med}_{i}\left| r_{i} \right| \le cS\) still holds. In all other cases, the inequality in (35) would be strict, contradicting Eq. (10), and therefore \(\operatorname{med}_{i}\left| r_{i} \right| \le cS\).

Suppose now that \(\operatorname{med}_{i}\left| r_{i} \right| < \frac{1}{2}\, g^{-1}\!\left( \frac{2\, g(c)}{l + 2} \right) S\). Then \(\left| r \right|_{q+1:l} < g^{-1}\!\left( \frac{2\, g(c)}{l + 2} \right) S\), and hence the first q + 1 of the \(\left| r_{i} \right|/S\) are less than \(g^{-1}\!\left( \frac{2\, g(c)}{l + 2} \right)\). We then obtain

$$\begin{aligned} \frac{1}{l}\sum_{i = 1}^{l} g\!\left( \frac{\left| r_{i} \right|}{S} \right) & < \frac{q + 1}{l}\cdot\frac{2\, g(c)}{l + 2} + \frac{\left( q - 1 \right) g(c)}{l} \\ & \le \frac{q}{l}\, g(c) = K \end{aligned}$$
(36)
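
The second line of (36) is again elementary arithmetic: with l = 2q,

$$\frac{q + 1}{l}\cdot\frac{2\, g(c)}{l + 2} + \frac{\left( q - 1 \right) g(c)}{l} = \frac{g(c)}{l}\left( \frac{2(q + 1)}{2q + 2} + q - 1 \right) = \frac{g(c)}{l}\, q = \frac{g(c)}{2} = K,$$

so the strict inequality in the first line shows that the left-hand side of (36) is strictly smaller than K.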

This contradicts Eq. (10), so \(\operatorname{med}_{i}\left| r_{i} \right| \ge \frac{1}{2}\, g^{-1}\!\left( \frac{2\, g(c)}{l + 2} \right) S\), which establishes (34) and completes the proof of Lemma 1.
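
As an independent sanity check, the bounds (31) and (34) can be verified numerically. The sketch below assumes a concrete bounded score function g(x) = min(x², c²) with K = g(c)/2 (the particular g and c used in the paper are not repeated here, so this choice is illustrative), solves the defining equation for S by bisection, and confirms the lower and upper bounds for both odd and even l.

```python
import numpy as np

def g(x, c):
    """Bounded, nondecreasing score function used only for this check (an assumed choice)."""
    return np.minimum(x ** 2, c ** 2)

def s_estimate(r, c):
    """Solve (1/l) * sum g(|r_i|/S) = K, with K = g(c)/2, for S by bisection."""
    r = np.abs(np.asarray(r, dtype=float))
    K = c ** 2 / 2.0
    f = lambda S: np.mean(g(r / S, c)) - K   # nonincreasing in S
    lo, hi = 1e-12, 1e6 * max(r.max(), 1.0)
    for _ in range(200):                     # plenty of iterations for double precision
        mid = 0.5 * (lo + hi)
        if f(mid) > 0:
            lo = mid                         # sum too large -> S is too small
        else:
            hi = mid
    return 0.5 * (lo + hi)

def check_lemma1(r, c=1.5):
    """Check med_i|r_i|/c <= S <= the upper bound of (31) (odd l) or (34) (even l)."""
    r = np.abs(np.asarray(r, dtype=float))
    l = r.size
    S = s_estimate(r, c)
    med = np.median(r)
    g_inv = lambda y: np.sqrt(y)             # inverse of g on [0, c]
    if l % 2 == 1:
        upper = med / g_inv(c ** 2 / (l + 1))              # bound (31)
    else:
        upper = med / (0.5 * g_inv(2 * c ** 2 / (l + 2)))  # bound (34)
    assert med / c <= S * (1 + 1e-9) and S <= upper * (1 + 1e-9)
    return med / c, S, upper

rng = np.random.default_rng(1)
for l in (7, 8, 101, 100):
    print(l, check_lemma1(rng.standard_normal(l)))
```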


Cite this article

Feng, P., Li, L. On extending the Noisy Independent Component Analysis to Impulsive Components. Wireless Pers Commun 88, 415–427 (2016). https://doi.org/10.1007/s11277-015-3135-2
