
Generalized Kernel Normalized Mixed-Norm Algorithm: Analysis and Simulations

  • Conference paper
Neural Information Processing (ICONIP 2015)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 9490)


Abstract

This paper continues and extends our previous research, in which the kernel normalized mixed-norm (KNMN) algorithm, a combination of the kernel trick with the mixed-norm strategy, was proposed and shown to deliver superior performance for system identification in non-Gaussian environments. In that work we also introduced a naive adaptive mixing parameter (AMP) updating mechanism to make KNMN more robust in nonstationary scenarios. The contributions of this paper are threefold: first, the \(\ell _p\)-norm is substituted for the \(\ell _4\)-norm in the cost function, yielding a generalized form of the mixed norm; second, instead of the original AMP from our previous work, a novel time-varying AMP is employed to track nonstationarity more closely; and third, a mean square convergence analysis is conducted in which the second-moment behavior of the weight error vector is studied in detail. Simulations on two benchmark system identification problems, each with several types of additive noise, verify the effectiveness of these improvements.
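The abstract describes the algorithm only at a high level. As a minimal sketch, assume a mixed-norm cost of the form \(J(k)=\lambda(k)e^2(k)+(1-\lambda(k))|e(k)|^p\), consistent with the \(\ell _2\)/\(\ell _p\) combination described above, driving a KLMS-style normalized update in the kernel expansion. The function names, the Gaussian kernel choice, and the error-power-based \(\lambda\) update below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    """Gaussian kernel kappa(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    diff = np.asarray(x, dtype=float) - np.asarray(y, dtype=float)
    return np.exp(-np.dot(diff, diff) / (2.0 * sigma ** 2))

def gknmn_sketch(X, d, p=3.0, eta=0.2, lam0=0.5, gamma=0.99, eps=1e-6, sigma=1.0):
    """Illustrative generalized kernel normalized mixed-norm filter.

    X     : (N, m) array of input vectors, d : (N,) desired outputs.
    p     : exponent of the l_p term that generalizes the original l_4 term.
    eta   : step size, lam0 : initial mixing parameter in [0, 1].
    gamma : forgetting factor of a placeholder error-power-based AMP rule.
    Returns the online predictions and the stored centers/coefficients.
    """
    centers, alphas = [], []          # growing kernel expansion (dictionary)
    lam = lam0                        # adaptive mixing parameter lambda(k)
    e_pow = 1.0                       # smoothed error power used by the AMP rule
    y_hat = np.zeros(len(d))
    for k, (x, dk) in enumerate(zip(X, d)):
        # prediction with the current kernel expansion
        y_hat[k] = sum(a * gaussian_kernel(c, x, sigma)
                       for c, a in zip(centers, alphas))
        e = dk - y_hat[k]
        # mixed-norm stochastic gradient: lambda * l2 term + (1 - lambda) * l_p term
        g = 2.0 * lam * e + p * (1.0 - lam) * np.sign(e) * abs(e) ** (p - 1)
        # normalization by the kernel "energy" of the new input (kappa(x, x) = 1 here)
        norm = gaussian_kernel(x, x, sigma) + eps
        centers.append(x)
        alphas.append(eta * g / norm)
        # placeholder time-varying AMP rule (assumption): shrink lambda, i.e. favor
        # the l_p term, when the smoothed error power grows
        e_pow = gamma * e_pow + (1.0 - gamma) * e * e
        lam = float(np.clip(lam0 * np.exp(-e_pow), 0.0, 1.0))
    return y_hat, centers, alphas
```

In practice a kernel adaptive filter of this kind would also need a sparsification or budget rule, since the dictionary grows with every sample; that detail, like the exact AMP update, is omitted here and addressed in the paper itself.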


Notes

  1. The posterior error \(e_p(k)\) defined here is different from \(e_{post}(k)\) in (10).


Acknowledgments

This work is partially supported by the National Natural Science Foundation of China (No. 61402122) and the 2014 Ph.D. Recruitment Program of Guizhou Normal University.

Author information


Corresponding author

Correspondence to Shujian Yu.



Copyright information

© 2015 Springer International Publishing Switzerland

About this paper

Cite this paper

Yu, S. et al. (2015). Generalized Kernel Normalized Mixed-Norm Algorithm: Analysis and Simulations. In: Arik, S., Huang, T., Lai, W., Liu, Q. (eds) Neural Information Processing. ICONIP 2015. Lecture Notes in Computer Science, vol. 9490. Springer, Cham. https://doi.org/10.1007/978-3-319-26535-3_8


  • DOI: https://doi.org/10.1007/978-3-319-26535-3_8

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-26534-6

  • Online ISBN: 978-3-319-26535-3

  • eBook Packages: Computer Science, Computer Science (R0)
