Nonlinear Adaptive Filtering with MEE, MCC, and Applications

  • Chapter in: Information Theoretic Learning

Abstract

Our emphasis on the linear model in Chapter 4 was motivated only by simplicity and pedagogy. As the simple case studies demonstrated, under linearity and Gaussianity conditions the final solution of the MEE algorithms is essentially equivalent to the solution obtained with LMS. Because the LMS algorithm is computationally simpler and better understood, there is no real advantage to using MEE in such cases.
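The claim above — that MEE and LMS agree for a linear filter under Gaussian conditions — can be checked numerically. The sketch below is illustrative and not code from the book: it identifies a short FIR plant both with plain LMS and with an MEE update based on the stochastic information gradient (a Gaussian Parzen kernel over a sliding window of past errors). The plant coefficients, window length `L`, kernel width `sigma`, and step sizes are arbitrary choices made for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown linear plant: d = w_true . x + small Gaussian noise.
w_true = np.array([0.6, -0.3, 0.1])
N, L, sigma = 2000, 10, 1.0          # samples, MEE window length, kernel width
u = rng.standard_normal(N + 2)
X = np.column_stack([u[2:], u[1:-1], u[:-2]])   # tap-delay-line input vectors
d = X @ w_true + 0.05 * rng.standard_normal(N)

def lms(mu=0.02):
    """Plain LMS: stochastic descent on the mean squared error."""
    w = np.zeros(3)
    for n in range(N):
        e = d[n] - w @ X[n]
        w += mu * e * X[n]
    return w

def mee_sig(mu=0.1):
    """MEE via the stochastic information gradient: ascend the quadratic
    information potential V = (1/L) * sum_j G_sigma(e_n - e_j), estimated
    with a Gaussian Parzen kernel over a window of the L most recent errors."""
    w = np.zeros(3)
    for n in range(L, N):
        e = d[n - L:n + 1] - X[n - L:n + 1] @ w    # errors in the window
        de = e[-1] - e[:-1]                        # pairwise differences e_n - e_j
        dx = X[n] - X[n - L:n]                     # x_n - x_j
        g = np.exp(-de**2 / (2 * sigma**2)) * de / sigma**2
        w += (mu / L) * g @ dx                     # gradient ascent on V
    return w

print(lms(), mee_sig())   # both weight vectors should land near w_true
```

Because the input and noise are Gaussian and the plant is linear, both adapted weight vectors end up close to `w_true`, illustrating why MEE offers no advantage over LMS in this regime; the differences only become interesting for nonlinear systems or non-Gaussian disturbances, which are the subject of this chapter.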


Copyright information

© 2010 Springer Science+Business Media, LLC

About this chapter

Cite this chapter

Erdogmus, D., Morejon, R., Liu, W. (2010). Nonlinear Adaptive Filtering with MEE, MCC, and Applications. In: Information Theoretic Learning. Information Science and Statistics. Springer, New York, NY. https://doi.org/10.1007/978-1-4419-1570-2_5

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4419-1569-6

  • Online ISBN: 978-1-4419-1570-2

  • eBook Packages: Computer Science, Computer Science (R0)
