Balancing Ensemble Learning between Known and Unknown Data

  • Conference paper
Computational Intelligence and Intelligent Systems (ISICA 2012)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 316)

Abstract

Without guidance on unseen data, learning models may approximate the known data while producing very different outputs on the unseen data. Such differences result in large variances in learning, and large variances can lead to overfitting on noisy data. This paper proposes one form of guidance: setting a middle target value on the unknown data in balanced ensemble learning. Although balanced ensemble learning can learn faster and better than negative correlation learning, it also carries a higher risk of overfitting when only a limited number of training data points is available. Experimental results show how such random learning can regulate the variances in balanced ensemble learning.
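To make the idea concrete, the sketch below shows one way the middle-value guidance could be realised; it is an illustrative assumption, not the algorithm reported in the paper. It trains a small ensemble of one-hidden-layer networks with a negative-correlation-style error on the known data, and on each step it also draws random "unknown" inputs uniformly from the input range and pulls every network's output on them towards a middle value of 0.5 (suitable for sigmoid outputs). The network sizes, the penalty weights lam and mu, and the uniform sampling of unknown points are assumptions made for the example.

```python
import numpy as np

# Illustrative sketch only: a negative-correlation-style ensemble of small
# networks with an extra "middle value" penalty on randomly drawn unknown
# inputs. The middle value 0.5, the uniform sampling of unknown points and
# the weights `lam`/`mu` are assumptions, not the paper's exact settings.

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class SmallNet:
    """One-hidden-layer network with tanh hidden units and a sigmoid output."""
    def __init__(self, n_in, n_hidden):
        self.W1 = rng.normal(0.0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.w2 = rng.normal(0.0, 0.5, n_hidden)
        self.b2 = 0.0

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        self.out = sigmoid(self.h @ self.w2 + self.b2)
        return self.out

    def backward(self, X, delta, lr):
        # One SGD step given dE/d(output) = delta for the last forward pass.
        dz2 = delta * self.out * (1.0 - self.out)
        dw2 = self.h.T @ dz2 / len(X)
        db2 = dz2.mean()
        dh = np.outer(dz2, self.w2) * (1.0 - self.h ** 2)
        dW1 = X.T @ dh / len(X)
        db1 = dh.mean(axis=0)
        self.w2 -= lr * dw2; self.b2 -= lr * db2
        self.W1 -= lr * dW1; self.b1 -= lr * db1

def train_ensemble(X, y, n_nets=4, n_hidden=5, epochs=2000, lr=0.2,
                   lam=0.5, mu=0.2, n_unknown=50, middle=0.5):
    nets = [SmallNet(X.shape[1], n_hidden) for _ in range(n_nets)]
    lo, hi = X.min(axis=0), X.max(axis=0)
    for _ in range(epochs):
        # Random "unknown" inputs drawn uniformly from the input range (assumption).
        Xu = rng.uniform(lo, hi, (n_unknown, X.shape[1]))
        outs = np.array([net.forward(X) for net in nets])
        f_bar = outs.mean(axis=0)                 # ensemble output on known data
        for i, net in enumerate(nets):
            # Negative-correlation-style gradient on the known data:
            # fit the targets while being pushed away from the ensemble mean.
            net.forward(X)
            net.backward(X, (outs[i] - y) - lam * (outs[i] - f_bar), lr)
            # Middle-value guidance on the unknown data.
            out_u = net.forward(Xu)
            net.backward(Xu, mu * (out_u - middle), lr)
    return nets

def predict(nets, X):
    return np.mean([net.forward(X) for net in nets], axis=0)

if __name__ == "__main__":
    # Toy 1-D regression with noisy targets in [0, 1].
    X = rng.uniform(-1.0, 1.0, (30, 1))
    y = 0.5 + 0.4 * np.sin(3.0 * X[:, 0]) + rng.normal(0.0, 0.05, 30)
    nets = train_ensemble(X, y)
    print("training MSE:", np.mean((predict(nets, X) - y) ** 2))
```

In this sketch the middle-value term only acts where no training targets exist, so it limits how far the ensemble members can drift apart on unseen inputs without changing what they fit on the known data.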

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Liu, Y. (2012). Balancing Ensemble Learning between Known and Unknown Data. In: Li, Z., Li, X., Liu, Y., Cai, Z. (eds) Computational Intelligence and Intelligent Systems. ISICA 2012. Communications in Computer and Information Science, vol 316. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-34289-9_42

  • DOI: https://doi.org/10.1007/978-3-642-34289-9_42

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-34288-2

  • Online ISBN: 978-3-642-34289-9

  • eBook Packages: Computer Science, Computer Science (R0)
