
A Novel Selective Ensemble Learning Based on K-means and Negative Correlation

  • Conference paper
Cloud Computing and Security (ICCCS 2016)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 10040)

Abstract

Selective ensemble learning has attracted wide attention for its ability to improve the diversity of ensemble learning. However, its performance is limited by conflicts and redundancies among the component classifiers. To address these problems, we propose a novel method called KNIA. The method first applies the K-means algorithm within the integration step as an effective means of choosing representative classifiers. Negative correlation theory is then used to select a diverse subset from these representative classifiers. Compared with classical selective ensemble learning, our algorithm, which follows an inverse growth process, improves generalization ability while preserving accuracy. Extensive experiments on multiple UCI data sets demonstrate that the proposed method outperforms four classical algorithms in robustness and precision.
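
The abstract gives only a high-level outline of KNIA. The sketch below is one plausible reading of that two-stage pipeline, assuming scikit-learn; the bagged decision-tree pool, the cluster count, the per-cluster representative rule, and the pruning threshold are all illustrative assumptions, not details from the paper.

```python
# A minimal sketch of the two-stage selection the abstract describes.
# NOT the authors' exact KNIA algorithm: pool construction, cluster
# count, representative rule, and threshold are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.3,
                                            random_state=0)

# Stage 0: a pool of base classifiers (bagged decision trees here).
pool = BaggingClassifier(DecisionTreeClassifier(max_depth=3),
                         n_estimators=30,
                         random_state=0).fit(X_tr, y_tr).estimators_

# Stage 1: K-means over each classifier's validation prediction vector,
# so classifiers that behave alike land in the same cluster; keep the
# most accurate member of each cluster as its "representative".
P = np.array([clf.predict(X_val) for clf in pool])   # (n_clf, n_val)
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(P)
acc = (P == y_val).mean(axis=1)
reps = []
for c in range(5):
    members = np.flatnonzero(labels == c)
    reps.append(int(members[np.argmax(acc[members])]))

# Stage 2: negative-correlation-style pruning. Repeatedly drop the
# representative whose 0/1 error vector correlates most positively with
# the ensemble's mean error, so the survivors' errors stay as mutually
# uncorrelated (ideally negatively correlated) as possible.
errors = (P != y_val).astype(float)
keep = list(reps)
while len(keep) > 2:
    mean_err = errors[keep].mean(axis=0)
    corr = np.nan_to_num(
        [np.corrcoef(errors[i], mean_err)[0, 1] for i in keep])
    j = int(np.argmax(corr))
    if corr[j] < 0.5:          # illustrative stopping threshold
        break
    keep.pop(j)

print(f"kept {len(keep)} of {len(pool)} classifiers:", keep)
```

Stage 2 shrinks the ensemble from the representative set downward, which is one way to read the "inverse growth process" the abstract mentions; the actual KNIA may differ in both the selection criterion and the stopping rule.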



Acknowledgment

The authors are grateful to Baosheng Wang and Bo Yu for their guidance and advice. This work was supported by the National Natural Science Foundation of China (NSFC) under Grants No. 61472437 and No. 61303264.

Author information

Correspondence to Liu Liu, Baosheng Wang, Bo Yu or Qiuxi Zhong.


Copyright information

© 2016 Springer International Publishing AG

About this paper

Cite this paper

Liu, L., Wang, B., Yu, B., Zhong, Q. (2016). A Novel Selective Ensemble Learning Based on K-means and Negative Correlation. In: Sun, X., Liu, A., Chao, H.C., Bertino, E. (eds) Cloud Computing and Security. ICCCS 2016. Lecture Notes in Computer Science, vol 10040. Springer, Cham. https://doi.org/10.1007/978-3-319-48674-1_51

Download citation

  • DOI: https://doi.org/10.1007/978-3-319-48674-1_51

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-48673-4

  • Online ISBN: 978-3-319-48674-1

  • eBook Packages: Computer Science (R0)
