MC3: A Multi-class Consensus Classification Framework

  • Conference paper
  • In: Advances in Knowledge Discovery and Data Mining (PAKDD 2017)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 10234)

Abstract

In this paper, we propose MC3, an ensemble framework for multi-class classification. MC3 is built on “consensus learning”, a novel learning paradigm in which each base classifier keeps improving its classification by exploiting the outcomes of the other classifiers until a consensus is reached. Based on this idea, we propose two algorithms, MC3-R and MC3-S, that make different trade-offs between quality and runtime. We conduct rigorous experiments comparing MC3-R and MC3-S with 12 baseline classifiers on 13 different datasets. Our algorithms perform as well as or better than the best baseline classifier, achieving, on average, a 5.56% performance improvement. Moreover, unlike the existing baselines, our algorithms also improve the performance of the individual base classifiers by up to 10%. (The code is available at https://github.com/MC3-code.)
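To make the consensus-learning idea concrete, here is a minimal sketch (not the paper's actual MC3-R/MC3-S implementation; all names are illustrative): each fitted base classifier's class-probability estimates are repeatedly nudged toward the ensemble mean, i.e., each classifier "exploits the outcomes obtained from other classifiers", until the estimates stop changing and a consensus prediction is read off.

```python
# Illustrative consensus loop over base classifiers (assumed sketch, not MC3 itself).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier

def consensus_predict(models, X, alpha=0.5, max_rounds=50, tol=1e-4):
    """Nudge each fitted model's class probabilities toward the ensemble
    mean until they stop changing (a simple notion of consensus)."""
    P = [m.predict_proba(X) for m in models]          # one (n, k) matrix per model
    for _ in range(max_rounds):
        mean = np.mean(P, axis=0)                     # current ensemble opinion
        new_P = [(1 - alpha) * p + alpha * mean for p in P]
        delta = max(np.abs(n - p).max() for n, p in zip(new_P, P))
        P = new_P
        if delta < tol:                               # consensus reached
            break
    return np.mean(P, axis=0).argmax(axis=1)

# Three heterogeneous base classifiers on a synthetic 3-class problem.
X, y = make_classification(n_samples=300, n_classes=3,
                           n_informative=6, random_state=0)
models = [LogisticRegression(max_iter=1000).fit(X, y),
          GaussianNB().fit(X, y),
          DecisionTreeClassifier(max_depth=4, random_state=0).fit(X, y)]
pred = consensus_predict(models, X)
```

Because each round averages every classifier's estimate toward the shared mean, the loop provably converges; the real MC3 algorithms additionally refit the base classifiers, which this sketch omits.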


Notes

  1. We use boldface lower case letters for vectors (e.g., \(\mathbf {x}\)).

  2. The patterns are exactly the same for the other datasets.

  3. We separately measure the importance of each feature by dropping it in isolation and calculating the decrease in accuracy (a larger decrease implies greater relevance).
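The drop-one-feature importance measure described in the note above can be sketched as follows (an assumed illustration with arbitrary dataset and model choices, not the paper's experimental setup): retrain the model once per feature with that feature removed and report the resulting accuracy drop.

```python
# Sketch of drop-one-feature importance: larger accuracy drop => more relevant feature.
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def drop_one_importance(make_model, X_tr, y_tr, X_te, y_te):
    """Importance of feature j = accuracy of the full model minus the
    accuracy of a model retrained with feature j removed."""
    base_acc = make_model().fit(X_tr, y_tr).score(X_te, y_te)
    scores = []
    for j in range(X_tr.shape[1]):
        keep = [k for k in range(X_tr.shape[1]) if k != j]
        acc = make_model().fit(X_tr[:, keep], y_tr).score(X_te[:, keep], y_te)
        scores.append(base_acc - acc)      # decrease in accuracy with j dropped
    return np.array(scores)

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
imp = drop_one_importance(lambda: LogisticRegression(max_iter=1000),
                          X_tr, y_tr, X_te, y_te)
```

Note that a score near zero (or negative) means dropping the feature barely hurt, or even helped, the held-out accuracy.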


Acknowledgments

Parts of this work were funded by ONR grants N00014-15-R-BA010, N00014-16-R-BA01, N000141612739 and N000141612918.

Author information

Correspondence to Tanmoy Chakraborty.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Chakraborty, T., Chandhok, D., Subrahmanian, V.S. (2017). MC3: A Multi-class Consensus Classification Framework. In: Kim, J., Shim, K., Cao, L., Lee, J.G., Lin, X., Moon, Y.S. (eds.) Advances in Knowledge Discovery and Data Mining. PAKDD 2017. Lecture Notes in Computer Science, vol. 10234. Springer, Cham. https://doi.org/10.1007/978-3-319-57454-7_27

  • DOI: https://doi.org/10.1007/978-3-319-57454-7_27

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-57453-0

  • Online ISBN: 978-3-319-57454-7

  • eBook Packages: Computer Science (R0)
