Mutual Information Measures for Subclass Error-Correcting Output Codes Classification

  • Conference paper
Artificial Intelligence: Theories, Models and Applications (SETN 2010)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 6040)

Abstract

Error-Correcting Output Codes (ECOC) provide a common way to model multi-class classification problems. In this state-of-the-art technique, a multi-class problem is decomposed into several binary ones. Within the ECOC framework we can additionally apply the subclass technique (sub-ECOC), in which the initial classes of the problem are split so as to create larger but easier-to-solve ECOC configurations. The decomposition of the multi-class problem is carried out by a search procedure known as sequential forward floating search (SFFS). At each step, the SFFS algorithm searches for the optimal binary separation of the classes that compose the multi-class problem; the separation decision is based on maximizing or minimizing a criterion function. The standard criterion is the maximization of the mutual information (MI) between the bi-partitions created at each step of the SFFS. The MI measure is computed by a method called fast quadratic mutual information (FQMI). Although FQMI models the MI quite accurately, its computation is of high algorithmic complexity, which makes the ECOC and sub-ECOC techniques applicable only to small datasets. In this paper we present alternative separation criteria of reduced computational complexity that can be used in the SFFS algorithm, and we compare the performance of these criteria on several multi-class classification problems.
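
The estimator itself is not reproduced on this preview page. As context for the abstract, the following is a minimal sketch of a Parzen-window quadratic mutual information estimator in the Torkkola style that FQMI builds on; the function names, the NumPy formulation, and the fixed Gaussian kernel width sigma are our assumptions, not the authors' implementation (the "fast" variants reorganize these same sums rather than change the quantity being estimated).

    import numpy as np

    def gaussian_gram(X, sigma):
        """Pairwise Gaussian kernel matrix G(x_i - x_j, 2 * sigma^2 * I)."""
        sq = np.sum(X ** 2, axis=1)
        d2 = np.maximum(sq[:, None] + sq[None, :] - 2.0 * X @ X.T, 0.0)
        d = X.shape[1]
        norm = (4.0 * np.pi * sigma ** 2) ** (-d / 2.0)
        return norm * np.exp(-d2 / (4.0 * sigma ** 2))

    def quadratic_mi(X, y, sigma=1.0):
        """Parzen-window estimate of the quadratic MI between features X
        and class labels y, computed as I_T = V_in + V_all - 2 * V_btw.

        Every term is a sum over pairs of samples, so the cost is O(N^2)
        kernel evaluations -- the complexity bottleneck the abstract
        attributes to FQMI.
        """
        N = len(y)
        G = gaussian_gram(X, sigma)
        classes, counts = np.unique(y, return_counts=True)
        p = counts / N                                   # class priors P(c)
        # V_in: kernel interactions among samples of the same class.
        v_in = sum(G[np.ix_(y == c, y == c)].sum() for c in classes) / N ** 2
        # V_all: all pairwise interactions, weighted by the sum of squared priors.
        v_all = (p ** 2).sum() * G.sum() / N ** 2
        # V_btw: each class against the whole sample, weighted by its prior.
        v_btw = sum(pc * G[y == c].sum() for pc, c in zip(p, classes)) / N ** 2
        return v_in + v_all - 2.0 * v_btw

In the sub-ECOC setting, y would encode a candidate bi-partition of the (sub)classes, and SFFS would evaluate such a score for every candidate split it visits, so any reduction in the per-evaluation cost is multiplied across the entire search.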




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Arvanitopoulos, N., Bouzas, D., Tefas, A. (2010). Mutual Information Measures for Subclass Error-Correcting Output Codes Classification. In: Konstantopoulos, S., Perantonis, S., Karkaletsis, V., Spyropoulos, C.D., Vouros, G. (eds) Artificial Intelligence: Theories, Models and Applications. SETN 2010. Lecture Notes in Computer Science (LNAI), vol 6040. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-12842-4_6

  • DOI: https://doi.org/10.1007/978-3-642-12842-4_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-12841-7

  • Online ISBN: 978-3-642-12842-4

  • eBook Packages: Computer Science, Computer Science (R0)
