
An automatic extraction method of the domains of competence for learning classifiers using data complexity measures

  • Regular Paper

Knowledge and Information Systems

Abstract

The constant appearance of new algorithms and problems in data mining makes it impossible to know in advance whether a model will perform well or poorly before it is applied, which can be costly. It would therefore be useful to have a procedure that indicates, prior to the application of the learning algorithm and without needing a comparison with other methods, whether the outcome will be good or bad, using only the information available in the data. In this work, we present an automatic extraction method that determines the domains of competence of a classifier using a set of data complexity measures proposed for the task of classification. These domains codify the characteristics of the problems that are suitable for the classifier and those that are not, relating the geometrical structures of the data that may pose difficulties to the final accuracy obtained by any classifier. To this end, the proposal applies 12 data complexity metrics over a large benchmark of datasets in order to analyze the behavior patterns of the method, obtaining intervals of the data complexity measures associated with good or bad performance. Three classical but distinct algorithms are used as representative classifiers for the analysis: C4.5, SVM and K-NN. From these intervals, two simple rules are obtained for each classifier, describing its good and bad behavior and allowing the user to characterize the quality of the method's response from a dataset's complexity. These two rules have been validated using fresh problems, showing that they are general and accurate. Thus, it can be established whether the classifier will perform well or poorly prior to its application.
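
As a concrete illustration of the extraction idea, the sketch below locates intervals of a single complexity measure over which a classifier's accuracy is consistently good or bad, and combines the good intervals into a simple coverage rule. This is not the paper's algorithm: the accuracy thresholds, the sliding-window strategy and the helper names (`extract_intervals`, `rule_covers`) are illustrative assumptions; the authors' actual implementation can be downloaded from the webpage mentioned in the notes below.

```python
# A minimal sketch of the interval-extraction idea, under the assumptions
# stated above; it is NOT the paper's automatic extraction method.
import numpy as np

def extract_intervals(measure, accuracy, good=0.90, bad=0.60, min_size=5):
    """Find (lo, hi, label) intervals of one complexity measure where the
    classifier's mean accuracy is consistently good or bad.

    measure  -- NumPy array, one complexity metric value per dataset
    accuracy -- NumPy array, test accuracy of the classifier per dataset
    good/bad -- hypothetical accuracy thresholds (the paper derives its own)
    min_size -- smallest number of datasets an interval may cover
    """
    order = np.argsort(measure)
    m, a = measure[order], accuracy[order]
    intervals, start = [], 0
    while start < len(m):
        # Shrink the window from the right until its mean accuracy is
        # clearly good or clearly bad; otherwise slide one dataset forward.
        for end in range(len(m), start + min_size - 1, -1):
            mean_acc = a[start:end].mean()
            if mean_acc >= good:
                intervals.append((m[start], m[end - 1], "good"))
                break
            if mean_acc <= bad:
                intervals.append((m[start], m[end - 1], "bad"))
                break
        else:
            end = start + 1  # no labelled interval starts here
        start = end
    return intervals

def rule_covers(dataset_measures, intervals_per_metric):
    """Disjunction of the extracted good intervals across all metrics:
    predict good behaviour if any metric value of the new dataset falls
    inside one of that metric's good intervals."""
    return any(lo <= dataset_measures[metric] <= hi
               for metric, ivs in intervals_per_metric.items()
               for (lo, hi, label) in ivs
               if label == "good")

# Toy usage on synthetic data ("F1" stands in for one of the 12 measures,
# e.g. Fisher's discriminant ratio; the correlation with accuracy is made up).
rng = np.random.default_rng(0)
f1 = rng.uniform(0.0, 1.0, 200)
acc = np.clip(0.5 + 0.45 * f1 + rng.normal(0.0, 0.05, 200), 0.0, 1.0)
rules = {"F1": extract_intervals(f1, acc)}
print(rule_covers({"F1": 0.95}, rules))  # likely True: high F1, easy problem
```

In the paper's terms, the intervals found for each of the 12 measures would be merged into the two rules that characterize a classifier's domains of competence; `rule_covers` only sketches how such a rule could then be evaluated on a new dataset.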




Notes

  1. The software that implements the automatic extraction method can be downloaded from the associated webpage.

  2. http://keel.es/datasets.php.


Acknowledgments

Supported by the Research Projects TIN2011-28488 and P10-TIC-06858.

Author information


Corresponding author

Correspondence to Julián Luengo.


About this article

Cite this article

Luengo, J., Herrera, F. An automatic extraction method of the domains of competence for learning classifiers using data complexity measures. Knowl Inf Syst 42, 147–180 (2015). https://doi.org/10.1007/s10115-013-0700-4

