MoNGEL: monotonic nested generalized exemplar learning

  • Theoretical Advances
  • Published in Pattern Analysis and Applications

Abstract

In supervised prediction problems, the response attribute depends on certain explanatory attributes. Some real problems additionally require the response attribute to take ordinal values that must not decrease as some of the explanatory attributes increase; these are called classification problems with monotonicity constraints. In this paper, we formalize the approach to nested generalized exemplar learning under monotonicity constraints, proposing the monotonic nested generalized exemplar learning (MoNGEL) method. It learns by storing objects in \({\mathbb {R}}^n\), hybridizing instance-based learning and rule learning into a combined model. An experimental analysis is carried out over a wide range of monotonic data sets. The results, verified by non-parametric statistical tests, show that MoNGEL outperforms well-known techniques for monotonic classification, such as the ordinal learning model, the ordinal stochastic dominance learner and the k-nearest neighbor classifier, in terms of accuracy, mean absolute error and simplicity of the constructed models.
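The core ideas in the abstract (exemplars stored as hyperrectangles in \({\mathbb {R}}^n\), classification by the nearest hyperrectangle, and a monotonicity constraint relating dominated inputs to ordinal labels) can be illustrated with a minimal sketch. This is not the authors' MoNGEL implementation; the class and function names and the toy data are hypothetical, and ties and attribute weighting are ignored.

```python
# Illustrative sketch only: nearest-hyperrectangle classification with a
# monotonicity check. Not the authors' MoNGEL algorithm; names are hypothetical.
import numpy as np


class Hyperrectangle:
    def __init__(self, lower, upper, label):
        self.lower = np.asarray(lower, dtype=float)  # per-attribute minima
        self.upper = np.asarray(upper, dtype=float)  # per-attribute maxima
        self.label = label                           # ordinal class label

    def distance(self, x):
        # 0 if x lies inside the rectangle; otherwise the Euclidean
        # distance from x to the rectangle's surface.
        gap = np.maximum(self.lower - x, 0.0) + np.maximum(x - self.upper, 0.0)
        return float(np.linalg.norm(gap))


def classify(rects, x):
    # Predict with the label of the nearest hyperrectangle.
    x = np.asarray(x, dtype=float)
    return min(rects, key=lambda r: r.distance(x)).label


def is_monotonic(rects):
    # The rule set respects the monotonicity constraint if, whenever one
    # rectangle dominates another (its lower corner is >= the other's upper
    # corner in every attribute), its label is also >= the other's label.
    for a in rects:
        for b in rects:
            if np.all(a.lower >= b.upper) and a.label < b.label:
                return False
    return True


rects = [
    Hyperrectangle([0, 0], [2, 2], label=1),
    Hyperrectangle([3, 3], [5, 5], label=2),
]
print(classify(rects, [4, 4]))  # inside the second rectangle -> 2
print(is_monotonic(rects))      # True: the dominating rectangle has the larger label
```

Points falling inside a rectangle get its label directly (distance 0), which is the rule-learning side of the hybrid; points outside are resolved by distance, which is the instance-based side.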



Acknowledgments

The authors are very grateful to the anonymous reviewers for their valuable suggestions and comments to improve the quality of this paper.

Author information

Corresponding author

Correspondence to Salvador García.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

About this article

Cite this article

García, J., Fardoun, H.M., Alghazzawi, D.M. et al. MoNGEL: monotonic nested generalized exemplar learning. Pattern Anal Applic 20, 441–452 (2017). https://doi.org/10.1007/s10044-015-0506-y

