Performance analysis of rough set ensemble of learning classifier systems with differential evolution based rule discovery

  • Special Issue
  • Published in: Evolutionary Intelligence

Abstract

Data mining, and specifically supervised data classification, is a key application area for Learning Classifier Systems (LCS). Scaling to larger classification problems, especially higher-dimensional ones, is a key challenge. Ensemble-based approaches can be applied to LCS to address scalability issues. To this end, a rough set based ensemble of LCS is proposed, which relies on a pre-processing feature partitioning step to train multiple LCS on feature subspaces. Each base classifier in the ensemble is a Michigan-style supervised LCS. The traditional genetic algorithm based rule evolution is replaced by differential evolution based rule discovery to improve the generalisation capabilities of LCS. A voting mechanism is then used to generate outputs for test instances. This paper describes the proposed ensemble algorithm in detail and compares its performance with different versions of the base LCS on a number of benchmark classification tasks. Analysis of computational time and model accuracy shows the relative merits of the ensemble algorithm and base classifiers on the tested data sets. The rough set based ensemble learning approach and differential evolution based rule searching outperform the base LCS on classification accuracy over the data sets considered. Results also show that a small ensemble size is sufficient to obtain good performance.
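The pipeline the abstract describes — partition the feature space, train one base classifier per subspace, then vote — can be sketched as follows. This is a minimal illustration only: the base learner here is a toy nearest-centroid classifier standing in for the supervised LCS, and the round-robin partitioner stands in for the rough set reduct computation; all names are hypothetical, not the paper's implementation.

```python
# Sketch of a feature-subspace ensemble with majority voting.
# Assumptions (not from the paper): the base learner is a toy
# nearest-centroid classifier; features are split round-robin
# instead of via rough set reducts.

from collections import Counter

def partition_features(n_features, n_subspaces):
    """Round-robin split of feature indices into disjoint subspaces
    (placeholder for the rough set based partitioning step)."""
    subspaces = [[] for _ in range(n_subspaces)]
    for f in range(n_features):
        subspaces[f % n_subspaces].append(f)
    return subspaces

class CentroidBase:
    """Toy base classifier trained on one feature subspace."""
    def __init__(self, features):
        self.features = features
        self.centroids = {}

    def fit(self, X, y):
        sums = {}
        for xi, yi in zip(X, y):
            proj = [xi[f] for f in self.features]
            s, n = sums.get(yi, ([0.0] * len(proj), 0))
            sums[yi] = ([a + b for a, b in zip(s, proj)], n + 1)
        self.centroids = {c: [v / n for v in s] for c, (s, n) in sums.items()}

    def predict(self, xi):
        proj = [xi[f] for f in self.features]
        return min(self.centroids,
                   key=lambda c: sum((a - b) ** 2
                                     for a, b in zip(self.centroids[c], proj)))

def ensemble_predict(bases, xi):
    """Combine base-classifier outputs by majority vote."""
    votes = Counter(b.predict(xi) for b in bases)
    return votes.most_common(1)[0][0]

# Usage: two subspaces over a 4-feature toy problem.
X = [[0, 0, 0, 0], [0, 1, 0, 1], [5, 5, 5, 5], [5, 4, 5, 4]]
y = [0, 0, 1, 1]
bases = []
for feats in partition_features(4, 2):
    b = CentroidBase(feats)
    b.fit(X, y)
    bases.append(b)
print(ensemble_predict(bases, [0, 0, 1, 0]))  # -> 0
```

Each base learner only ever sees its own feature subspace, which is what gives the ensemble its scalability to higher-dimensional problems: no single model is trained over the full feature set.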


Notes

  1. LCS are commonly categorised by how individuals in the population are represented: in Michigan-style LCS each individual is a single rule, so the population as a whole forms one rule set, while in Pittsburgh-style LCS each individual is a complete rule set, so multiple rule sets evolve simultaneously. This paper is concerned with Michigan-style LCS, unless otherwise mentioned.


Author information

Corresponding author

Correspondence to Essam Debie.


About this article

Cite this article

Debie, E., Shafi, K., Lokan, C. et al. Performance analysis of rough set ensemble of learning classifier systems with differential evolution based rule discovery. Evol. Intel. 6, 109–126 (2013). https://doi.org/10.1007/s12065-013-0093-z
