An Empirical Comparison of Rule Induction Using Feature Selection with the LEM2 Algorithm

Conference paper
Advances on Computational Intelligence (IPMU 2012)

Abstract

The main objective of this paper is to compare a strategy for rule induction based on feature selection with another strategy, not using feature selection, exemplified by the LEM2 algorithm. It is shown that LEM2 significantly outperforms the strategy of rule induction based on feature selection in terms of error rate (5% significance level, two-tailed test). At the same time, the LEM2 algorithm induces smaller rule sets with a smaller total number of conditions.
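The abstract refers to the LEM2 algorithm without describing it. For readers unfamiliar with it, the following is a minimal illustrative sketch of LEM2's local-covering loop, written from the standard textbook description of the algorithm rather than from the authors' implementation; the function names, data layout, and toy decision table are assumptions made purely for illustration, and the input concept is assumed to be a definable set (e.g. a lower or upper approximation of a decision class).

```python
# A minimal sketch of the LEM2 local-covering procedure (rough-set rule
# induction).  Written from the standard description of LEM2, not from
# the authors' implementation; names and toy data are illustrative only.

def lem2(cases, concept):
    """Induce a local covering (a list of rules) of `concept`.

    cases:   dict case_id -> {attribute: value}; a decision table with
             symbolic attribute values, as LEM2 expects
    concept: set of case_ids, assumed to be a definable set such as a
             lower or upper approximation of a decision class
    Each returned rule is a frozenset of (attribute, value) pairs.
    """
    def block(pair):
        # [(a, v)]: all cases whose attribute a has value v
        a, v = pair
        return {c for c, row in cases.items() if row[a] == v}

    def rule_block(rule):
        # intersection of the blocks of all conditions in the rule
        covered = set(cases)
        for pair in rule:
            covered &= block(pair)
        return covered

    rules = []
    goal = set(concept)
    while goal:
        rule, g = set(), set(goal)
        # grow the rule until its block fits inside the concept
        while not rule or not rule_block(rule) <= concept:
            candidates = {(a, v) for c in g for a, v in cases[c].items()} - rule
            # most relevant pair: max |[t] & g|, ties broken by smallest block
            t = max(candidates, key=lambda p: (len(block(p) & g), -len(block(p))))
            rule.add(t)
            g &= block(t)
        # drop conditions that became redundant
        for pair in list(rule):
            if len(rule) > 1 and rule_block(rule - {pair}) <= concept:
                rule.remove(pair)
        rules.append(frozenset(rule))
        goal = concept - set().union(*(rule_block(r) for r in rules))
    # drop rules whose removal still leaves the concept covered
    for r in list(rules):
        rest = [x for x in rules if x != r]
        if rest and set().union(*(rule_block(x) for x in rest)) >= concept:
            rules.remove(r)
    return rules


if __name__ == "__main__":
    # toy decision table with two symbolic attributes (illustrative only)
    table = {
        1: {"Temperature": "high",   "Headache": "yes"},
        2: {"Temperature": "high",   "Headache": "no"},
        3: {"Temperature": "normal", "Headache": "yes"},
        4: {"Temperature": "normal", "Headache": "no"},
    }
    flu = {1, 3}                      # cases labelled Flu = yes
    print(lem2(table, flu))           # -> [frozenset({('Headache', 'yes')})]
```

In this toy run, LEM2 covers the concept {1, 3} with the single rule (Headache, yes), which illustrates how the greedy local-covering strategy tends to produce the compact rule sets reported in the paper.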

Copyright information

© 2012 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Grzymala-Busse, J.W. (2012). An Empirical Comparison of Rule Induction Using Feature Selection with the LEM2 Algorithm. In: Greco, S., Bouchon-Meunier, B., Coletti, G., Fedrizzi, M., Matarazzo, B., Yager, R.R. (eds) Advances on Computational Intelligence. IPMU 2012. Communications in Computer and Information Science, vol 297. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-31709-5_28

  • DOI: https://doi.org/10.1007/978-3-642-31709-5_28

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-31708-8

  • Online ISBN: 978-3-642-31709-5

  • eBook Packages: Computer Science, Computer Science (R0)
