
Improved Comprehensibility and Reliability of Explanations via Restricted Halfspace Discretization

  • Conference paper
Machine Learning and Data Mining in Pattern Recognition (MLDM 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5632)


Abstract

A number of two-class classification methods first discretize each attribute of two given training sets and then construct a propositional DNF formula that evaluates to True for one of the two discretized training sets and to False for the other. The formula is not just a classification tool: it constitutes a useful explanation of the differences between the two underlying populations, provided it can be comprehended by humans and is reliable. This paper shows that both the comprehensibility and the reliability of the formulas can sometimes be improved by a discretization scheme in which linear combinations of a small number of attributes are discretized.
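The discretization idea can be illustrated with a small sketch. The following Python fragment is a minimal illustration under stated assumptions, not the paper's algorithm: the toy data, the weight vector, and the function names are invented for the example. It contrasts standard univariate cutpoints, which threshold single attributes, with a restricted halfspace feature that thresholds a linear combination of two attributes; a DNF learner would then operate on such Boolean features rather than on the raw attributes.

```python
# Minimal sketch (illustrative only, not the paper's method): compare
# univariate cutpoint discretization with a "restricted halfspace" feature
# that thresholds a linear combination of two attributes.
import numpy as np

rng = np.random.default_rng(0)

# Two toy training sets A and B in R^2, separated by the line x0 + x1 = 1
# rather than by any single attribute.
A = rng.normal(loc=[0.2, 0.2], scale=0.15, size=(50, 2))  # class A: x0 + x1 < 1
B = rng.normal(loc=[0.8, 0.8], scale=0.15, size=(50, 2))  # class B: x0 + x1 > 1

def univariate_features(X, cutpoints):
    """Boolean features of the form (x_j > c) for each attribute j and cutpoint c."""
    cols = [X[:, j] > c for j, cuts in enumerate(cutpoints) for c in cuts]
    return np.column_stack(cols)

def halfspace_feature(X, w, c):
    """Single Boolean feature (w . x > c): a halfspace over a few attributes."""
    return X @ w > c

# A single halfspace literal x0 + x1 > 1 separates the toy sets, whereas
# cutpoints on individual attributes (e.g., x0 > 0.5, x1 > 0.5) cannot
# express this difference with one literal.
w, c = np.array([1.0, 1.0]), 1.0
acc = np.mean(np.concatenate([~halfspace_feature(A, w, c),
                              halfspace_feature(B, w, c)]))
print(f"toy accuracy of the single halfspace literal: {acc:.2f}")
```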

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Truemper, K. (2009). Improved Comprehensibility and Reliability of Explanations via Restricted Halfspace Discretization. In: Perner, P. (ed.) Machine Learning and Data Mining in Pattern Recognition. MLDM 2009. Lecture Notes in Computer Science, vol. 5632. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-03070-3_1

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-03070-3_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-03069-7

  • Online ISBN: 978-3-642-03070-3

  • eBook Packages: Computer Science, Computer Science (R0)
