Decision tree learning system with switching evaluator

  • Learning III: Techniques and Issues
  • Conference paper
Advances in Artificial Intelligence (Canadian AI 1996)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1081)

Abstract

In this paper, we introduce the notion of a local strategy for constructing decision trees, which subsumes the information-theoretic entropy algorithm used in ID3 (and C4.5) as well as any other local algorithm. Simply put, given a sample, a local algorithm constructs a decision tree in a top-down manner using an evaluation function. We propose a new local algorithm that differs substantially from the entropy algorithm, and we analyze the behavior of the two algorithms on a simple model. Based on these analyses, we propose a decision tree learning system that can switch evaluation functions while constructing a tree, and we verify its effectiveness through experiments on real databases. The system not only achieves high accuracy but also produces well-balanced decision trees, which have the advantage of fast classification.
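
To make the "local strategy" concrete, here is a minimal sketch (in Python; this is not the authors' implementation, and names such as switch_depth, accuracy_score, and the depth-based switching rule are illustrative assumptions) of a top-down tree builder whose evaluation function can be switched during construction, moving from the ID3-style entropy criterion to a simpler majority-error criterion below a given depth:

    import math
    from collections import Counter

    def entropy_score(labels):
        """Information-theoretic impurity, as used by ID3/C4.5-style splitting."""
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def accuracy_score(labels):
        """A different local criterion: fraction misclassified by majority vote."""
        return 1.0 - max(Counter(labels).values()) / len(labels)

    def best_split(rows, labels, attrs, score):
        """Choose the attribute whose partition minimizes the weighted score."""
        n = len(labels)
        def weighted(a):
            parts = {}
            for row, y in zip(rows, labels):
                parts.setdefault(row[a], []).append(y)
            return sum(len(ys) / n * score(ys) for ys in parts.values())
        return min(attrs, key=weighted)

    def build_tree(rows, labels, attrs, depth=0, switch_depth=3):
        # Leaf: the node is pure, or no attributes remain to split on.
        if len(set(labels)) == 1 or not attrs:
            return Counter(labels).most_common(1)[0][0]
        # The switching evaluator: entropy near the root, majority error
        # deeper down (this particular switching policy is a placeholder).
        score = entropy_score if depth < switch_depth else accuracy_score
        a = best_split(rows, labels, attrs, score)
        node = {"attr": a, "children": {}}
        parts = {}
        for row, y in zip(rows, labels):
            parts.setdefault(row[a], []).append((row, y))
        rest = [b for b in attrs if b != a]
        for v, pairs in parts.items():
            rs, ys = zip(*pairs)
            node["children"][v] = build_tree(list(rs), list(ys), rest,
                                             depth + 1, switch_depth)
        return node

    def classify(node, row):
        # Follow branches until reaching a leaf label; unseen values yield None.
        while isinstance(node, dict):
            node = node["children"].get(row[node["attr"]])
        return node

    # Tiny usage example: attribute "a" perfectly predicts the label.
    rows = [{"a": 0, "b": 0}, {"a": 0, "b": 1}, {"a": 1, "b": 0}, {"a": 1, "b": 1}]
    labels = ["-", "-", "+", "+"]
    tree = build_tree(rows, labels, attrs=["a", "b"])
    print(classify(tree, {"a": 1, "b": 0}))  # prints "+"

Because a local criterion is consulted one node at a time, swapping it mid-construction requires no change to the surrounding top-down loop, which is what makes a switching evaluator cheap to implement.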

Editor information

Gordon McCalla

Copyright information

© 1996 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Koshiba, T. (1996). Decision tree learning system with switching evaluator. In: McCalla, G. (eds) Advances in Artificial Intelligence. Canadian AI 1996. Lecture Notes in Computer Science, vol 1081. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-61291-2_64

  • DOI: https://doi.org/10.1007/3-540-61291-2_64

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-61291-9

  • Online ISBN: 978-3-540-68450-3

  • eBook Packages: Springer Book Archive
