
Should decision trees be learned from examples or from decision rules?

  • Learning and Adaptive Systems I
  • Conference paper
Methodologies for Intelligent Systems (ISMIS 1993)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 689))


Abstract

A standard method for determining decision trees is to learn them from examples. A disadvantage of this approach is that once a decision tree is learned, it is difficult to modify it to suit different decision making situations. An attractive approach that avoids this problem is to learn and store knowledge in a declarative form, e.g., as decision rules, and then, whenever needed, generate from it a decision tree that is most suitable in any given situation. This paper describes an efficient method for this purpose, called AQDT-1, which takes decision rules generated by the learning system AQ15 and builds from them a decision tree optimized according to a given quality criterion. The method is able to build conventional decision trees, as well as the so-called “skip-node” trees, in which measuring attributes assigned to some nodes may be avoided. It is shown that “skip-node” trees can be significantly simpler than conventional ones. In experiments comparing AQDT-1 with C4.5, the former outperformed the latter both in predictive accuracy and in the simplicity of the generated decision trees.
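The core idea — deriving a tree from rules rather than from examples — can be illustrated with a toy sketch. This is an illustrative assumption, not the actual AQDT-1 procedure: the rule format (a dict of attribute tests plus a class) and the attribute-selection heuristic (pick the attribute tested by the most rules, a crude stand-in for AQDT-1's quality criterion) are both invented for the example.

```python
# Toy sketch of building a decision tree from decision rules instead of
# examples.  NOT the actual AQDT-1 algorithm: rule representation and the
# attribute-selection heuristic below are illustrative assumptions.

def best_attribute(rules):
    """Pick the attribute tested by the most rules (a stand-in for a
    real attribute-quality criterion)."""
    counts = {}
    for conds, _cls in rules:
        for attr in conds:
            counts[attr] = counts.get(attr, 0) + 1
    return max(counts, key=counts.get) if counts else None

def build_tree(rules):
    """Recursively turn a list of (conditions, class) rules into a tree:
    a leaf is a class label; an internal node is (attribute, branches)."""
    classes = {cls for _conds, cls in rules}
    if len(classes) == 1:
        return classes.pop()                 # all remaining rules agree
    attr = best_attribute(rules)
    if attr is None:
        return sorted(classes)[0]            # no tests left; break ties
    branches = {}
    values = {conds[attr] for conds, _ in rules if attr in conds}
    for v in values:
        # A rule is relevant to branch v if it requires attr == v,
        # or if it does not test attr at all.
        sub = [({a: x for a, x in conds.items() if a != attr}, cls)
               for conds, cls in rules
               if conds.get(attr, v) == v]
        branches[v] = build_tree(sub)
    return (attr, branches)

# Hypothetical rules for a toy weather domain (e.g., as AQ15 might output).
rules = [({"outlook": "sunny", "humidity": "high"}, "stay_in"),
         ({"outlook": "rain"}, "stay_in"),
         ({"outlook": "sunny", "humidity": "normal"}, "go_out")]
tree = build_tree(rules)
```

Because the tree is regenerated from the stored rules on demand, a different selection criterion (or different attribute measurement costs) can produce a different tree from the same declarative knowledge — the flexibility the abstract argues for.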


References

  • Bratko, I. & Lavrac, N. (Eds.), Progress in Machine Learning, Sigma Press, Wilmslow, England, 1987.

  • Bratko, I. & Kononenko, I., “Learning Diagnostic Rules from Incomplete and Noisy Data,” Interactions in AI and Statistics, B. Phelps (Ed.), Gower Technical Press, 1987.

  • Breiman, L., Friedman, J.H., Olshen, R.A. & Stone, C.J., “Classification and Regression Trees,” Belmont, California: Wadsworth Int. Group, 1984.


  • Cestnik, B. & Karalic, A., “The Estimation of Probabilities in Attribute Selection Measures for Decision Tree Induction,” Proceedings of the European Summer School on Machine Learning, July 22–31, Priory Corsendonk, Belgium, 1991.

  • Clark, P. & Niblett, T. “Induction in Noisy Domains,” Progress in Machine Learning, I. Bratko and N. Lavrac, (Eds.), Sigma Press, Wilmslow, 1987.


  • Hunt, E., Marin, J. & Stone, P., Experiments in Induction, New York: Academic Press, 1966.

  • Michalski, R.S., “AQVAL/1 — Computer Implementation of a Variable-Valued Logic System VL1 and Examples of its Application to Pattern Recognition,” Proceedings of the First International Joint Conference on Pattern Recognition, pp. 3–17, 1973.

  • Michalski, R.S., Mozetic, I., Hong, J. & Lavrac, N., “The Multi-Purpose Incremental Learning System AQ15 and Its Testing Application to Three Medical Domains,” Proceedings of AAAI-86, Philadelphia, PA, 1986.

  • Michalski, R.S., “Learning Flexible Concepts: Fundamental Ideas and a Method Based on Two-tiered Representation,” Machine Learning: An Artificial Intelligence Approach, Vol. III, Y. Kodratoff & R.S. Michalski (Eds.), Morgan Kaufmann, pp. 63–111, 1990.

  • Mingers, J., “An Empirical Comparison of Selection Measures for Decision-Tree Induction,” Machine Learning, Vol. 3, No. 4, pp. 319–342, Kluwer Academic Publishers, 1989.

  • Niblett, T. & Bratko, I., “Learning Decision Rules in Noisy Domains,” Proceedings of Expert Systems ’86, Brighton, Cambridge University Press, 1986.

  • Quinlan, J.R., “Discovering Rules by Induction from Large Collections of Examples,” Expert Systems in the Microelectronic Age, D. Michie (Ed.), Edinburgh University Press, 1979.

  • Quinlan, J.R., “Learning Efficient Classification Procedures and Their Application to Chess End Games,” R.S. Michalski, J.G. Carbonell and T.M. Mitchell, (Eds.), Machine Learning: An Artificial Intelligence Approach. Los Altos: Morgan Kaufmann, 1983.


  • Quinlan, J.R., “Induction of Decision Trees,” Machine Learning, Vol. 1, No. 1, Kluwer Academic Publishers, 1986.

  • Smyth, P., Goodman, R.M. & Higgins, C., “A Hybrid Rule-based/Bayesian Classifier,” Proceedings of ECAI 90, Stockholm, August, 1990.




Editor information

Jan Komorowski, Zbigniew W. Raś


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Imam, I.F., Michalski, R.S. (1993). Should decision trees be learned from examples or from decision rules?. In: Komorowski, J., Raś, Z.W. (eds) Methodologies for Intelligent Systems. ISMIS 1993. Lecture Notes in Computer Science, vol 689. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56804-2_37


  • DOI: https://doi.org/10.1007/3-540-56804-2_37

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-56804-9

  • Online ISBN: 978-3-540-47750-1

  • eBook Packages: Springer Book Archive
