Decision Tree Learning Using a Bayesian Approach at Each Node

  • Conference paper
Advances in Artificial Intelligence (Canadian AI 2009)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 5549)

Abstract

We explore the problem of learning decision trees using a Bayesian approach, called TREBBLE (TREe Building by Bayesian LEarning), in which a population of decision trees is generated by constructing trees using probability distributions at each node. Predictions are made either by using Bayesian Model Averaging to combine information from all the trees (TREBBLE-BMA) or by using the single most likely tree (TREBBLE-MAP), depending on what is appropriate for the particular application domain. We show on benchmark data sets that this method is more accurate than the traditional decision tree learning algorithm C4.5 and is as accurate as the Bayesian method SimTree while being much simpler to understand and implement.
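
As a concrete illustration of the approach the abstract describes, the sketch below grows a population of trees by sampling each node's split attribute from a probability distribution, then classifies either by averaging over all trees or by using the single best one. This is a minimal sketch in plain Python, not the paper's algorithm: the gain-proportional split distribution, the posterior weights, and all function names (sample_tree, bma_classify, map_classify) are assumptions made for the example.

```python
import math
import random
from collections import Counter

def entropy(labels):
    """Shannon entropy (in bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting on attr (standard ID3-style gain).
    rows are attribute->value dicts; labels are the matching class labels."""
    gain = entropy(labels)
    for value in {r[attr] for r in rows}:
        subset = [y for r, y in zip(rows, labels) if r[attr] == value]
        gain -= len(subset) / len(labels) * entropy(subset)
    return gain

def sample_tree(rows, labels, attrs):
    """Grow one tree top-down, but instead of greedily taking the best
    split, draw the split attribute from a distribution over candidates
    (here: proportional to information gain, one plausible choice)."""
    if len(set(labels)) == 1 or not attrs:
        return Counter(labels)                       # leaf = class counts
    weights = [information_gain(rows, labels, a) + 1e-9 for a in attrs]
    attr = random.choices(attrs, weights=weights)[0]
    rest = [a for a in attrs if a != attr]
    children = {}
    for value in {r[attr] for r in rows}:
        idx = [i for i, r in enumerate(rows) if r[attr] == value]
        children[value] = sample_tree([rows[i] for i in idx],
                                      [labels[i] for i in idx], rest)
    return (attr, children)

def predict(tree, row):
    """Walk to a leaf; return its class counts (empty if a value is unseen)."""
    while isinstance(tree, tuple):
        attr, children = tree
        tree = children.get(row[attr], Counter())
    return tree

def bma_classify(trees, weights, row):
    """TREBBLE-BMA style prediction (sketch): average each tree's leaf
    class distribution, weighted by that tree's posterior weight."""
    score = Counter()
    for tree, w in zip(trees, weights):
        leaf = predict(tree, row)
        total = sum(leaf.values()) or 1
        for cls, count in leaf.items():
            score[cls] += w * count / total
    return score.most_common(1)[0][0] if score else None

def map_classify(trees, weights, row):
    """TREBBLE-MAP style prediction (sketch): use only the single
    highest-weight tree."""
    best = trees[max(range(len(trees)), key=weights.__getitem__)]
    leaf = predict(best, row)
    return leaf.most_common(1)[0][0] if leaf else None

# Usage sketch: X is a list of attribute->value dicts, y the labels.
#   trees = [sample_tree(X, y, attrs) for _ in range(50)]
#   weights = [...]  # in the paper these come from the Bayesian posterior;
#                    # training accuracy would be a crude stand-in here.
```

In the paper the tree weights come from a Bayesian posterior over trees; the training-accuracy stand-in mentioned in the usage comments is only there to keep the sketch self-contained.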

In many application domains, such as help desks and medical diagnosis, a decision tree needs to be learned from a prior tree (provided by an expert) and some (usually small) amount of training data. We show how TREBBLE-MAP can be used to learn a single tree that performs better than using either the prior tree or the training data alone.
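
One way to picture such a combination, again purely as a hypothetical sketch rather than the paper's construction, is Dirichlet-style smoothing: the expert tree contributes pseudo-counts to each leaf, so predictions lean on the expert where data is scarce and on the data where it is plentiful. The function name blend_leaf and the prior_strength knob are invented for this example; it reuses predict and Counter from the sketch above.

```python
def blend_leaf(data_counts, prior_tree, row, prior_strength=5.0):
    """Mix observed leaf counts with pseudo-counts taken from the expert
    tree's class distribution for this row (Dirichlet-style smoothing).
    prior_strength is a hypothetical knob, not a parameter from the paper:
    larger values trust the expert more relative to the data."""
    prior = predict(prior_tree, row)
    total = sum(prior.values()) or 1
    blended = Counter(data_counts)
    for cls, count in prior.items():
        blended[cls] += prior_strength * count / total
    return blended
```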

This paper is based on work at IBM T.J. Watson Research Center, Hawthorne, NY 10532, USA.

References

  1. Anderson, J., Kline, P.: A learning system and its psychological implications. In: Proc. of the Sixth International Joint Conference on Artificial Intelligence, pp. 16–21 (1979)

  2. Asuncion, A., Newman, D.: UCI machine learning repository (2007)

  3. Bennett, K.P., Mangasarian, O.L.: Robust linear programming discrimination of two linearly inseparable sets. Optimization Methods and Software 1, 23–34 (1992)

  4. Bohanec, M., Rajkovic, V.: Knowledge acquisition and explanation for multi-attribute decision making. In: Proc. of the 8th Intl. Workshop on Expert Systems and their Applications, pp. 59–78 (1988)

  5. Breiman, L.: Random Forests. Machine Learning 45(1), 5–32 (2001)

  6. Chipman, H.A., George, E.I., McCulloch, R.E.: Bayesian CART Model Search. Journal of the American Statistical Association 93(443), 935–947 (1998)

  7. Denison, D.G., Mallick, B.K., Smith, A.F.: Bayesian CART. Biometrika 85(2), 363–377 (1998)

  8. Duch, W., Adamczak, R., Grabczewski, K., Ishikawa, M., Ueda, H.: Extraction of crisp logical rules using constrained backpropagation networks - comparison of two new approaches. In: Proc. of the European Symposium on Artificial Neural Networks, pp. 109–114 (1997)

  9. Ho, T.K.: Random Decision Forest. In: Proc. of the 3rd Int’l. Conf. on Document Analysis and Recognition, pp. 278–282 (1995)

  10. Kalles, D., Morris, T.: Efficient Incremental Induction of Decision Trees. Machine Learning 24(3), 231–242 (1996)

  11. Quinlan, J.R.: C4.5: Programs for Machine Learning. Morgan Kaufmann Publishers, San Francisco (1993)

  12. Schlimmer, J.C.: Concept acquisition through representational adjustment. PhD thesis (1987)

  13. Shultz, T., Mareschal, D., Schmidt, W.: Modeling cognitive development on balance scale phenomena. Machine Learning 16, 59–88 (1994)

  14. Utgoff, P.E.: Incremental Induction of Decision Trees. Machine Learning 4, 161–186 (1989)

  15. Wu, Y.: Bayesian Tree Models. PhD thesis (2006)

  16. Wu, Y., Tjelmeland, H., West, M.: Bayesian CART: Prior Specification and Posterior Simulation. Journal of Computational and Graphical Statistics 16(1), 44–66 (2007)

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Andronescu, M., Brodie, M. (2009). Decision Tree Learning Using a Bayesian Approach at Each Node. In: Gao, Y., Japkowicz, N. (eds) Advances in Artificial Intelligence. Canadian AI 2009. Lecture Notes in Computer Science, vol 5549. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01818-3_4

  • DOI: https://doi.org/10.1007/978-3-642-01818-3_4

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-01817-6

  • Online ISBN: 978-3-642-01818-3
