
A simple algorithm for predicting nearly as well as the best pruning labeled with the best prediction values of a decision tree

  • Session 11
  • Conference paper
Algorithmic Learning Theory (ALT 1997)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 1316)


Abstract

Given an unpruned decision tree, Helmbold and Schapire gave an on-line prediction algorithm whose performance is not much worse than that of the best pruning of the given tree. In this paper, based on the observation that in the “batch” setting, where all the data to be predicted are given in advance, the best pruning can be found efficiently by dynamic programming, a new on-line prediction algorithm is constructed. Although its loss bound is shown to be slightly weaker than that of Helmbold and Schapire's algorithm, the new algorithm is so simple and general that it can be applied to many on-line optimization problems that are solved by dynamic programming in the batch setting. We also explore algorithms that are competitive not only with the best pruning but also with the best prediction values. In this setting, a greatly simplified algorithm is given, and it is shown to generalize easily to the case where, instead of a decision tree, the data are classified in some arbitrary but fixed manner.
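To make the batch-setting observation concrete, here is a minimal sketch of the kind of bottom-up dynamic program the abstract alludes to, not the paper's own algorithm: assume each node stores the loss it would suffer if the tree were pruned at that node (its prediction value applied to every example reaching it); the best pruning then follows from one recursive pass. The Node class and the local_loss field are illustrative assumptions, not the paper's notation.

    # Hedged sketch in Python: Node and local_loss are hypothetical names.
    # local_loss(u) = loss incurred if the tree is pruned at u, i.e. u's
    # prediction value is used for every example that reaches u.
    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Node:
        local_loss: float
        children: List["Node"] = field(default_factory=list)

    def best_pruning(u: Node) -> Tuple[float, List[Node]]:
        """Minimum loss over all prunings of the subtree at u, with its leaf set."""
        if not u.children:                    # a leaf: keeping it is the only option
            return u.local_loss, [u]
        # Either prune at u, or keep u internal and prune each child optimally.
        sub = [best_pruning(c) for c in u.children]
        keep_loss = sum(loss for loss, _ in sub)
        if u.local_loss <= keep_loss:         # pruning here is at least as good
            return u.local_loss, [u]
        leaves: List[Node] = []
        for _, child_leaves in sub:
            leaves.extend(child_leaves)
        return keep_loss, leaves

    # Example: pruning at the right child would cost 3.0, but keeping it
    # internal costs only 0.5 + 1.0, so the best pruning has total loss 2.5.
    t = Node(5.0, [Node(1.0), Node(3.0, [Node(0.5), Node(1.0)])])
    loss, leaves = best_pruning(t)            # loss == 2.5, three leaves kept

The sketch covers only the batch computation; the paper's contribution is an on-line counterpart of this recurrence whose cumulative loss is provably close to that of the best pruning.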


References

  1. N. Bshouty. Exact learning via the monotone theory. Information and Computation, 123:146–153, 1995.

  2. A. Blum, M. Furst, J. Jackson, M. Kearns, Y. Mansour and S. Rudich. Weakly learning DNF and characterizing statistical query learning using Fourier analysis. Proc. 26th STOC, pp. 253–262, 1994.

  3. Y. Freund, R. Schapire, Y. Singer and M. Warmuth. Using and combining predictors that specialize. Proc. 29th STOC, 1997.

  4. D. Helmbold and R. Schapire. Predicting nearly as well as the best pruning of a decision tree. Proc. 8th COLT, pp. 61–68, 1995.

  5. M. Kearns and Y. Mansour. On the boosting ability of top-down decision tree learning algorithms. Proc. 28th STOC, pp. 459–468, 1996.

  6. N. Littlestone. Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2(2):285–318, 1988.

  7. N. Littlestone and M. K. Warmuth. The weighted majority algorithm. Information and Computation, 108:212–261, 1994.

  8. N. Cesa-Bianchi, Y. Freund, D. Helmbold, D. Haussler and M. Warmuth. How to use expert advice. Proc. 25th STOC, pp. 382–391, 1993.

  9. J. Quinlan. C4.5: Programs for Machine Learning. Morgan Kaufmann, 1993.



Editor information

Ming Li, Akira Maruoka


Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Takimoto, E., Hirai, K., Maruoka, A. (1997). A simple algorithm for predicting nearly as well as the best pruning labeled with the best prediction values of a decision tree. In: Li, M., Maruoka, A. (eds) Algorithmic Learning Theory. ALT 1997. Lecture Notes in Computer Science, vol 1316. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-63577-7_56


  • DOI: https://doi.org/10.1007/3-540-63577-7_56


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63577-2

  • Online ISBN: 978-3-540-69602-5

  • eBook Packages: Springer Book Archive
