
On the Size of a Classification Tree

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 2734))

Abstract

We discuss estimates for the misclassification rate of a classification tree in terms of the size of the learning set, following ideas introduced in [3]. We develop the mathematical ideas of [3] further, extending the analysis to the case of an arbitrary finite number of classes.
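To make the abstract's subject concrete, here is a small illustrative sketch (not the paper's analysis): it estimates, by simulation, how the misclassification rate of the simplest possible classification tree, a one-split stump, behaves as the learning set grows. The data-generating rule, the 10% label noise, and all function names are assumptions invented for this example.

```python
import random

def learn_stump(data):
    # data: list of (x, y) pairs; choose the threshold t minimizing
    # training errors for the rule "predict y = (x >= t)".
    # This is a classification tree with a single split.
    best_t, best_err = 0.5, len(data) + 1
    for t in sorted(x for x, _ in data):
        err = sum((x >= t) != y for x, y in data)
        if err < best_err:
            best_t, best_err = t, err
    return best_t

def misclassification_rate(t, test):
    # fraction of test points the learned rule gets wrong
    return sum((x >= t) != y for x, y in test) / len(test)

def sample(n, rng, noise=0.1):
    # x uniform on [0, 1]; true rule y = (x >= 0.5),
    # with each label flipped independently with probability `noise`
    out = []
    for _ in range(n):
        x = rng.random()
        y = (x >= 0.5) != (rng.random() < noise)
        out.append((x, y))
    return out

rng = random.Random(0)
test = sample(20000, rng)  # large held-out set to estimate the true rate
rates = {}
for n in (10, 100, 1000):
    t = learn_stump(sample(n, rng))
    rates[n] = misclassification_rate(t, test)
    print(n, round(rates[n], 3))
```

As the learning set grows, the estimated rate approaches the Bayes rate (the 10% label noise), which is the kind of size-dependent behavior the paper studies analytically.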

Partly supported by a grant from the University of Rome “La Sapienza”.


References

  1. Breiman, L.: Bagging Predictors. Machine Learning 24 (1996) 123–140

  2. Breiman, L.: Arcing Classifiers (with discussion). Ann. Stat. 26 (1998) 801–824

  3. Breiman, L., Friedman, J.H., Olshen, R.A., Stone, C.J.: Classification and Regression Trees. Chapman & Hall/CRC, Boca Raton (1993)

  4. Feller, W.: An Introduction to Probability Theory and Its Applications, Vol. I–II. Wiley, New York (1971)

  5. Freund, Y., Schapire, R.E.: Experiments with a New Boosting Algorithm. In: Machine Learning: Proceedings of the Thirteenth International Conference (1996)

  6. Friedman, J.H.: On Bias, Variance, 0/1-Loss, and the Curse-of-Dimensionality. Data Mining and Knowledge Discovery 1 (1997) 55–77

  7. Geman, S., Bienenstock, E., Doursat, R.: Neural Networks and the Bias/Variance Dilemma. Neural Computation 4 (1992) 1–58

  8. Tibshirani, R.: Bias, Variance and Prediction Error for Classification Rules. Technical Report, Statistics Department, University of Toronto (1996)

  9. Weiss, S.M., Indurkhya, N.: Predictive Data Mining: A Practical Guide. Morgan Kaufmann Publishers, San Francisco (1998)


Copyright information

© 2003 Springer-Verlag Berlin Heidelberg


Cite this paper

Scaringella, A. (2003). On the Size of a Classification Tree. In: Perner, P., Rosenfeld, A. (eds) Machine Learning and Data Mining in Pattern Recognition. MLDM 2003. Lecture Notes in Computer Science, vol 2734. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45065-3_6


  • DOI: https://doi.org/10.1007/3-540-45065-3_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40504-7

  • Online ISBN: 978-3-540-45065-8
