
Progressive Boosting for Classifier Committee Learning

  • Conference paper in Applied Computing (AACC 2004)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 3285))


Abstract

Most applications of artificial intelligence to tasks of practical importance are based on constructing a model of the knowledge used by a human expert. In a classification model, the connection between classes and properties can be defined by something as simple as a flowchart or as complex and unstructured as a procedures manual. Classifier committee learning methods generate multiple classifiers to form a committee by repeated application of a single base learning algorithm; the committee members then vote to decide the final classification. Two such methods, bagging and boosting, improve the predictive power of classifier learning systems. This paper studies a different approach: progressive boosting of decision trees. Instead of sampling the same number of data points at each boosting iteration t, our progressive boosting algorithm draws n_t data points according to a sampling schedule. An empirical evaluation of a variant of this method shows that progressive boosting can significantly reduce the error rate of decision tree learning; on average it is more accurate than bagging and boosting.
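The idea in the abstract can be sketched as an AdaBoost-style loop in which iteration t trains its base learner on a weighted sample of size n_t rather than the full training set. The paper's exact sampling schedule and base learner are not given here, so the linear schedule n_t = n0 + t(N - n0)/(T - 1), the decision-stump base learner, and all function names below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def train_stump(X, y):
    """Fit a one-feature threshold stump minimising training error on (X, y)."""
    best = None
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for sign in (1, -1):
                pred = np.where(X[:, j] <= thr, sign, -sign)
                err = np.mean(pred != y)
                if best is None or err < best[0]:
                    best = (err, j, thr, sign)
    return best[1:]  # (feature index, threshold, sign)

def stump_predict(stump, X):
    j, thr, sign = stump
    return np.where(X[:, j] <= thr, sign, -sign)

def progressive_boost(X, y, T=10, n0=None, seed=0):
    """Boosted committee where iteration t trains on n_t points drawn from
    the current weight distribution; n_t grows linearly from n0 to N
    (an assumed schedule).  Labels y must be in {-1, +1}."""
    N = len(y)
    n0 = n0 or max(N // 4, 1)
    w = np.full(N, 1.0 / N)            # boosting weights over all N points
    committee = []
    rng = np.random.default_rng(seed)
    for t in range(T):
        n_t = int(round(n0 + t * (N - n0) / max(T - 1, 1)))  # sampling schedule
        idx = rng.choice(N, size=n_t, replace=True, p=w)     # weighted sample
        stump = train_stump(X[idx], y[idx])
        pred = stump_predict(stump, X)                       # evaluate on all data
        err = np.clip(np.sum(w[pred != y]), 1e-10, 0.5 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)                # AdaBoost vote weight
        w *= np.exp(-alpha * y * pred)                       # reweight misclassified
        w /= w.sum()
        committee.append((alpha, stump))
    return committee

def committee_predict(committee, X):
    """Weighted vote of all committee members."""
    votes = sum(alpha * stump_predict(s, X) for alpha, s in committee)
    return np.sign(votes)
```

Because early iterations see small samples, they are cheap and coarse; later iterations, trained on larger samples drawn from a distribution already concentrated on hard points, refine the committee. That is the intuition behind the schedule, under the assumptions stated above.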




Copyright information

© 2004 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Sadid, M.W.H., Mondal, M.N.I., Alam, M.S., Sohail, A.S.M., Ahmed, B. (2004). Progressive Boosting for Classifier Committee Learning. In: Manandhar, S., Austin, J., Desai, U., Oyanagi, Y., Talukder, A.K. (eds) Applied Computing. AACC 2004. Lecture Notes in Computer Science, vol 3285. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30176-9_7


  • DOI: https://doi.org/10.1007/978-3-540-30176-9_7

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-23659-7

  • Online ISBN: 978-3-540-30176-9

  • eBook Packages: Springer Book Archive
