Abstract
Most applications of artificial intelligence to tasks of practical importance are based on constructing a model of the knowledge used by a human expert. In a classification model, the connection between classes and properties can be defined by something as simple as a flowchart or as complex and unstructured as a procedures manual. Classifier committee learning methods generate multiple classifiers, which together form a committee, by repeated application of a single base learning algorithm; the committee members then vote to decide the final classification. Bagging and boosting are two such methods for improving the predictive power of classifier learning systems. This paper studies a different approach: progressive boosting of decision trees. Instead of sampling the same number of data points at each boosting iteration t, our progressive boosting algorithm draws n_t data points according to a sampling schedule. An empirical evaluation of a variant of this method shows that progressive boosting can significantly reduce the error rate of decision tree learning; on average it is more accurate than bagging and boosting.
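To make the idea concrete, the following is a minimal sketch of boosting with a progressive sampling schedule, in the spirit of the abstract. It is not the authors' algorithm: the toy data, the linearly growing schedule n_t = 4 + 2t, and the decision-stump base learner are all illustrative assumptions. Each round draws n_t points (with replacement, weighted by the current boosting distribution) rather than a fixed-size sample, then updates weights as in standard AdaBoost.

```python
import math
import random

random.seed(0)

# Hypothetical 1-D toy data with labels in {-1, +1}.
X = [0.1, 0.3, 0.45, 0.6, 0.75, 0.9, 0.2, 0.85]
y = [-1, -1, -1, +1, +1, +1, -1, +1]

def train_stump(xs, ys):
    """Fit a threshold stump minimising training error on (xs, ys)."""
    best = None
    for thr in sorted(set(xs)):
        for sign in (+1, -1):
            err = sum(1 for x, t in zip(xs, ys)
                      if (sign if x >= thr else -sign) != t)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    _, thr, sign = best
    return lambda x: sign if x >= thr else -sign

def progressive_boost(X, y, T=5, schedule=lambda t: 4 + 2 * t):
    """AdaBoost-style committee, but round t trains on n_t drawn points."""
    n = len(X)
    w = [1.0 / n] * n                   # boosting distribution over the data
    committee = []                      # (alpha, hypothesis) pairs
    for t in range(T):
        n_t = schedule(t)               # progressive sample size, an assumed schedule
        idx = random.choices(range(n), weights=w, k=n_t)
        h = train_stump([X[i] for i in idx], [y[i] for i in idx])
        eps = sum(w[i] for i in range(n) if h(X[i]) != y[i])
        if eps >= 0.5:                  # no better than chance: discard this round
            continue
        eps = max(eps, 1e-10)           # avoid log(0) when the stump is perfect
        alpha = 0.5 * math.log((1 - eps) / eps)
        committee.append((alpha, h))
        # Reweight: misclassified points gain weight, as in AdaBoost.
        w = [wi * math.exp(-alpha * y[i] * h(X[i])) for i, wi in enumerate(w)]
        s = sum(w)
        w = [wi / s for wi in w]

    def predict(x):
        vote = sum(a * h(x) for a, h in committee)
        return 1 if vote >= 0 else -1
    return predict

clf = progressive_boost(X, y)
print([clf(x) for x in X])
```

Early rounds see small samples, so weak committee members are cheap; later rounds, where the weight distribution has concentrated on hard points, get larger samples. The schedule function is the only change relative to plain AdaBoost.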
© 2004 Springer-Verlag Berlin Heidelberg
Cite this paper
Sadid, M.W.H., Mondal, M.N.I., Alam, M.S., Sohail, A.S.M., Ahmed, B. (2004). Progressive Boosting for Classifier Committee Learning. In: Manandhar, S., Austin, J., Desai, U., Oyanagi, Y., Talukder, A.K. (eds) Applied Computing. AACC 2004. Lecture Notes in Computer Science, vol 3285. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-30176-9_7
Print ISBN: 978-3-540-23659-7
Online ISBN: 978-3-540-30176-9