Abstract
Boosting algorithms build a strong ensemble classifier by aggregating a sequence of weak hypotheses. In this paper, multiple TAN (tree-augmented naive Bayes) classifiers generated by GTAN are combined by a boosting-based method called Boosting-MultiTAN. This combined TAN classifier is compared with Boosting-BAN, which applies boosting to a combination of BAN (Bayesian-network-augmented naive Bayes) classifiers. We conduct an empirical study comparing the performance of the two algorithms, measured by overall test accuracy, on ten real data sets. The experimental results show that Boosting-BAN achieves higher classification accuracy on most data sets, while Boosting-MultiTAN performs better on the others. These results suggest that boosting algorithms deserve more attention in the machine learning and data mining communities.
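To make the boosting idea concrete, the following is a minimal sketch of AdaBoost-style boosting in the spirit of Freund and Schapire: each round fits a weak learner to weighted examples, then re-weights the data to emphasize mistakes, and the final classifier is a weighted vote. This is an illustration only — it uses decision stumps as the weak learner, not the paper's GTAN-generated TAN classifiers, and all function names here are hypothetical.

```python
import numpy as np

def stump_fit(X, y, w):
    """Find the single-feature threshold stump minimizing weighted error.

    Returns (feature index, threshold, polarity, weighted error)."""
    best = (0, 0.0, 1, np.inf)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - t) >= 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, t, pol, err)
    return best

def stump_predict(stump, X):
    j, t, pol, _ = stump
    return np.where(pol * (X[:, j] - t) >= 0, 1, -1)

def adaboost(X, y, rounds=10):
    """Train a boosted ensemble of stumps; labels y must be in {-1, +1}."""
    n = len(y)
    w = np.full(n, 1.0 / n)                    # start with uniform weights
    ensemble = []
    for _ in range(rounds):
        stump = stump_fit(X, y, w)
        err = max(stump[3], 1e-10)             # guard against log(0)
        alpha = 0.5 * np.log((1 - err) / err)  # weight of this hypothesis
        pred = stump_predict(stump, X)
        w *= np.exp(-alpha * y * pred)         # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, stump))
    return ensemble

def predict(ensemble, X):
    """Classify by the sign of the weighted vote over all weak hypotheses."""
    score = sum(a * stump_predict(s, X) for a, s in ensemble)
    return np.where(score >= 0, 1, -1)
```

On data with alternating labels that no single stump can fit, the weighted vote of several stumps can still reach zero training error, which is the effect that motivates combining many weak TAN or BAN classifiers.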
© 2011 Springer-Verlag Berlin Heidelberg
Sun, X., Zhou, H. (2011). An Empirical Comparison of Two Boosting Algorithms on Real Data Sets Based on Analysis of Scientific Materials. In: Jin, D., Lin, S. (eds.) Advances in Computer Science, Intelligent System and Environment. Advances in Intelligent and Soft Computing, vol. 105. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-23756-0_53
Print ISBN: 978-3-642-23755-3
Online ISBN: 978-3-642-23756-0