Elitist and Ensemble Strategies for Cascade Generalization

Huimin Zhao, Atish P. Sinha, Sudha Ram
Copyright: © 2006 | Volume: 17 | Issue: 3 | Pages: 16
ISSN: 1063-8016 | EISSN: 1533-8010 | EISBN13: 9781615200498 | DOI: 10.4018/jdm.2006070105
Cite Article

MLA

Zhao, Huimin, et al. "Elitist and Ensemble Strategies for Cascade Generalization." JDM vol.17, no.3 2006: pp.92-107. http://doi.org/10.4018/jdm.2006070105

Abstract

Several methods have been proposed for cascading other classification algorithms with decision tree learners to alleviate the representational bias of decision trees and, potentially, to improve classification accuracy. Such cascade generalization of decision trees increases the flexibility of the decision boundaries between classes and promotes better fitting of the training data. However, more flexible models do not necessarily yield more predictive power: because of potential overfitting, the true classification accuracy on test data may not increase. Recently, a generic method for cascade generalization has been proposed. The method uses a parameter, the maximum cascading depth, to constrain the degree to which other classification algorithms are cascaded with decision tree learners. A method for efficiently learning a collection (i.e., a forest) of generalized decision trees, each with other classification algorithms cascaded to a particular depth, has also been developed. In this article, we propose several new strategies, including elitist and ensemble (weighted or unweighted), for using the various decision trees in such a collection in the prediction phase. Our empirical evaluation using 32 data sets from the UCI machine learning repository shows that, on average, the elitist strategy outperforms the weighted full ensemble strategy, which, in turn, outperforms the unweighted full ensemble strategy. However, no strategy is universally superior across all applications. Since the same training process can be used to evaluate the various strategies, we recommend that several promising strategies be evaluated and compared before selecting the one to use for a given application.
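The prediction-phase strategies named in the abstract can be illustrated with a minimal sketch. Here the trained forest is assumed to be a list of (model, validation_accuracy) pairs; the function names, toy models, and accuracy values are purely illustrative, not the authors' implementation:

```python
from collections import Counter

def predict_elitist(forest, x):
    """Elitist: predict with only the tree whose validation accuracy is best."""
    best_model, _ = max(forest, key=lambda pair: pair[1])
    return best_model(x)

def predict_ensemble(forest, x, weighted=True):
    """Full ensemble: combine the votes of every tree in the collection.
    If weighted, each tree's vote counts in proportion to its validation
    accuracy; otherwise all trees vote equally."""
    votes = Counter()
    for model, acc in forest:
        votes[model(x)] += acc if weighted else 1.0
    return votes.most_common(1)[0][0]

# Toy forest: constant classifiers standing in for generalized decision
# trees cascaded to different depths, with made-up validation accuracies.
forest = [(lambda x: "A", 0.80), (lambda x: "B", 0.90), (lambda x: "A", 0.85)]

print(predict_elitist(forest, None))                   # best single tree -> "B"
print(predict_ensemble(forest, None))                  # weighted votes: A=1.65 > B=0.90 -> "A"
print(predict_ensemble(forest, None, weighted=False))  # unweighted votes: A=2 > B=1 -> "A"
```

The toy example also shows why no strategy dominates: the elitist and ensemble strategies can disagree on the same input, so which one generalizes better depends on the application, as the empirical results above indicate.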
