Abstract
Several methods of constructing and combining an ensemble of classifiers (e.g., Bagging, Boosting) have recently been shown to improve the accuracy of commonly used classifiers such as decision trees and neural networks. This accuracy gain, however, comes at the cost of substantially increased storage and computation, an overhead that can limit the utility of ensemble methods in real-world applications. In this Letter, we propose a learning approach that allows a single neural network to approximate a given ensemble of classifiers. Experiments on a large number of real-world data sets show that this approach can yield substantial savings in storage and computation while maintaining accuracy close to that of the entire ensemble.
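To make the idea concrete, the following is a minimal, hypothetical sketch rather than the authors' exact procedure (which is not detailed in this abstract): an ensemble is trained first, the training examples are then relabeled with the ensemble's predictions, and a single network is fit to those ensemble-produced labels so that it approximates the ensemble's decision function. The dataset, scikit-learn estimators, and hyperparameters below are illustrative assumptions.

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Build the (storage- and computation-heavy) ensemble: bagged decision trees.
ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=50, random_state=0)
ensemble.fit(X_train, y_train)

# 2. Relabel the training points with the ensemble's own predictions
#    (additional unlabeled or synthetic points could be relabeled the same way).
y_ensemble = ensemble.predict(X_train)

# 3. Train a single small network on the ensemble-labeled data so that
#    one model stands in for the whole ensemble at prediction time.
net = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
net.fit(X_train, y_ensemble)

print("ensemble test accuracy:  ", ensemble.score(X_test, y_test))
print("single-net test accuracy:", net.score(X_test, y_test))

Only the single network needs to be stored and evaluated after step 3, which is the source of the storage and computation savings claimed above.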
Cite this article
Zeng, X., Martinez, T.R. Using a Neural Network to Approximate an Ensemble of Classifiers. Neural Processing Letters 12, 225–237 (2000). https://doi.org/10.1023/A:1026530200837