Using a Neural Network to Approximate an Ensemble of Classifiers

X. Zeng & T. R. Martinez · Neural Processing Letters 12, 225–237 (2000)

Abstract

Several methods (e.g., Bagging, Boosting) for constructing and combining an ensemble of classifiers have recently been shown to improve the accuracy of commonly used classifiers (e.g., decision trees, neural networks). The accuracy gain, however, comes at the cost of substantially higher storage and computation requirements, which can limit the utility of these methods in real-world applications. In this Letter, we propose a learning approach that allows a single neural network to approximate a given ensemble of classifiers. Experiments on a large number of real-world data sets show that this approach can substantially reduce storage and computation while maintaining accuracy close to that of the entire ensemble.
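
The idea in the abstract can be read as a form of model compression: first build the ensemble, then train one network to reproduce the ensemble's outputs. Below is a minimal sketch of that pattern, assuming scikit-learn, a bagged decision-tree ensemble, and an MLP as the single network; the dataset, model choices, and relabeling step are illustrative assumptions, not the authors' exact procedure.

```python
# Minimal sketch (assumptions): scikit-learn, a bagged decision-tree
# ensemble, and an MLP mimic. Not the authors' exact procedure.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# 1. Construct the ensemble (Bagging over decision trees).
ensemble = BaggingClassifier(DecisionTreeClassifier(),
                             n_estimators=50, random_state=0)
ensemble.fit(X_train, y_train)

# 2. Relabel the training inputs with the ensemble's own predictions,
#    so the single network is trained to match the ensemble's outputs.
y_mimic = ensemble.predict(X_train)

# 3. Train one neural network to approximate the ensemble.
mimic = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000,
                      random_state=0)
mimic.fit(X_train, y_mimic)

# At prediction time only the single network is stored and evaluated.
print("ensemble test accuracy:", ensemble.score(X_test, y_test))
print("mimic test accuracy:   ", mimic.score(X_test, y_test))
```

In practice one might also label additional unlabeled or synthetically sampled points with the ensemble to give the mimic network more training data; the sketch above keeps only the core relabel-and-retrain step.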

Cite this article

Zeng, X., Martinez, T.R. Using a Neural Network to Approximate an Ensemble of Classifiers. Neural Processing Letters 12, 225–237 (2000). https://doi.org/10.1023/A:1026530200837
