Performance Analysis of Classifier Ensembles: Neural Networks Versus Nearest Neighbor Rule

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 4477)

Abstract

We compare the performance, in terms of predictive accuracy and processing time, of several neural network ensembles with that of nearest neighbor classifier ensembles. As connectionist models, the multilayer perceptron and the modular neural network are employed. Experiments on several real-world data sets show a certain superiority of the nearest-neighbor-based schemes with respect to both accuracy and computing time. Among the neural network ensembles, the multilayer perceptron behaves better than the modular networks.
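
To make the comparison concrete, the following is a minimal sketch, assuming scikit-learn, of how such an experiment might be set up: a bagged multilayer-perceptron ensemble versus a bagged 1-NN ensemble, each timed and scored on a held-out test set. The data set, ensemble size, and all hyperparameters are illustrative choices rather than the authors' actual configuration, and the modular (mixture-of-experts) network is omitted because scikit-learn offers no off-the-shelf counterpart.

```python
# Illustrative comparison (not the paper's exact protocol): a bagged MLP
# ensemble versus a bagged 1-NN ensemble, scored on accuracy and timed.
import time

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in for the paper's data sets
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

ensembles = {
    "MLP ensemble": BaggingClassifier(
        # Scaling the inputs helps the MLP converge; one hidden layer of
        # 20 units and 9 ensemble members are arbitrary illustrative settings.
        make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(20,), max_iter=500)),
        n_estimators=9,
    ),
    "1-NN ensemble": BaggingClassifier(
        KNeighborsClassifier(n_neighbors=1), n_estimators=9
    ),
}

for name, clf in ensembles.items():
    start = time.perf_counter()
    clf.fit(X_tr, y_tr)                # training dominates the MLPs' cost;
    accuracy = clf.score(X_te, y_te)   # querying dominates the 1-NN's cost
    elapsed = time.perf_counter() - start
    print(f"{name}: accuracy = {accuracy:.3f}, time = {elapsed:.2f} s")
```

Bagging is only one way to build the component classifiers; the paper's own resampling and combination methods may differ.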

Editor information

Joan Martí, José Miguel Benedí, Ana Maria Mendonça, Joan Serrat

Copyright information

© 2007 Springer Berlin Heidelberg

About this paper

Cite this paper

Valdovinos, R.M., Sánchez, J.S. (2007). Performance Analysis of Classifier Ensembles: Neural Networks Versus Nearest Neighbor Rule. In: Martí, J., Benedí, J.M., Mendonça, A.M., Serrat, J. (eds) Pattern Recognition and Image Analysis. IbPRIA 2007. Lecture Notes in Computer Science, vol 4477. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-72847-4_15

  • DOI: https://doi.org/10.1007/978-3-540-72847-4_15

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-72846-7

  • Online ISBN: 978-3-540-72847-4

  • eBook Packages: Computer Science, Computer Science (R0)
