
Ensembles of ARTMAP-based neural networks: an experimental study


Abstract

ARTMAP-based models are neural networks that use a match-based learning procedure. Their main advantage over error-based models, such as the Multi-Layer Perceptron, is a significantly shorter learning time. This feature is extremely important in complex systems that require several models, such as ensembles or committees, since it yields robust and fast classifiers. Several extensions of the ARTMAP model have been proposed, such as ARTMAP-IC and RePART, among others. Aiming to add a further contribution to the ARTMAP context, this paper presents an analysis of ARTMAP-based models in ensemble systems. This analysis has two main goals: to analyze the influence of the RePART model in ensemble systems, and to detect any relation between diversity and accuracy in ensemble systems in order to exploit this relation in the design of such systems.
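As a minimal illustrative sketch (not taken from the paper), the snippet below shows how ensemble "diversity" and "accuracy" can be quantified from member predictions: the pairwise disagreement measure and plain majority voting are standard, commonly used choices, and all variable names are illustrative assumptions.

```python
# Illustrative sketch: pairwise disagreement (a common diversity measure)
# and majority-vote accuracy for an ensemble of classifiers.
from itertools import combinations
import numpy as np

def pairwise_disagreement(predictions):
    """Mean fraction of samples on which each pair of ensemble members disagrees."""
    pairs = combinations(range(len(predictions)), 2)
    return np.mean([np.mean(predictions[i] != predictions[j]) for i, j in pairs])

def majority_vote_accuracy(predictions, labels):
    """Accuracy of the ensemble that outputs the most frequent vote per sample."""
    votes = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, predictions)
    return np.mean(votes == labels)

# Toy example: 3 ensemble members, 6 samples, binary class labels.
labels = np.array([0, 1, 1, 0, 1, 0])
predictions = np.array([
    [0, 1, 1, 0, 0, 0],   # member 1
    [0, 1, 0, 0, 1, 1],   # member 2
    [1, 1, 1, 0, 1, 0],   # member 3
])
print("diversity (mean disagreement):", pairwise_disagreement(predictions))
print("ensemble accuracy (majority vote):", majority_vote_accuracy(predictions, labels))
```

The paper's experiments use ARTMAP-based networks as the ensemble members; the sketch only shows how a diversity measure and a combined accuracy could be computed once member predictions are available.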



Author information

Correspondence to Anne M. P. Canuto.


About this article

Cite this article

Canuto, A.M.P., Santos, A.M. & Vargas, R.R. Ensembles of ARTMAP-based neural networks: an experimental study. Appl Intell 35, 1–17 (2011). https://doi.org/10.1007/s10489-009-0199-2
