Construction of classifier ensembles by means of artificial immune systems

Journal of Heuristics

Abstract

This paper presents the application of Artificial Immune Systems to the design of classifier ensembles. Ensembles of classifiers are an appealing alternative to single classifiers when facing difficult problems, as they are generally able to achieve better performance in terms of both learning and generalisation error.

Several papers have shown that the processes of classifier design and combination must be related in order to obtain better ensembles. Artificial Immune Systems are a recent paradigm inspired by the immune systems of animals. The features of this paradigm make it well suited to the design of systems in which many components must cooperate to solve a given task. Classifier ensemble design falls into this group, since the cooperation of the individual classifiers improves the performance of the overall system.

This paper studies the viability of Artificial Immune Systems for ensemble design. We construct a population of classifiers that is evolved using an Artificial Immune algorithm, and from this population several different ensembles can be extracted. These ensembles compare favourably with ensembles obtained using standard methods on 35 real-world classification problems from the UCI Machine Learning Repository.
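The overall scheme described above (evolve a population of classifiers with an immune-inspired algorithm, then extract an ensemble from that population) can be illustrated with a minimal CLONALG-style sketch. Everything below is illustrative and hedged: the one-dimensional threshold classifiers, the toy dataset, and the mutation schedule are assumptions made for the example, not the paper's actual method, classifier model, or UCI benchmarks.

```python
import random

# Toy 1-D dataset (hypothetical, not one of the paper's UCI problems):
# the true label is 1 exactly when x > 0.5.
random.seed(0)
X = [random.random() for _ in range(200)]
y = [1 if x_ > 0.5 else 0 for x_ in X]

def accuracy(threshold, X, y):
    """Affinity (fitness) of a threshold classifier: predict 1 iff x > threshold."""
    correct = sum((x_ > threshold) == bool(label) for x_, label in zip(X, y))
    return correct / len(y)

def clonalg(X, y, pop_size=20, generations=30, clones_per=5):
    """CLONALG-style loop: clone each antibody (classifier), hypermutate the
    clones with a rate that decreases as affinity increases, and keep the best
    of each lineage. Parameters are arbitrary illustrative choices."""
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for t in pop:
            fit = accuracy(t, X, y)
            # hypermutation: low-affinity antibodies mutate more strongly
            rate = 0.3 * (1.0 - fit) + 0.01
            clones = [min(1.0, max(0.0, t + random.gauss(0, rate)))
                      for _ in range(clones_per)]
            new_pop.append(max(clones + [t], key=lambda c: accuracy(c, X, y)))
        pop = new_pop
    return sorted(pop, key=lambda t: -accuracy(t, X, y))

def ensemble_predict(thresholds, x):
    """Majority vote of the extracted ensemble of threshold classifiers."""
    votes = sum(1 for t in thresholds if x > t)
    return 1 if votes * 2 > len(thresholds) else 0

pop = clonalg(X, y)
ensemble = pop[:5]  # extract one ensemble from the evolved population
preds = [ensemble_predict(ensemble, x_) for x_ in X]
acc = sum(p == label for p, label in zip(preds, y)) / len(y)
print(f"ensemble training accuracy: {acc:.2f}")
```

Note the two separable stages this sketch keeps explicit: the immune algorithm evolves the whole population jointly, and the ensemble is extracted afterwards (here simply the fittest individuals; other extraction criteria could be used).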



Author information

Corresponding author

Correspondence to Nicolás García-Pedrajas.

Cite this article

García-Pedrajas, N., Fyfe, C. Construction of classifier ensembles by means of artificial immune systems. J Heuristics 14, 285–310 (2008). https://doi.org/10.1007/s10732-007-9036-0
