
Neural network ensembles: immune-inspired approaches to the diversity of components


Abstract

This work applies two immune-inspired algorithms, opt-aiNet and omni-aiNet, to train multi-layer perceptrons (MLPs) that are then combined into ensembles of classifiers. The main goal is to investigate the influence of the diversity of the set of solutions generated by each algorithm, and whether these solutions lead to improvements in performance when combined in ensembles. omni-aiNet is a multi-objective optimization algorithm and thus explicitly maximizes the components' diversity while minimizing their output errors. The opt-aiNet algorithm, by contrast, was originally designed to solve single-objective optimization problems and focuses on minimizing the output error of the classifiers; however, its implicit diversity maintenance mechanism stimulates the generation of MLPs with different weights, which may result in diverse classifiers. The performances of opt-aiNet and omni-aiNet are compared with each other and with that of a second-order gradient-based algorithm named MSCG. The results show how the diversity maintenance mechanisms of each algorithm influence the gain in performance obtained with the use of ensembles.
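
To make these ideas concrete, the sketch below illustrates, under simplifying assumptions, how a pool of already-trained classifiers can be assessed for individual error and collective diversity and then combined into an ensemble by majority voting. The tiny linear members, the pairwise-disagreement diversity measure, and the voting rule are illustrative choices for this example only, not the specific procedures used in the paper.

```python
# A minimal sketch, not the paper's exact procedure: given a pool of trained
# classifiers, measure (i) each member's error rate and (ii) the pool's
# diversity as average pairwise disagreement, then combine the members by
# majority voting. The random linear "members" are illustrative stand-ins
# for MLPs trained by opt-aiNet / omni-aiNet; all names here are assumptions.
import numpy as np
from itertools import combinations

def member_predict(weights, x):
    # Linear stand-in for an MLP classifier: pick the class with the largest score.
    return np.argmax(x @ weights, axis=1)

def error_rate(pred, y):
    return float(np.mean(pred != y))

def pairwise_disagreement(preds):
    # Diversity of the pool: average fraction of samples on which two members
    # predict different classes, taken over all member pairs.
    pairs = combinations(range(len(preds)), 2)
    return float(np.mean([np.mean(preds[i] != preds[j]) for i, j in pairs]))

def majority_vote(preds):
    # Ensemble output: each sample receives the class predicted most often.
    stacked = np.stack(preds)  # shape (n_members, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, stacked)

# Toy usage with random data and randomly weighted members (hence diverse).
rng = np.random.default_rng(0)
x, y = rng.normal(size=(200, 4)), rng.integers(0, 3, size=200)
pool = [rng.normal(size=(4, 3)) for _ in range(5)]
preds = [member_predict(w, x) for w in pool]

for k, p in enumerate(preds):
    print(f"member {k}: error = {error_rate(p, y):.3f}")
print(f"pool diversity (pairwise disagreement) = {pairwise_disagreement(preds):.3f}")
print(f"ensemble (majority vote) error = {error_rate(majority_vote(preds), y):.3f}")
```

In the setting of the paper, the members of the pool would be MLPs produced by opt-aiNet or omni-aiNet; omni-aiNet would treat a diversity measure and the output error as two explicit objectives, rather than computing diversity only as a diagnostic as done above.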




Acknowledgments

The authors thank CAPES, FAPESP and CNPq for their financial support.

Author information

Corresponding author

Correspondence to Rodrigo Pasti.


Cite this article

Pasti, R., de Castro, L.N., Coelho, G.P. et al. Neural network ensembles: immune-inspired approaches to the diversity of components. Nat Comput 9, 625–653 (2010). https://doi.org/10.1007/s11047-009-9124-1

