
Evolving Artificial Neural Network Ensembles

  • Chapter
Computational Intelligence: A Compendium

Part of the book series: Studies in Computational Intelligence (SCI, volume 115)

Artificial neural networks (ANNs) and evolutionary algorithms (EAs) are both abstractions of natural processes. In the mid-1990s they were combined into a single computational model in order to exploit the learning power of ANNs and the adaptive capabilities of EAs. Evolutionary ANNs (EANNs) are the outcome of that combination: a special class of ANNs in which evolution is a second fundamental form of adaptation in addition to learning [52–57]. The essence of EANNs is their adaptability to a dynamic environment. The two forms of adaptation in EANNs, namely evolution and learning, make their adaptation to a dynamic environment much more effective and efficient. In a broader sense, EANNs can be regarded as a general framework for adaptive systems, that is, systems that can change their architectures and learning rules appropriately without human intervention.
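The interplay sketched above, using evolution rather than gradient descent to adapt a network, can be illustrated with a minimal example. Everything here (the network shape, the XOR task, population size, mutation scale, and all names) is an illustrative choice, not taken from the chapter: a population of small feedforward networks is evolved by Gaussian weight mutation and truncation selection, with elitism guaranteeing that the best fitness never decreases.

```python
import math
import random

random.seed(0)

# XOR: a classic task that a single neuron cannot solve but a small
# two-layer network can.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

N_HIDDEN = 3
# Each hidden unit has 2 weights + 1 bias; the output unit has
# N_HIDDEN weights + 1 bias.
N_WEIGHTS = N_HIDDEN * 3 + (N_HIDDEN + 1)

def sigmoid(x):
    x = max(-60.0, min(60.0, x))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-x))

def forward(w, x):
    """Evaluate a 2-input, N_HIDDEN-unit, 1-output network encoded as a flat list."""
    h = []
    for j in range(N_HIDDEN):
        base = j * 3
        h.append(sigmoid(w[base] * x[0] + w[base + 1] * x[1] + w[base + 2]))
    base = N_HIDDEN * 3
    s = sum(w[base + j] * h[j] for j in range(N_HIDDEN)) + w[base + N_HIDDEN]
    return sigmoid(s)

def fitness(w):
    """Negative mean squared error over the task: higher is better."""
    return -sum((forward(w, x) - y) ** 2 for x, y in DATA) / len(DATA)

def mutate(w, sigma=0.5):
    """Gaussian perturbation of every weight (the EA's only variation operator here)."""
    return [wi + random.gauss(0, sigma) for wi in w]

# (mu + mu) evolutionary loop: keep the better half unchanged (elitism),
# refill the population with mutated copies of the survivors.
POP = 40
population = [[random.uniform(-1, 1) for _ in range(N_WEIGHTS)] for _ in range(POP)]
best0 = max(fitness(w) for w in population)  # best fitness before evolution
for generation in range(300):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP // 2]
    population = parents + [mutate(p) for p in parents]

best = max(population, key=fitness)
print(f"best fitness: {best0:.4f} (initial) -> {fitness(best):.4f} (evolved)")
```

Because the better half of each generation survives unmodified, the best fitness is monotonically non-decreasing, which is the property the loop relies on; a full EANN would additionally apply learning (e.g., a few gradient steps) inside the fitness evaluation and could also mutate the architecture itself.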


References

  1. Abbass HA, Sarker R, Newton C 2001 PDE: a Pareto-frontier differential evolution approach for multi-objective optimization problems. In: Kim J-H (ed.) Proc. IEEE Conf. Evolutionary Computation (CEC2001), 27-30 May, Seoul, South Korea. IEEE Press, Piscataway, NJ: 971-978.

  2. Abbass HA 2002 The self-adaptive Pareto differential evolution algorithm. In: Fogel DB, El-Sharkawi MA, Yao X, Greenwood G, Iba H, Marrow P, Shackleton M (eds.) Proc. IEEE Conf. Evolutionary Computation (CEC2002), 12-17 May, Honolulu, HI. IEEE Press, Piscataway, NJ: 831-836.

  3. Abbass HA 2003 Speeding up backpropagation using multiobjective evolutionary algorithms. Neural Computation, 15(11): 2705-2726.

  4. Abbass HA 2003 Pareto neuro-evolution: constructing ensembles of neural networks using multi-objective optimization. In: Sarker R, Reynolds R, Abbass H, Tan KC, McKay B, Essam D, Gedeon T (eds.) Proc. IEEE Conf. Evolutionary Computation (CEC2003), 8-12 December, Canberra, Australia. IEEE Press, Piscataway, NJ: 2074-2080.

  5. Abbass HA 2003 Pareto neuro-ensemble. In: Gedeon TD, Chun L, Fung C (eds.) Proc. 16th Australian Joint Conf. Artificial Intelligence, 3-5 December, Perth, Australia. Springer-Verlag, Berlin: 554-566.

  6. Angeline PJ, Saunders GM, Pollack JB 1994 An evolutionary algorithm that constructs recurrent neural networks. IEEE Trans. Neural Networks, 5(1): 54-65.

  7. Baldi PF, Hornik K 1995 Learning in linear neural networks: a survey. IEEE Trans. Neural Networks, 6(4): 837-858.

  8. Blake C, Merz C UCI repository of machine learning databases. (available online at http://www.ics.uci.edu/~mlearn/MLRepository.html - last accessed September 2007).

  9. Belew RK, McInerney J, Schraudolph NN 1991 Evolving networks: using the genetic algorithm with connectionist learning. Technical Report CS90-174 (revised), Computer Science and Engineering Department (C-014), University of California, San Diego, February.

  10. Bollé D, Dominguez DRC, Amari S 2000 Mutual information of sparsely coded associative memory with self-control and ternary neurons. Neural Networks, 1: 452-462.

  11. Brown G, Wyatt JL 2003 Negative correlation learning and the ambiguity family of ensemble methods. In: Windeatt T, Roli F (eds.) Proc. Intl. Workshop Multiple Classifier Systems, 11-13 June, Guildford, UK. Springer-Verlag, Berlin: 266-275.

  12. Brown G, Wyatt JL, Harris R, Yao X 2005 Diversity creation methods: a survey and categorisation. J. Information Fusion, 6: 5-20.

  13. Chandra A, Yao X 2006 Ensemble learning using multi-objective evolutionary algorithms. J. Mathematical Modeling and Algorithms, 5(4): 417-445.

  14. Darwen PJ, Yao X 1996 Every niching method has its niche: fitness sharing and implicit sharing compared. In: Ebeling W, Rechenberg I, Schwefel H-P, Voigt H-M (eds.) Parallel Problem Solving from Nature (PPSN) IV, 22-26 September, Berlin, Germany. Lecture Notes in Computer Science 1141. Springer-Verlag, Berlin: 398-407.

  15. Darwen PJ, Yao X 1997 Speciation as automatic categorical modularization. IEEE Trans. Evolutionary Computation, 1: 101-108.

  16. Dietterich TG 1998 Machine-learning research: four current directions. AI Magazine, 18(4): 97-136.

  17. Finnoff W, Hergert F, Zimmermann HG 1993 Improving model selection by nonconvergent methods. Neural Networks, 6: 771-783.

  18. Fogel LJ, Owens AJ, Walsh MJ 1966 Artificial Intelligence Through Simulated Evolution. Wiley, New York, NY.

  19. Fogel GB, Fogel DB 1995 Continuous evolutionary programming: analysis and experiments. Cybernetic Systems, 26: 79-90.

  20. Fogel DB 1995 Evolutionary Computation: Toward a New Philosophy of Machine Intelligence. IEEE Press, Piscataway, NJ.

  21. Goldberg DE 1989 Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, Reading, MA.

  22. Hancock PJB 1992 Genetic algorithms and permutation problems: a comparison of recombination operators for neural net structure specification. In: Whitley D, Schaffer JD (eds.) Proc. Intl. Workshop Combinations Genetic Algorithms Neural Networks (COGANN-92), 6 June, Maryland. IEEE Computer Society Press, Los Alamitos, CA: 108-122.

  23. Hansen LK, Salamon P 1990 Neural network ensembles. IEEE Trans. Pattern Analysis and Machine Intelligence, 12(10): 993-1001.

  24. Hashem S 1993 Optimal linear combinations of neural networks. PhD dissertation, School of Industrial Engineering, Purdue University, West Lafayette, IN, December.

  25. Deb K 2001 Multi-Objective Optimization Using Evolutionary Algorithms. Wiley, Chichester, UK.

  26. Khare V, Yao X, Sendhoff B 2006 Multi-network evolutionary systems and automatic problem decomposition. Intl. J. General Systems, 35(3): 259-274.

  27. Krogh A, Vedelsby J 1995 Neural network ensembles, cross validation, and active learning. Neural Information Processing Systems, 7: 231-238.

  28. Krogh A, Sollich P 1997 Statistical mechanics of ensemble learning. Physical Review E, 55: 811-825.

  29. Kwok TY, Yeung DY 1997 Constructive algorithms for structure learning in feedforward neural networks for regression problems. IEEE Trans. Neural Networks, 8: 630-645.

  30. Kwok TY, Yeung DY 1997 Objective functions for training new hidden units in constructive neural networks. IEEE Trans. Neural Networks, 8: 1131-1148.

  31. Lehtokangas M 1999 Modeling with constructive backpropagation. Neural Networks, 12: 707-716.

  32. Lee CY, Yao X 2004 Evolutionary programming using mutations based on the Lévy probability distribution. IEEE Trans. Evolutionary Computation, 8(1): 1-13.

  33. Liu Y, Yao X 1999 Ensemble learning via negative correlation. Neural Networks, 12: 1399-1404.

  34. Liu Y, Yao X, Higuchi T 2000 Evolutionary ensembles with negative correlation learning. IEEE Trans. Evolutionary Computation, 4(4): 380-387.

  35. MacQueen J 1967 Some methods for classification and analysis of multivariate observations. In: Proc. 5th Berkeley Symp. Mathematical Statistics and Probability, Berkeley, CA. University of California Press, 1: 281-297.

  36. Mahfoud SW 1995 Niching methods for genetic algorithms. PhD Thesis, Department of General Engineering, University of Illinois, Urbana-Champaign, IL.

  37. Monirul Islam M, Yao X, Murase K 2003 A constructive algorithm for training cooperative neural network ensembles. IEEE Trans. Neural Networks, 14: 820-834.

  38. Mulgrew B, Cowan CFN 1988 Adaptive Filters and Equalizers. Kluwer, Boston, MA.

  39. Odri SV, Petrovacki DP, Krstonosic GA 1993 Evolutional development of a multilevel neural network. Neural Networks, 6(4): 583-595.

  40. Opitz DW, Shavlik JW 1996 Generating accurate and diverse members of a neural-network ensemble. Neural Information Processing Systems, 8: 535-541.

  41. Opitz D, Maclin R 1999 Popular ensemble methods: an empirical study. J. Artificial Intelligence Research, 11: 169-198.

  42. Perrone MP 1993 Improving regression estimation: averaging methods for variance reduction with extensions to general convex measure optimization. PhD Dissertation, Department of Physics, Brown University, Providence, RI, May.

  43. Prechelt L 1994 Proben1 - a set of neural network benchmark problems and benchmarking rules. Technical Report 21/94, Fakultät für Informatik, University of Karlsruhe, Germany, September.

  44. Prechelt L 1995 Some notes on neural learning algorithm benchmarking. Neurocomputing, 9(3): 343-347.

  45. Rissanen J 1978 Modeling by shortest data description. Automatica, 14: 465-471.

  46. Rumelhart DE, Hinton GE, Williams RJ 1986 Learning internal representations by error propagation. In: Rumelhart DE, McClelland JL (eds.) Parallel Distributed Processing: Explorations in the Microstructures of Cognition, I. MIT Press, Cambridge, MA: 318-362.

  47. Sharkey AJC 1996 On combining artificial neural nets. Connection Science, 8(3/4): 299-313.

  48. Schaffer JD, Whitley D, Eshelman LJ 1992 Combinations of genetic algorithms and neural networks: a survey of the state of the art. In: Whitley D, Schaffer JD (eds.) Proc. Intl. Workshop Combinations Genetic Algorithms Neural Networks (COGANN-92), 6 June, Maryland. IEEE Computer Society Press, Los Alamitos, CA: 1-37.

  49. Srinivas N, Deb K 1994 Multi-objective function optimization using non-dominated sorting genetic algorithms. Evolutionary Computation, 2(3): 221-248.

  50. Storn R, Price K 1996 Minimizing the real functions of the ICEC'96 contest by differential evolution. In: Fukuda T, Furuhashi T, Back T, Kitano H, Michalewicz Z (eds.) Proc. IEEE Intl. Conf. Evolutionary Computation, 20-22 May, Nagoya, Japan. IEEE Computer Society Press, Los Alamitos, CA: 842-844.

  51. Syswerda G 1991 A study of reproduction in generational and steady state genetic algorithms. In: Rawlins GJE (ed.) Foundations of Genetic Algorithms. Morgan Kaufmann, San Mateo, CA: 94-101.

  52. Yao X 1991 Evolution of connectionist networks. In: Proc. Intl. Symp. AI, Reasoning & Creativity, Griffith University, Queensland, Australia: 49-52.

  53. Yao X 1993 An empirical study of genetic operators in genetic algorithms. Microprocessors and Microprogramming, 38: 707-714.

  54. Yao X 1993 A review of evolutionary artificial neural networks. Intl. J. Intelligent Systems, 8(4): 539-567.

  55. Yao X 1993 Evolutionary artificial neural networks. Intl. J. Neural Systems, 4(3): 203-222.

  56. Yao X 1994 The evolution of connectionist networks. In: Dartnall T (ed.) Artificial Intelligence and Creativity. Kluwer, Dordrecht, The Netherlands: 233-243.

  57. Yao X 1995 Evolutionary artificial neural networks. In: Kent A, Williams JG (eds.) Encyclopedia of Computer Science and Technology 33. Marcel Dekker, New York, NY: 137-170.

  58. Yao X, Shi Y 1995 A preliminary study on designing artificial neural networks using co-evolution. In: Toumodge S, Lee TH, Sundarajan N (eds.) Proc. IEEE Intl. Conf. Intelligent Control Instrumentation, 2-8 July, Singapore. IEEE Computer Society Press, Los Alamitos, CA: 149-154.

  59. Yao X 1999 Evolving artificial neural networks. Proc. IEEE, 87: 1423-1447.

  60. Yao X, Liu Y 1996 Ensemble structure of evolutionary artificial neural networks. In: Fukuda T, Furuhashi T, Back T, Kitano H, Michalewicz Z (eds.) Proc. IEEE Intl. Conf. Evolutionary Computation (ICEC96), 20-22 May, Nagoya, Japan. IEEE Computer Society Press, Los Alamitos, CA: 659-664.

  61. Yao X, Liu Y 1997 A new evolutionary system for evolving artificial neural networks. IEEE Trans. Neural Networks, 8(3): 694-713.

  62. Yao X, Liu Y 1998 Making use of population information in evolutionary artificial neural networks. IEEE Trans. Systems, Man, and Cybernetics B, 28(3): 417-425.

  63. Yao X, Liu Y, Darwen P 1996 How to make best use of evolutionary learning. In: Stocker R, Jelinek H, Durnota B (eds.) Complex Systems: From Local Interactions to Global Phenomena. IOS Press, Amsterdam, The Netherlands: 229-242.

  64. Yao X, Liu Y, Lin G 1999 Evolutionary programming made faster. IEEE Trans. Evolutionary Computation, 3(2): 82-102.

  65. Yao X, Islam MM 2008 Evolving artificial neural network ensembles. IEEE Computational Intelligence Magazine, 3(1) (in press).


Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this chapter

Cite this chapter

Islam, M.M., Yao, X. (2008). Evolving Artificial Neural Network Ensembles. In: Fulcher, J., Jain, L.C. (eds) Computational Intelligence: A Compendium. Studies in Computational Intelligence, vol 115. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-78293-3_20


  • DOI: https://doi.org/10.1007/978-3-540-78293-3_20

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-78292-6

  • Online ISBN: 978-3-540-78293-3

