
Improving Neuroevolution with Complementarity-Based Selection Operators

Published in Neural Processing Letters, 2016.

Abstract

This paper addresses the problem of improving the convergence properties of evolutionary neural networks, particularly hybrid neural networks that adopt a diversity of transfer functions at their nodes, i.e., neural diversity machines. The paper explores the potential of solution complementarity in the context of pattern recognition problems, and focuses on incorporating it into the selection step of recombination heuristics. In a pattern recognition context, complementarity is defined as the ability of different solutions to correctly classify complementary subsets of patterns. A broad set of experiments demonstrated that complementarity-based solution selection is statistically significantly better than random selection across a wide range of conditions, e.g., different datasets, recombination heuristics and architectural constraints. Although the experiments demonstrated the statistical significance and robustness of the effect, they also indicated that more work is required to increase the size of the effect and to scale up to larger and more complex datasets.
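The abstract's notion of complementarity can be illustrated with a minimal sketch. The functions below are hypothetical, not the paper's implementation: given boolean per-pattern correctness vectors for two candidate networks, `complementarity` scores how well one candidate covers the other's misclassifications, and `select_mate` uses that score (rather than uniform random choice) to pick a recombination partner. The paper's exact measure and selection operators may differ.

```python
def complementarity(correct_a, correct_b):
    """Fraction of patterns misclassified by A that B classifies correctly.

    correct_a, correct_b: lists of booleans, one entry per pattern,
    True where that solution classified the pattern correctly.
    (A hypothetical formalisation of the paper's idea.)
    """
    errors_a = [not c for c in correct_a]  # patterns A gets wrong
    n_errors = sum(errors_a)
    if n_errors == 0:
        return 0.0  # A is already perfect; nothing to complement
    covered = sum(e and b for e, b in zip(errors_a, correct_b))
    return covered / n_errors


def select_mate(parent_correct, population_correct):
    """Return the index of the candidate most complementary to the parent,
    in place of the random mate selection used as the baseline."""
    scores = [complementarity(parent_correct, c) for c in population_correct]
    return scores.index(max(scores))
```

For example, a parent that fails on patterns 1 and 2 would be paired with a candidate that succeeds exactly on those patterns, since their correct sets complement each other.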






Author information

Corresponding author: Tomás H. Maul.


Cite this article

Maul, T.H. Improving Neuroevolution with Complementarity-Based Selection Operators. Neural Process Lett 44, 887–911 (2016). https://doi.org/10.1007/s11063-016-9501-6

