Abstract
This paper addresses the problem of improving the convergence properties of evolutionary neural networks, particularly hybrid neural networks that adopt a diversity of transfer functions at their nodes, i.e., neural diversity machines. The paper explores the potential of solution complementarity in the context of pattern recognition problems, focusing on its incorporation into the selection step of recombination heuristics. In a pattern recognition context, complementarity is defined as the ability of different solutions to correctly classify complementary subsets of patterns. A broad set of experiments demonstrated that complementarity-based solution selection is statistically significantly better than random selection across a wide range of conditions, e.g., different datasets, recombination heuristics, and architectural constraints. Although the experiments demonstrated the statistical significance and robustness of the effect, they also indicated that more work is required to increase its magnitude and to scale up to larger and more complex datasets.
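The notion of complementarity described above can be illustrated with a minimal sketch. Assuming each candidate solution is summarised by a per-pattern correctness vector, one simple (hypothetical) measure is the fraction of patterns on which exactly one of the two solutions is correct; the paper's exact formulation may differ.

```python
def complementarity(correct_a, correct_b):
    """Fraction of patterns on which exactly one of the two solutions
    classifies correctly (an illustrative measure, not necessarily the
    paper's exact definition)."""
    assert len(correct_a) == len(correct_b), "vectors must cover the same patterns"
    # Count patterns where the two correctness flags disagree,
    # i.e. one solution succeeds exactly where the other fails.
    disagree = sum(bool(ca) != bool(cb) for ca, cb in zip(correct_a, correct_b))
    return disagree / len(correct_a)

# Two solutions whose errors fall on largely complementary pattern subsets:
a = [1, 1, 0, 0, 1]  # per-pattern correctness of parent A
b = [0, 0, 1, 1, 1]  # per-pattern correctness of parent B
print(complementarity(a, b))  # → 0.8
```

Under such a measure, a complementarity-based selection operator would prefer to recombine parents with high pairwise complementarity rather than pairs chosen at random.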
Maul, T.H. Improving Neuroevolution with Complementarity-Based Selection Operators. Neural Process Lett 44, 887–911 (2016). https://doi.org/10.1007/s11063-016-9501-6