Abstract
ENZO-M combines two successful search techniques operating on two different timescales: learning (gradient descent) for fine-tuning each offspring, and evolution for coarse optimization steps on the network topology. Our evolutionary algorithm is thus a metaheuristic built on top of the best available local heuristic. Because each offspring is trained by fast gradient methods, the search space of the evolutionary algorithm is reduced considerably, namely to the set of local optima.
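The two timescales can be illustrated with a minimal sketch: an outer evolutionary loop that mutates the current best individual, an inner loop that fine-tunes every offspring by gradient descent before it is evaluated. This is not the ENZO-M implementation (which evolves network topologies); the quadratic toy objective, the mutation width, and all function names here are illustrative assumptions.

```python
import random

# Hypothetical toy objective: fit y = 2*x - 1 with a linear model (w, b).
DATA = [(x, 2.0 * x - 1.0) for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]

def loss(w, b):
    return sum((w * x + b - y) ** 2 for x, y in DATA) / len(DATA)

def finetune(w, b, steps=200, lr=0.05):
    """Inner timescale: plain gradient descent (the 'learning' phase)."""
    for _ in range(steps):
        gw = sum(2 * (w * x + b - y) * x for x, y in DATA) / len(DATA)
        gb = sum(2 * (w * x + b - y) for x, y in DATA) / len(DATA)
        w, b = w - lr * gw, b - lr * gb
    return w, b

def evolve(generations=5, pop_size=6, seed=0):
    """Outer timescale: mutate the best parent, fine-tune each offspring,
    and select survivors -- so only local optima ever enter the population."""
    rng = random.Random(seed)
    pop = [finetune(rng.uniform(-1, 1), rng.uniform(-1, 1))
           for _ in range(pop_size)]
    for _ in range(generations):
        parent = min(pop, key=lambda p: loss(*p))
        offspring = [finetune(parent[0] + rng.gauss(0, 0.3),
                              parent[1] + rng.gauss(0, 0.3))
                     for _ in range(pop_size)]
        pop = sorted(pop + offspring, key=lambda p: loss(*p))[:pop_size]
    return min(pop, key=lambda p: loss(*p))
```

Since every individual is fine-tuned before selection, evolution only ever compares local optima, which is the sense in which the search space shrinks.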
By initializing the weights of each offspring with the parental weights, gradient descent (learning) is sped up by 1–2 orders of magnitude, and the expected quality of the local minimum reached (the fitness of the trained offspring) is far above the mean for randomly initialized offspring. ENZO-M thus takes full advantage of both the knowledge transferred from the parental genstring by the evolutionary search and the efficiently computable gradient information used for fine-tuning.
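The speed-up from weight inheritance can be sketched by counting gradient steps to convergence for an offspring started near its trained parent versus one started from scratch. The one-weight toy problem, the tolerance, and the concrete starting points are illustrative assumptions, not values from the paper.

```python
# Hypothetical toy problem: fit y = 3*x with a single weight w.
DATA = [(x, 3.0 * x) for x in (-1.0, -0.5, 0.5, 1.0)]

def loss(w):
    return sum((w * x - y) ** 2 for x, y in DATA) / len(DATA)

def steps_to_converge(w, lr=0.1, tol=1e-6, max_steps=10_000):
    """Count gradient-descent steps until the loss falls below `tol`."""
    for step in range(max_steps):
        if loss(w) < tol:
            return step
        g = sum(2 * (w * x - y) * x for x, y in DATA) / len(DATA)
        w -= lr * g
    return max_steps

# Offspring initialized near its trained parent (small mutation of the
# parental optimum w = 3.0) versus a randomly re-initialized offspring.
steps_inherited = steps_to_converge(3.0 + 0.05)
steps_random = steps_to_converge(-10.0)
```

The inherited offspring starts inside the parent's basin of attraction, so it both converges in far fewer steps and is guaranteed a good local minimum, mirroring the two advantages claimed above.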
Through the cooperation of the discrete mutation operator and the continuous weight-decay method, ENZO-M impressively thins out the topology of feedforward neural networks. In particular, ENZO-M also tries to cut the connections to possibly redundant input units. It therefore not only supports the user in designing the network but also recognizes redundant input units.
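The interplay of continuous decay and discrete pruning can be sketched as follows: weight decay shrinks the weights of unhelpful connections during training, after which connections below a threshold are cut, and an input whose connections are all cut is flagged as redundant. Plain L2 decay is used here as a stand-in for the paper's weight-decay method; the data, threshold, and function names are illustrative assumptions.

```python
# Hypothetical toy model with two inputs; input 2 is irrelevant to the
# target, so weight decay should drive its connection weight toward zero.
DATA = [((x1, x2), 2.0 * x1) for x1 in (-1.0, 0.0, 1.0) for x2 in (-1.0, 1.0)]

def train_with_decay(w=(0.5, 0.5), lr=0.1, decay=0.01, steps=500):
    """Gradient descent with an L2 weight-decay pull on every connection."""
    w1, w2 = w
    for _ in range(steps):
        g1 = sum(2 * (w1 * x1 + w2 * x2 - y) * x1
                 for (x1, x2), y in DATA) / len(DATA)
        g2 = sum(2 * (w1 * x1 + w2 * x2 - y) * x2
                 for (x1, x2), y in DATA) / len(DATA)
        w1 -= lr * (g1 + decay * w1)   # decay continuously shrinks weights
        w2 -= lr * (g2 + decay * w2)
    return w1, w2

def prune(weights, threshold=0.05):
    """Discrete step: cut connections whose weight stayed small and
    report the input units left without any connection."""
    return [i for i, w in enumerate(weights, start=1) if abs(w) < threshold]

w1, w2 = train_with_decay()
redundant_inputs = prune((w1, w2))
```

Only the useful connection survives decay, so pruning removes exactly the connection from the irrelevant input, which is how the redundant input unit is recognized.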
© 1994 Springer-Verlag Berlin Heidelberg
Braun, H., Zagorski, P. (1994). ENZO-M — A hybrid approach for optimizing neural networks by evolution and learning. In: Davidor, Y., Schwefel, HP., Männer, R. (eds) Parallel Problem Solving from Nature — PPSN III. PPSN 1994. Lecture Notes in Computer Science, vol 866. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-58484-6_287
Print ISBN: 978-3-540-58484-1
Online ISBN: 978-3-540-49001-2