Abstract
In modeling with neural networks, the choice of network architecture and size is of utmost importance. Too small a network may perform poorly for lack of expressive capacity, while too large a network may fit noise or spurious relations in the data sets studied. Finding a parsimonious network often requires considerable time and computational effort. This paper presents a method for training feedforward neural networks based on a genetic algorithm (GA), which simultaneously optimizes both the weights and the connectivity structure of the network. The proposed method has been found to yield compact and descriptive networks even from training sets of few observations.
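The core idea of the abstract, evolving connectivity and weights together so that pruning and training happen in one search, can be illustrated with a minimal sketch. This is not the authors' algorithm: the network size, the fitness penalty on active connections, the mutation rates, and the toy data set are all assumptions chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y = x1 - x2, easily captured by a small feedforward net.
X = rng.uniform(-1, 1, size=(40, 2))
y = X[:, 0] - X[:, 1]

N_HIDDEN = 4  # hidden-layer size (assumed for the sketch)

def predict(weights, mask, X):
    """Single-hidden-layer net; 'mask' zeroes out pruned input-to-hidden links."""
    W1 = weights[: 2 * N_HIDDEN].reshape(2, N_HIDDEN) * mask.reshape(2, N_HIDDEN)
    w2 = weights[2 * N_HIDDEN:]
    h = np.tanh(X @ W1)
    return h @ w2

def fitness(ind):
    """Mean squared error plus a small penalty on active connections,
    so the search favours parsimonious structures (penalty weight assumed)."""
    err = predict(ind["w"], ind["m"], X) - y
    return np.mean(err ** 2) + 0.001 * ind["m"].sum()

def mutate(ind):
    """Jointly perturb real-valued weights and flip connectivity bits."""
    child = {"w": ind["w"] + 0.1 * rng.standard_normal(ind["w"].shape),
             "m": ind["m"].copy()}
    flip = rng.random(child["m"].shape) < 0.05  # occasionally toggle a link
    child["m"][flip] ^= True
    return child

# Simple elitist (mu + lambda) evolution over weights and structure together.
pop = [{"w": rng.standard_normal(3 * N_HIDDEN),
        "m": np.ones(2 * N_HIDDEN, dtype=bool)} for _ in range(20)]
for gen in range(200):
    pop += [mutate(p) for p in pop]
    pop.sort(key=fitness)
    pop = pop[:20]

best = pop[0]
```

Because selection is elitist, the best fitness is non-increasing over generations; the connectivity penalty then discriminates between equally accurate candidates, pushing the population toward networks with fewer active links.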
© 2003 Springer-Verlag Wien
Pettersson, F., Saxén, H. (2003). A hybrid algorithm for weight and connectivity optimization in feedforward neural networks. In: Pearson, D.W., Steele, N.C., Albrecht, R.F. (eds) Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-0646-4_10
Publisher Name: Springer, Vienna
Print ISBN: 978-3-211-00743-3
Online ISBN: 978-3-7091-0646-4