Abstract
Training neural networks is a complex task of great importance in supervised learning. Particle Swarm Optimization (PSO) is a stochastic global search method that originated in attempts to graphically simulate the social behavior of a flock of birds searching for resources. In this work we analyze the PSO algorithm and two variants augmented with a local search operator for neural network training, and we investigate the influence of the GL5 stopping criterion on generalization control for swarm optimizers. We evaluate these algorithms on benchmark classification problems from the medical domain. The results show that the hybrid GCPSO with the local search operator achieved the best performance among the particle swarm optimizers on two of the three problems tested.
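The approach described above can be sketched as follows. This is a minimal, hypothetical illustration, not the authors' implementation: it trains a tiny 4-5-1 feed-forward network with a standard inertia-weight PSO (Kennedy and Eberhart) and halts via Prechelt's GL5 criterion, which stops when validation error exceeds its best-so-far value by more than 5%. The toy data, network size, and all constants are assumptions for the sketch.

```python
# Sketch: PSO training of a small feed-forward network with a GL5 early stop.
# Toy stand-in for the medical benchmarks; all sizes/constants are illustrative.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic binary-classification data, split into training and validation sets.
X = rng.normal(size=(80, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
X_tr, y_tr, X_va, y_va = X[:60], y[:60], X[60:], y[60:]

N_HID = 5
DIM = 4 * N_HID + N_HID + N_HID + 1  # weights + biases of a 4-5-1 network

def forward(w, X):
    """Decode a flat particle vector into the 4-5-1 network and run it."""
    i = 0
    W1 = w[i:i + 4 * N_HID].reshape(4, N_HID); i += 4 * N_HID
    b1 = w[i:i + N_HID]; i += N_HID
    W2 = w[i:i + N_HID]; i += N_HID
    b2 = w[i]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def mse(w, X, y):
    return float(np.mean((forward(w, X) - y) ** 2))

# Standard PSO with inertia weight and cognitive/social coefficients.
N_PART, W_INERTIA, C1, C2 = 20, 0.729, 1.494, 1.494
pos = rng.uniform(-1.0, 1.0, (N_PART, DIM))
vel = np.zeros((N_PART, DIM))
pbest = pos.copy()
pbest_f = np.array([mse(p, X_tr, y_tr) for p in pos])
g = int(np.argmin(pbest_f))
gbest, gbest_f = pbest[g].copy(), pbest_f[g]
init_f = gbest_f

E_va_opt = np.inf
for it in range(200):
    r1, r2 = rng.random((N_PART, DIM)), rng.random((N_PART, DIM))
    vel = W_INERTIA * vel + C1 * r1 * (pbest - pos) + C2 * r2 * (gbest - pos)
    pos = pos + vel
    f = np.array([mse(p, X_tr, y_tr) for p in pos])
    improved = f < pbest_f
    pbest[improved], pbest_f[improved] = pos[improved], f[improved]
    g = int(np.argmin(pbest_f))
    if pbest_f[g] < gbest_f:
        gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    # GL5 stop: generalization loss GL = 100 * (E_va / E_va_opt - 1) > 5.
    E_va = mse(gbest, X_va, y_va)
    E_va_opt = min(E_va_opt, E_va)
    if 100.0 * (E_va / E_va_opt - 1.0) > 5.0:
        break
```

The GCPSO variant studied in the paper additionally modifies the global-best particle's update to guarantee convergence, and the hybrid versions interleave a local search step; neither refinement is shown here.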
© 2006 Springer-Verlag Berlin Heidelberg
Cite this paper
Carvalho, M., Ludermir, T.B. (2006). Hybrid Training of Feed-Forward Neural Networks with Particle Swarm Optimization. In: King, I., Wang, J., Chan, LW., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4233. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893257_116
DOI: https://doi.org/10.1007/11893257_116
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-46481-5
Online ISBN: 978-3-540-46482-2