Hybrid Training of Feed-Forward Neural Networks with Particle Swarm Optimization

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 4233)

Abstract

Training neural networks is a complex task of great importance in supervised learning. Particle Swarm Optimization (PSO) is a stochastic global search method that originated from attempts to graphically simulate the social behavior of a flock of birds searching for resources. In this work we analyze the use of the PSO algorithm and of two variants equipped with a local search operator for neural network training, and we investigate the influence of the GL5 stopping criterion on generalization control for swarm optimizers. We evaluate these algorithms on benchmark classification problems from the medical field. The results show that the hybrid GCPSO with the local search operator achieved the best results among the particle swarm optimizers on two of the three problems tested.
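The core idea of the abstract can be illustrated with a minimal sketch of standard (gbest) PSO searching the weight space of a small feed-forward network. The 2-3-1 architecture, the toy separable dataset, and the parameter values below are illustrative assumptions; the paper's GCPSO variant, local search operator, and GL5 criterion are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy binary classification data (2 features), standing in for the
# medical benchmark problems used in the paper.
X = rng.normal(size=(60, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

N_WEIGHTS = 13  # 2-3-1 network: 6 + 3 hidden params, 3 + 1 output params

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(w, X):
    """Feed-forward pass of a 2-3-1 network whose weights are flattened in w."""
    W1, b1 = w[:6].reshape(2, 3), w[6:9]
    W2, b2 = w[9:12], w[12]
    h = sigmoid(X @ W1 + b1)
    return sigmoid(h @ W2 + b2)

def mse(w):
    return np.mean((forward(w, X) - y) ** 2)

# Standard gbest PSO over the network's weight vector.
n_particles, n_iters = 20, 100
inertia, c1, c2 = 0.729, 1.494, 1.494  # commonly used constriction-style values

pos = rng.uniform(-1, 1, size=(n_particles, N_WEIGHTS))
vel = np.zeros_like(pos)
pbest, pbest_err = pos.copy(), np.array([mse(p) for p in pos])
gbest = pbest[np.argmin(pbest_err)].copy()
gbest_err = pbest_err.min()
initial_err = gbest_err

for _ in range(n_iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    # Velocity: inertia + cognitive (pbest) + social (gbest) components.
    vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    errs = np.array([mse(p) for p in pos])
    improved = errs < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], errs[improved]
    if pbest_err.min() < gbest_err:
        gbest_err = pbest_err.min()
        gbest = pbest[np.argmin(pbest_err)].copy()

print(f"training MSE: {initial_err:.4f} -> {gbest_err:.4f}")
```

A hybrid along the lines studied in the paper would periodically refine the best particle with a gradient-based local search (e.g. a few backpropagation steps) and monitor a validation set, stopping when the GL5 generalization-loss criterion is exceeded.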





Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Carvalho, M., Ludermir, T.B. (2006). Hybrid Training of Feed-Forward Neural Networks with Particle Swarm Optimization. In: King, I., Wang, J., Chan, L.-W., Wang, D. (eds) Neural Information Processing. ICONIP 2006. Lecture Notes in Computer Science, vol 4233. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11893257_116

  • DOI: https://doi.org/10.1007/11893257_116

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-46481-5

  • Online ISBN: 978-3-540-46482-2

  • eBook Packages: Computer Science (R0)
