Enhance Neural Networks Training Using GA with Chaos Theory

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5552)

Abstract

There are numerous algorithms available for training artificial neural networks. Besides classical supervised learning algorithms such as backpropagation, associative memory, and radial basis function networks, the training task can also be carried out by evolutionary computation, since most gradient-descent-based algorithms can be viewed as applications of optimization theory and stochastic search. In this paper, the logistic model of population growth from ecology is integrated into the initialization, selection, and crossover operators of genetic algorithms for neural network training. These chaotic operators are very effective at maintaining population diversity during the evolution process of genetic algorithms. A comparison is made on a benchmark comprising several data classification problems for neural networks. Three training variants – Backpropagation (BP), Genetic Algorithms (GA), and Genetic Algorithms with Chaotic Operators (GACO) – are described and compared. The experimental results confirm the dynamic mobility of the chaotic operators in GACO network training, which can overcome saturation and improve the convergence rate.
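
The abstract does not reproduce the chaotic operators themselves. The following Python sketch illustrates one common way a logistic-map sequence can replace uniform random numbers in GA initialization and arithmetic crossover over neural-network weight vectors; the function names, the blending scheme, and all parameter values (r = 4, the seeds x0) are illustrative assumptions, not the authors' exact GACO operators.

import numpy as np

# Sketch only: the logistic map x_{t+1} = r * x_t * (1 - x_t) with r = 4
# generates chaotic values in (0, 1) that stand in for uniform random numbers
# in GA initialization and crossover, which is the general idea behind
# chaos-enhanced GA operators.

def logistic_sequence(length, x0=0.731, r=4.0):
    """Generate a chaotic sequence in (0, 1) from the logistic map."""
    seq = np.empty(length)
    x = x0
    for i in range(length):
        x = r * x * (1.0 - x)
        seq[i] = x
    return seq

def chaotic_init_population(pop_size, genome_len, lo=-1.0, hi=1.0, x0=0.731):
    """Initialize candidate weight vectors by mapping chaotic values into [lo, hi]."""
    chaos = logistic_sequence(pop_size * genome_len, x0).reshape(pop_size, genome_len)
    return lo + (hi - lo) * chaos

def chaotic_arithmetic_crossover(parent_a, parent_b, x0=0.421):
    """Arithmetic crossover whose blending weights come from the logistic map."""
    alphas = logistic_sequence(len(parent_a), x0)
    child_a = alphas * parent_a + (1.0 - alphas) * parent_b
    child_b = alphas * parent_b + (1.0 - alphas) * parent_a
    return child_a, child_b

# Example: 20 candidate weight vectors for a small network, then one crossover.
population = chaotic_init_population(pop_size=20, genome_len=13)
c1, c2 = chaotic_arithmetic_crossover(population[0], population[1])

Because the logistic map is deterministic but non-repeating for r = 4, such operators spread candidates more evenly than a poorly seeded pseudo-random generator, which is the diversity-maintaining property the paper attributes to GACO.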

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Leong, K.Y., Sitiol, A., Anbananthen, K.S.M. (2009). Enhance Neural Networks Training Using GA with Chaos Theory. In: Yu, W., He, H., Zhang, N. (eds) Advances in Neural Networks – ISNN 2009. ISNN 2009. Lecture Notes in Computer Science, vol 5552. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-01510-6_59

  • DOI: https://doi.org/10.1007/978-3-642-01510-6_59

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-01509-0

  • Online ISBN: 978-3-642-01510-6
