Multilevel Genetic Algorithm for the Complete Development of ANN

  • Conference paper
Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence (IWANN 2001)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 2084)

Abstract

The use of Genetic Algorithms (GAs) in the development of Artificial Neural Networks (ANNs) is a very active research area. Current work focuses not only on adjusting the connection weights, but increasingly on systems that carry out the design and training tasks in parallel. To meet these needs, and as an open platform for new developments, this article presents a multilevel GA architecture that separates the design tasks from the training tasks. In this system, the design tasks run in parallel on different machines, and each design process has an associated training process that serves as its evaluation function. The design GAs exchange solutions throughout the simulation, cooperating with one another to converge on the best solution.
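The two-level scheme described above can be sketched in miniature: an outer "design" GA evolves network topologies, and each candidate topology is scored by running an inner "training" GA over its weights. Everything below (the one-hidden-layer encoding, the toy XOR task, population sizes, mutation scheme) is an illustrative assumption, not the authors' implementation, and the parallel, multi-machine solution exchange is omitted for brevity.

```python
import math
import random

random.seed(0)

# Toy task (assumed for illustration): XOR.
DATA = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]

def n_weights(n_hidden):
    # 3 weights per hidden unit (2 inputs + bias), then output bias + n_hidden output weights.
    return 4 * n_hidden + 1

def forward(weights, n_hidden, x):
    """One-hidden-layer tanh network; weights are a flat list."""
    idx = 0
    hidden = []
    for _ in range(n_hidden):
        s = weights[idx] * x[0] + weights[idx + 1] * x[1] + weights[idx + 2]
        idx += 3
        hidden.append(math.tanh(s))
    out = weights[idx]  # output bias
    idx += 1
    for h in range(n_hidden):
        out += weights[idx] * hidden[h]
        idx += 1
    return out

def mse(weights, n_hidden):
    return sum((forward(weights, n_hidden, x) - y) ** 2 for x, y in DATA) / len(DATA)

def train_ga(n_hidden, gens=40, pop=30):
    """Inner 'training' GA: evolves weights for a fixed topology, returns best error."""
    size = n_weights(n_hidden)
    popu = [[random.uniform(-2, 2) for _ in range(size)] for _ in range(pop)]
    for _ in range(gens):
        popu.sort(key=lambda w: mse(w, n_hidden))
        survivors = popu[: pop // 2]
        children = []
        while len(survivors) + len(children) < pop:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(size)          # one-point crossover
            child = a[:cut] + b[cut:]
            i = random.randrange(size)            # point mutation
            child[i] += random.gauss(0, 0.5)
            children.append(child)
        popu = survivors + children
    return min(mse(w, n_hidden) for w in popu)

def design_ga(gens=5, pop=4):
    """Outer 'design' GA: evolves the number of hidden units; the inner
    training GA acts as its evaluation function."""
    popu = [random.randint(1, 5) for _ in range(pop)]
    best = None
    for _ in range(gens):
        scored = sorted((train_ga(h), h) for h in popu)
        if best is None or scored[0][0] < best[0]:
            best = scored[0]
        keep = [h for _, h in scored[: pop // 2]]  # elitist survival
        popu = keep + [max(1, h + random.choice([-1, 1])) for h in keep]
    return best  # (error, n_hidden)

best_err, best_hidden = design_ga()
print(f"best topology: {best_hidden} hidden units, error {best_err:.3f}")
```

In the architecture the paper proposes, several such design GAs would run on different machines and periodically exchange their best individuals; here a single sequential run stands in for that cooperative population.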




Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Dorado, J., Santos, A., Rabuñal, J.R. (2001). Multilevel Genetic Algorithm for the Complete Development of ANN. In: Mira, J., Prieto, A. (eds) Connectionist Models of Neurons, Learning Processes, and Artificial Intelligence. IWANN 2001. Lecture Notes in Computer Science, vol 2084. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45720-8_86

  • DOI: https://doi.org/10.1007/3-540-45720-8_86

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42235-8

  • Online ISBN: 978-3-540-45720-6

  • eBook Packages: Springer Book Archive
