The Evolution of a Feedforward Neural Network trained under Backpropagation

  • Conference paper
Artificial Neural Nets and Genetic Algorithms

Abstract

This paper presents a theoretical and empirical analysis of the evolution of a feedforward neural network (FFNN) trained using backpropagation (BP). The results of two sets of experiments are presented which illustrate the nature of BP's search through weight space as the network learns to classify the training data. The search is shown to be driven by the initial values of the weights in the output layer of neurons.
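For readers unfamiliar with the setup the abstract describes, the following is a minimal sketch (not the authors' code) of a feedforward network trained with plain backpropagation, the kind of weight-space search the paper analyses. The architecture, data, and hyperparameters are illustrative choices only, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny 2-2-1 network learning XOR with sigmoid activations.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.normal(scale=1.0, size=(2, 2))  # hidden-layer weights
b1 = np.zeros(2)
W2 = rng.normal(scale=1.0, size=(2, 1))  # output-layer weights: the paper
b2 = np.zeros(1)                         # argues their initial values
                                         # drive BP's search

lr = 1.0
losses = []
for _ in range(2000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backward pass: gradients of the mean squared error.
    d_out = (out - y) * out * (1 - out) * (2 / len(X))
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent weight updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(f"loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Re-running this with different seeds for `W2` (while holding `W1` fixed) is one simple way to observe the sensitivity to output-layer initialisation that the abstract refers to.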




Copyright information

© 1998 Springer-Verlag Wien

About this paper

Cite this paper

McLean, D., Bandar, Z., O’Shea, J.D. (1998). The Evolution of a Feedforward Neural Network trained under Backpropagation. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6492-1_114

  • DOI: https://doi.org/10.1007/978-3-7091-6492-1_114

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83087-1

  • Online ISBN: 978-3-7091-6492-1

  • eBook Packages: Springer Book Archive
