Backpropagation growing networks: Towards local minima elimination

  • Neural Network Architectures And Algorithms
  • Conference paper

Artificial Neural Networks (IWANN 1991)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 540)

Abstract

The problem of local minima in backpropagation-based networks [Rumelhart 86] is addressed. Networks that should learn quickly instead take a very large number of epochs to converge. This behavior is usually produced by a combination of initial conditions (network size, training set size relative to the global problem size, learning factors, etc.) that forces the network into a local minimum, a large valley, or a huge plateau of the error surface. Several procedures have been proposed to solve this problem, some of them modifying the training set, others taking advantage of problem preprocessing to choose the initial position of the network. We present a fully local method to avoid the problem: the detection, by each neuron, of the local minimum in which it may be trapped. Once this detection step is covered, several methods to pull the network out of the minimum are proposed. The first is to increase the network size by producing new neurons (Meiosis [Hanson 90b]); other compatible methods are presented as well.
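
The two steps the abstract combines, detecting that training has stalled and then growing the network by splitting a neuron, can be illustrated with a short sketch. The Python/NumPy fragment below is a minimal, hypothetical illustration, not the authors' implementation: it uses a crude global stagnation test where the paper proposes per-neuron detection, and a perturbed-copy split where Hanson's Meiosis splits according to the statistics of stochastic weights; the class name, thresholds, and split-selection rule are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GrowingMLP:
    """Two-layer MLP that can split a hidden neuron when training
    stagnates. The splitting criterion and perturbation scale are
    illustrative assumptions, not the paper's exact rule."""

    def __init__(self, n_in, n_hidden, n_out, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (n_hidden, n_in))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_out, n_hidden))
        self.b2 = np.zeros(n_out)
        self.lr = lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1.T + self.b1)
        self.y = sigmoid(self.h @ self.W2.T + self.b2)
        return self.y

    def backward(self, X, T):
        # Plain batch backpropagation for squared error.
        dy = (self.y - T) * self.y * (1 - self.y)
        dh = (dy @ self.W2) * self.h * (1 - self.h)
        self.W2 -= self.lr * dy.T @ self.h
        self.b2 -= self.lr * dy.sum(0)
        self.W1 -= self.lr * dh.T @ X
        self.b1 -= self.lr * dh.sum(0)

    def split_neuron(self, j, noise=0.1):
        """Meiosis-style growth: replace hidden neuron j with two
        copies, perturbing the new copy's incoming weights and
        halving the outgoing weights so the network's function is
        (almost) preserved at the moment of the split."""
        w_in = self.W1[j] + rng.normal(0, noise, self.W1.shape[1])
        self.W1 = np.vstack([self.W1, w_in])
        self.b1 = np.append(self.b1, self.b1[j])
        self.W2[:, j] *= 0.5
        self.W2 = np.hstack([self.W2, self.W2[:, [j]]])

# XOR: a task where undersized nets often stall on a plateau.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
T = np.array([[0], [1], [1], [0]], float)

net = GrowingMLP(2, 2, 1)
prev_err, stall = np.inf, 0
for epoch in range(5000):
    err = np.mean((net.forward(X) - T) ** 2)
    net.backward(X, T)
    # Crude global stagnation test (the paper proposes a local,
    # per-neuron detection instead).
    stall = stall + 1 if prev_err - err < 1e-6 else 0
    prev_err = err
    if stall > 200:  # plateau detected: grow the network
        net.split_neuron(int(np.argmax(np.abs(net.W2).sum(0))))
        stall = 0
print("final MSE:", err, "hidden units:", net.W1.shape[0])
```

Halving the outgoing weights of the split neuron keeps the network's output almost unchanged at the moment of growth, while the small perturbation of the incoming weights breaks the symmetry between the two offspring so that further backpropagation can drive them apart.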

5. References

  1. Fahlman, S. E. (1988) "An Empirical Study of Learning Speed in Back-Propagation Networks", Carnegie Mellon University Internal Report, CMU-CS-88-162.

  2. Fahlman, S. E. (1990) "The Cascade-Correlation Learning Architecture" in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann.

  3. Hanson, S. J. (1990) "A Stochastic Version of the Delta Rule", Physica D.

  4. Hanson, S. J. (1990) "Meiosis Networks" in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann.

  5. Le Cun, Y., Denker, J. S. and Solla, S. A. (1990) "Optimal Brain Damage" in D. S. Touretzky (ed.), Advances in Neural Information Processing Systems 2, Morgan Kaufmann.

  6. Rumelhart, D. E., Hinton, G. E. and Williams, R. J. (1986) "Learning Internal Representations by Error Propagation" in Rumelhart, D. E. and McClelland, J. L. (eds.), Parallel Distributed Processing: Explorations in the Microstructure of Cognition, MIT Press.

  7. Rumelhart, D. E. (1988) "Learning and Generalization", Proceedings of the IEEE International Conference on Neural Networks, San Diego.

  8. Weigend, A. S., Huberman, B. A. and Rumelhart, D. E. (1990) "Predicting the Future: A Connectionist Approach", International Journal of Neural Systems 1.

Copyright information

© 1991 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Bellido, I., Fernández, G. (1991). Backpropagation growing networks: Towards local minima elimination. In: Prieto, A. (eds) Artificial Neural Networks. IWANN 1991. Lecture Notes in Computer Science, vol 540. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0035887

  • DOI: https://doi.org/10.1007/BFb0035887

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-54537-8

  • Online ISBN: 978-3-540-38460-1

  • eBook Packages: Springer Book Archive
