
Acceleration techniques for the backpropagation algorithm

Conference paper · Part II: Theory, Algorithms

In: Neural Networks (EURASIP 1990)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 412)

Abstract

Like other gradient descent techniques, backpropagation converges slowly, even for medium-sized network problems. This is a consequence of the usually large dimension of the weight space and of the particular shape of the error surface at each iteration point. Oscillation between the sides of deep and narrow valleys, for example, is a well-known situation in which gradient descent yields poor convergence rates.

In this work, we present an acceleration technique for the backpropagation algorithm based on individual adaptation of the learning rate parameter of each synapse. The efficiency of the method is discussed and several related issues are analyzed.
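The abstract does not spell out the update rule, but the general idea of an individual learning rate per synapse can be sketched as follows: each weight keeps its own step size, which is increased while the corresponding partial derivative keeps its sign across iterations and decreased when the sign flips (the valley-oscillation case mentioned above). This is a hedged sketch only; the function names, the NumPy formulation, the factors up and down, and the toy example are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def per_synapse_step(w, grad, prev_grad, rates, up=1.2, down=0.8):
    """One gradient-descent update with an individual learning rate per weight.

    Sketch only: a weight's rate grows while its partial derivative keeps
    the same sign across iterations, and shrinks when the sign flips
    (oscillation across a valley). The factors `up` and `down` are
    illustrative, not values from the paper.
    """
    same_sign = grad * prev_grad > 0.0
    rates = np.where(same_sign, rates * up, rates * down)
    w_new = w - rates * grad            # descent step, one rate per weight
    return w_new, rates


# Minimal usage on a toy quadratic error E(w) = 0.5 * sum(A * w**2),
# whose elongated valley makes a single global learning rate awkward.
A = np.array([100.0, 1.0])              # very different curvatures per axis
w = np.array([1.0, 1.0])
rates = np.full_like(w, 0.01)           # initial per-weight learning rates
prev_grad = np.zeros_like(w)
for _ in range(200):
    grad = A * w                        # gradient of the quadratic error
    w, rates = per_synapse_step(w, grad, prev_grad, rates)
    prev_grad = grad
print(w)                                # approaches the minimum at (0, 0)
```

The toy quadratic only serves to show the behaviour on an elongated valley; in a network, the same per-weight adaptation would be applied to the partial derivatives computed by backpropagation.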



Author information

F. M. Silva, L. B. Almeida

Editor information

Editors: Luis B. Almeida, Christian J. Wellekens


Copyright information

© 1990 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Silva, F.M., Almeida, L.B. (1990). Acceleration techniques for the backpropagation algorithm. In: Almeida, L.B., Wellekens, C.J. (eds) Neural Networks. EURASIP 1990. Lecture Notes in Computer Science, vol 412. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-52255-7_32

  • DOI: https://doi.org/10.1007/3-540-52255-7_32

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-52255-3

  • Online ISBN: 978-3-540-46939-1

  • eBook Packages: Springer Book Archive
