Optimal speedup conditions for a parallel back-propagation algorithm

  • Conference paper
  • First Online:
Parallel Processing: CONPAR 92—VAPP V (VAPP 1992, CONPAR 1992)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 634)

Abstract

One way to implement a parallel back-propagation algorithm is to distribute the examples to be learned among different processors. This method yields spectacular speedups for each epoch of back-propagation learning, but it has a major drawback: parallelization alters the gradient descent algorithm. This paper presents an implementation of this parallel algorithm on a transputer network. It reports experimental laws for the convergence speed of back-propagation, and argues that optimal conditions nevertheless exist under which such a parallel algorithm achieves an actual speedup. It identifies these optimal conditions, both theoretically and experimentally, in terms of the number of processors and the size of the example packets.
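
The distribution scheme summarized above can be sketched in a few lines of Python. This is only an illustrative sketch, not the paper's transputer implementation: the parallel processors are simulated by a plain loop, the network is a toy two-layer sigmoid perceptron, and the names forward, packet_gradient and block_gradient_epoch are invented for the example. It illustrates the point made in the abstract: each packet of examples computes its gradient against the same copy of the weights, and the packet gradients are then combined into a single update, so the trajectory differs from that of per-example gradient descent. For simplicity, one block here covers the whole training set, i.e. there is one combined update per epoch.

import numpy as np

rng = np.random.default_rng(0)

def forward(W1, W2, X):
    # Two-layer sigmoid network; bias units are appended as constant inputs.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    H = 1.0 / (1.0 + np.exp(-(Xb @ W1)))
    Hb = np.hstack([H, np.ones((len(H), 1))])
    Y = 1.0 / (1.0 + np.exp(-(Hb @ W2)))
    return Xb, H, Hb, Y

def packet_gradient(W1, W2, Xp, Tp):
    # Back-propagation gradient summed over one packet of examples.
    Xb, H, Hb, Y = forward(W1, W2, Xp)
    dY = (Y - Tp) * Y * (1.0 - Y)                # output-layer delta
    dH = (dY @ W2[:-1].T) * H * (1.0 - H)        # hidden-layer delta (bias row of W2 skipped)
    return Xb.T @ dH, Hb.T @ dY                  # gradients with respect to W1 and W2

def block_gradient_epoch(W1, W2, X, T, n_processors, lr):
    # One epoch: every simulated "processor" handles one packet, then one combined update.
    packets = zip(np.array_split(X, n_processors), np.array_split(T, n_processors))
    grads = [packet_gradient(W1, W2, Xp, Tp) for Xp, Tp in packets]
    G1 = sum(g[0] for g in grads)
    G2 = sum(g[1] for g in grads)
    return W1 - lr * G1 / len(X), W2 - lr * G2 / len(X)

# Toy usage: the XOR problem split into 4 packets, i.e. one example per simulated processor.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
T = np.array([[0.], [1.], [1.], [0.]])
W1 = rng.normal(scale=0.5, size=(3, 4))          # 2 inputs + bias -> 4 hidden units
W2 = rng.normal(scale=0.5, size=(5, 1))          # 4 hidden + bias -> 1 output
print("initial MSE:", np.mean((forward(W1, W2, X)[3] - T) ** 2))
for _ in range(20000):
    W1, W2 = block_gradient_epoch(W1, W2, X, T, n_processors=4, lr=0.5)
print("final MSE:  ", np.mean((forward(W1, W2, X)[3] - T) ** 2))

Running the toy example shows the mean squared error decreasing over the epochs. With per-example updating, the weights would change after every example; here they change only once per block, which is why the number of processors and the packet size matter for convergence speed.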

Editor information

Luc Bougé, Michel Cosnard, Yves Robert, Denis Trystram

Copyright information

© 1992 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Paugam-Moisy, H. (1992). Optimal speedup conditions for a parallel back-propagation algorithm. In: Bougé, L., Cosnard, M., Robert, Y., Trystram, D. (eds) Parallel Processing: CONPAR 92—VAPP V. CONPAR 1992, VAPP 1992. Lecture Notes in Computer Science, vol 634. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-55895-0_474

Download citation

  • DOI: https://doi.org/10.1007/3-540-55895-0_474

  • Published:

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-55895-8

  • Online ISBN: 978-3-540-47306-0

  • eBook Packages: Springer Book Archive
