Abstract
One way of implementing a parallel back-propagation algorithm is to distribute the examples to be learned among different processors. This method yields spectacular speedups for each epoch of back-propagation learning, but it has a major drawback: parallelization alters the gradient-descent algorithm. This paper presents an implementation of this parallel algorithm on a transputer network. It reports experimental laws about the convergence speed of back-propagation and shows that conditions still exist under which such a parallel algorithm achieves an actual speedup. It identifies theoretically and experimentally optimal conditions, in terms of the number of processors and the size of the example packets.
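The scheme the abstract describes, packets of examples distributed among processors with one combined gradient update per epoch, can be sketched as follows. This is a toy illustration, not the paper's transputer code: the one-weight linear "network", the round-robin packet assignment, and the learning rate are all assumptions made for brevity.

```python
def gradient(w, packet):
    """Gradient of the summed squared error 0.5*(w*x - y)**2 over one packet."""
    return sum((w * x - y) * x for x, y in packet)

def block_gradient_epoch(w, examples, n_procs, lr=0.1):
    """One parallel epoch: split the examples into n_procs packets, sum the
    packet gradients (computed concurrently on a real machine), and apply a
    single global weight update."""
    packets = [examples[i::n_procs] for i in range(n_procs)]
    total_grad = sum(gradient(w, p) for p in packets)
    return w - lr * total_grad

def sequential_epoch(w, examples, lr=0.1):
    """Plain per-example gradient descent for comparison."""
    for x, y in examples:
        w -= lr * (w * x - y) * x
    return w

examples = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0), (1.5, 3.0)]
w_par = block_gradient_epoch(0.0, examples, n_procs=2)
w_seq = sequential_epoch(0.0, examples)
```

After one epoch `w_par` and `w_seq` differ: the block update is not equivalent to the sequence of per-example updates, which is the alteration of gradient descent that the paper's optimal-conditions analysis must account for.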
Copyright information
© 1992 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Paugam-Moisy, H. (1992). Optimal speedup conditions for a parallel back-propagation algorithm. In: Bougé, L., Cosnard, M., Robert, Y., Trystram, D. (eds) Parallel Processing: CONPAR 92 - VAPP V. VAPP CONPAR 1992. Lecture Notes in Computer Science, vol 634. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-55895-0_474
DOI: https://doi.org/10.1007/3-540-55895-0_474
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-55895-8
Online ISBN: 978-3-540-47306-0
eBook Packages: Springer Book Archive