Abstract
We derive cost formulae for three different parallelisation techniques for training supervised networks. These formulae are parameterised by properties of the target computer architecture, so the best match between parallel computer and training technique can be decided analytically. One technique, exemplar parallelism, proves far superior on almost all parallel computer architectures. The formulae also take optimal batch learning into account as the overall training approach.
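The paper's concrete formulae are not reproduced on this page, but the flavour of a BSP-style cost comparison can be sketched. In the standard BSP model a superstep costs w + h·g + l, where w is the maximum local work, h the maximum words communicated, g the per-word communication cost, and l the barrier synchronisation cost. A minimal sketch for exemplar parallelism (each processor computes gradients for its share of the batch, then the gradient vector is exchanged) might look as follows; all parameter names and numeric values are hypothetical illustrations, not figures from the paper.

```python
# Sketch of a BSP-style cost estimate for exemplar parallelism.
# One batch update is modelled as a single superstep: local gradient
# computation over batch/p exemplars, followed by an exchange of the
# full gradient vector (one float per weight).
# All parameter values are hypothetical, not taken from the paper.

def exemplar_parallel_step(batch, p, flops_per_exemplar, weights, g, l):
    """BSP cost w + h*g + l of one batch update on p processors."""
    w = (batch / p) * flops_per_exemplar  # max local work per processor
    h = weights                           # words of gradient exchanged
    return w + h * g + l

# Illustrative parameters: 1024 exemplars, a 10,000-weight network,
# ~3 flops per weight per exemplar, and assumed machine constants g, l.
flops = 3 * 10_000
serial_work = 1024 * flops  # sequential cost: no communication terms
parallel = exemplar_parallel_step(1024, 16, flops, 10_000, g=4.0, l=2000.0)
print(f"estimated speedup on 16 processors: {serial_work / parallel:.1f}x")
```

Because the communication term h·g depends on the weight count rather than the batch size, exemplar parallelism scales well whenever batches are large relative to the network, which is consistent with the abstract's conclusion.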
© 1998 Springer-Verlag Berlin Heidelberg
Rogers, R.O., Skillicorn, D.B. (1998). Using the BSP cost model to optimise parallel neural network training. In: Rolim, J. (eds) Parallel and Distributed Processing. IPPS 1998. Lecture Notes in Computer Science, vol 1388. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64359-1_700
DOI: https://doi.org/10.1007/3-540-64359-1_700
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-64359-3
Online ISBN: 978-3-540-69756-5