
Using the BSP cost model to optimise parallel neural network training

  • Conference paper in Parallel and Distributed Processing (IPPS 1998)
  • Workshop on Biologically Inspired Solutions to Parallel Processing Problems (Albert V. Zomaya, The University of Western Australia; Fikret Ercal, University of Missouri-Rolla; Stephan Olariu, Old Dominion University)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1388)

Abstract

We derive cost formulae for three different parallelisation techniques for training supervised neural networks. These formulae are parameterised by properties of the target parallel architecture, so it is possible to decide the best match between parallel computer and training technique. One technique, exemplar parallelism, is far superior on almost all parallel computer architectures. The formulae also take optimal batch learning into account as the overall training approach.
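The cost formulae referred to above build on the standard BSP charge for a superstep, w + h·g + l, where w is the maximum local computation on any processor, h the maximum number of words it sends or receives, and g and l are the machine's per-word communication and barrier-synchronisation costs. The sketch below shows how such machine-parameterised costs let one compare parallelisation techniques on a given architecture; the function names, the two technique profiles, and all numeric values are hypothetical illustrations, not the paper's formulae.

```python
# Minimal sketch of the standard BSP cost model: one superstep costs
#   w + h*g + l
# with w = max local computation, h = max words sent/received by any
# processor, g = per-word communication cost, l = barrier cost.
# All names and numbers below are hypothetical; the paper's formulae for
# the three training parallelisation techniques are not reproduced here.

def bsp_superstep_cost(w: float, h: float, g: float, l: float) -> float:
    """Cost of a single BSP superstep on a machine with parameters g, l."""
    return w + h * g + l

def training_step_cost(supersteps, g, l):
    """Cost of one training step, given as a list of (w, h) pairs,
    one pair per superstep."""
    return sum(bsp_superstep_cost(w, h, g, l) for w, h in supersteps)

if __name__ == "__main__":
    g, l = 4.0, 200.0  # hypothetical machine parameters

    # Exemplar (data) parallelism: each processor trains a full copy of
    # the network on its share of the batch, then exchanges weight
    # updates once per batch -- one compute-heavy superstep plus one
    # modest exchange.
    exemplar = [(50_000, 1_000), (500, 1_000)]

    # A network-partitioned scheme (illustrative only): activations and
    # errors cross processor boundaries at every layer, giving several
    # communication-heavy supersteps per batch.
    partitioned = [(10_000, 5_000)] * 6

    print("exemplar    :", training_step_cost(exemplar, g, l))
    print("partitioned :", training_step_cost(partitioned, g, l))
```

Plugging a target machine's measured g and l into such formulae in the same way is what lets one pick the cheaper training technique for that architecture.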

Editor information

José Rolim

Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Rogers, R.O., Skillicorn, D.B. (1998). Using the BSP cost model to optimise parallel neural network training. In: Rolim, J. (eds) Parallel and Distributed Processing. IPPS 1998. Lecture Notes in Computer Science, vol 1388. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64359-1_700

  • DOI: https://doi.org/10.1007/3-540-64359-1_700

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64359-3

  • Online ISBN: 978-3-540-69756-5
