
A scalable performance prediction method for parallel neural network simulations

  • Conference paper

High-Performance Computing and Networking (HPCN-Europe 1994)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 796)

Abstract

A performance prediction method is presented for indicating the performance range of MIMD parallel processor systems for neural network simulations. The total execution time of a parallel application is modeled as the sum of its calculation and communication times. The method is scalable because, from times measured on a single processor and a single communication link, the performance, speedup, and efficiency of a larger processor system can be predicted. It is validated quantitatively by applying it to two popular neural networks, backpropagation and the Kohonen self-organizing feature map, decomposed on a GCel-512, a 512-transputer system. The model agrees with the measurements to within 9%.
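
The abstract describes the model only at a high level: the total time is the sum of calculation and communication times, and measurements on one processor and one link are extrapolated to larger systems. The Python sketch below (not taken from the paper) illustrates that style of model; the function names, the even division of calculation work over processors, and the constant per-processor communication term are illustrative assumptions, not the paper's actual formulas.

    # Minimal sketch of a calculation-plus-communication time model.
    # Assumptions (not from the paper): calculation work divides evenly
    # over p processors, and the communication time per processor stays
    # close to the time measured over a single link.

    def predicted_time(t_calc_1, t_comm_1, p):
        """Predicted execution time on p processors, given the calculation
        time t_calc_1 measured on one processor and the communication time
        t_comm_1 measured over one link (both in seconds)."""
        return t_calc_1 / p + t_comm_1

    def speedup(t_calc_1, t_comm_1, p):
        # The sequential run needs no communication, so its time is t_calc_1.
        return t_calc_1 / predicted_time(t_calc_1, t_comm_1, p)

    def efficiency(t_calc_1, t_comm_1, p):
        return speedup(t_calc_1, t_comm_1, p) / p

    # Hypothetical numbers: 120 s of calculation, 0.5 s of communication,
    # extrapolated to a 512-processor system such as the GCel-512.
    print(speedup(120.0, 0.5, 512), efficiency(120.0, 0.5, 512))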

Editor information

Wolfgang Gentzsch, Uwe Harms

Copyright information

© 1994 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Vuurpijl, L., Schouten, T., Vytopil, J. (1994). A scalable performance prediction method for parallel neural network simulations. In: Gentzsch, W., Harms, U. (eds) High-Performance Computing and Networking. HPCN-Europe 1994. Lecture Notes in Computer Science, vol 796. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0020405

  • DOI: https://doi.org/10.1007/BFb0020405

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-57980-9

  • Online ISBN: 978-3-540-48406-6

  • eBook Packages: Springer Book Archive
