Performance prediction of large MIMD systems for parallel neural network simulations

https://doi.org/10.1016/0167-739X(94)00064-L

Abstract

In this paper, we present a performance prediction model that indicates the performance range of MIMD parallel processor systems for neural network simulations. The model expresses the total execution time of a simulation as a function of the execution times of a small number of kernel functions, which need to be measured on only one processor and one physical communication link. These functions depend on the type of neural network, its geometry and decomposition, and the connection structure of the MIMD machine. Using the model, the execution time, speedup, scalability and efficiency of large MIMD systems can be predicted. The model is validated quantitatively by applying it to two popular neural networks, backpropagation and the Kohonen self-organizing feature map, decomposed on a GCel-512, a 512-transputer system. Measurements are taken from network simulations parallelized with both dataset and network decomposition techniques. The model agrees with the measurements to within 1–14%. Estimates are given for the performance that can be expected from the new T9000 transputer systems. The presented method can also be applied to other areas such as image processing.
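The abstract describes the model's structure but not its equations. As a rough illustration only, the sketch below assumes a simple split of the predicted time into a computation term (the single-processor kernel time divided over P processors) and a fixed per-processor communication term; all parameter names and cost forms are assumptions for illustration, not the paper's actual kernel functions, which depend on the network type, geometry, decomposition and machine topology.

```python
# Illustrative sketch of a kernel-based performance prediction model in the
# spirit of the paper. The cost forms are assumptions: computation is the
# measured single-processor kernel time divided over P processors, and
# communication is modelled as a nearest-neighbour exchange whose per-
# processor cost does not depend on P. The paper's real kernel functions
# are not reproduced in the abstract.

def predicted_time(P, n_ops, t_calc, n_msgs, t_comm):
    """Predicted execution time on P processors (illustrative model).

    t_calc: measured time of one computation kernel on one processor.
    t_comm: measured time to send one message over one physical link.
    n_ops, n_msgs: assumed per-iteration operation and message counts.
    """
    computation = n_ops * t_calc / P                     # work split over P
    communication = 0.0 if P == 1 else n_msgs * t_comm   # assumed fixed cost
    return computation + communication

def speedup(P, **kw):
    return predicted_time(1, **kw) / predicted_time(P, **kw)

def efficiency(P, **kw):
    return speedup(P, **kw) / P

if __name__ == "__main__":
    # Hypothetical kernel measurements, chosen only to show the trend.
    params = dict(n_ops=1_000_000, t_calc=1e-6, n_msgs=100, t_comm=1e-4)
    for P in (1, 16, 64, 512):
        print(f"P={P:4d}  T={predicted_time(P, **params):8.4f}s  "
              f"S={speedup(P, **params):7.2f}  E={efficiency(P, **params):6.2%}")
```

Under these assumptions, speedup saturates once the fixed communication term dominates the shrinking computation term; this is the kind of scalability behaviour the model is intended to predict for large processor counts.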


Cited by (5)

  • Parallel codebook design for vector quantization on a message passing MIMD architecture

    2002, Parallel Computing
    Citation Excerpt:

    It is not reasonable to compare those results with the ones reported here for many reasons: the platform is different, the parallelization method is not the same, and the speed-up is calculated differently. There are some MIMD parallel implementations of neural networks which have been trained using competitive learning algorithms, such as the Kohonen maps on the GCEL-512, which is a 512-transputer system [20]. In this paper, a master/worker parallel implementation of a VQ algorithm to train a codebook on a gray image database has been evaluated using the Alex AVX-2 parallel computer.

  • On the implementation of backpropagation on the Alex AVX-2 parallel system

    1997, IEEE International Conference on Neural Networks - Conference Proceedings
  • Estimating the parallel start-up overhead for parallelizing compilers

    1997, Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Acknowledgements

We would like to thank the University of Amsterdam and Parsytec GmbH for allowing us to use the GCel located in Amsterdam during the CAMPP '93 programme.
