Exploiting parallel computers to reduce neural network training time of real applications

  • VII Poster Session Papers
  • Conference paper
High Performance Computing (ISHPC 1997)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1336)

Abstract

Neural networks have been proposed to solve difficult problems such as speech and character recognition. However, no revolutionary system has emerged so far. This paper presents the results of a survey of ongoing research on neural network applications. Moreover, we point out the requirements for mapping neural network applications onto parallel computer hardware. We propose a flexible mapping of back-propagation-trained neural networks onto a highly parallel computer.

The experiments undertaken show the need for an application-specific mapping of the given neural network and training set.
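
For readers unfamiliar with how back-propagation training can be distributed, the sketch below illustrates training-set (data) parallelism, one common way of mapping the algorithm onto a parallel machine. It is a minimal, hypothetical example and not the mapping proposed in the paper: the worker count P, the network sizes, and the helper forward_backward are all illustrative assumptions, and the gradient summation merely stands in for the message-passing reduction a machine such as the AP1000 would perform.

```python
# Hypothetical sketch of training-set (data) parallelism for back propagation.
# P "workers" each hold a replica of the weights, compute gradients on their
# share of the mini-batch, and the partial gradients are summed before a
# single weight update. On a message-passing machine the summation would be
# a global reduction over the interconnect; here it is simulated in-process.
import numpy as np

rng = np.random.default_rng(0)

def forward_backward(W1, W2, x, y):
    """Gradients of squared error for a one-hidden-layer sigmoid network."""
    h = 1.0 / (1.0 + np.exp(-x @ W1))        # hidden activations
    out = 1.0 / (1.0 + np.exp(-h @ W2))      # output activations
    d_out = (out - y) * out * (1.0 - out)    # output-layer deltas
    d_h = (d_out @ W2.T) * h * (1.0 - h)     # hidden-layer deltas
    return x.T @ d_h, h.T @ d_out            # dW1, dW2

# Toy problem and network sizes (illustrative values only).
n_in, n_hidden, n_out, batch = 8, 16, 4, 32
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
W2 = rng.normal(scale=0.1, size=(n_hidden, n_out))
X = rng.normal(size=(batch, n_in))
Y = rng.random(size=(batch, n_out))

P = 4       # number of (simulated) processors
lr = 0.5    # learning rate
for epoch in range(100):
    # Each worker processes its slice of the training set independently.
    grads = [forward_backward(W1, W2, Xp, Yp)
             for Xp, Yp in zip(np.array_split(X, P), np.array_split(Y, P))]
    # Global reduction: sum the partial gradients (the communication step).
    dW1 = sum(g[0] for g in grads)
    dW2 = sum(g[1] for g in grads)
    W1 -= lr * dW1 / batch
    W2 -= lr * dW2 / batch
```

Node (neuron) parallelism, by contrast, would partition the weight matrices themselves across processors; which combination of the two pays off depends on the sizes of the network and the training set, which is the kind of application-specific trade-off the abstract refers to.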

Editor information

Constantine Polychronopoulos, Kazuki Joe, Keijiro Araki, Makoto Amamiya

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Torresen, J., Mori, Si., Nakashima, H., Tomita, S., Landsverk, O. (1997). Exploiting parallel computers to reduce neural network training time of real applications. In: Polychronopoulos, C., Joe, K., Araki, K., Amamiya, M. (eds) High Performance Computing. ISHPC 1997. Lecture Notes in Computer Science, vol 1336. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0024236

  • DOI: https://doi.org/10.1007/BFb0024236

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-63766-0

  • Online ISBN: 978-3-540-69644-5

  • eBook Packages: Springer Book Archive
