
Artificial neural networks on reconfigurable meshes

  • Workshop on Biologically Inspired Solutions to Parallel Processing Problems (organizers: Albert Y. Zomaya, The University of Western Australia; Fikret Ercal, University of Missouri-Rolla; Stephan Olariu, Old Dominion University)
  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1388)

Abstract

Artificial neural networks (ANNs) have been used successfully in applications such as pattern recognition, image processing, automation, and control. The majority of today's applications use backpropagation-trained feedforward ANNs. In this paper, two methods for learning a P-pattern, L-layer ANN on an n × n RMESH are presented. One requires O(nL) memory space but is conceptually simpler to develop; the other uses a pipelined approach that reduces the memory requirement to O(L). Both algorithms take O(PL) time and are optimal for the RMESH architecture.
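To make the cost model concrete, the following is a minimal sequential sketch of backpropagation over P patterns and L weight layers in Python/NumPy. It is not the paper's RMESH algorithm; it only illustrates the per-pattern forward and backward sweeps (O(L) work per pattern, O(PL) sweeps overall) that the two RMESH methods parallelize across an n × n mesh. The layer sizes, sigmoid activation, and learning rate below are illustrative assumptions, and biases are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_epoch(weights, patterns, targets, lr=0.1):
    # One epoch: for each of the P patterns, one forward sweep and one
    # backward sweep through the L weight layers (O(PL) layer visits).
    for x, t in zip(patterns, targets):            # P patterns
        # Forward sweep: keep every layer's activation for the backward pass.
        acts = [x]
        for W in weights:                          # L layers
            acts.append(sigmoid(W @ acts[-1]))
        # Backward sweep: output error, then propagate layer by layer.
        delta = (acts[-1] - t) * acts[-1] * (1.0 - acts[-1])
        for l in range(len(weights) - 1, -1, -1):  # L layers
            grad = np.outer(delta, acts[l])
            # The next layer's delta must use the pre-update weights.
            delta = (weights[l].T @ delta) * acts[l] * (1.0 - acts[l])
            weights[l] -= lr * grad
    return weights

# Tiny illustrative run: a 4-5-5-2 network (L = 3 weight layers), P = 8 patterns.
rng = np.random.default_rng(0)
sizes = [4, 5, 5, 2]
weights = [0.5 * rng.standard_normal((sizes[i + 1], sizes[i]))
           for i in range(len(sizes) - 1)]
patterns = rng.standard_normal((8, sizes[0]))
targets = rng.random((8, sizes[-1]))
train_epoch(weights, patterns, targets)
```

In this sequential form the P patterns and L layers are processed one after another; the paper's contribution is distributing exactly these sweeps over the n × n reconfigurable mesh so that the O(PL) time bound is met with either O(nL) or, via pipelining, O(L) memory per processor.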




Editor information

José Rolim


Copyright information

© 1998 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Jenq, J.F., Ning Li, W. (1998). Artificial neural networks on reconfigurable meshes. In: Rolim, J. (ed.) Parallel and Distributed Processing. IPPS 1998. Lecture Notes in Computer Science, vol 1388. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-64359-1_693


  • DOI: https://doi.org/10.1007/3-540-64359-1_693

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-64359-3

  • Online ISBN: 978-3-540-69756-5

  • eBook Packages: Springer Book Archive
