
A massively parallel neurocomputer with a reconfigurable arithmetical unit

  • Implementation
  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 930))

Abstract

This paper presents a massively parallel neurocomputer system based mainly on a new reconfigurable arithmetical unit optimized for the simulation of neural networks. The system offers very high performance for all typical neural network operations, combined with the flexibility to adapt the available hardware resources to the requirements of a user-selected neural network model. The main system features are support for many different bit lengths, high memory bandwidth, good scalability, and dynamic reconfigurability.
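The abstract's central idea, an arithmetic unit whose operand bit length is chosen to match the selected neural network model, can be illustrated with a small software sketch. This is a hypothetical emulation, not the paper's actual hardware design: the function names (`quantize`, `weighted_sum`) and the default 8-bit format with 4 fractional bits are assumptions made for illustration only.

```python
# Hypothetical sketch: emulating a neuron's weighted-sum computation
# with operands quantized to a user-selected fixed-point bit length,
# in the spirit of a reconfigurable arithmetic unit.

def quantize(x, bits, frac_bits):
    """Round x to signed fixed-point with `bits` total bits,
    `frac_bits` of which are fractional; saturate on overflow."""
    scale = 1 << frac_bits
    lo, hi = -(1 << (bits - 1)), (1 << (bits - 1)) - 1
    q = max(lo, min(hi, round(x * scale)))
    return q / scale

def weighted_sum(weights, inputs, bits=8, frac_bits=4):
    """Multiply-accumulate with all operands quantized to the
    chosen bit length before the products are summed."""
    return sum(quantize(w, bits, frac_bits) * quantize(x, bits, frac_bits)
               for w, x in zip(weights, inputs))
```

Lowering `bits` reduces precision (and, on real hardware, would free resources for more parallel units), while raising it improves accuracy; this trade-off is the motivation for making the bit length a runtime parameter rather than fixing it in silicon.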





Editor information

José Mira, Francisco Sandoval


Copyright information

© 1995 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Strey, A., Avellana, N., Holgado, R., Fernández, J.A., Capillas, R., Valderrama, E. (1995). A massively parallel neurocomputer with a reconfigurable arithmetical unit. In: Mira, J., Sandoval, F. (eds) From Natural to Artificial Neural Computation. IWANN 1995. Lecture Notes in Computer Science, vol 930. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-59497-3_253


  • DOI: https://doi.org/10.1007/3-540-59497-3_253


  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-59497-0

  • Online ISBN: 978-3-540-49288-7

  • eBook Packages: Springer Book Archive
