Systolic implementation of Hopfield networks of arbitrary size

  • Hardware Implementations
  • Conference paper
Artificial Neural Networks (IWANN 1991)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 540)

Abstract

In this paper we present a systolic architecture for the VLSI digital implementation of Hopfield networks of arbitrary size. The implementation is based on DBT-transformed matrix-vector computations.
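
The chapter's DBT-based systolic design is not reproduced on this page. Purely as a hedged illustration of the computation such an array accelerates, the Python sketch below expresses a Hopfield recall step as a weight-matrix by state-vector product followed by a sign threshold, with weights stored by the outer-product (Hebbian) rule; it is the kind of matrix-vector computation that, per the abstract, the paper maps onto a systolic array through a DBT transformation. All names (hebbian_weights, hopfield_step, recall) and parameter choices are assumptions made for this example, not the architecture described in the paper.

# Illustrative sketch only (not the paper's DBT systolic architecture): the core
# operation a systolic Hopfield implementation accelerates is the synaptic
# matrix-vector product followed by a threshold non-linearity.
import numpy as np

def hebbian_weights(patterns):
    """Symmetric weight matrix from bipolar (+1/-1) patterns, outer-product rule."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W

def hopfield_step(W, state):
    """One synchronous update: matrix-vector product plus sign threshold."""
    return np.where(W @ state >= 0.0, 1.0, -1.0)

def recall(W, probe, max_iters=50):
    """Iterate updates until a fixed point is reached or the budget runs out."""
    state = probe.copy()
    for _ in range(max_iters):
        new_state = hopfield_step(W, state)
        if np.array_equal(new_state, state):
            break
        state = new_state
    return state

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    patterns = rng.choice([-1.0, 1.0], size=(3, 32))   # 3 stored patterns, 32 neurons
    W = hebbian_weights(patterns)
    noisy = patterns[0] * rng.choice([1.0, -1.0], size=32, p=[0.9, 0.1])
    print("pattern 0 recovered:", np.array_equal(recall(W, noisy), patterns[0]))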

Author information

S. Barro, A. Bugarín, A. Yáñez

Editor information

Alberto Prieto

Copyright information

© 1991 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Barro, S., Bugarín, A., Yáñez, A. (1991). Systolic implementation of Hopfield networks of arbitrary size. In: Prieto, A. (eds) Artificial Neural Networks. IWANN 1991. Lecture Notes in Computer Science, vol 540. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0035903

  • DOI: https://doi.org/10.1007/BFb0035903

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-54537-8

  • Online ISBN: 978-3-540-38460-1

  • eBook Packages: Springer Book Archive
