
Abstract

We present a survey of recent electronic implementations of neural nets in the US and Canada, with an emphasis on integrated circuits. Well over 50 different circuits were built during the last two years, representing a remarkable variety of designs. They range from digital emulators to fully analog CMOS networks operating in the subthreshold region. A majority of these circuits, over 40 designs, use analog computation to some extent. Several neural net chips are now commercially available, and many companies are developing products for introduction in the near future.

Most of the neural net circuits have been built in standard CMOS technology, except for a few designs in CCD technology. Several researchers are investigating EEPROM cells as compact analog storage elements for the weights.

While a large number of circuits have been built, there are still only a few reports of applications of any of these chips to large, real-world problems. In fact, system integration and applications with neural net chips are just beginning to be explored. We describe experience gained in our laboratory with applications of analog neural net chips to machine vision.




Cite this article

Graf, H.P., Sackinger, E. & Jackel, L.D. Recent developments of electronic neural nets in North America. J VLSI Sign Process Syst Sign Image Video Technol 6, 19–31 (1993). https://doi.org/10.1007/BF01581956

