Abstract
Neural networks are attractive for problems such as optimization and pattern recognition, and more generally for tasks where perception matters more than sheer computational volume. However, their appealing properties (speed, fault tolerance, convergence, …) are lost when these networks are simulated on conventional computers. Dedicated VLSI chips are therefore needed; the challenge is to design chips on which a large number of neurons and synapses can be interconnected and integrated together.
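To illustrate the kind of network such chips target, here is a minimal pure-Python sketch of a Hopfield-style associative memory; this is an illustrative assumption for the reader, not the circuit described in the paper. A bipolar pattern is stored with a Hebbian outer-product rule, and a corrupted cue is recovered by iterating the threshold update until the state converges to a fixed point.

```python
def train(patterns):
    # Hebbian outer-product rule; no self-connections (zero diagonal).
    n = len(patterns[0])
    W = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    W[i][j] += p[i] * p[j] / n
    return W

def recall(W, state, max_steps=10):
    # Synchronous threshold updates until a fixed point is reached.
    s = list(state)
    for _ in range(max_steps):
        new = [1 if sum(W[i][j] * s[j] for j in range(len(s))) >= 0 else -1
               for i in range(len(s))]
        if new == s:      # converged
            break
        s = new
    return s

# Store one 8-unit bipolar pattern, then recover it from a corrupted cue.
pattern = [1, -1, 1, 1, -1, -1, 1, -1]
W = train([pattern])
noisy = list(pattern)
noisy[0] = -noisy[0]          # flip one bit
print(recall(W, noisy))       # -> [1, -1, 1, 1, -1, -1, 1, -1]
```

Even this toy example hints at the cost the abstract alludes to: each update step is O(n²) multiply-accumulates on a sequential machine, whereas an analog VLSI array evaluates all synapses in parallel.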
Copyright information
© 1990 Springer-Verlag Berlin Heidelberg
Cite this paper
Verleysen, M., Jespers, P. (1990). An Analog VLSI Architecture for Large Neural Networks. In: Soulié, F.F., Hérault, J. (eds) Neurocomputing. NATO ASI Series, vol 68. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-76153-9_17
DOI: https://doi.org/10.1007/978-3-642-76153-9_17
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-76155-3
Online ISBN: 978-3-642-76153-9