Abstract
Artificial neural networks have mainly been implemented as simulations on sequential machines. More recently, neurocomputers have come to be recognised as the way to realise the full potential of artificial neural networks [2]. However, current hardware implementations lean either towards optimising network performance, as in special-purpose neurocomputers, or towards providing the flexibility to execute a wide range of neural network models, as in general-purpose neurocomputers. A compromise between these two trends is therefore desirable, yielding high-performance application-specific neurocomputers while still allowing the user to execute different neural algorithms cost-effectively. This paper reports the results of the VLSI implementation of the so-called generic neuron architecture, which serves as an architectural framework for the automatic generation of application-specific integrated circuits (ASICs), granting both the necessary flexibility and high-performance execution.
References
Alippi, C. and Nigri, M.E., “Hardware Requirements for Digital VLSI Implementation of Neural Networks,” Int. Joint Conf. on Neural Networks, Singapore, November 18–21, 1991.
Atlas, L. and Suzuki, Y., “Digital Systems for Artificial Neural Networks,” IEEE Circuits and Dev. Magazine, Nov. 1989.
Floyd, R.W. and Ullman, J.D., “The Compilation of Regular Expressions into Integrated Circuits,” Report STAN-CS-80-798, Computer Science Department, Stanford University, April 1980.
Hammerstrom, D., “A VLSI Architecture for High-Performance, Low-Cost, On-Chip Learning,” Int. Joint Conf. on Neural Networks-IJCNN 90, vol. II, pp. 537–544, San Diego, California, June 17–21, 1990.
Murray, A.F., Smith, A.V.W., and Butler, Z.F., “Bit-Serial Neural Networks,” Neural Information Processing Systems (Proc. 1987 NIPS Conf.), p. 573, Denver, November 1987.
Myers, D.J. and Brebner, G.E., “The Implementation of Hardware Neural Net Systems,” The First IEE Int. Conf. on Artificial Neural Networks, pp. 57–61, October 16–18, 1989.
Nigri, M.E., Treleaven, P.C., and Vellasco, M.M.B.R., “Silicon Compilation of Neural Networks,” Proceedings of the IEEE CompEuro'91, pp. 541–546, Bologna, Italy, May 13–16, 1991.
Rumelhart, D.E. and McClelland, J.L., “Parallel Distributed Processing: Explorations in the Microstructure of Cognition,” MIT Press, Cambridge, Mass., vol. 1 & 2, 1986.
Vellasco, M.M.B.R. and Treleaven, P.C., “A Neurocomputer Exploiting Silicon Compilation,” Proc. Neural Computing Meeting, The Institute of Physics, pp. 163–170, London, April 1989.
Vellasco, M.M.B.R., “A VLSI Architecture for Neural Network Chips,” PhD Thesis, Dept. Computer Science, University College London, University of London, February 1992.
Vellasco, M.M.B.R., “A VLSI Architecture for the Automatic Generation of Neuro-Chips,” Int. Joint Conf. on Neural Networks — IJCNN'92, Beijing, China, November 3–6, 1992.
Yasunaga, M., et al., “Design, Fabrication and Evaluation of a 5-inch Wafer Scale Neural Network LSI Composed of 576 Digital Neurons,” Int. Joint Conf. on Neural Networks — IJCNN 90, vol. II, San Diego, June 17–21, 1990.
© 1993 Springer-Verlag Berlin Heidelberg
Vellasco, M.M.B.R., Treleaven, P.C. (1993). The generic neuron architectural framework for the automatic generation of ASICs. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_191
Print ISBN: 978-3-540-56798-1
Online ISBN: 978-3-540-47741-9