Abstract
In this work we study the behavior of restricted connectionism schemes aimed at solving one of the problems encountered in implementing Artificial Neural Networks (ANNs) in VLSI technology. We limit our study to classical backpropagation-trained MLPs, and we discuss the limitations of restricted connectionism by means of simulations of two tasks that are useful in the practical application of ANNs. From the results of these simulations it can be deduced that the perspectives offered by other authors are perhaps excessively optimistic. The number of neurons required in each hidden layer increases under restricted connectionism. We have also observed that, independently of the type of connection structure, MLPs with a large number of layers cannot be correctly trained. Both of these problems weigh against restricted connectionism approaches to this type of network.
For all these reasons, restricted connectionism does not appear to be a strategy that solves or eliminates the problems presented by the implementation of MLPs. Greater progress can only be expected from technological developments or from the improvement of systolic ring architectures [9][19], especially if, as some authors [12] point out, implementing gradient descent algorithms such as backpropagation requires a minimum of 8-12 bits of precision, something that cannot be achieved with current analog technologies.
References
[1] L.A. Akers, D.K. Ferry and R.O. Grondin, “Synthetic Neural Systems in VLSI”, in An Introduction to Neural and Electronic Networks, S.F. Zornetzer, J.L. Davis and C. Lau, Eds., Academic Press, pp. 317–337, 1990.
[2] L.A. Akers and M.R. Walker, “A Limited-Interconnect Synthetic Neural IC”, IEEE Proceedings ICNN, San Diego, CA, July 1988.
[3] L.A. Akers, M. Walker, D.K. Ferry and R.O. Grondin, “A limited-interconnect, highly layered synthetic neural architecture”, in VLSI for AI, J.G. Delgado-Frias and W.R. Moore, Eds., Kluwer Academic, 1989.
[4] J. Bailey and D. Hammerstrom, “Why VLSI Implementations of Associative VLCNs Require Connection Multiplexing”, IEEE Proceedings ICNN, San Diego, CA, July 1988.
[5] F. Faggin and C. Mead, “VLSI Implementation of Neural Networks”, in An Introduction to Neural and Electronic Networks, S.F. Zornetzer, J.L. Davis and C. Lau, Eds., Academic Press, pp. 275–292, 1990.
[6] K. Goser, U. Hilleringmann, U. Rueckert and K. Schumacher, “VLSI Technologies for Artificial Neural Networks”, IEEE Micro, pp. 28–44, 1989.
[7] E.D. Karnin, “A Simple Procedure for Pruning Back-Propagation Trained Neural Networks”, IEEE Transactions on Neural Networks, June 1990.
[8] J.F. Kolen and J.B. Pollack, “Backpropagation is Sensitive to Initial Conditions”, Complex Systems, Vol. 4, No. 3, pp. 269–280, June 1990.
[9] S.Y. Kung and J.N. Hwang, “A Unifying Algorithm/Architecture for Artificial Neural Networks”, International Conference on Acoustics, Speech and Signal Processing, Glasgow, Vol. 4, pp. 2505–2508, 1989.
[10] M.A.C. Maher, S.P. DeWeerth, M.A. Mahowald and C.A. Mead, “Implementing Neural Architectures Using Analog VLSI Circuits”, IEEE Transactions on Circuits and Systems, May 1989.
[11] T. Markussen, “A New Architectural Approach to Flexible Digital Neural Network Chip Systems”, in VLSI for Artificial Intelligence and Neural Networks, J.G. Delgado-Frias and W.R. Moore, Eds., pp. 315–324, Plenum Press, 1991.
[12] N. Morgan, Ed., Artificial Neural Networks: Electronic Implementations, IEEE Computer Society Press Technology Series, 1990.
[13] M.C. Mozer and P. Smolensky, “Skeletonization: A technique for trimming the fat from a network via relevance assessment”, in Advances in Neural Information Processing 1, D.S. Touretzky, Ed., Morgan Kaufmann, pp. 107–115, 1989.
[14] D. Röckmann and C. Moraga, “Using quadratic perceptrons to reduce interconnection density in multilayer neural networks”, in Lecture Notes in Computer Science 540, Springer Verlag, pp. 86–92, 1991.
[15] D.E. Rumelhart, G.E. Hinton and R.J. Williams, “Learning internal representations by error propagation”, in Parallel Distributed Processing: Explorations in the Microstructures of Cognition, D.E. Rumelhart and J.L. McClelland, Eds., Vol. 1, Ch. 8, Cambridge MA: MIT Press, 1986.
[16] J. Sietsma and R.J.F. Dow, “Neural net pruning: why and how?”, in Proceedings IEEE ICNN, Vol. 1, San Diego, CA, pp. 325–332, 1988.
[17] P. Treleaven, “Neurocomputers”, International Journal of Neurocomputing, Vol. 1, 1989.
[18] P. Treleaven, M. Pacheco and M. Vellasco, “VLSI architectures for Neural Networks”, IEEE Micro, pp. 8–27, 1989.
[19] A. Yáñez, S. Barro and A. Bugarín, “Backpropagation multilayer perceptron: a modular implementation”, in Lecture Notes in Computer Science 540, Springer Verlag, pp. 285–295, 1991.
Copyright information
© 1993 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Regueiro, C.V., Barro, S., Yáñez, A. (1993). Limitation of connectionism in MLP. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_185
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-56798-1
Online ISBN: 978-3-540-47741-9