
Limitation of connectionism in MLP

  • Conference paper
New Trends in Neural Computation (IWANN 1993)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 686)


Abstract

In this work we study the behavior of restricted connectionism schemes aimed at solving one of the problems found in the implementation of Artificial Neural Networks (ANNs) in VLSI technology. We limit our study to classical backpropagation-trained MLPs, and we discuss the limitations of restricted connectionism by means of a simulation of two tasks that are useful in the practical application of ANNs. From the results of these simulations it can be deduced that the perspectives given by other authors are perhaps excessively optimistic. The number of neurons required in each hidden layer increases under restricted connectionism. We have also been able to see that, independently of the type of connection structure, MLPs with a large number of layers cannot be correctly trained. Both of these problems weigh against restricted connectionism approaches to this type of network.
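To make the idea of restricted connectionism concrete, the following sketch (our own illustration, not the authors' simulation; all layer sizes and the fan-in value are hypothetical) trains a hidden layer whose weight matrix is masked so each hidden neuron sees only a small local window of inputs, with the gradient masked in the same way:

```python
import numpy as np

# Illustrative sketch of restricted connectionism: a binary mask limits
# each hidden neuron to a local window of `fan_in` inputs, and the
# backpropagation update is masked so pruned connections never reappear.
rng = np.random.default_rng(0)

n_in, n_hidden, fan_in = 8, 8, 3  # hypothetical sizes

# Hidden unit j connects only to inputs j .. j+fan_in-1 (wrapped around).
mask = np.zeros((n_in, n_hidden))
for j in range(n_hidden):
    for k in range(fan_in):
        mask[(j + k) % n_in, j] = 1.0

W = rng.normal(scale=0.5, size=(n_in, n_hidden)) * mask

def forward(x):
    return np.tanh(x @ W)

def backprop_step(x, grad_out, lr=0.1):
    """One masked gradient step: grad_W is zeroed where mask is zero."""
    global W
    h = forward(x)
    grad_W = np.outer(x, grad_out * (1.0 - h ** 2)) * mask
    W -= lr * grad_W

x = rng.normal(size=n_in)
backprop_step(x, np.ones(n_hidden))
assert np.all(W[mask == 0] == 0)  # restricted connections stay zero
```

With full connectivity the mask would be all ones; restricting it to a narrow band is one simple way to model the limited-interconnect architectures discussed in the paper.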

For all these reasons, restricted connectionism does not appear to be a strategy that permits the solution or elimination of the problems presented by the implementation of MLPs. Greater progress can only be expected from technological developments or from the improvement of systolic ring architectures [9][19], especially if, as some authors [12] point out, implementing gradient descent algorithms such as backpropagation requires a minimum of 8-12 bits of precision, something that cannot be achieved with current analog technologies.
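The precision requirement can be illustrated with a small sketch (our own, with hypothetical values, not taken from [12]): with b-bit fixed-point weights, any update smaller than one quantization step is rounded away, so late-training gradient steps survive at 12 bits but vanish at low precision:

```python
import numpy as np

def quantize(w, bits, w_max=1.0):
    """Round w to a b-bit fixed-point grid covering [-w_max, w_max]."""
    step = 2.0 * w_max / (2 ** bits - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

w = 0.5
small_update = 1e-3  # a typical small gradient step late in training

# At 12 bits the step size is ~4.9e-4, so the update changes the stored weight...
w12 = quantize(w - small_update, bits=12)
# ...at 4 bits the step size is ~0.13, so the update is rounded away entirely.
w4 = quantize(w - small_update, bits=4)

assert w12 != quantize(w, bits=12)  # update survives
assert w4 == quantize(w, bits=4)    # update lost to quantization
```

This is the mechanism behind the 8-12 bit figure: below that resolution, accumulated small updates never register and training stalls.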


References

  1. L.A. Akers, D.K. Ferry and R.O. Grondin, “Synthetic Neural Systems in VLSI”, in An Introduction to Neural and Electronic Networks, S.F. Zornetzer, J.L. Davis and C. Lau, Eds., Academic Press, pp. 317–337, 1990.

  2. L.A. Akers and M.R. Walker, “A Limited-Interconnect Synthetic Neural IC”, IEEE Proceedings ICNN, San Diego, CA, July 1988.

  3. L.A. Akers, M. Walker, D.K. Ferry and R. O. Grondin, “A limited-interconnect, highly layered synthetic neural architecture”, in VLSI for AI, J.G. Delgado-Frias and W.R. Moore, Eds., Kluwer Academic, 1989.

    Google Scholar 

  4. J. Bailey and D. Hammerstrom, “Why VLSI Implementations of Associative VLCNs Require Connection Multiplexing”, IEEE Proceedings ICNN, San Diego, CA, July 1988.

    Google Scholar 

  5. F. Faggin and C. Mead, “VLSI Implementation of Neural Networks”, in An Introduction to Neural and Electronic Networks, S.F. Zornetzer, J.L. Davis and C. Lau, Eds., Academic Press, pp. 275–292, 1990.

    Google Scholar 

  6. K. Goser, U. Hilleringmann, U. Rueckert and K. Schumacher, “VLSI Technologies for Artificial Neural Networks”, IEEE Micro, pp. 28–44, 1989.

  7. E.D. Karnin, “A Simple Procedure for Pruning Back-Propagation Trained Neural Networks”, IEEE Transactions on Neural Networks, June 1990.

  8. J.F. Kolen and J.B. Pollack, “Backpropagation is Sensitive to Initial Conditions”, Complex Systems, Vol. 4, No. 3, June, pp. 269–280, 1990.

  9. S.Y. Kung and J.N. Hwang, “A Unifying Algorithm/Architecture for Artificial Neural Networks”, International Conference on Acoustics, Speech and Signal Processing, Glasgow, Vol. 4, pp. 2505–2508, 1989.

    Google Scholar 

  10. M.A.C. Maher, S.P. DeWeerth, M.A. Mahowald and C.A. Mead, “Implementing Neural Architectures Using Analog VLSI Circuits”, IEEE Transactions on Circuits and Systems, May 1989.

    Google Scholar 

  11. T. Markussen, “A New Architectural Approach to Flexible Digital Neural Network Chip Systems”, in VLSI for Artificial Intelligence and Neural Networks, J.G. Delgado-Frias and W.R. Moore, Eds., pp. 315–324, Plenum Press, 1991.

    Google Scholar 

  12. N. Morgan, editor. “Artificial Neural Networks: Electronic Implementations”, Computer Society Press Technology Series and Computer Society Press of the IEEE, 1990.

    Google Scholar 

  13. M.C. Mozer and P. Smolensky, “Skeletonization: A technique for trimming the fat from a network via relevance assessment”, in Advances in Neural Information Processing 1, D.S. Touretzky, Ed., Morgan Kaufmann, pp. 107–115, 1989.

    Google Scholar 

  14. D. Röckmann and C. Moraga, “Using quadratic perceptrons to reduce interconnection density in multilayer neural networks”, in Lecture Notes in Computer Science 540, Springer Verlag, pp. 86–92, 1991.

    Google Scholar 

  15. D.E. Rumelhart, G.E. Hinton and R.J. Williams, “Learning internal representations by error propagation”, in Parallel Distributed Processing: Explorations in the Microstructures of Cognition, D. E. Rumelhart and J. L. McClelland, Eds., Vol. 1, Ch. 8, Cambridge MA: MIT Press, 1986.

    Google Scholar 

  16. J. Sietsma and R.J.F. Dow, “Neural net pruning: Why and how?”, in Proceedings IEEE ICNN, Vol. 1, San Diego, CA, pp. 325–332, 1988.

  17. P. Treleaven, “Neurocomputers”, International Journal of Neurocomputing, Vol. 1, 1989.

  18. P. Treleaven, M. Pacheco and M. Vellasco, “VLSI architectures for Neural Networks”, IEEE Micro, pp. 8–27, 1989.

    Google Scholar 

  19. A. Yáñez, S. Barro and A. Bugarín, “Backpropagation multilayer perceptron: a modular implementation”, in Lecture Notes in Computer Science 540, Springer Verlag, pp. 285–295, 1991.

    Google Scholar 



Editor information

José Mira, Joan Cabestany, Alberto Prieto


Copyright information

© 1993 Springer-Verlag Berlin Heidelberg


Cite this paper

Regueiro, C.V., Barro, S., Yáñez, A. (1993). Limitation of connectionism in MLP. In: Mira, J., Cabestany, J., Prieto, A. (eds) New Trends in Neural Computation. IWANN 1993. Lecture Notes in Computer Science, vol 686. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-56798-4_185


  • DOI: https://doi.org/10.1007/3-540-56798-4_185

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-56798-1

  • Online ISBN: 978-3-540-47741-9

  • eBook Packages: Springer Book Archive
