Complexity issues in neural network computations

  • Conference paper
  • Conference series: LATIN '92 (LATIN 1992)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 583)

Abstract

In this paper we have described new results on the complexity of computing dichotomies, and in particular dichotomies on examples, focusing on the number of units in the hidden layers. Traditionally, the number of hidden units is bounded by functions of the number of examples. We have introduced a new parameter: the distance between the classes. These two parameters are complementary, and it is still unknown whether other parameters could be used. The bounds that we derived are not tight and should be improved.
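
For concreteness, assuming the examples are labeled points in R^d, the first parameter is simply the sample size, while the distance between the classes can be read off as the smallest distance between two examples carrying opposite labels. The sketch below (Python/NumPy, with the hypothetical helper name class_distance, not code from the paper) only illustrates how this second parameter is measured.

```python
import numpy as np

def class_distance(X, y):
    """Smallest Euclidean distance between examples of opposite classes.

    X : (m, d) array of m examples in R^d
    y : (m,) array of +/-1 labels defining the dichotomy
    """
    pos = X[y == 1]
    neg = X[y == -1]
    # Pairwise differences between the two classes, shape (p, n, d).
    diffs = pos[:, None, :] - neg[None, :, :]
    return np.sqrt((diffs ** 2).sum(axis=-1)).min()

# Four points on a line with interleaved labels: the classes come
# within distance 1 of each other, however many examples there are.
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([1, -1, 1, -1])
print(class_distance(X, y))  # -> 1.0
```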

We have also shown that the use of a second hidden layer can reduce the total number of hidden units; a generic construction illustrating this layered organization is sketched below. What can be proved if we add more layers? More generally, the relationship between the capabilities of multilayer artificial neural networks and their number of layers and hidden units is still a completely open problem.
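
The construction below is a textbook-style illustration, not the paper's own argument: first-layer threshold units test half-spaces, second-layer units AND them into convex cells, and the output unit ORs the cells together. All names are hypothetical; this is a minimal sketch of the extra organizational freedom a second hidden layer provides.

```python
import numpy as np

def step(z):
    # Heaviside threshold unit: 1 if z >= 0, else 0.
    return (z >= 0).astype(float)

def two_hidden_layer_net(X, halfspaces, cells):
    """Illustrative two-hidden-layer threshold network (hypothetical).

    Layer 1: one unit per half-space (w, b), firing when w.x + b >= 0.
    Layer 2: one AND unit per convex cell (a list of half-space indices).
    Output : a single OR unit over the cell units.
    """
    W = np.array([w for w, _ in halfspaces])       # (h, d) weight rows
    b = np.array([c for _, c in halfspaces])       # (h,) biases
    h1 = step(X @ W.T + b)                         # (m, h) half-space indicators
    h2 = np.stack(
        [step(h1[:, idx].sum(axis=1) - len(idx))   # fires iff all k half-spaces fire
         for idx in cells], axis=1)                # (m, number of cells)
    return step(h2.sum(axis=1) - 1)                # fires iff some cell fires

# Example: membership in the unit square [0, 1]^2 (one cell of 4 half-spaces).
hs = [(np.array([1.0, 0.0]), 0.0), (np.array([-1.0, 0.0]), 1.0),
      (np.array([0.0, 1.0]), 0.0), (np.array([0.0, -1.0]), 1.0)]
X = np.array([[0.5, 0.5], [2.0, 0.5]])
print(two_hidden_layer_net(X, hs, cells=[[0, 1, 2, 3]]))  # [1. 0.]
```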

This article was processed using the LaTeX macro package with LMAMULT style

Editor information

Imre Simon

Copyright information

© 1992 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Cosnard, M., Koiran, P., Paugam-Moisy, H. (1992). Complexity issues in neural network computations. In: Simon, I. (eds) LATIN '92. LATIN 1992. Lecture Notes in Computer Science, vol 583. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0023854

  • DOI: https://doi.org/10.1007/BFb0023854

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-55284-0

  • Online ISBN: 978-3-540-47012-0

  • eBook Packages: Springer Book Archive
