Layered neural networks as universal approximators

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1226)

Abstract

The paper considers Ito's results on the approximation capability of layered neural networks with sigmoid units in two layers. First, the paper recalls one of Ito's main results. Then Ito's results concerning the Heaviside function as a sigmoid function are extended to the signum function. For Heaviside functions, a layered neural network implementation is presented that is also valid for signum functions. The focus of the paper is on the implementation of Ito's approximators as four-layer feed-forward neural networks.
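
To give the flavour of the universal-approximation idea the abstract refers to, here is a minimal Python sketch, assuming a one-dimensional target on [0, 1] and a staircase construction from shifted Heaviside units (a single hidden layer). This is an illustrative assumption only, not Ito's two-layer result or the paper's four-layer implementation; the function names and the test function are hypothetical.

```python
import numpy as np

def heaviside(x):
    # H(x) = 1 for x >= 0, 0 otherwise
    return (x >= 0).astype(float)

def step_approximator(f, n_units, x):
    """Piecewise-constant approximation of f on [0, 1] built from
    n_units shifted Heaviside units (one hidden layer).

    Each hidden unit fires past a threshold t_k; its output weight is
    the increment f(t_k) - f(t_{k-1}), so the sum reproduces f exactly
    at the thresholds -- a staircase approximation."""
    thresholds = np.linspace(0.0, 1.0, n_units)
    values = f(thresholds)
    # Output weights: the first unit carries f(t_0), the rest the increments.
    weights = np.diff(values, prepend=0.0)
    # Hidden layer: one Heaviside unit per threshold.
    hidden = heaviside(x[:, None] - thresholds[None, :])
    return hidden @ weights

if __name__ == "__main__":
    f = lambda t: np.sin(2 * np.pi * t)   # hypothetical test function
    x = np.linspace(0.0, 1.0, 1000)
    for n in (10, 50, 200):
        err = np.max(np.abs(step_approximator(f, n, x) - f(x)))
        print(f"{n:4d} units: max error {err:.4f}")
```

For a continuous f, the maximum error of such a staircase shrinks as the number of units grows, which is the elementary intuition behind the density results of Cybenko, Funahashi, Hornik et al., and Ito cited below.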

References

  1. Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Mathematics of Control, Signals, and Systems 2, 303–314.

  2. Funahashi, K. (1989). On the approximate realization of continuous mappings by neural networks. Neural Networks 2, 183–192.

  3. Girosi, F. and Poggio, T. (1989). Representation properties of networks: Kolmogorov's theorem is irrelevant. Neural Computation 1, 456–469.

  4. Hecht-Nielsen, R. (1987). Kolmogorov's mapping neural network existence theorem. Proceedings of the IEEE First International Conference on Neural Networks, Vol. III, 11–13.

  5. Hecht-Nielsen, R. (1989). Theory of the backpropagation neural network. Proceedings of the International Joint Conference on Neural Networks (IJCNN '89), Vol. I, 593–605.

  6. Hornik, K. (1991). Approximation capabilities of multilayer feedforward networks. Neural Networks 4, 251–257.

  7. Hornik, K., Stinchcombe, M. and White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Networks 2, 359–366.

  8. Hornik, K., Stinchcombe, M. and White, H. (1990). Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks. Neural Networks 3, 551–560.

  9. Ito, Y. (1991a). Representation of functions by superpositions of a step or sigmoid function and their applications to neural network theory. Neural Networks 4, 385–394.

  10. Ito, Y. (1991b). Approximation of functions on a compact set by finite sums of a sigmoid function without scaling. Neural Networks 4, 817–826.

  11. Ito, Y. (1992). Approximation of continuous functions on R^d by linear combinations of shifted rotations of a sigmoid function with and without scaling. Neural Networks 5, 105–115.

  12. Ito, Y. (1993). Approximations of differentiable functions and their derivatives on compact sets by neural networks. Math. Scient. 18, 11–19.

  13. Ito, Y. (1994). Approximation capability of layered neural networks with sigmoid units on two layers. Neural Computation 6, 1233–1243.

  14. Kolmogorov, A. N. (1957). On the representation of continuous functions of many variables by superpositions of continuous functions of one variable and addition. Dokl. Akad. Nauk SSSR 114(5), 953–956.

  15. Kurkova, V. (1991). Kolmogorov's theorem is relevant. Neural Computation 3, 617–622.

  16. Kurkova, V. (1992). Kolmogorov's theorem and multilayer neural networks. Neural Networks 5, 501–506.

  17. Stinchcombe, M. and White, H. (1989). Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions. Proceedings of the International Joint Conference on Neural Networks (IJCNN '89), Vol. I, 613–617.

  18. Cardaliaguet, P. and Euvrard, G. (1992). Approximation of a function and its derivatives with a neural network. Neural Networks 5, 207–220.

Editor information

Bernd Reusch

Copyright information

© 1997 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ciuca, I., Ware, J.A. (1997). Layered neural networks as universal approximators. In: Reusch, B. (eds) Computational Intelligence Theory and Applications. Fuzzy Days 1997. Lecture Notes in Computer Science, vol 1226. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-62868-1_133

  • DOI: https://doi.org/10.1007/3-540-62868-1_133

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-62868-2

  • Online ISBN: 978-3-540-69031-3

  • eBook Packages: Springer Book Archive
