
On the three layer neural networks using sigmoidal functions

  • Neural Modeling (Biophysical and Structural Models)
  • Conference paper
Foundations and Tools for Neural Modeling (IWANN 1999)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 1606))


Abstract

The paper deals with the approximation of continuous functions by feedforward neural networks. After presenting one of Ito's main results and some elements of Cybenko's theory, the paper presents a universal approximator implementable as a three-layer feedforward neural network using the Heaviside function in the hidden layer. When an explicit formula for this approximator is sought, it turns out that explicit expressions for the coefficients cannot be obtained by a method similar to that of Ito's theorem.
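The kind of approximator discussed above can be illustrated with a minimal sketch. The code below is a generic staircase construction, not the authors' method: a one-hidden-layer network whose hidden units apply the Heaviside function and whose linear output weights are the increments of the target function at a grid of knots (the name `step_network` and the uniform knot spacing are illustrative choices). Because the weighted step outputs telescope, the network reproduces the target exactly at each knot and approximates it between knots.

```python
import math

def heaviside(x):
    """Heaviside step: 1 for x >= 0, else 0."""
    return 1.0 if x >= 0 else 0.0

def step_network(f, a, b, n):
    """Three-layer net (input -> n+1 Heaviside units -> linear output)
    approximating a continuous f on [a, b] by a staircase function.
    Hidden unit k fires once x passes knot t_k; its output weight is the
    increment f(t_k) - f(t_{k-1}), so the sum telescopes to f(t_k) at t_k."""
    ts = [a + (b - a) * k / n for k in range(n + 1)]
    weights = [f(ts[0])] + [f(ts[k]) - f(ts[k - 1]) for k in range(1, n + 1)]
    def net(x):
        return sum(w * heaviside(x - t) for w, t in zip(weights, ts))
    return net

# Approximate sin on [0, pi] with 200 hidden units; the sup-norm error is
# bounded by the modulus of continuity of sin over one knot spacing (~pi/200).
net = step_network(math.sin, 0.0, math.pi, 200)
err = max(abs(net(x) - math.sin(x))
          for x in (i * math.pi / 1000 for i in range(1001)))
```

Refining the grid (increasing `n`) drives the error to zero for any continuous target, which is the qualitative content of the Heaviside universal-approximation results; the paper's point is that for sigmoidal activations the analogous coefficients admit no such closed form via Ito's method.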


References

  1. Cybenko G., Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems 2, (1989), 303–314
  2. Funahashi K., On the approximate realization of continuous mappings by neural networks, Neural Networks 2, (1989), 183–192
  3. Girosi F. and Poggio T., Representation properties of networks: Kolmogorov's theorem is irrelevant, Neural Computation 1, (1989), 456–469
  4. Hecht-Nielsen R., Kolmogorov's mapping neural network existence theorem, IEEE First Conf. Neural Networks III, (1987), 11–13
  5. Hecht-Nielsen R., Theory of the backpropagation neural network, '89 IJCNN Proc. I, (1989), 593–605
  6. Hornik K., Approximation capabilities of multilayer feedforward networks, Neural Networks 4, (1991), 251–257
  7. Hornik K., Stinchcombe M. and White H., Multilayer feedforward networks are universal approximators, Neural Networks 2, (1989), 359–366
  8. Hornik K., Stinchcombe M. and White H., Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks, Neural Networks 3, (1990), 551–560
  9. Cardaliaguet P. and Euvrard G., Approximation of a function and its derivatives with a neural network, Neural Networks 5, (1992), 207–220
  10. Ito Y., Representation of functions by superpositions of a step or sigmoid function and their applications to neural network theory, Neural Networks 4, (1991a), 385–394
  11. Ito Y., Approximation of functions on a compact set by finite sums of a sigmoid function without scaling, Neural Networks 4, (1991b), 817–826
  12. Ito Y., Approximation of continuous functions on R^d by linear combinations of shifted rotations of a sigmoid function with and without scaling, Neural Networks 5, (1992), 105–115
  13. Ito Y., Approximations of differentiable functions and their derivatives on compact sets by neural networks, Math. Scient. 18, (1993), 11–19
  14. Ito Y., Approximation capability of layered neural networks with sigmoid units on two layers, Neural Computation 6, (1994), 1233–1243
  15. Ito Y., Nonlinearity creates linear independence, Adv. Comput. Math., 55, (1996), 31–35
  16. Ito Y. and Saito K., Superposition of linearly independent functions and finite mappings by neural networks, Math. Scient. 21, (1996), 27–33
  17. Kolmogorov A. N., On the representation of continuous functions of many variables by superpositions of continuous functions of one variable and addition, Dokl. Akad. Nauk SSSR 114 (5), (1957), 953–956
  18. Timan A. F., Theory of Approximation of Functions of a Real Variable, Dover, NY, (1994)
  19. Kurkova V., Kolmogorov's theorem is relevant, Neural Computation 3, (1991), 617–622
  20. Kurkova V., Kolmogorov's theorem and multilayer neural networks, Neural Networks 5, (1992), 501–506
  21. Stinchcombe M. and White H., Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions, '89 IJCNN Proc. I, (1989), 613–617
  22. Zhang Q. and Benveniste A., Wavelet networks, IEEE Trans. on Neural Networks, Vol. 3, No. 6, Nov. (1992), 889–898
  23. Barron A., Universal approximation bounds for superpositions of a sigmoidal function, IEEE Transactions on Information Theory, Vol. 39, No. 3, (1993), 930–945
  24. Mhaskar H. N., Neural networks for localised approximation of smooth and analytic functions, in Neural Networks for Signal Processing III (Eds. Kamm, Huhn, Chellappa and Kung), (1993), 190–196
  25. Mhaskar H. N. and Micchelli C. A., Dimension-independent bounds on the degree of approximation by neural networks, IBM Journal of Research and Development, Vol. 38, No. 3, (1994), 277–283
  26. Katsuura H. and Sprecher D. A., Computational aspects of Kolmogorov's superposition theorem, Neural Networks, Vol. 7, No. 3, (1994), 455–461
  27. Ellacott S. W. and Bose D., Neural Networks: Deterministic Methods of Analysis, International Thomson Computer Press, ISBN 1-85032-244-9, (1996)


Editor information

José Mira, Juan V. Sánchez-Andrés


Copyright information

© 1999 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Ciuca, I., Jitaru, E. (1999). On the three layer neural networks using sigmoidal functions. In: Mira, J., Sánchez-Andrés, J.V. (eds) Foundations and Tools for Neural Modeling. IWANN 1999. Lecture Notes in Computer Science, vol 1606. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0098188


  • DOI: https://doi.org/10.1007/BFb0098188

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-66069-9

  • Online ISBN: 978-3-540-48771-5

  • eBook Packages: Springer Book Archive
