Abstract
The paper deals with the approximation of continuous functions by feedforward neural networks. After presenting one of the main results of Ito and some elements of Cybenko's theory, the paper describes a universal approximator implementable as a three-layer feedforward neural network using the Heaviside function in the hidden layer. In attempting to obtain an explicit formula for this approximator, it turns out that explicit expressions for the coefficients cannot be derived by a method similar to the one used in Ito's theorem.
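The flavour of such a Heaviside-based approximator can be illustrated with a small sketch. This is an illustrative piecewise-constant construction under assumed choices, not the paper's own method: the function name `heaviside_net`, the uniform thresholds, and the increment coefficients are all assumptions made for the example.

```python
import numpy as np

def heaviside_net(f, n, x):
    """Approximate a continuous f on [0, 1] by a single hidden layer of n
    Heaviside units: f(x) ~ f(0) + sum_k c_k * H(x - t_k), where the
    thresholds t_k lie on a uniform grid and each coefficient c_k is the
    increment of f between consecutive grid points."""
    t = np.linspace(0.0, 1.0, n + 1)      # grid points t_0, ..., t_n
    c = np.diff(f(t))                     # c_k = f(t_k) - f(t_{k-1})
    # hidden layer: H(x - t_k) for k = 1..n; output: bias plus weighted sum
    hidden = np.heaviside(np.subtract.outer(x, t[1:]), 1.0)
    return f(t[0]) + hidden @ c

# With 50 units the sup-norm error for sin on [0, 1] is bounded by the
# grid step times the Lipschitz constant, i.e. roughly 1/50.
x = np.linspace(0.0, 1.0, 200)
err = np.max(np.abs(heaviside_net(np.sin, 50, x) - np.sin(x)))
```

The coefficients here are read off directly from the target function's increments; the difficulty discussed in the paper is precisely that such an explicit choice of coefficients does not carry over to the construction obtained via Ito's theorem.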
References
Cybenko G., Approximation by superpositions of a sigmoidal function, Math. Control Signals Systems 2, (1989), 303–314
Funahashi K., On the approximate realization of continuous mappings by neural networks, Neural Networks 2, (1989), 183–192
Girosi F. and Poggio T., Representation properties of networks: Kolmogorov’s theorem is irrelevant, Neural Computation 1, (1989), 456–469
Hecht-Nielsen R., Kolmogorov’s mapping neural network existence theorem, IEEE First Conf. Neural Networks III, (1987), 11–13
Hecht-Nielsen R., Theory of the back propagation neural network, ’89 IJCNN Proc. I, (1989), 593–605
Hornik K., Approximation capabilities of multilayer feedforward networks, Neural Networks 4, (1991), 251–257
Hornik K., Stinchcombe M. and White H., Multilayer feedforward networks are universal approximators, Neural Networks 2, (1989), 359–366
Hornik K., Stinchcombe M. and White H. Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks, Neural Networks 3, (1990), 551–560
Cardaliaguet P., Euvrard G., Approximation of a Function and its derivatives with a Neural Network, Neural Networks, Vol 5, (1992), 207–220
Ito Y., Representation of functions by superposition of a step or sigmoid function and their applications to neural network theory, Neural Networks 4, (1991a), 385–394.
Ito Y., Approximation of functions on a compact set by finite sums of a sigmoid function without scaling, Neural Networks 4, (1991b), 817–826
Ito Y., Approximation of continuous functions on Rd by linear combinations of shifted rotations of a sigmoid function with and without scaling, Neural Networks 5, (1992), 105–115
Ito Y., Approximations of differentiable functions and their derivatives on compact sets by neural networks, Math. Scient. 18, (1993), 11–19
Ito Y., Approximation capability of layered neural networks with sigmoid units on two layers, Neural Computation 6, (1994), 1233–1243
Ito Y., Nonlinearity creates linear independence, Adv. Comput. Math., 55, (1996), 31–35
Ito Y. and K. Saito, Superposition of linearly independent functions and finite mappings by neural networks, Math. Scient., 21, (1996), 27–33.
Kolmogorov A. N., On the representations of continuous functions of many variables by superpositions of continuous functions of one variable and addition, Dokl. Akad. Nauk USSR 114 (5), (1957), 953–956.
Timan A. F., Theory of Approximation of Functions of a Real Variable, Dover, NY, (1994)
Kurkova V., Kolmogorov’s theorem is relevant, Neural Computation 3, (1991), 617–622
Kurkova V., Kolmogorov’s theorem and multilayer neural networks, Neural Networks 5, (1992), 501–506
Stinchcombe M. and White H., Universal approximation using feedforward networks with non-sigmoid hidden layer activation functions, ’89 IJCNN Proc. I, (1989), 613–617
Zhang Q and Benveniste A., Wavelet Networks, IEEE Trans. On Neural Networks, Vol 3, No. 6, Nov., (1992), 889–898.
Barron A., Universal approximation bounds for superpositions of a sigmoidal function, IEEE Transactions on Information Theory, Vol 39, No. 3, (1993), 930–945
Mhaskar H. N., Neural networks for localised approximation of smooth and analytic functions, in Neural networks for signal processing III, (Eds. Kamm, Huhn, Chellappa and Kung), (1993), 190–196.
Mhaskar H. N. and Micchelli C. A., Dimension-independent bounds on the degree of approximation by neural networks, IBM Journal of Research and Development Vol 38, No. 3, (1994), 277–283
Katsuura H. and Sprecher D. A., Computational aspects of Kolmogorov’s superposition theorem, Neural Networks, Vol. 7, No. 3 (1994), 455–461.
Ellacott S. W. and Bose D., Neural networks: deterministic methods of analysis, International Thomson Computer Press, ISBN 1-85032-244-9, (1996)
Copyright information
© 1999 Springer-Verlag Berlin Heidelberg
Cite this paper
Ciuca, I., Jitaru, E. (1999). On the three layer neural networks using sigmoidal functions. In: Mira, J., Sánchez-Andrés, J.V. (eds) Foundations and Tools for Neural Modeling. IWANN 1999. Lecture Notes in Computer Science, vol 1606. Springer, Berlin, Heidelberg. https://doi.org/10.1007/BFb0098188
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-66069-9
Online ISBN: 978-3-540-48771-5