Abstract
We compare the approximation rates achievable by linear approximators with those achievable by neural networks, i.e., nonlinear approximators represented by sets of parametrized functions corresponding to a given type of computational unit. Our analysis relies on the concept of variation of a function with respect to a set. The comparison is carried out in terms of the Kolmogorov n-width for linear subspaces and of a suitable nonlinear n-width for the nonlinear setting represented by neural networks. The results of this paper contribute to a theoretical understanding of why neural networks outperform linear approximators in complex tasks, as is confirmed by a wide variety of applications (recognition of handwritten characters and spoken numerals, approximate solution of functional optimization problems from control theory, etc.).
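For orientation, the two central notions named in the abstract can be stated in their standard forms (the notation here is illustrative and not taken from the paper itself). For a subset $K$ of a normed linear space $X$, the Kolmogorov n-width measures the worst-case error of the best n-dimensional linear subspace:

```latex
d_n(K, X) \;=\; \inf_{\substack{X_n \subset X \\ \dim X_n \le n}} \;
              \sup_{f \in K} \; \inf_{g \in X_n} \, \| f - g \|_X ,
```

while the variation of $f$ with respect to a set $G$ of computational-unit functions (e.g., the functions computable by a single sigmoidal or radial-basis unit) is

```latex
\| f \|_G \;=\; \inf \bigl\{ \, c > 0 \;:\; f / c \in
                \operatorname{cl}\operatorname{conv}\,( G \cup -G ) \, \bigr\} .
```

Bounded $G$-variation is the typical hypothesis under which dimension-independent approximation rates of order $\mathcal{O}(1/\sqrt{n})$ for networks with $n$ units are obtained, and it is against such rates that the linear n-widths are compared.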
© 1999 Springer-Verlag Wien
Cite this paper
Sanguineti, M., Hlaváčková-Schindler, K. (1999). Some Comparisons Between Linear Approximation and Approximation by Neural Networks. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6384-9_30
Print ISBN: 978-3-211-83364-3
Online ISBN: 978-3-7091-6384-9