Some Comparisons Between Linear Approximation and Approximation by Neural Networks

  • Conference paper in Artificial Neural Nets and Genetic Algorithms

Abstract

We present some comparisons between the approximation rates achievable by linear approximators and those achievable by neural networks, i.e., nonlinear approximators represented by sets of parametrized functions corresponding to a given type of computational unit. Our analysis uses the concept of variation of a function with respect to a set. The comparison is made in terms of the Kolmogorov n-width for linear spaces and of a suitable nonlinear n-width for the nonlinear setting represented by neural networks. The results of this paper contribute to the theoretical understanding of the superiority of neural networks over linear approximators in complex tasks, as confirmed by a wide variety of applications (recognition of handwritten characters and spoken numerals, approximate solution of functional optimization problems from control theory, etc.).
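As background, the two quantities named above can be recalled in their standard forms (a sketch of the usual definitions, not reproduced from the paper itself; the ambient normed space X with norm \| \cdot \|, the set K of functions to be approximated, and the set G of functions computable by a single unit are placeholders introduced here):

  % Kolmogorov n-width: best worst-case error over K achievable by an optimal
  % linear subspace X_n of dimension at most n.
  d_n(K, X) \;=\; \inf_{\dim X_n \le n} \; \sup_{f \in K} \; \inf_{g \in X_n} \| f - g \|

  % Variation of f with respect to G: smallest scaling c for which f/c lies in
  % the closed convex hull of G together with its reflection -G.
  \| f \|_G \;=\; \inf \bigl\{ c > 0 \;:\; f/c \in \operatorname{cl\,conv}\,( G \cup -G ) \bigr\}

In the first formula the outer infimum runs over all linear subspaces X_n of X of dimension at most n; the second expression is the variation of f with respect to G, which generalizes the classical notion of total variation and is the tool used to bound the rates of nonlinear approximation by networks with n computational units.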

Copyright information

© 1999 Springer-Verlag Wien

About this paper

Cite this paper

Sanguineti, M., Hlaváčková-Schindler, K. (1999). Some Comparisons Between Linear Approximation and Approximation by Neural Networks. In: Artificial Neural Nets and Genetic Algorithms. Springer, Vienna. https://doi.org/10.1007/978-3-7091-6384-9_30

  • DOI: https://doi.org/10.1007/978-3-7091-6384-9_30

  • Publisher Name: Springer, Vienna

  • Print ISBN: 978-3-211-83364-3

  • Online ISBN: 978-3-7091-6384-9

  • eBook Packages: Springer Book Archive
