
On Tractability of Neural-Network Approximation

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 5495)

Abstract

The tractability of neural-network approximation is investigated by studying how worst-case approximation errors depend on the number of variables. Estimates are derived for networks with Gaussian radial-basis-function and perceptron units.
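For context, the flavor of the estimates at stake can be illustrated with the classical Maurey–Jones–Barron bound, a standard dimension-independent rate for approximation from a bounded dictionary G (such as a set of Gaussian radial-basis or perceptron units). The LaTeX sketch below is quoted only as background and is not claimed to be the specific result derived in this paper.

% Background sketch (Maurey--Jones--Barron lemma), not the paper's theorem.
% G is a bounded subset of a Hilbert space, and f lies in the closure of
% the convex hull of G.
\[
  \inf_{f_n \in \operatorname{conv}_n(G)} \lVert f - f_n \rVert
  \;\le\; \sqrt{\frac{s_G^{2} - \lVert f \rVert^{2}}{n}},
  \qquad s_G = \sup_{g \in G} \lVert g \rVert ,
\]
% where conv_n(G) denotes convex combinations of at most n elements of G.
% "Worst-case error" refers to the supremum of such approximation errors
% over a ball in the relevant function space.

The point of a tractability analysis is that an O(1/\sqrt{n}) rate alone is not enough: if the constants entering such bounds (through the norms constraining the function class) grow exponentially with the number of variables d, the estimate becomes useless in high dimension.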




Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Kainen, P.C., Kůrková, V., Sanguineti, M. (2009). On Tractability of Neural-Network Approximation. In: Kolehmainen, M., Toivanen, P., Beliczynski, B. (eds) Adaptive and Natural Computing Algorithms. ICANNGA 2009. Lecture Notes in Computer Science, vol 5495. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-04921-7_2


  • DOI: https://doi.org/10.1007/978-3-642-04921-7_2

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-04920-0

  • Online ISBN: 978-3-642-04921-7

