Some Comparisons of Model Complexity in Linear and Neural-Network Approximation

  • Conference paper
Artificial Neural Networks – ICANN 2010 (ICANN 2010)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 6354)


Abstract

Capabilities of linear and neural-network models are compared in terms of how model complexity must grow as the required accuracy of approximation increases. Upper bounds on worst-case errors of approximation by neural networks are compared with lower bounds on these errors in linear approximation. The bounds are formulated in terms of singular numbers of certain operators induced by computational units and of high-dimensional volumes of the domains of the functions to be approximated.
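For orientation only, here is a minimal sketch, not stated in the abstract itself, of the two classical quantities in which such comparisons are usually phrased; the paper's own bounds refine these via singular numbers of operators induced by the computational units and volumes of the domains. For a bounded dictionary G of computational units in a Hilbert space, the Maurey–Jones–Barron theorem bounds approximation by n-unit networks from above, while the Kolmogorov n-width gives the benchmark for approximation by n-dimensional linear subspaces:

% Maurey-Jones-Barron upper bound (dimension-independent rate in n):
% conv_n G denotes convex combinations of at most n elements of G,
% and s_G = sup_{g in G} ||g||.
\[
  \operatorname{dist}\bigl(f,\ \operatorname{conv}_n G\bigr)
  \;\le\; \sqrt{\frac{s_G^{2} - \|f\|^{2}}{n}},
  \qquad f \in \operatorname{cl\,conv} G .
\]
% Kolmogorov n-width of a set A in a normed space X: the best worst-case
% error achievable by any linear subspace X_n of dimension at most n.
\[
  d_n(A, X) \;=\; \inf_{\dim X_n \le n}\ \sup_{f \in A}\ \inf_{g \in X_n} \|f - g\| .
\]

The contrast the paper quantifies: the first bound decreases as n^{-1/2} independently of the input dimension, whereas lower bounds on d_n for balls of smooth functions on high-dimensional domains can force much faster growth of the linear model's complexity n for a given accuracy.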




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gnecco, G., Kůrková, V., Sanguineti, M. (2010). Some Comparisons of Model Complexity in Linear and Neural-Network Approximation. In: Diamantaras, K., Duch, W., Iliadis, L.S. (eds) Artificial Neural Networks – ICANN 2010. ICANN 2010. Lecture Notes in Computer Science, vol 6354. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15825-4_48

  • DOI: https://doi.org/10.1007/978-3-642-15825-4_48

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15824-7

  • Online ISBN: 978-3-642-15825-4

  • eBook Packages: Computer Science (R0)
