Complexity of Shallow Networks Representing Functions with Large Variations

  • Conference paper
Artificial Neural Networks and Machine Learning – ICANN 2014

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 8681)

Included in the following conference series: International Conference on Artificial Neural Networks (ICANN)

Abstract

The model complexity of networks representing multivariable functions is studied in terms of variational norms tailored to the types of network units. It is shown that the size of the variational norm reflects both the number of hidden units and the sizes of output weights. Lower bounds on the growth of variational norms with increasing input dimension d are derived for Gaussian units and for perceptrons. It is proven that the variation of the d-dimensional parity function with respect to Gaussian support vector machines grows exponentially with d, and that, for large d, almost any randomly chosen Boolean function has variation with respect to perceptrons that grows exponentially with d.
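For context, the abstract uses the notion of variation without stating it; a standard formulation from the literature on variable-basis approximation defines the variation of a function f with respect to a dictionary G of computational units as the Minkowski functional of the closed convex symmetric hull of G:

    \|f\|_G = \inf \{ c > 0 : f/c \in \mathrm{cl\,conv}(G \cup -G) \}

so a function computable by a shallow network with few units from G and small output weights necessarily has small G-variation.

The claim about parity and Gaussian units can also be probed empirically. The sketch below is illustrative only, not code from the paper; scikit-learn and the particular kernel width are assumptions. It fits a Gaussian-kernel SVM to the d-dimensional parity function and counts support vectors, which play the role of Gaussian hidden units in the resulting shallow network:

    # Illustrative sketch, not from the paper: fit a Gaussian-kernel SVM to
    # the d-dimensional parity function on the Boolean cube and count the
    # support vectors (the Gaussian units of the resulting shallow network).
    import itertools

    import numpy as np
    from sklearn.svm import SVC

    for d in range(2, 9):
        # All 2^d vertices of the Boolean cube {0, 1}^d.
        X = np.array(list(itertools.product([0, 1], repeat=d)), dtype=float)
        # Parity labels in {-1, +1}.
        y = np.where(X.sum(axis=1) % 2 == 0, -1, 1)
        # Large C approximates a hard-margin SVM; gamma is an illustrative choice.
        clf = SVC(kernel="rbf", gamma=1.0, C=1e6)
        clf.fit(X, y)
        print(f"d={d}: {len(clf.support_)} support vectors out of {2 ** d} points")

In runs of this kind, essentially every vertex of the cube typically ends up as a support vector, consistent with the abstract's claim that the variation of d-dimensional parity with respect to Gaussian units grows exponentially with d.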




Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Kůrková, V., Sanguineti, M. (2014). Complexity of Shallow Networks Representing Functions with Large Variations. In: Wermter, S., et al. Artificial Neural Networks and Machine Learning – ICANN 2014. ICANN 2014. Lecture Notes in Computer Science, vol 8681. Springer, Cham. https://doi.org/10.1007/978-3-319-11179-7_42

  • DOI: https://doi.org/10.1007/978-3-319-11179-7_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-11178-0

  • Online ISBN: 978-3-319-11179-7

  • eBook Packages: Computer Science, Computer Science (R0)
