Sparsity of Shallow Networks Representing Finite Mappings

  • Conference paper
  • First Online:

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 744))

Abstract

Limitations of the capabilities of shallow networks to sparsely represent real-valued functions on finite domains are investigated. The influence of the sizes of function domains and of the sizes of dictionaries of computational units on the sparsity of networks computing finite mappings is explored. It is shown that when the dictionary is not sufficiently large with respect to the size of the finite domain, almost any uniformly randomly chosen function on the domain either cannot be represented sparsely or its computation is unstable.
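
The abstract's claim can be illustrated with a small numerical experiment. The sketch below is not the paper's construction: purely for illustration it assumes a dictionary of n random sign-valued units sampled on an abstract domain of m points, fits output weights by least squares, and uses the l1 norm of the weights as a rough proxy for the sparsity and stability of the representation.

```python
# Minimal illustrative sketch (assumed toy model, not the paper's method):
# represent a uniformly random function on a finite domain of size m as a
# linear combination of n dictionary units, here random sign vectors.
import numpy as np

rng = np.random.default_rng(0)
m = 512  # size of the finite domain

for n in (64, 512, 4096):  # dictionary much smaller than, equal to, much larger than m
    # Columns of G are the hypothetical dictionary units evaluated on the m points.
    G = rng.choice([-1.0, 1.0], size=(m, n))
    f = rng.uniform(-1.0, 1.0, size=m)  # uniformly randomly chosen target function
    # Best output weights in the least-squares sense (minimum-norm solution when n >= m).
    w, *_ = np.linalg.lstsq(G, f, rcond=None)
    err = np.linalg.norm(G @ w - f)      # residual of the representation
    l1 = np.abs(w).sum()                 # l1 norm of output weights as a sparsity proxy
    print(f"n={n:5d}  representation error={err:8.4f}  ||w||_1={l1:8.2f}")
```

In this toy setting, a dictionary with n much smaller than m leaves a large residual for a random target, while a dictionary only just large enough typically needs large output weights; both effects echo the dichotomy stated in the abstract.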

Acknowledgments

This work was partially supported by the Czech Grant Agency grant 15-18108S and institutional support of the Institute of Computer Science RVO 67985807.

Author information

Corresponding author

Correspondence to Věra Kůrková.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Kůrková, V. (2017). Sparsity of Shallow Networks Representing Finite Mappings. In: Boracchi, G., Iliadis, L., Jayne, C., Likas, A. (eds) Engineering Applications of Neural Networks. EANN 2017. Communications in Computer and Information Science, vol 744. Springer, Cham. https://doi.org/10.1007/978-3-319-65172-9_29

  • DOI: https://doi.org/10.1007/978-3-319-65172-9_29

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-65171-2

  • Online ISBN: 978-3-319-65172-9
