Abstract
A probabilistic model describing the relevance of tasks to be computed by a class of feedforward networks is studied. Bounds on the correlations of network input-output functions with almost all randomly chosen functions are derived. The impact of the size of the function domain on these correlations is analyzed from the point of view of the concentration of measure phenomenon. It is shown that on large domains, errors of approximation of randomly chosen functions by a fixed input-output function are almost deterministic.
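The concentration claim can be made concrete with a small illustration (a minimal sketch, not the construction used in the paper): for a fixed function f with values in {-1, 1} on a domain of size m and a uniformly random function g with the same range, Hoeffding's inequality bounds the normalized inner product, P(|⟨f, g⟩/m| ≥ t) ≤ 2 exp(−m t²/2), so the correlation of f with almost every randomly chosen g concentrates near zero as m grows. The Python snippet below (the helper correlation_spread is hypothetical, introduced only for this illustration) checks the resulting 1/√m decay of the spread empirically.

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_spread(m, trials=10_000):
    """Empirical spread of the correlations <f, g>/m between a fixed
    {-1,+1}-valued function f and uniformly random {-1,+1}-valued
    functions g on a domain of size m (illustrative sketch only)."""
    f = np.ones(m)                                 # any fixed +-1 function
    g = rng.choice([-1.0, 1.0], size=(trials, m))  # random +-1 functions
    corr = g @ f / m                               # normalized inner products
    return corr.std()

for m in (10, 100, 1_000, 10_000):
    # Hoeffding-type bound: P(|<f,g>/m| >= t) <= 2 exp(-m t^2 / 2),
    # so the spread should shrink roughly like 1/sqrt(m).
    print(f"m = {m:6d}  empirical std = {correlation_spread(m):.4f}"
          f"  1/sqrt(m) = {1/np.sqrt(m):.4f}")
```

As m grows the empirical spread tracks 1/√m, which is the sense in which approximation errors of randomly chosen functions become almost deterministic on large domains.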
Acknowledgments
V. K. was partially supported by the Czech Grant Foundation grant GA19-05704S and the institutional support of the Institute of Computer Science RVO 67985807.
Copyright information
© 2019 Springer Nature Switzerland AG
About this paper
Cite this paper
Kůrková, V. (2019). Probabilistic Bounds for Approximation by Neural Networks. In: Tetko, I., Kůrková, V., Karpov, P., Theis, F. (eds.) Artificial Neural Networks and Machine Learning – ICANN 2019: Theoretical Neural Computation. ICANN 2019. Lecture Notes in Computer Science, vol. 11727. Springer, Cham. https://doi.org/10.1007/978-3-030-30487-4_33
DOI: https://doi.org/10.1007/978-3-030-30487-4_33
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-30486-7
Online ISBN: 978-3-030-30487-4