Abstract
This article proposes and elucidates an innovative neural-network architecture, the Kolmogorov's Spline Network (KSN), based on Kolmogorov's Superposition Theorem and cubic splines. The main result is a theorem bounding the approximation error and the number of adjustable parameters, which compares the KSN favorably with other one-hidden-layer feed-forward neural-network architectures. A sketch of the proof is presented, and the implementation of the KSN is discussed.
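The abstract alone does not specify the construction, so what follows is only a minimal sketch of the general architecture it names: a Kolmogorov-style superposition f(x) ≈ Σ_{q=0}^{2d} g_q(Σ_{p=1}^{d} ψ_{q,p}(x_p)), in which both the inner functions ψ_{q,p} and the outer functions g_q are cubic splines whose knot values act as the network's adjustable parameters. Everything beyond that form is an illustrative assumption: the fixed random monotone inner splines, the least-squares fit of the outer spline values, and all constants and names are not taken from the paper.

# Illustrative sketch, NOT the paper's exact KSN construction.
# Model: f(x) ~= sum_{q=0}^{2d} g_q( sum_{p=1}^{d} psi_{q,p}(x_p) ),
# with cubic splines for the inner functions psi_{q,p} and the outer
# functions g_q. Here the inner splines are fixed at random (assumption)
# and only the outer spline knot values are fitted, by least squares.

import numpy as np
from scipy.interpolate import CubicSpline

rng = np.random.default_rng(0)
d = 2               # input dimension
Q = 2 * d + 1       # number of superposition terms (Kolmogorov's 2d+1)
K = 12              # knots per spline (illustrative choice)
knots = np.linspace(0.0, 1.0, K)

# Fixed random monotone inner splines, defined by their values at the knots.
inner_vals = np.cumsum(rng.uniform(0.1, 1.0, size=(Q, d, K)), axis=-1)
inner_vals /= inner_vals[..., -1:]          # scale each psi to end at 1
inner = [[CubicSpline(knots, inner_vals[q, p]) for p in range(d)]
         for q in range(Q)]

def inner_sums(X):
    """s[:, q] = sum_p psi_{q,p}(X[:, p]), rescaled to [0, 1]."""
    s = np.stack([sum(inner[q][p](X[:, p]) for p in range(d))
                  for q in range(Q)], axis=1)
    # psi values are ~[0, 1], so s/d is too; clip guards spline overshoot
    return np.clip(s / d, 0.0, 1.0)

# Cardinal cubic-spline basis: column k is the spline through the k-th
# unit vector, so each g_q(s) is linear in its K knot values.
def basis(s):
    eye = np.eye(K)
    return np.stack([CubicSpline(knots, eye[k])(s) for k in range(K)], axis=-1)

# Toy target on [0, 1]^2.
f = lambda X: np.sin(2 * np.pi * X[:, 0]) * X[:, 1]

Xtr = rng.uniform(size=(500, d))
ytr = f(Xtr)

# Design matrix: one block of K basis evaluations per outer spline g_q.
S = inner_sums(Xtr)                          # shape (N, Q)
Phi = basis(S).reshape(len(Xtr), Q * K)      # shape (N, Q*K)
coef, *_ = np.linalg.lstsq(Phi, ytr, rcond=None)

Xte = rng.uniform(size=(200, d))
pred = basis(inner_sums(Xte)).reshape(len(Xte), Q * K) @ coef
print("test RMSE:", np.sqrt(np.mean((pred - f(Xte)) ** 2)))

Because the output is linear in the outer splines' knot values, the fit above reduces to a single least-squares solve; the paper's KSN presumably also adapts the inner splines, which this sketch does not attempt.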
Copyright information
© 2003 Springer-Verlag Berlin Heidelberg
Cite this paper
Igelnik, B. (2003). Use of the Kolmogorov’s Superposition Theorem and Cubic Splines for Efficient Neural-Network Modeling. In: Palade, V., Howlett, R.J., Jain, L. (eds.) Knowledge-Based Intelligent Information and Engineering Systems. KES 2003. Lecture Notes in Computer Science, vol. 2773. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45224-9_27
DOI: https://doi.org/10.1007/978-3-540-45224-9_27
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-540-40803-1
Online ISBN: 978-3-540-45224-9