
Use of the Kolmogorov’s Superposition Theorem and Cubic Splines for Efficient Neural-Network Modeling

Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2773)

Abstract

This article proposes and elucidates an innovative neural-network architecture, the Kolmogorov's Spline Network (KSN), based on Kolmogorov's Superposition Theorem and cubic splines. The main result is a theorem giving a bound on the approximation error together with the number of adjustable parameters, which compares the KSN favorably with other one-hidden-layer feed-forward neural-network architectures. A sketch of the proof is presented, and the implementation of the KSN is discussed.
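Kolmogorov's Superposition Theorem states that any continuous function f on [0,1]^d can be written as f(x_1, ..., x_d) = sum_{q=0}^{2d} Phi_q( sum_{p=1}^{d} psi_{pq}(x_p) ) with continuous univariate inner functions psi_{pq} and outer functions Phi_q; the KSN idea is to make these univariate functions adjustable cubic splines. The paper's exact construction is not reproduced in this preview, so the sketch below is only a rough illustration under stated assumptions: a single shared inner spline with small hypothetical per-branch shifts (a Sprecher-style simplification), fixed inner parameters, and outer splines whose knot values are fitted by linear least squares. All names (e.g. KolmogorovSplineSketch) are hypothetical and not taken from the paper.

```python
# Minimal, illustrative sketch of a Kolmogorov-style spline network.
# NOT the paper's KSN: inner functions are fixed and random, only the outer
# cubic splines are trained, and the per-branch shifts are an assumption.
import numpy as np
from scipy.interpolate import CubicSpline

class KolmogorovSplineSketch:
    def __init__(self, dim, n_knots=12, rng=None):
        self.d = dim
        self.rng = np.random.default_rng(rng)
        # Fixed inner spline psi on [0, 1]: monotone random knot values (illustrative only).
        self.inner_knots = np.linspace(0.0, 1.0, n_knots)
        self.psi = CubicSpline(self.inner_knots, np.sort(self.rng.uniform(0, 1, n_knots)))
        # Positive weights lambda_p for the inner sums (normalized to sum to 1).
        self.lam = self.rng.uniform(0.2, 1.0, dim)
        self.lam /= self.lam.sum()
        # Outer splines Phi_q, q = 0..2d: their knot values are the trainable parameters.
        self.n_outer = 2 * dim + 1
        self.outer_knots = np.linspace(0.0, 1.0, n_knots)
        self.outer_vals = np.zeros((self.n_outer, n_knots))

    def _inner(self, X):
        # Inner sums y_q(x) = sum_p lam_p * psi(x_p + shift_q), one column per branch q.
        shifts = np.arange(self.n_outer) * 0.01  # small per-branch shift (assumption)
        Y = np.stack([(self.lam * self.psi(np.clip(X + s, 0.0, 1.0))).sum(axis=1)
                      for s in shifts], axis=1)
        return np.clip(Y, 0.0, 1.0)

    def _design(self, X):
        # Cardinal-spline design matrix: prediction is linear in the outer knot values.
        Y = self._inner(X)
        eye = np.eye(len(self.outer_knots))
        cols = []
        for q in range(self.n_outer):
            basis = np.stack([CubicSpline(self.outer_knots, e)(Y[:, q]) for e in eye], axis=1)
            cols.append(basis)
        return np.hstack(cols)  # shape (n_samples, n_outer * n_knots)

    def fit(self, X, t):
        # Fit the outer-spline knot values by ordinary least squares.
        A = self._design(X)
        w, *_ = np.linalg.lstsq(A, t, rcond=None)
        self.outer_vals = w.reshape(self.n_outer, -1)
        return self

    def predict(self, X):
        Y = self._inner(X)
        out = np.zeros(len(X))
        for q in range(self.n_outer):
            out += CubicSpline(self.outer_knots, self.outer_vals[q])(Y[:, q])
        return out

# Tiny usage example on a smooth 2-D target.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.uniform(0, 1, (500, 2))
    t = np.sin(2 * np.pi * X[:, 0]) * X[:, 1]
    model = KolmogorovSplineSketch(dim=2, rng=0).fit(X, t)
    print("train RMSE:", np.sqrt(np.mean((model.predict(X) - t) ** 2)))
```

Because a cubic interpolating spline is linear in its knot values, the outer functions can be fitted in closed form here; the paper's actual KSN training and its error bound are more elaborate and are not claimed to be reproduced by this sketch.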





Copyright information

© 2003 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Igelnik, B. (2003). Use of the Kolmogorov’s Superposition Theorem and Cubic Splines for Efficient Neural-Network Modeling. In: Palade, V., Howlett, R.J., Jain, L. (eds) Knowledge-Based Intelligent Information and Engineering Systems. KES 2003. Lecture Notes in Computer Science (LNAI), vol 2773. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-45224-9_27


  • DOI: https://doi.org/10.1007/978-3-540-45224-9_27

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-40803-1

  • Online ISBN: 978-3-540-45224-9

