Nonlinear Function Learning by the Normalized Radial Basis Function Networks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 4029)

Abstract

We study strong universal consistency and rates of convergence of nonlinear regression function learning algorithms based on normalized radial basis function networks. The parameters of the network, including the centers, covariance matrices, and synaptic weights, are trained by empirical risk minimization. We also derive rates of convergence for networks whose parameters are learned by complexity regularization.

Copyright information

© 2006 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Krzyżak, A., Schäfer, D. (2006). Nonlinear Function Learning by the Normalized Radial Basis Function Networks. In: Rutkowski, L., Tadeusiewicz, R., Zadeh, L.A., Żurada, J.M. (eds) Artificial Intelligence and Soft Computing – ICAISC 2006. Lecture Notes in Computer Science (LNAI), vol 4029. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11785231_6

  • DOI: https://doi.org/10.1007/11785231_6

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-35748-3

  • Online ISBN: 978-3-540-35750-6

  • eBook Packages: Computer Science, Computer Science (R0)
