
On the Kernel Widths in Radial-Basis Function Networks

Published in: Neural Processing Letters

Abstract

RBFNs (Radial-Basis Function Networks) represent an attractive alternative to other neural network models. Their learning is usually split into an unsupervised part, where the centers and widths of the basis functions are set, and a linear supervised part for weight computation. Although the available literature on RBFN learning widely covers how basis function centers and weights must be set, little effort has been devoted to the learning of basis function widths. This paper addresses this topic: it shows the importance of a proper choice of basis function widths, and how inadequate values can dramatically degrade the approximation performance of the RBFN. It also suggests a one-dimensional search procedure as a compromise between an exhaustive search over all basis function widths and a non-optimal a priori choice.
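The learning scheme described above can be sketched in a few lines. The following is a minimal illustration, not the paper's exact procedure: Gaussian kernels are assumed, the centers are taken as given (in practice they would come from an unsupervised method such as vector quantization), the output weights are obtained by linear least squares, and the one-dimensional search scans a single scalar factor that scales a common reference width. All function names and the choice of the mean inter-center distance as the reference width are illustrative assumptions.

```python
import numpy as np

def rbf_design_matrix(X, centers, width):
    # Gaussian kernels: phi_j(x) = exp(-||x - c_j||^2 / (2 * width^2)).
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_weights(X, y, centers, width):
    # Linear supervised part: least-squares solve for the output weights.
    Phi = rbf_design_matrix(X, centers, width)
    w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return w

def width_search(X_tr, y_tr, X_val, y_val, centers, factors):
    # One-dimensional search: scale a reference width (here, the mean
    # distance between distinct centers -- an illustrative choice) by each
    # candidate factor, and keep the factor with the lowest validation MSE.
    d = np.sqrt(((centers[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2))
    ref = d[d > 0].mean()
    best = None
    for f in factors:
        width = f * ref
        w = fit_weights(X_tr, y_tr, centers, width)
        pred = rbf_design_matrix(X_val, centers, width) @ w
        err = np.mean((pred - y_val) ** 2)
        if best is None or err < best[2]:
            best = (f, width, err)
    return best
```

For example, approximating sin(x) on [0, 2π] with ten evenly spaced centers, the search picks the width factor with the lowest held-out error; factors far from the optimum give visibly worse validation MSE, which is the sensitivity the paper highlights.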




Cite this article

Benoudjit, N., Verleysen, M. On the Kernel Widths in Radial-Basis Function Networks. Neural Processing Letters 18, 139–154 (2003). https://doi.org/10.1023/A:1026289910256
