RBF Neural Networks and Descartes’ Rule of Signs

Conference paper in: Algorithmic Learning Theory (ALT 2002)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2533)

Abstract

We establish versions of Descartes’ rule of signs for radial basis function (RBF) neural networks. These RBF rules of signs provide tight bounds for the number of zeros of univariate networks under certain parameter restrictions. Moreover, they can be used to derive tight bounds for the Vapnik-Chervonenkis (VC) dimension and pseudo-dimension of these networks. In particular, we show that these dimensions are no more than linear. This result contrasts with previous work showing that RBF neural networks with two or more input nodes have superlinear VC dimension. The rules also give rise to lower bounds for network sizes, thus demonstrating the relevance of network parameters for the complexity of computing with RBF neural networks.
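
To give a concrete feel for what such a rule asserts, here is a minimal numerical sketch; it is an illustration under assumptions, not the paper's construction. It assumes the common Gaussian form of a univariate RBF network, f(x) = Σ_k w_k exp(−(x − c_k)²/σ²), and compares the number of sign changes in the weight sequence w_1, …, w_n (ordered by increasing center c_k) with the number of zeros of f found numerically. The helpers rbf and sign_changes, and the specific weights and centers, are hypothetical choices for illustration.

```python
import numpy as np

# Minimal numerical sketch (illustrative assumptions, not the paper's proof):
# a univariate Gaussian RBF network
#     f(x) = sum_k w_k * exp(-(x - c_k)^2 / sigma^2)
# A Descartes-style rule of signs bounds the number of zeros of f by the
# number of sign changes in the weight sequence (w_1, ..., w_n), ordered by
# increasing center, under the parameter restrictions studied in the paper.
# This script merely compares the two counts numerically on one example.

def rbf(x, weights, centers, sigma=1.0):
    """Output of a univariate Gaussian RBF network (hypothetical example)."""
    x = np.asarray(x, dtype=float)[:, None]  # shape (N, 1) for broadcasting
    return (weights * np.exp(-((x - centers) ** 2) / sigma**2)).sum(axis=1)

def sign_changes(seq):
    """Number of sign changes in a sequence, ignoring zeros."""
    signs = [s for s in np.sign(seq) if s != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Hypothetical parameters: weights ordered by increasing center,
# with exactly two sign changes (+, -, +).
centers = np.array([-2.0, 0.0, 2.0])
weights = np.array([1.0, -1.5, 1.0])

# Count zeros of f as sign crossings of its values on a fine grid.
xs = np.linspace(-8.0, 8.0, 200_001)
zeros = sign_changes(rbf(xs, weights, centers))

print("sign changes in weights:", sign_changes(weights))  # -> 2
print("zeros of f (numerical): ", zeros)                  # -> 2 here
```

In this example the two counts agree exactly; the paper's contribution is to identify parameter restrictions under which such Descartes-style bounds provably hold and are tight.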

This work has been supported in part by the ESPRIT Working Group in Neural and Computational Learning II, NeuroCOLT2, No. 27150.

Copyright information

© 2002 Springer-Verlag Berlin Heidelberg

Cite this paper

Schmitt, M. (2002). RBF Neural Networks and Descartes’ Rule of Signs. In: Cesa-Bianchi, N., Numao, M., Reischuk, R. (eds) Algorithmic Learning Theory. ALT 2002. Lecture Notes in Computer Science (LNAI), vol 2533. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36169-3_26

  • DOI: https://doi.org/10.1007/3-540-36169-3_26

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-00170-6

  • Online ISBN: 978-3-540-36169-5
