Abstract
We establish versions of Descartes’ rule of signs for radial basis function (RBF) neural networks. These RBF rules of signs provide tight bounds for the number of zeros of univariate networks with certain parameter restrictions. Moreover, they can be used to derive tight bounds for the Vapnik-Chervonenkis (VC) dimension and pseudo-dimension of these networks. In particular, we show that these dimensions are no more than linear. This result contrasts with previous work showing that RBF neural networks with two or more input nodes have superlinear VC dimension. The rules also give rise to lower bounds for network sizes, thus demonstrating the relevance of network parameters for the complexity of computing with RBF neural networks.
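For readers unfamiliar with the classical statement, Descartes’ rule of signs bounds the number of positive real roots of a polynomial by the number of sign changes in its coefficient sequence (the two quantities differ by an even number). The paper establishes analogues of this rule for univariate RBF networks; the following minimal sketch illustrates only the classical polynomial case, with a hypothetical helper `sign_changes`:

```python
def sign_changes(coeffs):
    """Count sign changes in a real coefficient sequence, ignoring zeros.

    By Descartes' rule of signs, this is an upper bound on the number
    of positive real roots of the polynomial with these coefficients.
    """
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# p(x) = x^2 - 3x + 2 = (x - 1)(x - 2) has two positive roots;
# its coefficient sequence [1, -3, 2] shows exactly two sign changes.
print(sign_changes([1, -3, 2]))  # 2
```

The bound is tight here: two sign changes, two positive roots. The paper’s contribution is to show that analogous sign-change bounds hold for the zeros of univariate RBF networks under certain parameter restrictions.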
This work has been supported in part by the ESPRIT Working Group in Neural and Computational Learning II, NeuroCOLT2, No. 27150.
Copyright information
© 2002 Springer-Verlag Berlin Heidelberg
Cite this paper
Schmitt, M. (2002). RBF Neural Networks and Descartes’ Rule of Signs. In: Cesa-Bianchi, N., Numao, M., Reischuk, R. (eds) Algorithmic Learning Theory. ALT 2002. Lecture Notes in Computer Science, vol 2533. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-36169-3_26
Print ISBN: 978-3-540-00170-6
Online ISBN: 978-3-540-36169-5