
A Note on the Universal Approximation Capability of Support Vector Machines


Abstract

The approximation capability of support vector machines (SVMs) is investigated. We show the universal approximation capability of SVMs with various kernels, including Gaussian, several dot-product, and polynomial kernels, based on the universal approximation capability of their standard feedforward neural network counterparts. Moreover, we show that an SVM with a polynomial kernel of degree p − 1, trained on a training set of size p, can approximate the p training points to any accuracy.
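The second claim is easy to check numerically. The sketch below is a minimal illustration, assuming scikit-learn's SVR with a polynomial kernel; the implementation, the hyperparameter values (large C, tiny epsilon), and the toy data are our own choices for illustration, not prescribed by the paper.

```python
# Minimal sketch: an SVM with a polynomial kernel of degree p - 1 fitted
# to p training points, illustrating the abstract's interpolation claim.
# Assumes scikit-learn; hyperparameters are illustrative choices.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
p = 6                                      # training set size
X = np.linspace(-1.0, 1.0, p).reshape(-1, 1)
y = rng.normal(size=p)                     # arbitrary target values

# Inhomogeneous polynomial kernel (gamma * <x, x'> + coef0)^degree with
# degree = p - 1: its feature space spans all monomials up to x^(p-1),
# so p distinct points yield a nonsingular kernel matrix.  A large C and
# a tiny epsilon push the regressor toward exact interpolation.
svm = SVR(kernel="poly", degree=p - 1, gamma=1.0, coef0=1.0,
          C=1e6, epsilon=1e-8)
svm.fit(X, y)

print("max training error:", np.abs(svm.predict(X) - y).max())
```

Increasing C and shrinking epsilon tightens the fit further, matching the "any accuracy" statement; the inhomogeneous kernel (coef0 = 1) matters here, since a homogeneous one would miss the lower-degree monomials.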




Cite this article

Hammer, B., Gersmann, K. A Note on the Universal Approximation Capability of Support Vector Machines. Neural Processing Letters 17, 43–53 (2003). https://doi.org/10.1023/A:1022936519097
