
Classifier Complexity Reduction by Support Vector Pruning in Kernel Matrix Learning

  • Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNTCS,volume 4507))

Abstract

This paper presents an algorithm for reducing a classifier’s complexity by pruning support vectors while learning the kernel matrix. The proposed algorithm retains the ‘best’ support vectors, chosen so that the span of the support vectors, as defined by Vapnik and Chapelle, is as small as possible. Experiments on real-world data sets show that the number of support vectors can be reduced in some cases by as much as 85% with little degradation in generalization performance.
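The abstract's pruning criterion can be illustrated with a minimal sketch. Note the assumptions: the paper's actual algorithm couples pruning with kernel matrix learning, whereas the sketch below takes a fixed kernel matrix and greedily drops the support vector with the largest Vapnik–Chapelle span (for support vector p, S_p² = 1/(M⁻¹)ₚₚ, where M is the support-vector kernel matrix bordered with a row and column of ones for the bias term). The helper names `sv_spans` and `prune_by_span` are our own, not the paper's.

```python
import numpy as np

def sv_spans(K_sv):
    """Span S_p of each support vector: S_p^2 = 1 / (M^-1)_pp,
    where M borders the SV kernel matrix with a row/column of ones."""
    n = K_sv.shape[0]
    M = np.ones((n + 1, n + 1))
    M[:n, :n] = K_sv        # kernel values among the current SVs
    M[n, n] = 0.0           # bottom-right corner of the bordered matrix
    d = np.diag(np.linalg.inv(M))[:n]
    return np.sqrt(1.0 / d)

def prune_by_span(K, sv_idx, keep):
    """Greedily remove the SV with the largest span until `keep` remain."""
    idx = list(sv_idx)
    while len(idx) > keep:
        spans = sv_spans(K[np.ix_(idx, idx)])
        idx.pop(int(np.argmax(spans)))
    return idx

# Toy usage: RBF kernel on 10 random points, retain the 4 smallest-span SVs.
rng = np.random.default_rng(0)
X = rng.normal(size=(10, 2))
K = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))
kept = prune_by_span(K, range(10), keep=4)
print(len(kept))  # 4
```

The greedy largest-span-first rule is one plausible reading of "retain the support vectors so that the span is as small as possible"; the paper should be consulted for the exact selection procedure.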



References

  1. Burges, C.J.C.: Simplified support vector decision rules. In: 13th International Conference on Machine Learning, p. 71 (1996)

  2. Burges, C.J.C.: Improving the accuracy and speed of support vector machines. In: Neural Information Processing Systems (1997)

  3. Chapelle, O., Vapnik, V., Bousquet, O., Mukherjee, S.: Choosing kernel parameters for support vector machines. Machine Learning 46(1-3), 131 (2001)

  4. Downs, T., Gates, K.E., Masters, A.: Exact simplification of support vector solutions. Journal of Machine Learning Research 2, 293 (2001)

  5. Keerthi, S.S., Chapelle, O., DeCoste, D.: Building support vector machines with reduced classifier complexity. Journal of Machine Learning Research 7 (2006)

  6. Lanckriet, G.R.G., Cristianini, N., Bartlett, P., El Ghaoui, L., Jordan, M.I.: Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research 5, 27 (2004)

  7. Lee, Y.-J., Mangasarian, O.L.: RSVM: Reduced support vector machines. In: CD Proceedings of the First SIAM International Conference on Data Mining, Chicago (2001)

  8. Löfberg, J.: YALMIP: A toolbox for modeling and optimization in MATLAB. In: Proceedings of the CACSD Conference, Taipei, Taiwan (2004), Available from http://control.ee.ethz.ch/~joloef/yalmip.php

  9. Nguyen, D., Ho, T.: An efficient method for simplifying support vector machines. In: 22nd International Conference on Machine Learning, Bonn, Germany, pp. 617–624 (2005)

  10. Rätsch, G.: Benchmark repository. Technical report, Intelligent Data Analysis Group, Fraunhofer-FIRST (2005)

  11. Schölkopf, B., Smola, A.: Learning with Kernels. MIT Press, Cambridge (2002)

  12. Sturm, J.F.: Using SeDuMi 1.02, a MATLAB toolbox for optimization over symmetric cones. Optimization Methods and Software 11-12, 625–653 (1999)

  13. Tipping, M.E.: Sparse Bayesian learning and the relevance vector machine. Journal of Machine Learning Research 1, 211 (2001)

  14. Vapnik, V.: Statistical Learning Theory. John Wiley and Sons, New York (1998)

  15. Vapnik, V., Chapelle, O.: Bounds on error expectation for SVM. Neural Computation 12, 2013 (2000)

  16. Wu, M., Schölkopf, B., Bakir, G.: Building sparse large margin classifiers. In: 22nd International Conference on Machine Learning, Bonn, Germany (2005)
Editor information

Francisco Sandoval, Alberto Prieto, Joan Cabestany, Manuel Graña


Copyright information

© 2007 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Saradhi, V.V., Karnick, H. (2007). Classifier Complexity Reduction by Support Vector Pruning in Kernel Matrix Learning. In: Sandoval, F., Prieto, A., Cabestany, J., Graña, M. (eds) Computational and Ambient Intelligence. IWANN 2007. Lecture Notes in Computer Science, vol 4507. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-73007-1_33

  • DOI: https://doi.org/10.1007/978-3-540-73007-1_33

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-73006-4

  • Online ISBN: 978-3-540-73007-1

  • eBook Packages: Computer Science (R0)