
A Random Sampling Technique for Training Support Vector Machines

For Primal-Form Maximal-Margin Classifiers

  • Conference paper
  • Algorithmic Learning Theory (ALT 2001)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 2225)

Included in the conference series: Algorithmic Learning Theory (ALT)

Abstract

Random sampling techniques have been developed for combinatorial optimization problems. In this note, we report an application of one of these techniques to training support vector machines (more precisely, primal-form maximal-margin classifiers), which solve two-group classification problems using hyperplane classifiers. Through this research, we aim (i) to design efficient support vector machine training algorithms with theoretical guarantees, and (ii) to develop systematic and efficient methods for finding “outliers”, i.e., examples containing inherent errors.
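The abstract describes the approach only at a high level. As a rough illustration, and not the authors' actual algorithm, the sketch below shows the general shape of a Clarkson-style random-sampling scheme applied to a linear maximal-margin classifier: solve the problem on a small, weight-biased random subsample, then increase the weights of the examples that the resulting hyperplane violates so they are more likely to be drawn in later rounds. Every concrete choice here is an assumption made for illustration: the subsample size, the weight-doubling rule, and the use of scikit-learn's SVC with a large C as a stand-in hard-margin solver.

```python
# Illustrative sketch only -- not the algorithm from the paper.
# Assumes X is an (n, d) array and y an (n,) array with labels in {-1, +1};
# scikit-learn's SVC with a large C serves as an approximate hard-margin solver.
import numpy as np
from sklearn.svm import SVC

def sample_and_solve(X, y, r=100, max_rounds=50, seed=0):
    rng = np.random.default_rng(seed)
    n = len(y)
    weights = np.ones(n)                 # per-example multiplicity weights
    clf = None
    for _ in range(max_rounds):
        # Draw a small subsample, biased toward heavily weighted examples.
        idx = rng.choice(n, size=min(r, n), replace=False,
                         p=weights / weights.sum())
        if len(np.unique(y[idx])) < 2:   # need both classes to fit a classifier
            continue
        clf = SVC(kernel="linear", C=1e6)    # large C approximates a hard margin
        clf.fit(X[idx], y[idx])
        # Functional margins y_i * (w . x_i + b) on the full data set.
        margins = y * clf.decision_function(X)
        violators = margins < 1.0 - 1e-9
        if not violators.any():
            return clf                   # subsample optimum is globally feasible
        weights[violators] *= 2.0        # boost violators for future rounds
    return clf                           # fallback if the round budget runs out
```

The weight-doubling step is the standard iterative-reweighting idea behind Clarkson-style randomized LP algorithms; examples that are violated round after round gain weight quickly, which also suggests how such a loop can be used to flag candidate outliers.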

This work was started while the first and third authors were visiting the Centre de Recerca Matemàtica, Spain.

Supported in part by EU ESPRIT IST-1999-14186 (ALCOM-FT), EU EP27150 (NeuroCOLT II), Spanish Government PB98-0937-C04 (FRESCO), and CIRIT 1997SGR-00366.

Supported in part by a Grant-in-Aid (C-13650444) from the Ministry of Education, Science, Sports and Culture of Japan.

Supported in part by a Grant-in-Aid for Scientific Research on Priority Areas “Discovery Science” from the Ministry of Education, Science, Sports and Culture of Japan.






Copyright information

© 2001 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Balcázar, J., Dai, Y., Watanabe, O. (2001). A Random Sampling Technique for Training Support Vector Machines. In: Abe, N., Khardon, R., Zeugmann, T. (eds) Algorithmic Learning Theory. ALT 2001. Lecture Notes in Computer Science, vol. 2225. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-45583-3_11

  • DOI: https://doi.org/10.1007/3-540-45583-3_11

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-42875-6

  • Online ISBN: 978-3-540-45583-7

  • eBook Packages: Springer Book Archive
