Stochastic Sequential Minimal Optimization for Large-Scale Linear SVM

  • Conference paper
Neural Information Processing (ICONIP 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10634)


Abstract

Linear support vector machines (SVMs) are a popular tool in machine learning. Compared with nonlinear SVMs, linear SVMs achieve competitive performance and are more efficient at tackling large-scale, high-dimensional tasks. Various algorithms have been developed to speed up their training, such as Liblinear, SVM-perf and Pegasos. In this paper, we propose a new fast algorithm for linear SVMs based on stochastic sequential minimal optimization (SSMO). Our algorithm differs from other linear SVM solvers in two main respects: it updates two variables simultaneously rather than a single variable, and it maintains the bias term b in the discriminant function. Experiments indicate that the proposed algorithm is much faster than state-of-the-art solvers such as Liblinear, and achieves higher classification accuracy.
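Since the full text is not reproduced here, the sketch below is only a plausible reading of the abstract's description: an SMO-style solver that picks a random pair of dual variables per step (the "stochastic" part), solves the two-variable subproblem in closed form under the constraint sum(alpha * y) = 0, and maintains the bias term b via Platt's KKT-based rule. The function name, the uniform pair-selection scheme, the tolerances and the iteration budget are assumptions for illustration, not the authors' exact procedure; for a linear kernel the weight vector w can be kept in sync with alpha incrementally so each update costs O(d).

import numpy as np

def stochastic_smo_linear_svm(X, y, C=1.0, n_iter=100000, seed=0):
    # Illustrative sketch, not the paper's exact algorithm.
    # X: (n, d) array; y: (n,) labels in {-1, +1}. Returns (w, b).
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)   # kept in sync with alpha so f(x) = w @ x + b is O(d)
    b = 0.0
    for _ in range(n_iter):
        i, j = rng.choice(n, size=2, replace=False)   # stochastic pair choice (assumed uniform)
        Ei = X[i] @ w + b - y[i]                      # prediction errors
        Ej = X[j] @ w + b - y[j]
        eta = X[i] @ X[i] + X[j] @ X[j] - 2.0 * (X[i] @ X[j])
        if eta <= 1e-12:
            continue
        # Feasible interval for alpha[j] implied by sum(alpha * y) = 0 and 0 <= alpha <= C.
        if y[i] != y[j]:
            L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
        else:
            L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
        if L >= H:
            continue
        # Closed-form solution of the two-variable subproblem, clipped to the box.
        aj = float(np.clip(alpha[j] + y[j] * (Ei - Ej) / eta, L, H))
        ai = alpha[i] + y[i] * y[j] * (alpha[j] - aj)
        # Incremental update of w = sum_k alpha_k y_k x_k.
        w += y[i] * (ai - alpha[i]) * X[i] + y[j] * (aj - alpha[j]) * X[j]
        # Platt-style bias update keeps b consistent with the KKT conditions.
        bi = b - Ei - y[i] * (ai - alpha[i]) * (X[i] @ X[i]) - y[j] * (aj - alpha[j]) * (X[i] @ X[j])
        bj = b - Ej - y[i] * (ai - alpha[i]) * (X[i] @ X[j]) - y[j] * (aj - alpha[j]) * (X[j] @ X[j])
        if 0.0 < ai < C:
            b = bi
        elif 0.0 < aj < C:
            b = bj
        else:
            b = 0.5 * (bi + bj)
        alpha[i], alpha[j] = ai, aj
    return w, b

Prediction is then sign(x @ w + b). Maintaining w explicitly is what makes pairwise SMO updates viable at linear-SVM scale: the per-step cost is O(d) and does not grow with the number of support vectors.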


References

  1. Cortes, C., Vapnik, V.: Support-vector networks. Mach. Learn. 20(3), 273–297 (1995)

  2. Yuan, G.X., Ho, C.H., Lin, C.J.: Recent advances of large-scale linear classification. Proc. IEEE 100(9), 2584–2603 (2012)

  3. Chu, D., Zhang, C., Tao, Q.: A faster cutting plane algorithm with accelerated line search for linear SVM. Pattern Recogn. 67, 127–138 (2017)

  4. Paul, S., Magdon-Ismail, M., Drineas, P.: Feature selection for linear SVM with provable guarantees. Pattern Recogn. 60, 205–214 (2016)

  5. Tang, Y.: Deep learning using linear support vector machines. arXiv preprint arXiv:1306.0239 (2013)

  6. Joachims, T.: Training linear SVMs in linear time. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 217–226. ACM (2006)

  7. Shalev-Shwartz, S., Singer, Y., Srebro, N., Cotter, A.: Pegasos: primal estimated sub-gradient solver for SVM. Math. Program. 127(1), 3–30 (2011)

  8. Zhang, T.: Solving large scale linear prediction problems using stochastic gradient descent algorithms. In: Proceedings of the Twenty-First International Conference on Machine Learning, p. 116. ACM (2004)

  9. Le, Q.V., Smola, A.J., Vishwanathan, S.: Bundle methods for machine learning. In: Advances in Neural Information Processing Systems, pp. 1377–1384 (2007)

  10. Shalev-Shwartz, S., Singer, Y., Srebro, N.: Pegasos: primal estimated sub-gradient solver for SVM. In: Proceedings of the 24th International Conference on Machine Learning, pp. 807–814. ACM (2007)

  11. Chang, K.W., Hsieh, C.J., Lin, C.J.: Coordinate descent method for large-scale L2-loss linear support vector machines. J. Mach. Learn. Res. 9, 1369–1398 (2008)

  12. Platt, J.C.: Fast training of support vector machines using sequential minimal optimization. In: Advances in Kernel Methods, pp. 185–208 (1999)

  13. Chang, C.C., Lin, C.J.: LIBSVM: a library for support vector machines. ACM Trans. Intell. Syst. Technol. (TIST) 2(3), 27 (2011)

  14. Keerthi, S.S., Shevade, S.K., Bhattacharyya, C., Murthy, K.R.K.: Improvements to Platt's SMO algorithm for SVM classifier design. Neural Comput. 13(3), 637–649 (2001)

  15. Hsieh, C.J., Yu, H.F., Dhillon, I.S.: PASSCoDe: parallel asynchronous stochastic dual co-ordinate descent. In: Proceedings of the 32nd International Conference on Machine Learning (2015)

  16. Hsieh, C.J., Chang, K.W., Lin, C.J., Keerthi, S.S., Sundararajan, S.: A dual coordinate descent method for large-scale linear SVM. In: Proceedings of the 25th International Conference on Machine Learning, pp. 408–415. ACM (2008)

  17. Fan, R.E., Chang, K.W., Hsieh, C.J., Wang, X.R., Lin, C.J.: LIBLINEAR: a library for large linear classification. J. Mach. Learn. Res. 9, 1871–1874 (2008)

  18. Lichman, M.: UCI machine learning repository (2013). http://archive.ics.uci.edu/ml


Acknowledgments

This work is supported by the National Program on Key Basic Research Project under Grant 2013CB329304.

Author information


Corresponding author

Correspondence to Qinghua Hu.



Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Peng, S., Hu, Q., Dang, J., Peng, Z. (2017). Stochastic Sequential Minimal Optimization for Large-Scale Linear SVM. In: Liu, D., Xie, S., Li, Y., Zhao, D., El-Alfy, E.S. (eds) Neural Information Processing. ICONIP 2017. Lecture Notes in Computer Science, vol. 10634. Springer, Cham. https://doi.org/10.1007/978-3-319-70087-8_30


  • DOI: https://doi.org/10.1007/978-3-319-70087-8_30


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-70086-1

  • Online ISBN: 978-3-319-70087-8
