On Low-Rank Regularized Least Squares for Scalable Nonlinear Classification

  • Conference paper
Neural Information Processing (ICONIP 2011)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 7063)

Abstract

In this paper, we revisit the classical technique of Regularized Least Squares (RLS) for the classification of large-scale nonlinear data. Specifically, we focus on a low-rank formulation of RLS and show that its time complexity is linear in the data size alone, with no dependence on the number of labels or features for problems of moderate feature dimension. This makes low-rank RLS particularly suitable for classification on large data sets. Moreover, we propose a general theorem giving closed-form solutions to the Leave-One-Out Cross Validation (LOOCV) estimation problem in empirical risk minimization, which encompasses all types of RLS classifiers as special cases. This removes the need for cross validation, a computationally expensive parameter-selection procedure, and greatly accelerates the training of RLS classifiers. Experimental results on real and synthetic large-scale benchmark data sets show that low-rank RLS achieves classification performance comparable to standard kernel SVM for nonlinear classification while being far more efficient, and the efficiency gain grows with the data dimension.
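
Although the paper's exact low-rank construction is not reproduced on this page, the claimed linear-in-n cost is easy to see in a Nyström-style (subset-of-regressors) RLS: with m ≪ n landmark points, training reduces to an m-by-m linear system assembled in O(nm²) time. The sketch below is a minimal illustration under those assumptions; the RBF kernel choice, the jitter term, and all names are hypothetical, not the authors' formulation.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """RBF kernel matrix between the rows of A and the rows of B."""
    sq = (A**2).sum(1)[:, None] + (B**2).sum(1)[None, :] - 2.0 * A @ B.T
    return np.exp(-gamma * sq)

def lowrank_rls_fit(X, Y, landmarks, gamma=1.0, lam=1e-3):
    """Nystrom-style low-rank RLS (hypothetical sketch, not the paper's
    exact method). Cost is O(n m^2 + m^3) for n samples and m landmarks,
    i.e. linear in the data size n."""
    K_nm = rbf_kernel(X, landmarks, gamma)           # n x m
    K_mm = rbf_kernel(landmarks, landmarks, gamma)   # m x m
    m = landmarks.shape[0]
    # Reduced m x m system replaces the dense n x n kernel system:
    # (K_nm^T K_nm + lam * K_mm) alpha = K_nm^T Y
    A = K_nm.T @ K_nm + lam * K_mm + 1e-8 * np.eye(m)  # jitter for stability
    return np.linalg.solve(A, K_nm.T @ Y)

def lowrank_rls_predict(X_test, landmarks, alpha, gamma=1.0):
    """Decision values for test points; argmax over columns for multi-class."""
    return rbf_kernel(X_test, landmarks, gamma) @ alpha
```

With a one-hot (or ±1) label matrix Y, the same reduced system covers the multi-class case, adding only a matrix-vector product per extra label column.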
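On the LOOCV claim: for plain linear RLS (ridge regression) there is a classical closed form in which each leave-one-out residual is the training residual rescaled by the corresponding hat-matrix diagonal, e_i = r_i / (1 − H_ii), so all n held-out errors come from a single fit. The paper's theorem is stated more generally for all RLS classifiers; the snippet below only sketches this standard linear special case, and the function name and parameter sweep are illustrative assumptions.

```python
import numpy as np

def rls_loocv_mse(X, y, lam):
    """Exact leave-one-out MSE for linear RLS from a single fit, via
    e_i = r_i / (1 - H_ii) with hat matrix H = X (X^T X + lam I)^{-1} X^T."""
    n, d = X.shape
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(d), X.T)  # n x n hat matrix
    residuals = y - H @ y
    loo_residuals = residuals / (1.0 - np.diag(H))
    return float(np.mean(loo_residuals**2))

# Model selection without n refits per candidate (illustrative sweep):
# lams = np.logspace(-4, 2, 20)
# best_lam = min(lams, key=lambda l: rls_loocv_mse(X, y, l))
```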

Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Fu, Z., Lu, G., Ting, KM., Zhang, D. (2011). On Low-Rank Regularized Least Squares for Scalable Nonlinear Classification. In: Lu, BL., Zhang, L., Kwok, J. (eds) Neural Information Processing. ICONIP 2011. Lecture Notes in Computer Science, vol 7063. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24958-7_57

  • DOI: https://doi.org/10.1007/978-3-642-24958-7_57

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24957-0

  • Online ISBN: 978-3-642-24958-7

  • eBook Packages: Computer Science, Computer Science (R0)
