
Model Selection for Regularized Least-Squares Classification

  • Conference paper
Advances in Natural Computation (ICNC 2005)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 3610)


Abstract

Regularized Least-Squares Classification (RLSC) can be regarded as a two-layer neural network that uses a regularized square loss function and the kernel trick. Poggio and Smale recently reformulated it within the mathematical foundations of learning and called it a key algorithm of learning theory. The generalization performance of RLSC depends heavily on the setting of its kernel and hyperparameters. We therefore present a novel two-step approach to optimal parameter selection: first the optimal kernel parameters are selected by maximizing kernel-target alignment, and then the optimal hyperparameter is determined by minimizing RLSC's leave-one-out bound. Unlike a traditional grid search, our method needs no independent validation set. Experiments on IDA's benchmark datasets with a Gaussian kernel demonstrate that the method is feasible and time efficient.
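The two-step procedure described in the abstract can be sketched in a few lines of NumPy. This is a minimal illustration under standard assumptions, not the authors' implementation: the closed-form leave-one-out residual for kernel ridge regression, (y_i − f(x_i)) / (1 − H_ii) with H = K(K + λnI)^{-1}, stands in for the paper's leave-one-out bound, and the function names, parameter grids, and toy data below are invented for illustration.

```python
import numpy as np

def gaussian_kernel(X, gamma):
    """Gaussian kernel matrix K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    return np.exp(-gamma * np.maximum(d2, 0.0))

def kernel_target_alignment(K, y):
    """A(K, yy^T) = <K, yy^T>_F / (||K||_F ||yy^T||_F); for y in {-1,+1}^n,
    ||yy^T||_F = n, so the alignment reduces to y^T K y / (n ||K||_F)."""
    return (y @ K @ y) / (len(y) * np.linalg.norm(K, "fro"))

def rlsc_loo_error(K, y, lam):
    """Leave-one-out misclassification rate for RLSC / kernel ridge regression,
    computed in closed form from a single fit via the smoother matrix H."""
    n = len(y)
    H = K @ np.linalg.inv(K + lam * n * np.eye(n))
    f = H @ y
    h = np.diag(H)
    f_loo = y - (y - f) / (1.0 - h)      # LOO prediction at each x_i
    return np.mean(y * f_loo <= 0.0)     # fraction of LOO sign errors

def select_parameters(X, y, gammas, lams):
    """Two-step selection: (1) pick gamma maximizing alignment,
    (2) pick lambda minimizing the closed-form LOO error."""
    gamma = max(gammas, key=lambda g: kernel_target_alignment(gaussian_kernel(X, g), y))
    K = gaussian_kernel(X, gamma)
    lam = min(lams, key=lambda l: rlsc_loo_error(K, y, l))
    return gamma, lam, rlsc_loo_error(K, y, lam)

# Toy illustration on synthetic two-class data (not the paper's benchmarks).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 1.0, (40, 2)), rng.normal(2.0, 1.0, (40, 2))])
y = np.concatenate([-np.ones(40), np.ones(40)])
gamma, lam, err = select_parameters(X, y, gammas=[0.01, 0.1, 1.0], lams=[1e-4, 1e-2, 1.0])
```

Because both criteria are evaluated on the training set alone (the alignment directly, the leave-one-out error in closed form from one matrix inverse), no held-out validation split is needed, which is the source of the time savings over grid search with cross-validation.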


References

  1. Poggio, T., Smale, S.: The Mathematics of Learning: Dealing with Data. Notices of the American Mathematical Society 50, 537–544 (2003)

  2. Cucker, F., Smale, S.: On the Mathematical Foundations of Learning. Bulletin of the American Mathematical Society 39, 1–49 (2001)

  3. Vapnik, V.: The Nature of Statistical Learning Theory, 2nd edn. Springer, Heidelberg (2000)

  4. Rifkin, R.M.: Everything Old Is New Again: A Fresh Look at Historical Approaches to Machine Learning. PhD thesis, Massachusetts Institute of Technology (2002)

  5. Rifkin, R.M., Yeo, G., Poggio, T.: Regularized Least-Squares Classification. In: Advances in Learning Theory: Methods, Models and Applications. NATO Science Series III: Computer and Systems Sciences, vol. 190. IOS Press, Amsterdam (2003)

  6. Suykens, J.A.K., Van Gestel, T., De Brabanter, J., et al.: Least Squares Support Vector Machines. World Scientific, Singapore (2002)

  7. Fung, G., Mangasarian, O.L.: Proximal Support Vector Machine Classifiers. In: KDD 2001: Seventh ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA (2001)

  8. Chapelle, O., Vapnik, V., Mukherjee, S.: Choosing Multiple Parameters for Support Vector Machines. Machine Learning 46, 131–159 (2002)

  9. Cristianini, N., Kandola, J., Elisseeff, A., Shawe-Taylor, J.: On Kernel Target Alignment. Journal of Machine Learning Research (submitted)

  10. Jaakkola, T., Haussler, D.: Probabilistic Kernel Regression Models. In: Advances in Neural Information Processing Systems, vol. 11 (1998)

  11. Rätsch, G., Onoda, T., Müller, K.-R.: Soft Margins for AdaBoost. Machine Learning 42, 287–320 (2001)


Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yang, HH., Wang, XY., Wang, Y., Gao, HH. (2005). Model Selection for Regularized Least-Squares Classification. In: Wang, L., Chen, K., Ong, Y.S. (eds) Advances in Natural Computation. ICNC 2005. Lecture Notes in Computer Science, vol 3610. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11539087_72


  • DOI: https://doi.org/10.1007/11539087_72

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-28323-2

  • Online ISBN: 978-3-540-31853-8

  • eBook Packages: Computer Science (R0)
