Speedy Local Search for Semi-Supervised Regularized Least-Squares

  • Conference paper
KI 2011: Advances in Artificial Intelligence (KI 2011)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 7006)

Abstract

In real-world machine learning scenarios, labeled data is often scarce while unlabeled data can be obtained easily. Semi-supervised approaches aim to improve prediction performance by taking both the labeled and the unlabeled part of the data into account. In particular, semi-supervised support vector machines favor decision hyperplanes that lie in a “low-density area” induced by the unlabeled patterns (while still considering the labeled part of the data). The associated optimization problem, however, is combinatorial in nature and hence difficult to solve. In this work, we present an efficient implementation of a simple local search strategy that is based on matrix updates of the intermediate candidate solutions. Our experiments on both artificial and real-world data sets indicate that the approach can successfully incorporate unlabeled data in an efficient manner.
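The following is a minimal, illustrative sketch of the label-flipping local search idea the abstract describes, written in Python/NumPy under our own assumptions: the names rls_objective and local_search are hypothetical, the kernel RLS system is naively re-solved from scratch after every candidate flip, and the paper's efficient matrix updates of intermediate candidate solutions, as well as any class-balance constraint on the unlabeled part, are deliberately omitted. It is not the authors' implementation.

```python
# Illustrative sketch only -- NOT the authors' implementation. It naively
# re-solves the kernel RLS system after each tentative label flip instead
# of using the paper's efficient matrix updates, and it omits the usual
# class-balance constraint on the unlabeled labels.
import numpy as np

def rls_objective(K, y, lam):
    """Fit kernel regularized least-squares for a fixed label vector y and
    return the regularized squared-error objective. K: (n, n) kernel matrix."""
    n = K.shape[0]
    alpha = np.linalg.solve(K + lam * np.eye(n), y)   # (K + lam*I) alpha = y
    f = K @ alpha                                     # predictions on all points
    return np.sum((y - f) ** 2) + lam * alpha @ K @ alpha

def local_search(K, y_labeled, n_unlabeled, lam=1.0, max_sweeps=50, seed=0):
    """Greedy single-flip local search over the unknown labels of the
    unlabeled points (assumed to come after the labeled ones in K)."""
    rng = np.random.default_rng(seed)
    n_labeled = len(y_labeled)
    y = np.concatenate([y_labeled,
                        rng.choice([-1.0, 1.0], size=n_unlabeled)])
    best = rls_objective(K, y, lam)
    for _ in range(max_sweeps):
        improved = False
        for j in range(n_labeled, n_labeled + n_unlabeled):
            y[j] = -y[j]                              # tentative flip
            obj = rls_objective(K, y, lam)
            if obj < best:
                best, improved = obj, True            # keep the flip
            else:
                y[j] = -y[j]                          # undo the flip
        if not improved:
            break                                     # local optimum reached
    return y, best

if __name__ == "__main__":
    # Toy usage with a linear kernel on random data (purely illustrative).
    rng = np.random.default_rng(1)
    X = rng.normal(size=(20, 2))
    K = X @ X.T
    y_l = np.where(X[:5, 0] > 0, 1.0, -1.0)           # labels of the first 5 points
    labels, value = local_search(K, y_l, n_unlabeled=15)
    print("objective value at local optimum:", value)
```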




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Gieseke, F., Kramer, O., Airola, A., Pahikkala, T. (2011). Speedy Local Search for Semi-Supervised Regularized Least-Squares. In: Bach, J., Edelkamp, S. (eds) KI 2011: Advances in Artificial Intelligence. KI 2011. Lecture Notes in Computer Science (LNAI), vol 7006. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24455-1_8

  • DOI: https://doi.org/10.1007/978-3-642-24455-1_8

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24454-4

  • Online ISBN: 978-3-642-24455-1

  • eBook Packages: Computer Science, Computer Science (R0)
