Multi-Task Low-Rank Metric Learning Based on Common Subspace

  • Conference paper
Neural Information Processing (ICONIP 2011)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 7063)

Abstract

Multi-task learning, the joint training of multiple related problems, usually leads to better performance by exploiting the information shared across the problems. Metric learning, by contrast, although an important research topic, is typically studied in the traditional single-task setting. To address this gap, we propose a novel multi-task metric learning framework. Based on the assumption that the discriminative information of all the tasks can be retained in a low-dimensional common subspace, the proposed framework can readily extend many existing metric learning approaches to the multi-task scenario. In particular, we apply the framework to a popular metric learning method, Large Margin Component Analysis (LMCA), and obtain a new model called multi-task LMCA (mtLMCA). In addition to learning an appropriate metric, this model optimizes directly over the transformation matrix and performs surprisingly well compared with many competitive approaches. One appealing feature of mtLMCA is that it learns a metric of low rank, which proves effective in suppressing noise and is hence more resistant to over-fitting. A series of experiments on both synthetic and real data demonstrate the superiority of the proposed framework over four comparison algorithms.
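
The abstract only summarizes the model, so the following Python/numpy sketch is purely illustrative. It shows one way to realize the common-subspace idea: each task t gets the transformation L_t = A_t L0, where L0 is a low-rank projection shared across all tasks and A_t is a small task-specific matrix, and both are trained by gradient descent on an LMCA-style triplet hinge loss. The factorization, names, and loss details here are assumptions for illustration, not the authors' exact formulation.

    import numpy as np

    def triplet_loss_and_grads(L0, A, tasks, margin=1.0):
        """LMCA-style hinge loss over triplets, summed across tasks.

        For task t the distance is d_t(u, v) = ||A[t] @ L0 @ (u - v)||^2,
        so the implied metric M_t = L0.T @ A[t].T @ A[t] @ L0 has rank at
        most L0.shape[0].  Returns the loss and gradients w.r.t. L0 and A.
        """
        gL0 = np.zeros_like(L0)
        gA = [np.zeros_like(At) for At in A]
        loss = 0.0
        for t, triplets in enumerate(tasks):
            Lt = A[t] @ L0
            for x, pos, neg in triplets:           # pos: same class, neg: different class
                zp, zn = Lt @ (x - pos), Lt @ (x - neg)
                violation = margin + zp @ zp - zn @ zn
                if violation > 0:                  # hinge is active
                    loss += violation
                    # d ||Lt v||^2 / dLt = 2 (Lt v) v^T
                    gLt = 2.0 * (np.outer(zp, x - pos) - np.outer(zn, x - neg))
                    gA[t] += gLt @ L0.T            # chain rule through Lt = A[t] @ L0
                    gL0 += A[t].T @ gLt
        return loss, gL0, gA

    def fit(tasks, dim, rank=2, lr=1e-3, epochs=200, seed=0):
        """tasks: one list of (x, same-class, other-class) triplets per task."""
        rng = np.random.default_rng(seed)
        L0 = 0.1 * rng.standard_normal((rank, dim))   # shared low-rank subspace
        A = [np.eye(rank) for _ in tasks]             # task-specific parts
        for _ in range(epochs):
            _, gL0, gA = triplet_loss_and_grads(L0, A, tasks)
            L0 -= lr * gL0
            for t in range(len(A)):
                A[t] -= lr * gA[t]
        return L0, A

Because every task's metric factors through the shared L0, its rank is at most the chosen subspace dimension, mirroring the low-rank, noise-suppressing property the abstract highlights; optimizing L0 and A[t] directly, rather than a full positive semidefinite matrix, matches the abstract's remark that the model optimizes the transformation matrix itself.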


References

  1. Argyriou, A., Evgeniou, T., Pontil, M.: Convex multi-task feature learning. Machine Learning 73(3), 243–272 (2008)

  2. Caruana, R.: Multitask learning. Machine Learning 28(1), 41–75 (1997)

  3. Davis, J.V., Kulis, B., Jain, P., Sra, S., Dhillon, I.S.: Information-theoretic metric learning. In: Proceedings of the 24th International Conference on Machine Learning, pp. 209–216 (2007)

  4. Evgeniou, T., Pontil, M.: Regularized multi-task learning. In: Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 109–117 (2004)

  5. Fanty, M.A., Cole, R.: Spoken letter recognition. In: Advances in Neural Information Processing Systems, p. 220 (1990)

  6. Goldberger, J., Roweis, S., Hinton, G., Salakhutdinov, R.: Neighbourhood components analysis. In: Advances in Neural Information Processing Systems (2004)

  7. Huang, K., Ying, Y., Campbell, C.: GSML: A unified framework for sparse metric learning. In: Ninth IEEE International Conference on Data Mining, pp. 189–198 (2009)

  8. Micchelli, C.A., Pontil, M.: Kernels for multi-task learning. In: Advances in Neural Information Processing Systems, pp. 921–928 (2004)

  9. Parameswaran, S., Weinberger, K.Q.: Large margin multi-task metric learning. In: Advances in Neural Information Processing Systems (2010)

  10. Rosales, R., Fung, G.: Learning sparse metrics via linear programming. In: Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 367–373 (2006)

  11. Torresani, L., Lee, K.: Large margin component analysis. In: Advances in Neural Information Processing Systems, pp. 505–512 (2007)

  12. Weinberger, K.Q., Saul, L.K.: Distance metric learning for large margin nearest neighbor classification. Journal of Machine Learning Research 10, 207–244 (2009)

  13. Xing, E.P., Ng, A.Y., Jordan, M.I., Russell, S.: Distance metric learning, with application to clustering with side-information. In: Advances in Neural Information Processing Systems, vol. 15, pp. 505–512 (2003)

  14. Zhang, Y., Yeung, D.Y., Xu, Q.: Probabilistic multi-task feature selection. In: Advances in Neural Information Processing Systems, pp. 2559–2567 (2010)




Copyright information

© 2011 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Yang, P., Huang, K., Liu, CL. (2011). Multi-Task Low-Rank Metric Learning Based on Common Subspace. In: Lu, BL., Zhang, L., Kwok, J. (eds) Neural Information Processing. ICONIP 2011. Lecture Notes in Computer Science, vol 7063. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-24958-7_18

  • DOI: https://doi.org/10.1007/978-3-642-24958-7_18

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-24957-0

  • Online ISBN: 978-3-642-24958-7

  • eBook Packages: Computer Science (R0)
