Abstract
Metric learning has been widely studied in machine learning for its ability to improve the performance of various algorithms. Meanwhile, multi-task learning usually leads to better performance by exploiting the information shared across tasks. In this paper, we propose a novel framework that lets metric learning benefit from jointly training all tasks. Based on the assumption that discriminative information is retained in a common subspace for all tasks, our framework can readily extend many existing metric learning methods. In particular, we apply the framework to the widely used Large Margin Component Analysis (LMCA), yielding a new model called multi-task LMCA, which performs remarkably well compared to many competitive methods. Moreover, the method learns a low-rank metric directly, which acts as feature reduction and enables noise suppression and low storage cost. A series of experiments demonstrates the superiority of our method over three comparison algorithms on both synthetic and real data.
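To make the low-rank idea concrete, the following is a minimal sketch (not the paper's algorithm) of why a learned transformation matrix `L` of rank r < d yields a low-rank Mahalanobis metric M = LᵀL that simultaneously reduces features: distances under M equal Euclidean distances between the r-dimensional projections Lx. The matrix `L` here is random, standing in for a learned one; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r = 10, 3                      # input dimension, subspace rank (r < d)
L = rng.standard_normal((r, d))   # stand-in for a learned transformation

def mahalanobis_sq(x, y, L):
    """Squared distance ||L(x - y)||^2, i.e. (x - y)^T M (x - y) with M = L^T L."""
    diff = L @ (x - y)
    return float(diff @ diff)

x, y = rng.standard_normal(d), rng.standard_normal(d)

# The full metric matrix M has rank at most r, so storing L (r*d numbers)
# instead of M (d*d numbers) saves storage and projects out noise directions.
M = L.T @ L
direct = float((x - y) @ M @ (x - y))
assert np.isclose(mahalanobis_sq(x, y, L), direct)
print(np.linalg.matrix_rank(M))   # at most r = 3
```

Because the distance only depends on the r-dimensional projections `L @ x`, each point can be stored in the reduced space, which is the feature-reduction effect the abstract refers to.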
Notes
Note that it is straightforward to extend our framework to other metric learning models that optimize their objective function with respect to the transformation matrix.
Acknowledgments
This work was supported by National Basic Research Program of China (973 Program) grant 2012CB316301, National Natural Science Foundation of China (NSFC) under grants 61075052 and 60825301, and Tsinghua National Laboratory for Information Science and Technology (TNList) Cross-discipline Foundation.
Cite this article
Yang, P., Huang, K. & Liu, CL. A multi-task framework for metric learning with common subspace. Neural Comput & Applic 22, 1337–1347 (2013). https://doi.org/10.1007/s00521-012-0956-8