ABSTRACT
Multi-task feature selection (MTFS) is an important tool for learning the explanatory features shared across multiple related tasks. Previous MTFS methods perform this task in batch mode, which makes them inefficient when data arrive sequentially or when the training set is too large to be loaded into memory at once. To tackle these problems, we propose the first online learning framework for MTFS. A main advantage of the online algorithms is their efficiency in both time complexity and memory cost, owing to closed-form solutions for updating the model weights at each iteration. Experimental results on a real-world dataset attest to the merits of the proposed algorithms.
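The abstract does not reproduce the updates themselves, so the following is only a minimal sketch of how such a closed-form per-iteration update can look, assuming a dual-averaging-style scheme with an l2,1 (group-sparse) penalty that couples each feature's weights across tasks. The function names, the parameters `lam` and `gamma`, and the squared loss are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

def rda_l21_update(g_bar, t, lam, gamma):
    """Closed-form dual-averaging update under an l2,1 penalty (a sketch).

    g_bar : (d, Q) running average of subgradients; one column per task
    t     : iteration counter, t >= 1
    lam   : l2,1 regularization strength (controls row sparsity)
    gamma : scaling constant of the strongly convex prox term
    """
    # per-feature norm of the averaged gradient across all Q tasks
    row_norms = np.linalg.norm(g_bar, axis=1, keepdims=True)
    # soft-threshold whole rows: features with small averaged gradients
    # are zeroed for every task simultaneously
    shrink = np.maximum(0.0, 1.0 - lam / np.maximum(row_norms, 1e-12))
    return -(np.sqrt(t) / gamma) * shrink * g_bar

def online_step(g_bar, x, y, q, t, lam, gamma, W):
    """One online round: an example (x, y) for task q arrives (illustrative)."""
    grad_col = (x @ W[:, q] - y) * x        # squared-loss gradient for task q's column
    g_bar *= (t - 1) / t                    # g_bar = ((t-1)*g_bar + g_t) / t,
    g_bar[:, q] += grad_col / t             # where g_t is zero outside column q
    return rda_l21_update(g_bar, t, lam, gamma), g_bar
```

Because the update is a row-wise soft threshold on the averaged gradient, each iteration costs O(dQ) time, requires no pass over past data, and sets entire feature rows exactly to zero, which is the source of the efficiency and feature-selection behavior the abstract claims.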
Index Terms
- Online learning for multi-task feature selection