CIKM '10 Conference Proceedings · Poster
DOI: 10.1145/1871437.1871706

Online learning for multi-task feature selection

Published: 26 October 2010

ABSTRACT

Multi-task feature selection (MTFS) is an important tool for learning the explanatory features shared across multiple related tasks. Previous MTFS methods perform this selection in batch-mode training, which makes them inefficient when data arrive sequentially or when the training set is too large to fit in memory at once. To tackle these problems, we propose the first online learning framework for MTFS. A main advantage of the online algorithms is their efficiency in both time complexity and memory cost, owing to the closed-form solutions for updating the model weights at each iteration. Experimental results on a real-world dataset attest to the merits of the proposed algorithms.
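
The closed-form weight updates mentioned in the abstract can be made concrete with a small sketch. The snippet below is a minimal, hypothetical illustration, not the paper's actual algorithms: it assumes a FOBOS-style online step in which a subgradient step on the incoming example's hinge loss is followed by a closed-form l2,1 proximal step (row-wise soft-thresholding of the weight matrix), so that features are kept or discarded jointly across all tasks. All function names, losses, and step sizes are illustrative choices.

    import numpy as np

    def online_mtfs_step(W, x, y, task, eta=0.1, lam=0.1):
        """One hypothetical online update for multi-task feature selection.

        W    : (d, T) weight matrix, one column per task.
        x    : (d,) feature vector of the incoming example.
        y    : +1 or -1 label.
        task : index of the task the example belongs to.
        """
        # Subgradient step on the hinge loss for the observed task only.
        if y * (W[:, task] @ x) < 1.0:
            W[:, task] += eta * y * x

        # Closed-form l2,1 proximal step: shrink each feature's row across
        # all tasks; rows driven exactly to zero correspond to features
        # discarded jointly by every task.
        row_norms = np.linalg.norm(W, axis=1, keepdims=True)
        W *= np.maximum(0.0, 1.0 - eta * lam / np.maximum(row_norms, 1e-12))
        return W

    # Toy stream: 5 features, 3 tasks; labels depend only on the first two features.
    rng = np.random.default_rng(0)
    W = np.zeros((5, 3))
    for _ in range(200):
        t = int(rng.integers(3))
        x = rng.normal(size=5)
        y = 1.0 if x[:2].sum() > 0 else -1.0
        W = online_mtfs_step(W, x, y, t)
    print("non-zero feature rows:", np.flatnonzero(np.linalg.norm(W, axis=1) > 1e-8))

Each such iteration touches only the d x T weight matrix once, which is what makes this style of closed-form update attractive for streaming data in terms of both time and memory.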

References

  1. A. Argyriou, T. Evgeniou, and M. Pontil. Convex multi-task feature learning. Machine Learning, 73(3):243--272, 2008. Google ScholarGoogle ScholarDigital LibraryDigital Library
  2. B. Bakker and T. Heskes. Task clustering and gating for bayesian multitask learning. Journal of Machine Learning Research, 4:83--99, 2003. Google ScholarGoogle ScholarDigital LibraryDigital Library
  3. S. Balakrishnan and D. Madigan. Algorithms for sparse linear classifiers in the massive data setting. Journal of Machine Learning Research, 9:313--337, 2008. Google ScholarGoogle ScholarDigital LibraryDigital Library
  4. S. Ben-David and R. Schuller. Exploiting task relatedness for mulitple task learning. In COLT, pages 567--580, 2003.Google ScholarGoogle Scholar
  5. O. Dekel, P. M. Long, and Y. Singer. Online multitask learning. In COLT, pages 453--467, 2006. Google ScholarGoogle ScholarDigital LibraryDigital Library
  6. J. Duchi and Y. Singer. Efficient learning using forward-backward splitting. Journal of Machine Learning Research, 10:2873--2898, 2009. Google ScholarGoogle ScholarDigital LibraryDigital Library
  7. T. Evgeniou and M. Pontil. Regularized multi-task learning. In KDD, pages 109--117, 2004. Google ScholarGoogle ScholarDigital LibraryDigital Library
  8. J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. Journal of Machine Learning Research, 10:777--801, 2009. Google ScholarGoogle ScholarDigital LibraryDigital Library
  9. J. Liu, S. Ji, and J. Ye. Multi-task feature learning via efficient l2;1 norm minimization. In UAI, 2009. Google ScholarGoogle ScholarDigital LibraryDigital Library
  10. Y. Nesterov. Primal-dual subgradient methods for convex problems. Mathematical Programming, 120(1):221--259, 2009. Google ScholarGoogle ScholarDigital LibraryDigital Library
  11. G. Obozinski, B. Taskar, and M. I. Jordan. Joint covariate selection and joint subspace selection for multiple classification problems. Statistics and Computing, 2009. Google ScholarGoogle ScholarDigital LibraryDigital Library
  12. R. Tibshirani. Regression shrinkage and selection via the lasso. J. Roy. Statist. Soc. Ser. B, 58(1):267--288, 1996.Google ScholarGoogle ScholarCross RefCross Ref
  13. L. Xiao. Dual averaging method for regularized stochastic learning and online optimization. Technical Report MSR-TR-2009-100, Microsoft Research, 2009.Google ScholarGoogle Scholar
  14. H. Yang, I. King, and M. R. Lyu. Multi-task learning for one-class classification. In IJCNN, Barcelona, Spain, 2010.Google ScholarGoogle ScholarCross RefCross Ref
  15. H. Yang, Z. Xu, I. King, and M. R. Lyu. Online learning for group lasso. In ICML, Haifa, Israel, 2010.Google ScholarGoogle Scholar

Published in

CIKM '10: Proceedings of the 19th ACM International Conference on Information and Knowledge Management
October 2010
2036 pages
ISBN: 9781450300995
DOI: 10.1145/1871437

Copyright © 2010 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery
New York, NY, United States


Acceptance Rates

Overall Acceptance Rate: 1,861 of 8,427 submissions, 22%
