
Kernel collaborative online algorithms for multi-task learning

Published in: Annals of Mathematics and Artificial Intelligence

Abstract

In many real-time applications, we have to deal with classification, regression, or clustering problems that involve multiple tasks. Conventional machine learning approaches solve such tasks independently, ignoring task relatedness. In multi-task learning (MTL), related tasks are learned simultaneously by extracting and exploiting the information shared across tasks. Learning related tasks together effectively increases the sample size for each task and improves generalization performance; MTL is therefore especially beneficial when the training set for each task is small. This paper describes multi-task learning using a kernel online learning approach. Since many real-world applications are online in nature, efficient online learning techniques are much needed, and because online learning processes one data point at a time, such techniques can be applied effectively to large data sets. The MTL model we developed involves a global function and a task-specific function corresponding to each task. The cost function used to find each task-specific function incorporates the global model, and through it the necessary information from the other tasks; this modeling strategy improves the generalization capacity of the model. Finding the global and task-specific functions is formulated as two separate problems: on the arrival of each new data point, the global vector is solved first, and its information is then used to update the task-specific vector. The update rule for the task-specific function approximates the global components in terms of task-specific components by means of projection. We applied the developed framework to real-world problems and the results were found to be promising.
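The two-step update described in the abstract, where the global function is solved first on each arriving example and its output is then used to update the task-specific function, can be sketched in minimal form. The sketch below is a hypothetical illustration under simplifying assumptions, not the paper's exact algorithm: it assumes an RBF kernel and simple gradient-style coefficient updates, the class name `OnlineKernelMTL` and its parameters are made up, and a shrinkage term on the task-specific deviation stands in for the projection-based approximation step.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two points."""
    return np.exp(-gamma * np.sum((np.asarray(x) - np.asarray(y)) ** 2))

class OnlineKernelMTL:
    """Illustrative online kernel multi-task learner: one shared global
    function plus one task-specific function per task, each stored as a
    kernel expansion over the data seen so far."""

    def __init__(self, n_tasks, eta=0.1, lam=0.1, gamma=1.0):
        self.eta = eta        # learning rate
        self.lam = lam        # shrinkage of the task-specific deviation
        self.gamma = gamma
        self.X = []                                    # stored inputs
        self.alpha_g = []                              # global coefficients
        self.alpha_t = [[] for _ in range(n_tasks)]    # per-task coefficients

    def _expand(self, alphas, x):
        return sum(a * rbf(xi, x, self.gamma) for a, xi in zip(alphas, self.X))

    def predict(self, task, x):
        # Prediction combines the global and the task-specific parts.
        return self._expand(self.alpha_g, x) + self._expand(self.alpha_t[task], x)

    def update(self, task, x, y):
        # Step 1: update the global function from its own prediction error.
        f_g = self._expand(self.alpha_g, x)
        self.X.append(np.asarray(x, dtype=float))
        self.alpha_g.append(self.eta * (y - f_g))
        for a in self.alpha_t:
            a.append(0.0)  # keep every expansion aligned with self.X
        # Step 2: update the task-specific function using the freshly
        # updated global prediction, shrinking the deviation toward it.
        f_g = self._expand(self.alpha_g, x)
        f_t = self._expand(self.alpha_t[task], x)
        self.alpha_t[task][-1] = self.eta * ((y - f_g - f_t) - self.lam * f_t)
```

With repeated examples from one task, the global expansion absorbs most of the signal while the task-specific expansion models only the residual deviation, so tasks with little data still benefit from the shared component.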



Author information


Corresponding author

Correspondence to S. Sumitra.

Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Aravindh, A., Shiju, S.S. & Sumitra, S. Kernel collaborative online algorithms for multi-task learning. Ann Math Artif Intell 86, 269–286 (2019). https://doi.org/10.1007/s10472-019-09650-w

