
Lazy Multi-label Learning Algorithms Based on Mutuality Strategies

Published in: Journal of Intelligent & Robotic Systems

Abstract

Lazy multi-label learning algorithms have become an important research topic within the multi-label community. These algorithms usually consider the set of standard k-Nearest Neighbors of a new instance to predict its labels (multi-label). The prediction is made by applying a voting criterion to the multi-labels of the k-Nearest Neighbors of the new instance. This work proposes two alternative strategies to identify this set of examples: the Mutual and Not Mutual Nearest Neighbors rules, which have already been used by lazy single-label learning algorithms. In this work, we use these strategies to extend the lazy multi-label algorithm BRkNN. An experimental evaluation comparing both mutuality strategies with the original BRkNN algorithm and the well-known MLkNN lazy algorithm on 15 benchmark datasets showed that MLkNN achieved the best predictive performance on the Hamming Loss evaluation measure, although it was significantly outperformed by the mutuality strategies when F-Measure is considered. The best results of the lazy algorithms were also compared with the results obtained by the Binary Relevance approach using three different base learning algorithms.
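The abstract only sketches the idea; as an illustration (not the authors' implementation), the following minimal Python sketch shows how a mutual k-nearest-neighbor filter might be combined with BRkNN-style per-label majority voting. The function names, the mutuality test, and the 0.5 voting threshold are assumptions made for this example.

```python
import numpy as np

def mutual_nearest_neighbors(X_train, x_query, k):
    """Illustrative sketch: keep only neighbors of the query that would
    also count the query among their own k nearest neighbors."""
    dists = np.linalg.norm(X_train - x_query, axis=1)
    knn_of_query = np.argsort(dists)[:k]

    mutual = []
    for i in knn_of_query:
        # distances from training example x_i to all other training examples
        d_i = np.linalg.norm(X_train - X_train[i], axis=1)
        d_i[i] = np.inf  # exclude x_i itself
        # mutual if the query is no farther than x_i's k-th nearest training example
        if dists[i] <= np.sort(d_i)[k - 1]:
            mutual.append(i)
    return np.array(mutual, dtype=int)

def brknn_vote(Y_train, neighbor_idx):
    """BRkNN-style prediction: independent majority vote per label."""
    if len(neighbor_idx) == 0:
        # empty mutual neighborhood: predict no labels (one possible fallback)
        return np.zeros(Y_train.shape[1], dtype=int)
    votes = Y_train[neighbor_idx].mean(axis=0)
    return (votes >= 0.5).astype(int)
```

A "Not Mutual" variant would instead keep the neighbors that fail the mutuality test; either filter simply replaces the standard k-NN set before the per-label vote.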



Author information


Corresponding author

Correspondence to Everton Alvares Cherman.


About this article


Cite this article

Cherman, E.A., Spolaôr, N., Valverde-Rebaza, J. et al. Lazy Multi-label Learning Algorithms Based on Mutuality Strategies. J Intell Robot Syst 80 (Suppl 1), 261–276 (2015). https://doi.org/10.1007/s10846-014-0144-4

DOI: https://doi.org/10.1007/s10846-014-0144-4
