Text multi-label learning method based on label-aware attention and semantic dependency

Published in Multimedia Tools and Applications

Abstract

Text multi-label learning deals with examples that carry multiple labels simultaneously. It applies to many fields, such as text categorization, medical diagnosis recognition and topic recommendation. Existing multi-label learning methods treat each label as an atomic symbol and ignore its semantic information, yet labels are themselves short texts composed of words, and this semantic information can guide the extraction of discriminative text features. To select discriminative features from redundant content, we exploit the label semantics and establish the relationship between labels and texts with an attention mechanism. Modeling the relationships among labels further improves effectiveness, so we capture high-order label dependencies following the principle of graph convolutional networks (GCN). On this basis we propose the LAA_SD method, which combines the enhanced text feature representation with label semantic dependency to perform text multi-label learning. A comparative study with state-of-the-art approaches demonstrates the competitive performance of the proposed model.
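To make the two ingredients of the abstract concrete, the sketch below combines a label-aware attention layer (labels attend over token features to build label-specific text representations) with a one-layer graph convolution over a label graph (to model label dependencies). It is a minimal illustration assuming PyTorch, a generic text encoder that produces token features, and a pre-normalized label adjacency matrix label_adj; all names, shapes and the single-layer GCN are our assumptions, not the paper's actual LAA_SD architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LabelAwareAttentionGCN(nn.Module):
    """Sketch: label-aware attention over token features + a GCN over the label graph."""

    def __init__(self, hidden_dim, num_labels, label_emb_dim, label_adj):
        super().__init__()
        # trainable label embeddings (the "semantic labels")
        self.label_emb = nn.Parameter(torch.randn(num_labels, label_emb_dim))
        # project label embeddings into the text feature space for attention
        self.label_proj = nn.Linear(label_emb_dim, hidden_dim)
        # single graph-convolution layer propagating semantics along label dependencies
        self.gcn = nn.Linear(label_emb_dim, hidden_dim)
        # normalized label adjacency matrix (num_labels x num_labels), assumed given
        self.register_buffer("label_adj", label_adj)

    def forward(self, token_feats):
        # token_feats: (batch, seq_len, hidden_dim) from any text encoder
        queries = self.label_proj(self.label_emb)                         # (L, H)
        scores = torch.einsum("bsh,lh->bls", token_feats, queries)        # (B, L, S)
        attn = F.softmax(scores, dim=-1)                                  # per-label attention over tokens
        label_specific = torch.einsum("bls,bsh->blh", attn, token_feats)  # (B, L, H)

        # GCN step: mix label embeddings according to the label graph
        label_dep = F.relu(self.gcn(self.label_adj @ self.label_emb))     # (L, H)

        # score each label by matching its text view against its graph view
        return (label_specific * label_dep.unsqueeze(0)).sum(-1)          # (B, L) logits
```

For training, the returned logits would typically be fed into a multi-label objective such as nn.BCEWithLogitsLoss, and label_adj would usually be built from label co-occurrence statistics on the training set with symmetric normalization, which is how GCN-style label graphs are commonly constructed.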

Acknowledgments

I would like to express my gratitude to my supervisor, Prof. Liu, who offered scientific suggestions and supervision, critically reviewed the study proposal, and provided the necessary writing assistance. I am also greatly indebted to the postgraduate Hao Ren for his technical editing and corrections, and to Prof. Qian, who showed much consideration for the research and helped with the acquisition of funding. Finally, I would like to thank engineer Wang for his excellent technical assistance and data curation.

Funding

This work was supported in part by Zhejiang NSF Grant No. LZ20F020001 and China NSF Grant No. 61472194, as well as by programs sponsored by the K.C. Wong Magna Fund in Ningbo University.

Author information

Corresponding author

Correspondence to Baisong Liu.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article

Cite this article

Liu, B., Liu, X., Ren, H. et al. Text multi-label learning method based on label-aware attention and semantic dependency. Multimed Tools Appl 81, 7219–7237 (2022). https://doi.org/10.1007/s11042-021-11663-9
