Using Multi-level Attention Based on Concept Embedding Enrichen Short Text to Classification

  • Conference paper
Intelligent Information Processing XI (IIP 2022)

Part of the book series: IFIP Advances in Information and Communication Technology (IFIPAICT, volume 643)

Abstract

Short texts lack contextual information and describe their topics only weakly. To address these defects, this paper proposes an attention-network-based solution for enriching the topic information of short texts, which leverages both the text itself and concept embeddings to represent a short text. Specifically, a short text encoder enhances the representation of short texts in the semantic space, while a concept encoder obtains a distributed representation of the concepts through an attention network composed of C-ST attention and C-CS attention. Finally, concatenating the outputs of the two encoders yields a longer target representation of the short text. Experimental results on two benchmark datasets show that our model achieves promising performance and significantly outperforms baseline methods.
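
The page gives only this high-level description of the model, so the PyTorch sketch below is a hypothetical reconstruction, not the authors' code: the BiLSTM text encoder, the exact scoring forms of the C-ST (concept towards short text) and C-CS (concept towards concept set) attention, the additive combination of the two channels, and all dimensions are assumptions, and the concept embeddings are presumed to be retrieved from an external knowledge source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptAttentionClassifier(nn.Module):
    """Hypothetical sketch: short-text encoder plus concept encoder with
    C-ST / C-CS attention, concatenated into one longer representation."""

    def __init__(self, embed_dim: int, hidden_dim: int, num_classes: int):
        super().__init__()
        # Short text encoder: a BiLSTM over word embeddings (assumed form).
        self.text_encoder = nn.LSTM(embed_dim, hidden_dim,
                                    batch_first=True, bidirectional=True)
        # Scoring layers for the two attention channels (assumed forms).
        self.c_st = nn.Linear(embed_dim + 2 * hidden_dim, 1)  # concept vs. short text
        self.c_cs = nn.Linear(embed_dim, 1)                   # concept vs. concept set
        self.classifier = nn.Linear(2 * hidden_dim + embed_dim, num_classes)

    def forward(self, word_embs, concept_embs):
        # word_embs:    (batch, seq_len, embed_dim)    pre-trained word vectors
        # concept_embs: (batch, n_concepts, embed_dim) retrieved concept vectors
        out, _ = self.text_encoder(word_embs)
        text_vec = out.mean(dim=1)                  # (batch, 2*hidden_dim)

        # C-ST attention: score each concept against the short-text vector.
        expanded = text_vec.unsqueeze(1).expand(-1, concept_embs.size(1), -1)
        a_st = self.c_st(torch.cat([concept_embs, expanded], dim=-1)).squeeze(-1)

        # C-CS attention: score each concept within the concept set itself.
        a_cs = self.c_cs(concept_embs).squeeze(-1)

        # Combine the two channels and normalize (a simple sum is an assumption).
        alpha = F.softmax(a_st + a_cs, dim=-1)      # (batch, n_concepts)
        concept_vec = torch.bmm(alpha.unsqueeze(1), concept_embs).squeeze(1)

        # Concatenate text and concept representations for classification.
        return self.classifier(torch.cat([text_vec, concept_vec], dim=-1))

# Example shapes (all hypothetical): batch of 4 texts, 20 words,
# 8 candidate concepts, 300-d embeddings, 5 classes.
model = ConceptAttentionClassifier(embed_dim=300, hidden_dim=128, num_classes=5)
logits = model(torch.randn(4, 20, 300), torch.randn(4, 8, 300))  # -> (4, 5)
```

Summing the two attention scores before the softmax is one plausible reading; the paper may instead weight the C-ST and C-CS channels separately, for example with a learned trade-off parameter.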

Acknowledgements

This work was supported in part by the National Natural Science Foundation of China (Nos. 61762078 and 61967013), the University Innovation and Entrepreneurship Fund Project (2020B-089), the Provincial Science and Technology Program (20JR5RA518), and the Provincial Natural Science Foundation (20JR10RA076).

Author information

Corresponding author

Correspondence to XiaoHong Li.

Copyright information

© 2022 IFIP International Federation for Information Processing

About this paper

Cite this paper

You, B., Li, X., Peng, Q., Li, R. (2022). Using Multi-level Attention Based on Concept Embedding Enrichen Short Text to Classification. In: Shi, Z., Zucker, JD., An, B. (eds) Intelligent Information Processing XI. IIP 2022. IFIP Advances in Information and Communication Technology, vol 643. Springer, Cham. https://doi.org/10.1007/978-3-031-03948-5_13

  • DOI: https://doi.org/10.1007/978-3-031-03948-5_13

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-03947-8

  • Online ISBN: 978-3-031-03948-5

  • eBook Packages: Computer Science; Computer Science (R0)
