
Multi-label Aspect Classification on Question-Answering Text with Contextualized Attention-Based Neural Network

Conference paper

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 11856))

Abstract

On e-commerce websites such as Taobao and Amazon, interactive question-answering (QA) style reviews usually carry rich aspect information about products. To automatically analyze the aspect information inside QA-style reviews, it is worthwhile to perform aspect classification on them. Until now, however, few studies have focused on aspect classification for QA-style reviews. For short, we refer to this novel task as QA aspect classification (QA-AC). In this study, we model the task as a multi-label classification problem in which each QA-style review is explicitly mapped to multiple aspect categories instead of only one. To address this task, we propose a contextualized attention-based neural network that captures both the contextual information and the QA matching information inside QA-style reviews. Specifically, we first propose two aggregating strategies that integrate the multi-layer contextualized word embeddings of a pre-trained language representation model (i.e., BERT) to capture contextual information. Second, we propose a bidirectional attention layer to capture the QA matching information. Experimental results demonstrate the effectiveness of our approach to QA-AC.
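The two components named in the abstract can be sketched as follows. This is a minimal, framework-free illustration, not the authors' implementation: `aggregate_layers` shows one plausible aggregating strategy (a softmax-weighted sum over per-layer BERT embeddings of a single token), and `bidirectional_attention` shows a dot-product attention computed in both directions (question-to-answer and answer-to-question) from a shared similarity matrix. All function names, the choice of dot-product similarity, and the weighting scheme are assumptions for illustration.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scalars."""
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def aggregate_layers(layer_vecs, weights):
    """One possible aggregating strategy (an assumption, not the paper's exact one):
    a softmax-weighted sum of a token's embeddings across BERT layers.

    layer_vecs: L vectors (one per layer), each of dimension d.
    weights:    L scalars (learnable in a real model; fixed here).
    """
    w = softmax(weights)
    dim = len(layer_vecs[0])
    return [sum(w[l] * layer_vecs[l][i] for l in range(len(layer_vecs)))
            for i in range(dim)]

def bidirectional_attention(q_vecs, a_vecs):
    """Bidirectional attention over question tokens (q_vecs) and answer
    tokens (a_vecs), using dot-product similarity for illustration."""
    # Similarity matrix: sim[i][j] = q_i . a_j
    sim = [[sum(qi * ai for qi, ai in zip(q, a)) for a in a_vecs]
           for q in q_vecs]
    # Q -> A: each question token attends over all answer tokens.
    q2a = [[sum(p * a_vecs[j][i] for j, p in enumerate(softmax(row)))
            for i in range(len(a_vecs[0]))] for row in sim]
    # A -> Q: each answer token attends over all question tokens.
    sim_t = list(map(list, zip(*sim)))
    a2q = [[sum(p * q_vecs[j][i] for j, p in enumerate(softmax(col)))
            for i in range(len(q_vecs[0]))] for col in sim_t]
    return q2a, a2q
```

In a full model, the aggregated token embeddings would feed the attention layer, and the attended question/answer representations would be pooled and passed to per-aspect sigmoid outputs for multi-label prediction.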



Acknowledgements

This work is supported in part by the Industrial Prospective Project of the Jiangsu Technology Department under Grant No. BE2017081 and by the National Natural Science Foundation of China under Grant No. 61572129.

Author information


Corresponding author

Correspondence to Hanqian Wu.



Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Wu, H., Zhang, S., Wang, J., Liu, M., Li, S. (2019). Multi-label Aspect Classification on Question-Answering Text with Contextualized Attention-Based Neural Network. In: Sun, M., Huang, X., Ji, H., Liu, Z., Liu, Y. (eds) Chinese Computational Linguistics. CCL 2019. Lecture Notes in Computer Science(), vol 11856. Springer, Cham. https://doi.org/10.1007/978-3-030-32381-3_39

  • DOI: https://doi.org/10.1007/978-3-030-32381-3_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-32380-6

  • Online ISBN: 978-3-030-32381-3

  • eBook Packages: Computer Science (R0)
