Abstract
Aspect sentiment triple extraction aims to extract all aspects, opinions, and sentiments in a sentence and pair them into triples. The main challenge lies in mining the dependency between an aspect and its corresponding opinion with the specific sentiment. Existing methods capture this dependency via either a pipeline framework or a collapsed sequence labeling model. However, the pipeline framework may suffer from error propagation, while collapsed tags cannot handle complex pairing situations involving overlaps or long-range dependencies. In this paper, we propose a novel semantic-syntax cascade injection model (SSCIM) to address the above issues. SSCIM adopts a cascade framework with a joint training schema, where its lower layer extracts aspects and injects them into the upper layer, which extracts opinions and sentiments simultaneously. This design is inspired by the fact that sentiment is often conveyed in opinions, and the joint training schema effectively alleviates error propagation. Moreover, a novel semantic-syntax information injection gate (IIG) is designed to bridge the upper and lower layers of our model, enabling SSCIM to better capture the dependency between aspects and opinions. Experimental results on four benchmark datasets demonstrate the superior performance of the proposed model over state-of-the-art baselines.
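The abstract describes injecting aspect representations from the lower layer into the upper layer through a gate. The paper's exact IIG formulation is not given here, so the following is only a minimal sketch of one common gated-fusion pattern (a learned sigmoid gate blending a token representation with an aspect representation); the function and weight names are hypothetical, not the authors' actual design.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def injection_gate(h_token, h_aspect, W, b):
    """Hypothetical gated fusion: a sigmoid gate computed from the
    concatenated token and aspect vectors decides, per dimension,
    how much of each representation to pass upward."""
    g = sigmoid(W @ np.concatenate([h_token, h_aspect]) + b)
    return g * h_token + (1.0 - g) * h_aspect

# Toy usage with random vectors in place of real encoder states.
rng = np.random.default_rng(0)
d = 4
h_token = rng.standard_normal(d)
h_aspect = rng.standard_normal(d)
W = rng.standard_normal((d, 2 * d))
b = np.zeros(d)

fused = injection_gate(h_token, h_aspect, W, b)
print(fused.shape)
```

Because the gate lies in (0, 1), the fused vector is an elementwise convex combination of the two inputs, so aspect information is mixed in without discarding the token-level signal.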
Acknowledgments
This paper is funded by the National Natural Science Foundation of China under Grant Nos. 91746301 and 62002347. Huawei Shen is also funded by Beijing Academy of Artificial Intelligence (BAAI) and K.C. Wong Education Foundation.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Ke, W., Gao, J., Shen, H., Cheng, X. (2021). Semantic-Syntax Cascade Injection Model for Aspect Sentiment Triple Extraction. In: Karlapalem, K., et al. Advances in Knowledge Discovery and Data Mining. PAKDD 2021. Lecture Notes in Computer Science(), vol 12713. Springer, Cham. https://doi.org/10.1007/978-3-030-75765-6_59
Print ISBN: 978-3-030-75764-9
Online ISBN: 978-3-030-75765-6