
Multi-level Cohesion Information Modeling for Better Written and Dialogue Discourse Parsing

  • Conference paper
  • Published in: Natural Language Processing and Chinese Computing (NLPCC 2021)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13028)

Abstract

Discourse parsing has attracted increasing attention due to its importance for Natural Language Understanding. Accordingly, various neural models have been proposed and have achieved some success. However, due to the limited scale of available corpora, outstanding performance still depends on additional features. Unlike previous neural studies that employ simple flat word-level EDU (Elementary Discourse Unit) representations, we improve discourse parsing by employing cohesion-information-enhanced EDU representations (in this paper, we regard lexical chains and coreference chains as cohesion information). In particular, we first use WordNet and a coreference resolution model to automatically extract lexical and coreference chains, respectively. Second, we construct an EDU-level graph based on the extracted chains. Finally, using a Graph Attention Network, we incorporate the obtained cohesion information into the EDU representations to improve discourse parsing. Experiments on RST-DT, CDTB and STAC show that, compared with the baseline model we replicated, the proposed cohesion-information-enhanced EDU representations benefit both written and dialogue discourse parsing.
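The pipeline the abstract describes (cohesion chains → EDU-level graph → graph attention over EDU vectors) can be sketched with a single self-contained attention layer. Everything below (dimensions, the tanh nonlinearity, the LeakyReLU slope, the toy adjacency) is an illustrative assumption, not the authors' exact model:

```python
import numpy as np

def gat_layer(H, A, W, a, leaky=0.2):
    """One graph-attention layer over EDU nodes.

    H: (n, d) EDU representations; A: (n, n) adjacency built from
    cohesion chains (1 where two EDUs share a chain element);
    W: (d, d') projection; a: (2*d',) attention vector.
    """
    Z = H @ W                                   # project EDU vectors
    n = Z.shape[0]
    # pairwise attention logits e_ij = LeakyReLU(a^T [z_i || z_j])
    e = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            s = a @ np.concatenate([Z[i], Z[j]])
            e[i, j] = s if s > 0 else leaky * s
    # attend only over cohesion-graph neighbours (plus a self-loop),
    # then normalise each row with a softmax
    mask = (A + np.eye(n)) > 0
    e[~mask] = -1e9
    alpha = np.exp(e - e.max(axis=1, keepdims=True))
    alpha /= alpha.sum(axis=1, keepdims=True)
    return np.tanh(alpha @ Z)                   # cohesion-enhanced EDU states
```

In this sketch an EDU with no chain connections (an all-zero row of `A`) falls back to its own projected vector via the self-loop, so isolated EDUs still receive a well-defined representation.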


Notes

  1. In the dialogue text, each utterance corresponds to an EDU.

  2. The split position between any two neighboring EDUs is called the split point.

  3. There will be \(n-2\) split points for n EDUs.

  4. A word-similarity measure provided by WordNet: it returns a score between 0 and 1, denoting how similar two word senses are, based on the shortest path that connects the senses; when there is no path between the two senses, −1 is returned.

  5. We use the same method to build the lexical and coreference graphs.

  6. To simplify the exposition, the words and mentions in lexical and coreference chains are collectively referred to as elements.

  7. Following previous studies, we used the version released on March 21, 2018.
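The path measure of note 4 and the chain-to-graph step of notes 5 and 6 can be sketched together on a toy sense graph. The nodes, links, and scoring constant below are hypothetical stand-ins, not real WordNet data or the authors' implementation:

```python
from collections import deque

# Toy undirected sense graph standing in for WordNet's hypernym
# links (hypothetical nodes; real use would query WordNet itself).
EDGES = {
    "dog": ["canine"], "canine": ["dog", "carnivore"],
    "carnivore": ["canine", "feline"], "feline": ["carnivore", "cat"],
    "cat": ["feline"], "rock": [],
}

def path_similarity(s1, s2):
    """Score in (0, 1]: 1 / (shortest-path length + 1);
    -1 when no path connects the two senses (as in note 4)."""
    if s1 == s2:
        return 1.0
    seen, frontier = {s1}, deque([(s1, 0)])
    while frontier:                       # breadth-first search
        node, d = frontier.popleft()
        for nxt in EDGES.get(node, []):
            if nxt == s2:
                return 1.0 / (d + 2)      # d+1 edges on the path
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return -1

def edu_graph(edu_elements):
    """EDU-level graph: connect EDUs i, j whose sets of chain
    elements (words or mentions, note 6) overlap."""
    n = len(edu_elements)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if edu_elements[i] & edu_elements[j]]
```

For example, `path_similarity("dog", "canine")` is 0.5 (one edge), `path_similarity("dog", "rock")` is −1 (disconnected), and `edu_graph` links any two EDUs that share at least one chain element.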

References

  1. Afantenos, S., Kow, E., Asher, N., Perret, J.: Discourse parsing for multi-party chat dialogues. Association for Computational Linguistics (ACL) (2015)

  2. Asher, N., Hunter, J., Morey, M., Benamara, F., Afantenos, S.: Discourse structure and dialogue acts in multiparty dialogue: the STAC corpus (2016)

  3. Asher, N., Lascarides, A.: Logics of Conversation. Peking University Press (2003)

  4. Carlson, L., Marcu, D., Okurowski, M.E.: Building a discourse-tagged corpus in the framework of rhetorical structure theory. Association for Computational Linguistics (2001)

  5. Cho, K., et al.: Learning phrase representations using RNN encoder-decoder for statistical machine translation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1724–1734. Association for Computational Linguistics, Doha, Qatar, October 2014. https://doi.org/10.3115/v1/D14-1179. https://www.aclweb.org/anthology/D14-1179

  6. Dozat, T., Manning, C.D.: Deep biaffine attention for neural dependency parsing (2016)

  7. Fang, K., Fu, J.: Incorporating structural information for better coreference resolution. In: Twenty-Eighth International Joint Conference on Artificial Intelligence (IJCAI-19) (2019)

  8. Feng, V.W., Hirst, G.: A linear-time bottom-up discourse parser with constraints and post-editing. In: Meeting of the Association for Computational Linguistics (2014)

  9. Ji, Y., Eisenstein, J.: Representation learning for text-level discourse parsing. In: Meeting of the Association for Computational Linguistics (2014)

  10. Ji, Y., Smith, N.: Neural discourse structure for text categorization. arXiv preprint arXiv:1702.01829 (2017)

  11. Kobayashi, N., Hirao, T., Kamigaito, H., Okumura, M., Nagata, M.: Top-down RST parsing utilizing granularity levels in documents. In: Proceedings of the AAAI Conference on Artificial Intelligence (2020)

  12. Li, J., Li, R., Hovy, E.: Recursive deep models for discourse parsing. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 2061–2069 (2014)

  13. Li, Q., Li, T., Chang, B.: Discourse parsing with attention-based hierarchical neural networks. In: Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pp. 362–371 (2016)

  14. Li, Y., Feng, W., Jing, S., Fang, K., Zhou, G.: Building Chinese discourse corpus with connective-driven dependency tree structure. In: Conference on Empirical Methods in Natural Language Processing (2014)

  15. Morris, J., Hirst, G.: Lexical cohesion computed by thesaural relations as an indicator of the structure of text. Comput. Linguist. (1991)

  16. Nan, Y., Zhang, M., Fu, G.: Transition-based neural RST parsing with implicit syntax features (2018)

  17. Perret, J., Afantenos, S., Asher, N., Morey, M.: Integer linear programming for discourse parsing. In: Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT 2016), pp. 99–109 (2016)

  18. Shi, Z., Huang, M.: A deep sequential model for discourse parsing on multi-party dialogues (2018)

  19. Sun, C., Fang, K.: A transition-based framework for Chinese discourse structure parsing. J. Chin. Inf. Process. (2018)

  20. Takanobu, R., Huang, M., Zhao, Z., Li, F., Nie, L.: A weakly supervised method for topic segmentation and labeling in goal-oriented dialogues via reinforcement learning. In: Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18) (2018)

  21. Veličković, P., Cucurull, G., Casanova, A., Romero, A., Liò, P., Bengio, Y.: Graph attention networks (2017)

  22. Xu, J., Gan, Z., Cheng, Y., Liu, J.: Discourse-aware neural extractive model for text summarization (2019)

  23. Zhang, L., Xing, Y., Kong, F., Li, P., Zhou, G.: A top-down neural architecture towards text-level parsing of discourse rhetorical structure. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics (2020)

  24. Zhang, L., Xing, Y., Kong, F., Li, P., Zhou, G.: A top-down neural architecture towards text-level parsing of discourse rhetorical structure. arXiv preprint arXiv:2005.02680 (2020)

  25. Zhu, X., Ma, R., Sun, L., Chen, H.: Word semantic similarity computation based on HowNet and CiLin. J. Chin. Inf. Process. (2016)


Acknowledgements

The authors would like to thank the anonymous reviewers for their helpful comments. We are very grateful to Zixin Ni for her help with the reference resolution used in this work. This work was supported by Project 61876118 under the National Natural Science Foundation of China and the Priority Academic Program Development of Jiangsu Higher Education Institutions.

Author information

Corresponding author: Fang Kong.

Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, J., Zhang, L., Kong, F. (2021). Multi-level Cohesion Information Modeling for Better Written and Dialogue Discourse Parsing. In: Wang, L., Feng, Y., Hong, Y., He, R. (eds) Natural Language Processing and Chinese Computing. NLPCC 2021. Lecture Notes in Computer Science(), vol 13028. Springer, Cham. https://doi.org/10.1007/978-3-030-88480-2_4

  • DOI: https://doi.org/10.1007/978-3-030-88480-2_4

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-88479-6

  • Online ISBN: 978-3-030-88480-2

  • eBook Packages: Computer Science (R0)
