Abstract
Open-domain dialogue systems have achieved great success thanks to easily obtainable single-turn corpora and the development of deep learning, but the multi-turn scenario remains a challenge because of frequent coreference and information omission. In this paper, we aim to quickly retrieve the omitted or coreferred expressions contained in the dialogue history and restore them into the incomplete utterance. Jointly inspired by generative methods for text generation and extractive methods for span extraction, we propose a fusion extractive-generative dialogue ellipsis and coreference integrated resolution model (FEGI). In detail, we introduce two training tasks, OMIT and SPAN, to extract missing semantic expressions, and then integrate the obtained expressions into the decoding initialization and copy stages of the generative model, respectively. To support these training tasks, we introduce an algorithm for secondary reconstruction annotation based on existing publicly available corpora via unsupervised techniques, which works even when the missing semantic expressions are not annotated. Moreover, we conduct dozens of joint learning experiments on the CamRest676 and RiSAWOZ datasets. Experimental results show that our proposed model significantly outperforms state-of-the-art models in terms of quality.
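The copy stage mentioned in the abstract builds on the standard pointer/copy mixture of Gu et al. (2016): at each decoding step, the model interpolates a vocabulary distribution with attention mass placed on tokens of the dialogue history. The following toy sketch illustrates that mixture only; it is our own illustration with made-up numbers, not the authors' implementation, and the helper name `mix_copy_distribution` is hypothetical.

```python
# Toy sketch of the copy mechanism (Gu et al., 2016) that a copy stage
# builds on -- an illustration, not the FEGI implementation.

def mix_copy_distribution(p_vocab, history_tokens, attn, p_gen):
    """p(w) = p_gen * p_vocab(w) + (1 - p_gen) * (attention mass on
    occurrences of w in the dialogue history)."""
    out = {w: p_gen * p for w, p in p_vocab.items()}
    for tok, a in zip(history_tokens, attn):
        out[tok] = out.get(tok, 0.0) + (1.0 - p_gen) * a
    return out

# Illustrative numbers only: attention over a 5-token history, tiny vocab.
history = ["book", "a", "table", "for", "two"]
attn = [0.10, 0.05, 0.60, 0.05, 0.20]        # attention weights, sum to 1
p_vocab = {"table": 0.3, "restaurant": 0.7}  # generator's vocab distribution
p_gen = 0.4                                  # generate-vs-copy probability

dist = mix_copy_distribution(p_vocab, history, attn, p_gen)
# "table" accumulates mass from both paths: 0.4*0.3 + 0.6*0.6 = 0.48
```

Extracted spans (from the SPAN task) can then be favoured simply by concentrating copy probability on their tokens during decoding.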
Notes
- 1. Because FEGI-G does not actually fine-tune on the SPAN task in the second stage, FEGI-G-BERT is equivalent to FEGI-G.
References
Bahdanau, D., Cho, K., Bengio, Y.: Neural machine translation by jointly learning to align and translate. In: Bengio, Y., LeCun, Y. (eds.) 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7–9, 2015, Conference Track Proceedings, pp. 1–15 (2015)
Devlin, J., Chang, M.W., Lee, K., Toutanova, K.: BERT: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805 (2018)
Elgohary, A., Peskov, D., Boyd-Graber, J.L.: Can you unpack that? Learning to rewrite questions-in-context. In: Inui, K., Jiang, J., Ng, V., Wan, X. (eds.) Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3–7, 2019, pp. 5917–5923. Association for Computational Linguistics (2019)
Grave, E., Bojanowski, P., Gupta, P., Joulin, A., Mikolov, T.: Learning word vectors for 157 languages. In: Calzolari, N., et al. (eds.) Proceedings of the Eleventh International Conference on Language Resources and Evaluation, LREC 2018, Miyazaki, Japan, May 7–12, 2018, pp. 1–5. European Language Resources Association (ELRA) (2018)
Gu, J., Lu, Z., Li, H., Li, V.O.K.: Incorporating copying mechanism in sequence-to-sequence learning. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, August 7–12, 2016, Berlin, Germany, Volume 1: Long Papers, pp. 1631–1640. The Association for Computer Linguistics (2016)
Hao, J., Song, L., Wang, L., Xu, K., Tu, Z., Yu, D.: Robust dialogue utterance rewriting as sequence tagging. CoRR abs/2012.14535, 1–11 (2020)
Huang, M., Zhu, X., Gao, J.: Challenges in building intelligent open-domain dialog systems. ACM Trans. Inf. Syst. (TOIS) 38, 1–32 (2019)
Li, P.: An empirical investigation of pre-trained transformer language models for open-domain dialogue generation. ArXiv abs/2003.04195 (2020)
Li, Q., Kong, F.: Transition-based mention representation for neural coreference resolution. In: Huang, D.S., Premaratne, P., Jin, B., Qu, B., Jo, K.H., Hussain, A. (eds.) Advanced Intelligent Computing Technology and Applications, pp. 563–574. Springer, Singapore (2023). https://doi.org/10.1007/978-981-99-4752-2_46
Ni, Z., Kong, F.: Enhancing long-distance dialogue history modeling for better dialogue ellipsis and coreference resolution. In: Wang, L., Feng, Y., Hong, Yu., He, R. (eds.) NLPCC 2021. LNCS (LNAI), vol. 13028, pp. 480–492. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88480-2_38
Pan, Z.F., Bai, K., Wang, Y., Zhou, L., Liu, X.: Improving open-domain dialogue systems via multi-turn incomplete utterance restoration. In: Inui, K., Jiang, J., Ng, V., Wan, X. (eds.) Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3–7, 2019, pp. 1824–1833. Association for Computational Linguistics (2019)
Papineni, K., Roukos, S., Ward, T., Zhu, W.: BLEU: a method for automatic evaluation of machine translation. In: Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, July 6–12, 2002, Philadelphia, PA, USA, pp. 311–318. ACL (2002)
Pennington, J., Socher, R., Manning, C.D.: GloVe: global vectors for word representation. In: Moschitti, A., Pang, B., Daelemans, W. (eds.) Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, EMNLP 2014, October 25–29, 2014, Doha, Qatar, A meeting of SIGDAT, a Special Interest Group of the ACL, pp. 1532–1543. ACL (2014)
Quan, J., Xiong, D., Webber, B., Hu, C.: GECOR: an end-to-end generative ellipsis and co-reference resolution model for task-oriented dialogue. In: Inui, K., Jiang, J., Ng, V., Wan, X. (eds.) Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing, EMNLP-IJCNLP 2019, Hong Kong, China, November 3–7, 2019, pp. 4546–4556. Association for Computational Linguistics (2019)
Quan, J., Zhang, S., Cao, Q., Li, Z., Xiong, D.: RiSAWOZ: a large-scale multi-domain Wizard-of-Oz dataset with rich semantic annotations for task-oriented dialogue modeling. In: Webber, B., Cohn, T., He, Y., Liu, Y. (eds.) Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP 2020, Online, November 16–20, 2020, pp. 930–940. Association for Computational Linguistics (2020)
Su, H., et al.: Improving multi-turn dialogue modelling with utterance rewriter. In: Korhonen, A., Traum, D.R., Màrquez, L. (eds.) Proceedings of the 57th Conference of the Association for Computational Linguistics, ACL 2019, Florence, Italy, July 28–August 2, 2019, Volume 1: Long Papers, pp. 22–31. Association for Computational Linguistics (2019)
Acknowledgments
This work was supported by Project 62276178 of the National Natural Science Foundation of China, the National Key R&D Program of China under Grant No. 2020AAA0108600, and the Priority Academic Program Development of Jiangsu Higher Education Institutions.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this paper
Li, Q., Kong, F. (2024). FEGI: A Fusion Extractive-Generative Model for Dialogue Ellipsis and Coreference Integrated Resolution. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Communications in Computer and Information Science, vol 1967. Springer, Singapore. https://doi.org/10.1007/978-981-99-8178-6_37
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8177-9
Online ISBN: 978-981-99-8178-6
eBook Packages: Computer Science (R0)