Abstract
This paper presents an overview of the NLPCC 2023 shared task on user feedback prediction and response generation. We focus on how feedback data in the form of user likes and dislikes can be used to guide conversational response generation. The goal of the task is to predict user preferences accurately and to improve response quality so as to increase user likes. Participants need to integrate preference information into their models to generate responses that align with user needs. In this paper, we summarize the key components of the task, including the task description, dataset, evaluation metrics, participant methods, and final results. We also highlight potential applications of incorporating like and dislike data in conversation generation.
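To make the setup concrete, one simple way to integrate predicted preference information into generation is to rerank candidate responses by blending a fluency score with a predicted like probability. The sketch below is purely illustrative and is not the shared task's official baseline; the scoring fields (`fluency`, `p_like`) and the blending weight `alpha` are assumptions made for this example.

```python
# Hypothetical sketch: rerank candidate responses by combining a fluency
# score with a predicted probability that the user will "like" the reply.
# All scores here are made-up illustrative values, not task data.

def rerank(candidates, alpha=0.5):
    """Sort candidates by alpha * fluency + (1 - alpha) * predicted like probability."""
    return sorted(
        candidates,
        key=lambda c: alpha * c["fluency"] + (1 - alpha) * c["p_like"],
        reverse=True,
    )

candidates = [
    {"text": "Sure, here you go.", "fluency": 0.9, "p_like": 0.3},
    {"text": "Great question! Let me explain step by step.", "fluency": 0.7, "p_like": 0.8},
]

best = rerank(candidates)[0]
print(best["text"])  # the candidate with the higher blended score wins
```

With equal weighting, the second candidate's higher predicted like probability outweighs its slightly lower fluency, illustrating how feedback signals can shift the final response choice.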
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Teng, H., et al.: Overview of the NLPCC 2023 Shared Task 9: User Feedback Prediction and Response Generation. In: Liu, F., Duan, N., Xu, Q., Hong, Y. (eds.) Natural Language Processing and Chinese Computing. NLPCC 2023. Lecture Notes in Computer Science, vol. 14304. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-44699-3_35
Print ISBN: 978-3-031-44698-6
Online ISBN: 978-3-031-44699-3
eBook Packages: Computer Science; Computer Science (R0)