Abstract
Response generation, a crucial component of a dialogue system, can be modeled using the sequence-to-sequence (Seq2Seq) architecture. However, this kind of method tends to produce vague responses with little meaningful content. One possible cause of vague responses is the mismatch between the distribution of the first word in generated responses and that in human responses. In fact, Seq2Seq-based methods tend to generate high-frequency words at the beginning, which biases the subsequent predictions and results in vague responses. In this paper, we propose a novel approach, learning to start (LTS), which learns how to generate the first word within the sequence-to-sequence architecture for response generation. Experimental results show that the proposed LTS model can improve the performance of the state-of-the-art Seq2Seq model, as well as other Seq2Seq models, on response generation for short-text conversation.
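The first-word bias described above can be illustrated with a toy numeric sketch: when the first decoding step is dominated by corpus frequency, generic tokens win almost regardless of the input post, whereas a dedicated start predictor conditioned on the encoder context can pick a content word. All names, vocabularies, and weights below are hypothetical stand-ins, not the paper's actual model or data.

```python
import numpy as np

# Toy vocabulary (hypothetical): the first two tokens are generic
# high-frequency words, the rest are content words.
VOCAB = ["i", "yes", "sunny", "rain", "movie"]

# Baseline Seq2Seq decoder: first-step logits dominated by corpus
# frequency, so "i" / "yes" win almost regardless of the input.
FREQ_LOGITS = np.array([3.0, 2.5, 0.5, 0.4, 0.3])

def baseline_first_word(context):
    # the encoder context only weakly perturbs the frequency prior
    return VOCAB[int(np.argmax(FREQ_LOGITS + 0.1 * context))]

# LTS-style sketch: a separate start network maps the encoder context
# directly to first-word scores, bypassing the frequency prior.
# (An identity matrix stands in for the learned projection.)
W_START = np.eye(5)

def lts_first_word(context):
    return VOCAB[int(np.argmax(W_START @ context))]

# A post about the weather: context strongly activates "sunny".
context = np.array([0.0, 0.0, 10.0, 0.0, 0.0])
print(baseline_first_word(context))  # frequency bias wins -> "i"
print(lts_first_word(context))       # context-driven    -> "sunny"
```

The sketch only shows why separating the first-word decision from the frequency-dominated first decoding step can change which word starts the response; the paper's LTS model learns this start distribution jointly with the Seq2Seq decoder.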
Notes
All annotators are well-educated students holding a Bachelor's degree or higher.
Copyright information
© 2018 Springer Nature Switzerland AG
Cite this paper
Zhu, Q., Zhang, W., Liu, T. (2018). Learning to Start for Sequence to Sequence Based Response Generation. In: Zhang, S., Liu, T.Y., Li, X., Guo, J., Li, C. (eds.) Information Retrieval. CCIR 2018. Lecture Notes in Computer Science, vol. 11168. Springer, Cham. https://doi.org/10.1007/978-3-030-01012-6_22
DOI: https://doi.org/10.1007/978-3-030-01012-6_22
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-01011-9
Online ISBN: 978-3-030-01012-6
eBook Packages: Computer Science (R0)