DeepDial: Passage Completion on Dialogs

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 11641)

Abstract

Many neural models have been built to carry out reading comprehension tasks. However, these models mainly focus on formal passages such as news articles and book stories. Although dialog is a central part of daily life, machine reading comprehension on dialogs (i.e., passage completion on dialogs) has not been sufficiently explored. Existing models show weaknesses when comprehending dialogs: they cannot capture global information over long distances and local detailed information at the same time. This paper introduces a neural network model, DeepDial, that addresses these problems. The model exploits both word-level and utterance-level information in a dialog and achieves state-of-the-art performance on the benchmark dataset constructed from the TV series Friends.
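The two levels of information mentioned in the abstract can be illustrated with a minimal sketch: encode each utterance from its words (word level), then treat the sequence of utterance vectors as the dialog representation (utterance level). This is an illustrative assumption, not the authors' actual DeepDial architecture; the function names, the random stand-in embeddings, and the mean-pooling choice are all hypothetical.

```python
import random

EMB_DIM = 8  # illustrative embedding size

def embed_words(utterance, rnd):
    # One random vector per token (a stand-in for pretrained word embeddings).
    return [[rnd.random() for _ in range(EMB_DIM)] for _ in utterance.split()]

def encode_utterance(word_vecs):
    # Word level: mean-pool the token vectors into a single utterance vector.
    n = len(word_vecs)
    return [sum(v[i] for v in word_vecs) / n for i in range(EMB_DIM)]

def encode_dialog(utterances, seed=0):
    # Utterance level: the dialog is a sequence of utterance vectors,
    # which a downstream model could attend over.
    rnd = random.Random(seed)
    return [encode_utterance(embed_words(u, rnd)) for u in utterances]

dialog = ["Joey : how you doin ?", "Rachel : not now , Joey ."]
reps = encode_dialog(dialog)
print(len(reps), len(reps[0]))  # 2 8
```

A real model would replace the random embeddings with pretrained vectors and the mean-pooling with learned encoders (e.g., recurrent or attention layers) at both levels.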

N. Hu and J. Zhou contributed equally.



Acknowledgements

This work was supported by the National Natural Science Foundation of China (61772036) and the Key Laboratory of Science, Technology and Standard in Press Industry (Key Laboratory of Intelligent Press Media Technology). We thank the anonymous reviewers for their helpful comments.

Author information

Corresponding author

Correspondence to Xiaojun Wan.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Hu, N., Zhou, J., Wan, X. (2019). DeepDial: Passage Completion on Dialogs. In: Shao, J., Yiu, M., Toyoda, M., Zhang, D., Wang, W., Cui, B. (eds) Web and Big Data. APWeb-WAIM 2019. Lecture Notes in Computer Science, vol 11641. Springer, Cham. https://doi.org/10.1007/978-3-030-26072-9_10

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-26072-9_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-26071-2

  • Online ISBN: 978-3-030-26072-9

  • eBook Packages: Computer Science, Computer Science (R0)
