
BERT-CoQAC: BERT-Based Conversational Question Answering in Context

Conference paper, published in Parallel Architectures, Algorithms and Programming (PAAP 2020)

Abstract

Question answering dialog systems, a promising way to seek information through conversation with a bot, have attracted increasing research interest recently. Designing interactive QA systems has always been a challenging task in natural language processing, and such systems serve as a benchmark for evaluating a machine's ability to understand natural language. However, these systems often struggle when question answering is carried out over multiple turns, with users seeking further information based on what they have already learned; this gives rise to a more complicated task called Conversational Question Answering (CQA). CQA systems are often criticized for failing to understand or exploit the preceding conversational context when answering questions. To address this research gap, in this paper we explore how to integrate conversational history into a neural machine comprehension system. On one hand, we introduce a framework, built on the publicly available pre-trained language model BERT, for incorporating history turns into the system. On the other hand, we propose a history selection mechanism that selects the turns that are most relevant and contribute most to answering the current question. Experimental results show that our framework is comparable in performance with the state-of-the-art models on the QuAC (http://quac.ai/) leaderboard. We also conduct a number of experiments to show the side effects of using the entire context, which introduces unnecessary information and noise signals and degrades the model's performance.
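The abstract describes two components: a history selection step that keeps only the turns relevant to the current question, and a BERT input that incorporates those selected turns. The paper's own implementation details are not shown on this page; the sketch below is purely illustrative, using a simple lexical-overlap heuristic for relevance (an assumption, not the authors' actual mechanism) and a BERT-style `[CLS] … [SEP] … [SEP]` input layout. The function names and the overlap scoring are hypothetical.

```python
def select_history_turns(question, history, k=2):
    """Rank previous (question, answer) turns by word overlap with the
    current question and keep the k most relevant ones, preserving
    their original conversational order. A toy stand-in for a learned
    relevance model."""
    q_words = set(question.lower().split())
    scored = []
    for idx, (hq, ha) in enumerate(history):
        turn_words = set((hq + " " + ha).lower().split())
        scored.append((len(q_words & turn_words), idx))
    # Pick the k highest-overlap turns, then restore chronological order.
    top = sorted(sorted(scored, reverse=True)[:k], key=lambda t: t[1])
    return [history[idx] for _, idx in top]

def build_bert_input(question, history, passage, k=2):
    """Assemble a BERT-style input string: selected history turns plus
    the current question in the first segment, the passage in the second."""
    turns = select_history_turns(question, history, k)
    history_text = " ".join(f"{hq} {ha}" for hq, ha in turns)
    return f"[CLS] {history_text} {question} [SEP] {passage} [SEP]"
```

With `k=1`, only the single most question-relevant turn is prepended, which mirrors the paper's point that feeding the entire conversation adds noise while a selected subset keeps the useful context.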


Notes

  1. http://dialogue.mi.eng.cam.ac.uk/index.php/corpus/
  2. http://yanran.li/dailydialog.html
  3. https://quac.ai/
  4. http://quac.ai/


Author information

Corresponding author: Munazza Zaib.


Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Zaib, M., Tran, D.H., Sagar, S., Mahmood, A., Zhang, W.E., Sheng, Q.Z. (2021). BERT-CoQAC: BERT-Based Conversational Question Answering in Context. In: Ning, L., Chau, V., Lau, F. (eds) Parallel Architectures, Algorithms and Programming. PAAP 2020. Communications in Computer and Information Science, vol 1362. Springer, Singapore. https://doi.org/10.1007/978-981-16-0010-4_5


  • DOI: https://doi.org/10.1007/978-981-16-0010-4_5


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-0009-8

  • Online ISBN: 978-981-16-0010-4

  • eBook Packages: Computer Science (R0)
