ABSTRACT
This paper focuses on multi-hop question generation, which aims to generate questions that require reasoning over multiple sentences and relations to derive their answers. We first build an entity graph that integrates the various entities scattered across a text according to their contextual relations. We then heuristically extract a sub-graph based on evidential relations and entity types, obtaining the reasoning chain and the related textual content for each question. Guided by this chain, we propose a holistic generator-evaluator network to form the questions; the guidance helps ensure the rationality of the generated questions, i.e., that multi-hop deduction is needed to reach their answers. The generator is a sequence-to-sequence model equipped with several techniques that keep the questions syntactically and semantically valid. The evaluator optimizes the generator through a hybrid mechanism combining supervised and reinforcement learning. Experimental results on the HotpotQA dataset demonstrate the effectiveness of our approach: the generated samples can serve as pseudo training data to alleviate the data-shortage problem for neural networks and help train state-of-the-art models for multi-hop machine comprehension.
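The pipeline described above — linking entities by contextual relations and then walking the graph to obtain a reasoning chain — can be illustrated with a minimal sketch. This is not the paper's actual method: the co-occurrence heuristic, the function names, and the fixed hop count are all simplifying assumptions made here for illustration.

```python
from collections import defaultdict

def build_entity_graph(entities_per_sentence):
    """Toy entity graph: link entities that co-occur in the same sentence
    (a stand-in for the paper's contextual relations)."""
    graph = defaultdict(set)
    for ents in entities_per_sentence:
        for a in ents:
            for b in ents:
                if a != b:
                    graph[a].add(b)
    return graph

def reasoning_chain(graph, answer_entity, hops=2):
    """Heuristically walk `hops` steps outward from the answer entity,
    breadth-first, taking one representative node per hop as the chain."""
    chain = [answer_entity]
    frontier, seen = [answer_entity], {answer_entity}
    for _ in range(hops):
        nxt = []
        for node in frontier:
            for nb in sorted(graph[node]):  # sorted for determinism
                if nb not in seen:
                    seen.add(nb)
                    nxt.append(nb)
        if not nxt:
            break
        chain.append(nxt[0])
        frontier = nxt
    return chain

# Example: two sentences bridge "Obama" to "USA" via "Hawaii",
# so a 2-hop chain from the answer entity traverses the bridge.
g = build_entity_graph([["Obama", "Hawaii"], ["Hawaii", "USA"]])
print(reasoning_chain(g, "USA"))  # ['USA', 'Hawaii', 'Obama']
```

In the paper's full setting, the graph edges carry evidential relation types and the chain additionally selects the supporting sentences, which then condition the generator-evaluator network.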
Index Terms
- Generating Multi-hop Reasoning Questions to Improve Machine Reading Comprehension