Abstract
Neural question generation (NQG) aims to generate a question from a given passage with neural networks. NQG has attracted increasing attention in recent years owing to its wide applications in reading comprehension, question answering, and dialogue systems. Existing work on NQG mainly adopts the sequence-to-sequence (Seq2Seq) or graph-to-sequence (Graph2Seq) framework; the former ignores the rich structural information of the passage, while the latter falls short in modeling its semantic information. Moreover, the target answer plays an important role in the task: without it, the generated question is largely unconstrained. To effectively utilize answer information and capture both the structural and semantic information of the passage, we propose a graph augmented sequence-to-sequence (GA-Seq2Seq) model. First, we design an answer-aware passage representation module to integrate the answer information into the passage. Then, we present a graph augmented passage encoder, consisting of a graph encoder and a sequence encoder, to capture the passage's structure and semantics. Finally, we use an attention-based long short-term memory (LSTM) decoder to generate the question. Experimental results on the SQuAD and MS MARCO datasets show that our model outperforms state-of-the-art baselines under both automatic and human evaluation. The implementation is available at https://github.com/butterfliesss/GA-Seq2Seq.
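To make the architecture concrete, the following is a minimal PyTorch sketch of the pipeline the abstract describes. All module names, dimensions, and the answer-fusion and graph-aggregation choices are illustrative assumptions rather than the authors' implementation (see the linked repository for that).

```python
# Hypothetical sketch of the GA-Seq2Seq pipeline described in the abstract.
# Module names, dimensions, and fusion details are illustrative assumptions.
import torch
import torch.nn as nn

class GraphEncoderLayer(nn.Module):
    """One GNN layer: aggregate neighbor states via a (normalized) adjacency matrix."""
    def __init__(self, dim):
        super().__init__()
        self.linear = nn.Linear(dim, dim)

    def forward(self, h, adj):                        # h: (B, N, D), adj: (B, N, N)
        return torch.relu(self.linear(torch.bmm(adj, h)))

class GASeq2SeqSketch(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hid_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Answer-aware passage representation: fuse passage and answer embeddings.
        self.fuse = nn.Linear(2 * emb_dim, emb_dim)
        # Graph augmented passage encoder: a graph encoder plus a BiLSTM sequence encoder.
        self.graph_enc = GraphEncoderLayer(emb_dim)
        self.seq_enc = nn.LSTM(emb_dim, hid_dim, bidirectional=True, batch_first=True)
        # Attention-based LSTM decoder.
        self.dec = nn.LSTMCell(emb_dim + 2 * hid_dim, 2 * hid_dim)
        self.out = nn.Linear(2 * hid_dim, vocab_size)

    def forward(self, passage, answer, adj, question):
        p, a = self.embed(passage), self.embed(answer)
        # Answer-aware representation: attend each passage token over answer tokens.
        attn = torch.softmax(torch.bmm(p, a.transpose(1, 2)), dim=-1)   # (B, Np, Na)
        p = self.fuse(torch.cat([p, torch.bmm(attn, a)], dim=-1))
        # Capture structure (graph) and semantics (sequence) of the passage.
        p = self.graph_enc(p, adj)
        mem, _ = self.seq_enc(p)                      # (B, Np, 2H)
        # Decode with attention over the encoder memory (teacher forcing).
        B, H2 = mem.size(0), mem.size(2)
        h = c = mem.new_zeros(B, H2)
        logits = []
        for t in range(question.size(1)):
            scores = torch.softmax(torch.bmm(mem, h.unsqueeze(2)).squeeze(2), dim=-1)
            ctx = torch.bmm(scores.unsqueeze(1), mem).squeeze(1)
            h, c = self.dec(torch.cat([self.embed(question[:, t]), ctx], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)             # (B, Tq, V)
```

The sketch fuses answer information by attending each passage token over the answer tokens before encoding; the paper's actual fusion and graph construction may differ.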




Notes
The average of the BERT embeddings of a word's sub-tokens is used as the BERT embedding for the word (a code sketch of this pooling follows these notes).
Throughout the paper, parameter matrices are denoted by uppercase letters and parameter vectors by lowercase letters.
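Note 1 describes a common pooling trick. The sketch below shows one way to implement it with the Hugging Face transformers library; the model choice (bert-base-uncased) and the example words are assumptions, not the paper's exact setup.

```python
# Hypothetical sketch of note 1: average BERT sub-token embeddings to obtain
# one vector per word. Model choice and example input are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

words = ["neural", "question", "generation"]
enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
with torch.no_grad():
    hidden = model(**enc).last_hidden_state[0]        # (num_sub_tokens, 768)

word_vecs = []
for widx in range(len(words)):
    # word_ids() maps each sub-token position back to its source word
    # (None for special tokens such as [CLS] and [SEP]).
    positions = [i for i, w in enumerate(enc.word_ids()) if w == widx]
    word_vecs.append(hidden[positions].mean(dim=0))   # average the sub-tokens
word_embeddings = torch.stack(word_vecs)              # (num_words, 768)
```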
Acknowledgements
This work is partially supported by the Natural Science Foundation of China (No. 62076046, 62006034), the Natural Science Foundation of Liaoning Province (No. 2021-BS-067), and the Fundamental Research Funds for the Central Universities (No. DUT21RC(3)015).
Ethics declarations
Competing interests
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Cite this article
Ma, H., Wang, J., Lin, H. et al. Graph augmented sequence-to-sequence model for neural question generation. Appl Intell 53, 14628–14644 (2023). https://doi.org/10.1007/s10489-022-04260-2