Qadg: Generating question–answer-distractors pairs for real examination

  • Original Article
Neural Computing and Applications

Abstract

Reading comprehension question generation aims to generate questions from a given article, while distractor generation produces multiple distractors from a given article, question, and answer. Most existing research has focused on only one of these tasks, with limited attention to the joint task of Question–Answer-Distractor (QAD) generation. While previous work has succeeded in jointly generating answer-aware questions and distractors, applying these answer-aware approaches to practical applications in the education domain remains challenging. In this study, we propose a unified, high-performance Question–Answer-Distractors Generation model, named QADG. Our model comprises two components: Question–Answer Generation (QAG) and Distractor Generation (DG). It first generates Question–Answer pairs from a given context and then generates distractors based on the context and QA pairs. To address the unconstrained nature of question-and-answer generation in QAG, we employ a key phrase extraction module, as reported by Willis (in: Proceedings of the Sixth ACM Conference on Learning @ Scale, 2019), to extract key phrases from the article; the extracted key phrases serve as constraints that can be used to match answers. To enhance the quality of distractors, we propose a novel ranking-rewriting mechanism: a fine-tuned model ranks distractor candidates, and a rewriting module improves their quality. Furthermore, Knowledge-Dependent-Answerability (KDA), as reported by Moon (Evaluating the knowledge dependency of questions, 2022), is used as a filter to ensure the answerability of the generated QAD pairs. Experiments on the SQuAD and RACE datasets demonstrate that the proposed QADG exhibits superior performance, particularly in the DG phase. Human evaluations also confirm the effectiveness and educational relevance of our model.
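
To make the workflow concrete, below is a minimal sketch of the pipeline the abstract describes: key phrases constrain answer selection, a QAG step generates a question per answer, a DG step over-generates distractor candidates that are then ranked, and an answerability filter closes the loop. This is not the authors' implementation; the KeyBERT extractor, the t5-small checkpoint, the prompt formats, and the difflib-based ranker are stand-ins for the paper's key phrase extraction module, fine-tuned QAG/DG models, ranking-rewriting mechanism, and KDA filter.

```python
# Hypothetical sketch of a QADG-style pipeline. The checkpoints, prompts,
# and ranking heuristic below are placeholders, not the authors' code.
import difflib

from keybert import KeyBERT          # stand-in key phrase extractor (stage 1)
from transformers import pipeline    # generic seq2seq generation (stages 2-3)

kw_model = KeyBERT()
generator = pipeline("text2text-generation", model="t5-small")  # placeholder

def generate_qad(context: str, n_distractors: int = 3) -> list[dict]:
    """Produce (question, answer, distractors) triples for one passage."""
    qads = []
    # Stage 1: key phrases constrain the otherwise unconstrained answer space.
    answers = [kw for kw, _ in kw_model.extract_keywords(context, top_n=5)]
    for answer in answers:
        # Stage 2 (QAG): generate a question conditioned on context + answer.
        question = generator(
            f"generate question: answer: {answer} context: {context}",
            max_new_tokens=64,
        )[0]["generated_text"]
        # Stage 3 (DG): over-generate distractor candidates, then rank them.
        candidates = {
            out["generated_text"]
            for out in generator(
                f"generate distractor: question: {question} "
                f"answer: {answer} context: {context}",
                do_sample=True,
                num_return_sequences=8,
                max_new_tokens=32,
            )
        }
        # Placeholder ranker: prefer candidates similar to, but not identical
        # to, the answer (stand-in for the fine-tuned ranking model and the
        # rewriting module described in the abstract).
        ranked = sorted(
            candidates - {answer},
            key=lambda c: difflib.SequenceMatcher(None, c, answer).ratio(),
            reverse=True,
        )
        qads.append({
            "question": question,
            "answer": answer,
            "distractors": ranked[:n_distractors],
        })
    # Stage 4 would drop triples whose answerability score (e.g. KDA) falls
    # below a threshold; omitted in this sketch.
    return qads
```

In the paper's setting, each stand-in would be replaced by the corresponding fine-tuned module, with the rewriting step revising low-ranked distractors before the KDA filter is applied.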

Data availability

All data included in this study are available upon request from the corresponding author.

References

  1. Willis, A, Davis, G, Ruan, S, Manoharan, L, Landay, J, Brunskill, E (2019) Key phrase extraction for generating educational question-answer pairs. In: Proceedings of the Sixth ACM Conference on Learning @ Scale, pp. 1–10

  2. Moon, H, Yang, Y, Shin, J, Yu, H, Lee, S, Jeong, M, Park, J, Kim, M, Choi, S (2022) Evaluating the knowledge dependency of questions. arXiv preprint arXiv:2211.11902

  3. Lai, G, Xie, Q, Liu, H, Yang, Y, Hovy, E (2017) RACE: large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683

  4. Zhou, Q, Yang, N, Wei, F, Tan, C, Bao, H, Zhou, M (2018) Neural question generation from text: a preliminary study. In: Natural Language Processing and Chinese Computing: 6th CCF International Conference, NLPCC 2017, Dalian, China, November 8–12, 2017, Proceedings 6, pp. 662–671. Springer

  5. Zhao, Y, Ni, X, Ding, Y, Ke, Q (2018) Paragraph-level neural question generation with maxout pointer and gated self-attention networks. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3901–3910

  6. Qi, W, Yan, Y, Gong, Y, Liu, D, Duan, N, Chen, J, Zhang, R, Zhou, M (2020) ProphetNet: predicting future n-gram for sequence-to-sequence pre-training. arXiv preprint arXiv:2001.04063

  7. Jia, X, Zhou, W, Sun, X, Wu, Y (2020) How to ask good questions? Try to leverage paraphrases. In: Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pp. 6130–6140

  8. Sun, Y, Liu, S, Dan, Z, Zhao, X (2022) Question generation based on grammar knowledge and fine-grained classification. In: Proceedings of the 29th International Conference on Computational Linguistics, pp. 6457–6467

  9. Wang, S, Wei, Z, Fan, Z, Liu, Y, Huang, X (2019) A multi-agent communication framework for question-worthy phrase extraction and question generation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 7168–7175

  10. Cui, S, Bao, X, Zu, X, Guo, Y, Zhao, Z, Zhang, J, Chen, H (2021) OneStop QAMaker: extract question–answer pairs from text in a one-stop approach. arXiv preprint arXiv:2102.12128

  11. Subramanian, S, Wang, T, Yuan, X, Zhang, S, Bengio, Y, Trischler, A (2017) Neural models for key phrase detection and question generation. arXiv preprint arXiv:1706.04560

  12. Qu, F, Jia, X, Wu, Y (2021) Asking questions like educational experts: automatically generating question-answer pairs on real-world examination data. arXiv preprint arXiv:2109.05179

  13. Rodriguez-Torrealba R, Garcia-Lopez E, Garcia-Cabot A (2022) End-to-end generation of multiple-choice questions using text-to-text transfer transformer models. Exp Syst Appl 208:118258

  14. Vachev, K, Hardalov, M, Karadzhov, G, Georgiev, G, Koychev, I, Nakov, P (2022) Leaf: multiple-choice question generation. In: European Conference on Information Retrieval, pp. 321–328. Springer

  15. Bulathwela, S, Muse, H, Yilmaz, E (2023) Scalable educational question generation with pre-trained language models. In: International Conference on Artificial Intelligence in Education, pp. 327–339. Springer

  16. Shuai P, Li L, Liu S, Shen J (2023) QDG: a unified model for automatic question-distractor pairs generation. Appl Intell 53(7):8275–8285

  17. Ren, S, Zhu, KQ (2021) Knowledge-driven distractor generation for cloze-style multiple choice questions. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 4339–4347

  18. Liang, C, Yang, X, Dave, N, Wham, D, Pursel, B, Giles, CL (2018) Distractor generation for multiple choice questions using learning to rank. In: Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 284–290

  19. Rodriguez-Torrealba R, Garcia-Lopez E, Garcia-Cabot A (2022) End-to-end generation of multiple-choice questions using text-to-text transfer transformer models. Exp Syst Appl 208:118258

  20. Kumar AP, Nayak A, Shenoy M, Goyal S et al (2023) A novel approach to generate distractors for multiple choice questions. Exp Syst Appl 225:120022

  21. Raffel C, Shazeer N, Roberts A, Lee K, Narang S, Matena M, Zhou Y, Li W, Liu PJ (2020) Exploring the limits of transfer learning with a unified text-to-text transformer. J Mach Learn Res 21(1):5485–5551

  22. Sanh, V, Debut, L, Chaumond, J, Wolf, T (2019) DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108

  23. Qiu, Z, Wu, X, Fan, W (2020) Automatic distractor generation for multiple choice questions in standard tests. arXiv preprint arXiv:2011.13100

  24. Adamson, D, Bhartiya, D, Gujral, B, Kedia, R, Singh, A, Rosé, CP (2013) Automatically generating discussion questions. In: Artificial Intelligence in Education: 16th International Conference, AIED 2013, Memphis, TN, USA, July 9–13, 2013. Proceedings 16, pp. 81–90. Springer

  25. Heilman, M, Smith, NA (2010) Good question! Statistical ranking for question generation. In: Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pp. 609–617

  26. Dong, L, Yang, N, Wang, W, Wei, F, Liu, X, Wang, Y, Gao, J, Zhou, M, Hon, H-W (2019) Unified language model pre-training for natural language understanding and generation. Advances in Neural Information Processing Systems 32

  27. Sun, X, Liu, J, Lyu, Y, He, W, Ma, Y, Wang, S (2018) Answer-focused and position-aware neural question generation. In: Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pp. 3930–3939

  28. Scialom, T, Piwowarski, B, Staiano, J (2019) Self-attention architectures for answer-agnostic neural question generation. In: Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pp. 6027–6032

  29. Lewis, M, Liu, Y, Goyal, N, Ghazvininejad, M, Mohamed, A, Levy, O, Stoyanov, V, Zettlemoyer, L (2019) BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461

  30. Radford A, Wu J, Child R, Luan D, Amodei D, Sutskever I et al (2019) Language models are unsupervised multitask learners. OpenAI blog 1(8):9

  31. Bao, H, Dong, L, Wei, F, Wang, W, Yang, N, Liu, X, Wang, Y, Gao, J, Piao, S, Zhou, M, et al. (2020) UniLMv2: pseudo-masked language models for unified language model pre-training. In: International Conference on Machine Learning, pp. 642–652. PMLR

  32. Sun, Y, Liu, S, Dan, Z, Zhao, X (2022) Question generation based on grammar knowledge and fine-grained classification. In: Proceedings of the 29th International Conference on Computational Linguistics, pp. 6457–6467

  33. Pennington, J, Socher, R, Manning, CD (2014) GloVe: global vectors for word representation. In: Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 1532–1543

  34. Welbl, J, Liu, NF, Gardner, M (2017) Crowdsourcing multiple choice science questions. arXiv preprint arXiv:1707.06209

  35. Guo, Q, Kulkarni, C, Kittur, A, Bigham, JP, Brunskill, E (2016) Questimator: generating knowledge assessments for arbitrary topics. In: IJCAI-16: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence

  36. Kumar, G, Banchs, RE, D’Haro, LF (2015) RevUP: automatic gap-fill question generation from educational texts. In: Proceedings of the Tenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 154–161

  37. Stasaski, K, Hearst, MA (2017) Multiple choice question generation utilizing an ontology. In: Proceedings of the 12th Workshop on Innovative Use of NLP for Building Educational Applications, pp. 303–312

  38. Liang, C, Yang, X, Dave, N, Wham, D, Pursel, B, Giles, CL (2018) Distractor generation for multiple choice questions using learning to rank. In: Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 284–290

  39. Liang, C, Yang, X, Wham, D, Pursel, B, Passonneau, R, Giles, CL (2017) Distractor generation with generative adversarial nets for automatically creating fill-in-the-blank questions. In: Proceedings of the Knowledge Capture Conference, pp. 1–4

  40. Gao, Y, Bing, L, Li, P, King, I, Lyu, MR (2019) Generating distractors for reading comprehension questions from real examinations. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 33, pp. 6423–6430

  41. Zhou, X, Luo, S, Wu, Y (2020) Co-attention hierarchical network: generating coherent long distractors for reading comprehension. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 34, pp. 9725–9732

  42. Xie J, Peng N, Cai Y, Wang T, Huang Q (2021) Diverse distractor generation for constructing high-quality multiple choice questions. IEEE/ACM Trans Audio Speech Lang Process 30:280–291

  43. Ye, X, Yavuz, S, Hashimoto, K, Zhou, Y, Xiong, C (2021) RnG-KBQA: generation augmented iterative ranking for knowledge base question answering. arXiv preprint arXiv:2109.08678

  44. Yao, B, Wang, D, Wu, T, Zhang, Z, Li, TJ-J, Yu, M, Xu, Y (2021) It is AI’s turn to ask humans a question: question–answer pair generation for children’s story books. arXiv preprint arXiv:2109.03423

  45. Ming, X (2022) Similarities: similarity calculation and semantic search toolkit. https://github.com/shibing624/similarities

  46. Devlin, J, Chang, M-W, Lee, K, Toutanova, K (2019) BERT: pre-training of deep bidirectional transformers for language understanding. In: Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pp. 4171–4186. Association for Computational Linguistics, Minneapolis, Minnesota

  47. Wang, X, Fan, S, Houghton, J, Wang, L (2022) Towards process-oriented, modular, and versatile question generation that meets educational needs. arXiv preprint arXiv:2205.00355

  48. Dong, Q, Wan, X, Cao, Y (2021) ParaSCI: a large scientific paraphrase dataset for longer paraphrase generation. arXiv preprint arXiv:2101.08382

  49. Lee, M, Won, S, Kim, J, Lee, H, Park, C, Jung, K (2021) CrossAug: a contrastive data augmentation method for debiasing fact verification models. In: Proceedings of the 30th ACM International Conference on Information & Knowledge Management, pp. 3181–3185

  50. Rajpurkar, P, Zhang, J, Lopyrev, K, Liang, P (2016) SQuAD: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250

  51. Bai, J, Rong, W, Xia, F, Wang, Y, Ouyang, Y, Xiong, Z (2021) Paragraph level multi-perspective context modeling for question generation. In: ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pp. 7418–7422. IEEE

  52. Jia, X, Zhou, W, Sun, X, Wu, Y (2021) EQG-RACE: examination-type question generation. In: Proceedings of the AAAI Conference on Artificial Intelligence, vol. 35, pp. 13143–13151

  53. Zhao, Z, Hou, Y, Wang, D, Yu, M, Liu, C, Ma, X (2022) Educational question generation of children storybooks via question type distribution learning and event-centric summarization. In: Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, pp. 5073–5085. Association for Computational Linguistics, Dublin, Ireland

  54. Ma H, Wang J, Lin H, Xu B (2023) Graph augmented sequence-to-sequence model for neural question generation. Appl Intell 53(11):14628–14644

  55. Maurya, KK, Desarkar, MS (2020) Learning to distract: a hierarchical multi-decoder network for automated generation of long distractors for multiple-choice questions for reading comprehension. In: Proceedings of the 29th ACM International Conference on Information & Knowledge Management, pp. 1115–1124

  56. Liu, Y, Ott, M, Goyal, N, Du, J, Joshi, M, Chen, D, Levy, O, Lewis, M, Zettlemoyer, L, Stoyanov, V (2019) RoBERTa: a robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692

  57. Lewis, M, Liu, Y, Goyal, N, Ghazvininejad, M, Mohamed, A, Levy, O, Stoyanov, V, Zettlemoyer, L (2019) BART: denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461

Funding

This study was supported by the National Natural Science Foundation of China (No. 61877051).

Author information

Corresponding author

Correspondence to Li Li.

Ethics declarations

Conflict of interest

The authors declared no potential conflict of interest with respect to the research, authorship, and publication of this article.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.

About this article

Cite this article

Zhou, H., Li, L. Qadg: Generating question–answer-distractors pairs for real examination. Neural Comput & Applic 37, 1157–1170 (2025). https://doi.org/10.1007/s00521-024-10658-5
