
Exploring Artificial Jabbering for Automatic Text Comprehension Question Generation

  • Conference paper
Addressing Global Challenges and Quality Education (EC-TEL 2020)

Abstract

Many educational texts lack comprehension questions, and authoring them costs time and money. In this article, we therefore ask to what extent artificial jabbering text generation systems can be used to generate textbook comprehension questions. Novel machine learning-based text generation systems jabber on a wide variety of topics with deceptively good performance. To expose the generated texts as such, one often has to understand the actual topic the system jabbers about. Hence, confronting learners with generated texts may cause them to question their level of knowledge. We built a novel prototype that generates comprehension questions for arbitrary textbook passages. We discuss the strengths and weaknesses of the prototype quantitatively and qualitatively. While our prototype is not perfect, we provide evidence that such systems have great potential as question generators, and we identify the most promising starting points that may lead to (semi-)automated generators supporting textbook authors and self-studying.
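The abstract does not detail the prototype's pipeline (the paper's system builds on a GPT-2 language model, per its notes). Purely as an illustration of the task, the sketch below generates the simplest kind of comprehension question, a gap-fill item, by blanking out the rarest content word of a passage. The function name, stopword list, and rarity heuristic are hypothetical stand-ins, not the authors' method:

```python
import re
from collections import Counter

# Minimal stopword list; a real system would use a proper NLP toolkit.
STOPWORDS = {"the", "a", "an", "of", "and", "or", "in", "on", "to",
             "is", "are", "was", "it", "that", "as", "by", "for", "with", "into"}

def make_gap_fill(passage: str) -> tuple[str, str]:
    """Turn a passage into a (question, answer) gap-fill pair by
    blanking out the rarest content word in the passage."""
    words = re.findall(r"[A-Za-z']+", passage)
    content = [w for w in words if w.lower() not in STOPWORDS]
    counts = Counter(w.lower() for w in content)
    # Rarest word is a crude proxy for the most informative one;
    # ties resolve to the earliest occurrence.
    answer = min(content, key=lambda w: counts[w.lower()])
    question = re.sub(rf"\b{re.escape(answer)}\b", "_____", passage, count=1)
    return question, answer

q, a = make_gap_fill("Photosynthesis converts light energy "
                     "into chemical energy in plants.")
# a == "Photosynthesis"; q blanks that word out of the sentence.
```

A generative system like the paper's prototype differs fundamentally: it produces free-form questions rather than cloze items, which is what makes both its fluency and its "jabbering" failure modes possible.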


Notes

  1. Using NLTK-3.4.5.

  2. https://github.com/openai/gpt-2.


Author information


Corresponding author

Correspondence to Tim Steuer.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Steuer, T., Filighera, A., Rensing, C. (2020). Exploring Artificial Jabbering for Automatic Text Comprehension Question Generation. In: Alario-Hoyos, C., Rodríguez-Triana, M.J., Scheffel, M., Arnedillo-Sánchez, I., Dennerlein, S.M. (eds) Addressing Global Challenges and Quality Education. EC-TEL 2020. Lecture Notes in Computer Science, vol 12315. Springer, Cham. https://doi.org/10.1007/978-3-030-57717-9_1


  • DOI: https://doi.org/10.1007/978-3-030-57717-9_1


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-57716-2

  • Online ISBN: 978-3-030-57717-9

  • eBook Packages: Computer Science (R0)
