
Q&A Generation for Flashcards Within a Transformer-Based Framework

  • Conference paper
  • In: Higher Education Learning Methodologies and Technologies Online (HELMeTO 2022)

Abstract

Flashcards are the main tool used in the Spaced Repetition memorization method, yet they are not always available for many topics because of the high effort required to create them. Combining Transformer-based models with a Recommender System (RS) can enable a dynamic model that automatically generates flashcard recommendations for learners and serious-game players, helping them improve their skills in learning programming. In previous work we introduced an Intelligent Serious Games (ISG) model that combined Deep Knowledge Tracing (DKT) with a Transformer-based recommender. The ISG aimed at predicting the outcomes of the next missions in gameplay and at recommending flashcards that help complete those missions successfully. This research extends that work by introducing a novel architecture and specifications for a Transformer-based recommender. We present a novel Transformer-based framework tailored to three different NLP tasks that dynamically generates flashcards in the form of supporting paragraphs, questions, and answers. We fine-tuned GPT-2, GPT-Neo, BART, and T5 models on three new programming-skills datasets and evaluated them using standard metrics that target coherence and semantics. Our findings reveal that the framework can generate coherent flashcards in a fully automated process from a single input string as a prompt.
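The abstract describes a pipeline that chains three NLP generation tasks — supporting paragraph, question, then answer — starting from a single prompt string. A minimal sketch of that chaining is shown below; the generator functions are hypothetical stand-ins for the fine-tuned GPT-2/GPT-Neo/BART/T5 models, not the paper's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class Flashcard:
    """One generated flashcard: a supporting paragraph plus a Q&A pair."""
    paragraph: str
    question: str
    answer: str


# Hypothetical stand-ins for the three fine-tuned models; in the framework
# each would be a Transformer mapping an input string to generated text.
def generate_paragraph(topic: str) -> str:
    return f"A supporting paragraph about {topic}."


def generate_question(paragraph: str) -> str:
    return f"What does the following text describe? {paragraph}"


def generate_answer(paragraph: str, question: str) -> str:
    return f"An answer grounded in: {paragraph}"


def make_flashcard(topic: str) -> Flashcard:
    """Chain the three generation tasks from a single input prompt."""
    paragraph = generate_paragraph(topic)
    question = generate_question(paragraph)
    answer = generate_answer(paragraph, question)
    return Flashcard(paragraph, question, answer)


card = make_flashcard("Python list comprehensions")
print(card.question)
```

Each stage feeds the previous stage's output forward, so a learner (or the ISG's recommender) only ever supplies the initial topic string.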



Author information

Correspondence to Baha Thabet.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Thabet, B., Zanichelli, N., Zanichelli, F. (2023). Q&A Generation for Flashcards Within a Transformer-Based Framework. In: Fulantelli, G., Burgos, D., Casalino, G., Cimitile, M., Lo Bosco, G., Taibi, D. (eds) Higher Education Learning Methodologies and Technologies Online. HELMeTO 2022. Communications in Computer and Information Science, vol 1779. Springer, Cham. https://doi.org/10.1007/978-3-031-29800-4_59


  • DOI: https://doi.org/10.1007/978-3-031-29800-4_59

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-29799-1

  • Online ISBN: 978-3-031-29800-4

  • eBook Packages: Computer Science, Computer Science (R0)
