Abstract
Feedback mechanisms for academic courses are widely used to measure students' opinions of and satisfaction with different components of a course; in parallel, detailed open-text impressions enable professors to continually improve their courses. However, reading through hundreds of student feedback responses across multiple subjects and then extracting the important ideas is very time consuming. In this work, we propose an automated feedback summarizer that extracts the main ideas expressed by all students on the various components of each course, based on a pipeline integrating state-of-the-art Natural Language Processing techniques. Our method uses BERT language models to extract keywords for each course, identify relevant contexts for recurring keywords, and cluster similar contexts. We validate our tool on 8,201 feedback responses for 168 distinct courses from the Computer Science Department of University Politehnica of Bucharest for the 2019–2020 academic year. Our approach reduces the overall volume of text by 59%, while increasing the mean absolute error when predicting course ratings from open-text student feedback by only 0.06.
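The three pipeline stages named in the abstract (keyword extraction, context retrieval for recurring keywords, and clustering of similar contexts) can be sketched in dependency-free Python. This is an illustrative stand-in, not the authors' implementation: word frequency replaces the BERT-based keyword scoring, and bag-of-words cosine similarity replaces BERT sentence embeddings; all function names, thresholds, and sample responses below are hypothetical.

```python
import math
import re
from collections import Counter

# Minimal stop-word list for the toy example (the paper works on Romanian text
# with a Romanian spaCy/BERT pipeline; this is only a sketch).
STOPWORDS = {"the", "a", "and", "is", "was", "were", "to", "of",
             "in", "it", "very", "i", "for", "too"}

def keywords(responses, top_n=3):
    """Stage 1: pick recurring keywords (frequency stands in for BERT scores)."""
    words = [w for r in responses for w in re.findall(r"[a-z]+", r.lower())
             if w not in STOPWORDS]
    return [w for w, _ in Counter(words).most_common(top_n)]

def contexts(responses, keyword):
    """Stage 2: collect the sentences in which a keyword occurs."""
    sents = [s.strip() for r in responses for s in re.split(r"[.!?]", r)
             if s.strip()]
    return [s for s in sents if keyword in s.lower()]

def bow(sentence):
    return Counter(re.findall(r"[a-z]+", sentence.lower()))

def cosine(a, b):
    num = sum(a[w] * b[w] for w in a)
    den = (math.sqrt(sum(v * v for v in a.values()))
           * math.sqrt(sum(v * v for v in b.values())))
    return num / den if den else 0.0

def cluster(sentences, threshold=0.4):
    """Stage 3: greedy clustering of similar contexts (cosine on bag-of-words
    stands in for similarity between BERT sentence embeddings)."""
    clusters = []  # each entry: (representative bag-of-words, member sentences)
    for s in sentences:
        v = bow(s)
        for rep, members in clusters:
            if cosine(v, rep) >= threshold:
                members.append(s)
                break
        else:
            clusters.append((v, [s]))
    return [members for _, members in clusters]

if __name__ == "__main__":
    responses = [
        "The lectures were great and the lectures were clear.",
        "Homework was too long.",
        "Homework deadlines were stressful.",
        "Great lectures overall.",
    ]
    for kw in keywords(responses, top_n=2):
        print(kw, "->", cluster(contexts(responses, kw)))
```

In the actual system, each stage would plug in its neural counterpart (e.g., a Romanian BERT model for embeddings), but the control flow of extract, retrieve, cluster remains the same.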
Notes
1. https://spacy.io/models/ro#ro_core_news_lg. Retrieved April 15, 2021.
2. https://spacy.io/. Retrieved April 15, 2021.
3. https://www.tensorflow.org/tensorboard. Retrieved April 15, 2021.
Acknowledgments
This research was supported by a grant of the Romanian National Authority for Scientific Research and Innovation, CNCS – UEFISCDI, project number TE 70 PN-III-P1-1.1-TE-2019-2209, ATES – “Automated Text Evaluation and Simplification”.
Copyright information
© 2021 Springer Nature Switzerland AG
Cite this paper
Masala, M., Ruseti, S., Dascalu, M., Dobre, C. (2021). Extracting and Clustering Main Ideas from Student Feedback Using Language Models. In: Roll, I., McNamara, D., Sosnovsky, S., Luckin, R., Dimitrova, V. (eds) Artificial Intelligence in Education. AIED 2021. Lecture Notes in Computer Science(), vol 12748. Springer, Cham. https://doi.org/10.1007/978-3-030-78292-4_23
Print ISBN: 978-3-030-78291-7
Online ISBN: 978-3-030-78292-4