Abstract
Artificial intelligence has proved useful for automating the grading process, especially when an assessment involves a large number of students. The general problem we address is the automated grading of assignments whose solutions consist of a list of commands, their outputs, and possible comments. In this paper, we focus on the automated classification of the comments as “right” or “wrong”. In particular, we investigated the effect of different features (i.e., fastText, BERT, distance-based, and custom features), fed to several classifiers (i.e., Logistic Regression, Support Vector Machines, Random Forest, and Multi-Layer Perceptron – MLP), in order to select the best one in terms of balanced accuracy. In the experiment carried out, the best result was obtained by the MLP classifier using fastText embeddings. When fed with BERT embeddings instead, the MLP obtained a slightly lower accuracy and F1 score, although it remained the best option with respect to the other classifiers. Furthermore, we tested the classifier on comments given for different assignments (of the same structure), written by different students and evaluated by a different professor. In this case too, we achieved a relatively good accuracy and F1 score.
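The comparison described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data here is synthetic (in the paper, the features are fastText/BERT embeddings of student comments), and the hyperparameters only echo the notes below (a 100-tree forest, an MLP with 5 hidden layers and a decay of \(10^{-5}\)); the hidden-layer width and other settings are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for sentence embeddings; labels: "right" (1) vs "wrong" (0)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))
y = rng.integers(0, 2, size=200)

# The four classifier families compared in the paper
classifiers = {
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=100),        # 100 trees, as in note 4
    "MLP": MLPClassifier(hidden_layer_sizes=(64,) * 5,     # 5 hidden layers (note 5);
                         alpha=1e-5, max_iter=500),        # width 64 is an assumption
}

# Model selection by mean cross-validated balanced accuracy
scores = {
    name: cross_val_score(clf, X, y, cv=5, scoring="balanced_accuracy").mean()
    for name, clf in classifiers.items()
}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

On real embeddings, the selection step would also include the grid-search tuning mentioned in note 3 (e.g. via `GridSearchCV`) rather than fixed hyperparameters.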
Notes
- 1.
- 2. In English: “major”, “minor”, “greater”, “less”.
- 3. Tuned through grid-search.
- 4. The forest was made up of 100 trees.
- 5. We used 5 hidden layers and a decay equal to \(10^{-5}\).
- 6. \(K \le 0.2 \rightarrow \) poor, \(K \in (0.2, 0.4] \rightarrow \) low, \(K \in (0.4, 0.6] \rightarrow \) average, \(K \in (0.6, 0.8] \rightarrow \) good, \(K \in (0.8, 1.0] \rightarrow \) excellent.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Angelone, A.M., Galassi, A., Vittorini, P. (2022). Improved Automated Classification of Sentences in Data Science Exercises. In: De la Prieta, F., et al. Methodologies and Intelligent Systems for Technology Enhanced Learning, 11th International Conference. MIS4TEL 2021. Lecture Notes in Networks and Systems, vol 326. Springer, Cham. https://doi.org/10.1007/978-3-030-86618-1_2
DOI: https://doi.org/10.1007/978-3-030-86618-1_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-86617-4
Online ISBN: 978-3-030-86618-1
eBook Packages: Intelligent Technologies and Robotics (R0)