Abstract
The shift towards online learning triggered by the COVID-19 pandemic created a strong need to assess students with shorter exams while preserving reliability. This study explores a data-centric approach that uses feature importance to select a discriminative subset of questions from the original exam. The ability of this discriminative question subset to approximate students' full exam scores is evaluated by measuring prediction accuracy and by quantifying the error interval of the prediction. The approach was evaluated on two real-world exam datasets, the Scholastic Aptitude Test (SAT) and the Exame Nacional do Ensino Médio (ENEM), each consisting of student response data and the corresponding exam scores. The evaluation compared the selected subsets against randomized question subsets of sizes 10, 20, 30, and 50. The results show that our method estimates the full scores more accurately than a baseline model for most subset sizes while maintaining a reasonable error interval. This encouraging evidence supports the potential of the ongoing study to provide a data-centric approach to exam size reduction.
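The pipeline described in the abstract can be illustrated with a minimal sketch: rank questions by feature importance, keep the top-k, and compare score predictions from that subset against a randomly chosen subset of the same size. The synthetic binary response matrix, the variable names, and the choice of a random-forest regressor below are illustrative assumptions, not the authors' implementation or the SAT/ENEM datasets.

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for student response data: 1 = correct, 0 = incorrect.
n_students, n_questions = 2000, 100
responses = rng.integers(0, 2, size=(n_students, n_questions))
full_scores = responses.sum(axis=1)  # stand-in for the official full exam score

X_train, X_test, y_train, y_test = train_test_split(
    responses, full_scores, test_size=0.2, random_state=0
)

# Rank questions by random-forest feature importance on the full question set.
ranker = RandomForestRegressor(n_estimators=200, random_state=0)
ranker.fit(X_train, y_train)
order = np.argsort(ranker.feature_importances_)[::-1]

for k in (10, 20, 30, 50):
    top_k = order[:k]                                        # discriminative subset
    rand_k = rng.choice(n_questions, size=k, replace=False)  # randomized baseline
    for name, subset in (("top-k", top_k), ("random", rand_k)):
        model = RandomForestRegressor(n_estimators=200, random_state=0)
        model.fit(X_train[:, subset], y_train)
        pred = model.predict(X_test[:, subset])
        print(f"k={k:2d} {name:6s} MAE={mean_absolute_error(y_test, pred):.2f}")

In this sketch the error interval discussed in the abstract could be estimated, for example, from the spread of per-tree predictions or from residuals on a held-out set; the paper's exact quantification method is not reproduced here.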
Copyright information
© 2021 Springer Nature Switzerland AG
About this paper
Cite this paper
Kim, J.H., Baek, J., Hwang, C., Bae, C., Park, J. (2021). Condensed Discriminative Question Set for Reliable Exam Score Prediction. In: Roll, I., McNamara, D., Sosnovsky, S., Luckin, R., Dimitrova, V. (eds) Artificial Intelligence in Education. AIED 2021. Lecture Notes in Computer Science, vol 12749. Springer, Cham. https://doi.org/10.1007/978-3-030-78270-2_79
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-78269-6
Online ISBN: 978-3-030-78270-2