Abstract
While many learning and assessment models focus on the binary correctness of student responses, previous studies have shown that having access to extra information—such as the time it takes students to respond to a question—can improve the performance of these models. As much of the previous work in this area has focused on knowledge tracing and next answer correctness, in this study we take a different approach and analyze the relationship between these extra types of information and the overall knowledge of the student, as measured by the end result of an adaptive assessment. In addition to looking at student response times, we investigate the benefit of having detailed information on the responses in the form of answer feedback tags from the adaptive assessment system. After using feature embeddings to encode the information from these feedback tags, we build several models and perform a feature importance analysis to compare the relative significance of these different variables. Although it appears that the response time variable does contain useful information, the answer feedback tags are ultimately much more important to the models.
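The feature importance comparison described in the abstract might be sketched as follows. This is purely an illustration, not the paper's actual pipeline: the tag labels, response times, and target here are synthetic stand-ins, and a random forest's built-in impurity importances (per Breiman [2]) stand in for whatever importance analysis the authors performed.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500

# Hypothetical answer feedback tags and response times (synthetic data).
tag_names = ["correct", "sign_error", "off_by_one", "blank"]
tags = rng.choice(tag_names, size=n)
resp_time = rng.exponential(30.0, size=n)

# Synthetic outcome driven mostly by the tag, with some noise.
y = (tags == "correct").astype(int) ^ (rng.random(n) < 0.1).astype(int)

# One-hot encode the categorical tags and stack with response time.
tag_feats = np.array([[t == name for name in tag_names] for t in tags], dtype=float)
X = np.column_stack([resp_time, tag_feats])

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
importances = clf.feature_importances_
for name, imp in zip(["resp_time"] + tag_names, importances):
    print(f"{name}: {imp:.3f}")
```

On data generated this way, the tag indicators dominate the importance scores, mirroring the abstract's qualitative finding that feedback tags carry more signal than response times.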
Notes
1. To account for students with multiple assessments, the confidence intervals are computed using the cluster bootstrap method [3].
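A minimal sketch of the cluster bootstrap idea referenced in the note, assuming a percentile-interval variant in which whole clusters (here, students) are resampled with replacement; the function name, defaults, and data are illustrative, not taken from the paper.

```python
import numpy as np

def cluster_bootstrap_ci(values, cluster_ids, stat=np.mean,
                         n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap CI that resamples whole clusters with replacement.

    Resampling clusters (rather than individual observations) preserves the
    within-cluster correlation, e.g. between assessments of the same student.
    """
    rng = np.random.default_rng(seed)
    # Group observations by their cluster id.
    clusters = {}
    for v, c in zip(values, cluster_ids):
        clusters.setdefault(c, []).append(v)
    groups = [np.asarray(g) for g in clusters.values()]
    boots = []
    for _ in range(n_boot):
        # Draw len(groups) clusters with replacement, pool their observations.
        idx = rng.integers(0, len(groups), size=len(groups))
        sample = np.concatenate([groups[i] for i in idx])
        boots.append(stat(sample))
    lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# Usage with synthetic scores from 12 students, 10 observations each.
vals = np.repeat([0.2, 0.5, 0.8], 40)
ids = np.repeat(np.arange(12), 10)
lo, hi = cluster_bootstrap_ci(vals, ids)
```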
References
Beck, J.E.: Engagement tracing: Using response times to model student disengagement. In: Artificial Intelligence in Education (2005)
Breiman, L.: Random forests. Mach. Learn. 45(1), 5–32 (2001)
Field, C.A., Welsh, A.H.: Bootstrapping clustered data. J. R. Stat. Soc. Ser. B (Stat. Methodol.) 69(3), 369–390 (2007)
González-Espada, W.J., Bullock, D.W.: Innovative applications of classroom response systems: investigating students' item response times in relation to final course grade, gender, general point average, and high school ACT scores. Electron. J. Integr. Technol. Educ. 6, 97–108 (2007)
Inwegen, E.V., Adjei, S.A., Wang, Y., Heffernan, N.T.: Using partial credit and response history to model user knowledge. In: Educational Data Mining (2015)
Jurafsky, D., Martin, J.H.: Speech and Language Processing (3rd ed. draft) (2021). https://web.stanford.edu/~jurafsky/slp3/
Lin, C., Shen, S., Chi, M.: Incorporating student response time and tutor instructional interventions into student modeling. In: User Modeling Adaptation and Personalization (2016)
Liu, N., Wang, Z., Baraniuk, R.G., Lan, A.: Open-ended knowledge tracing (2022). https://doi.org/10.48550/ARXIV.2203.03716. https://arxiv.org/abs/2203.03716
Matayoshi, J., Uzun, H., Cosyn, E.: Using a randomized experiment to compare the performance of two adaptive assessment engines. In: Educational Data Mining, pp. 821–827 (2022)
Paszke, A., et al.: PyTorch: an imperative style, high-performance deep learning library. CoRR abs/1912.01703 (2019). http://arxiv.org/abs/1912.01703
Pelánek, R.: Bayesian knowledge tracing, logistic models, and beyond: an overview of learner modeling techniques. User Model. User Adap. Inter. 27(3), 313–350 (2017)
Pelánek, R.: Exploring the utility of response times and wrong answers for adaptive learning. In: Learning @ Scale, pp. 1–4 (2018)
Pelánek, R., Effenberger, T.: Beyond binary correctness: classification of students’ answers in learning systems. User Model. User Adap. Inter. 30(5), 867–893 (2020)
Pelánek, R., Rihák, J.: Properties and applications of wrong answers in online educational systems. In: Educational Data Mining (2016)
Piech, C., Huang, J., Nguyen, A., Phulsuksombati, M., Sahami, M., Guibas, L.: Learning program embeddings to propagate feedback on student code. In: International Conference on Machine Learning, pp. 1093–1102. PMLR (2015)
Wang, Y., Heffernan, N.T.: Leveraging first response time into the knowledge tracing model. In: Educational Data Mining (2012)
Wang, Y., Heffernan, N.T.: Extending knowledge tracing to allow partial credit: using continuous versus binary nodes. In: Artificial Intelligence in Education (2013)
Wang, Y., Heffernan, N.T., Beck, J.E.: Representing student performance with partial credit. In: Educational Data Mining (2010)
Wang, Y., Heffernan, N.T., Heffernan, C.: Towards better affect detectors: effect of missing skills, class features and common wrong answers. In: Learning Analytics and Knowledge (2015)
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Matayoshi, J., Uzun, H., Cosyn, E. (2023). Analyzing Response Times and Answer Feedback Tags in an Adaptive Assessment. In: Wang, N., Rebolledo-Mendez, G., Dimitrova, V., Matsuda, N., Santos, O.C. (eds) Artificial Intelligence in Education. Posters and Late Breaking Results, Workshops and Tutorials, Industry and Innovation Tracks, Practitioners, Doctoral Consortium and Blue Sky. AIED 2023. Communications in Computer and Information Science, vol 1831. Springer, Cham. https://doi.org/10.1007/978-3-031-36336-8_46
Print ISBN: 978-3-031-36335-1
Online ISBN: 978-3-031-36336-8
eBook Packages: Computer Science, Computer Science (R0)