Abstract
Many universities have adopted online programming tools to support students' programming practice, yet the services offered by existing tools remain largely passive: they do not account for a student's skill level or recommend questions of appropriate difficulty. To enhance the learning experience and improve programming skills, it would be helpful to assess each student's programming ability and provide the most suitable questions and guidelines. Machine learning can model both the level of a student's programming skill and the difficulty of a question by taking the student's programming experience and code submissions into account. This paper presents a study on machine learning models that classify students' programming skill levels and the difficulty levels of programming questions from students' code submissions. We extracted a total of 197 features covering code quality, code readability, and system time, and used them to build classification models. The model for student level (four classes) and the model for question level (five classes) achieved F1-scores of 0.60 and 0.82, respectively, showing reasonable classification performance. We discuss the highlights of our study and their implications, such as matching students to groups and questions based on code submissions and improving user experience.
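The abstract describes extracting features of code quality, readability, and system time from code submissions. As a minimal sketch of what such feature extraction might look like, the function below computes a few simple readability features from a submission string. These feature names and definitions are illustrative assumptions, not the paper's actual 197 features:

```python
def extract_features(code: str) -> dict:
    """Compute a few illustrative readability features from a code submission.

    Hypothetical examples only; the paper's actual feature set is not
    reproduced here.
    """
    lines = code.splitlines()
    nonblank = [ln for ln in lines if ln.strip()]
    comments = [ln for ln in nonblank if ln.lstrip().startswith("#")]
    return {
        # non-blank lines of code
        "loc": len(nonblank),
        # average length of non-blank lines
        "avg_line_len": (sum(len(ln) for ln in nonblank) / len(nonblank)
                         if nonblank else 0.0),
        # share of non-blank lines that are comments
        "comment_ratio": (len(comments) / len(nonblank)
                          if nonblank else 0.0),
        # deepest leading-whitespace indentation, in characters
        "max_indent": max((len(ln) - len(ln.lstrip()) for ln in nonblank),
                          default=0),
    }


sample = "def add(a, b):\n    # return the sum\n    return a + b\n"
features = extract_features(sample)
```

Feature vectors like this one could then be fed to a multi-class classifier such as the gradient-boosted trees of LightGBM, which the paper cites, to predict the student- or question-level classes.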
Notes
More information about each formula is explained at https://bit.ly/2viSaen.
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Kim, W., Rhim, S., Choi, J.Y.J., Han, K. (2020). Modeling Learners’ Programming Skills and Question Levels Through Machine Learning. In: Stephanidis, C., Antona, M., Ntoa, S. (eds) HCI International 2020 – Late Breaking Posters. HCII 2020. Communications in Computer and Information Science, vol 1294. Springer, Cham. https://doi.org/10.1007/978-3-030-60703-6_36
Print ISBN: 978-3-030-60702-9
Online ISBN: 978-3-030-60703-6