Abstract
Despite the importance of introductory programming courses, problems with students' academic performance are common. In such courses, it is easy to find unmotivated students who struggle with basic programming concepts. Monitoring each student individually is not trivial: class sizes are large, and doing so would require examining many characteristics of every code submission for practical activities. Even with the help of teaching assistants (TAs), the teacher cannot review submissions quickly, since this activity demands an enormous amount of time. Yet fast feedback is essential for learning any concept. In this research, we investigate an adaptive approach that clusters code submissions in order to minimize the evaluation effort. The results range from reasonable to perfect concordance between the semiautomatic evaluations obtained with the clustering and the expert evaluations.
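The paper's actual pipeline is not reproduced in this preview, but the two ideas in the abstract — grouping similar submissions so the grader reviews one representative per cluster, and measuring concordance between cluster-propagated grades and expert grades — can be sketched in stdlib Python. The feature choice (lines of code, branch count), the use of k-means, and the linear-weighted Cohen's kappa are illustrative assumptions, not the authors' reported method:

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain k-means over numeric feature vectors (stdlib only)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        # Recompute each center as the mean of its cluster (keep old center
        # if a cluster went empty).
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl
                   else centers[i]
                   for i, cl in enumerate(clusters)]
    return centers, clusters

def weighted_kappa(rater_a, rater_b, categories):
    """Cohen's weighted kappa with linear weights, for ordinal grades."""
    n, k = len(rater_a), len(categories)
    idx = {c: i for i, c in enumerate(categories)}
    obs = [[0.0] * k for _ in range(k)]
    for x, y in zip(rater_a, rater_b):
        obs[idx[x]][idx[y]] += 1 / n
    row = [sum(r) for r in obs]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Linear disagreement weights: 0 on the diagonal, 1 at maximum distance.
    w = [[abs(i - j) / (k - 1) for j in range(k)] for i in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * row[i] * col[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp

# Hypothetical per-submission features: (lines of code, branch count).
feats = [(12, 2), (13, 2), (40, 9), (42, 8), (11, 1), (41, 9)]
centers, clusters = kmeans(feats, k=2)
# Similar submissions land in the same cluster; the grader reviews one
# representative per cluster instead of every individual submission.

# Concordance between expert grades and cluster-propagated grades (0-2 scale).
expert = [1, 1, 2, 2, 1, 2]
semiautomatic = [1, 1, 2, 1, 1, 2]
print(round(weighted_kappa(expert, semiautomatic, [0, 1, 2]), 3))  # → 0.667
```

Weighted kappa (rather than plain accuracy) is the natural choice here because grades are ordinal: a semiautomatic grade one step away from the expert's should be penalized less than one two steps away.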
© 2018 Springer International Publishing AG, part of Springer Nature
Cite this paper
Barbosa, A.d.A., Costa, E.d.B., Brito, P.H. (2018). Adaptive Clustering of Codes for Assessment in Introductory Programming Courses. In: Nkambou, R., Azevedo, R., Vassileva, J. (eds.) Intelligent Tutoring Systems. ITS 2018. Lecture Notes in Computer Science, vol. 10858. Springer, Cham. https://doi.org/10.1007/978-3-319-91464-0_2
Print ISBN: 978-3-319-91463-3
Online ISBN: 978-3-319-91464-0