Abstract
Automatic assessment of programming tasks in MOOCs (Massive Open Online Courses) is essential due to the large number of submissions. However, it often limits the scope of assignments, since task requirements must be strict for solutions to be automatically gradable, leaving little room for creativity. To alleviate this problem, we introduce a system capable of assessing the graphical output of a solution program using image recognition. The idea is applied to introductory computer graphics programming tasks whose solutions are programs that produce an image of a given object on the screen. The image produced by the solution program is analysed using image recognition, which yields a probability that the given object appears in the image, and the solution is accepted or rejected based on this score. The system was tested in a MOOC on 2,272 solution submissions; the results contained 4.6% false negatives and 0.5% false positives. Compared to manual grading, the method introduced in this paper saved approximately one minute of instructors' time per submission. A participant survey revealed that 82.1% of respondents perceived the system to be functioning well or very well, with an average rating of 4.4 out of 5.
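The grading flow described above can be summarised as a simple thresholding decision on the recogniser's output. The sketch below is illustrative only: the function name grade_submission, the 0.5 acceptance threshold, and the example concept scores are assumptions for illustration, not values taken from the paper.

```python
# Minimal sketch of the accept/reject decision described in the abstract.
# `recognition_scores` stands in for the per-concept probabilities returned
# by an image-recognition service applied to a screenshot of the
# submission's graphical output. The 0.5 threshold is an assumed value.

def grade_submission(recognition_scores: dict[str, float],
                     expected_object: str,
                     threshold: float = 0.5) -> bool:
    """Accept the submission if the expected object appears in the
    rendered image with at least `threshold` probability."""
    return recognition_scores.get(expected_object, 0.0) >= threshold


if __name__ == "__main__":
    # Hypothetical recogniser output for a "draw a house" task.
    scores = {"house": 0.93, "building": 0.71, "tree": 0.12}
    print(grade_submission(scores, expected_object="house"))  # True
```

Submissions whose output is rejected by such a check could then be routed to an instructor for manual review, which is where the reported false-negative rate matters.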
Copyright information
© 2017 Springer International Publishing AG
About this paper
Cite this paper
Muuli, E. et al. (2017). Automatic Assessment of Programming Assignments Using Image Recognition. In: Lavoué, É., Drachsler, H., Verbert, K., Broisin, J., Pérez-Sanagustín, M. (eds.) Data Driven Approaches in Digital Education. EC-TEL 2017. Lecture Notes in Computer Science, vol. 10474. Springer, Cham. https://doi.org/10.1007/978-3-319-66610-5_12
DOI: https://doi.org/10.1007/978-3-319-66610-5_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-66609-9
Online ISBN: 978-3-319-66610-5