Automated Structural Evaluation of Block-based Coding Assignments

ABSTRACT
As computer science is integrated into a wider variety of fields, block-based programming languages like Snap!, which assemble code from visual blocks rather than text syntax, are increasingly used to teach computational thinking (CT) to students from diverse backgrounds. Automated evaluators (autograders) for programming assignments usually focus on runtime efficiency and output accuracy, but effective evaluation of a student's CT skills requires assessing coding best practices such as decomposition, abstraction, and algorithm design. While autograders are commonplace for text-based languages like Python, few exist for block-based environments; we present a machine learning approach to assess how effectively block-based code demonstrates understanding of CT fundamentals. Our dataset consists of Snap! programs written by students new to coding and scored by instructors against a CT rubric. We explore how best to transform these programs into low-dimensional features that allow encapsulation and repetition patterns to emerge. Our experiments compare a suite of clustering models and similarity metrics by measuring how closely the automated feedback correlates with the course staff's manual evaluation. Lastly, we demonstrate the practical application of the autograder in a classroom setting and discuss its scalability and feasibility in other domains of CS education.
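To make the feature-transformation idea concrete, the following is a minimal illustrative sketch, not the authors' implementation: it maps a student program (represented here as a flat list of block opcodes, with the Snap! XML extraction step omitted) to a normalized block-frequency vector and scores it against a reference solution with cosine similarity. All block names and submissions below are invented for illustration.

```python
# Sketch: block-frequency features for Snap!-style programs plus a
# cosine-similarity metric. Hypothetical data; extraction from Snap!
# project XML is assumed to have already produced the opcode lists.
from collections import Counter
import math

def block_features(blocks, vocab):
    """Map a list of block opcodes to a normalized frequency vector."""
    counts = Counter(blocks)
    total = sum(counts.values()) or 1
    return [counts[b] / total for b in vocab]

def cosine(u, v):
    """Cosine similarity between two feature vectors (0.0 if either is zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical submissions: the reference uses repetition and a custom
# (abstracted) block; student_b duplicates blocks instead of looping.
VOCAB = ["repeat", "move", "turn", "custom_block", "if"]
reference = ["repeat", "custom_block", "move", "turn"]
student_a = ["repeat", "custom_block", "move", "turn", "if"]
student_b = ["move", "move", "move", "turn", "move", "turn"]

ref_vec = block_features(reference, VOCAB)
print(round(cosine(block_features(student_a, VOCAB), ref_vec), 2))  # higher
print(round(cosine(block_features(student_b, VOCAB), ref_vec), 2))  # lower
```

In a full pipeline, vectors like these would feed the clustering models, and cluster assignments would be compared against instructor rubric scores.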