ABSTRACT
In a massive open online course (MOOC), a single programming or digital hardware design exercise may yield thousands of student solutions that vary in many ways, some superficial and some fundamental. Understanding large-scale variation in student solutions is a hard but important problem. For teachers, this variation can be a source of pedagogically valuable examples and expose corner cases not yet covered by autograding. For students, the variation in a large class means that other students may have struggled along a similar solution path, hit the same bugs, and can offer hints based on that earned expertise. We developed three systems to take advantage of the solution variation in large classes, using program analysis and learnersourcing. All three systems have been evaluated using data or live deployments in on-campus or edX courses with thousands of students.
- Elena L Glassman, Lyla Fischer, Jeremy Scott, and Robert C Miller. 2015a. Foobaz: Variable Name Feedback for Student Code at Scale. In Proceedings of the 28th Annual ACM Symposium on User Interface Software and Technology (UIST '15). ACM, New York, NY, USA.
- Elena L Glassman, Aaron Lin, Carrie J Cai, and Robert C Miller. 2015b. Learnersourcing Personalized Hints. In Proceedings of the 19th ACM Conference on Computer Supported Cooperative Work and Social Computing (CSCW '15). ACM, New York, NY, USA.
- Elena L Glassman, Jeremy Scott, Rishabh Singh, Philip J Guo, and Robert C Miller. 2015c. OverCode: Visualizing Variation in Student Solutions to Programming Problems at Scale. ACM Transactions on Computer-Human Interaction (TOCHI) 22, 2 (2015), 7.