DOI: 10.1145/3304221.3319759
Research Article

The Impact of Adding Textual Explanations to Next-step Hints in a Novice Programming Environment

Published: 02 July 2019

Abstract

Automated hints, a powerful feature of many programming environments, have been shown to improve students' performance and learning. New methods for generating these hints use historical data, allowing them to scale easily to new classrooms and contexts. These scalable methods often generate next-step code hints that suggest a single edit for the student to make to their code. However, while these code hints tell the student what to do, they do not explain why, which can make them hard to interpret and can decrease students' trust in their helpfulness. In this work, we augmented code hints with adaptive textual explanations in a block-based, novice programming environment. We evaluated their impact in two controlled studies with novice learners to investigate how our results generalize to different populations. We measured the impact of textual explanations on novices' programming performance. We also used quantitative analysis of log data, self-explanation prompts, and frequent feedback surveys to evaluate novices' understanding and perception of the hints throughout the learning process. Our results showed that novices perceived hints with explanations as significantly more relevant and interpretable than those without, and were better able to connect these hints to their code and the assignment. However, we found little difference in novices' performance. Our results suggest that explanations have the potential to make code hints more useful, but it is unclear whether this translates into better overall performance and learning.
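
To make the hint design described above concrete, the sketch below pairs a single-edit code hint with a templated textual explanation. This is a minimal illustration, not the paper's implementation: real data-driven hint generators mine many historical student solutions, whereas this toy diffs the student's script against one hand-picked solution, and the explanation templates are hypothetical.

    # Minimal sketch: a next-step code hint plus a textual explanation.
    # NOT the paper's algorithm: the edit selection (first block-level
    # difference against one known solution) and the explanation templates
    # are illustrative assumptions only.
    import difflib

    def next_step_hint(student_blocks, solution_blocks):
        """Return (code_hint, explanation) for the first divergence, or None."""
        matcher = difflib.SequenceMatcher(None, student_blocks, solution_blocks)
        for tag, i1, i2, j1, j2 in matcher.get_opcodes():
            if tag == "equal":
                continue
            if tag == "insert":  # the student is missing a block
                block = solution_blocks[j1]
                return (f"Add the block {block!r}",
                        f"Your script does not yet use {block!r}, which the "
                        f"assignment needs at this point.")
            if tag == "delete":  # the student has an extra block
                block = student_blocks[i1]
                return (f"Remove the block {block!r}",
                        f"{block!r} does not appear in the solution; it may be "
                        f"left over from an earlier attempt.")
            # tag == "replace": the student used a different block here
            return (f"Change {student_blocks[i1]!r} to {solution_blocks[j1]!r}",
                    f"{student_blocks[i1]!r} is close, but this step of the "
                    f"assignment calls for {solution_blocks[j1]!r}.")
        return None  # the script already matches the solution

    # Example with a Snap!-style script that is missing its loop:
    student = ["when green flag clicked", "move 10 steps"]
    solution = ["when green flag clicked", "repeat 10", "move 10 steps"]
    print(next_step_hint(student, solution))

On the example input, the code hint suggests inserting the missing 'repeat 10' block and the explanation says why, mirroring the hint-plus-explanation pairing the abstract describes.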

Published In

ITiCSE '19: Proceedings of the 2019 ACM Conference on Innovation and Technology in Computer Science Education
July 2019, 583 pages
ISBN: 9781450368957
DOI: 10.1145/3304221

Publisher

Association for Computing Machinery, New York, NY, United States

Author Tags

  1. computer science education
  2. intelligent tutoring systems
  3. next step hints
  4. textual hints

Acceptance Rates

Overall acceptance rate: 552 of 1,613 submissions (34%)
