DOI: 10.1145/3291279.3339420

An Evaluation of the Impact of Automated Programming Hints on Performance and Learning

Published: 30 July 2019

Abstract

A growing body of work has explored how to automatically generate hints for novice programmers, and many programming environments now employ these hints. However, few studies have investigated the efficacy of automated programming hints for improving performance and learning, how and when novices find these hints beneficial, and the tradeoffs between different hint types. In this work, we explored the efficacy of next-step code hints with two complementary features: textual explanations and self-explanation prompts. We conducted two studies in which novices completed two programming tasks in a block-based programming environment with automated hints. In Study 1, ten undergraduate students completed the tasks with a variety of hint types, and we interviewed them to understand their perceptions of the affordances of each hint type. In Study 2, we recruited a convenience sample of participants without programming experience from Amazon Mechanical Turk and conducted a randomized experiment comparing the effects of hint type on learners' immediate performance and on their performance on a subsequent task without hints. We found that code hints with textual explanations significantly improved immediate programming performance. However, these hints improved performance on a subsequent post-test task with similar objectives only when combined with self-explanation prompts. These results provide design insights into how automatically generated code hints can be improved with textual explanations and prompts to self-explain, and they provide evidence about when and how such hints can improve programming performance and learning.

Published In

ICER '19: Proceedings of the 2019 ACM Conference on International Computing Education Research
July 2019
375 pages
ISBN: 9781450361859
DOI: 10.1145/3291279

Publisher: Association for Computing Machinery, New York, NY, United States

Author Tags

  1. block-based programming
  2. computer science education
  3. next-step hints
  4. self-explanation

Qualifiers

  • Research-article

Conference

ICER '19

Acceptance Rates

ICER '19 paper acceptance rate: 28 of 137 submissions (20%)
Overall acceptance rate: 189 of 803 submissions (24%)

Cited By

  • (2024) Promoting Affective Components of Children by a Maker Course on Robotics. MedienPädagogik: Zeitschrift für Theorie und Praxis der Medienbildung 56, 429–456. 10.21240/mpaed/56/2024.03.15.X
  • (2024) New Supportive Features for the Online Coding Tutorial Systems: The Learners and Educators Perspectives. Proceedings of the 16th International Conference on Education Technology and Computers, 226–230. 10.1145/3702163.3702418
  • (2024) Exploring Novices' Problem-Solving Strategies in Computing and Math Domains. Proceedings of the 24th Koli Calling International Conference on Computing Education Research, 1–8. 10.1145/3699538.3699557
  • (2024) Guiding Students in Using LLMs in Supported Learning Environments: Effects on Interaction Dynamics, Learner Performance, Confidence, and Trust. Proceedings of the ACM on Human-Computer Interaction 8, CSCW2, 1–30. 10.1145/3687038
  • (2024) Navigating Compiler Errors with AI Assistance - A Study of GPT Hints in an Introductory Programming Course. Proceedings of the 2024 Conference on Innovation and Technology in Computer Science Education V. 1, 94–100. 10.1145/3649217.3653608
  • (2024) Hint Cards for Common Ozobot Robot Issues: Supporting Feedback for Learning Programming in Elementary Schools. Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1, 408–414. 10.1145/3626252.3630868
  • (2024) Python OCTS: Design, Implementation, and Evaluation of an Online Coding Tutorial System Prototype. 2024 IEEE World Engineering Education Conference (EDUNINE), 1–6. 10.1109/EDUNINE60625.2024.10500548
  • (2024) Integrating Generative AI in Data Science Programming: Group Differences in Hint Requests. Computers in Human Behavior: Artificial Humans, 100089. 10.1016/j.chbah.2024.100089
  • (2024) Alloy Repair Hint Generation Based on Historical Data. Formal Methods, 104–121. 10.1007/978-3-031-71177-0_8
  • (2023) Do Current Online Coding Tutorial Systems Address Novice Programmer Difficulties? Proceedings of the 15th International Conference on Education Technology and Computers, 242–248. 10.1145/3629296.3629333
