ABSTRACT
Automated feedback on programming exercises has traditionally focused on the correctness of submissions, inferred, for example, from a set of unit tests. Recent advances suggest relying on large language models to construct richer feedback. In this poster, we present an approach for automatically constructing formative feedback, written in natural language, that builds on two streams of research: (1) automatic program repair and (2) automatic generation of program descriptions. By combining these two streams, we propose a new approach for constructing written formative feedback on programming exercise submissions.
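The combination of the two streams can be illustrated with a minimal sketch: an automatic program repair step yields a corrected version of the student's submission, and the difference between the two versions is then verbalized as feedback. The sketch below is a hypothetical, simplified illustration (the repair and the description steps would in practice be produced by repair tools or large language models, not by the templated diff used here); the function `feedback_from_repair` and its hint templates are assumptions for demonstration only.

```python
import difflib

def feedback_from_repair(student_code: str, repaired_code: str) -> list[str]:
    """Turn the diff between a submission and its repaired version
    into templated natural-language hints.

    In the proposed pipeline, `repaired_code` would come from an
    automatic program repair step, and the hint text from a program
    description model; both are stubbed with simple templates here.
    """
    student_lines = student_code.splitlines()
    repaired_lines = repaired_code.splitlines()
    matcher = difflib.SequenceMatcher(None, student_lines, repaired_lines)

    hints = []
    for op, i1, i2, j1, j2 in matcher.get_opcodes():
        if op == "replace":
            # A span of the submission differs from the repaired version.
            hints.append(
                f"Look again at line {i1 + 1}: "
                f"'{student_lines[i1].strip()}' may need to change."
            )
        elif op == "delete":
            # The repair removed these lines entirely.
            hints.append(f"Line {i1 + 1} may be unnecessary.")
        elif op == "insert":
            # The repair added lines the submission lacks.
            hints.append(f"Something seems to be missing after line {i1}.")
    return hints

# Example: a submission with the wrong operator.
student = "def add(a, b):\n    return a - b\n"
repaired = "def add(a, b):\n    return a + b\n"
for hint in feedback_from_repair(student, repaired):
    print(hint)
```

Deliberately, the hints point at the location of the repair without revealing the fix, keeping the feedback formative rather than corrective.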
Towards Open Natural Language Feedback Generation for Novice Programmers using Large Language Models