Koli Calling '22 · Extended Abstract
DOI: 10.1145/3564721.3565955

Towards Open Natural Language Feedback Generation for Novice Programmers using Large Language Models

Published: 17 November 2022

ABSTRACT

Automated feedback on programming exercises has traditionally focused on the correctness of submitted solutions, inferred, for example, from a set of unit tests. Recent advances in feedback generation have suggested relying on large language models to construct richer feedback. In this poster, we present an approach for automatically constructing formative feedback, written in natural language, that builds on two streams of research: (1) automatic program repair and (2) automatic generation of program descriptions. By combining these two streams, we propose a new approach for constructing written formative feedback on programming exercise submissions.
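The abstract gives no implementation details, so the following is a rough illustration only of how the two streams might be chained: a repaired version of the submission is diffed against the original, and a language model turns the diff into prose feedback. Here `describe` stands in for an LLM call, and `fake_describe`, `generate_feedback`, and the sample programs are all hypothetical names invented for this sketch, not part of the authors' system.

```python
import difflib

def generate_feedback(submission: str, repaired: str, describe) -> str:
    """Compose natural-language feedback from a repaired version of a
    student's submission plus a model-generated description of the change."""
    # Locate the lines that the (assumed) automatic repair step changed.
    diff = [
        line
        for line in difflib.unified_diff(
            submission.splitlines(), repaired.splitlines(), lineterm=""
        )
        if line.startswith(("+", "-")) and not line.startswith(("+++", "---"))
    ]
    # Ask the (stubbed) language model to describe the repair in prose.
    explanation = describe(diff)
    return f"Your submission is close. {explanation}"

# Stand-in for a large language model call; a real system would prompt a
# model with the diff and ask for a student-friendly explanation.
def fake_describe(diff_lines):
    return "Check the comparison operator in your loop condition."

buggy = "for i in range(10):\n    if i > 5:\n        print(i)"
fixed = "for i in range(10):\n    if i >= 5:\n        print(i)"
print(generate_feedback(buggy, fixed, fake_describe))
```

The key design question such a pipeline raises, and which the poster targets, is how to phrase the model's description as formative feedback rather than simply revealing the corrected code.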


Published in

Koli Calling '22: Proceedings of the 22nd Koli Calling International Conference on Computing Education Research
November 2022, 282 pages
ISBN: 9781450396165
DOI: 10.1145/3564721

Copyright © 2022 Owner/Author

Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the Owner/Author.

Publisher: Association for Computing Machinery, New York, NY, United States

Qualifiers: extended abstract; research; refereed limited

Acceptance rate: 80 of 182 submissions, 44%
