DOI: 10.1145/3699538.3699588

Towards the Integration of Large Language Models and Automatic Assessment Tools: Enhancing Student Support in Programming Assignments

Published: 13 November 2024

Abstract

The rise of Large Language Models (LLMs) has sparked discussion in Computer Science Education (CSE) due to their ability to generate code from text prompts. Students may come to rely on these tools, neglecting core skills such as computational thinking and program design, so it is crucial to integrate them responsibly into computer science courses.
To address this, we integrated an open-source Automatic Assessment Tool with GPT, enabling students to receive LLM assistance on their programming assignments. This tool can be adopted and improved by educators, promoting more responsible integration of LLMs in CSE.
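
The abstract does not describe the integration mechanics, so the sketch below is a hypothetical illustration rather than the authors' implementation: it assumes a Python back end and the official OpenAI Python SDK, and shows how an automatic assessment tool might forward a failed submission together with its test report to GPT and return hint-style feedback. The function names, prompt wording, and model choice are all assumptions.

import os

from openai import OpenAI  # assumes the official openai package (v1+) is installed

# Hypothetical sketch, not the authors' implementation: the assessment tool
# forwards a failed submission and its test report to an LLM and returns
# hint-style feedback while withholding complete solutions.

client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

SYSTEM_PROMPT = (
    "You are a teaching assistant for a programming course. Explain why the "
    "student's code fails the automated tests and give hints, but never "
    "provide a complete corrected solution."
)


def build_prompt(student_code: str, test_report: str) -> str:
    """Combine the submission and the assessment report into a single prompt."""
    return (
        "Student submission:\n" + student_code + "\n\n"
        "Automatic assessment report:\n" + test_report + "\n\n"
        "Explain the likely cause of the failures and suggest next steps."
    )


def request_feedback(student_code: str, test_report: str) -> str:
    """Ask the LLM for pedagogical feedback on a failed submission."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": build_prompt(student_code, test_report)},
        ],
        temperature=0.2,
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Toy example: a Java submission with an off-by-sign bug and its test report.
    code = "public int add(int a, int b) { return a - b; }"
    report = "FAIL AddTest.testAdd: expected 5 but was -1"
    print(request_feedback(code, report))

In a design along these lines, the pedagogical guardrail lives in the system prompt (and any post-processing the tool applies), so students receive guidance on their own code rather than ready-made solutions.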

    Published In

    Koli Calling '24: Proceedings of the 24th Koli Calling International Conference on Computing Education Research
    November 2024
    382 pages
    ISBN:9798400710384
    DOI:10.1145/3699538
    Permission to make digital or hard copies of part or all of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 13 November 2024

    Author Tags

    1. large language models
    2. automatic assessment tools
    3. feedback

    Qualifiers

    • Poster

    Conference

    Koli Calling '24

    Acceptance Rates

    Overall Acceptance Rate 80 of 182 submissions, 44%
