DOI: 10.1145/3051457.3051466

Gradescope: A Fast, Flexible, and Fair System for Scalable Assessment of Handwritten Work

Published: 12 April 2017

Abstract

We present a system for online assessment of handwritten homework assignments and exams. First, either instructors or students scan and upload handwritten work. Instructors then grade the work and distribute the results using a web-based platform. Our system optimizes for three key dimensions: speed, consistency, and flexibility. The primary innovation enabling improvements in all three dimensions is a dynamically evolving rubric for each question on an assessment. We also describe how the system minimizes the overhead incurred in the digitization process. The system has been in use for four years, and instructors at 200 institutions have graded over 10 million pages of student work with it. We present user-reported data and feedback on time saved grading, grader enjoyment, and the student experience. Two-thirds of respondents report saving 30% or more time relative to their traditional workflow. We also find that the time a grader spends on an individual response to a question decays rapidly with the number of responses to that question they have already graded.
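One plausible realization of the dynamically evolving rubric is sketched below in Python. This is an illustration only, not the authors' implementation, and the names RubricItem and Response are hypothetical: if each graded response stores references to shared rubric items rather than a raw point total, then an item that a grader adds or re-values partway through grading propagates retroactively to every response it has been applied to, which is what keeps scoring consistent across the pile.

```python
# Illustrative sketch of a dynamically evolving rubric (hypothetical names,
# not the paper's code). Responses reference shared rubric items, so editing
# an item's point value re-scores every response already graded with it.
from dataclasses import dataclass, field


@dataclass
class RubricItem:
    description: str
    points: float  # deduction (negative) or credit (positive)


@dataclass
class Response:
    student: str
    applied: list = field(default_factory=list)  # references to RubricItem objects

    def score(self, base: float) -> float:
        # The score is recomputed from item references, never stored directly.
        return base + sum(item.points for item in self.applied)


# A grader notices a common mistake partway through grading
# and creates a new rubric item for it on the fly.
missing_units = RubricItem("No units on final answer", -1.0)

alice = Response("alice", applied=[missing_units])
bob = Response("bob", applied=[missing_units])
print(alice.score(base=10.0))  # 9.0

# Later the grader decides the deduction should be harsher.
# Changing the shared item retroactively updates all graded responses.
missing_units.points = -2.0
print(alice.score(base=10.0))  # 8.0
print(bob.score(base=10.0))    # 8.0
```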

Published In

L@S '17: Proceedings of the Fourth (2017) ACM Conference on Learning @ Scale
April 2017
352 pages
ISBN: 9781450344500
DOI: 10.1145/3051457

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. computer-assisted instruction
  2. education
  3. learning assessment
  4. rubric-based grading
  5. scaling large courses

Qualifiers

  • Research-article

Conference

L@S 2017: Fourth (2017) ACM Conference on Learning @ Scale
April 20-21, 2017
Cambridge, Massachusetts, USA

Acceptance Rates

L@S '17 Paper Acceptance Rate: 14 of 105 submissions (13%)
Overall Acceptance Rate: 117 of 440 submissions (27%)
