DOI: 10.1145/2538862.2538900

CrowdGrader: a tool for crowdsourcing the evaluation of homework assignments

Published: 05 March 2014

ABSTRACT

CrowdGrader is a system that lets students submit and collaboratively review and grade homework. We describe the techniques and ideas used in CrowdGrader, and report on the experience of using it in disciplines ranging from Computer Science to Economics, Writing, and Technology. In CrowdGrader, students receive an overall crowd-grade that reflects both the quality of their homework and the quality of their work as reviewers. This creates an incentive for students to provide accurate grades and helpful reviews of other students' work. Instructors can use the crowd-grades as final grades, or fine-tune them as they see fit. Our results from seven classes show that students actively participate in the grading and write reviews that are generally helpful to the submissions' authors. The results also show that the grades computed by CrowdGrader are sufficiently precise to be used as the homework component of class grades. Students report that the main benefits of using CrowdGrader are the quality of the reviews they receive and the ability to learn from reviewing their peers' work. Instructors can leverage peer learning in their classes and easily handle homework evaluation in large classes.
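The abstract describes the grading mechanism only at a high level: each student's overall crowd-grade combines the quality of their submission with the quality of their reviewing, and the full paper gives the actual aggregation algorithm. The sketch below is only a minimal illustration of that coupling; the median aggregation of peer grades, the reviewer-accuracy measure, and the review_weight parameter are all hypothetical choices made for clarity, not CrowdGrader's real formulas.

    # Minimal illustrative sketch -- NOT CrowdGrader's actual algorithm. The
    # median aggregation, the accuracy measure, and `review_weight` are assumptions.
    from statistics import median

    def crowd_grades(peer_grades, review_weight=0.25, max_grade=10.0):
        """peer_grades maps each author to {reviewer: grade on a 0..max_grade scale}."""
        # Submission grade: aggregate each submission's peer grades
        # (median as a simple, outlier-robust choice).
        submission_grade = {a: median(g.values()) for a, g in peer_grades.items()}

        # Reviewer grade: reward reviewers whose grades were close to the aggregate.
        deviations = {}
        for author, grades in peer_grades.items():
            for reviewer, grade in grades.items():
                deviations.setdefault(reviewer, []).append(abs(grade - submission_grade[author]))
        reviewer_grade = {
            r: max_grade * (1.0 - min(1.0, sum(d) / (len(d) * max_grade)))
            for r, d in deviations.items()
        }

        # Overall crowd-grade: weighted mix of submission quality and reviewing quality.
        students = set(submission_grade) | set(reviewer_grade)
        return {
            s: (1.0 - review_weight) * submission_grade.get(s, 0.0)
            + review_weight * reviewer_grade.get(s, 0.0)
            for s in students
        }

In this toy version, a larger review_weight shifts more of the overall grade onto reviewing accuracy, which is the coupling the abstract credits with incentivizing accurate grades and helpful reviews.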

    Published in

      SIGCSE '14: Proceedings of the 45th ACM technical symposium on Computer science education
      March 2014
      800 pages
      ISBN: 9781450326056
      DOI: 10.1145/2538862

      Copyright © 2014 ACM

      Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

      Publisher

      Association for Computing Machinery

      New York, NY, United States

      Publication History

      • Published: 5 March 2014

      Qualifiers

      • research-article

      Acceptance Rates

      • SIGCSE '14 Paper Acceptance Rate: 108 of 274 submissions, 39%
      • Overall Acceptance Rate: 1,595 of 4,542 submissions, 35%
