A Survey of Methods for Improving Review Quality

  • Conference paper
  • Published in: New Horizons in Web Based Learning (ICWL 2014)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 8699)

Included in the conference series: International Conference on Web-Based Learning (ICWL)

Abstract

For peer review to be successful, students need to submit high-quality reviews of each other’s work. This requires a certain amount of training and guidance by the review system. We consider four methods for improving review quality: calibration, reputation systems, meta-reviewing, and automated meta-reviewing. Calibration is training to help a reviewer match the scores given by the instructor. Reputation systems determine how well each reviewer’s scores track scores assigned by other reviewers. Meta-reviewing means evaluating the quality of a review; this can be done either by a human or by software. Combining these strategies effectively is a topic for future research.
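To make the reputation-system idea concrete, here is a minimal sketch (not from the paper; the data and function names are invented for illustration) of one way such a system might weight reviewers: each artifact's consensus score is the weight-averaged score of its reviewers, a reviewer's weight shrinks with their average distance from that consensus, and the two are recomputed until the weights settle. It assumes purely numeric rubric scores.

# Illustrative agreement-based reputation sketch (Python); not the
# algorithm of any particular system surveyed in the paper.
from collections import defaultdict

# Hypothetical data: scores[(reviewer, artifact)] = numeric rubric score.
scores = {
    ("alice", "essay1"): 8, ("alice", "essay2"): 6,
    ("bob",   "essay1"): 7, ("bob",   "essay2"): 6,
    ("carol", "essay1"): 3, ("carol", "essay2"): 10,  # erratic reviewer
}

def reputations(scores, rounds=10):
    reviewers = {r for r, _ in scores}
    weight = {r: 1.0 for r in reviewers}
    for _ in range(rounds):
        # Weighted consensus score for each artifact.
        total, mass = defaultdict(float), defaultdict(float)
        for (r, a), s in scores.items():
            total[a] += weight[r] * s
            mass[a] += weight[r]
        consensus = {a: total[a] / mass[a] for a in total}
        # Re-weight each reviewer by inverse mean deviation from consensus.
        for r in reviewers:
            devs = [abs(s - consensus[a])
                    for (rv, a), s in scores.items() if rv == r]
            weight[r] = 1.0 / (1.0 + sum(devs) / len(devs))
        # Normalize so weights average to 1.
        norm = sum(weight.values()) / len(weight)
        weight = {r: w / norm for r, w in weight.items()}
    return weight

print(reputations(scores))  # carol's erratic scores earn her a lower weight

Calibration could be scored in the same fashion by substituting the instructor's scores for the consensus; how a reviewer's calibration error and reputation weight might best be combined is, as the abstract notes, an open question.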


Notes

  1. In principle, one could also do this with high scores associated with rubric criteria, but high scores tend to be much more common than low scores, and a high score may just be indicative of an inexperienced reviewer’s inability to find anything wrong with the work on a particular dimension.


Author information

Correspondence to Edward F. Gehringer.


Copyright information

© 2014 Springer International Publishing Switzerland

About this paper

Cite this paper

Gehringer, E.F. (2014). A Survey of Methods for Improving Review Quality. In: Cao, Y., Väljataga, T., Tang, J., Leung, H., Laanpere, M. (eds) New Horizons in Web Based Learning. ICWL 2014. Lecture Notes in Computer Science, vol. 8699. Springer, Cham. https://doi.org/10.1007/978-3-319-13296-9_10

  • DOI: https://doi.org/10.1007/978-3-319-13296-9_10

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-13295-2

  • Online ISBN: 978-3-319-13296-9

  • eBook Packages: Computer Science, Computer Science (R0)
