Give and Take: Adaptive Balanced Allocation for Peer Assessments

  • Conference paper

Computing and Combinatorics (COCOON 2019)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 11653)

Abstract

Peer assessments, in which people review the works of their peers and have their own works reviewed in return, are useful for grading homework, reviewing academic papers, and similar tasks. In conventional peer assessment systems, works are allocated to reviewers before the assessment begins; consequently, if people drop out (abandoning their reviews) during the assessment period, an imbalance arises between the number of works a person reviews and the number of reviews the person's own work receives. As the total imbalance grows, people who diligently complete their reviews may receive too few reviews in return and be discouraged from participating in future peer assessments. In this study, we therefore adopt a new adaptive allocation approach in which works are allocated to people only when they request them, and we propose an allocation algorithm that reduces the total imbalance. To show the effectiveness of the proposed algorithm, we provide an upper bound on the total imbalance that it yields. In addition, we experimentally compare the proposed adaptive allocation with existing nonadaptive allocation methods.
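
The algorithm itself appears in the paper body, which is not included in this preview, so the following Python sketch is only a minimal illustration of the adaptive setting the abstract describes: each person asks for a work when ready to review, and the allocator picks a work so as to shrink the gap between reviews given and reviews received. The class and method names (AdaptiveAllocator, request, complete, total_imbalance), the greedy deficit rule, and the sum-of-absolute-differences imbalance measure are all assumptions made for illustration, not the authors' algorithm.

    # Hypothetical sketch of an adaptive "give and take" allocator; an
    # illustration of the setting in the abstract, not the paper's algorithm.
    class AdaptiveAllocator:
        def __init__(self, participants):
            self.given = {p: 0 for p in participants}       # reviews p has completed
            self.received = {p: 0 for p in participants}    # reviews p's own work got
            self.reviewed_by = {p: set() for p in participants}  # authors assigned to p

        def request(self, reviewer):
            """Adaptive step: called only when `reviewer` asks for a work."""
            candidates = [a for a in self.given
                          if a != reviewer and a not in self.reviewed_by[reviewer]]
            if not candidates:
                return None
            # Greedy rule (an assumption): serve the author who has given the
            # most reviews relative to the reviews their own work has received.
            author = max(candidates, key=lambda a: self.given[a] - self.received[a])
            self.reviewed_by[reviewer].add(author)
            return author

        def complete(self, reviewer, author):
            """Count a review only on completion, so dropouts never inflate counts."""
            self.given[reviewer] += 1
            self.received[author] += 1

        def total_imbalance(self):
            # One natural measure (also an assumption): sum of |given - received|.
            return sum(abs(self.given[p] - self.received[p]) for p in self.given)

For example, a single request-then-complete cycle leaves a total imbalance of 2 under this measure (the reviewer has given one unreciprocated review; one author has received one):

    allocator = AdaptiveAllocator(["alice", "bob", "carol"])
    work_author = allocator.request("alice")    # alice asks for a work to review
    if work_author is not None:
        allocator.complete("alice", work_author)
    print(allocator.total_imbalance())          # prints 2

Because reviews are counted only when completed, a reviewer who drops out after requesting a work never distorts the counts that drive later allocations.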

Notes

  1. https://dataverse.harvard.edu/dataset.xhtml?persistentId=doi:10.7910/DVN/XB2TLU.

  2. https://www.coursera.org.

  3. https://www.edx.org.

Author information

Corresponding author

Correspondence to Hideaki Ohashi.

Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Ohashi, H., Asano, Y., Shimizu, T., Yoshikawa, M. (2019). Give and Take: Adaptive Balanced Allocation for Peer Assessments. In: Du, D.-Z., Duan, Z., Tian, C. (eds.) Computing and Combinatorics. COCOON 2019. Lecture Notes in Computer Science, vol. 11653. Springer, Cham. https://doi.org/10.1007/978-3-030-26176-4_38

  • DOI: https://doi.org/10.1007/978-3-030-26176-4_38

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-26175-7

  • Online ISBN: 978-3-030-26176-4

  • eBook Packages: Computer Science, Computer Science (R0)
