Abstract
In online learning environments, learners' individual characteristics necessarily influence the reliability and credibility of peer assessment. This paper focuses on the stage of allocating students' submissions within the online peer assessment process. First, it gives an overview of the main applications of this process in online assessment tools and MOOCs, with particular attention to the methodologies used to assign submissions to learners for evaluation. Second, it proposes a model of the assessor based on the individual personal characteristics that shape his or her assessment profile, a profile that plays a key role in the success of the peer assessment/feedback experience. We conclude with a brief discussion of what the assessor model can offer within an approach that manages the allocation of submissions while taking the personal characteristics of the learner community into account.
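To make the allocation stage more concrete, the sketch below (in Python) shows one way an assessor profile and a profile-aware allocation routine could be structured. It is a minimal, hypothetical illustration rather than the model proposed in the paper: the profile attributes (competence, past_agreement), the reliability formula, and the reliability-ranked round-robin strategy are assumptions introduced here purely for illustration.

from dataclasses import dataclass
from typing import Dict, List, Tuple


@dataclass
class AssessorProfile:
    """Hypothetical assessor profile: personal characteristics assumed to
    shape how reliably a learner reviews peers' work."""
    learner_id: str
    competence: float       # assumed mastery estimate for the topic, in [0, 1]
    past_agreement: float   # assumed agreement with reference grades, in [0, 1]

    def reliability(self) -> float:
        # Illustrative aggregate only; a real model could weight factors differently.
        return 0.5 * self.competence + 0.5 * self.past_agreement


def allocate_submissions(
    submissions: List[Tuple[str, str]],   # (submission_id, author_id)
    assessors: List[AssessorProfile],
    reviews_per_submission: int = 3,
) -> Dict[str, List[str]]:
    """Round-robin allocation over assessors ranked by reliability, spreading
    the most reliable reviewers across submissions and avoiding self-assessment."""
    ranked = sorted(assessors, key=lambda a: a.reliability(), reverse=True)
    # Never request more reviewers than are eligible (everyone except the author).
    quota = max(0, min(reviews_per_submission, len(assessors) - 1))
    allocation: Dict[str, List[str]] = {sid: [] for sid, _ in submissions}
    cursor = 0
    for sid, author in submissions:
        while len(allocation[sid]) < quota:
            candidate = ranked[cursor % len(ranked)]
            cursor += 1
            if candidate.learner_id == author or candidate.learner_id in allocation[sid]:
                continue  # skip self-assessment and duplicate reviewers
            allocation[sid].append(candidate.learner_id)
    return allocation


if __name__ == "__main__":
    profiles = [
        AssessorProfile("alice", competence=0.9, past_agreement=0.8),
        AssessorProfile("bob", competence=0.6, past_agreement=0.7),
        AssessorProfile("carol", competence=0.4, past_agreement=0.9),
        AssessorProfile("dave", competence=0.7, past_agreement=0.5),
    ]
    submissions = [("s1", "alice"), ("s2", "bob"), ("s3", "carol"), ("s4", "dave")]
    print(allocate_submissions(submissions, profiles, reviews_per_submission=2))

In this sketch, ranking assessors by an aggregate reliability score simply spreads stronger reviewers across submissions; any real allocation policy would instead be driven by the characteristics retained in the assessor model.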
Copyright information
© 2018 Springer International Publishing AG
About this paper
Cite this paper
Abrache, MA., Qazdar, A., Bendou, A., Cherkaoui, C. (2018). The Allocation of Submissions in Online Peer Assessment: What Can an Assessor Model Provide in This Context?. In: Ben Ahmed, M., Boudhir, A. (eds) Innovations in Smart Cities and Applications. SCAMS 2017. Lecture Notes in Networks and Systems, vol 37. Springer, Cham. https://doi.org/10.1007/978-3-319-74500-8_25
DOI: https://doi.org/10.1007/978-3-319-74500-8_25
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-74499-5
Online ISBN: 978-3-319-74500-8
eBook Packages: Engineering, Engineering (R0)