
FOPA-MC: fuzzy multi-criteria group decision making for peer assessment

Published in: Soft Computing

Abstract

Massive Open Online Courses (MOOCs) are gaining popularity, with millions of students enrolled, thousands of courses available and hundreds of learning institutions involved. Because of the high number of students and the relatively small number of tutors, student assessment, especially for complex tasks, is a typical issue of such courses. Peer assessment is therefore becoming increasingly popular as a solution to this problem, and several approaches have been proposed so far to improve the reliability of its outcomes. Among the most promising is fuzzy ordinal peer assessment (FOPA), which adopts models from fuzzy set theory and group decision making. In this paper we propose an extension of FOPA supporting multi-criteria assessment based on rubrics. Students are asked to rank a small number of peer submissions against specified criteria; the provided rankings are then transformed into fuzzy preference relations, expanded to fill in missing values and aggregated to estimate final grades. The results obtained are promising compared to other peer assessment techniques, both in reconstructing the correct ranking and in estimating students' grades.
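The pipeline outlined above (ordinal rankings are converted into fuzzy preference relations, which are then aggregated to score submissions) can be illustrated with a minimal sketch. This is not the paper's exact formulation: the rank-distance conversion formula, the element-wise averaging and the row-mean ("dominance degree") scoring are common choices from the fuzzy preference modeling literature, and all function names are hypothetical.

```python
# Illustrative sketch, NOT the FOPA-MC algorithm itself: one common way
# to turn each assessor's ordinal ranking into a reciprocal fuzzy
# preference relation, average the relations across assessors, and
# score each submission by its mean preference over the others.

def ranking_to_preference(ranking):
    """Map a ranking (best first) to a reciprocal fuzzy preference
    relation p[i][j] in [0, 1]; p[i][j] > 0.5 means item i is
    preferred to item j. Items are indexed in sorted order."""
    n = len(ranking)
    pos = {item: r for r, item in enumerate(ranking)}
    items = sorted(ranking)
    p = [[0.5] * n for _ in range(n)]
    for i, a in enumerate(items):
        for j, b in enumerate(items):
            if a != b:
                # linear preference intensity based on rank distance
                p[i][j] = 0.5 * (1 + (pos[b] - pos[a]) / (n - 1))
    return items, p

def aggregate(relations):
    """Average several preference relations element-wise."""
    n, k = len(relations[0]), len(relations)
    return [[sum(r[i][j] for r in relations) / k for j in range(n)]
            for i in range(n)]

def dominance_scores(p):
    """Score each alternative by the mean of its off-diagonal row,
    i.e. its average preference degree over the other alternatives."""
    n = len(p)
    return [sum(p[i][j] for j in range(n) if j != i) / (n - 1)
            for i in range(n)]

# two assessors rank the same three submissions
items, p1 = ranking_to_preference(["A", "C", "B"])
_, p2 = ranking_to_preference(["A", "B", "C"])
scores = dominance_scores(aggregate([p1, p2]))
print(dict(zip(items, scores)))  # A, ranked first by both, scores highest
```

In the full method the relations are built per rubric criterion, missing pairwise values (each student only ranks a few submissions) are estimated before aggregation, and the aggregation uses fuzzy operators rather than a plain mean; the sketch only conveys the overall data flow.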


Notes

  1. www.peergrading.org

  2. miriadax.net


Acknowledgements

This work is supported by the project “colMOOC: Integrating Conversational Agents and Learning Analytics in MOOCs”, co-funded by the European Commission within the Erasmus+ Knowledge Alliances program (Ref. 588438-EPP-1-2017-1-EL-EPPKA2-KA).

Author information

Correspondence to Pierluigi Ritrovato.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Communicated by Yaroslav D. Sergeyev.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Capuano, N., Caballé, S., Percannella, G. et al. FOPA-MC: fuzzy multi-criteria group decision making for peer assessment. Soft Comput 24, 17679–17692 (2020). https://doi.org/10.1007/s00500-020-05155-5
