Utilising Learnersourcing to Inform Design Loop Adaptivity

  • Conference paper
  • In: Addressing Global Challenges and Quality Education (EC-TEL 2020)

Abstract

Design-loop adaptivity refers to data-driven decisions that inform the design of learning materials to improve learning for student populations within adaptive educational systems (AES). Commonly in AESs, decisions on the quality of learning material are based on students' performance, i.e., whether engaging with the material led to learning gains. This paper investigates an alternative approach to design-loop adaptivity, which utilises students' subjective ratings and comments to infer the quality of the learning material. This approach is in line with the recent shift towards learner-centred learning and learnersourcing, which aim to transform the role of students from passive recipients of content to active participants who engage in higher-order learning tasks, including evaluating the quality of resources. In this paper, we present a suite of aggregation-based and reliability-based methods that can be used to infer the quality of learning material from student ratings and comments. We investigate the feasibility and accuracy of these methods in RiPPLE, a live learnersourcing educational platform that captures subjective ratings and comments from students. Empirical data from the use of RiPPLE in a first-year course on information systems are used to evaluate the presented methods. Results indicate that a combination of reliability-based methods provides an acceptable level of accuracy in determining the quality of learning resources.
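To make the idea concrete, below is a minimal, hypothetical sketch of a reliability-based aggregation scheme of the general kind the abstract describes: each resource's quality is a reliability-weighted mean of student ratings, and each student's reliability shrinks with their disagreement from the emerging consensus. This is an illustration under our own assumptions, not the paper's algorithm; the function name, rating scale, and damping constant are all invented for the example.

```python
# Illustrative sketch only: a generic reliability-weighted aggregation of
# student ratings, in the spirit of reliability-based quality inference.
# NOT the paper's actual method; all names and constants are assumptions.

def estimate_quality(ratings, n_iters=20, damp=0.05):
    """Estimate resource quality from student ratings.

    ratings: dict mapping (student_id, resource_id) -> rating in [0, 1].
    Returns (quality, reliability) dictionaries.
    """
    students = {s for s, _ in ratings}
    resources = {r for _, r in ratings}
    reliability = {s: 1.0 for s in students}  # start by trusting everyone equally
    quality = {}

    for _ in range(n_iters):
        # Step 1: quality of each resource = reliability-weighted mean rating.
        for r in resources:
            num = sum(reliability[s] * v for (s, rr), v in ratings.items() if rr == r)
            den = sum(reliability[s] for (s, rr), _ in ratings.items() if rr == r)
            quality[r] = num / den
        # Step 2: a student's reliability = inverse of their mean squared
        # disagreement with the current consensus; `damp` bounds the weights.
        for s in students:
            errs = [(v - quality[rr]) ** 2 for (ss, rr), v in ratings.items() if ss == s]
            reliability[s] = 1.0 / (sum(errs) / len(errs) + damp)

    return quality, reliability


# Toy usage: student s3 consistently disagrees with the other raters, so
# their ratings are progressively down-weighted in the quality estimates.
ratings = {
    ("s1", "q1"): 0.9, ("s2", "q1"): 0.8, ("s3", "q1"): 0.2,
    ("s1", "q2"): 0.7, ("s2", "q2"): 0.8, ("s3", "q2"): 0.1,
}
quality, reliability = estimate_quality(ratings)
print(quality)      # q1 and q2 end up well above their unweighted means
print(reliability)  # s3 carries much less weight than s1 and s2
```

The two steps alternate like a simple EM loop: holding reliabilities fixed makes the quality update a weighted average, and holding qualities fixed makes the reliability update a per-student error statistic; a few iterations are typically enough for the weights to stabilise.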


Notes

  1. Approval to conduct this evaluation was received from the Human Research Ethics Committee at The University of Queensland (approval #2018000125).


Author information

Correspondence to Hassan Khosravi.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Darvishi, A., Khosravi, H., Sadiq, S. (2020). Utilising Learnersourcing to Inform Design Loop Adaptivity. In: Alario-Hoyos, C., Rodríguez-Triana, M.J., Scheffel, M., Arnedillo-Sánchez, I., Dennerlein, S.M. (eds) Addressing Global Challenges and Quality Education. EC-TEL 2020. Lecture Notes in Computer Science, vol. 12315. Springer, Cham. https://doi.org/10.1007/978-3-030-57717-9_24

  • DOI: https://doi.org/10.1007/978-3-030-57717-9_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-57716-2

  • Online ISBN: 978-3-030-57717-9

