Towards Automated Evaluation of Learning Resources Inside Repositories

Abstract

Current Learning Object Repositories typically assess the quality of their resources through impressions of quality contributed by members of the repository community. Although this strategy can be considered effective to some extent, the number of resources inside repositories tends to grow faster than the number of evaluations the community provides, leaving many resources without any quality assessment. The present work describes the results of two experiments in automatically generating quality information about learning resources, based both on their intrinsic features and on the evaluative metadata (ratings) available about them in the MERLOT repository. Preliminary results point to the feasibility of this goal and suggest that the method can serve as a starting point for the automatic generation of internal quality information about resources inside repositories.
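
As a rough illustration of the pipeline the abstract describes, the sketch below trains a classifier to separate highly rated from poorly rated resources using intrinsic features. The feature names (links, images, words), the 3.5-star threshold, and the random-forest model are illustrative assumptions, not the configuration used in the chapter's experiments.

```python
# Sketch: predict a binary quality label for learning resources from
# intrinsic features. Features, threshold, and model are assumptions,
# not the chapter's exact setup; synthetic data stands in for MERLOT.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical intrinsic features per resource: [n_links, n_images, n_words]
X = rng.integers(0, 500, size=(200, 3)).astype(float)
# Hypothetical community ratings on MERLOT's 1-5 scale
ratings = rng.uniform(1.0, 5.0, size=200)
# Label a resource "good" (1) if its rating clears an assumed threshold
y = (ratings >= 3.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))
```

On real repository data, the ratings would come from the evaluative metadata already stored in MERLOT, and the trained model could then score the many resources that have no ratings at all.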

Notes

  1. Although this limitation may affect the results, the process of collecting the information is extremely slow, so the limitation was unavoidable. To acquire the samples used in this study, the crawler ran uninterruptedly for two full months in 2009 and four full months in 2010; a sketch of this kind of rate-limited crawling follows these notes.

  2. The so-called not-good group was formed by the union of the average group and the poor group; the second sketch after these notes illustrates this grouping.

  3. The difficulties of training, validating, and testing predictive models would be even more severe for subsets with fewer than 40 resources; the second sketch after these notes shows one way to evaluate models on samples this small.

  4. Only some of the models are presented in the figure.
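
The collection bottleneck described in note 1 is typical of polite crawling: each request is deliberately delayed so the repository is not overloaded. The sketch below is a minimal illustration, assuming a hypothetical URL pattern and an arbitrary five-second delay; neither reflects the settings of the crawler actually used.

```python
# Minimal polite-crawler sketch for note 1. The URL pattern and the
# delay are hypothetical; the chapter does not document its crawler.
import time
import urllib.request

# Hypothetical material-page URLs; a real run would enumerate far more.
urls = [f"https://www.merlot.org/merlot/viewMaterial.htm?id={i}"
        for i in range(1000, 1005)]

pages = {}
for url in urls:
    try:
        with urllib.request.urlopen(url, timeout=30) as resp:
            pages[url] = resp.read()  # store the raw page for later parsing
    except OSError:
        pass  # skip unreachable pages and keep crawling
    time.sleep(5)  # deliberate pause between requests: polite but slow
```

At one page every few seconds, gathering tens of thousands of material pages takes weeks of continuous running, which is consistent with the multi-month collection periods reported in note 1.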
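
Notes 2 and 3 can also be made concrete. The sketch below collapses three hypothetical rating categories into the binary good/not-good scheme and then evaluates a model with stratified cross-validation, one common way to reuse every labelled example when a subset holds fewer than 40 resources. The category counts, features, and choice of model are illustrative assumptions.

```python
# Sketch for notes 2 and 3: binary label grouping plus an evaluation
# strategy for very small samples. All data here is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(1)

# Hypothetical three-way categories for a subset of 38 resources
categories = np.array(["good"] * 12 + ["average"] * 14 + ["poor"] * 12)
# Note 2: the "not-good" class is the union of "average" and "poor"
y = np.where(categories == "good", 1, 0)
X = rng.normal(size=(38, 3))  # hypothetical intrinsic features

# Note 3: with so few resources, a fixed train/validation/test split
# leaves each partition tiny; stratified k-fold instead reuses every
# example while preserving class proportions in each fold.
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)
scores = cross_val_score(LogisticRegression(), X, y, cv=cv)
print(f"mean accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```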

Acknowledgments

The work presented here has been partially funded by the European Commission through the IGUAL project, Innovation for Equality in Latin American University (www.igualproject.org; code DCIALA/19.09.01/10/21526/245-315/ALFAIII (2010)123) of the ALFA III Programme; by the Spanish Ministry of Science and Innovation through the MAVSEL project, Mining, data analysis and visualization based on social aspects of e-learning (code TIN2010-21715-C02-01); and by CYTED (Ibero-American Programme for Science, Technology and Development) as part of the project "RIURE - Ibero-American Network for the Usability of Learning Repositories" (code 513RT0471).

Author information

Corresponding author

Correspondence to Cristian Cechinel.

Copyright information

© 2014 Springer Science+Business Media New York

About this chapter

Cite this chapter

Cechinel, C., da Silva Camargo, S., Sánchez-Alonso, S., Sicilia, MÁ. (2014). Towards Automated Evaluation of Learning Resources Inside Repositories. In: Manouselis, N., Drachsler, H., Verbert, K., Santos, O. (eds) Recommender Systems for Technology Enhanced Learning. Springer, New York, NY. https://doi.org/10.1007/978-1-4939-0530-0_2

  • DOI: https://doi.org/10.1007/978-1-4939-0530-0_2

  • Publisher Name: Springer, New York, NY

  • Print ISBN: 978-1-4939-0529-4

  • Online ISBN: 978-1-4939-0530-0

  • eBook Packages: Computer Science, Computer Science (R0)
