Abstract
Computer-adaptive tests (CATs) are software applications that adapt the difficulty of test questions to the learner's proficiency level. The CAT prototype introduced here comprises a proficiency-level estimation component based on Item Response Theory and a database of questions classified by topic area and difficulty level. Each question's difficulty estimate combines expert evaluation based on Bloom's taxonomy with users' performance over time. The output of the prototype is a continuously updated user model that estimates proficiency in each of the domain areas covered by the test. This user model was employed to provide automated feedback for learners in a summative assessment context. Evaluation of the feedback tool by a group of learners suggested that the approach is a valid one, capable of providing useful advice for individual development.
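The abstract does not specify which IRT model or estimation procedure the prototype uses, but the core loop it describes — estimate proficiency from responses, then pick the next question whose difficulty matches that estimate — can be sketched under common assumptions. The sketch below uses the three-parameter logistic (3PL) model and a simple grid-based maximum-likelihood proficiency estimate; the function names, the 3PL choice, and the nearest-difficulty selection rule are illustrative assumptions, not details taken from the paper.

```python
import math

def p_correct(theta, a, b, c):
    """3PL IRT model (an assumed model, not confirmed by the paper):
    probability that a learner with proficiency theta answers correctly.
    a = discrimination, b = difficulty, c = pseudo-guessing parameter."""
    return c + (1.0 - c) / (1.0 + math.exp(-1.7 * a * (theta - b)))

def estimate_theta(items, responses):
    """Maximum-likelihood proficiency estimate over a coarse grid.
    items: list of (a, b, c) tuples; responses: list of 0/1 scores."""
    grid = [g / 10.0 for g in range(-40, 41)]  # theta in [-4.0, 4.0]
    best_theta, best_ll = 0.0, float("-inf")
    for theta in grid:
        ll = 0.0
        for (a, b, c), r in zip(items, responses):
            p = p_correct(theta, a, b, c)
            ll += math.log(p) if r else math.log(1.0 - p)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

def next_item(theta, item_bank, answered):
    """Adaptive selection: choose the unanswered item whose difficulty
    parameter b lies closest to the current proficiency estimate."""
    candidates = [i for i in range(len(item_bank)) if i not in answered]
    return min(candidates, key=lambda i: abs(item_bank[i][1] - theta))
```

In use, the estimate rises after correct answers and falls after incorrect ones, and the continuously updated theta per topic area plays the role of the user model described above. A production CAT would typically add a stopping rule based on the standard error of the estimate.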
Copyright information
© 2005 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Lilley, M., Barker, T., Britton, C. (2005). The Generation of Automated Learner Feedback Based on Individual Proficiency Levels. In: Ali, M., Esposito, F. (eds) Innovations in Applied Artificial Intelligence. IEA/AIE 2005. Lecture Notes in Computer Science(), vol 3533. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11504894_115
Print ISBN: 978-3-540-26551-1
Online ISBN: 978-3-540-31893-4
eBook Packages: Computer Science (R0)