The Generation of Automated Learner Feedback Based on Individual Proficiency Levels

Conference paper: Innovations in Applied Artificial Intelligence (IEA/AIE 2005)

Abstract

Computer-adaptive tests (CATs) are software applications that adapt the difficulty of test questions to the learner’s proficiency level. The CAT prototype introduced here combines a proficiency estimation component based on Item Response Theory with a database of questions classified by topic area and difficulty level. The difficulty estimate for each question combines expert evaluation based on Bloom’s taxonomy with users’ performance over time. The output of our CAT prototype is a continuously updated user model that estimates proficiency in each of the domain areas covered by the test. This user model was employed to provide automated feedback for learners in a summative assessment context. The evaluation of our feedback tool by a group of learners suggested that our approach was a valid one, capable of providing useful advice for individual development.
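The abstract does not give implementation details, so the following Python sketch is purely illustrative of how an IRT-driven adaptive loop of this kind is commonly realised: it assumes a two-parameter logistic (2PL) model, grid-based maximum-likelihood proficiency estimation, and maximum-information item selection. The model choice, function names, and item parameters are assumptions for illustration, not the authors' implementation.

    # Illustrative sketch only: a generic 2PL IRT adaptive-testing loop.
    # Not the prototype described in the paper.
    import math

    def p_correct(theta, a, b):
        # 2PL probability of a correct response:
        # a = item discrimination, b = item difficulty, theta = proficiency.
        return 1.0 / (1.0 + math.exp(-a * (theta - b)))

    def estimate_theta(responses):
        # Crude maximum-likelihood proficiency estimate over a fixed grid.
        # responses: list of (a, b, correct) tuples for items answered so far.
        grid = [x / 10.0 for x in range(-40, 41)]  # theta in [-4.0, 4.0]
        def log_likelihood(theta):
            total = 0.0
            for a, b, correct in responses:
                p = p_correct(theta, a, b)
                total += math.log(p if correct else 1.0 - p)
            return total
        return max(grid, key=log_likelihood)

    def next_item(theta, remaining_items):
        # Pick the unanswered item with maximum Fisher information at theta,
        # i.e. the question whose difficulty best matches the current estimate.
        def information(item):
            p = p_correct(theta, item["a"], item["b"])
            return item["a"] ** 2 * p * (1.0 - p)
        return max(remaining_items, key=information)

    # Example: update the estimate after two responses, then select a question.
    item_bank = [{"id": 1, "a": 1.2, "b": -0.5}, {"id": 2, "a": 0.9, "b": 0.8}]
    answered = [(1.0, -1.0, True), (1.1, 0.0, False)]
    theta = estimate_theta(answered)
    print(theta, next_item(theta, item_bank)["id"])

In a feedback setting such as the one described above, the per-topic proficiency estimates produced by a loop like this could be stored in the user model and translated into advice for each domain area, though the paper's own mapping from estimates to feedback is not specified in the abstract.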

Copyright information

© 2005 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Lilley, M., Barker, T., Britton, C. (2005). The Generation of Automated Learner Feedback Based on Individual Proficiency Levels. In: Ali, M., Esposito, F. (eds.) Innovations in Applied Artificial Intelligence. IEA/AIE 2005. Lecture Notes in Computer Science, vol. 3533. Springer, Berlin, Heidelberg. https://doi.org/10.1007/11504894_115

  • DOI: https://doi.org/10.1007/11504894_115

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-26551-1

  • Online ISBN: 978-3-540-31893-4
