
Unsupervised Learning of Question Difficulty Levels Using Assessment Responses

  • Conference paper
  • First Online:
Computational Science and Its Applications – ICCSA 2017 (ICCSA 2017)

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 10404)

Abstract

Question difficulty level is an important factor in determining assessment outcomes. Accurate mapping of difficulty levels in question banks offers a wide range of benefits beyond higher assessment quality: improved personalized learning, adaptive testing, automated question generation, and cheating detection. Adopting unsupervised machine learning techniques, we propose an efficient method that derives question difficulty levels from assessment responses, improving the consistency and accuracy of difficulty-level assignment. We show that effective feature extraction is achieved by partitioning test takers according to their test scores. We validate our model on a large dataset collected from a university-level proctored assessment taken by two thousand students. Preliminary results show that the model is effective, achieving a mean accuracy of 84% against instructor validation. We also show the model's effectiveness in flagging mis-calibrated questions. Our approach can easily be adapted to a wide range of applications in e-learning and e-assessment.
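The abstract does not spell out the full pipeline, but its central idea (partition test takers by score, derive per-question features from their responses, and assign difficulty levels without labels) can be illustrated with a minimal sketch. The simulated response matrix, the use of quartile ability groups, and the choice of k-means with three difficulty levels below are assumptions made for illustration only, not the authors' published method.

    # Illustrative sketch (not the authors' exact method): cluster questions into
    # difficulty levels from a binary response matrix, using per-ability-group
    # success rates as features. All names and parameters here are assumptions.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)

    # Simulated responses: rows = students, columns = questions,
    # 1 if the student answered the question correctly, else 0.
    n_students, n_questions = 2000, 60
    responses = (rng.random((n_students, n_questions))
                 < rng.uniform(0.2, 0.9, size=n_questions)).astype(int)

    # Partition test takers into ability groups by total test score (quartiles).
    total_scores = responses.sum(axis=1)
    edges = np.quantile(total_scores, [0.25, 0.5, 0.75])
    group_ids = np.digitize(total_scores, edges)         # 0 = lowest group, 3 = highest

    # Feature vector for each question: success rate within each ability group.
    features = np.vstack(
        [responses[group_ids == g].mean(axis=0) for g in range(4)]
    ).T                                                   # shape (n_questions, 4)

    # Unsupervised assignment of difficulty levels (easy / medium / hard).
    kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

    # Rank clusters by mean success rate so the lowest-success cluster is "hard".
    question_success = features.mean(axis=1)
    cluster_order = np.argsort(
        [question_success[kmeans.labels_ == c].mean() for c in range(3)]
    )
    level_name = {int(c): n for c, n in zip(cluster_order, ["hard", "medium", "easy"])}
    for q in range(5):
        print(f"Question {q}: {level_name[int(kmeans.labels_[q])]}")

In a real setting the response matrix would come from the assessment platform rather than a simulation, and questions whose clustered level disagrees with the instructor-assigned level would be natural candidates for the mis-calibration flagging mentioned in the abstract.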

Author information

Corresponding author

Correspondence to Sankaran Narayanan.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

Narayanan, S., Kommuri, V.S., Subramanian, N.S., Bijlani, K., Nair, N.C. (2017). Unsupervised Learning of Question Difficulty Levels Using Assessment Responses. In: Gervasi, O., et al. Computational Science and Its Applications – ICCSA 2017. ICCSA 2017. Lecture Notes in Computer Science, vol 10404. Springer, Cham. https://doi.org/10.1007/978-3-319-62392-4_39

  • DOI: https://doi.org/10.1007/978-3-319-62392-4_39

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-62391-7

  • Online ISBN: 978-3-319-62392-4

  • eBook Packages: Computer Science, Computer Science (R0)
