Abstract
Designing a high-quality multiple-choice test is a challenging task. Typically, validating a test requires administering it to a sample of the target population, which makes it possible to estimate the difficulty of each question and its consistency. In many scenarios, this administration is costly and time-consuming, so predicting the difficulty of multiple-choice questions before field testing could reduce the cost and duration of the test validation process. In this article, we propose three deep-learning approaches that aim to reduce the resources required to estimate the difficulty of multiple-choice questions during the development of high-stakes tests. These data-driven approaches use neural network architectures: Recurrent Neural Networks (RNN), Bidirectional Long Short-Term Memory (BiLSTM), and Bidirectional Encoder Representations from Transformers (BERT). The models are trained on a dataset built from a sample of the standardized high-stakes exams for university admissions in Chile. Our approaches consider different configurations specific to each architecture, together with a set of features that represent the readability level of the question and the similarities between its response options. The results show that BiLSTM performs best and is the most suitable model for the task, even though it might be considered outdated given the emergence of more recent architectures. Finally, we discuss how this data-driven approach might be improved.
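As a rough illustration of the kind of model the abstract describes, the sketch below shows a minimal BiLSTM difficulty regressor in PyTorch: the question text is encoded by a bidirectional LSTM, the encoding is concatenated with hand-crafted features (e.g., readability scores and similarities between response options), and a linear head produces a scalar difficulty estimate. All names and hyperparameters here (BiLSTMDifficultyRegressor, vocabulary size, embedding and hidden dimensions, number of extra features) are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn

class BiLSTMDifficultyRegressor(nn.Module):
    def __init__(self, vocab_size=30000, embed_dim=300, hidden_dim=128,
                 num_extra_features=4):
        super().__init__()
        # Token embeddings; a paper-style pipeline might initialize these
        # from pretrained vectors instead of training them from scratch.
        self.embedding = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.bilstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                              bidirectional=True)
        # Regression head over the BiLSTM summary plus hand-crafted features
        # (e.g., readability level and option-similarity scores).
        self.head = nn.Linear(2 * hidden_dim + num_extra_features, 1)

    def forward(self, token_ids, extra_features):
        embedded = self.embedding(token_ids)            # (batch, seq_len, embed_dim)
        _, (h_n, _) = self.bilstm(embedded)             # h_n: (2, batch, hidden_dim)
        summary = torch.cat([h_n[0], h_n[1]], dim=1)    # final forward + backward states
        combined = torch.cat([summary, extra_features], dim=1)
        return self.head(combined).squeeze(-1)          # one difficulty score per item

model = BiLSTMDifficultyRegressor()
tokens = torch.randint(1, 30000, (8, 120))    # a batch of 8 questions, 120 tokens each
features = torch.rand(8, 4)                   # placeholder readability/similarity features
difficulty = model(tokens, features)          # tensor of shape (8,)

In practice, the token ids would come from a tokenizer fitted on the exam corpus, and the regression target would be the difficulty estimated empirically during field testing.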
Acknowledgments
We express our sincere gratitude to Danner Schottlerbeck, who contributed to this work by applying parsing methods to build the dataset used for training and testing all of the models.
This research was supported by ANID grant FONDEF ID21I10343. Support from ANID/PIA/Basal Funds for Centers of Excellence FB0003 (Center for Advanced Research in Education) and ACE210010/FB210005 (Center for Mathematical Modeling) is also gratefully acknowledged.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Reyes, D., Jimenez, A., Dartnell, P., Lions, S., Ríos, S. (2023). Multiple-Choice Questions Difficulty Prediction with Neural Networks. In: Milrad, M., et al. (eds.) Methodologies and Intelligent Systems for Technology Enhanced Learning, 13th International Conference (MIS4TEL 2023). Lecture Notes in Networks and Systems, vol. 764. Springer, Cham. https://doi.org/10.1007/978-3-031-41226-4_2
DOI: https://doi.org/10.1007/978-3-031-41226-4_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-41225-7
Online ISBN: 978-3-031-41226-4
eBook Packages: Intelligent Technologies and Robotics; Intelligent Technologies and Robotics (R0)