Abstract
Assessment plays an important role in learning, and Multiple Choice Questions (MCQs) are popular in large-scale evaluations. Technology-enabled learning calls for smart assessment, so automatic MCQ generation has become increasingly popular over the last two decades. Despite substantial research effort, system-generated MCQs are rarely usable in real educational applications because existing systems cannot produce diverse, human-like distractors. Distractors are the wrong choices presented along with the correct answer (the key) to confuse the examinee. In several domains, MCQs deal with names or named entities, yet the existing literature is inadequate for generating high-quality named entity distractors. In this paper, we present a method for the automatic generation of named entity distractors. The technique uses a combination of statistical and semantic similarities. To compute the statistical similarity, a set of class-specific attributes is defined and their values are extracted from the web. Semantic similarity is computed using a predicate-argument extraction based method. The proposed technique is tested in the cricket domain because of the availability of a large number of web resources and MCQs for dataset preparation. An evaluation strategy is proposed along with a set of metrics. A set of human evaluators performed the evaluation and found the average closeness value of the distractors to be 2.3, indicating that, on average, 2.3 out of 3 system-generated distractors are as good as human-generated ones. Since two good distractors are enough to make an MCQ usable in real assessment, the proposed technique is capable of generating high-quality distractors.
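As an illustrative sketch only (not the authors' implementation, whose details are in the paper body), the hybrid ranking the abstract describes can be thought of as scoring each candidate named entity by a weighted combination of a statistical similarity (here approximated as overlap on class-specific attribute values) and a semantic similarity score; the attribute names, candidate entities, weights, and semantic scores below are all hypothetical:

```python
def attribute_similarity(key_attrs, cand_attrs):
    """Fraction of class-specific attributes on which the candidate
    matches the key (e.g. role, team, era for a cricketer)."""
    if not key_attrs:
        return 0.0
    matches = [a for a in key_attrs if key_attrs[a] == cand_attrs.get(a)]
    return len(matches) / len(key_attrs)

def hybrid_score(stat_sim, sem_sim, alpha=0.5):
    """Weighted combination of statistical and semantic similarity."""
    return alpha * stat_sim + (1 - alpha) * sem_sim

# Hypothetical key (correct answer) and candidate distractors; the second
# element of each tuple stands in for a pre-computed semantic similarity.
key = {"role": "batsman", "team": "India", "era": "2000s"}
candidates = {
    "Candidate A": ({"role": "batsman", "team": "India", "era": "2000s"}, 0.9),
    "Candidate B": ({"role": "bowler", "team": "India", "era": "2000s"}, 0.6),
    "Candidate C": ({"role": "batsman", "team": "Kenya", "era": "1980s"}, 0.2),
}

ranked = sorted(
    candidates,
    key=lambda c: hybrid_score(attribute_similarity(key, candidates[c][0]),
                               candidates[c][1]),
    reverse=True,
)
print(ranked[:3])  # the top-ranked candidates become the distractors
```

The weight `alpha` balancing the two similarity components is an assumption for the sketch; in practice it would be tuned on held-out MCQ data.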
Funding
This work is partially supported by
Ethics declarations
The authors declare that:
- This article does not contain any studies with human participants or animals performed by any of the authors.
- For this type of study formal consent is not required.
Conflict of interest
The authors declare that they have no conflict of interest.
Cite this article
Patra, R., Saha, S.K. A hybrid approach for automatic generation of named entity distractors for multiple choice questions. Educ Inf Technol 24, 973–993 (2019). https://doi.org/10.1007/s10639-018-9814-3