Abstract
Open-ended text questions allow for a better assessment of students' knowledge, but analyzing the answer, determining its correctness and, especially, providing detailed and meaningful feedback about errors to the student are more difficult tasks than for closed-ended and numerical questions.
The analysis of an answer given as freely written text can be performed on three different levels. Character-level analysis looks for errors in the placement of characters inside a word or token; it is mostly used for detecting and correcting typos, making it possible to distinguish typos from actual errors. Word-level (or token-level) analysis concerns word placement in a sentence or phrase, allowing missing, extraneous, and misplaced words to be found. Semantic-level analysis tries to formally capture the meaning of the answer and compare it with the meaning of the correct answer provided by the question creator in natural-language or formal form. Some tools analyze answers on several levels.
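To make the first two levels concrete, the following minimal sketch detects likely typos with character-level edit distance and reports missing, extraneous, and substituted words with a token-level diff. It is illustrative only, not the tool described in this paper; the whitespace tokenization and the one-edit typo threshold are assumptions made for the example.

```python
# Minimal sketch of character-level and word-level answer analysis.
# Illustrative assumptions: whitespace tokenization and a typo threshold
# of one edit; neither is prescribed by the paper.
from difflib import SequenceMatcher


def levenshtein(a: str, b: str) -> int:
    """Classic edit distance: insertions, deletions, substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]


def word_level_report(answer: str, reference: str, typo_threshold: int = 1):
    """Report missing, extraneous, misspelled, and wrong words."""
    ans, ref = answer.lower().split(), reference.lower().split()
    report = []
    for tag, i1, i2, j1, j2 in SequenceMatcher(None, ref, ans).get_opcodes():
        if tag == "delete":        # token present in reference, absent in answer
            report.append(("missing", ref[i1:i2]))
        elif tag == "insert":      # token present in answer, absent in reference
            report.append(("extraneous", ans[j1:j2]))
        elif tag == "replace":     # differing tokens: typo or actual error?
            for r, a in zip(ref[i1:i2], ans[j1:j2]):
                kind = "typo" if levenshtein(r, a) <= typo_threshold else "wrong word"
                report.append((kind, [a]))
    return report


print(word_level_report("the stak grows down", "the stack grows downward"))
# -> [('typo', ['stak']), ('wrong word', ['down'])]
```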
Long open-ended text questions typically allow for variability among correct answers, which complicates the search for errors. Different ways of specifying patterns that capture this variability, and the possibility of applying error-determining methods to questions with patterned answers, are discussed.
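As an illustration of one such pattern mechanism, the sketch below checks a student answer against a regular-expression template that admits several correct phrasings. The pattern and sample answers are invented for this sketch and are not taken from the paper.

```python
# Illustrative regular-expression answer template admitting several correct
# phrasings; the pattern and sample answers are invented for this sketch.
import re

# Accepts e.g. "push and pop", "push, pop", or "pop and push".
template = re.compile(r"(push\s*(,|and)\s*pop|pop\s*(,|and)\s*push)",
                      re.IGNORECASE)

for answer in ["push and pop", "pop, push", "push only"]:
    verdict = "correct" if template.fullmatch(answer.strip()) else "incorrect"
    print(f"{answer!r}: {verdict}")
```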
This paper presents the results of research carried out under the RFBR grant 18-07-00032 “Intelligent support of decision making of knowledge management for learning and scientific research based on the collaborative creation and reuse of the domain information space and ontology knowledge representation model”.
Cite this paper
Sychev, O., Anikin, A., Prokudin, A. (2020). Methods of Determining Errors in Open-Ended Text Questions. In: Samsonovich, A. (eds) Biologically Inspired Cognitive Architectures 2019. BICA 2019. Advances in Intelligent Systems and Computing, vol 948. Springer, Cham. https://doi.org/10.1007/978-3-030-25719-4_67