Abstract
In large classrooms with limited teacher time, there is a need for automatic evaluation of text answers and real-time, personalized feedback during learning. In this paper we present the Amrita Test Evaluation & Scoring Tool (A-TEST), a text-evaluation and scoring tool that learns from course materials, from human-rater-scored text answers, and directly from teacher input. We use latent semantic analysis (LSA) to identify key concepts. Most automated essay scoring (AES) systems use LSA only to compare student responses with a set of ideal essays, which ignores the common misconceptions students may hold about a topic. A-TEST additionally applies LSA to the lowest-scoring essays to learn these misconceptions and uses the result as a scoring factor. A-TEST was evaluated on two datasets of 1,400 and 1,800 text answers, each manually scored by two teachers. The scoring accuracy and kappa agreement between the derived A-TEST model and the human raters were comparable to those between the human raters themselves.
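The idea of scoring against low-scoring essays as well as ideal ones can be sketched as follows. This is a minimal, hypothetical illustration only, not A-TEST's actual implementation: it builds a bag-of-words term-document matrix over two small reference sets, derives a latent space via SVD (the core of LSA), and scores an answer by its cosine similarity to the ideal-essay centroid minus its similarity to the misconception (low-scoring) centroid. All function names, the tokenizer, and the score combination are assumptions made for the example.

```python
import numpy as np

def tokenize(text):
    return text.lower().split()

def build_term_doc(docs, vocab):
    # Term-by-document count matrix (rows: vocabulary terms, cols: documents).
    M = np.zeros((len(vocab), len(docs)))
    for j, doc in enumerate(docs):
        for w in tokenize(doc):
            if w in vocab:
                M[vocab[w], j] += 1.0
    return M

def lsa_score(answer, ideal_essays, low_essays, k=2):
    docs = ideal_essays + low_essays
    vocab = {w: i for i, w in enumerate(sorted({t for d in docs for t in tokenize(d)}))}
    M = build_term_doc(docs, vocab)
    # Truncated SVD: keep the k strongest latent dimensions (LSA).
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    Uk = U[:, :k]

    def project(doc):
        # Fold a document into the k-dimensional latent term space.
        return Uk.T @ build_term_doc([doc], vocab)[:, 0]

    def cos(a, b):
        n = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / n) if n else 0.0

    ideal_c = np.mean([project(d) for d in ideal_essays], axis=0)
    low_c = np.mean([project(d) for d in low_essays], axis=0)
    v = project(answer)
    # Reward similarity to ideal essays, penalize similarity to
    # the misconception centroid learned from low-scoring essays.
    return cos(v, ideal_c) - cos(v, low_c)
```

An answer that echoes the ideal essays should then rank above one that echoes the misconceptions, e.g. `lsa_score("gravity pulls objects down", ideal, low)` versus `lsa_score("gravity pushes objects up", ideal, low)` for suitable reference sets.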
Acknowledgment
This work derives direction and inspiration from the Chancellor of Amrita University, Sri Mata Amritanandamayi Devi. We thank Dr. Ramachandra Kaimal for his valuable feedback.
Copyright information
© 2014 Institute for Computer Sciences, Social Informatics and Telecommunications Engineering
Cite this paper
Nedungadi, P., L, J., Raman, R. (2014). Considering Misconceptions in Automatic Essay Scoring with A-TEST - Amrita Test Evaluation and Scoring Tool. In: Bissyandé, T., van Stam, G. (eds) e-Infrastructure and e-Services for Developing Countries. AFRICOMM 2013. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 135. Springer, Cham. https://doi.org/10.1007/978-3-319-08368-1_31
Print ISBN: 978-3-319-08367-4
Online ISBN: 978-3-319-08368-1