Considering Misconceptions in Automatic Essay Scoring with A-TEST - Amrita Test Evaluation and Scoring Tool

  • Conference paper
e-Infrastructure and e-Services for Developing Countries (AFRICOMM 2013)

Abstract

In large classrooms with limited teacher time, there is a need for automatic evaluation of text answers and real-time personalized feedback during the learning process. In this paper, we discuss the Amrita Test Evaluation & Scoring Tool (A-TEST), a text evaluation and scoring tool that learns from course materials, from human-rater-scored text answers, and directly from teacher input. We use latent semantic analysis (LSA) to identify key concepts. Most AES systems use LSA only to compare students' responses with a set of ideal essays, which ignores the common misconceptions that students may hold about a topic. A-TEST additionally uses LSA to learn misconceptions from the lowest-scoring essays and incorporates these as a factor in scoring. A-TEST was evaluated on two datasets of 1,400 and 1,800 text answers that had been manually scored by two teachers. The scoring accuracy and kappa scores between the derived A-TEST model and the human raters were comparable to those between the human raters themselves.
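The scoring idea described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's actual model: the toy term-document matrix, the choice of k = 2 latent dimensions, and the 0.5 misconception penalty weight are all invented for the example. Documents are projected into an LSA space via truncated SVD, and a new essay is scored by its cosine similarity to the ideal essays minus a penalty for similarity to a known misconception:

```python
import numpy as np

def lsa_doc_coords(term_doc, k):
    """Project documents (columns of term_doc) into a k-dim latent space via truncated SVD."""
    U, s, Vt = np.linalg.svd(term_doc, full_matrices=False)
    return (np.diag(s[:k]) @ Vt[:k]).T  # rows = documents in latent space

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy term-document matrix: rows = terms, columns = essays.
# Columns 0-1 are high-scoring "ideal" essays, column 2 is a low-scoring
# essay exhibiting a misconception, and column 3 is a new essay to score.
X = np.array([
    [2, 3, 0, 2],
    [1, 2, 0, 1],
    [0, 0, 3, 0],   # term characteristic of the misconception
    [0, 1, 2, 0],
], dtype=float)

docs = lsa_doc_coords(X, k=2)
ideal = docs[:2].mean(axis=0)   # centroid of the ideal essays
misconception = docs[2]         # learned from the lowest-scoring essay
essay = docs[3]

sim_ideal = cosine(essay, ideal)
sim_misc = cosine(essay, misconception)
# Reward closeness to the ideal essays, penalise closeness to the
# misconception (the 0.5 weight is purely illustrative).
score = sim_ideal - 0.5 * max(sim_misc, 0.0)
print(f"ideal={sim_ideal:.3f} misconception={sim_misc:.3f} score={score:.3f}")
```

Because the example essay repeats the vocabulary of the ideal essays and avoids the misconception term, its similarity to the ideal centroid dominates and it receives a high score; an essay loading on the misconception term would be penalised instead.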



Acknowledgment

This work derives direction and inspiration from the Chancellor of Amrita University, Sri Mata Amritanandamayi Devi. We thank Dr. Ramachandra Kaimal for his valuable feedback.

Corresponding author

Correspondence to Prema Nedungadi.


Copyright information

© 2014 Institute for Computer Sciences, Social Informatics and Telecommunications Engineering

About this paper

Cite this paper

Nedungadi, P., L, J., Raman, R. (2014). Considering Misconceptions in Automatic Essay Scoring with A-TEST - Amrita Test Evaluation and Scoring Tool. In: Bissyandé, T., van Stam, G. (eds) e-Infrastructure and e-Services for Developing Countries. AFRICOMM 2013. Lecture Notes of the Institute for Computer Sciences, Social Informatics and Telecommunications Engineering, vol 135. Springer, Cham. https://doi.org/10.1007/978-3-319-08368-1_31

  • DOI: https://doi.org/10.1007/978-3-319-08368-1_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-08367-4

  • Online ISBN: 978-3-319-08368-1

  • eBook Packages: Computer Science, Computer Science (R0)
