ABSTRACT
With advances in Artificial Intelligence (AI), Automated Essay Scoring (AES) systems have become increasingly prevalent in recent years. This research proposes an extension to the Coh-Metrix-based AES algorithm, with a focus on its feature lists. Technical features such as referential cohesion, lexical diversity, and syntactic complexity are evaluated. It further proposes four novel semantic measures, including an estimate of the topic overlap between an essay and its brief. A prototype implementation, using neural networks, is used to test the individual and comparative performance of the newly proposed AES system. The results show a considerable improvement over those reported in existing research for the original Coh-Metrix algorithm: from an adjacent accuracy of 91% to an adjacent accuracy of 97.5% (and a QWK of 0.822). This suggests that the new features and the proposed system have the potential to improve essay grading and would be a good area for further research.
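Both headline metrics are standard in AES evaluation: quadratic weighted kappa (QWK) measures agreement between predicted and human grades with a quadratic penalty for distance, and adjacent accuracy is the fraction of predictions within one grade point of the human score. A minimal, stdlib-only Python sketch of the two metrics (illustrative only; this is not the paper's own implementation):

```python
def quadratic_weighted_kappa(a, b):
    """QWK between two equal-length lists of integer grades.

    Builds the observed confusion matrix O and the expected matrix E
    (outer product of the two marginal histograms), weights each cell
    by (i - j)^2 / (n - 1)^2, and returns 1 - sum(wO) / sum(wE).
    """
    assert len(a) == len(b) and a
    min_r = min(min(a), min(b))
    n = max(max(a), max(b)) - min_r + 1
    # Observed agreement matrix over the shared grade range.
    O = [[0.0] * n for _ in range(n)]
    for x, y in zip(a, b):
        O[x - min_r][y - min_r] += 1
    hist_a = [sum(row) for row in O]
    hist_b = [sum(O[i][j] for i in range(n)) for j in range(n)]
    total = float(len(a))
    num = den = 0.0
    for i in range(n):
        for j in range(n):
            w = ((i - j) ** 2) / ((n - 1) ** 2) if n > 1 else 0.0
            num += w * O[i][j]
            den += w * hist_a[i] * hist_b[j] / total
    return 1.0 - num / den if den else 1.0


def adjacent_accuracy(pred, gold, tolerance=1):
    """Fraction of predictions within `tolerance` grade points of gold."""
    return sum(abs(p - g) <= tolerance for p, g in zip(pred, gold)) / len(gold)
```

Perfect agreement gives a QWK of 1.0, chance-level agreement 0.0, and systematic disagreement a negative value, so the reported 0.822 indicates strong rater-machine agreement.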
- Gregory K. W. K. Chung and Harold F. O'Neil, Jr. 1997. Methodological Approaches to Online Scoring of Essays. CSE Technical Report 461. Center for the Study of Evaluation, CRESST, University of California, Los Angeles.
- Coh-Metrix Version 3.0 Indices. 2020. Retrieved January 16, 2020 from http://141.225.41.245/cohmetrixhome/documentation_indices.html
- Jennifer Onod Contreras, Shadi MS Hilles, and Zainab Abu Bakar. 2018. Automated Essay Scoring with Ontology based on Text Mining and NLTK tools. In Proceedings of the International Conference on Smart Computing and Electronic Enterprise (ICSCEE). IEEE, pp. 1–6. doi: 10.1109/ICSCEE.2018.8538399.
- Arijit De and Sunil Kumar Kopparapu. 2011. An unsupervised approach to automated selection of good essays. In Proceedings of the IEEE Recent Advances in Intelligent Computational Systems. IEEE, pp. 662–666. doi: 10.1109/RAICS.2011.6069393.
- Semir Dikli. 2006. An overview of automated scoring of essays. Journal of Technology, Learning, and Assessment, 5(1), pp. 1–35.
- David Gefen, James E. Endicott, Jorge E. Fresneda, Jacob Miller, and Kai R. Larsen. 2017. A Guide to Text Analysis with Latent Semantic Analysis in R with Annotated Code: Studying Online Reviews and the Stack Exchange Community. Communications of the Association for Information Systems, 41(1), pp. 450–496. doi: 10.17705/1CAIS.04121.
- Google Code. 2013. Google Code Archive - Long-Term Storage for Google Code Project Hosting. Retrieved February 15, 2020 from https://code.google.com/archive/p/word2vec/
- M. A. Hussein, H. Hassan, and M. Nassef. 2019. Automated language essay scoring systems: a literature review. PeerJ Computer Science, 5, p. e208. doi: 10.7717/peerj-cs.208.
- Harneet Kaur Janda, Atish Pawar, Shan Du, and Vijay Mago. 2019. Syntactic, Semantic and Sentiment Analysis: The Joint Effect on Automated Essay Evaluation. IEEE Access, 7, pp. 108486–108503. doi: 10.1109/ACCESS.2019.2933354.
- Kaggle.com. 2012. The Hewlett Foundation: Automated Essay Scoring | Kaggle. Retrieved November 11, 2019 from https://www.kaggle.com/c/asap-aes/overview
- Kristopher Kyle, Scott Crossley, and Cynthia Berger. 2018. The tool for the automatic analysis of lexical sophistication (TAALES): version 2.0. Behavior Research Methods, 50(3), pp. 1030–1046. doi: 10.3758/s13428-017-0924-4.
- Thomas K. Landauer, Peter W. Foltz, and Darrell Laham. 1998. An introduction to latent semantic analysis. Discourse Processes, 25(2–3), pp. 259–284. doi: 10.1080/01638539809545028.
- Danielle S. McNamara, Scott A. Crossley, and Rod Roscoe. 2013. Natural language processing in an intelligent writing strategy tutoring system. Behavior Research Methods, 45(2), pp. 499–515. doi: 10.3758/s13428-012-0258-1.
- Md. Monjurul Islam and A. S. M. Latiful Hoque. 2010. Automated essay scoring using Generalized Latent Semantic Analysis. In Proceedings of the 13th International Conference on Computer and Information Technology (ICCIT), pp. 358–363. doi: 10.1109/ICCITECHN.2010.5723884.
- Ellis B. Page. 1966. The imminence of... grading essays by computer. Phi Delta Kappan, 47(5), pp. 238–243.
- Diego Palma and John Atkinson. 2018. Coherence-Based Automatic Essay Assessment. IEEE Intelligent Systems, 33(5), pp. 26–36. doi: 10.1109/MIS.2018.2877278.
- Abigail R. Razon, Ma. Lourdes J. Vargas, Rowena Cristina L. Guevara, and Prospero C. Naval. 2010. Automated essay content analysis based on concept indexing with fuzzy C-means clustering. In Proceedings of the IEEE Asia-Pacific Conference on Circuits and Systems (APCCAS). IEEE, pp. 1167–1170. doi: 10.1109/APCCAS.2010.5775058.
- Zining Wang, Jianli Liu, and Ruihai Dong. 2018. Intelligent Auto-grading System. In Proceedings of the 5th IEEE International Conference on Cloud Computing and Intelligence Systems (CCIS). IEEE, pp. 430–435. doi: 10.1109/CCIS.2018.8691244.
- Xue Mei Yao. 2013. Automated Essay Scoring: A Comparative Study. Applied Mechanics and Materials, 274, pp. 650–653. doi: 10.4028/www.scientific.net/AMM.274.650.
- Kaja Zupanc and Zoran Bosnic. 2014. Automated Essay Evaluation Augmented with Semantic Coherence Measures. In 2014 IEEE International Conference on Data Mining. IEEE, pp. 1133–1138. doi: 10.1109/ICDM.2014.21.