DOI: 10.1145/3106426.3106513

Overwritable automated Japanese short-answer scoring and support system


ABSTRACT

We have developed an automated Japanese short-answer scoring and support system for the new National Center written test exams. Our approach reflects the fact that accurately recognizing textual entailment and/or synonymy has remained almost impossible for machines for several years. The system generates automated scores on the basis of evaluation criteria or rubrics, and human raters then revise them. The system determines the semantic similarity between the model answers and the actual written answers, along with a certain degree of semantic identity and implication. Because the scoring results must be classified into multiple levels, we use random forests, which exploit many predictors effectively, rather than support vector machines. An experimental prototype operates as a web system on a Linux computer. We compared human scores with the automated scores for a trial social studies examination in which questions worth 3–6 points were set in eight categories. The differences between the scores were within one point for 70–90 percent of the data when sophisticated semantic judgment was not needed.
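
As a rough illustration of the pipeline the abstract describes, the sketch below computes a crude similarity between a model answer and a written answer, then maps such predictors to a multi-level score with a random forest. Everything here is our own assumption for illustration (scikit-learn, character n-gram TF-IDF, the toy feature names and data); the paper's actual features and semantic-similarity judgment are not specified in this abstract.

```python
# Minimal sketch, under assumed tooling (scikit-learn) and toy data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from sklearn.ensemble import RandomForestClassifier

# Stage 1 (illustrative): character n-gram TF-IDF cosine similarity, a crude
# stand-in for the system's semantic-similarity judgment. Japanese text has no
# word spaces, so character n-grams avoid needing a tokenizer in this sketch.
vectorizer = TfidfVectorizer(analyzer="char", ngram_range=(2, 3))
docs = ["モデル解答の例文です", "受験者の解答の例文です"]  # model answer, written answer
tfidf = vectorizer.fit_transform(docs)
sim = cosine_similarity(tfidf[0], tfidf[1])[0, 0]

# Stage 2: random forests handle many predictors and multi-level score classes
# directly, which is the reason the abstract gives for preferring them to SVMs.
# Each toy row: [similarity to model answer, keyword overlap, length ratio]
# (feature names are hypothetical, not from the paper).
X_train = [[0.92, 0.80, 1.05], [0.40, 0.20, 0.60],
           [0.75, 0.60, 0.95], [0.15, 0.00, 0.30]]
y_train = [3, 1, 2, 0]  # multi-level allotted points, e.g. 0-3
clf = RandomForestClassifier(n_estimators=500, random_state=0)
clf.fit(X_train, y_train)
print(clf.predict([[sim, 0.70, 1.00]]))  # automated score for a new answer
```

Such predictions serve only as starting points: in the system described, human raters can overwrite each automated score, which is what makes the system "overwritable".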

Published in

WI '17: Proceedings of the International Conference on Web Intelligence
August 2017, 1284 pages
ISBN: 9781450349512
DOI: 10.1145/3106426

Copyright © 2017 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery
New York, NY, United States

Publication History

Published: 23 August 2017

Acceptance Rates

WI '17 Paper Acceptance Rate: 118 of 178 submissions, 66%