Automated Assessment of Quality and Coverage of Ideas in Students’ Source-Based Writing

Conference paper
Artificial Intelligence in Education (AIED 2021)

Abstract

Source-based writing is an important academic skill in higher education, as it helps instructors evaluate students’ understanding of subject matter. To assess the potential for supporting instructors’ grading, we design an automated assessment tool for students’ source-based summaries using natural language processing techniques. It includes a special-purpose parser that decomposes sentences into clauses, a pre-trained semantic representation method, a novel algorithm that allocates ideas into weighted content units, and another algorithm that scores students’ writing. We present results on three sets of student writing in higher education: two sets of STEM student writing samples and a set of reasoning sections of case briefs from a law school preparatory course. We show that this tool achieves promising results, correlating well with scores from reliable human rubrics and helping instructors identify issues in the grades they assign. We then discuss limitations and two improvements: a neural model that learns to decompose complex sentences into simple sentences, and a distinct model that learns a latent representation.
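The abstract does not spell out the scoring formula, but pyramid-style content evaluation (cf. [3, 11, 12]) typically sums the weights of the content units a student summary covers and normalizes by the best score achievable with the same number of expressed ideas. The Python sketch below illustrates that computation under those assumptions; the names (ContentUnit, pyramid_score) are hypothetical and are not PyrEval’s actual API.

    # Hypothetical sketch of pyramid-style scoring: each content unit carries a
    # weight (how many reference summaries express the idea); a student summary
    # is scored by the weights of the units it covers, normalized by the best
    # score achievable with the same number of expressed ideas.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ContentUnit:
        label: str   # short description of the idea
        weight: int  # number of reference summaries expressing it

    def pyramid_score(pyramid: list[ContentUnit],
                      matched: set[str],
                      n_student_units: int) -> float:
        """Raw score over ideal score, the usual pyramid normalization."""
        raw = sum(cu.weight for cu in pyramid if cu.label in matched)
        # Ideal: the highest-weight units a summary of this size could cover.
        ideal = sum(sorted((cu.weight for cu in pyramid),
                           reverse=True)[:n_student_units])
        return raw / ideal if ideal else 0.0

    # Example: a pyramid built from five reference summaries.
    pyramid = [ContentUnit("main finding", 5),
               ContentUnit("method used", 4),
               ContentUnit("background detail", 1)]
    print(pyramid_score(pyramid, {"main finding", "background detail"}, 2))
    # 6/9 ≈ 0.67: the student covered weights 5 + 1 against an ideal of 5 + 4

On this view, a summary earns more credit for covering ideas that many reference summaries mention, which matches the abstract’s description of weighted content units.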


Notes

  1. We direct readers to [1, 3] for detailed output from PyrEval that shows content alignments between reference summaries and students’ summaries. PyrEval is available at https://github.com/serenayj/PyrEval.

References

  1. Gao, Y., Davies, P.M., Passonneau, R.J.: Automated content analysis: a case study of computer science student summaries. In: Proceedings of the Thirteenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 264–272 (2018)

  2. Gao, Y., et al.: Rubric reliability and annotation of content and argument in source-based argument essays. In: Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 507–518 (2019)

  3. Gao, Y., Sun, C., Passonneau, R.J.: Automated pyramid summarization evaluation. In: Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL), pp. 404–418 (2019)

  4. Graham, S., et al.: Teaching secondary students to write effectively. Educator’s practice guide, NCEE 2017–4002. What Works Clearinghouse (2016)

  5. Guo, W., Diab, M.: Modeling sentences in the latent space. In: Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pp. 864–872 (2012)

  6. Hirvela, A., Du, Q.: “Why am I paraphrasing?”: undergraduate ESL writers’ engagement with source-based academic writing and reading. J. Engl. Acad. Purp. 12(2), 87–98 (2013)

  7. Kintsch, W., Van Dijk, T.A.: Toward a model of text comprehension and production. Psychol. Rev. 85(5), 363 (1978)

  8. Klein, R., Kyrilov, A., Tokman, M.: Automated assessment of short free-text responses in computer science using latent semantic analysis. In: Proceedings of the 16th Annual Joint Conference on Innovation and Technology in Computer Science Education, pp. 158–162 (2011)

  9. Lundstrom, K., Diekema, A.R., Leary, H., Haderlie, S., Holliday, W.: Teaching and learning information synthesis: an intervention and rubric based assessment. Commun. Inf. Lit. 9(1), 4 (2015)

  10. Nadeem, F., Nguyen, H., Liu, Y., Ostendorf, M.: Automated essay scoring with discourse-aware neural models. In: Proceedings of the Fourteenth Workshop on Innovative Use of NLP for Building Educational Applications, pp. 484–493 (2019)

  11. Passonneau, R.J., Chen, E., Guo, W., Perin, D.: Automated pyramid scoring of summaries using distributional semantics. In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pp. 143–147 (2013)

  12. Passonneau, R.J., Poddar, A., Gite, G., Krivokapic, A., Yang, Q., Perin, D.: Wise crowd content assessment and educational rubrics. Int. J. Artif. Intell. Educ. 28(1), 29–55 (2018)

  13. Rakedzon, T., Baram-Tsabari, A.: To make a long story short: a rubric for assessing graduate students’ academic and popular science writing skills. Assess. Writ. 32, 28–42 (2017)

  14. Ranalli, J., Link, S., Chukharev-Hudilainen, E.: Automated writing evaluation for formative assessment of second language writing: investigating the accuracy and usefulness of feedback as part of argument-based validation. Educ. Psychol. 37(1), 8–25 (2017)

  15. Sakai, S., Togasaki, M., Yamazaki, K.: A note on greedy algorithms for the maximum weighted independent set problem. Discret. Appl. Math. 126(2–3), 313–322 (2003)

  16. Sampson, V., Enderle, P., Grooms, J., Witte, S.: Writing to learn by learning to write during the school science laboratory: helping middle and high school students develop argumentative writing skills as they learn core ideas. Sci. Educ. 97(5), 643–670 (2013)


Author information

Corresponding author

Correspondence to Yanjun Gao.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper

Cite this paper

Gao, Y., Passonneau, R.J. (2021). Automated Assessment of Quality and Coverage of Ideas in Students’ Source-Based Writing. In: Roll, I., McNamara, D., Sosnovsky, S., Luckin, R., Dimitrova, V. (eds) Artificial Intelligence in Education. AIED 2021. Lecture Notes in Computer Science, vol 12749. Springer, Cham. https://doi.org/10.1007/978-3-030-78270-2_82

  • DOI: https://doi.org/10.1007/978-3-030-78270-2_82

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78269-6

  • Online ISBN: 978-3-030-78270-2
