
Assessment2Vec: Learning Distributed Representations of Assessments to Reduce Marking Workload

  • Conference paper
Artificial Intelligence in Education (AIED 2021)

Abstract

Reducing instructors' workload in online and large-scale learning environments is an important challenge for educational systems. To address it, Artificial Intelligence techniques have been applied in tutoring systems and automatic essay scoring tasks. In this paper, we construct a novel model, Assessment2Vec, that learns distributed representations of assessments and marks them automatically using a supervised contrastive learning loss, effectively reducing instructors' workload in marking large numbers of assessments. Experimental results on real-world datasets show the effectiveness of the proposed approach.
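The supervised contrastive loss the abstract refers to pulls representations with the same label (e.g. the same mark) together while pushing differently labelled ones apart. The sketch below is a minimal NumPy illustration of that loss, not the authors' implementation; the function name, temperature value, and toy batch are assumptions made for the example.

```python
import numpy as np

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """Supervised contrastive (SupCon) loss over a batch of embeddings.
    Samples sharing a label are treated as positives for each other."""
    # L2-normalise so the dot product is cosine similarity
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature  # temperature-scaled pairwise similarities
    n = len(labels)
    loss, anchors = 0.0, 0
    for i in range(n):
        positives = [p for p in range(n) if p != i and labels[p] == labels[i]]
        if not positives:
            continue  # anchors with no same-label partner are skipped
        others = [a for a in range(n) if a != i]
        # denominator: log-sum-exp over all other samples in the batch
        log_denom = np.log(np.sum(np.exp(sim[i, others])))
        # average negative log-likelihood over the positive set
        loss += -np.mean([sim[i, p] - log_denom for p in positives])
        anchors += 1
    return loss / anchors

# Toy batch: six 8-dimensional embeddings, three classes of two
rng = np.random.default_rng(0)
emb = rng.normal(size=(6, 8))
labels = np.array([0, 0, 1, 1, 2, 2])
print(supervised_contrastive_loss(emb, labels))
```

Because the denominator always includes the positive's own similarity term, the loss is strictly positive; training drives it down by increasing same-label similarities relative to the rest of the batch.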



Acknowledgement

We acknowledge the AI-enabled Processes (AIP) Research Centre and ITIC Pty Ltd for funding this research.

Author information


Correspondence to Shuang Wang or Amin Beheshti.


Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, S. et al. (2021). Assessment2Vec: Learning Distributed Representations of Assessments to Reduce Marking Workload. In: Roll, I., McNamara, D., Sosnovsky, S., Luckin, R., Dimitrova, V. (eds) Artificial Intelligence in Education. AIED 2021. Lecture Notes in Computer Science(), vol 12749. Springer, Cham. https://doi.org/10.1007/978-3-030-78270-2_68


  • DOI: https://doi.org/10.1007/978-3-030-78270-2_68


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78269-6

  • Online ISBN: 978-3-030-78270-2

  • eBook Packages: Computer Science, Computer Science (R0)
