Automated Assessment in Computer Science: A Bibliometric Analysis of the Literature

  • Conference paper
  • In: Learning Technologies and Systems (ICWL 2022, SETE 2022)

Abstract

Over the years, several systematic literature reviews have been published reporting advances in tools and techniques for automated assessment in Computer Science. However, no major bibliometric study has yet examined the relationships among publications, authors, and journals, or their influence, to make these research trends visible. This paper presents a bibliometric study of the automated assessment of programming exercises, including a descriptive analysis based on several bibliometric measures and data visualizations. The data were collected from the Web of Science Core Collection. The results allow us to identify the most influential authors and their affiliations, track the evolution of publications and citations, relate emerging themes across publications, uncover research trends, and more. This paper thus provides deeper knowledge of the literature and helps future researchers get started in the field.
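
The workflow behind such a study (export records from the Web of Science Core Collection, compute descriptive bibliometric measures, visualize the results) can be illustrated in a few lines of code. The following is a minimal sketch, not the authors' actual pipeline: it assumes a hypothetical tab-delimited WoS export named savedrecs.txt and relies on the standard WoS field tags AU (authors), PY (publication year), and TC (times cited) to derive two of the measures mentioned above, the evolution of publications and citations over time and the most-cited authors.

    # Minimal sketch of a descriptive bibliometric analysis over a
    # tab-delimited Web of Science Core Collection export. The file name
    # "savedrecs.txt" is a placeholder; AU, PY, and TC are standard WoS
    # export field tags (authors, publication year, times cited).
    from collections import Counter

    import pandas as pd

    records = pd.read_csv("savedrecs.txt", sep="\t", index_col=False)
    records["TC"] = records["TC"].fillna(0)  # uncited papers may export a blank TC

    # Evolution of the field: publications and accumulated citations per year.
    per_year = records.groupby("PY").agg(
        publications=("PY", "size"),
        citations=("TC", "sum"),
    )
    print(per_year)

    # Most influential authors by total citations; the AU field lists
    # authors separated by "; ".
    author_citations = Counter()
    for _, row in records.dropna(subset=["AU"]).iterrows():
        for author in row["AU"].split("; "):
            author_citations[author] += int(row["TC"])
    print(author_citations.most_common(10))

Counts like these cover only the descriptive side; relationship-oriented results such as co-authorship networks or keyword co-occurrence maps require dedicated science-mapping tooling built on top of the same exported records.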

Acknowledgements

This work is financed by National Funds through the Portuguese funding agency, FCT – Fundação para a Ciência e a Tecnologia, within project LA/P/0063/2020. J.C.P. also wishes to acknowledge the FCT for the Ph.D. Grant 2020.04430.BD.

Author information

Corresponding author: José Carlos Paiva.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Paiva, J.C., Figueira, Á., Leal, J.P. (2023). Automated Assessment in Computer Science: A Bibliometric Analysis of the Literature. In: González-González, C.S., et al. (eds.) Learning Technologies and Systems. ICWL 2022, SETE 2022. Lecture Notes in Computer Science, vol. 13869. Springer, Cham. https://doi.org/10.1007/978-3-031-33023-0_11

  • DOI: https://doi.org/10.1007/978-3-031-33023-0_11

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-33022-3

  • Online ISBN: 978-3-031-33023-0

  • eBook Packages: Computer Science, Computer Science (R0)
