DOI: 10.1145/3510454.3517058

Students vs. professionals: improving the learning of software testing

Published: 19 October 2022

ABSTRACT

Software testing is a crucial phase of software development, and educators increasingly assess the tests that students write. Methods have been proposed to assess the completeness of student-written tests; however, good test quality involves more than completeness, and these additional quality aspects are not covered by previous work. Test smells are patterns of poorly designed tests that may negatively affect the quality of both test and production code. We propose to assess test smells in students' code in order to improve the quality of the software tests they write. In the early stages of this research, we will examine whether practitioners in real-world software projects actually spend time and effort removing the test smells that researchers have catalogued. We will then develop and evaluate our new assessment method.
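
To make the notion of a test smell concrete, consider Assertion Roulette, one of the smells catalogued by van Deursen et al.: a test method packs several unexplained assertions together, so a failing run does not reveal which check broke. The JUnit 5 sketch below is illustrative only; the Cart class and the test names are invented for this example and do not come from the paper.

    import static org.junit.jupiter.api.Assertions.assertEquals;
    import org.junit.jupiter.api.Test;

    // Hypothetical class under test, invented for illustration.
    class Cart {
        private int items = 0;
        private int total = 0;
        void add(int price) { items++; total += price; }
        int itemCount() { return items; }
        int total() { return total; }
    }

    class CartTest {
        // Smell (Assertion Roulette): several assertions with no
        // failure messages, so a failure does not say which
        // expectation was violated.
        @Test
        void testCart() {
            Cart cart = new Cart();
            cart.add(5);
            cart.add(7);
            assertEquals(2, cart.itemCount());
            assertEquals(12, cart.total());
        }

        // One possible refactoring: give each assertion a message
        // (or split the checks into separate, well-named tests).
        @Test
        void addingTwoItemsUpdatesCountAndTotal() {
            Cart cart = new Cart();
            cart.add(5);
            cart.add(7);
            assertEquals(2, cart.itemCount(), "item count after two adds");
            assertEquals(12, cart.total(), "running total after two adds");
        }
    }

Detectors such as tsDetect can flag the first pattern automatically; an assessment method along the lines proposed here would surface that kind of feedback to students.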


Published in

ICSE '22: Proceedings of the ACM/IEEE 44th International Conference on Software Engineering: Companion Proceedings
May 2022, 394 pages
ISBN: 9781450392235
DOI: 10.1145/3510454

Copyright © 2022 ACM

        Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher

Association for Computing Machinery, New York, NY, United States


        Qualifiers

        • research-article

        Acceptance Rates

Overall acceptance rate: 276 of 1,856 submissions, 15%
