Research article · DOI: 10.1145/3530019.3530028

Overlap between Automated Unit and Acceptance Testing – a Systematic Literature Review

Published: 13 June 2022

ABSTRACT

Unit testing and automated acceptance testing have different objectives (e.g., testing units of code versus testing complete features). Testing practices (e.g., test-first, model-based) used at one level of testing may require knowledge and skills that do not transfer to the other. This makes it difficult for practitioners to gain the skills required to test effectively at all levels and to form a cohesive testing strategy. The aim of this systematic literature review is to understand whether any automated unit testing practices have similarities with automated acceptance testing practices (and vice versa); understanding these similarities can enable skill transfer across testing activities at different levels. The review focuses on empirical research with an industry focus. We found that test-driven development (TDD) and model-based test generation (MBTG) are two practices widely researched for both unit testing and acceptance testing. For TDD, we found that a design- and test-first mindset is required and helpful at both the unit and acceptance levels, but practitioners struggle with that practice. For MBTG, we found that, despite its ability to increase code coverage, the additional manual effort needed to enable automated test generation may outweigh its benefits.
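To make the shared test-first mindset concrete, here is a minimal sketch (not taken from the reviewed studies) of the same hypothetical feature, a discount calculator, exercised by a unit-level test and by an acceptance-level test phrased as a user-visible scenario. In TDD, both tests would be written before the implementation, in the red-green-refactor cycle.

```python
import unittest

def discounted_price(price, percent):
    """Hypothetical feature under test, written after the tests below."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be in [0, 100]")
    return round(price * (1 - percent / 100), 2)

class UnitLevelTest(unittest.TestCase):
    # Unit level: exercises one function in isolation.
    def test_ten_percent_off(self):
        self.assertEqual(discounted_price(200.0, 10), 180.0)

    def test_rejects_invalid_percent(self):
        with self.assertRaises(ValueError):
            discounted_price(100.0, 150)

class AcceptanceLevelTest(unittest.TestCase):
    # Acceptance level: a given/when/then scenario stated in user terms,
    # but developed with the same test-first workflow.
    def test_customer_sees_discounted_total(self):
        # Given a cart worth 200.00 and a 10% promotion
        # When the total is computed
        total = discounted_price(200.0, 10)
        # Then the customer pays 180.00
        self.assertEqual(total, 180.0)

if __name__ == "__main__":
    unittest.main()
```

The point of the sketch is that the two levels differ in what they assert about (a function versus a user-facing outcome), while the test-first discipline is identical.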
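Similarly, the core idea of MBTG, deriving test cases from a behavioral model rather than writing each one by hand, can be sketched with a hypothetical finite-state model of a login dialog (states and events below are illustrative assumptions, not from the reviewed studies). The manual effort the abstract mentions is the effort of building and maintaining such a model.

```python
from collections import deque

# Hypothetical model: states are dialog screens, events are user actions.
MODEL = {
    ("logged_out", "enter_valid_credentials"): "logged_in",
    ("logged_out", "enter_bad_credentials"): "error_shown",
    ("error_shown", "dismiss_error"): "logged_out",
    ("logged_in", "log_out"): "logged_out",
}
START = "logged_out"

def generate_tests(model, start):
    """Derive one event sequence per transition (transition coverage):
    reach the transition's source state via a shortest path, then fire
    the transition's event."""
    # Breadth-first search for a shortest event path to each state.
    paths = {start: []}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        for (src, event), dst in model.items():
            if src == state and dst not in paths:
                paths[dst] = paths[state] + [event]
                queue.append(dst)
    return [paths[src] + [event] for (src, event) in model if src in paths]

tests = generate_tests(MODEL, START)
# e.g., ["enter_bad_credentials", "dismiss_error"] covers the
# error_shown -> logged_out transition.
```

Each generated sequence would then be executed against the system under test, which is where the coverage gains, and the modeling overhead, of MBTG both come from.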


Published in

EASE '22: Proceedings of the 26th International Conference on Evaluation and Assessment in Software Engineering, June 2022, 466 pages. ISBN: 9781450396134. DOI: 10.1145/3530019. Copyright © 2022 ACM.


Publisher: Association for Computing Machinery, New York, NY, United States.



Acceptance rate: 71 of 232 submissions, 31%.
