ABSTRACT
Automated unit test generation tools can produce tests that are superior to manually written ones in terms of code coverage, but are these tests helpful to developers while they are writing code? A developer would first need to know when and how to apply such a tool, and would then need to understand the resulting tests in order to provide test oracles and to diagnose and fix any faults that the tests reveal. Considering all this, does automatically generating unit tests provide any benefit over simply writing unit tests manually? We empirically investigated the effects of using an automated unit test generation tool (EvoSuite) during development. A controlled experiment with 41 students shows that using EvoSuite leads to an average branch coverage increase of +13%, and 36% less time is spent on testing compared to writing unit tests manually. However, there is no clear effect on the quality of the implementations, as it depends on how the test generation tool and the generated tests are used. In-depth analysis, using five think-aloud observations with professional programmers, confirms the necessity to increase the usability of automated unit test generation tools, to integrate them better during software development, and to educate software developers on how to best use those tools.
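To make the workflow the abstract alludes to concrete for readers unfamiliar with the tool: the sketch below shows how EvoSuite is typically invoked against compiled classes and the general shape of a test it generates. The `Stack` class, the classpath, and the test body are hypothetical illustrations assumed for this example, not artifacts from the study.

```java
import static org.junit.Assert.assertEquals;
import org.junit.Test;

// Hypothetical class under test (not from the study's materials).
class Stack {
    private final java.util.Deque<Integer> items = new java.util.ArrayDeque<>();
    void push(int value) { items.push(value); }
    int pop() { return items.pop(); }
}

// EvoSuite is distributed as a runnable jar and pointed at compiled classes, e.g.:
//   java -jar evosuite.jar -class Stack -projectCP target/classes
// It then evolves a JUnit 4 suite aimed at maximising branch coverage. The test
// below mimics the shape of such generated tests (EvoSuite names suites
// <Class>_ESTest and attaches per-test timeouts).
public class Stack_ESTest {

    @Test(timeout = 4000)
    public void test0() throws Throwable {
        Stack stack = new Stack();
        stack.push(42);
        int result = stack.pop();
        // Regression assertion derived from the observed behaviour of the
        // code, not from a specification.
        assertEquals(42, result);
    }
}
```

The assertion is the crux of the abstract's argument: because the tool derives it from the current behaviour of the implementation, the developer must still act as the test oracle, confirming or correcting each generated assertion before the test can reveal anything beyond crashes.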