Abstract
Test maintenance has recently gained increasing attention from the software testing research community. Automated unit test generation tools typically create tests using random or search-based algorithms. Although these tools produce a large number of tests quickly, they mostly seek to improve test coverage, overlooking other quality attributes such as understandability and readability. As a result, maintaining a large, automatically generated test suite is quite challenging. In this paper, exploiting the high degree of similarity among automatically generated tests, we propose a technique that automatically abstracts similar tests by transforming them into parameterized tests. This improves readability and understandability both by reducing the size of the test suite and by separating the tests' data from their logic. We have implemented the technique as a plugin for IntelliJ IDEA and evaluated it on test suites produced by the Randoop test generation tool. The results demonstrate that the proposed approach effectively reduces test suite size by 11% to 96%, with an average of 66%.
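To illustrate the kind of transformation the abstract describes, consider the following minimal sketch in JUnit 4 (the format Randoop emits). The class names, method names, and data values are invented for illustration; this is not the paper's actual algorithm or tool output. Two generated tests that share the same statement sequence but differ only in their literal values can be merged into one parameterized test whose data table holds the varying values:

// Two structurally identical generated tests (hypothetical Randoop-style output):
@Test
public void test001() {
    int result = Math.abs(-5);
    org.junit.Assert.assertEquals(5, result);
}

@Test
public void test002() {
    int result = Math.abs(7);
    org.junit.Assert.assertEquals(7, result);
}

// The same behavior expressed as a single JUnit 4 parameterized test:
// the shared logic appears once, and the literals extracted from the
// original tests become rows of a data table.
import java.util.Arrays;
import java.util.Collection;
import org.junit.Assert;
import org.junit.Test;
import org.junit.runner.RunWith;
import org.junit.runners.Parameterized;

@RunWith(Parameterized.class)
public class AbsTest {
    private final int input;
    private final int expected;

    public AbsTest(int input, int expected) {
        this.input = input;
        this.expected = expected;
    }

    @Parameterized.Parameters
    public static Collection<Object[]> data() {
        // One row per original test: { input, expected }.
        return Arrays.asList(new Object[][] { { -5, 5 }, { 7, 7 } });
    }

    @Test
    public void absReturnsMagnitude() {
        Assert.assertEquals(expected, Math.abs(input));
    }
}

Applied across the many near-duplicate tests such generators produce, merges of this shape are what make the size reductions reported in the abstract plausible.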
Cite this paper
Azamnouri, A., Paydar, S.: Compressing Automatically Generated Unit Test Suites Through Test Parameterization. In: Hojjat, H., Massink, M. (eds.) Fundamentals of Software Engineering. FSEN 2021. Lecture Notes in Computer Science, vol. 12818. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-89247-0_15