ABSTRACT
Writing automated test cases is a challenging and demanding activity. The test case itself is software that requires proper design to ensure it can be implemented and maintained as the production code evolves. Like code smells, test smells may indicate violations of principles that negatively affect the quality of test code design, making the test code difficult to comprehend and, consequently, impairing its proper use and evolution. This work investigates the occurrence of test smells in JavaScript test code and whether their presence can be correlated with test code quality. We perform an empirical study using the STEEL tool, in which the test suites of 11 open-source JavaScript projects hosted on GitHub are analyzed to detect a set of previously cataloged test smells. We then investigate: i) which smells occur most frequently; ii) whether given test smells are likely to occur together; and iii) whether the presence of certain test smells is related to classical bad-design indicators in the test code. We found that the Duplicate Assert, Magic Number Test, Unknown Test, and Conditional Test Logic smells are the most common in JavaScript test code, whereas the Mystery Guest, Ignored Test, and Resource Optimism smells are the least common. Moreover, the Conditional Test Logic, Magic Number Test, Duplicate Assert, and Exception Handling smells often appear together. Furthermore, there is a moderate to strong positive correlation between the counts of some smells and quality measures in the test code. We conclude that test smells are frequently found in JavaScript test code, and their presence may be an indicator of low design quality.
- Investigating Test Smells in JavaScript Test Code