DOI: 10.1145/3482909.3482915 · research-article · SAST Conference Proceedings

Investigating Test Smells in JavaScript Test Code

Published: 12 October 2021

ABSTRACT

Writing automated test cases is a challenging and demanding activity. The test case itself is software that requires proper design to ensure it can be implemented and maintained as the production code evolves. Like code smells, test smells may indicate violations of principles that negatively affect the quality of test code design, making it difficult to comprehend and, consequently, impairing its proper use and evolution. This work aims to investigate the occurrence of test smells in JavaScript test code and whether their presence can be correlated with test code quality. We perform an empirical study using the STEEL tool, in which the test suites of 11 open-source JavaScript projects hosted on GitHub are analyzed to detect a set of previously cataloged test smells. We then investigate: i) which smells occur most frequently; ii) whether given test smells are likely to occur together; and iii) whether the presence of certain test smells is related to classical bad-design indicators in the test code. We found that the Duplicate Assert, Magic Number Test, Unknown Test, and Conditional Test Logic smells are the most common in JavaScript test code, whereas the Mystery Guest, Ignored Test, and Resource Optimism smells are the least common. Moreover, the Conditional Test Logic, Magic Number Test, Duplicate Assert, and Exception Handling smells often appear together. Furthermore, there is a moderate to strong positive correlation between the counts of some smells and quality measures in the test code. We can conclude that test smells are frequently found in JavaScript test code, and their presence may be an indicator of low design quality.


Published in

SAST '21: Proceedings of the 6th Brazilian Symposium on Systematic and Automated Software Testing
September 2021, 63 pages
ISBN: 9781450385039
DOI: 10.1145/3482909

Copyright © 2021 ACM

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

Publisher: Association for Computing Machinery, New York, NY, United States


Qualifiers: research-article, refereed limited

Overall acceptance rate: 45 of 92 submissions (49%)
