Research Article · DOI: 10.1145/3474624.3477060 · SBES Conference Proceedings

Definition of a Knowledge Base Towards a Benchmark for Experiments with Mutation Testing

Published: 05 October 2021

ABSTRACT

Context: Mutation testing has been investigated since the late 1970s. Since then, researchers have devised dozens of mutation approaches, including ways of generating, executing, and analyzing mutants, as well as ways of reducing the cost of applying the technique. However, research in this field falls short when it comes to producing a representative and manageable set of artifacts to enable experiments with the plethora of existing mutation approaches. Objective: In this paper, we describe the process and current results of creating a knowledge base of mutation-related artifacts to support experiments with mutation testing. Method: We set up the Evosuite tool for generating test cases and the PIT tool for generating and running mutants. We also created scripts to import the results into a relational database. The database includes procedures to generate killing matrices for the tested Java classes. Results: Beyond establishing the tooling infrastructure, we populated our database with classes extracted from five Java projects, of which four are open source projects hosted on GitHub and the fifth is composed of simple Java programs. Currently, the database includes around 2,000 classes, 50,000 test cases, and 195,000 mutants. Conclusion: The database structure eases the addition of other Java programs and the related mutation artifacts. Furthermore, it supports tasks such as minimizing test sets and mutant sets (e.g., by removing redundant tests and trivial mutants), thus providing researchers with a well-established and extensible basis for varied experiments.
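The abstract mentions generating killing matrices and minimizing test sets by removing redundant tests. The following sketch illustrates those two ideas with a tiny, hypothetical killing matrix and a greedy set-cover heuristic; the test and mutant names are illustrative only and do not come from the paper's actual database, and the authors' own procedures (implemented in the relational database) may differ.

```python
# Hypothetical killing matrix: rows are test cases, columns are mutants.
# True means the test kills the mutant. All names are made up for this
# illustration; the paper's database holds the real matrices.
kill_matrix = {
    "t1": {"m1": True,  "m2": False, "m3": True},
    "t2": {"m1": True,  "m2": False, "m3": False},
    "t3": {"m1": False, "m2": True,  "m3": True},
}

def minimize_tests(matrix):
    """Greedy set-cover sketch: keep the fewest tests that still kill
    every killable mutant, i.e., drop redundant tests."""
    killable = {m for row in matrix.values() for m, killed in row.items() if killed}
    selected, covered = [], set()
    while covered != killable:
        # Pick the test that kills the most not-yet-covered mutants.
        best = max(matrix, key=lambda t: len(
            {m for m, killed in matrix[t].items() if killed} - covered))
        selected.append(best)
        covered |= {m for m, killed in matrix[best].items() if killed}
    return selected

# t2 is redundant: every mutant it kills is also killed by t1.
print(minimize_tests(kill_matrix))  # → ['t1', 't3']
```

Greedy set cover is a standard heuristic for this kind of minimization; the resulting set is not guaranteed to be globally minimal, but it preserves the mutation score of the full test set.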

Published in: SBES '21: Proceedings of the XXXV Brazilian Symposium on Software Engineering, September 2021, 473 pages. ISBN: 9781450390613. DOI: 10.1145/3474624.

      Copyright © 2021 ACM


Publisher: Association for Computing Machinery, New York, NY, United States.

Qualifiers: research-article (refereed, limited).

Overall Acceptance Rate: 147 of 427 submissions, 34%.
