DOI: 10.1145/3340433.3342825
Research article

A platform for diversity-driven test amplification

Published: 26 August 2019

Abstract

Test amplification approaches take a manually written set of tests (input/output mappings) and enhance their effectiveness for a clearly defined engineering goal such as detecting faults. Conceptually, they can achieve this either in a "black box" way, using only the initial "seed" tests, or in a "white box" way, utilizing additional inputs such as the source code or specification of the software under test. However, no fully black box approach to test amplification is currently available, even though such an approach could be used to enhance white box approaches. In this paper we introduce a new approach that uses the seed tests to search for existing redundant implementations of the software under test and leverages them as oracles in the generation and evaluation of new tests. The approach can therefore be used as a stand-alone black box test amplification method or in tandem with other methods. We explain the approach, describe its synergies with other approaches, and provide evidence of its practical feasibility.
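The core idea in the abstract — filter candidate implementations by the seed tests, then use the survivors as pseudo-oracles for new inputs — can be illustrated with a minimal sketch. All names here (`passes_seeds`, `amplify`) are hypothetical illustrations, not the paper's actual platform API:

```python
# Hypothetical sketch of diversity-driven test amplification:
# seed tests define input/output pairs; candidate implementations
# that reproduce every seed mapping are kept as pseudo-oracles, and
# new (input, output) test cases are emitted only where all surviving
# oracles agree on the output.

def passes_seeds(impl, seed_tests):
    """Check whether a candidate implementation reproduces every seed mapping."""
    return all(impl(inp) == out for inp, out in seed_tests)

def amplify(seed_tests, candidates, new_inputs):
    """Use seed-consistent candidates as oracles for generated inputs."""
    oracles = [c for c in candidates if passes_seeds(c, seed_tests)]
    amplified = []
    for inp in new_inputs:
        outputs = {oracle(inp) for oracle in oracles}
        if len(outputs) == 1:  # all redundant implementations agree
            amplified.append((inp, outputs.pop()))
    return amplified

# Example: seed tests for an absolute-value function.
seeds = [(3, 3), (-4, 4)]
candidates = [abs, lambda x: max(x, -x), lambda x: x]  # the last fails the seeds
print(amplify(seeds, candidates, [0, -7]))  # [(0, 0), (-7, 7)]
```

In the paper's setting the candidates would be harvested from code repositories (e.g. via test-driven search) rather than supplied in-process, but the filtering and cross-checking logic follows this shape.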


Cited By

  • (2024) Promoting open science in test-driven software experiments. Journal of Systems and Software 212:C. DOI: 10.1016/j.jss.2024.111971. Online publication date: 1-Jun-2024.
  • (2023) Cross-coverage testing of functionally equivalent programs. 2023 IEEE/ACM International Conference on Automation of Software Test (AST), 101–111. DOI: 10.1109/AST58925.2023.00014. Online publication date: May-2023.
  • (2022) Diversity-driven unit test generation. Journal of Systems and Software 193:C. DOI: 10.1016/j.jss.2022.111442. Online publication date: 1-Nov-2022.


Published In

A-TEST 2019: Proceedings of the 10th ACM SIGSOFT International Workshop on Automating TEST Case Design, Selection, and Evaluation
August 2019
41 pages
ISBN:9781450368506
DOI:10.1145/3340433

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. automated testing
  2. behavior
  3. mining software repositories
  4. observations
  5. oracle problem
  6. test amplification

Conference

ESEC/FSE '19

Article Metrics

  • Downloads (last 12 months): 14
  • Downloads (last 6 weeks): 4

Reflects downloads up to 08 Mar 2025

