DOI: 10.1145/3356317.3356324

SeleCTT: An Infrastructure for Selection of Concurrent Software Testing Techniques

Published: 23 September 2019

Abstract

[Background]: A variety of software testing techniques has been published by academia in recent years; however, industry rarely adopts them. Transferring this knowledge from academia to industry is a challenge, caused mainly by the lack of a solid body of evidence with actionable information to help testers select a testing technique. [Aim]: This paper presents a computational infrastructure, the SeleCTT tool, that uses relevant information about concurrent software testing techniques to automate the selection of an adequate testing approach for a concurrent software project. [Method]: We reviewed the available technical literature to identify the attributes and concepts of concurrent software testing that affect the selection process, and developed a characterization approach that serves as the kernel of the tool. [Results and Conclusions]: An online repository was built to guide technique selection and to foster interaction with the interested community; it works as a body of evidence for the area. The SeleCTT tool thus provides broad access to this information and supports the decision-making process when choosing a testing technique suitable for a concurrent software project.
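To make the selection idea concrete, below is a minimal Python sketch of the attribute-based matching that an infrastructure like SeleCTT performs: each cataloged technique is characterized by attributes, and techniques are ranked by how well they cover a project's profile. The attribute names, catalog entries, and scoring rule are illustrative assumptions for exposition, not SeleCTT's actual characterization schema or API.

from dataclasses import dataclass, field

@dataclass
class Technique:
    """A cataloged testing technique described by characterization attributes."""
    name: str
    attributes: dict = field(default_factory=dict)  # attribute -> set of supported values

def score(technique: Technique, profile: dict) -> float:
    """Fraction of the project's required attributes that the technique covers."""
    if not profile:
        return 0.0
    matched = sum(
        1 for attr, required in profile.items()
        if required & technique.attributes.get(attr, set())
    )
    return matched / len(profile)

def select(catalog: list, profile: dict, threshold: float = 0.5) -> list:
    """Rank cataloged techniques by coverage and keep those above the threshold."""
    ranked = sorted(catalog, key=lambda t: score(t, profile), reverse=True)
    return [t for t in ranked if score(t, profile) >= threshold]

# Hypothetical catalog entries (not taken from the actual SeleCTT repository).
catalog = [
    Technique("Reachability testing", {
        "paradigm": {"shared-memory", "message-passing"},
        "defect": {"deadlock", "data-race"},
        "automation": {"tool-supported"},
    }),
    Technique("Structural testing for message-passing programs", {
        "paradigm": {"message-passing"},
        "defect": {"communication-fault"},
        "automation": {"tool-supported"},
    }),
]

# A project profile: a shared-memory program where data races are the main concern.
profile = {"paradigm": {"shared-memory"}, "defect": {"data-race"}}

for technique in select(catalog, profile):
    print(f"{technique.name}: {score(technique, profile):.2f}")

In the tool itself, the catalog would be backed by the online repository described above rather than hard-coded, so the matching can draw on community-maintained evidence.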


Cited By

  • (2022) JUGE: An infrastructure for benchmarking Java unit test generators. Software Testing, Verification and Reliability 33:3. DOI: 10.1002/stvr.1838. Online publication date: 20-Dec-2022.
  • (2020) Contributions to improve the combined selection of concurrent software testing techniques. Proceedings of the 5th Brazilian Symposium on Systematic and Automated Software Testing, 69-78. DOI: 10.1145/3425174.3425214. Online publication date: 20-Oct-2020.
  • (2020) Towards a unified catalog of attributes to guide industry in software testing technique selection. 2020 IEEE International Conference on Software Testing, Verification and Validation Workshops (ICSTW), 398-407. DOI: 10.1109/ICSTW50294.2020.00071. Online publication date: Oct-2020.


Published In

SAST '19: Proceedings of the IV Brazilian Symposium on Systematic and Automated Software Testing
September 2019
99 pages
ISBN:9781450376488
DOI:10.1145/3356317
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

In-Cooperation

  • SBC: Sociedade Brasileira de Computação

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. Concurrent Software Testing
  2. Testing Techniques Selection

Qualifiers

  • Research-article
  • Research
  • Refereed limited

Funding Sources

  • Fundação de Amparo à Pesquisa do Estado de São Paulo

Conference

SAST 2019

Acceptance Rates

SAST '19 Paper Acceptance Rate: 9 of 22 submissions, 41%.
Overall Acceptance Rate: 45 of 92 submissions, 49%.

