DOI: 10.1145/3194733.3194740

Guided test case generation through AI enabled output space exploration

Published: 28 May 2018

Abstract

Black-box software testing is a crucial part of quality assurance for industrial products. To verify the reliable behavior of software-intensive systems, testing needs to ensure that the system produces the correct outputs for a variety of inputs. Even more critically, it needs to ensure that unexpected corner cases are tested. Existing approaches attempt to address this problem by generating input data for known outputs based on the domain knowledge of an expert. Such input space exploration, however, does not guarantee adequate coverage of the output space, because the test input data are generated independently of the system output. This paper discusses a novel test case generation approach, enabled by neural networks, that promises a higher probability of exposing system faults by systematically exploring the output space of the system under test. The approach thus potentially improves defect detection by identifying gaps in the test suite, i.e., system outputs not covered by any test. These gaps are closed by automatically determining inputs that lead to specific outputs through backward reasoning on an artificial neural network. The approach is demonstrated on an industrial train control system.
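To make the idea concrete, the sketch below illustrates one way such backward reasoning could be realized: a small neural network is trained as a surrogate of the system under test from existing test executions, and gradient descent on the input (with the surrogate's weights frozen) searches for an input that drives the predicted output towards a value the test suite does not yet cover. This is a minimal sketch, not the paper's implementation; the toy function system_under_test, the two-dimensional input space, the target output value, and the TensorFlow/Keras surrogate are all illustrative assumptions.

```python
import numpy as np
import tensorflow as tf

# Hypothetical stand-in for the black-box system under test (SUT):
# two numeric inputs, one numeric output.
def system_under_test(x):
    return np.sin(x[:, :1]) + 0.5 * x[:, 1:]

# 1) Collect input/output pairs from existing, input-driven test executions.
rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, size=(500, 2)).astype(np.float32)
Y = system_under_test(X).astype(np.float32)

# 2) Train a neural-network surrogate of the SUT's input/output behavior.
surrogate = tf.keras.Sequential([
    tf.keras.Input(shape=(2,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1),
])
surrogate.compile(optimizer="adam", loss="mse")
surrogate.fit(X, Y, epochs=200, verbose=0)

# 3) Pick an output-space gap: a target output value that no existing
#    test case produces (chosen by hand here for illustration).
target = tf.constant([[2.0]], dtype=tf.float32)

# 4) "Backward reasoning": keep the surrogate's weights fixed and run
#    gradient descent on the INPUT to steer the predicted output
#    towards the uncovered target.
x = tf.Variable(tf.zeros((1, 2)))
opt = tf.keras.optimizers.Adam(learning_rate=0.05)
for _ in range(1000):
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(surrogate(x) - target))
    grads = tape.gradient(loss, [x])
    opt.apply_gradients(zip(grads, [x]))

candidate = x.numpy()
print("candidate test input :", candidate)
print("surrogate prediction :", surrogate(candidate).numpy())
print("actual SUT output    :", system_under_test(candidate))
```

In the paper's industrial setting, the recovered input would presumably be replayed against the real train control system to confirm that it actually produces the previously uncovered output, turning the (input, expected output) pair into a new test case; constraining the search to the valid input domain is a further practical concern this sketch omits.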


Cited By

  • (2023) Automated Support for Unit Test Generation. In: Optimising the Software Development Process with Artificial Intelligence, 179-219. DOI: 10.1007/978-981-19-9948-2_7. Online publication date: 20-Jul-2023.
  • (2023) The integration of machine learning into automated test generation: A systematic mapping study. Software Testing, Verification and Reliability 33(4). DOI: 10.1002/stvr.1845. Online publication date: 2-May-2023.
  • (2021) DroidGamer: Android Game Testing with Operable Widget Recognition by Deep Learning. 2021 IEEE 21st International Conference on Software Quality, Reliability and Security (QRS), 197-206. DOI: 10.1109/QRS54544.2021.00031. Online publication date: Dec-2021.
  • (2021) Boosting Exploratory Testing of Industrial Automation Systems with AI. 2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST), 362-371. DOI: 10.1109/ICST49551.2021.00048. Online publication date: Apr-2021.



Published In

AST '18: Proceedings of the 13th International Workshop on Automation of Software Test
May 2018
85 pages
ISBN:9781450357432
DOI:10.1145/3194733


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. adversarial examples
  2. artificial intelligence
  3. black-box testing
  4. neural networks
  5. test design
  6. test oracle

Qualifiers

  • Short-paper

Conference

ICSE '18



Bibliometrics & Citations


Article Metrics

  • Downloads (Last 12 months): 22
  • Downloads (Last 6 weeks): 7
Reflects downloads up to 20 Feb 2025


