Abstract
System testing is essential for developing high-quality systems, but its degree of automation is still low. There is therefore high potential for Artificial Intelligence (AI) techniques such as machine learning, natural language processing, and search-based optimization to improve the effectiveness and efficiency of system testing. This chapter presents where and how AI techniques can be applied to automate and optimize system testing activities. First, we identify the different system testing activities (i.e., test planning and analysis, test design, test execution, and test evaluation) and indicate how AI techniques can be applied to automate and optimize them. We then present an industrial case study on test case analysis, in which AI techniques encode natural language test specifications and group them into clusters of similar test cases for cluster-based test optimization. Finally, we discuss the levels of autonomy of AI in system testing.
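The clustering step described above can be sketched in a few lines. The case study itself encodes test specifications with doc2vec [51]; the snippet below is a minimal stand-in that uses simple bag-of-words term-frequency vectors and cosine similarity (both hypothetical simplifications, not the chapter's actual implementation), then groups specifications that are transitively similar above a threshold:

```python
from collections import Counter
import math

def encode(text):
    # Bag-of-words term-frequency vector; the case study uses
    # doc2vec embeddings instead of this simplification.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def cluster(specs, threshold=0.5):
    # Group specifications whose pairwise similarity exceeds the
    # threshold (transitively): a connected-components stand-in
    # for the clustering algorithm used in the case study.
    vecs = [encode(s) for s in specs]
    parent = list(range(len(specs)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(specs)):
        for j in range(i + 1, len(specs)):
            if cosine(vecs[i], vecs[j]) >= threshold:
                parent[find(i)] = find(j)
    groups = {}
    for i in range(len(specs)):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

specs = [
    "verify engine start signal after power on",
    "verify engine start signal after cold power on",
    "check display brightness configuration menu",
]
print(cluster(specs))  # → [[0, 1], [2]]
```

The resulting clusters of similar test cases can then drive cluster-based optimization, e.g., scheduling one representative per cluster first [52].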
Notes
- 1.
Editors' note: The source code linked in this chapter may change after this book is published. A snapshot of the source code accompanying this chapter can be found at https://doi.org/10.5281/zenodo.6965479.
- 2.
Please note that the authors have approval from the third party for the case study used and for its code, provided in the Git repository https://github.com/leohatvani/clustering-dependency-detection. Permission to use the code is also granted to the reader.
- 3.
The source code of our work can be found online at [16], together with anonymized feature vectors and a test case graph.
- 4.
A measure of a model's accuracy on a dataset.
- 5.
The Area Under the Curve (AUC) provides an aggregate measure of performance across all possible classification thresholds.
- 6.
Based on the levels shown by Synopsys https://www.synopsys.com/automotive/autonomous-driving-levels.html.
References
A. Abran, J. Moore, P. Bourque, R. Dupuis, L. Tripp, Software engineering body of knowledge. IEEE Comput. Soc. (2004)
D. Adamo, M.K. Khan, S. Koppula, R. Bryce, Reinforcement learning for android GUI testing, in Proceedings of the 9th ACM SIGSOFT International Workshop on Automating TEST Case Design, Selection, and Evaluation (2018), pp. 2–8
S. Anand, E. Burke, T.Y. Chen, J. Clark, M. Cohen, W. Grieskamp, M. Harman, M. Harrold, P. McMinn, A. Bertolino et al., An orchestrated survey of methodologies for automated software test case generation. J. Syst. Softw. 86(8), 1978–2001 (2013)
A. Arcuri, Test suite generation with the many independent objective (MIO) algorithm. Inf. Softw. Technol. 104, 195–206 (2018)
K. Baral, J. Offutt, F. Mulla, Self determination: a comprehensive strategy for making automated tests more effective and efficient, in 2021 14th IEEE Conference on Software Testing, Verification and Validation (ICST) (IEEE, 2021), pp. 127–136
G. Bath, E. Van Veenendaal, Improving the Test Process: Implementing Improvement and Change-A Study Guide for the ISTQB Expert Level Module (Rocky Nook, Inc., 2013)
L.C. Briand, Y. Labiche, M. Shousha, Using genetic algorithms for early schedulability analysis and stress testing in real-time systems. Genet. Program Evolvable Mach. 7(2), 145–170 (2006)
P. Dutta, G. Ryan, A. Zieba, S. Stolfo, Simulated user bots: real time testing of insider threat detection systems, in 2018 IEEE Security and Privacy Workshops (SPW) (IEEE, 2018), pp. 228–236
E. Enoiu, M. Frasheri, Test agents: The next generation of test cases, in International Conference on Software Testing, Verification and Validation Workshops (ICSTW) (IEEE, 2019), pp. 305–308
L. Erlenhov, F.G. Oliveira Neto, R. Scandariato, P. Leitner, Current and future bots in software development, in International Workshop on Bots in Software Engineering (BotSE) (IEEE, 2019), pp. 7–11
M. Felderer, I. Schieferdecker, A taxonomy of risk-based testing. Int. J. Softw. Tools Technol. Transfer 16(5), 559–568 (2014)
R. Feldt, F.G. Oliveira Neto, R. Torkar, Ways of applying artificial intelligence in software engineering, in International Workshop on Realizing Artificial Intelligence Synergies in Software Engineering (RAISE) (IEEE, 2018), pp. 35–41
K. Frounchi, L.C. Briand, L. Grady, Y. Labiche, R. Subramanyan, Automating image segmentation verification and validation by learning test oracles. Inf. Softw. Technol. 53(12), 1337–1348 (2011)
V. Garousi, M. Felderer, Ç.M. Karapıçak, U. Yılmaz, Testing embedded software: a survey of the literature. Inf. Softw. Technol. 104, 14–45 (2018)
M. Harman, P. McMinn, J.T. Souza, S. Yoo, Search based software engineering: techniques, taxonomy, tutorial, in Empirical Software Engineering and Verification (Springer, Berlin, 2010), pp. 1–59
L. Hatvani, S. Tahvili, Clustering dependency detection (2018). https://github.com/leohatvani/clustering-dependency-detection
IEEE: IEEE standard glossary of software engineering terminology. IEEE Std. 610.12-1990 (1990), pp. 1–84
J.Y. Jiang, M. Zhang, C. Li, M. Bendersky, N. Golbandi, M. Najork, Semantic text matching for long-form documents, in WWW ’19 (Association for Computing Machinery, New York, NY, USA, 2019)
M. Johnson, J. Bradshaw, P. Feltovich, C. Jonker, B. Van Riemsdijk, M. Sierhuis, The fundamental principle of coactive design: Interdependence must shape autonomy, in International Workshop on Coordination, Organizations, Institutions, and Norms in Agent Systems (Springer, Berlin, 2010), pp. 172–191
A. Joshi, E. Fidalgo, E. Alegre, L. Fernández-Robles, Summcoder: an unsupervised framework for extractive text summarization based on deep auto-encoders. Expert Syst. Appl. 129, 200–215 (2019)
M. Kane, Validating the interpretations and uses of test scores. J. Educ. Meas. 50, 1–73 (2013)
P. Kumaresen, M. Frasheri, E. Enoiu, Agent-based software testing: a definition and systematic mapping study, in 2020 IEEE 20th International Conference on Software Quality, Reliability and Security Companion (QRS-C) (IEEE, 2020), pp. 24–31
D. Liang, F. Zhang, W. Zhang, Q. Zhang, J. Fu, M. Peng, T. Gui, X. Huang, Adaptive multi-attention network incorporating answer information for duplicate question detection, in SIGIR’19 (Association for Computing Machinery, New York, NY, USA, 2019), pp. 95–104
L. van der Maaten, E. Postma, H. van den Herik, Dimensionality reduction: a comparative review. J. Mach. Learn. Res. 10 (2009)
C. Malz, N. Jazdi, Agent-based test management for software system test, in International Conference on Automation Quality and Testing Robotics (AQTR), vol. 2 (IEEE, 2010), pp. 1–6
M. Mansoor, Z. Rehman, M. Shaheen, M. Khan, M. Habib, Deep learning based semantic similarity detection using text data. Inf. Technol. Control 49 (2020)
I. Markov, H. Gómez-Adorno, J.P. Posadas-Durán, G. Sidorov, A. Gelbukh, Author profiling with doc2vec neural network-based document embeddings, in Advances in Soft Computing. ed. by O. Pichardo-Lagunas, S. Miranda-Jiménez (Springer International Publishing, Cham, 2017), pp.117–131
A.M. Memon, M.E. Pollack, M.L. Soffa, A planning-based approach to GUI testing, in Proceedings of The 13th International Software/Internet Quality Week (2000)
R. Ramler, M. Felderer, Requirements for integrating defect prediction and risk-based testing, in 2016 42nd Euromicro Conference on Software Engineering and Advanced Applications (SEAA) (IEEE, 2016), pp. 359–362
S. Tahvili, W. Afzal, M. Saadatmand, M. Bohlin, S.H. Ameerjan, Espret: a tool for execution time estimation of manual test cases. J. Syst. Softw. 161, 1–43 (2018)
S. Tahvili, M. Bohlin, M. Saadatmand, S. Larsson, W. Afzal, D. Sundmark, Cost-benefit analysis of using dependency knowledge at integration testing, in The 17th International Conference On Product-Focused Software Process Improvement (2016)
S. Tahvili, L. Hatvani, M. Felderer, W. Afzal, M. Bohlin, Automated functional dependency detection between test cases using doc2vec and clustering, in 2019 IEEE International Conference On Artificial Intelligence Testing (AITest) (IEEE, 2019), pp. 19–26
S. Tahvili, L. Hatvani, M. Felderer, W. Afzal, M. Saadatmand, M. Bohlin, Cluster-based test scheduling strategies using semantic relationships between test specifications, in Proceedings of the 5th International Workshop on Requirements Engineering and Testing (2018), pp. 1–4
S. Tahvili, L. Hatvani, E. Ramentol, R. Pimentel, W. Afzal, F. Herrera, A novel methodology to classify test cases using natural language processing and imbalanced learning. Eng. Appl. Artif. Intell. 95, 1–13 (2020)
J. Tang, Towards automation in software test life cycle based on multi-agent, in International Conference on Computational Intelligence and Software Engineering (IEEE, 2010), pp. 1–4
M. Utting, A. Pretschner, B. Legeard, A taxonomy of model-based testing approaches. Softw. Test., Verif. Reliab. 22(5), 297–312 (2012)
M. Winikoff, Future directions for agent-based software engineering. Int. J. Agent-Oriented Softw. Eng. 3(4), 402–410 (2009)
M. Wooldridge, Agent-based software engineering. IEE Proc.-Softw. 144(1), 26–37 (1997)
Acknowledgements
This work was partially supported by the Austrian Science Fund (FWF): I 4701-N and the project ConTest funded by the Austrian Research Promotion Agency (FFG). Eduard Enoiu was partially supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement No. 957212.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
Cite this chapter
Felderer, M., Enoiu, E.P., Tahvili, S. (2023). Artificial Intelligence Techniques in System Testing. In: Romero, J.R., Medina-Bulo, I., Chicano, F. (eds) Optimising the Software Development Process with Artificial Intelligence. Natural Computing Series. Springer, Singapore. https://doi.org/10.1007/978-981-19-9948-2_8
Publisher Name: Springer, Singapore
Print ISBN: 978-981-19-9947-5
Online ISBN: 978-981-19-9948-2
eBook Packages: Computer Science, Computer Science (R0)