Abstract
Black box testing can employ randomness for generating test sequences. Often, even a large number of test sequences samples only a minuscule portion of the possible behaviors, thus missing failures of the system under test. The challenge is to reconcile the tradeoff between good coverage and high complexity. Combining black box testing with learning a sequence of increasingly accurate models of the tested system has been suggested for improving the coverage of black box testing; the learned models can then be explored more comprehensively, e.g., using model checking. We present a light-weight approach that employs machine learning ideas to improve coverage and accelerate the testing process. Rather than constructing a complete model of the tested system, we construct a kernel whose nodes are consistent with the prefixes of the test sequences examined so far; as testing proceeds, we keep refining and expanding the kernel. We detect whether the kernel itself contains faulty executions; otherwise, we exploit the kernel to generate further test sequences that extend only a reduced set of representative prefixes.
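The idea of maintaining a kernel of representative prefixes can be illustrated with a minimal, hypothetical sketch. The names (`sut_step_outputs`, `black_box_test`, the merging of prefixes by their last output) are assumptions made for illustration, not the authors' implementation; the paper's kernel uses a more careful consistency check against all observations made so far.

```python
import random

ALPHABET = ['a', 'b']

def sut_step_outputs(seq):
    """Toy black box: returns the per-step output trace for an input sequence.
    Counts consecutive 'a' inputs; the third 'a' in a row triggers output 'err',
    which stands in for a failure of the system under test."""
    outs, run = [], 0
    for x in seq:
        run = run + 1 if x == 'a' else 0
        outs.append('err' if run >= 3 else str(run))
    return outs

def black_box_test(budget=100, max_ext=4, seed=1):
    rng = random.Random(seed)
    # Kernel: one representative prefix per observed behavior class.
    # Here, crudely, two prefixes are merged when their last outputs agree.
    kernel = {'': ()}
    for _ in range(budget):
        # Extend a known representative instead of restarting from scratch,
        # so random suffixes explore beyond already-covered prefixes.
        base = rng.choice(list(kernel.values()))
        ext = [rng.choice(ALPHABET) for _ in range(rng.randint(1, max_ext))]
        seq = list(base) + ext
        outs = sut_step_outputs(seq)
        if 'err' in outs:
            return tuple(seq[:outs.index('err') + 1])  # faulty execution found
        for i in range(len(base) + 1, len(seq) + 1):
            kernel.setdefault(outs[i - 1], tuple(seq[:i]))
    return None  # budget exhausted without hitting a fault

fault = black_box_test()
print(fault)
```

Because each test extends a representative prefix, the random suffixes concentrate on behaviors the kernel has not yet distinguished, rather than repeatedly re-sampling equivalent prefixes.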
The research was partially funded by Israeli Science Foundation grant 1464/18: “Efficient Runtime Verification for Systems with Lots of Data and its Applications”.
Notes
- 1.
When testing a system without a never claim, we may assume that all the inputs are enabled from each state, or simply redirect the inputs that are not enabled to a sink node.
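The completion described in this note can be sketched as follows. The representation of the partial transition function as a dict and the name `complete_with_sink` are assumptions for illustration, not the paper's data structures.

```python
def complete_with_sink(states, inputs, delta):
    """Make a partial transition function total: any input that is not
    enabled in a state is redirected to a fresh sink state, and the sink
    absorbs every input. `delta` maps (state, input) -> state and may be
    partial."""
    sink = object()  # fresh sink state, distinct from all existing states
    total = dict(delta)
    for s in list(states) + [sink]:
        for a in inputs:
            # keep existing transitions; route missing ones to the sink
            total.setdefault((s, a), sink)
    return total, sink

# 'b' is not enabled in q0, and nothing is enabled in q1
delta = {('q0', 'a'): 'q1'}
total, sink = complete_with_sink(['q0', 'q1'], ['a', 'b'], delta)
```

After completion, every input is enabled in every state, so random test sequences never get stuck on a disabled input.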
- 2.
The code is available at https://github.com/roiDaniela/ABBT. We used the aalpy package [14] with some modifications to suit our specific use case.
References
Alpern, B., Schneider, F.B.: Recognizing safety and liveness. Distrib. Comput. 2, 117–126 (1987). https://doi.org/10.1007/BF01782772
Angluin, D.: Learning regular sets from queries and counterexamples. Inf. Comput. 75, 87–106 (1987)
Angluin, D.: A note on the number of queries needed to identify regular languages. Inf. Control 51, 76–87 (1981)
Groce, A., Peled, D., Yannakakis, M.: Adaptive model checking. Logic J. IGPL 14, 729–744 (2006)
de la Higuera, C.: Grammatical Inference: Learning Automata and Grammars. Cambridge University Press (2010)
Holzmann, G.J.: The spin model checker: primer and reference manual. Addison-Wesley Professional (2014)
Isberner, M., Howar, F., Steffen, B.: The TTT algorithm: a redundancy-free approach to active automata learning. In: Bonakdarpour, B., Smolka, S.A. (eds.) RV 2014. LNCS, vol. 8734, pp. 307–322. Springer, Cham (2014). https://doi.org/10.1007/978-3-319-11164-3_26
Lamport, L.: What good is temporal logic? In: Proceedings of the IFIP 9th World Computer Congress, Information Processing, vol. 83, pp. 657–668 (1983)
Leucker, M.: Learning meets verification. In: de Boer, F.S., Bonsangue, M.M., Graf, S., de Roever, W.-P. (eds.) FMCO 2006. LNCS, vol. 4709, pp. 127–151. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74792-5_6
Manna, Z., Pnueli, A.: Temporal Verification of Reactive Systems: Safety, 1st edn. Springer (1995). https://doi.org/10.1007/978-1-4612-4222-2
Meinke, K., Sindhu, M.A.: Incremental learning-based testing for reactive systems. In: Gogolla, M., Wolff, B. (eds.) TAP 2011. LNCS, vol. 6706, pp. 134–151. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-21768-5_11
Meinke, K., Sindhu, M.: LBTest: a learning-based testing tool for reactive systems. In: 2013 IEEE Sixth International Conference on Software Testing, Verification and Validation, pp. 447–454 (2013)
Meinke, K., Niu, F., Sindhu, M.: Learning-based software testing: a tutorial. In: Hähnle, R., Knoop, J., Margaria, T., Schreiner, D., Steffen, B. (eds.) ISoLA 2011. CCIS, pp. 200–219. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-34781-8_16
Muskardin, E., Aichernig, B., Pill, I., Pferscher, A., Tappler, M.: AALpy: An active automata learning library. Innov. Syst. Softw. Eng. 18, 417–426 (2022). https://doi.org/10.1007/s11334-022-00449-3
Oncina, J., García, P.: Inferring regular languages in polynomial updated time. Series in Machine Perception and Artificial Intelligence, vol. 4, pp. 49–61 (1992)
Peled, D., Vardi, M., Yannakakis, M.: Black box checking. In: Proceedings of the 14th International Symposium on Mathematical Foundations of Computer Science, vol. 1672, pp. 225–240 (1999)
Raffelt, H., Merten, M., Steffen, B., Margaria, T.: Dynamic testing via automata learning. Int. J. Softw. Tools Technol. Transfer 11(4), 307–324 (2009)
Rivest, R., Schapire, R.: Inference of finite automata using homing sequences. Inf. Comput. 103(2), 299–347 (1993)
Sindhu, M.A., Meinke, K.: IDS: an incremental learning algorithm for finite automata, CoRR, vol. abs/1206.2691, pp. 1–12 (2012)
Sutton, R.S., Barto, A.G.: Reinforcement Learning: An Introduction. Adaptive Computation and Machine Learning. MIT Press (1998)
Vaandrager, F., Garhewal, B., Rot, J., Wißmann, T.: A new approach for active automata learning based on apartness. CoRR, vol. abs/2107.05419 (2021)
Weiss, G., Goldberg, Y., Yahav, E.: Extracting automata from recurrent neural networks using queries and counterexamples. In: Proceedings of the 35th International Conference on Machine Learning (ICML 2018), vol. 80, pp. 5244–5253 PMLR (2018)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Fogler, R., Cohen, I., Peled, D. (2023). Accelerating Black Box Testing with Light-Weight Learning. In: Caltais, G., Schilling, C. (eds) Model Checking Software. SPIN 2023. Lecture Notes in Computer Science, vol 13872. Springer, Cham. https://doi.org/10.1007/978-3-031-32157-3_6
Print ISBN: 978-3-031-32156-6
Online ISBN: 978-3-031-32157-3