Abstract
Inductive inference is the process of hypothesizing a model from a set of examples. It can be considered the inverse of program testing, which is the process of generating a finite set of tests intended to fully exercise a software system. This relationship has been acknowledged for almost 30 years and has led to the emergence of several induction-based techniques that aim either to generate suitable test sets or to assess the adequacy of existing test sets. Unfortunately, these techniques are usually deemed too impractical because they are based on exact inference, which requires a vast set of examples or tests. In practice, a test set can still be adequate if the inferred model contains minor errors. This paper shows how the Probably Approximately Correct (PAC) framework, a well-established approach in the field of inductive inference, can be applied to inductive testing techniques. This facilitates a more pragmatic assessment of these techniques by allowing for a degree of error. The evaluation framework gives rise to a challenge: to identify the best combination of testing and inference techniques that produces practical and (approximately) adequate test sets.
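For context, the PAC criterion alluded to above is usually stated as follows; this is Valiant's standard formulation, not notation taken from the paper itself. A hypothesis h inferred for a target concept c is probably approximately correct if, for chosen error and confidence parameters \epsilon and \delta,

% Standard PAC criterion (Valiant): with probability at least 1 - \delta,
% the inferred hypothesis h differs from the target concept c on at most
% an \epsilon-fraction of inputs drawn from the distribution D.
\[
  \Pr\bigl[\operatorname{error}_D(h) \le \epsilon\bigr] \;\ge\; 1 - \delta,
  \qquad
  \operatorname{error}_D(h) \;=\; \Pr_{x \sim D}\bigl[h(x) \ne c(x)\bigr],
\]

where D is the distribution from which the examples (here, the tests) are drawn. In the testing setting, \epsilon bounds the tolerated discrepancy between the inferred model and the software under test, which is what permits a test set to be judged adequate even when the inferred model contains minor errors.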