The Practical Assessment of Test Sets with Inductive Inference Techniques

Conference paper: Testing – Practice and Research Techniques (TAIC PART 2010)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 6303)

Abstract

Inductive inference is the process of hypothesizing a model from a set of examples. It can be considered the inverse of program testing, which is the process of generating a finite set of tests intended to fully exercise a software system. This relationship has been acknowledged for almost 30 years and has led to the emergence of several induction-based techniques that aim either to generate suitable test sets or to assess the adequacy of existing ones. Unfortunately, these techniques are usually deemed too impractical because they are based on exact inference, which requires a vast set of examples or tests. In practice, a test set can still be adequate even if the inferred model contains minor errors. This paper shows how the Probably Approximately Correct (PAC) framework, a well-established approach in the field of inductive inference, can be applied to inductive testing techniques. This permits a more pragmatic assessment of these techniques by allowing for a degree of error. The evaluation framework gives rise to a challenge: to identify the best combination of testing and inference techniques that produces practical and (approximately) adequate test sets.
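The central idea can be illustrated with a minimal sketch (not taken from the paper; the function names, the Hoeffding-style sample-size bound, and the toy system are illustrative assumptions). A test set is judged approximately adequate if the model inferred from it disagrees with the system under test on at most an epsilon fraction of inputs drawn from the operating distribution, with confidence at least 1 - delta:

```python
import math
import random

def pac_sample_size(epsilon: float, delta: float) -> int:
    """Hoeffding-style bound: number of random probes needed so the
    observed disagreement rate lies within epsilon of the true rate
    with probability at least 1 - delta."""
    return math.ceil(math.log(2.0 / delta) / (2.0 * epsilon ** 2))

def approximately_adequate(sut, inferred_model, draw_input,
                           epsilon=0.05, delta=0.05):
    """PAC-style adequacy check: accept the test set behind
    `inferred_model` if the model agrees with the system under test
    on all but (roughly) an epsilon fraction of sampled inputs."""
    m = pac_sample_size(epsilon, delta)
    disagreements = 0
    for _ in range(m):
        x = draw_input()                    # input from the operating distribution
        if sut(x) != inferred_model(x):     # model and system disagree on x
            disagreements += 1
    return disagreements / m <= epsilon

# Toy illustration (hypothetical): the "system" accepts strings with an
# even number of 'a's; the inferred model is a slightly imperfect copy.
if __name__ == "__main__":
    sut = lambda s: s.count("a") % 2 == 0
    model = lambda s: s.count("a") % 2 == 0 if len(s) < 35 else True
    draw = lambda: "".join(random.choice("ab")
                           for _ in range(random.randint(0, 40)))
    print(approximately_adequate(sut, model, draw))
```

Making the epsilon/delta parameters operational through an explicit sampling bound is only one possible reading of "approximately adequate"; the paper itself should be consulted for the precise criteria and for how the inferred model is obtained from the test set.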




Copyright information

© 2010 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Walkinshaw, N. (2010). The Practical Assessment of Test Sets with Inductive Inference Techniques. In: Bottaci, L., Fraser, G. (eds) Testing – Practice and Research Techniques. TAIC PART 2010. Lecture Notes in Computer Science, vol 6303. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-15585-7_16

  • DOI: https://doi.org/10.1007/978-3-642-15585-7_16

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-15584-0

  • Online ISBN: 978-3-642-15585-7
