Development of an Automatic Testing Environment for Mercury

  • Conference paper
Logic Programming (ICLP 2008)

Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 5366)

Abstract

Testing refers to the activity of running a software component on a well-chosen set of inputs and comparing the outputs that are produced with the expected results in order to find errors. To make testing quicker and less repetitive, a so-called test automation framework can be used to execute a (previously written) test suite automatically, without user intervention. Such a tool runs the software component under test once for each test input, compares the actual result with the expected result, and reports the test cases that failed; a well-known example of such a tool is JUnit for Java [1]. However, the construction of test suites remains a mostly manual and thus time-consuming activity [2], and the need for adequacy criteria [3,4] renders the construction of (large) test suites complex and error-prone [5]. The objective of this work is to develop an analysis that automatically creates a set of test inputs satisfying a particular coverage criterion for a given program written in Mercury.
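As an illustration of how such a framework operates, below is a minimal JUnit 4 sketch; the class, the abs helper and the chosen inputs are hypothetical and not taken from the paper, and serve only to show a tool running the component once per test input and comparing the actual with the expected output.

    import static org.junit.Assert.assertEquals;
    import org.junit.Test;

    public class AbsTest {

        // Hypothetical component under test; in practice this would live
        // in the production code base rather than in the test class.
        static int abs(int x) {
            return x < 0 ? -x : x;
        }

        // Each @Test method is one test case: the framework runs it and
        // assertEquals compares the expected result with the actual output,
        // reporting the test case as failed when the two differ.
        @Test
        public void absOfNegativeIsPositive() {
            assertEquals(3, abs(-3));
        }

        @Test
        public void absOfZeroIsZero() {
            assertEquals(0, abs(0));
        }
    }

In this setting, the analysis proposed here would aim to generate the test input values (here -3 and 0) automatically, so that the resulting test suite satisfies a chosen coverage criterion.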

References

  1. Hunt, A., Thomas, D.: Pragmatic Unit Testing in Java with JUnit. Pragmatic Bookshelf (2003)

  2. Artho, C., et al.: Combining test case generation and runtime verification. Theoretical Computer Science 336(2-3) (2005)

  3. Zhu, H., Hall, P., May, J.: Software unit test coverage and adequacy. ACM Computing Surveys 29(4) (1997)

  4. Weyuker, E.J.: Axiomatizing software test data adequacy. IEEE Trans. Softw. Eng. 12(12), 1128–1138 (1986)

  5. Li, K., Wu, M.: Effective Software Test Automation. Sybex (2004)

  6. Sy, N., Deville, Y.: Automatic test data generation for programs with integer and float variables. In: 16th IEEE International Conference on Automated Software Engineering (ASE 2001) (2001)

  7. Visser, W., Pasareanu, C.S., Khurshid, S.: Test input generation with Java PathFinder. SIGSOFT Softw. Eng. Notes 29(4), 97–107 (2004)

  8. Degrave, F., Vanhoof, W.: A control flow graph for Mercury. In: Proceedings of CICLOPS 2007 (2007)

  9. Degrave, F., Schrijvers, T., Vanhoof, W.: Automatic generation of test inputs for Mercury. In: LOPSTR 2008. LNCS. Springer, Heidelberg (2009)

Copyright information

© 2008 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Degrave, F. (2008). Development of an Automatic Testing Environment for Mercury. In: Garcia de la Banda, M., Pontelli, E. (eds.) Logic Programming. ICLP 2008. Lecture Notes in Computer Science, vol. 5366. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-540-89982-2_82

  • DOI: https://doi.org/10.1007/978-3-540-89982-2_82

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-89981-5

  • Online ISBN: 978-3-540-89982-2
