Improving oracle quality by detecting brittle assertions and unused inputs in tests

Published: 11 November 2014

ABSTRACT

Writing oracles is challenging. As a result, developers often create oracles that check too little, resulting in tests that are unable to detect failures, or that check too much, resulting in tests that are brittle and difficult to maintain. In this paper we present a new technique for automatically analyzing test oracles. The technique is based on dynamic tainting and detects both brittle assertions—assertions that depend on values derived from uncontrolled inputs—and unused inputs—inputs provided by the test that are not checked by any assertion. We also present OraclePolish, an implementation of the technique that can analyze tests that are written in Java and use the JUnit testing framework. Using OraclePolish, we conducted an empirical evaluation of more than 4000 real test cases. The results of the evaluation show that OraclePolish is effective; it detected 164 tests that contain brittle assertions and 1618 tests that have unused inputs. In addition, the results demonstrate that the costs associated with using the technique are reasonable.
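The two oracle problems the abstract defines can be illustrated with a small, hypothetical JUnit-style scenario (the `Account` class and its fields are invented for illustration and do not come from the paper). A brittle assertion depends on a value derived from an input the test does not control, such as the JVM's default locale; an unused input is a test-supplied value that no assertion ever checks.

```java
import java.util.Locale;

class Account {
    private final String owner;
    private long balanceCents;
    // Derived from an uncontrolled input: the environment's default locale.
    private final String country = Locale.getDefault().getCountry();

    Account(String owner, long openingCents) {
        this.owner = owner;
        this.balanceCents = openingCents;
    }

    void deposit(long cents) { balanceCents += cents; }
    long balance() { return balanceCents; }
    String describe() { return owner + ": " + balanceCents + " (" + country + ")"; }
}

public class OracleProblems {
    public static void main(String[] args) {
        // The test controls "alice" and 1000, but not the default locale.
        Account a = new Account("alice", 1000);
        a.deposit(250);

        // Sound check: depends only on controlled inputs (1000 and 250).
        if (a.balance() != 1250) throw new AssertionError("balance");

        // Brittle assertion: describe() embeds the default country code,
        // so this check would pass or fail depending on where the test runs.
        //   assertEquals("alice: 1250 (US)", a.describe());

        // Unused input: "alice" flows into describe(), but no enabled check
        // ever examines it, so a bug that corrupts the owner field would
        // escape this test undetected.
        System.out.println(a.balance());
    }
}
```

A dynamic-tainting analysis in the spirit of the paper would taint each test input and each uncontrolled source, then report the commented assertion as brittle (its operand carries taint from `Locale.getDefault()`) and the owner string as an unused input (its taint never reaches an assertion).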


Published in

FSE 2014: Proceedings of the 22nd ACM SIGSOFT International Symposium on Foundations of Software Engineering
November 2014
856 pages
ISBN: 9781450330565
DOI: 10.1145/2635868

      Copyright © 2014 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • research-article

      Acceptance Rates

Overall Acceptance Rate: 17 of 128 submissions, 13%
