Study of Integrating Random and Symbolic Testing for Object-Oriented Software

  • Conference paper
  • Published in: Integrated Formal Methods (IFM 2018)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 11023)

Abstract

Testing is currently the main technique adopted by the industry for improving the quality, reliability, and security of software. In order to lower the cost of manual testing, automatic testing techniques have been devised, such as random and symbolic testing, with their respective trade-offs. For example, random testing excels at fast global exploration of software, while it plateaus when faced with hard-to-hit numerically-intensive execution paths. On the other hand, symbolic testing excels at exploring such paths, while it struggles when faced with complex heap class structures. In this paper, we describe an approach for automatic unit testing of object-oriented software that integrates the two techniques. We leverage feedback-directed unit testing to generate meaningful sequences of constructor+method invocations that create rich heap structures, and we in turn further explore these sequences using dynamic symbolic execution. We implement this approach in a tool called JDoop, which we augment with several parameters for fine-tuning its heuristics; such “knobs” allow for a detailed exploration of the various trade-offs that the proposed integration offers. Using JDoop, we perform an extensive empirical exploration of this space, and we describe lessons learned and guidelines for future research efforts in this area.
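The integration described above can be sketched in a few lines of Java. This is a hypothetical illustration, not JDoop code: the `Counter` class, its magic numeric constant, and the loop bounds are invented stand-ins. The first phase mimics feedback-directed generation (keep only call sequences that execute without throwing), and the second phase stands in for symbolic exploration by plugging in the value a constraint solver would find for the hard-to-hit branch.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

// Hypothetical class under test: its heap state is easy to build with random
// call sequences, but its numeric branch is hard to hit without a solver.
class Counter {
    private final List<Integer> items = new ArrayList<>();

    void add(int x) {
        items.add(x);
    }

    boolean hardBranch(int key) {
        // Random inputs almost never satisfy this; a symbolic executor
        // (JDart, in JDoop's case) solves for key directly: key == 55.
        return !items.isEmpty() && key * 12345 == 678975;
    }
}

public class FeedbackDirectedSketch {
    public static void main(String[] args) {
        Random rnd = new Random(42); // fixed seed for reproducibility
        Counter counter = new Counter();
        List<String> sequence = new ArrayList<>();

        // Feedback-directed loop: extend the call sequence with random
        // invocations, keeping only those that execute without throwing.
        for (int i = 0; i < 5; i++) {
            int arg = rnd.nextInt(100);
            try {
                counter.add(arg);
                sequence.add("add(" + arg + ")");
            } catch (RuntimeException e) {
                // A crashing call would be discarded; this feedback steers
                // generation toward sequences that build valid heap state.
            }
        }

        // Hand the generated sequence to symbolic exploration: here we just
        // plug in the solver's answer for the hard numeric branch.
        System.out.println("kept " + sequence.size()
                + " calls; hardBranch(55) = " + counter.hardBranch(55));
    }
}
```

In the real tool, the second phase marks method parameters as symbolic and lets dynamic symbolic execution enumerate feasible paths over the heap state that the random phase constructed.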

Supported in part by the National Science Foundation (NSF) award CCF 1421678.


Notes

  1.
    Note that a very preliminary version of JDoop was presented earlier as a short workshop extended abstract [11].

  2.
    JDoop is available under the GNU General Public License version 3 (or later) at https://github.com/psycopaths/jdoop.

  3.
    The testing infrastructure is available under the GNU Affero GPLv3+ license at https://github.com/soarlab/jdoop-wrapper.

  4.
    These are methods for which Nhandler was not configured to take over execution, leading to a crash of JDart. We configured Nhandler to take care of all native methods of java.lang.String.

  5.
    http://www.evosuite.org/documentation/measuring-code-coverage.

  6.
    https://groups.google.com/forum/#!topic/evosuite/ctk2yPIqIoM.

  7.
    https://stackoverflow.com/questions/41632769/evosuite-code-coverage-does-not-match-with-jacoco-coverage.

References

  1. Apt testbed facility. https://www.aptlab.net

  2. ASM: A Java bytecode engineering library. http://asm.ow2.org

  3. Baluda, M., Denaro, G., Pezzè, M.: Bidirectional symbolic analysis for effective branch testing. IEEE Trans. Softw. Eng. 42(5), 403–426 (2016)

  4. Beyer, D.: Reliable and reproducible competition results with BenchExec and witnesses (report on SV-COMP 2016). In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 887–904. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49674-9_55

  5. Boshernitsan, M., Doong, R., Savoia, A.: From Daikon to Agitator: lessons and challenges in building a commercial tool for developer testing. In: ISSTA, pp. 169–180 (2006)

  6. Cadar, C., Dunbar, D., Engler, D.: KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In: OSDI, pp. 209–224 (2008)

  7. Cho, C.Y., Babić, D., Poosankam, P., Chen, K.Z., Wu, E.X., Song, D.: MACE: model-inference-assisted concolic exploration for protocol and vulnerability discovery. In: Proceedings of the 20th USENIX Security Symposium (2011)

  8. Csallner, C., Smaragdakis, Y., Xie, T.: DSD-Crasher: a hybrid analysis tool for bug finding. ACM Trans. Softw. Eng. Methodol. 17(2), 8:1–8:37 (2008)

  9. De Moura, L., Bjørner, N.: Z3: an efficient SMT solver. In: Ramakrishnan, C.R., Rehof, J. (eds.) TACAS 2008. LNCS, vol. 4963, pp. 337–340. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-78800-3_24

  10. Deters, M., Reynolds, A., King, T., Barrett, C.W., Tinelli, C.: A tour of CVC4: how it works, and how to use it. In: FMCAD, p. 7 (2014)

  11. Dimjašević, M., Rakamarić, Z.: JPF-Doop: combining concolic and random testing for Java. In: Java Pathfinder Workshop (JPF) (2013). Extended abstract

  12. Eler, M.M., Endo, A.T., Durelli, V.H.S.: Quantifying the characteristics of Java programs that may influence symbolic execution from a test data generation perspective. In: COMPSAC, pp. 181–190 (2014)

  13. Fraser, G., Arcuri, A.: EvoSuite: automatic test suite generation for object-oriented software. In: ESEC/FSE, pp. 416–419 (2011)

  14. Galeotti, J.P., Fraser, G., Arcuri, A.: Improving search-based test suite generation with dynamic symbolic execution. In: ISSRE, pp. 360–369 (2013)

  15. Garg, P., Ivančić, F., Balakrishnan, G., Maeda, N., Gupta, A.: Feedback-directed unit test generation for C/C++ using concolic execution. In: ICSE, pp. 132–141 (2013)

  16. Gligoric, M., Groce, A., Zhang, C., Sharma, R., Alipour, M.A., Marinov, D.: Comparing non-adequate test suites using coverage criteria. In: ISSTA, pp. 302–313 (2013)

  17. Godefroid, P., Klarlund, N., Sen, K.: DART: directed automated random testing. In: PLDI, pp. 213–223 (2005)

  18. Godefroid, P., Levin, M.Y., Molnar, D.: SAGE: whitebox fuzzing for security testing. Queue 10(1), 20:20–20:27 (2012)

  19. Inkumsah, K., Xie, T.: Improving structural testing of object-oriented programs via integrating evolutionary testing and symbolic execution. In: ASE, pp. 297–306 (2008)

  20. JaCoCo Java code coverage library. http://www.jacoco.org/jacoco

  21. Jayaraman, K., Harvison, D., Ganesh, V.: jFuzz: a concolic whitebox fuzzer for Java. In: NFM, pp. 121–125 (2009)

  22. Jaygarl, H., Kim, S., Xie, T., Chang, C.K.: OCAT: object capture-based automated testing. In: ISSTA, pp. 159–170 (2010)

  23. Java PathFinder (JPF). http://babelfish.arc.nasa.gov/trac/jpf

  24. Kähkönen, K., Launiainen, T., Saarikivi, O., Kauttio, J., Heljanko, K., Niemelä, I.: LCT: an open source concolic testing tool for Java programs. In: BYTECODE, pp. 75–80 (2011)

  25. Khurshid, S., Păsăreanu, C.S., Visser, W.: Generalized symbolic execution for model checking and testing. In: Garavel, H., Hatcliff, J. (eds.) TACAS 2003. LNCS, vol. 2619, pp. 553–568. Springer, Heidelberg (2003). https://doi.org/10.1007/3-540-36577-X_40

  26. Luckow, K., Dimjašević, M., Giannakopoulou, D., Howar, F., Isberner, M., Kahsai, T., Rakamarić, Z., Raman, V.: JDart: a dynamic symbolic analysis framework. In: Chechik, M., Raskin, J.-F. (eds.) TACAS 2016. LNCS, vol. 9636, pp. 442–459. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49674-9_26

  27. Marcozzi, M., Bardin, S., Kosmatov, N., Papadakis, M., Prevosto, V., Correnson, L.: Time to clean your test objectives. In: ICSE, pp. 456–467 (2018)

  28. McMinn, P.: Search-based software testing: past, present and future. In: 2011 IEEE Fourth International Conference on Software Testing, Verification and Validation Workshops, pp. 153–163 (2011)

  29. Pacheco, C., Lahiri, S., Ernst, M., Ball, T.: Feedback-directed random test generation. In: ICSE, pp. 75–84 (2007)

  30. Pasareanu, C.S., Rungta, N., Visser, W.: Symbolic execution with mixed concrete-symbolic solving. In: ISSTA, pp. 34–44 (2011)

  31. Prasetya, I.S.W.B.: Budget-aware random testing with T3: benchmarking at the SBST2016 testing tool contest. In: SBST, pp. 29–32 (2016)

  32. Pǎsǎreanu, C.S., Mehlitz, P.C., Bushnell, D.H., Gundy-Burlet, K., Lowry, M., Person, S., Pape, M.: Combining unit-level symbolic execution and system-level concrete execution for testing NASA software. In: ISSTA, pp. 15–26 (2008)

  33. Rueda, U., Just, R., Galeotti, J.P., Vos, T.E.J.: Unit testing tool competition – round four. In: SBST, pp. 19–28 (2016)

  34. Sakti, A., Pesant, G., Guéhéneuc, Y.G.: JTExpert at the fourth unit testing tool competition. In: SBST, pp. 37–40 (2016)

  35. Sen, K., Agha, G.: CUTE and jCUTE: concolic unit testing and explicit path model-checking tools. In: Ball, T., Jones, R.B. (eds.) CAV 2006. LNCS, vol. 4144, pp. 419–423. Springer, Heidelberg (2006). https://doi.org/10.1007/11817963_38

  36. Sen, K., Marinov, D., Agha, G.: CUTE: a concolic unit testing engine for C. In: ESEC/FSE, pp. 263–272 (2005)

  37. The SF110 benchmark suite, July 2013. http://www.evosuite.org/experimental-data/sf110

  38. Shafiei, N., van Breugel, F.: Automatic handling of native methods in Java PathFinder. In: SPIN, pp. 97–100 (2014)

  39. Soot: A Java optimization framework. http://sable.github.io/soot

  40. Stephens, N., Grosen, J., Salls, C., Dutcher, A., Wang, R., Corbetta, J., Shoshitaishvili, Y., Kruegel, C., Vigna, G.: Driller: augmenting fuzzing through selective symbolic execution. In: NDSS (2016)

  41. Tanno, H., Zhang, X., Hoshino, T., Sen, K.: TesMa and CATG: automated test generation tools for models of enterprise applications. In: ICSE, pp. 717–720 (2015)

  42. Thummalapenta, S., Xie, T., Tillmann, N., de Halleux, J., Su, Z.: Synthesizing method sequences for high-coverage testing. SIGPLAN Not. 46(10), 189–206 (2011)

  43. Tillmann, N., de Halleux, J.: Pex–white box test generation for \(\text{.NET }\). In: Beckert, B., Hähnle, R. (eds.) TAP 2008. LNCS, vol. 4966, pp. 134–153. Springer, Heidelberg (2008). https://doi.org/10.1007/978-3-540-79124-9_10

  44. Visser, W., Havelund, K., Brat, G., Park, S., Lerda, F.: Model checking programs. Autom. Softw. Eng. 10(2), 203–232 (2003)

  45. White, B., Lepreau, J., Stoller, L., Ricci, R., Guruprasad, S., Newbold, M., Hibler, M., Barb, C., Joglekar, A.: An integrated experimental environment for distributed systems and networks. SIGOPS Oper. Syst. Rev. 36(SI), 255–270 (2002)

Author information

Correspondence to Zvonimir Rakamarić.

Copyright information

© 2018 Springer Nature Switzerland AG

About this paper

Cite this paper

Dimjašević, M., Howar, F., Luckow, K., Rakamarić, Z. (2018). Study of Integrating Random and Symbolic Testing for Object-Oriented Software. In: Furia, C., Winter, K. (eds) Integrated Formal Methods. IFM 2018. Lecture Notes in Computer Science, vol 11023. Springer, Cham. https://doi.org/10.1007/978-3-319-98938-9_6

  • DOI: https://doi.org/10.1007/978-3-319-98938-9_6

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-98937-2

  • Online ISBN: 978-3-319-98938-9

  • eBook Packages: Computer Science, Computer Science (R0)
