
Ranged Program Analysis via Instrumentation

  • Conference paper
  • In: Software Engineering and Formal Methods (SEFM 2023)

Abstract

Ranged program analysis has recently been proposed as a means to scale a single analysis and to define parallel cooperation of different analyses.

To this end, ranged program analysis first splits a program’s paths into different parts. Then, it runs one analysis instance per part, thereby restricting the instance to analyze only the paths of the respective part. To achieve the restriction, the analysis is combined with a so-called range reduction component responsible for excluding the paths outside of the part.

So far, ranged program analysis and in particular the range reduction component have been defined in the framework of configurable program analysis (CPA). In this paper, we suggest program instrumentation as an alternative for achieving the analysis restriction, which allows us to use arbitrary analyzers in ranged program analysis. Our evaluation on programs from the SV-COMP benchmark shows that ranged program analysis with instrumentation performs comparably to the CPA-based version and that the evaluation results for the CPA-based ranged program analysis carry over to the instrumentation-based version.
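To make the instrumentation idea concrete, the following is a minimal sketch, not the authors' implementation, of how a range reduction could be encoded directly in a C program. It assumes the usual ranged-analysis convention that paths are ordered by their branch decisions, with the then-branch preceding the else-branch, and that a range is delimited by the paths of two bound test inputs. The names `range_check`, `lo`, and `hi`, the concrete decision vectors, and the use of `exit(0)` to cut off out-of-range paths are all illustrative choices.

```c
#include <stdlib.h>

/* Branch decisions (1 = then, 0 = else) along the lower and upper
 * bound paths; in ranged program analysis these would be derived
 * from two bound test inputs. The values here are illustrative. */
#define N_LO 3
#define N_HI 3
static const int lo[N_LO] = {1, 1, 0};
static const int hi[N_HI] = {0, 1, 1};

static int k_lo = 0, k_hi = 0;   /* next bound decision to compare */
static int on_lo = 1, on_hi = 1; /* current path still follows the bound? */

/* Inserted at each instrumented branching point with the decision taken.
 * Paths ordered below the lower bound or above the upper bound terminate
 * immediately, so an off-the-shelf verifier run on the instrumented
 * program explores only the paths inside the range. */
static void range_check(int took_then) {
    if (on_lo) {
        if (took_then == lo[k_lo]) {
            if (++k_lo == N_LO) on_lo = 0; /* bound fully matched */
        } else if (took_then) {
            exit(0);                       /* diverged below lo: out of range */
        } else {
            on_lo = 0;                     /* diverged above lo: inside range */
        }
    }
    if (on_hi) {
        if (took_then == hi[k_hi]) {
            if (++k_hi == N_HI) on_hi = 0; /* bound fully matched */
        } else if (!took_then) {
            exit(0);                       /* diverged above hi: out of range */
        } else {
            on_hi = 0;                     /* diverged below hi: inside range */
        }
    }
}

/* Example use at an instrumented branching point; the nondeterministic
 * input follows the SV-COMP convention and is resolved by the verifier. */
extern int __VERIFIER_nondet_int(void);

int main(void) {
    int x = __VERIFIER_nondet_int();
    if (x < 10) { range_check(1); /* took then-branch */ }
    else        { range_check(0); /* took else-branch */ }
    return 0;
}
```

Because the restriction lives entirely in the program text, any analyzer that accepts C input can serve as a ranged analysis instance without modification, which is the point of the instrumentation-based approach.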


Data Availability Statement

All experimental data and our open-source implementation are archived and available in our supplementary artifact [21].

Notes

  1. The implementation covers the GNU C standard.

  2. In our implementation, we only instrument branching points that occur on the paths induced by the lower bound or upper bound.

  3. https://tree-sitter.github.io/tree-sitter/.

  4. In the benchmark, reach_error is called whenever an assert is violated, cf. Fig. 1 (see the sketch below).
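As a reminder of the convention referenced in note 4, SV-COMP reachability tasks mark the error location via a reach_error function whose reachability the verifier checks; a minimal sketch of that encoding (the surrounding harness is illustrative):

```c
/* SV-COMP-style encoding: violating an assertion calls reach_error,
 * and the verifier checks whether reach_error is reachable. */
extern void abort(void);
void reach_error(void) {} /* error location the verifier looks for */

void __VERIFIER_assert(int cond) {
    if (!cond) {
        reach_error(); /* assertion violated: error location reached */
        abort();
    }
}
```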

References

  1. Beyer, D., Henzinger, T.A., Keremoglu, M.E., Wendler, P.: Conditional model checking: a technique to pass information between verifiers. In: Proceedings of FSE. ACM (2012)

  2. Beyer, D., Jakobs, M.-C.: CoVeriTest: cooperative verifier-based testing. In: Hähnle, R., van der Aalst, W. (eds.) FASE 2019. LNCS, vol. 11424, pp. 389–408. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-16722-6_23

  3. Beyer, D., Jakobs, M.-C., Lemberger, T., Wehrheim, H.: Reducer-based construction of conditional verifiers. In: Proceedings of ICSE, pp. 1182–1193. ACM (2018)

  4. Beyer, D., Lemberger, T.: Conditional testing. In: Chen, Y.-F., Cheng, C.-H., Esparza, J. (eds.) ATVA 2019. LNCS, vol. 11781, pp. 189–208. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31784-3_11

  5. Beyer, D.: Progress on software verification: SV-COMP 2022. In: Fisman, D., Rosu, G. (eds.) TACAS 2022. LNCS, vol. 13244, pp. 375–402. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-99527-0_20

  6. Beyer, D.: Competition on software verification and witness validation: SV-COMP 2023. In: Sankaranarayanan, S., Sharygina, N. (eds.) TACAS 2023. LNCS, vol. 13994, pp. 495–522. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-30820-8_29

  7. Beyer, D., Henzinger, T.A., Théoduloz, G.: Configurable software verification: concretizing the convergence of model checking and program analysis. In: Damm, W., Hermanns, H. (eds.) CAV 2007. LNCS, vol. 4590, pp. 504–518. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-73368-3_51

  8. Beyer, D., Keremoglu, M.E.: CPAchecker: a tool for configurable software verification. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 184–190. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_16

  9. Beyer, D., Löwe, S., Wendler, P.: Reliable benchmarking: requirements and solutions. STTT 21(1), 1–29 (2019)

  10. Bucur, S., Ureche, V., Zamfir, C., Candea, G.: Parallel symbolic execution for automated real-world software testing. In: Proceedings of EuroSys, pp. 183–198. ACM (2011)

  11. Cadar, C., Dunbar, D., Engler, D.R.: KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In: Proceedings of OSDI, pp. 209–224. USENIX Association (2008)

  12. Chalupa, M., Mihalkovič, V., Řechtáčková, A., Zaoral, L., Strejček, J.: Symbiotic 9: string analysis and backward symbolic execution with loop folding. In: Fisman, D., Rosu, G. (eds.) TACAS 2022. LNCS, vol. 13244, pp. 462–467. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-99527-0_32

  13. Chalupa, M., Strejček, J.: Backward symbolic execution with loop folding. In: Drăgoi, C., Mukherjee, S., Namjoshi, K. (eds.) SAS 2021. LNCS, vol. 12913, pp. 49–76. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88806-0_3

  14. Christakis, M., Müller, P., Wüstholz, V.: Collaborative verification and testing with explicit assumptions. In: Giannakopoulou, D., Méry, D. (eds.) FM 2012. LNCS, vol. 7436, pp. 132–146. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32759-9_13

  15. SV-Benchmarks Community: SV-Benchmarks (2023). https://gitlab.com/sosy-lab/benchmarking/sv-benchmarks/-/tree/svcomp23

  16. Czech, M., Jakobs, M.-C., Wehrheim, H.: Just test what you cannot verify! In: Egyed, A., Schaefer, I. (eds.) FASE 2015. LNCS, vol. 9033, pp. 100–114. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46675-9_7

  17. Daca, P., Gupta, A., Henzinger, T.A.: Abstraction-driven concolic testing. In: Jobstmann, B., Leino, K.R.M. (eds.) VMCAI 2016. LNCS, vol. 9583, pp. 328–347. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49122-5_16

  18. Funes, D., Siddiqui, J.H., Khurshid, S.: Ranged model checking. ACM SIGSOFT Softw. Eng. Notes 37(6), 1–5 (2012)

  19. Gerrard, M.J., Dwyer, M.B.: ALPACA: a large portfolio-based alternating conditional analysis. In: Proceedings of ICSE, pp. 35–38. IEEE/ACM (2019)

  20. Haltermann, J., Jakobs, M.-C., Richter, C., Wehrheim, H.: Parallel program analysis via range splitting. In: Lambers, L., Uchitel, S. (eds.) FASE 2023. LNCS, vol. 13991, pp. 195–219. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-30826-0_11

  21. Haltermann, J., Jakobs, M.-C., Richter, C., Wehrheim, H.: Replication package for article ‘Ranged Program Analysis via Instrumentation’ (June 2023). https://doi.org/10.5281/zenodo.8065229

  22. Heizmann, M., et al.: Ultimate Automizer and the CommuHash normal form (competition contribution). In: Sankaranarayanan, S., Sharygina, N. (eds.) TACAS 2023. LNCS, vol. 13994, pp. 577–581. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-30820-8_39

  23. Heizmann, M., Hoenicke, J., Podelski, A.: Software model checking for people who love automata. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 36–52. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39799-8_2

  24. Huster, S., Ströbele, J., Ruf, J., Kropf, T., Rosenstiel, W.: Using robustness testing to handle incomplete verification results when combining verification and testing techniques. In: Yevtushenko, N., Cavalli, A.R., Yenigün, H. (eds.) ICTSS 2017. LNCS, vol. 10533, pp. 54–70. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-67549-7_4

  25. Inverso, O., Trubiani, C.: Parallel and distributed bounded model checking of multi-threaded programs. In: Proceedings of PPoPP, pp. 202–216. ACM (2020)

  26. Nguyen, T.L., Schrammel, P., Fischer, B., La Torre, S., Parlato, G.: Parallel bug-finding in concurrent programs via reduced interleaving instances. In: Proceedings of ASE, pp. 753–764. IEEE (2017)

  27. Pauck, F., Wehrheim, H.: Together strong: cooperative Android app analysis. In: Proceedings of ESEC/FSE, pp. 374–384. ACM (2019)

  28. Qiu, R., Khurshid, S., Păsăreanu, C.S., Wen, J., Yang, G.: Using test ranges to improve symbolic execution. In: Dutle, A., Muñoz, C., Narkawicz, A. (eds.) NFM 2018. LNCS, vol. 10811, pp. 416–434. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77935-5_28

  29. Sherman, E., Dwyer, M.B.: Structurally defined conditional data-flow static analysis. In: Beyer, D., Huisman, M. (eds.) TACAS 2018. LNCS, vol. 10806, pp. 249–265. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-89963-3_15

  30. Siddiqui, J.H., Khurshid, S.: Scaling symbolic execution using ranged analysis. In: Proceedings of SPLASH, pp. 523–536. ACM (2012)

  31. Singh, S., Khurshid, S.: Parallel chopped symbolic execution. In: Lin, S.-W., Hou, Z., Mahony, B. (eds.) ICFEM 2020. LNCS, vol. 12531, pp. 107–125. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-63406-3_7

  32. Staats, M., Păsăreanu, C.S.: Parallel symbolic execution for structural test generation. In: Proceedings of ISSTA, pp. 183–194. ACM (2010)

  33. Wei, G., et al.: Compiling parallel symbolic execution with continuations. In: Proceedings of ICSE, pp. 1316–1328. IEEE (2023)

  34. Weiser, M.D.: Program slicing. IEEE TSE 10(4), 352–357 (1984)

  35. Yang, G., Do, Q.C.D., Wen, J.: Distributed assertion checking using symbolic execution. ACM SIGSOFT Softw. Eng. Notes 40(6), 1–5 (2015)

  36. Yang, G., Qiu, R., Khurshid, S., Păsăreanu, C.S., Wen, J.: A synergistic approach to improving symbolic execution using test ranges. Innov. Syst. Softw. Eng. 15(3–4), 325–342 (2019)

  37. Yin, B., Chen, L., Liu, J., Wang, J., Cousot, P.: Verifying numerical programs via iterative abstract testing. In: Chang, B.-Y.E. (ed.) SAS 2019. LNCS, vol. 11822, pp. 247–267. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32304-2_13

  38. Zhou, L., Gan, S., Qin, X., Han, W.: SECloud: binary analyzing using symbolic execution in the cloud. In: Proceedings of CBD, pp. 58–63. IEEE (2013)


Author information

Correspondence to Jan Haltermann.



Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Haltermann, J., Jakobs, M.-C., Richter, C., Wehrheim, H. (2023). Ranged Program Analysis via Instrumentation. In: Ferreira, C., Willemse, T.A.C. (eds.) Software Engineering and Formal Methods. SEFM 2023. Lecture Notes in Computer Science, vol. 14323. Springer, Cham. https://doi.org/10.1007/978-3-031-47115-5_9


  • DOI: https://doi.org/10.1007/978-3-031-47115-5_9


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-47114-8

  • Online ISBN: 978-3-031-47115-5

  • eBook Packages: Computer Science, Computer Science (R0)
