Abstract
Ranged program analysis has recently been proposed as a means to scale a single analysis and to define parallel cooperation of different analyses.
To this end, ranged program analysis first splits a program’s paths into different parts. Then, it runs one analysis instance per part, thereby restricting the instance to analyze only the paths of the respective part. To achieve the restriction, the analysis is combined with a so-called range reduction component responsible for excluding the paths outside of the part.
So far, ranged program analysis and in particular the range reduction component have been defined in the framework of configurable program analysis (CPA). In this paper, we suggest program instrumentation as an alternative for achieving the analysis restriction, which allows us to use arbitrary analyzers in ranged program analysis. Our evaluation on programs from the SV-COMP benchmark shows that ranged program analysis with instrumentation performs comparably to the CPA-based version and that the evaluation results for the CPA-based ranged program analysis carry over to the instrumentation-based version.
Data Availability Statement
All experimental data and our open-source implementation are archived and available in our supplementary artifact [21].
Notes
- 1. The implementation covers the GNU C standard.
- 2. In our implementation, we only instrument branching points that occur on the paths induced by the lower bound or upper bound.
- 3.
- 4. In the benchmark, reach_error is called whenever an assert is violated, cf. Fig. 1.
References
Beyer, D., Henzinger, T.A., Keremoglu, M.E., Wendler, P.: Conditional model checking: a technique to pass information between verifiers. In: Proceedings of FSE. ACM (2012)
Beyer, D., Jakobs, M.-C.: CoVeriTest: cooperative verifier-based testing. In: Hähnle, R., van der Aalst, W. (eds.) FASE 2019. LNCS, vol. 11424, pp. 389–408. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-16722-6_23
Beyer, D., Jakobs, M.-C., Lemberger, T., Wehrheim, H.: Reducer-based construction of conditional verifiers. In: Proceedings of ICSE, pp. 1182–1193. ACM (2018)
Beyer, D., Lemberger, T.: Conditional testing. In: Chen, Y.-F., Cheng, C.-H., Esparza, J. (eds.) ATVA 2019. LNCS, vol. 11781, pp. 189–208. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-31784-3_11
Beyer, D.: Progress on software verification: SV-COMP 2022. In: Fisman, D., Rosu, G. (eds.) TACAS 2022. LNCS, vol. 13244, pp. 375–402. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-99527-0_20
Beyer, D.: Competition on software verification and witness validation: SV-COMP 2023. In: Sankaranarayanan, S., Sharygina, N. (eds.) TACAS 2023. LNCS, vol. 13994, pp. 495–522. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-30820-8_29
Beyer, D., Henzinger, T.A., Théoduloz, G.: Configurable software verification: concretizing the convergence of model checking and program analysis. In: Damm, W., Hermanns, H. (eds.) CAV 2007. LNCS, vol. 4590, pp. 504–518. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-73368-3_51
Beyer, D., Keremoglu, M.E.: CPAchecker: a tool for configurable software verification. In: Gopalakrishnan, G., Qadeer, S. (eds.) CAV 2011. LNCS, vol. 6806, pp. 184–190. Springer, Heidelberg (2011). https://doi.org/10.1007/978-3-642-22110-1_16
Beyer, D., Löwe, S., Wendler, P.: Reliable benchmarking: requirements and solutions. STTT 21(1), 1–29 (2019)
Bucur, S., Ureche, V., Zamfir, C., Candea, G.: Parallel symbolic execution for automated real-world software testing. In: Proceedings of EuroSys, pp. 183–198. ACM (2011)
Cadar, C., Dunbar, D., Engler, D.R.: KLEE: unassisted and automatic generation of high-coverage tests for complex systems programs. In: Proceedings of OSDI, pp. 209–224. USENIX Association (2008)
Chalupa, M., Mihalkovič, V., Řechtáčková, A., Zaoral, L., Strejček, J.: Symbiotic 9: string analysis and backward symbolic execution with loop folding. In: Fisman, D., Rosu, G. (eds.) TACAS 2022. LNCS, vol. 13244, pp. 462–467. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-99527-0_32
Chalupa, M., Strejček, J.: Backward symbolic execution with loop folding. In: Drăgoi, C., Mukherjee, S., Namjoshi, K. (eds.) SAS 2021. LNCS, vol. 12913, pp. 49–76. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-88806-0_3
Christakis, M., Müller, P., Wüstholz, V.: Collaborative verification and testing with explicit assumptions. In: Giannakopoulou, D., Méry, D. (eds.) FM 2012. LNCS, vol. 7436, pp. 132–146. Springer, Heidelberg (2012). https://doi.org/10.1007/978-3-642-32759-9_13
SV-Benchmarks Community: SV-Benchmarks (2023). https://gitlab.com/sosy-lab/benchmarking/sv-benchmarks/-/tree/svcomp23
Czech, M., Jakobs, M.-C., Wehrheim, H.: Just test what you cannot verify! In: Egyed, A., Schaefer, I. (eds.) FASE 2015. LNCS, vol. 9033, pp. 100–114. Springer, Heidelberg (2015). https://doi.org/10.1007/978-3-662-46675-9_7
Daca, P., Gupta, A., Henzinger, T.A.: Abstraction-driven concolic testing. In: Jobstmann, B., Leino, K.R.M. (eds.) VMCAI 2016. LNCS, vol. 9583, pp. 328–347. Springer, Heidelberg (2016). https://doi.org/10.1007/978-3-662-49122-5_16
Funes, D., Siddiqui, J.H., Khurshid, S.: Ranged model checking. ACM SIGSOFT Softw. Eng. Notes 37(6), 1–5 (2012)
Gerrard, M.J., Dwyer, M.B.: ALPACA: a large portfolio-based alternating conditional analysis. In: Proceedings of ICSE, pp. 35–38. IEEE/ACM (2019)
Haltermann, J., Jakobs, M., Richter, C., Wehrheim, H.: Parallel program analysis via range splitting. In: Lambers, L., Uchitel, S. (eds.) FASE 2023. LNCS, vol. 13991, pp. 195–219. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-30826-0_11
Haltermann, J., Jakobs, M., Richter, C., Wehrheim, H.: Replication package for article ‘Ranged Program Analysis via Instrumentation’, June 2023. https://doi.org/10.5281/zenodo.8065229
Heizmann, M., et al.: Ultimate Automizer and the CommuHash normal form (competition contribution). In: Sankaranarayanan, S., Sharygina, N. (eds.) TACAS 2023. LNCS, vol. 13994, pp. 577–581. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-30820-8_39
Heizmann, M., Hoenicke, J., Podelski, A.: Software model checking for people who love automata. In: Sharygina, N., Veith, H. (eds.) CAV 2013. LNCS, vol. 8044, pp. 36–52. Springer, Heidelberg (2013). https://doi.org/10.1007/978-3-642-39799-8_2
Huster, S., Ströbele, J., Ruf, J., Kropf, T., Rosenstiel, W.: Using robustness testing to handle incomplete verification results when combining verification and testing techniques. In: Yevtushenko, N., Cavalli, A.R., Yenigün, H. (eds.) ICTSS 2017. LNCS, vol. 10533, pp. 54–70. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-67549-7_4
Inverso, O., Trubiani, C.: Parallel and distributed bounded model checking of multi-threaded programs. In: Proceedings of PPoPP, pp. 202–216. ACM (2020)
Nguyen, T.L., Schrammel, P., Fischer, B., La Torre, S., Parlato, G.: Parallel bug-finding in concurrent programs via reduced interleaving instances. In: Proceedings of ASE, pp. 753–764. IEEE (2017)
Pauck, F., Wehrheim, H.: Together strong: cooperative android app analysis. In: Proceedings of ESEC/FSE, pp. 374–384. ACM (2019)
Qiu, R., Khurshid, S., Păsăreanu, C.S., Wen, J., Yang, G.: Using test ranges to improve symbolic execution. In: Dutle, A., Muñoz, C., Narkawicz, A. (eds.) NFM 2018. LNCS, vol. 10811, pp. 416–434. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77935-5_28
Sherman, E., Dwyer, M.B.: Structurally defined conditional data-flow static analysis. In: Beyer, D., Huisman, M. (eds.) TACAS 2018. LNCS, vol. 10806, pp. 249–265. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-89963-3_15
Siddiqui, J.H., Khurshid, S.: Scaling symbolic execution using ranged analysis. In: Proceedings of SPLASH, pp. 523–536. ACM (2012)
Singh, S., Khurshid, S.: Parallel chopped symbolic execution. In: Lin, S.-W., Hou, Z., Mahony, B. (eds.) ICFEM 2020. LNCS, vol. 12531, pp. 107–125. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-63406-3_7
Staats, M., Păsăreanu, C.S.: Parallel symbolic execution for structural test generation. In: Proceedings of ISSTA, pp. 183–194. ACM (2010)
Wei, G., et al.: Compiling parallel symbolic execution with continuations. In: ICSE, pp. 1316–1328. IEEE (2023)
Weiser, M.D.: Program slicing. IEEE TSE 10(4), 352–357 (1984)
Yang, G., Do, Q.C.D., Wen, J.: Distributed assertion checking using symbolic execution. ACM SIGSOFT Softw. Eng. Notes 40(6), 1–5 (2015)
Yang, G., Qiu, R., Khurshid, S., Pasareanu, C.S., Wen, J.: A synergistic approach to improving symbolic execution using test ranges. Innov. Syst. Softw. Eng. 15(3-4), 325–342 (2019)
Yin, B., Chen, L., Liu, J., Wang, J., Cousot, P.: Verifying numerical programs via iterative abstract testing. In: Chang, B.-Y.E. (ed.) SAS 2019. LNCS, vol. 11822, pp. 247–267. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-32304-2_13
Zhou, L., Gan, S., Qin, X., Han, W.: SECloud: binary analyzing using symbolic execution in the cloud. In: Proceedings of CBD, pp. 58–63. IEEE (2013)
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Haltermann, J., Jakobs, M.-C., Richter, C., Wehrheim, H. (2023). Ranged Program Analysis via Instrumentation. In: Ferreira, C., Willemse, T.A.C. (eds.) Software Engineering and Formal Methods. SEFM 2023. Lecture Notes in Computer Science, vol. 14323. Springer, Cham. https://doi.org/10.1007/978-3-031-47115-5_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-47114-8
Online ISBN: 978-3-031-47115-5
eBook Packages: Computer Science (R0)