
Counterexample-Guided Bit-Precision Selection

  • Conference paper
  • In: Programming Languages and Systems (APLAS 2017)
  • Part of the book series: Lecture Notes in Computer Science (LNPSE, volume 10695)

Abstract

Static program verifiers based on satisfiability modulo theories (SMT) solvers often trade precision for scalability to be able to handle large programs. A popular trade-off is to model bitwise operations, which are expensive for SMT solving, using uninterpreted functions over integers. Such an over-approximation improves scalability, but can introduce undesirable false alarms in the presence of bitwise operations that are common in, for example, low-level systems software. In this paper, we present our approach to diagnose the spurious counterexamples caused by this trade-off, and leverage the learned information to lazily and gradually refine the precision of reasoning about bitwise operations in the whole program. Our main insight is to employ a simple and fast type analysis to transform both a counterexample and program into their more precise versions that block the diagnosed spurious counterexample. We implement our approach in the SMACK software verifier, and evaluate it on the benchmark suite from the International Competition on Software Verification (SV-COMP). The evaluation shows that we significantly reduce the number of false alarms while maintaining scalability.
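
To make the trade-off concrete, here is a minimal, illustrative sketch (not the authors' encoding, and assuming the Z3 Python bindings) that contrasts the two ways of modeling a single bitwise AND. With the cheap integer encoding, where & is replaced by an uninterpreted function, even the trivially valid property (x & 0xFF) <= 0xFF cannot be proved and the solver reports a spurious counterexample; with the precise bit-vector encoding the negated property is unsatisfiable, so the property holds. The function name band and the 32-bit width are assumptions made for this example.

    # Illustrative sketch only (Z3 Python API); not the encoding produced by SMACK.
    from z3 import Int, Function, IntSort, BitVec, BitVecVal, UGT, Solver

    # Imprecise encoding: '&' modeled as an uninterpreted function over integers.
    band = Function('band', IntSort(), IntSort(), IntSort())
    x = Int('x')
    s1 = Solver()
    s1.add(x >= 0, x < 2**32)
    s1.add(band(x, 0xFF) > 0xFF)   # negation of (x & 0xFF) <= 0xFF
    print(s1.check())              # sat: a spurious counterexample exists

    # Precise encoding: fixed-width bit-vectors interpret '&' exactly.
    xb = BitVec('xb', 32)
    s2 = Solver()
    s2.add(UGT(xb & BitVecVal(0xFF, 32), BitVecVal(0xFF, 32)))
    print(s2.check())              # unsat: the property holds

The approach described in the abstract starts from the integer encoding and, guided by such spurious counterexamples, uses a type analysis to switch only the affected operations and values to bit-vector precision.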

This work was supported in part by NSF award CNS 1527526.

Notes

  1. The solution to the type constraints of the whole program is the same for each iteration of the function cexg and can thus be cached (a minimal memoization sketch follows these notes). To simplify the presentation, we recompute it at each iteration in the paper.

  2. We made the tool publicly available at https://github.com/shaobo-he/TraceTransformer.

  3. Benchmark sleep_true-no-overflow_false-valid-deref.i from category Systems_BusyBox_Overflows is removed because an invalid dereference leads to a signed integer overflow error, which is not specified by the SV-COMP rules.

  4. Bit-vector mode must always be enabled for reasoning about floating-point values, and it is incompatible with SMACK’s support for Pthreads. Our tool currently does not fully support SMACK’s encoding of memory safety properties. Finally, SMACK currently cannot verify termination.
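
A minimal memoization sketch of the caching observation in footnote 1, using hypothetical names (solve_type_constraints, cexg, and the toy "analysis") that do not reflect the tool's actual interfaces:

    # Sketch only: the whole-program type-constraint solution is the same in
    # every refinement iteration, so it can be computed once and reused.
    from functools import lru_cache

    @lru_cache(maxsize=1)
    def solve_type_constraints(program_text):
        # Stand-in for the whole-program type analysis; its result does not
        # depend on the current counterexample.
        print("solving type constraints once")
        return {"x": "bv32" if "&" in program_text else "int"}

    def cexg(program_text, counterexample):
        typing = solve_type_constraints(program_text)  # cached after the first call
        # ... transform the counterexample and program using `typing` ...
        return typing

    prog = "assert((x & 0xFF) <= 0xFF);"
    for cex in ("cex-1", "cex-2"):
        cexg(prog, cex)   # the type constraints are solved only once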

Author information

Corresponding author

Correspondence to Shaobo He.

Copyright information

© 2017 Springer International Publishing AG

About this paper

Cite this paper

He, S., Rakamarić, Z. (2017). Counterexample-Guided Bit-Precision Selection. In: Chang, B.-Y.E. (ed.) Programming Languages and Systems. APLAS 2017. Lecture Notes in Computer Science, vol. 10695. Springer, Cham. https://doi.org/10.1007/978-3-319-71237-6_26

  • DOI: https://doi.org/10.1007/978-3-319-71237-6_26

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-319-71236-9

  • Online ISBN: 978-3-319-71237-6

  • eBook Packages: Computer Science, Computer Science (R0)
