DOI: 10.1145/2950290.2950351

Correctness witnesses: exchanging verification results between verifiers

Published: 01 November 2016

Abstract

Standard verification tools provide a counterexample to witness a specification violation, and for several years now such a witness can be validated by an independent validator using an exchangeable witness format. This way, information about the violation can be shared across verification tools, and users can rely on standard tools to visualize and explore witnesses. This technique is not yet established for the correctness case, in which a program fulfills a specification. Even for simple programs, it is often difficult for users to comprehend why a given program is correct, and there is no way to independently check the verification result. We close this gap by complementing our earlier work on violation witnesses with correctness witnesses. While we use an extension of the established common exchange format for violation witnesses to represent correctness witnesses, the techniques for producing and validating correctness witnesses are different. The overall goal of making proofs available to engineers is probably as old as programming itself, and proof-carrying code was proposed two decades ago; our goal is to make it practical: we consider witnesses as first-class exchangeable objects, stored independently from the source code and checked independently from the verifier that produced them, respecting the important principle of separation of concerns. At any time, the invariants from the correctness witness can be used to reconstruct a correctness proof and thus establish trust. We extended two state-of-the-art verifiers, CPAchecker and Ultimate Automizer, to produce and validate witnesses, and report that the approach is promising on a large set of verification tasks.
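
To make the idea of an exchangeable correctness witness more concrete, the sketch below shows what such a witness might look like in the GraphML-based exchange format that the paper extends. This is an illustrative sketch only: the program, the node names, the line number, and the loop invariant (i <= 10) are hypothetical, and the key names are modeled on the SV-COMP witness format only approximately; the official format specification remains the authoritative reference for the exact schema.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative sketch only: a correctness witness for a hypothetical C
         program whose loop maintains the invariant i <= 10. Key names are
         modeled on the GraphML-based SV-COMP witness format; exact ids and
         required metadata may differ from the official specification. -->
    <graphml xmlns="http://graphml.graphdrawing.org/xmlns">
      <key id="witness-type" for="graph" attr.name="witness-type" attr.type="string"/>
      <key id="entry"        for="node"  attr.name="entry"        attr.type="boolean"/>
      <key id="invariant"    for="node"  attr.name="invariant"    attr.type="string"/>
      <key id="startline"    for="edge"  attr.name="startline"    attr.type="int"/>
      <graph edgedefault="directed">
        <data key="witness-type">correctness_witness</data>
        <node id="N0">
          <data key="entry">true</data>          <!-- program entry point -->
        </node>
        <node id="N1">
          <!-- Invariant proposed by the producing verifier for the loop head;
               a validator must re-establish it to confirm the proof. -->
          <data key="invariant">i &lt;= 10</data>
        </node>
        <edge source="N0" target="N1">
          <data key="startline">5</data>         <!-- hypothetical loop-head line -->
        </edge>
      </graph>
    </graphml>

A validator such as CPAchecker or Ultimate Automizer reads the program together with such a witness, treats the attached invariants as candidate facts to be proved, and confirms the verdict only if a correctness proof can be reconstructed from them; this is the independent check, separate from the producing verifier, that the abstract describes.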

Supplementary Material

Auxiliary Archive (p326-beyer-s.zip)
As supplementary material we provide the virtual machine mentioned in our article, containing all our raw experimental data and our implementations. The virtual machine has been prepared such that our experiments can be repeated easily. The username and password to log into the virtual machine are both 'fse'.

    Information

    Published In

    FSE 2016: Proceedings of the 2016 24th ACM SIGSOFT International Symposium on Foundations of Software Engineering
    November 2016
    1156 pages
    ISBN:9781450342186
    DOI:10.1145/2950290
    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected]

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Author Tags

    1. Correctness Witness
    2. Model Checking
    3. Program Analysis
    4. Software Verification
    5. Witness Validation

    Qualifiers

    • Research-article

    Conference

    FSE'16

    Acceptance Rates

    Overall Acceptance Rate 17 of 128 submissions, 13%

    Article Metrics

    • Downloads (last 12 months): 42
    • Downloads (last 6 weeks): 4
    Reflects downloads up to 17 Jan 2025

    Cited By

    • (2025) Six years later: testing vs. model checking. International Journal on Software Tools for Technology Transfer. DOI: 10.1007/s10009-024-00769-8. Online publication date: 16-Jan-2025.
    • (2024) JCWIT: A Correctness-Witness Validator for Java Programs Based on Bounded Model Checking. Proceedings of the 33rd ACM SIGSOFT International Symposium on Software Testing and Analysis, pages 1831-1835. DOI: 10.1145/3650212.3685303. Online publication date: 11-Sep-2024.
    • (2024) Algorithm Selection for Software Verification Using Graph Neural Networks. ACM Transactions on Software Engineering and Methodology, 33(3):1-36. DOI: 10.1145/3637225. Online publication date: 14-Mar-2024.
    • (2024) Parallel program analysis on path ranges. Science of Computer Programming, vol. 238. DOI: 10.1016/j.scico.2024.103154. Online publication date: 1-Dec-2024.
    • (2024) Exchanging information in cooperative software validation. Software and Systems Modeling (SoSyM), 23(3):695-719. DOI: 10.1007/s10270-024-01155-3. Online publication date: 1-Jun-2024.
    • (2024) Software Verification Witnesses 2.0. Model Checking Software, pages 184-203. DOI: 10.1007/978-3-031-66149-5_11. Online publication date: 10-Apr-2024.
    • (2024) Certifying Phase Abstraction. Automated Reasoning, pages 284-303. DOI: 10.1007/978-3-031-63498-7_17. Online publication date: 1-Jul-2024.
    • (2024) Btor2-Cert: A Certifying Hardware-Verification Framework Using Software Analyzers. Tools and Algorithms for the Construction and Analysis of Systems, pages 129-149. DOI: 10.1007/978-3-031-57256-2_7. Online publication date: 6-Apr-2024.
    • (2024) CPAchecker 2.3 with Strategy Selection. Tools and Algorithms for the Construction and Analysis of Systems, pages 359-364. DOI: 10.1007/978-3-031-57256-2_21. Online publication date: 6-Apr-2024.
    • (2024) Goblint Validator: Correctness Witness Validation by Abstract Interpretation. Tools and Algorithms for the Construction and Analysis of Systems, pages 335-340. DOI: 10.1007/978-3-031-57256-2_17. Online publication date: 6-Apr-2024.