DOI: 10.1145/3092282.3098206

The RERS 2017 challenge and workshop (invited paper)

Published: 13 July 2017

Abstract

RERS is an annual verification challenge that focuses on LTL and reachability properties of reactive systems. In 2017, RERS was extended to a one-day workshop that, in addition to the original challenge program, also featured an invited talk about possible future developments. As a satellite event of ISSTA and SPIN, the 2017 RERS Challenge itself placed increased emphasis on the parallel benchmark problems, which, like their sequential counterparts, were generated using property-preserving transformations in order to scale their level of difficulty. The first half of the RERS workshop focused on the 2017 benchmark profiles, the evaluation of the received contributions, and short presentations by each participating team. The second half comprised discussions about attractive problem scenarios for future benchmarks, such as race detection (the topic of the invited talk), and about systematic ways to improve a tool's performance based on competition benchmarks and machine learning.
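
To give a concrete impression of what such a benchmark looks like, the C sketch below is a purely illustrative, hand-written example: the state machine, the function name calculate_output, and the error label are invented for this illustration and are not taken from any actual RERS problem. It only shows the general shape of a RERS-style sequential reactive system: a main loop that repeatedly reads an input symbol and prints an output symbol, a reachability property that asks whether a given error label can ever be reached, and an LTL-style property that constrains the order of inputs and outputs.

/* Purely illustrative sketch of a RERS-style reactive benchmark (not an
 * actual RERS problem): the program repeatedly reads an input symbol,
 * updates its internal state, and prints an output symbol. */
#include <stdio.h>
#include <stdlib.h>

static int state = 0;      /* internal state of the reactive system    */
static int seen_three = 0; /* records whether input 3 has occurred yet */

/* One reaction step: consume one input symbol, produce one output symbol. */
static int calculate_output(int input) {
    if (state == 0 && input == 1) { state = 1; return 20; }
    if (state == 1 && input == 3) { state = 2; seen_three = 1; return 21; }
    if (state == 1 && input == 2) { state = 2; return 22; } /* bypasses input 3 */
    if (state == 2 && input == 2) {
        state = 0;
        /* Reachability property: can this error label be reached?
         * (Here it can, e.g. by the input sequence 1 2 2.) */
        if (!seen_three) { fprintf(stderr, "error_0 reached\n"); exit(1); }
        return 23; /* LTL-style property: output 23 occurs only after input 3 */
    }
    return -1; /* input not enabled in the current state */
}

int main(void) {
    int input;
    /* Reactive main loop: read inputs until end of input, print the outputs. */
    while (scanf("%d", &input) == 1) {
        int output = calculate_output(input);
        if (output >= 0) printf("%d\n", output);
    }
    return 0;
}

Under these assumptions, the input sequence 1 2 2 reaches the illustrative error label, whereas any run that sees input 3 before the second 2 satisfies the property that output 23 only occurs after input 3. The actual RERS benchmarks follow this reactive input-output scheme but are automatically generated and far larger.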



Information

Published In

SPIN 2017: Proceedings of the 24th ACM SIGSOFT International SPIN Symposium on Model Checking of Software
July 2017
199 pages
ISBN: 9781450350778
DOI: 10.1145/3092282
© 2017 Association for Computing Machinery. ACM acknowledges that this contribution was authored or co-authored by an employee, contractor or affiliate of the United States government. As such, the United States Government retains a nonexclusive, royalty-free right to publish or reproduce this article, or to allow others to do so, for Government purposes only.


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. benchmark generation
  2. modal transition systems
  3. model checking
  4. property-preservation
  5. race analysis
  6. temporal logic
  7. verification

Qualifiers

  • Research-article

Conference

ISSTA '17




Cited By

  • (2022) Active vs. Passive: A Comparison of Automata Learning Paradigms for Network Protocols. Electronic Proceedings in Theoretical Computer Science 371, 1-19. DOI: 10.4204/EPTCS.371.1. Online publication date: 27-Sep-2022
  • (2019) Sound black-box checking in the LearnLib. Innovations in Systems and Software Engineering. DOI: 10.1007/s11334-019-00342-6. Online publication date: 30-May-2019
  • (2019) Synchronous or Alternating? Models, Mindsets, Meta: The What, the How, and the Why Not?, 417-430. DOI: 10.1007/978-3-030-22348-9_24. Online publication date: 26-Jun-2019
  • (2019) Benchmarks for Automata Learning and Conformance Testing. Models, Mindsets, Meta: The What, the How, and the Why Not?, 390-416. DOI: 10.1007/978-3-030-22348-9_23. Online publication date: 26-Jun-2019
  • (2019) RERS 2019: Combining Synthesis with Real-World Models. Tools and Algorithms for the Construction and Analysis of Systems, 101-115. DOI: 10.1007/978-3-030-17502-3_7. Online publication date: 4-Apr-2019
  • (2019) TOOLympics 2019: An Overview of Competitions in Formal Methods. Tools and Algorithms for the Construction and Analysis of Systems, 3-24. DOI: 10.1007/978-3-030-17502-3_1. Online publication date: 4-Apr-2019
  • (2018) Sound Black-Box Checking in the LearnLib. NASA Formal Methods, 349-366. DOI: 10.1007/978-3-319-77935-5_24. Online publication date: 11-Mar-2018
  • (2018) RERS 2018: CTL, LTL, and Reachability. Leveraging Applications of Formal Methods, Verification and Validation. Verification, 433-447. DOI: 10.1007/978-3-030-03421-4_27. Online publication date: 30-Oct-2018
  • (2018) Synthesizing Subtle Bugs with Known Witnesses. Leveraging Applications of Formal Methods, Verification and Validation. Verification, 235-257. DOI: 10.1007/978-3-030-03421-4_16. Online publication date: 30-Oct-2018
  • (2018) Evaluating Tools for Software Verification (Track Introduction). Leveraging Applications of Formal Methods, Verification and Validation. Verification, 139-143. DOI: 10.1007/978-3-030-03421-4_10. Online publication date: 30-Oct-2018
