ABSTRACT
Software functional equivalence checking is a technique for analyzing the impact of a change in one portion of code on the rest of the system. Existing functional equivalence checking approaches apply only at the level of individual software products. In this paper, we propose a lifted functional equivalence checking approach, CLEVER-V, that efficiently handles annotative software product lines. Instead of checking the functional equivalence of every product separately, CLEVER-V analyzes all products together, iteratively identifying groups of non-equivalent products that share a common cause. We report on an implementation of the approach and demonstrate its effectiveness and scalability on a suite of 288 realistic software updates from BusyBox.
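The iterative grouping strategy described above can be sketched as a simple loop: pick an unchecked product, check it, and when it is non-equivalent, generalize the failure to all remaining products sharing the same cause. This is a minimal illustrative sketch only; the function and parameter names (`check_equivalent`, `common_cause`) are hypothetical and do not reflect the actual CLEVER-V implementation or API.

```python
def lifted_equivalence_check(products, check_equivalent, common_cause):
    """Iteratively partition a product line into groups of non-equivalent
    products sharing a common cause, rather than checking each product
    in isolation.

    products         -- iterable of product configurations
    check_equivalent -- predicate: is this product equivalent across versions?
    common_cause     -- maps a failing product to a predicate capturing the
                        cause, so the failure generalizes to other products
    """
    remaining = set(products)
    non_equivalent_groups = []
    while remaining:
        # Pick any as-yet-unchecked product as a witness.
        witness = next(iter(remaining))
        if check_equivalent(witness):
            remaining.discard(witness)
            continue
        # Generalize the failure: every remaining product matching the
        # same cause is non-equivalent for the same reason, so the whole
        # group is discharged in one step.
        cause = common_cause(witness)
        group = {p for p in remaining if cause(p)}
        non_equivalent_groups.append((cause, group))
        remaining -= group
    return non_equivalent_groups
```

In a toy setting where products are feature sets and the change only affects products containing feature `A`, a single iteration discharges every `A`-product at once instead of one check per product.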