ABSTRACT
Mistakes in boundary conditions are the cause of many software bugs. These mistakes happen when, for example, developers use '<' or '>' where they should have used '<=' or '>='. Mistakes in boundary conditions are often hard to find, and manually detecting them can be very time-consuming for developers. While researchers have long proposed techniques to cope with mistakes in boundaries, the automated detection of such bugs remains a challenge. We conjecture that, for a tool to precisely identify mistakes in boundary conditions, it must be able to capture the overall context of the source code under analysis. In this work, we propose a deep learning model that learns mistakes in boundary conditions and is later able to identify them in unseen code snippets. We train and test the model on over 1.5 million code snippets, with and without mistakes in different boundary conditions. Our model achieves an accuracy ranging from 55% to 87%. The model is also able to detect 24 out of 41 real-world bugs, albeit with a high false positive rate; existing state-of-the-practice linter tools are not able to detect any of these bugs. We hope this paper paves the road towards deep learning models that can support developers in detecting mistakes in boundary conditions.
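To make the target class of bug concrete, a minimal, hypothetical example (not from the paper's dataset) of the '<' versus '<=' mistake described above might look like this:

```python
def is_valid_index_buggy(items, index):
    # Mistake in the boundary condition: '<=' also accepts
    # index == len(items), which is one past the last element.
    return 0 <= index <= len(items)

def is_valid_index_fixed(items, index):
    # Correct boundary: a valid index must be strictly
    # less than len(items).
    return 0 <= index < len(items)
```

Both versions agree on every in-range index; only the boundary value `len(items)` distinguishes them, which is why such mistakes are hard to catch by inspection or casual testing.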
- OffSide: Learning to Identify Mistakes in Boundary Conditions