DOI: 10.1145/3387940.3391464
ICSE Conference Proceedings · Research Article · Open Access

OffSide: Learning to Identify Mistakes in Boundary Conditions

Published: 25 September 2020

ABSTRACT

Mistakes in boundary conditions cause many software bugs. These mistakes happen when, for example, developers use '<' or '>' where they should have used '<=' or '>='. Such mistakes are often hard to find, and detecting them manually can be very time-consuming for developers. While researchers have long proposed techniques to cope with boundary mistakes, their automated detection remains a challenge. We conjecture that, for a tool to precisely identify mistakes in boundary conditions, it must capture the overall context of the source code under analysis. In this work, we propose a deep learning model that learns mistakes in boundary conditions and is later able to identify them in unseen code snippets. We train and test the model on over 1.5 million code snippets, with and without mistakes in different boundary conditions. The model achieves an accuracy ranging from 55% to 87%. It also detects 24 out of 41 real-world bugs, albeit with a high false positive rate, whereas existing state-of-the-practice linter tools detect none of them. We hope this paper paves the road towards deep learning models that support developers in detecting mistakes in boundary conditions.
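As a concrete, hypothetical illustration of the bug class described above (not an example taken from the paper's dataset), the Java sketch below contrasts a comparison that mistakenly uses '>' with the intended '>='. The method and variable names (passesBuggy, passesFixed, passMark) are invented for this sketch; the point is that the two versions disagree only on the boundary value itself, which is what makes such mistakes hard to spot by eye or with simple linters.

    // Minimal, hypothetical illustration of an off-by-one boundary-condition mistake.
    public class BoundaryExample {

        // Intended behaviour: a score equal to passMark should pass.
        // Buggy version uses '>' where '>=' was needed, so the boundary value fails.
        static boolean passesBuggy(int score, int passMark) {
            return score > passMark;   // mistake: rejects score == passMark
        }

        // Fixed version includes the boundary value.
        static boolean passesFixed(int score, int passMark) {
            return score >= passMark;  // correct: accepts score == passMark
        }

        public static void main(String[] args) {
            // The two versions only disagree on the boundary itself.
            System.out.println(passesBuggy(60, 60));  // false (bug)
            System.out.println(passesFixed(60, 60));  // true
        }
    }

For every non-boundary input the two methods behave identically, which is why context beyond the single comparison operator is needed to decide whether '>' or '>=' was intended.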

