
The impact of code review coverage and code review participation on software quality: a case study of the Qt, VTK, and ITK projects

Published: 31 May 2014
DOI: 10.1145/2597073.2597076

ABSTRACT

Software code review, i.e., the practice of having third-party team members critique changes to a software system, is a well-established best practice in both open source and proprietary software domains. Prior work has shown that the formal code inspections of the past tend to improve the quality of software delivered by students and small teams. However, the formal code inspection process mandates strict review criteria (e.g., in-person meetings and reviewer checklists) to ensure a base level of review quality, while the modern, lightweight code reviewing process does not. Although recent work explores the modern code review process qualitatively, little research quantitatively explores the relationship between properties of the modern code review process and software quality. Hence, in this paper, we study the relationship between software quality and: (1) code review coverage, i.e., the proportion of changes that have been code reviewed, and (2) code review participation, i.e., the degree of reviewer involvement in the code review process. Through a case study of the Qt, VTK, and ITK projects, we find that both code review coverage and participation share a significant link with software quality. Low code review coverage and participation are estimated to produce components with up to two and five additional post-release defects, respectively. Our results empirically confirm the intuition that poorly reviewed code has a negative impact on software quality in large systems using modern reviewing tools.
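To make the two metrics concrete, the sketch below computes them from per-change data. This is a minimal illustration under assumed definitions, not the paper's actual measurement pipeline: the record format and component names are hypothetical, coverage is taken as the fraction of a component's changes that received at least one review, and participation is approximated here by the mean number of reviewers per change (the paper's participation measures are richer).

```python
from collections import defaultdict

def review_coverage(changes):
    """Fraction of each component's changes that received at least one review."""
    reviewed, total = defaultdict(int), defaultdict(int)
    for component, n_reviewers in changes:
        total[component] += 1
        if n_reviewers > 0:
            reviewed[component] += 1
    return {c: reviewed[c] / total[c] for c in total}

def review_participation(changes):
    """Mean number of reviewers per change, one simple proxy for participation."""
    reviewer_counts = defaultdict(list)
    for component, n_reviewers in changes:
        reviewer_counts[component].append(n_reviewers)
    return {c: sum(v) / len(v) for c, v in reviewer_counts.items()}

# Hypothetical change records: (component, number of participating reviewers).
changes = [
    ("network", 2),
    ("network", 1),
    ("network", 0),  # an unreviewed change lowers coverage
    ("network", 1),
]

print(review_coverage(changes))       # {'network': 0.75}
print(review_participation(changes))  # {'network': 1.0}
```

Under this toy operationalization, a component where many changes land unreviewed (low coverage) or where reviews attract little reviewer involvement (low participation) would be flagged as risky, which is the intuition the study tests statistically against post-release defect counts.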

Published in

MSR 2014: Proceedings of the 11th Working Conference on Mining Software Repositories
May 2014, 427 pages
ISBN: 9781450328630
DOI: 10.1145/2597073
General Chair: Premkumar Devanbu. Program Chairs: Sung Kim and Martin Pinzger.

Copyright © 2014 ACM

Publisher: Association for Computing Machinery, New York, NY, United States
