Abstract
When submitting patches for code review, individual developers are primarily interested in maximizing the chances of their patch being accepted as quickly as possible. In principle, code review is a transparent process in which reviewers assess a patch on its technical merits in a timely manner; in practice, however, the execution of this process can be affected by a variety of factors, some of which are external to the technical content of the patch itself. In this paper, we describe empirical studies of the code review processes of two large open source projects, WebKit and Google Blink. We first consider factors that have been examined in previous studies (patch size, priority, and component) and then extend our enquiry to the effects of organization (which company is involved) and developer profile (review load and activity, patch writer experience) on code review response time and eventual outcome. Our approach uses a reverse-engineered model of the patch submission process and extracts key information from the projects' issue-tracking and code review systems. Our findings suggest that these non-technical factors can significantly impact code review outcomes.
Notes
Extracted data is stored in a database and made available online: https://cs.uwaterloo.ca/~obaysal/webkit_data.sqlite
Our Blink dataset is available online: https://cs.uwaterloo.ca/~obaysal/blink_data.sqlite
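Both datasets are released as plain SQLite files. As a minimal sketch of getting started with them, the snippet below opens a downloaded copy and lists the tables it contains; the table names and schema are not documented here, so the code makes no assumptions about them and simply inspects SQLite's built-in `sqlite_master` catalog.

```python
import sqlite3

def list_tables(db_path):
    """Return the names of all tables in a SQLite database file."""
    conn = sqlite3.connect(db_path)
    try:
        # sqlite_master is SQLite's own catalog of schema objects,
        # so this works regardless of the dataset's actual schema.
        rows = conn.execute(
            "SELECT name FROM sqlite_master WHERE type='table' ORDER BY name"
        ).fetchall()
    finally:
        conn.close()
    return [name for (name,) in rows]

# Example (assumes the file has been downloaded locally):
# print(list_tables("webkit_data.sqlite"))
```

From there, individual tables can be explored with `PRAGMA table_info(<table>)` or ordinary `SELECT` queries.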
Acknowledgments
We thank the WebKit and Blink developers we talked to for their insights into the source code hierarchy and the review process.
Communicated by: Romain Robbes, Massimiliano Di Penta and Rocco Oliveto
Cite this article
Baysal, O., Kononenko, O., Holmes, R. et al. Investigating technical and non-technical factors influencing modern code review. Empir Software Eng 21, 932–959 (2016). https://doi.org/10.1007/s10664-015-9366-8