DOI: 10.1145/3180155.3180217
ICSE Conference Proceedings · Research article

"Was my contribution fairly reviewed?": a framework to study the perception of fairness in modern code reviews

Published: 27 May 2018

ABSTRACT

Modern code reviews improve the quality of software products. Although modern code reviews rely heavily on human interactions, little is known about whether they are performed fairly. Fairness plays a role in any process in which decisions that affect others are made; when a system is perceived to be unfair, it negatively affects the productivity and motivation of its participants. In this paper, we use fairness theory to create a framework that describes how fairness affects modern code reviews. To demonstrate its applicability, and the importance of fairness in code reviews, we conducted an empirical study that asked developers of a large industrial open source ecosystem (OpenStack) about their perceptions of fairness in their code reviewing process. Our study shows that, in general, the code review process in OpenStack is perceived as fair; however, a significant portion of respondents perceive it as unfair. We also show that the variability in the way developers prioritize code reviews signals a lack of consistency and the existence of bias (potentially increasing the perception of unfairness). The contributions of this paper are: (1) a framework---based on fairness theory---for studying and managing social behaviour in modern code reviews; (2) support for the framework through the results of a case study on a large industrially backed open source project; (3) evidence that fairness is an issue in the code review process of a large open source ecosystem; and (4) a set of guidelines for practitioners to address unfairness in modern code reviews.


Published in

ICSE '18: Proceedings of the 40th International Conference on Software Engineering
May 2018, 1307 pages
ISBN: 9781450356381
DOI: 10.1145/3180155
Conference Chair: Michel Chaudron · General Chair: Ivica Crnkovic · Program Chairs: Marsha Chechik, Mark Harman
            Copyright © 2018 ACM

            Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].

Publisher

Association for Computing Machinery, New York, NY, United States



Acceptance Rates

Overall Acceptance Rate: 276 of 1,856 submissions, 15%
