Research Article
DOI: 10.1145/3597503.3639227

Barriers for Students During Code Change Comprehension

Published: 12 April 2024

Abstract

Modern code review (MCR) is a key practice for many software engineering organizations, so undergraduate software engineering courses often teach some form of it to prepare students. However, research on MCR describes many ways its professional implementations can fail, to say nothing of how these barriers manifest in students' particular contexts. To uncover the barriers students face when evaluating code changes during review, we combine interviews and surveys with an observational study. In a junior-level software engineering course, we first interviewed 29 undergraduate students about their experiences with code review. Next, we performed an observational study that presented 44 students from the same course with eight code change comprehension activities. Each activity presented a pull request containing a potential refactoring of a familiar code base, and we collected feedback on students' accuracy and the challenges they encountered. The activities were followed by a reflection survey.
Building on these methods, we combine (1) a qualitative analysis of the interview transcripts, activity comments, and reflection surveys with (2) a quantitative assessment of students' performance in identifying behavioral changes, in order to outline the barriers that students face during code change comprehension. Our results reveal that students struggle with several facets of a program under review: the context for the review, the review tools, the code itself, and the implications of the code changes. These findings, along with our result that student developers tend to overestimate behavioral similarity when comparing code, have implications for future support that helps student developers have smoother code review experiences. We motivate the need for several interventions, including sentiment analysis on pull request comments to flag toxicity, scaffolding for code comprehension while reviewing large changes, and behavioral diffing to contrast the evolution of syntax and semantics.
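
To make the last of these interventions concrete, below is a minimal sketch of what behavioral diffing could look like: an original and a refactored version of a function are run on the same inputs, and any inputs on which their observable behavior diverges are reported. The function names, the refactoring, and the test inputs are all illustrative assumptions, not the paper's implementation.

    # A minimal sketch of behavioral diffing (an illustrative assumption,
    # not the tooling proposed in the paper). It runs an original and a
    # refactored version of a function on shared inputs and reports any
    # inputs where the two versions' outputs diverge.

    def total_before(prices):
        # Original version: sums every entry.
        total = 0
        for p in prices:
            total += p
        return total

    def total_after(prices):
        # Refactored version: meant to be behavior-preserving, but it
        # silently drops negative entries, which is a behavioral change.
        return sum(p for p in prices if p > 0)

    def behavioral_diff(old, new, inputs):
        # Yield (input, old output, new output) wherever behavior differs.
        for args in inputs:
            before, after = old(args), new(args)
            if before != after:
                yield args, before, after

    test_inputs = [[1, 2, 3], [0, 5], [-1, 4], []]
    for args, before, after in behavioral_diff(total_before, total_after, test_inputs):
        print(f"input={args!r}: before={before!r}, after={after!r}")
    # Prints: input=[-1, 4]: before=3, after=4

A syntactic diff of these two functions shows only a rewrite; the behavioral diff surfaces the input on which their semantics diverge, which is the contrast between syntactic and semantic evolution that the abstract motivates.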


Cited By

  • Co-Designing Web Interfaces for Code Comparison. 2024 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), 187-198. DOI: 10.1109/VL/HCC60511.2024.00029. Online publication date: 2 September 2024.
  • A Systematic Literature Review on Recent Peer Code Review Implementation in Education. 2024 International Conference on TVET Excellence & Development (ICTeD), 13-19. DOI: 10.1109/ICTeD62334.2024.10844661. Online publication date: 16 December 2024.

Published In

ICSE '24: Proceedings of the IEEE/ACM 46th International Conference on Software Engineering
May 2024
2942 pages
ISBN: 9798400702174
DOI: 10.1145/3597503

In-Cooperation

  • Faculty of Engineering of University of Porto

Publisher

Association for Computing Machinery, New York, NY, United States

Conference

ICSE '24

Acceptance Rates

Overall Acceptance Rate: 276 of 1,856 submissions, 15%