DOI: 10.1145/3338906.3338968
Research article · Artifacts Available

Effects of Explicit Feature Traceability on Program Comprehension

Authors: Jacob Krüger, Gül Çalıklı, Thorsten Berger, Thomas Leich, and Gunter Saake
Published: 12 August 2019

ABSTRACT

Developers spend a substantial amount of their time on program comprehension. To improve their comprehension and refresh their memory, developers communicate with other developers, read documentation, and analyze source code. Many studies show that developers focus primarily on the source code and that even small improvements can have a strong impact. It is therefore crucial to bring the code itself into a more comprehensible form. One technique for this purpose is explicit feature traces, which make it easy to identify a program's functionalities in the code. To improve our empirical understanding of the effects of feature traces, we report an online experiment with 49 professional software developers. We studied the impact of two kinds of explicit feature traces, annotations and decomposition, on program comprehension and compared them to the same code without traces. In addition to the experiment, we asked our participants for their opinions in order to combine quantitative and qualitative data. Our results indicate that, compared to purely object-oriented code: (1) annotations can have positive effects on program comprehension; (2) decomposition can have a negative impact on bug localization; and (3) participants perceive both techniques as beneficial. Moreover, none of the three code versions yields significant improvements in task completion time. Overall, our results indicate that lightweight traceability, such as annotations, provides immediate benefits to developers during software development and maintenance without extensive training or tooling, and can improve current industrial practices that rely on heavyweight traceability tools (e.g., DOORS) and retroactive fulfillment of standards (e.g., ISO-26262, DO-178B).
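
To make the compared treatments more concrete, the following is a minimal, hypothetical Java sketch. It is not taken from the study's materials; the class, feature, and marker names are invented for illustration. It shows how comment-based feature annotations (here in a &begin/&end style common to embedded-annotation tooling; any consistent marker would do) can make a feature traceable inside otherwise purely object-oriented code.

    // Illustrative sketch only: embedded annotations mark which lines belong to
    // the (hypothetical) NOTIFICATION feature, so a reader can locate the feature
    // without searching the whole class.
    public class OrderService {

        static class Order {
            final String customerEmail;
            Order(String customerEmail) { this.customerEmail = customerEmail; }
        }

        private final java.util.List<Order> store = new java.util.ArrayList<>();

        public void placeOrder(Order order) {
            store.add(order); // core behavior, no feature trace needed

            // &begin[NOTIFICATION]
            if (order.customerEmail != null) {
                System.out.println("Sending confirmation to " + order.customerEmail);
            }
            // &end[NOTIFICATION]
        }

        public static void main(String[] args) {
            new OrderService().placeOrder(new Order("alice@example.com"));
        }
    }

In the decomposition treatment, by contrast, the annotated block would be extracted into a separate notification module or class, so the feature's location is made explicit through the file structure rather than through in-code markers. The code without traces simply omits both the markers and the separate module.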


Published in
ESEC/FSE 2019: Proceedings of the 2019 27th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering
August 2019, 1264 pages
ISBN: 9781450355728
DOI: 10.1145/3338906
Copyright © 2019 ACM
Publisher: Association for Computing Machinery, New York, NY, United States


Acceptance Rates
Overall acceptance rate: 112 of 543 submissions, 21%
