Research Article · DOI: 10.1145/3387906.3388621

An empirical study on self-fixed technical debt

Published: 25 September 2020

Abstract

Technical Debt (TD) can be paid back either by those who incurred it or by others. We call the former self-fixed TD, and it is particularly effective, as developers are experts in their own code and are best suited to fix the corresponding TD issues. To what extent is TD self-fixed? Which types of TD are more likely to be self-fixed? Is the remediation time of self-fixed TD shorter than that of non-self-fixed TD? This paper attempts to answer these questions. It reports on an empirical study that analyzes the self-fixed issues of five types of TD (i.e., Code, Defect, Design, Documentation and Test), captured via static analysis, in more than 17,000 commits from 20 Python projects of the Apache Software Foundation. The results show that more than two thirds of the issues are self-fixed and that the self-fixing rate is negatively correlated with the number of commits, the number of developers and project size. Furthermore, the survival time of self-fixed issues is generally shorter than that of non-self-fixed issues. Moreover, the majority of Defect Debt tends to be self-fixed and has a shorter survival time, while Test Debt and Design Debt are likely to be fixed by other developers. These results can benefit both researchers and practitioners by aiding the prioritization of TD remediation activities within development teams, and by informing the development of TD management tools.
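The core notion of a self-fixed issue can be made concrete with a small sketch. The following Python snippet is illustrative only and does not reproduce the authors' tooling: the `TDIssue` record, its field names, and the idea of matching the author of the commit that introduced an issue against the author of the commit that removed it are assumptions about how such an analysis could be set up once static-analysis issues have been traced through a project's commit history.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

# Hypothetical record for one TD issue reported by a static analyzer
# (e.g., a SonarQube-style rule violation) after tracing it through
# the project's git history. Field names are assumptions, not the
# authors' actual data model.
@dataclass
class TDIssue:
    td_type: str                  # "Code", "Defect", "Design", "Documentation", "Test"
    introduced_by: str            # author of the commit that introduced the issue
    introduced_at: datetime       # timestamp of the introducing commit
    fixed_by: Optional[str]       # author of the commit that removed the issue, if any
    fixed_at: Optional[datetime]  # timestamp of the fixing commit, if any

def is_self_fixed(issue: TDIssue) -> bool:
    """An issue counts as self-fixed when the developer who introduced it also fixed it."""
    return issue.fixed_by is not None and issue.fixed_by == issue.introduced_by

def survival_days(issue: TDIssue) -> Optional[float]:
    """Remediation (survival) time in days, defined only for fixed issues."""
    if issue.fixed_at is None:
        return None
    return (issue.fixed_at - issue.introduced_at).total_seconds() / 86400

def self_fixing_rate(issues: list[TDIssue]) -> float:
    """Fraction of fixed issues that were fixed by their own author."""
    fixed = [i for i in issues if i.fixed_at is not None]
    if not fixed:
        return 0.0
    return sum(is_self_fixed(i) for i in fixed) / len(fixed)

if __name__ == "__main__":
    # Toy data: two fixed issues (one self-fixed) and one still open.
    issues = [
        TDIssue("Defect", "alice", datetime(2019, 1, 1), "alice", datetime(2019, 1, 3)),
        TDIssue("Test", "alice", datetime(2019, 1, 1), "bob", datetime(2019, 2, 1)),
        TDIssue("Design", "carol", datetime(2019, 3, 1), None, None),
    ]
    print(f"self-fixing rate: {self_fixing_rate(issues):.2f}")  # 0.50
```

Grouping such records by `td_type` would then allow comparing, per debt type, the self-fixing rate and the survival-time distributions of self-fixed versus non-self-fixed issues, which is the kind of comparison the study reports on.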



Published In

TechDebt '20: Proceedings of the 3rd International Conference on Technical Debt
June 2020
131 pages
ISBN:9781450379601
DOI:10.1145/3387906
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than the author(s) must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

In-Cooperation

  • IEEE CS

Publisher

Association for Computing Machinery

New York, NY, United States


Author Tags

  1. python
  2. self-fixed issues
  3. static analysis
  4. technical debt

Qualifiers

  • Research-article

Conference

TechDebt '20: International Conference on Technical Debt
June 28-30, 2020
Seoul, Republic of Korea

Acceptance Rates

TechDebt '20 paper acceptance rate: 14 of 31 submissions, 45%
Overall acceptance rate: 14 of 31 submissions, 45%


Cited By

  • (2024) A Catalog of Prevention Strategies for Test Technical Debt. Proceedings of the XXIII Brazilian Symposium on Software Quality, 706-717. DOI: 10.1145/3701625.3701692. Online publication date: 5-Nov-2024.
  • (2023) The lifecycle of Technical Debt that manifests in both source code and issue trackers. Information and Software Technology 159:C. DOI: 10.1016/j.infsof.2023.107216. Online publication date: 10-May-2023.
  • (2023) How SonarQube-identified technical debt is prioritized. Information and Software Technology 156:C. DOI: 10.1016/j.infsof.2023.107147. Online publication date: 1-Apr-2023.
  • (2023) Keyword-labeled self-admitted technical debt and static code analysis have significant relationship but limited overlap. Software Quality Journal 32:2, 391-429. DOI: 10.1007/s11219-023-09655-z. Online publication date: 16-Nov-2023.
  • (2023) Integrating privacy debt and VSE's software developments. Journal of Software: Evolution and Process 35:8. DOI: 10.1002/smr.2437. Online publication date: 7-Aug-2023.
  • (2022) The Gap between the Admitted and the Measured Technical Debt: An Empirical Study. Applied Sciences 12:15, 7482. DOI: 10.3390/app12157482. Online publication date: 26-Jul-2022.
  • (2022) Reproducibility in the technical debt domain. Acta Universitatis Sapientiae, Informatica 13:2, 335-360. DOI: 10.2478/ausi-2021-0016. Online publication date: 2-Feb-2022.
  • (2022) What Factors Affect the Performance of Software after Migration: A Case Study on Sunway TaihuLight Supercomputer. IEICE Transactions on Information and Systems E105.D:1, 26-30. DOI: 10.1587/transinf.2021MPL0003. Online publication date: 1-Jan-2022.
  • (2022) An Exploratory Study on Self-Fixed Software Vulnerabilities in OSS Projects. 2022 IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER), 90-100. DOI: 10.1109/SANER53432.2022.00023. Online publication date: Mar-2022.
  • (2022) analyzeR: A SonarQube plugin for analyzing object-oriented R Packages. SoftwareX 19, 101113. DOI: 10.1016/j.softx.2022.101113. Online publication date: Jul-2022.
