Short paper

A first look at duplicate and near-duplicate self-admitted technical debt comments

Published: 20 October 2022

ABSTRACT

Self-admitted technical debt (SATD) refers to technical debt that is intentionally introduced by developers and explicitly documented in code comments or other software artifacts (e.g., issue reports) to annotate sub-optimal decisions made by developers in the software development process.

In this work, we take a first look at the existence and characteristics of duplicate and near-duplicate SATD comments in five popular Apache OSS projects: JSPWiki, Helix, Jackrabbit, Archiva, and SystemML. We design a method that automatically identifies groups of duplicate and near-duplicate SATD comments and tracks their evolution in the software system by mining the commit history of a project. Using this method, we identified 3,520 duplicate and near-duplicate SATD comments in the target projects, belonging to 1,141 groups. We manually analyze the content and context of a sample of 1,505 SATD comments (100 groups per project) and determine whether the comments in each group annotate the same root cause. We also investigate whether duplicate SATD comments occur in code clones, whether they co-exist in the same file, and whether they are introduced and removed simultaneously. Our preliminary study reveals several surprising findings that can inform future work on improving the management of duplicate SATD comments. For instance, only 48.5% of the duplicate SATD comment groups with the same root cause occur in regular code clones, and only 33.9% of the duplicate SATD comment pairs are introduced in the same commit.
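The abstract does not specify how the grouping step works; a minimal sketch of one plausible approach is to normalize each comment (strip comment markers, lowercase, collapse whitespace) and then greedily cluster comments whose normalized text exceeds a similarity threshold. The `0.8` threshold, the normalization rules, and the greedy strategy below are illustrative assumptions, not the paper's actual pipeline.

```python
import re
from difflib import SequenceMatcher

def normalize(comment: str) -> str:
    """Lowercase, strip leading/trailing comment markers, collapse whitespace.
    (Illustrative normalization, not the paper's actual rules.)"""
    text = re.sub(r"^[/*#\s]+|[*/\s]+$", "", comment.lower())
    return re.sub(r"\s+", " ", text)

def group_near_duplicates(comments, threshold=0.8):
    """Greedy grouping: attach each comment to the first group whose
    representative is at least `threshold` similar; otherwise open a
    new group. Exact duplicates reach similarity 1.0 after normalization."""
    groups = []  # list of (normalized representative, member comments)
    for comment in comments:
        norm = normalize(comment)
        for rep, members in groups:
            if SequenceMatcher(None, rep, norm).ratio() >= threshold:
                members.append(comment)
                break
        else:
            groups.append((norm, [comment]))
    return [members for _, members in groups]

comments = [
    "// TODO: fix this hack",
    "/* todo: fix this hack */",
    "// FIXME: handle nulls properly",
]
print(group_near_duplicates(comments))  # first two land in one group
```

In a full pipeline along the lines the abstract describes, this grouping would run over comments extracted from each commit in the project history, so that the introduction and removal of each group member can be dated.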


Published in

ICPC '22: Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension
May 2022, 698 pages
ISBN: 9781450392983
DOI: 10.1145/3524610
Conference Chairs: Ayushi Rastogi, Rosalia Tufano
General Chair: Gabriele Bavota
Program Chairs: Venera Arnaoudova, Sonia Haiduc

    Copyright © 2022 ACM


    Publisher

    Association for Computing Machinery

    New York, NY, United States


