Code-line-level Bugginess Identification: How Far have We Come, and How Far have We Yet to Go?

Published: 27 May 2023

Abstract

Background. Code-line-level bugginess identification (CLBI) is a vital technique that helps developers identify buggy lines without expending a large amount of human effort. Most existing studies mine characteristics of source code to train supervised prediction models, which have been reported to be able to distinguish buggy code lines from the others in a target program.
Problem. However, several simple and clear code characteristics, such as the complexity of code lines, have been disregarded in the current literature. Such characteristics are easy to acquire and can be applied in an unsupervised way to conduct more accurate CLBI, which can also reduce the application cost of existing CLBI approaches by a large margin.
Objective. We aim to investigate the status quo in the field of CLBI from two perspectives: (1) how far we have really come in the literature, and (2) how far we have yet to go in industry. To this end, we analyze the performance of state-of-the-art (SOTA) CLBI approaches and tools, respectively.
Method. We propose a simple heuristic baseline solution, GLANCE (aiminG at controL- ANd ComplEx-statements), with three implementations (i.e., GLANCE-MD, GLANCE-EA, and GLANCE-LR). GLANCE is a two-stage CLBI framework: it first uses a simple model to predict potentially defective files, and then leverages simple code characteristics to identify buggy code lines within those predicted files (see the sketch below). We use GLANCE as the baseline to investigate the effectiveness of SOTA CLBI approaches, including those based on natural language processing (NLP) and model interpretation techniques (MIT), as well as popular static analysis tools (SATs).
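
To make the two-stage idea concrete, the following is a minimal, hypothetical Python sketch of a GLANCE-style heuristic, not the authors' implementation (their replication kit is available at https://github.com/Naplues/CLBI). The size-based file ranking in stage 1, the control-statement keyword list, and the token-count complexity proxy in stage 2 are illustrative assumptions.

```python
import re

# Control-flow keywords that mark a line as a control statement (assumed list).
CONTROL_KEYWORDS = re.compile(
    r"\b(if|else|for|while|do|switch|case|catch|throw|return)\b")

def file_score(lines):
    """Stage 1 (size-based assumption): treat larger files as more
    defect-prone, so a file's score is simply its line count."""
    return len(lines)

def line_score(line):
    """Stage 2: score a line by a simple complexity proxy (its token
    count), doubled when the line contains a control statement."""
    tokens = re.findall(r"\w+", line)
    boost = 2.0 if CONTROL_KEYWORDS.search(line) else 1.0
    return boost * len(tokens)

def rank_buggy_lines(files, file_budget=0.5):
    """Rank lines within the top `file_budget` fraction of files.

    `files` maps a file path to its list of source lines; the result
    is a list of (path, line_number, score) triples, most suspicious
    first.
    """
    ranked = sorted(files, key=lambda p: file_score(files[p]), reverse=True)
    candidates = ranked[:max(1, int(len(ranked) * file_budget))]
    scored = [(path, i + 1, line_score(line))
              for path in candidates
              for i, line in enumerate(files[path])]
    return sorted(scored, key=lambda t: t[2], reverse=True)

if __name__ == "__main__":
    demo = {"A.java": ["int x = 0;",
                       "if (x > 0 && y < n) { update(x, y); }"],
            "B.java": ["return result;"]}
    for path, lineno, score in rank_buggy_lines(demo):
        print(f"{path}:{lineno} score={score:.1f}")
```

The sketch captures why such a baseline is cheap to apply: stage 1 narrows attention to a subset of likely-defective files with no training data, and stage 2 ranks lines inside them using characteristics computed directly from the source text.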
Result. Based on 19 open-source projects with 142 different releases, the experimental results show that the GLANCE framework achieves prediction performance comparable or even superior to existing SOTA CLBI approaches and tools in terms of 8 different performance indicators.
Conclusion. The results caution us that, if identification performance is the goal, real progress in CLBI has not been achieved to the extent envisaged in the literature, and there is still a long way to go to make static analysis tools truly effective in industry. In addition, we suggest using GLANCE as a baseline in future studies to demonstrate the usefulness of any newly proposed CLBI approach.

    Published In

ACM Transactions on Software Engineering and Methodology, Volume 32, Issue 4
July 2023, 938 pages
ISSN: 1049-331X
EISSN: 1557-7392
DOI: 10.1145/3599692
Editor: Mauro Pezzè

    Publisher

Association for Computing Machinery, New York, NY, United States

    Publication History

    Published: 27 May 2023
    Online AM: 01 February 2023
    Accepted: 14 December 2022
    Revised: 08 November 2022
    Received: 06 January 2022
    Published in TOSEM Volume 32, Issue 4


    Author Tags

    1. Code line
    2. bugginess
    3. defect prediction
    4. quality assurance
    5. static analysis tool

    Qualifiers

    • Research-article

Cited By
    • (2024) A Just-in-time Software Defect Localization Method based on Code Graph Representation. Proceedings of the 32nd IEEE/ACM International Conference on Program Comprehension, 293-303. DOI: 10.1145/3643916.3644428. Online publication date: 15-Apr-2024.
    • (2024) Multi-Intent Inline Code Comment Generation via Large Language Model. International Journal of Software Engineering and Knowledge Engineering 34, 6, 845-868. DOI: 10.1142/S0218194024500050. Online publication date: 23-Mar-2024.
    • (2024) Software defect prediction: future directions and challenges. Automated Software Engineering 31, 1. DOI: 10.1007/s10515-024-00424-1. Online publication date: 27-Feb-2024.
    • (2024) Parameter-efficient fine-tuning of pre-trained code models for just-in-time defect prediction. Neural Computing and Applications 36, 27, 16911-16940. DOI: 10.1007/s00521-024-09930-5. Online publication date: 1-Sep-2024.
    • (2024) Deep learning or classical machine learning? An empirical study on line-level software defect prediction. Journal of Software: Evolution and Process. DOI: 10.1002/smr.2696. Online publication date: 2-Jun-2024.
