
A survey on bug-report analysis


  • Review
  • Published in Science China Information Sciences

Abstract

Bug reports are essential software artifacts that describe software bugs, especially in open-source software. In recent years, the availability of large numbers of bug reports has spurred considerable research on bug-report analysis, such as automatically detecting duplicate bug reports and localizing bugs based on bug reports. This paper presents a comprehensive survey of this existing work. In particular, it first provides background on bug reports and a small empirical study of bug reports on Bugzilla to motivate the need for bug-report analysis. It then summarizes the existing work on bug-report analysis and points out some open problems in this area.
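As an illustration of the duplicate-detection line of work the abstract mentions, the following is a minimal sketch of the common textual-similarity approach: represent each report as a TF-IDF vector and rank candidates by cosine similarity. The tokenizer, weighting scheme, and example reports here are illustrative assumptions, not taken from any of the surveyed systems.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase and split on non-alphanumeric characters.
    return re.findall(r"[a-z0-9]+", text.lower())

def tfidf_vectors(docs):
    # Build simple TF-IDF vectors (term frequency times log inverse
    # document frequency) for a list of token lists.
    n = len(docs)
    df = Counter()
    for doc in docs:
        df.update(set(doc))
    vectors = []
    for doc in docs:
        tf = Counter(doc)
        vectors.append({t: tf[t] * math.log(n / df[t]) for t in tf})
    return vectors

def cosine(a, b):
    # Cosine similarity between two sparse vectors (dicts).
    dot = sum(w * b.get(t, 0.0) for t, w in a.items())
    na = math.sqrt(sum(w * w for w in a.values()))
    nb = math.sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical bug-report summaries: the first two describe the same bug.
reports = [
    "Crash when saving file with unicode name",
    "Application crashes on save if filename contains unicode",
    "Toolbar icons are misaligned on high-DPI screens",
]
vecs = tfidf_vectors([tokenize(r) for r in reports])
```

With these vectors, the pair of reports describing the same crash scores higher than the unrelated pair, which is the signal a duplicate detector would threshold or rank on.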

Highlights

Bug reports are important software artifacts that describe software bugs. Over the past decade, the number of bug reports has grown rapidly, and a large body of papers on bug-report analysis has appeared. This paper is the first relatively comprehensive survey of bug-report analysis, providing a useful reference for future research in this area. In addition, it proposes a new preliminary classification framework for bug-report analysis and systematically reviews the content, effectiveness, and limitations of existing research from three perspectives: bug-report optimization, bug-report classification, and bug fixing.
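The bug-fixing perspective above includes localizing buggy files from a report's text. A minimal sketch of that information-retrieval idea: score each source file by term overlap between the report and the file's identifiers and contents, then rank files from most to least suspicious. The camelCase splitter, Jaccard scoring, and the example files are illustrative assumptions, not a surveyed tool's actual method.

```python
import re

def terms(text):
    # Split camelCase identifiers and punctuation into lowercase terms.
    parts = re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", text)
    return {p.lower() for p in parts}

def rank_files(bug_report, files):
    # Score each (name, contents) pair by Jaccard overlap with the
    # report's terms; return files sorted most-suspicious first.
    rt = terms(bug_report)
    def score(item):
        name, contents = item
        ft = terms(name + " " + contents)
        union = rt | ft
        return len(rt & ft) / len(union) if union else 0.0
    return sorted(files, key=score, reverse=True)

# Hypothetical code base: two files, one relevant to the report.
files = [
    ("SaveDialog.java", "class SaveDialog handles file save unicode filename"),
    ("ToolbarLayout.java", "class ToolbarLayout arranges toolbar icons"),
]
ranked = rank_files("Crash when saving a file with a unicode filename", files)
```

Real IR-based localizers refine this skeleton with TF-IDF or language models, file length normalization, and history signals, but the ranking structure is the same.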



Author information


Correspondence to Dan Hao or Lu Zhang.


About this article


Cite this article

Zhang, J., Wang, X., Hao, D. et al. A survey on bug-report analysis. Sci. China Inf. Sci. 58, 1–24 (2015). https://doi.org/10.1007/s11432-014-5241-2

