Research article · DOI: 10.1145/3545258.3545288

Bug Report Priority Prediction Using Developer-Oriented Socio-Technical Features

Published: 15 September 2022

Abstract

Software stakeholders report bugs in Issue Tracking Systems (ITSs) with manually labeled priorities. However, lacking knowledge of and standards for prioritization, stakeholders may mislabel priorities, so priority predictors are actively developed to support them. Prior studies trained machine learners on textual-similarity, categorical, and numeric technical features of bug reports. Most models were validated with time-insensitive approaches and produced sub-optimal results for practical use; moreover, they tend to ignore the developer and social aspects of the ITS. Since the ITS bridges users and developers, we integrate their sentiment- and community-oriented socio-technical features to perform two-class and multi-class bug priority prediction, and we validate our model in within-project, cross-project, and time-wise scenarios. The proposed model outperforms the two baselines by up to 10% in AUC-ROC and 13% in MCC, and the significance of the improvement is statistically confirmed. We find that involving assignee and reporter features from socio-technical perspectives, such as sentiment, can boost prediction performance. Finally, we statistically test the means and distributions of the features, which reflect differences in socio-technical aspects (e.g., quality of communication and resource distribution) between high- and low-priority reports. In conclusion, we suggest that researchers involve contributors' experience and sentiments in bug report priority prediction.
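The evaluation setup described above, two-class priority prediction scored with AUC-ROC and MCC under a time-wise split (training only on reports filed before the test period), can be sketched as follows. This is an illustrative sketch, not the paper's code: the synthetic features, labels, and random-forest classifier are assumptions standing in for the paper's socio-technical features and learner.

```python
# Illustrative sketch (not the paper's code): scoring a two-class bug priority
# classifier with AUC-ROC and MCC under a time-wise validation split.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score, matthews_corrcoef

rng = np.random.default_rng(0)

# Synthetic stand-ins for socio-technical features (e.g. reporter sentiment,
# assignee experience) and high/low priority labels, ordered by report time.
X = rng.normal(size=(1000, 5))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Time-wise validation: the first 80% (older reports) train, the newest test,
# so no information from the future leaks into training.
split = int(0.8 * len(X))
clf = RandomForestClassifier(random_state=0).fit(X[:split], y[:split])

probs = clf.predict_proba(X[split:])[:, 1]  # AUC-ROC needs scores
preds = clf.predict(X[split:])              # MCC needs hard labels
print(f"AUC-ROC: {roc_auc_score(y[split:], probs):.3f}")
print(f"MCC:     {matthews_corrcoef(y[split:], preds):.3f}")
```

Shuffled cross-validation would mix future and past reports across folds; the chronological split above is what makes the evaluation "time-wise".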


Cited By

  • (2024) Prioritization of Software Bugs Using Entropy-Based Measures. Journal of Software: Evolution and Process 37, 2. DOI: 10.1002/smr.2742
  • (2023) Software bug priority prediction technique based on intuitionistic fuzzy representation and class imbalance learning. Knowledge and Information Systems 66, 3, 2135–2164. DOI: 10.1007/s10115-023-02000-7
  • (2023) Bug report priority prediction using social and technical features. Journal of Software: Evolution and Process 36, 6. DOI: 10.1002/smr.2616


Published In

Internetware '22: Proceedings of the 13th Asia-Pacific Symposium on Internetware. June 2022, 291 pages. ISBN 9781450397803. DOI: 10.1145/3545258

Publisher

Association for Computing Machinery, New York, NY, United States


    Author Tags

    1. bug report priority
    2. developer sentiment
    3. empirical software engineering
    4. issue tracking system
    5. socio-technical analysis

    Qualifiers

    • Research-article
    • Research
    • Refereed limited

    Conference

    Internetware 2022

    Acceptance Rates

    Overall Acceptance Rate 55 of 111 submissions, 50%
