Predicting Bug-Fixing Time: DistilBERT Versus Google BERT

  • Conference paper
  • First Online:
Product-Focused Software Process Improvement (PROFES 2022)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13709)

Abstract

The problem of predicting bug-fixing time can be treated as a supervised text categorization task in Natural Language Processing. In recent years, following the adoption of deep learning in Natural Language Processing, pre-trained contextualized word representations have become widespread. One of the most widely used pre-trained language representation models is Google BERT (hereinafter, for brevity, BERT). BERT uses a self-attention mechanism to learn the bidirectional context representation of a word in a sentence, which is one of its main advantages over previously proposed solutions. However, because of its large size, BERT is difficult to put into production. To address this issue, a smaller, faster, cheaper and lighter version of BERT, named DistilBERT, was introduced at the end of 2019. This paper compares the efficacy of BERT and DistilBERT, each combined with Logistic Regression, in predicting bug-fixing time from the bug reports of a large-scale open-source software project, LiveCode. In the experiments carried out, DistilBERT retains almost 100% of BERT's language understanding capabilities and, in the best case, is 63.28% faster than BERT. Moreover, with an inexpensive tuning of the C parameter of Logistic Regression, DistilBERT achieves an accuracy value even better than BERT's.
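
The pipeline described in the abstract (contextual embeddings from a pre-trained encoder fed to a Logistic Regression classifier, with the C regularization parameter tuned by a small grid search) can be sketched as follows. This is a minimal illustration, not the code used in the paper: the checkpoint name, the toy bug reports, and the binary fast/slow labels are assumptions standing in for the LiveCode data.

```python
# Minimal sketch (not the paper's implementation): encode bug-report text with
# DistilBERT and classify fixing time with Logistic Regression, tuning C.
# The checkpoint name, the toy reports, and the fast/slow labels are
# illustrative assumptions; the paper works on the LiveCode bug repository.
import numpy as np
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
encoder = AutoModel.from_pretrained("distilbert-base-uncased")
encoder.eval()

def embed(texts, batch_size=16):
    """Return one fixed-size vector per text (the first-token hidden state)."""
    chunks = []
    with torch.no_grad():
        for i in range(0, len(texts), batch_size):
            batch = tokenizer(texts[i:i + batch_size], padding=True,
                              truncation=True, max_length=128,
                              return_tensors="pt")
            hidden = encoder(**batch).last_hidden_state  # (batch, seq, dim)
            chunks.append(hidden[:, 0, :].cpu().numpy())
    return np.vstack(chunks)

# Toy stand-ins for LiveCode bug reports; label 1 = slow fix, 0 = fast fix.
reports = [
    "Crash when opening a large stack file on Windows",
    "Typo in the preferences dialog",
    "Memory leak in the script editor after long sessions",
    "Wrong tooltip text on the export button",
]
labels = [1, 0, 1, 0]

X = embed(reports)
search = GridSearchCV(LogisticRegression(max_iter=1000),
                      param_grid={"C": [0.01, 0.1, 1.0, 10.0]},
                      cv=2)
search.fit(X, labels)
print("best C:", search.best_params_["C"], "cv accuracy:", search.best_score_)
```

Swapping the checkpoint name for a BERT checkpoint (e.g. "bert-base-uncased") would give the BERT side of the comparison with the same downstream classifier; only the encoding step changes.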



Author information


Corresponding author

Correspondence to Pasquale Ardimento.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Ardimento, P. (2022). Predicting Bug-Fixing Time: DistilBERT Versus Google BERT. In: Taibi, D., Kuhrmann, M., Mikkonen, T., Klünder, J., Abrahamsson, P. (eds) Product-Focused Software Process Improvement. PROFES 2022. Lecture Notes in Computer Science, vol 13709. Springer, Cham. https://doi.org/10.1007/978-3-031-21388-5_46


  • DOI: https://doi.org/10.1007/978-3-031-21388-5_46

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-21387-8

  • Online ISBN: 978-3-031-21388-5

  • eBook Packages: Computer Science, Computer Science (R0)
