DOI: 10.1145/3643991.3644934
Research article

Fine-Grained Just-In-Time Defect Prediction at the Block Level in Infrastructure-as-Code (IaC)

Published: 02 July 2024

Abstract

Infrastructure-as-Code (IaC) is an emerging software engineering practice that leverages source code to automate the configuration of software systems' infrastructure. IaC files are typically complex, containing hundreds of lines of code and dependencies, which makes them prone to defects that can break online services at scale. To help developers identify and fix IaC defects early, prior research has introduced IaC defect prediction models at the file level. However, the granularity of these approaches remains coarse, requiring developers to inspect hundreds of lines of code in a file when only a small fragment is defective. To alleviate this issue, we introduce a machine-learning-based approach that predicts IaC defects at a fine-grained level, focusing on IaC blocks, i.e., small code units that encapsulate specific behaviours within an IaC file. We trained various machine learning algorithms on a mixture of code, process, and change-level metrics, and evaluated our approach on 19 open-source projects that use Terraform, a widely used IaC tool. The results indicate that no single algorithm consistently outperforms the others across the 19 projects. Overall, among the six algorithms, the LightGBM model achieved the highest average performance, with 0.21 MCC and 0.71 AUC. Model analysis reveals that the developer's experience and the relative number of added lines tend to be the most important features. Additionally, we found that blocks belonging to the most frequent block types are more prone to defects. Our defect prediction models also showed sensitivity to concept drift, indicating that IaC practitioners should retrain their models regularly.


Cited By

  • (2025) Assessing the adoption of security policies by developers in Terraform across different cloud providers. Empirical Software Engineering 30:3. DOI: 10.1007/s10664-024-10610-0. Online publication date: 27-Feb-2025.
  • (2024) How Do Infrastructure-as-Code Practitioners Update their Provider Dependencies? An Empirical Study on the AWS Provider. Service-Oriented Computing, 373--388. DOI: 10.1007/978-981-96-0808-9_28. Online publication date: 7-Dec-2024.


Published In

MSR '24: Proceedings of the 21st International Conference on Mining Software Repositories
April 2024, 788 pages
ISBN: 9798400705878
DOI: 10.1145/3643991

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. defect prediction
  2. infrastructure-as-code
  3. IaC
  4. terraform



