Abstract
In edge-cloud systems, the quality of infrastructure deployment is crucial for delivering high-quality services, especially when using popular Infrastructure as Code (IaC) tools like Ansible. Ensuring the reliability of such large-scale code systems poses a significant challenge given limited testing resources. Software defect prediction (SDP) addresses this limitation by identifying defect-prone software modules, allowing developers to prioritize testing resources effectively. This paper introduces a Large Language Model (LLM)-based approach for SDP in Ansible scripts with Code-Smell-guided Prompting (CSP). CSP leverages code smell indicators extracted from Ansible scripts to refine the prompts given to LLMs, enhancing their understanding of how code structure relates to defects. Our experimental results demonstrate that CSP variants, particularly the Chain-of-Thought CSP (CoT-CSP), outperform traditional prompting strategies, as evidenced by improved F1-score and recall. To the best of our knowledge, this is the first attempt to employ LLMs for SDP in Ansible scripts. By employing a code-smell-guided prompting strategy tailored for Ansible, we anticipate that the proposed method will enhance software quality assurance and reliability, thereby increasing the overall reliability of edge-cloud systems.
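The abstract's core idea, extracting code smell indicators from an Ansible script and using them to guide an LLM prompt, can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: the smell patterns (hardcoded secrets, imperative shell use, unsafe permissions) and the prompt wording are hypothetical stand-ins for the indicator set the authors use.

```python
import re

# Hypothetical smell detectors over raw Ansible YAML text; the paper's actual
# indicator set is not reproduced here, so these patterns are illustrative.
SMELL_PATTERNS = {
    "hardcoded secret": re.compile(r"(?i)(password|token|secret)\s*:\s*\S+"),
    "imperative shell use": re.compile(
        r"^\s*(ansible\.builtin\.)?(shell|command)\s*:", re.MULTILINE
    ),
    "unsafe permissions": re.compile(r"mode\s*:\s*['\"]?0?777"),
}

def detect_smells(script: str) -> list[str]:
    """Return the names of smell indicators found in an Ansible script."""
    return [name for name, pat in SMELL_PATTERNS.items() if pat.search(script)]

def build_csp_prompt(script: str) -> str:
    """Compose a chain-of-thought, code-smell-guided prompt (CoT-CSP style)."""
    smells = detect_smells(script)
    hints = "\n".join(f"- {s}" for s in smells) or "- none detected"
    return (
        "You are reviewing an Ansible script for defects.\n"
        f"Static analysis flagged these code smells:\n{hints}\n"
        "Reason step by step about how each smell could lead to a defect, "
        "then answer DEFECTIVE or CLEAN.\n\n"
        f"Script:\n{script}"
    )

# Example: a task exhibiting two of the illustrative smells.
task = """\
- name: start service
  shell: systemctl start myapp
  vars:
    db_password: hunter2
"""
print(build_csp_prompt(task))
```

The detected smells are injected as explicit reasoning hints ahead of the script, which is the essence of CSP: the LLM is steered toward the structurally suspicious parts of the code rather than prompted with the raw script alone.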
Acknowledgments
This research was supported by the MSIT (Ministry of Science and ICT), Korea, under the ITRC (Information Technology Research Center) support program (IITP-2024-2020-0-01795) supervised by the IITP (Institute for Information & Communications Technology Planning & Evaluation), and by the Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education (NRF-2022R1I1A3069233).
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hong, H., Lee, S., Ryu, D., Baik, J. (2025). Enhancing Software Defect Prediction in Ansible Scripts Using Code-Smell-Guided Prompting with Large Language Models in Edge-Cloud Infrastructures. In: Pautasso, C., Marcel, P. (eds) Current Trends in Web Engineering. ICWE 2024. Communications in Computer and Information Science, vol 2188. Springer, Cham. https://doi.org/10.1007/978-3-031-75110-3_3
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-75109-7
Online ISBN: 978-3-031-75110-3
eBook Packages: Computer Science; Computer Science (R0)