DOI: 10.1145/3663530.3665019
Research article · Open access

Evaluating the Quality of Open Source Ansible Playbooks: An Executability Perspective

Published: 15 July 2024

Abstract

Infrastructure as code (IaC) is the practice of automatically managing computing platforms, such as Internet of Things (IoT) platforms. IaC has gained popularity in recent years, yielding a plethora of software artifacts, such as the Ansible playbooks available on social coding platforms. Despite the availability of open source software (OSS) Ansible playbooks, there is a lack of empirical research on their quality, which can hinder the progress of IaC-related research. To that end, we conduct an empirical study of 2,952 OSS Ansible playbooks in which we evaluate their quality from the perspective of executability, i.e., whether publicly available OSS Ansible playbooks can be executed without failures. From our study, we observe that 71.5% of the 2,952 mined Ansible playbooks cannot be executed as-is, owing to four categories of failures.
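To make the executability criterion concrete, the sketch below shows one way to programmatically run a playbook and classify the outcome. It is an illustration, not the authors' actual measurement harness: it assumes the ansible-runner Python library, and the project directory and playbook name (/tmp/playbook-demo, site.yml) are hypothetical placeholders.

```python
# Minimal sketch of an executability check: run one playbook and report
# whether it completed without failures. Assumes the ansible-runner package
# is installed (pip install ansible-runner); the directory layout and file
# names below are hypothetical placeholders, not the paper's setup.
import ansible_runner

def is_executable(project_dir: str, playbook: str) -> bool:
    """Return True if the playbook runs to completion without failures."""
    result = ansible_runner.run(
        private_data_dir=project_dir,  # holds project/<playbook>, inventory/, env/
        playbook=playbook,
    )
    # ansible-runner reports status 'successful' only when every task ran
    # without failure; rc mirrors the ansible-playbook exit code (0 on success).
    return result.status == "successful" and result.rc == 0

if __name__ == "__main__":
    ok = is_executable("/tmp/playbook-demo", "site.yml")
    print("executable as-is" if ok else "failed to execute")
```

Under this reading, a playbook counts as executable only when such a run ends with a successful status; the 71.5% figure above corresponds to playbooks that fail this kind of check for one of four failure categories.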


Cited By

• Methodology for Automating and Orchestrating Performance Evaluation of Kubernetes Container Network Interfaces. Computers 13:11, article 283 (Nov 2024). https://doi.org/10.3390/computers13110283

    Published In

    SEA4DQ 2024: Proceedings of the 4th International Workshop on Software Engineering and AI for Data Quality in Cyber-Physical Systems/Internet of Things
    July 2024
    21 pages
    ISBN:9798400706721
    DOI:10.1145/3663530
    General Chairs: Tim Menzies, Bowen Xu
    Program Chairs: Hong Jin Kang, Jie M. Zhang
    Publisher

    Association for Computing Machinery

    New York, NY, United States


    Author Tags

    1. Ansible
    2. data quality
    3. devops
    4. executability
    5. infrastructure as code


    Funding Sources

    • U.S. National Science Foundation

    Conference

    SEA4DQ '24
