DOI: 10.1145/2961111.2962629 · ESEM '16 conference proceedings · Short paper

Release Readiness Classification: An Explorative Case Study

Published: 8 September 2016

ABSTRACT

Context: To survive in a highly competitive software market, product managers strive for frequent, incremental releases in ever shorter cycles. Release decisions are highly complex and have a strong impact on project success. Under such conditions, the experience from past releases could help product managers make more informed decisions.

Goal and research objectives: To make the decision of when to release more operational, we formulate release readiness (RR) as a binary classification problem. The goal of the research presented in this paper is twofold: (i) to propose a machine learning method called RC* (Release readiness Classification applying predictive techniques) with two approaches for defining the training set, called incremental and sliding window, and (ii) to empirically evaluate the applicability of RC* under varying project characteristics.
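To illustrate the difference between the two training-set strategies, the following is a minimal Python sketch. The weekly-observation format and the window size are our assumptions for illustration; the paper's abstract does not specify its implementation.

```python
# Minimal sketch of the two training-set strategies (incremental vs.
# sliding window). The observation format and window_size default are
# illustrative assumptions, not the paper's published implementation.

def incremental_training_set(history, current_week):
    """Incremental: every week observed so far is kept,
    so the training set only grows over time."""
    return [obs for obs in history if obs["week"] < current_week]

def sliding_window_training_set(history, current_week, window_size=26):
    """Sliding window: only the most recent window_size weeks are kept,
    so older releases stop influencing the classifier."""
    return [obs for obs in history
            if current_week - window_size <= obs["week"] < current_week]
```

Read this way, the incremental approach trades adaptability for stability (more data, slower drift), while the sliding window approach adapts faster to recent project behavior; this matches the trade-off reported in the results below.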

Methodology: In an explorative case study, we applied the RC* method to four OSS projects under the Apache Software Foundation, retrospectively covering a period of 82 months, 90 releases, and 3,722 issues. We used Random Forest as the classification technique, along with eight independent variables, to classify release readiness in individual weeks. Predictive performance was measured in terms of precision, recall, F-measure, and accuracy.
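As a rough illustration of this classification setup, the sketch below trains a Random Forest on eight-dimensional weekly feature vectors and reports the four performance measures. The random data and all variable names are placeholders, since the paper's eight independent variables are not listed in this abstract.

```python
# Minimal sketch, assuming one labeled feature vector per week;
# the data here is synthetic and purely illustrative.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import (precision_score, recall_score,
                             f1_score, accuracy_score)

rng = np.random.default_rng(0)
X_train = rng.random((200, 8))         # 8 independent variables per week
y_train = rng.integers(0, 2, 200)      # 1 = release-ready, 0 = not ready
X_test = rng.random((40, 8))
y_test = rng.integers(0, 2, 40)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)

print("precision:", precision_score(y_test, y_pred))
print("recall:   ", recall_score(y_test, y_pred))
print("F-measure:", f1_score(y_test, y_pred))
print("accuracy: ", accuracy_score(y_test, y_pred))
```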

Results: The incremental and sliding window approaches achieve an overall accuracy of 76% and 79%, respectively, in classifying RR for the four analyzed projects. The incremental approach outperforms the sliding window approach in terms of the stability of its predictive performance. The predictive performance of both approaches is significantly influenced by three project characteristics: (i) release duration, (ii) the number of issues in a release, and (iii) the size of the initial training dataset.

Conclusion: As an initial observation, the incremental approach achieves higher accuracy when releases have a long duration and few issues, and when classifiers are trained on a large training set. Conversely, the sliding window approach achieves higher accuracy when releases have a short duration and classifiers are trained on a small training set.


Published in

ESEM '16: Proceedings of the 10th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement
September 2016, 457 pages
ISBN: 9781450344272
DOI: 10.1145/2961111
Copyright © 2016 ACM


Publisher: Association for Computing Machinery, New York, NY, United States



Qualifiers

• Short paper
• Research
• Refereed limited

Acceptance Rates

ESEM '16 paper acceptance rate: 27 of 122 submissions (22%). Overall acceptance rate: 130 of 594 submissions (22%).
