ABSTRACT
Context: To survive in a highly competitive software market, product managers strive for frequent, incremental releases in ever shorter cycles. Release decisions are characterized by high complexity and have a high impact on project success. Under such conditions, using experience from past releases could help product managers make more informed decisions.
Goal and research objectives: To make decisions about when to release more operational, we formulated release readiness (RR) as a binary classification problem. The goal of the research presented in this paper is twofold: (i) to propose a machine learning method called RC* (Release readiness Classification applying predictive techniques) with two approaches for defining the training set, called incremental and sliding window, and (ii) to empirically evaluate the applicability of RC* across varying project characteristics.
Methodology: In an exploratory case study, we applied the RC* method to four OSS projects under the Apache Software Foundation, retrospectively covering a period of 82 months, 90 releases, and 3722 issues. We used Random Forest as the classification technique, along with eight independent variables, to classify release readiness in individual weeks. Predictive performance was measured in terms of precision, recall, F-measure, and accuracy.
Results: The incremental and sliding window approaches achieve overall accuracies of 76% and 79%, respectively, in classifying RR for the four analyzed projects. The incremental approach outperforms the sliding window approach in terms of stability of predictive performance. The predictive performance of both approaches is significantly influenced by three project characteristics: (i) release duration, (ii) number of issues in a release, and (iii) size of the initial training dataset.
Conclusion: As an initial observation, the incremental approach achieves higher accuracy when releases have long durations and few issues and when classifiers are trained on a large training set. In contrast, the sliding window approach achieves higher accuracy when releases have short durations and classifiers are trained on a small training set.
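The core difference between the two training-set definitions can be illustrated with a short sketch. This is not the authors' implementation; the function names, the window width, and the week-indexed `history` list are illustrative assumptions. At each week, the selected training set would then be used to fit a classifier (e.g., a Random Forest) that labels the current week as release-ready or not.

```python
# Hedged sketch of the two training-set definitions for weekly RR
# classification. `history` is a chronologically ordered list of
# per-week observations (e.g., (feature_vector, label) pairs).
# Names and the default window width are assumptions, not from the paper.

def incremental_window(history, week_index):
    """Incremental approach: all weeks observed so far are used for training."""
    return history[:week_index]

def sliding_window(history, week_index, width=10):
    """Sliding window approach: only the most recent `width` weeks are used."""
    start = max(0, week_index - width)
    return history[start:week_index]
```

The incremental set grows without bound, so the classifier accumulates experience (favoring long releases with a large initial training set), while the sliding window discards old weeks, letting the classifier adapt quickly (favoring short releases with little initial data), consistent with the conclusions above.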