Abstract
Software measurement grows more complex along with the software systems it targets. Supervising such systems requires managing large amounts of data, and measurement plans become heavy, time-consuming, and resource-intensive because of the number of software properties to analyze. Moreover, the design of a measurement process depends on the software project, the programming language, the target platform, and so on. Evaluating a piece of software therefore requires knowing the context of the measured object, and analyzing an evaluation requires the same contextual knowledge; this is what makes software measurement analysis difficult to automate. Formal models and standards have been defined to facilitate some of these aspects, but maintaining measurement activities still involves complex tasks.
In previous work, we fully automated the generation of software measurement plans at runtime in order to obtain measurement processes that flexibly adapt to the software's needs. In this paper we aim to improve that approach. The idea is to learn from historical measurements in order to generate an analysis model that matches the context. To that end, we propose a learning technique that, like a human expert, learns from a dataset of measurements of the evaluated software and generates the corresponding analysis model.
The purpose is to use an unsupervised learning algorithm to generate an analysis model automatically, so as to manage the experts' effort, time, and resources efficiently.
The approach has been implemented and integrated into an industrial platform, and experiments were conducted to show its scalability and effectiveness. The results are discussed.
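To illustrate the kind of unsupervised analysis the abstract describes, the following is a minimal, hypothetical sketch: clustering a dataset of software measurements so that modules with similar measured properties are grouped together. The feature names, synthetic data, and choice of k-means (via scikit-learn) are assumptions made for illustration; they are not the paper's actual algorithm or dataset.

```python
# Hypothetical sketch of unsupervised analysis of software measurements.
# Rows = software modules; columns = measured properties (illustrative:
# cyclomatic complexity, lines of code, code churn).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
measurements = np.vstack([
    rng.normal([5, 200, 10], [1, 30, 2], size=(20, 3)),    # "stable" modules
    rng.normal([25, 900, 80], [4, 100, 10], size=(20, 3)),  # "risky" modules
])

# Standardize so that no single metric dominates the distance computation,
# then cluster the modules into two groups.
X = StandardScaler().fit_transform(measurements)
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = model.labels_
print(sorted(np.bincount(labels).tolist()))  # sizes of the two clusters
```

In practice the number of clusters would not be fixed in advance; methods such as X-means estimate it from the data, which is closer in spirit to an automated analysis pipeline.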
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Dahab, S.A., Maag, S. (2020). Automated Software Measurement Strategies Elaboration Using Unsupervised Learning Data Analysis. In: Damiani, E., Spanoudakis, G., Maciaszek, L. (eds) Evaluation of Novel Approaches to Software Engineering. ENASE 2019. Communications in Computer and Information Science, vol 1172. Springer, Cham. https://doi.org/10.1007/978-3-030-40223-5_17
Print ISBN: 978-3-030-40222-8
Online ISBN: 978-3-030-40223-5