Abstract
Software testing, which plays a crucial role in software quality assurance, is a time- and resource-consuming process. It is therefore necessary to estimate, as early as possible, the effort required to test software, so that activities can be planned and resources optimally allocated. Unfortunately, little is known about predicting testing effort. In this paper, we address testing effort from the perspective of test suite size. The study presented aims to explore empirically the relationships between use cases and the size of test suites in object-oriented systems. We introduce four metrics that characterize the size and complexity of use cases. The size of a test suite is measured in terms of lines of test code. We performed an experimental study using data collected from five case studies. The results provide evidence of a significant relationship between the use case metrics and the size of test suites.
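The kind of analysis the abstract describes — relating a use case metric to the lines of test code it eventually requires — can be sketched with a simple correlation computation. The sketch below is illustrative only: the paper's four metrics are not named in this excerpt, so the metric values and LOC figures are hypothetical placeholders, and Pearson correlation is one common choice of association measure, not necessarily the one used in the study.

```python
# Illustrative sketch only: the metric values and test-code LOC below are
# hypothetical placeholders, not data from the study.
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: one use case metric (e.g., a count of transactions per
# use case) against the lines of test code written for that use case.
metric_values = [3, 5, 2, 8, 6, 4, 7]
test_code_loc = [120, 210, 90, 340, 250, 160, 300]

r = pearson(metric_values, test_code_loc)
print(f"Pearson r = {r:.3f}")  # → Pearson r = 0.998 for this toy data
```

A strong positive coefficient of this kind is the sort of evidence the study reports; in practice one would also check statistical significance and repeat the analysis for each metric across all case studies.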
On the relationship between use cases and test suites size: an exploratory study