Abstract
Testing is a complex and expensive process that may absorb more than 50% of project costs. To be truly effective, methods for estimating testing costs require a level of detail that is not always available in the initial project stages. As a consequence, the cost of testing is often not adequately addressed, giving rise to problematic deviations. To address this issue, an approach named “TeqReq” is proposed. The main objective of TeqReq is to integrate requirements with information about testing and testing costs from the earliest project phases. This integration is achieved by applying a new family of test-related requirement management attributes. The goal of these attributes is to direct the project team’s focus towards analyzing and qualifying requirements from a testing-effort point of view, starting as early as possible. The proposed approach was validated in an industrial case study involving several real-world projects. The results obtained are compared against those of similar projects where the proposed solution was not applied. The findings confirm that TeqReq had a positive impact on testing-effort estimates and, indirectly, on the quality of the final product. In addition, other interesting and unexpected effects emerged from the application of TeqReq, such as improved relationships between the stakeholders involved and better acquisition of requirement-related knowledge. All these findings are analyzed and discussed throughout the paper.
Data availability
The data collected in this research are available in the supporting documents. However, because of the sensitivity of some of the data, or because of their strategic value to Salesland, certain data cannot be disclosed in their original form and have been masked so as not to violate the European GDPR.
Change history
29 August 2023
A Correction to this paper has been published: https://doi.org/10.1007/s11219-023-09647-z
Funding
Although the research itself was not explicitly supported by any organization, Salesland allowed us to apply the proposed solution in several of its software development projects in order to test its effectiveness.
Author information
Authors and Affiliations
Contributions
Both authors of this paper contributed to its main ideas and concepts, as summarized in Sect. 6. The case study was designed by both authors and carried out by E. Roncero at Salesland. The triangulation of results, as explained in Sect. 5.2, was performed by both authors.
Corresponding author
Ethics declarations
Conflict of interest
The authors declare no competing interests.
Additional information
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
The original version of this article has been revised. The first author, Enrique Roncero, is now affiliated with affiliation 2.
Supplementary information
Below is the link to the electronic supplementary material.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Roncero, E., Silva, A. TeqReq: a new family of test-related requirements attributes. Software Qual J 30, 809–851 (2022). https://doi.org/10.1007/s11219-021-09577-8