Abstract
The agile model is the present reality of software development, and its main objective is to produce good-quality software in optimal time. Programmers perform unit testing to ensure that the software unit or module they are developing is bug-free and does what it is supposed to do. Mutation testing, on the other hand, is an important technique for showing that the quality of the test cases is good. However, industrial practitioners do not follow it in practice because of the computational expense and the huge amount of effort required. In this paper, we introduce a technique that makes mutation testing fast enough to fit into continuous integration (CI), the core process of agile development; in this way we move towards achieving the principles of agile testing. We compute line and branch coverage for a program and utilize them in mutation testing. Using line coverage information, we eliminate the dead mutants upfront. Next, using branch coverage information, we set priorities by assigning a rank to each test case and running the ranked tests on the reachable mutants. We obtained better results for 45 out of 60 programs, i.e. 75%. Experimentally, we show that our proposed prioritization approach consumes approximately 1036 s less mutation testing time than the baseline (without prioritization). Since we perform mutation testing in less time to achieve agility, we call this technique Agile Mutation Testing.
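The two coverage-based steps in the abstract (discarding dead mutants via line coverage, then ranking test cases by branch coverage) can be sketched as follows. This is a minimal illustration under assumed inputs, not the authors' actual tool: the data structures, the function name `agile_mutation_testing`, and the kill oracle are all hypothetical.

```python
def agile_mutation_testing(mutants, tests, covered_lines, branch_cov):
    """Hypothetical sketch of coverage-guided mutation testing.

    mutants: dict mapping mutant id -> line number it mutates
    tests: dict mapping test id -> set of mutant ids that test kills
           (stands in for actually executing the test; illustration only)
    covered_lines: set of line numbers executed by the whole test suite
    branch_cov: dict mapping test id -> branch coverage achieved by it
    """
    # Step 1: eliminate "dead" mutants upfront. A mutant on a line that
    # no test executes can never be killed, so running tests on it is
    # wasted effort; only reachable mutants go forward.
    reachable = {m for m, line in mutants.items() if line in covered_lines}

    # Step 2: rank test cases by branch coverage, highest first, so that
    # the tests most likely to kill a mutant are executed early.
    ranked = sorted(tests, key=lambda t: branch_cov[t], reverse=True)

    executions = 0
    killed = set()
    for m in reachable:
        for t in ranked:
            executions += 1
            if m in tests[t]:   # test t kills mutant m
                killed.add(m)
                break           # stop at the first kill for this mutant
    return killed, executions

# Hypothetical demo data: mutant m3 sits on an uncovered line (42),
# so it is discarded before any test runs against it.
mutants = {"m1": 3, "m2": 7, "m3": 42}
covered_lines = {1, 2, 3, 5, 7}
tests = {"t1": {"m1"}, "t2": {"m1", "m2"}}
branch_cov = {"t1": 0.4, "t2": 0.9}
killed, execs = agile_mutation_testing(mutants, tests, covered_lines, branch_cov)
# t2 has the highest branch coverage, runs first, and kills m1 and m2
# immediately, so only 2 test executions are needed in total.
```

Because the highest-ranked test kills each reachable mutant on its first run, the low-ranked test never executes; this early termination is where the time saving over the unprioritized baseline comes from.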
Notes
- 1.
- 2. Here, the traditional approach refers to the automated technique without manual intervention.
- 3.
- 4.
- 5. The test cases are ordered as they were generated by the TC Generator.
- 6. Writing BC\(_{max-1}\) as a decrement by 1 merely indicates that we take the next lower value; in the actual execution, the real branch coverage value is considered.
- 7. For clarity, note that these values hold for both the traditional approach and our proposed approach. The improvement from our proposed work is therefore not due to uncovered elements of the programs; rather, it depends on the ordering of the test cases, so that a high-ranked test case kills each mutant early and most of the remaining executions can be avoided.
Copyright information
© 2022 Springer Nature Switzerland AG
About this paper
Cite this paper
Godboley, S., Mohapatra, D.P. (2022). Towards Agile Mutation Testing Using Branch Coverage Based Prioritization Technique. In: Przybyłek, A., Jarzębowicz, A., Luković, I., Ng, Y.Y. (eds) Lean and Agile Software Development. LASD 2022. Lecture Notes in Business Information Processing, vol 438. Springer, Cham. https://doi.org/10.1007/978-3-030-94238-0_9
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-94237-3
Online ISBN: 978-3-030-94238-0