Directed Multi-target Search Based Unit Tests Generation

  • Conference paper
  • First Online:
Information and Software Technologies (ICIST 2019)

Abstract

Software testing costs can be reduced by employing test automation. One of the automation activities is test generation, which aims to produce tests with high code coverage and effective fault detection ability. This paper analyses search-based test generation methods and provides their experimental comparison.
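Search-based test generation, as analysed in the paper, typically treats each uncovered branch as a search target and guides input search with a "branch distance" fitness. The following is a minimal illustrative sketch of that idea, not the paper's implementation; the class and method names are hypothetical.

```java
// Illustrative sketch of branch-distance fitness used in search-based
// test generation: each uncovered branch becomes a target, and the
// fitness measures how close an input came to taking that branch.
// Minimising the distance guides the search toward covering the branch.
public class BranchDistance {

    // Distance for a target branch guarded by "x == y": zero when the
    // branch is taken, otherwise proportional to the operand gap.
    static double equalsDistance(int x, int y) {
        return x == y ? 0.0 : Math.abs((double) x - y) + 1.0;
    }

    // Distance for a target branch guarded by "x > y".
    static double greaterThanDistance(int x, int y) {
        return x > y ? 0.0 : (double) y - x + 1.0;
    }

    public static void main(String[] args) {
        // The input (5, 7) is closer to satisfying "x == y" than (5, 100),
        // so a search optimising this target prefers it.
        System.out.println(equalsDistance(5, 7));      // 3.0
        System.out.println(equalsDistance(5, 100));    // 96.0
        System.out.println(greaterThanDistance(8, 3)); // 0.0
    }
}
```

A multi-target search, as in the approach presented here, optimises many such distances at once rather than one branch at a time.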

We present a novel search-based unit test generation approach directed by multiple search targets. The introduced method generates test data and oracles using static code analysis and code instrumentation. Oracles are created as assertions on the system state after the test execution phase, making the generated tests suitable for regression testing.
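A generated test of the kind described above could look like the following hypothetical sketch (the class under test and all names are illustrative, not taken from the paper): the test data is a generated call sequence, and the oracle is a set of assertions that capture the object's observable state after execution, pinning down current behaviour for regression testing.

```java
import java.util.ArrayDeque;

// Hypothetical example of a generated regression test. The call sequence
// is the generated test data; the assertions are oracles derived from
// the system state observed after the execution phase.
public class GeneratedStackTest {

    public static void main(String[] args) {
        // Generated test data: a call sequence on the class under test.
        ArrayDeque<Integer> stack = new ArrayDeque<>();
        stack.push(10);
        stack.push(20);
        int popped = stack.pop();

        // Generated oracle: assertions on the post-execution state.
        if (popped != 20) throw new AssertionError("pop should return 20");
        if (stack.size() != 1) throw new AssertionError("one element should remain");
        if (stack.peek() != 10) throw new AssertionError("remaining top should be 10");

        System.out.println("GeneratedStackTest passed");
    }
}
```

Because the assertions record whatever state the current implementation produces, a later code change that alters that state makes the test fail, which is exactly the regression-detection behaviour the method targets.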

The method was implemented as an experimental tool, which was evaluated and compared against other search-based test generation tools and methods using code coverage and mutation score metrics. The experimental evaluation was performed on 124 classes from three open source libraries.


Notes

  1. https://github.com/Sable/soot
  2. https://github.com/javaparser/javaparser
  3. https://github.com/PROSRESEARCHCENTER/junitcontest
  4. https://www.eclemma.org/jacoco/
  5. http://pitest.org/


Author information

Correspondence to Šarūnas Packevičius or Eduardas Bareiša.


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper

Cite this paper

Rudžionienė, G., Packevičius, Š., Bareiša, E. (2019). Directed Multi-target Search Based Unit Tests Generation. In: Damaševičius, R., Vasiljevienė, G. (eds.) Information and Software Technologies. ICIST 2019. Communications in Computer and Information Science, vol. 1078. Springer, Cham. https://doi.org/10.1007/978-3-030-30275-7_8

  • DOI: https://doi.org/10.1007/978-3-030-30275-7_8

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-30274-0

  • Online ISBN: 978-3-030-30275-7

  • eBook Packages: Computer Science, Computer Science (R0)
