
Automated test data generation and stubbing method for C/C++ embedded projects

  • Published in: Automated Software Engineering

Abstract

Automated test data generation for unit testing C/C++ functions using concolic testing is known to improve software quality while reducing human testing effort. However, concolic testing faces challenging problems when applied to complex practical projects. This paper proposes a concolic-based method named Automated Unit Testing and Stubbing (AUTS) for automated test data and stub generation. The key idea is to apply the concolic testing approach with three major improvements. First, the test data generation, which includes two path search strategies, not only avoids infeasible paths but also achieves higher code coverage. Second, AUTS generates appropriate values for specialized data types to cover more test scenarios. Finally, the method integrates automatic stub preparation and generation to reduce human effort; it works even on incomplete source code or with missing libraries. AUTS is implemented in a tool and evaluated on various industrial and open-source C/C++ projects. The experimental results show that the proposed method significantly improves the coverage of the generated test data in comparison with existing methods.


The full article contains Figures 1–15 and Algorithms 1–2.


Data availability

No datasets were generated or analysed during the current study.

Notes

  1. http://www.gaio.com/users/CM1_5EB42ZKXSZDX

  2. https://gitlab.com/tungtobi/vfpv-tool

  3. https://github.com/fragglet/c-algorithms

  4. https://github.com/linux-can/can-utils

  5. https://gerrit.automotivelinux.org/gerrit/gitweb?p=src/app-framework-binder.git

  6. https://www.gnu.org/software/gsl/doc/html/intro.html


Acknowledgements

Duong Nguyen was funded by the Master Scholarship Programme of Vingroup Innovation Foundation (VINIF), code VINIF.2023.ThS.029.

Author information


Contributions

LNT: Conceptualization, Methodology, Software, Validation, Formal analysis, Investigation, Writing—Original Draft, Data curation. DNVB: Formal analysis, Investigation, Writing—Reviewing and Editing. KNL: Writing—Reviewing and Editing. PNH: Main idea, Methodology, Formal analysis, Investigation, Writing—Reviewing and Editing, Supervision.

Corresponding author

Correspondence to Pham Ngoc Hung.

Ethics declarations

Competing interests

The authors declare no competing interests.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Nguyen Tung, L., Binh Duong, N.V., Le, K.N. et al. Automated test data generation and stubbing method for C/C++ embedded projects. Autom Softw Eng 31, 52 (2024). https://doi.org/10.1007/s10515-024-00449-6
