Abstract
Testing modernized legacy systems is difficult because requirements specifications typically do not exist and detailed knowledge of the system's architecture and design may have been lost. In this paper we present an approach that derives test suites for a modernized legacy system from the legacy code. We extend our earlier approach, which derives test suites from use case map (UCM) specifications of a system, by transforming the legacy code into a UCM model. We further discuss enhancements to the test generation process that are required to operate on the large models obtained from realistic legacy systems and to ensure that the generated tests are meaningful to the tester. This approach has been used to validate the modernization of large (in excess of 20 million lines of code) mainframe applications implemented in COBOL.
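The core idea is to treat paths through a UCM-style behavioral model as candidate test scenarios. As a minimal sketch of that idea (not the authors' tooling), the following code enumerates simple start-to-end paths through a toy control-flow graph; the graph, node names, and exhaustive enumeration strategy are illustrative assumptions, whereas realistic tools must prune the path space against a coverage criterion.

```python
def enumerate_paths(graph, start, end, max_paths=10):
    """Enumerate simple start-to-end paths in a control-flow graph.

    Each returned path is one candidate test scenario. The graph is an
    adjacency map: node -> list of successor nodes.
    """
    paths = []
    stack = [(start, [start])]
    while stack and len(paths) < max_paths:
        node, path = stack.pop()
        if node == end:
            paths.append(path)
            continue
        for nxt in graph.get(node, []):
            if nxt not in path:  # keep paths simple: no node revisited
                stack.append((nxt, path + [nxt]))
    return paths

# Toy UCM-like map: start -> validate -> (approve | reject) -> end
ucm = {
    "start": ["validate"],
    "validate": ["approve", "reject"],
    "approve": ["end"],
    "reject": ["end"],
}

for p in enumerate_paths(ucm, "start", "end"):
    print(" -> ".join(p))
```

On a model recovered from millions of lines of COBOL the number of such paths explodes, which is why the paper discusses enhancements for operating on large models rather than naive enumeration.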
Copyright information
© 2019 Springer Nature Switzerland AG
Cite this paper
Weigert, T. et al. (2019). Generating Test Suites to Validate Legacy Systems. In: Fonseca i Casas, P., Sancho, MR., Sherratt, E. (eds) System Analysis and Modeling. Languages, Methods, and Tools for Industry 4.0. SAM 2019. Lecture Notes in Computer Science(), vol 11753. Springer, Cham. https://doi.org/10.1007/978-3-030-30690-8_1
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-30689-2
Online ISBN: 978-3-030-30690-8
eBook Packages: Computer Science (R0)