Abstract
Test automation automates repetitive and tedious but essential tasks within a formalized testing process that is already in place, or achieves additional testing that would be difficult to perform manually. However, the automated testing tools available today are typically used to execute test cases that have been written and identified manually. Automating this step is highly challenging due to: (1) the large variability in the structure of functional specification documents; and (2) the inter- and intra-observer variability across testers. In this work, we propose a novel automated test framework that introduces three major contributions: (1) modeling the interactions across all processes (design, planning, and execution); specifically, our framework permits the use of textual functional specifications to automate test projects; (2) automating test projects using Machine Learning (ML) and Natural Language Processing (NLP); specifically, the framework automatically extracts automated test scenarios from functional specification requirements; (3) capturing shared and complementary information between the different processes. We evaluated our framework on 300 pages of project specifications. We show that our framework is robust for the standardization of specifications, the automatic extraction of test scenarios, and the identification of scenarios suitable for automation.
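The abstract does not include implementation details. As an illustration only, the following is a minimal sketch of how NLP-based extraction of candidate test scenarios from specification sentences could look, assuming a fine-tuned transformer classifier; the model name "my-org/spec-scenario-classifier" and the label set are hypothetical and not taken from the paper.

# Minimal sketch (not the authors' implementation): classify sentences from a
# functional specification as candidate test scenarios with a fine-tuned
# transformer. The checkpoint name and labels below are assumptions.
from transformers import pipeline

# Hypothetical fine-tuned checkpoint; the paper's references point to
# BERT/CamemBERT-style models, but the exact model is not specified here.
classifier = pipeline("text-classification", model="my-org/spec-scenario-classifier")

spec_sentences = [
    "The user shall be able to reset a forgotten password via email.",
    "This document describes the billing module of release 2.3.",
]

for sentence in spec_sentences:
    result = classifier(sentence)[0]
    # Assumed label set: "TEST_SCENARIO" vs. "CONTEXT"
    print(f"{result['label']:>14}  {result['score']:.2f}  {sentence}")

In practice, such a classifier would be only one component of the pipeline the abstract describes; the standardization of specifications and the decision of which scenarios to automate are separate steps not shown in this sketch.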
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Bnouni Rhim, N., Ben Mabrouk, M. (2022). NLP and Logic Reasoning for Fully Automating Test. In: Abraham, A., et al. Innovations in Bio-Inspired Computing and Applications. IBICA 2021. Lecture Notes in Networks and Systems, vol 419. Springer, Cham. https://doi.org/10.1007/978-3-030-96299-9_10
DOI: https://doi.org/10.1007/978-3-030-96299-9_10
Published:
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-96298-2
Online ISBN: 978-3-030-96299-9
eBook Packages: Intelligent Technologies and Robotics (R0)