Abstract
Modern software systems operate in complex and changing environments and are exposed to multiple sources of uncertainty. Treating uncertainty as a first-class concern in software testing is attracting growing attention. This paper introduces a novel methodology for testing under uncertainty. Our proposal combines parametric model checking at design time with online model-based testing algorithms that gather runtime evidence and detect requirements violations. As modeling formalism, we adopt parametric Markov Decision Processes, in which transition probabilities are not fixed but may be given as a set of uncertain parameters. The design-time phase analyzes the parameter space to identify the constraints under which the requirements are satisfied. The testing activity then applies a Bayesian inference process to detect violations of the pre-computed constraints. An extensive empirical evaluation shows that the proposed technique is effective in discovering violations and less expensive than existing methods for testing under uncertainty.
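To make the combination of design-time constraints and runtime Bayesian inference concrete, the sketch below illustrates the general idea under simplifying assumptions: a single uncertain transition probability p of a parametric MDP is modeled as a Beta belief, updated with observations collected during online testing, and checked against a hypothetical lower bound (0.9) that a parametric model checker would have derived at design time. All names and values in the sketch are illustrative and are not taken from the authors' MBT module.

```python
# Illustrative sketch only (not the authors' implementation): an uncertain
# transition probability p is modelled as a Beta(alpha, beta) belief, updated
# with runtime test observations, and checked against a hypothetical
# design-time constraint p >= 0.9 obtained via parametric model checking.

from dataclasses import dataclass


@dataclass
class UncertainTransition:
    """Beta(alpha, beta) belief over one uncertain transition probability."""
    alpha: float = 1.0   # prior pseudo-count of observed firings
    beta: float = 1.0    # prior pseudo-count of observed non-firings

    def update(self, fired: bool) -> None:
        # Conjugate Bayesian update: each observation adds one pseudo-count.
        if fired:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def mean(self) -> float:
        # Posterior mean of the Beta distribution.
        return self.alpha / (self.alpha + self.beta)


def violates_constraint(belief: UncertainTransition, lower_bound: float) -> bool:
    """Flag a suspected violation when the posterior mean drops below the
    lower bound pre-computed at design time."""
    return belief.mean() < lower_bound


if __name__ == "__main__":
    LOWER_BOUND = 0.9                     # hypothetical design-time constraint
    belief = UncertainTransition()
    observations = [True, True, False, True, True, False, True]  # runtime evidence
    for obs in observations:
        belief.update(obs)
        if violates_constraint(belief, LOWER_BOUND):
            print(f"violation suspected: E[p] = {belief.mean():.2f} < {LOWER_BOUND}")
```

In the actual approach, the constraints come from analyzing the full parameter space of the parametric MDP rather than a single fixed bound, and the testing strategy drives which transitions are exercised; the sketch only shows the belief-update-and-check step in isolation.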
Notes
1. The MBT module is open-source software, publicly available at https://github.com/SELab-unimi/mbt-module. A replication package of the experiments is available at https://github.com/SELab-unimi/sefm2020-replication-package.
About this paper
Cite this paper
Camilli, M., Russo, B. (2020). Model-Based Testing Under Parametric Variability of Uncertain Beliefs. In: de Boer, F., Cerone, A. (eds) Software Engineering and Formal Methods. SEFM 2020. Lecture Notes in Computer Science, vol. 12310. Springer, Cham. https://doi.org/10.1007/978-3-030-58768-0_10
DOI: https://doi.org/10.1007/978-3-030-58768-0_10
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-58767-3
Online ISBN: 978-3-030-58768-0
eBook Packages: Computer Science, Computer Science (R0)