Abstract
The field of machine learning focuses on computationally efficient, yet approximate algorithms. In contrast, the field of formal methods focuses on mathematical rigor and provable correctness. Despite these apparent differences, the two fields can benefit each other. Formal methods provide techniques to verify and explain machine learning systems, aiding their adoption in safety-critical domains. Machine learning offers approximate, computationally efficient approaches that let formal methods scale to larger problems. This paper gives an introduction to the track “Formal Methods Meet Machine Learning” (F3ML) and briefly presents its scientific contributions, structured into two thematic subthemes: one concerning formal-methods-based approaches for the explanation and verification of machine learning systems, and one concerning the use of machine learning approaches to scale formal methods.
Acknowledgements
As organisers of the track, we would like to thank all authors for their contributions. We would also like to thank all reviewers for their insights and helpful comments, and all participants of the track for asking interesting questions, offering constructive comments, and taking part in lively discussions.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Larsen, K., Legay, A., Nolte, G., Schlüter, M., Stoelinga, M., Steffen, B. (2022). Formal Methods Meet Machine Learning (F3ML). In: Margaria, T., Steffen, B. (eds.) Leveraging Applications of Formal Methods, Verification and Validation. Adaptation and Learning. ISoLA 2022. Lecture Notes in Computer Science, vol. 13703. Springer, Cham. https://doi.org/10.1007/978-3-031-19759-8_24
DOI: https://doi.org/10.1007/978-3-031-19759-8_24
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-19758-1
Online ISBN: 978-3-031-19759-8