Bridging Formal Methods and Machine Learning with Global Optimisation

  • Conference paper
  • Formal Methods and Software Engineering (ICFEM 2022)
  • Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13478)

Abstract

Formal methods and machine learning are two research fields with drastically different foundations and philosophies. Formal methods utilise mathematically rigorous techniques for the specification, development and verification of software and hardware systems. Machine learning focuses on pragmatic approaches that gradually improve a parameterised model by observing a training data set. While the two fields have historically lacked communication, this has changed in the past few years with a surge of research interest in the robustness verification of neural networks. This paper briefly reviews these works and focuses on the urgent need for broader and deeper communication between the two fields, with the ultimate goal of developing learning-enabled systems that offer not only excellent performance but also acceptable safety and security. We present a specification language, MLS\(^2\), and show that it can express a set of known safety and security properties, including generalisation, uncertainty, robustness, data poisoning, backdoors, model stealing, membership inference, model inversion, interpretability, and fairness. To verify MLS\(^2\) properties, we promote methods based on global optimisation, which come with provable guarantees of convergence to the optimal solution; many also have theoretical bounds on the gap between the current solution and the optimum.
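
To make the global optimisation angle concrete, here is a minimal sketch of Lipschitz branch-and-bound maximisation over a box, the family of methods (DIRECT-style Lipschitzian optimisation is a classic instance) whose anytime guarantee the abstract refers to: the search always reports its best value together with a certified gap to the true optimum. Everything below is illustrative rather than the paper's algorithm or MLS\(^2\) syntax; the function name `lipschitz_maximise`, the toy objective, and the assumption of a known Lipschitz constant `L` are ours. In a robustness query, `f` would be, say, a model's loss over an \(L_\infty\) perturbation ball.

```python
import heapq
import numpy as np

def lipschitz_maximise(f, lo, hi, L, tol=1e-2, max_iter=10_000):
    """Maximise f over the box [lo, hi] by branch and bound.

    A Lipschitz constant L gives a sound upper bound on f over each
    sub-box, so the search returns (best_value, gap), where gap is a
    certified bound on the distance to the true maximum."""
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)

    def score(blo, bhi):
        centre = (blo + bhi) / 2.0
        val = f(centre)                                 # evaluate at the centre
        ub = val + L * np.linalg.norm(bhi - blo) / 2.0  # Lipschitz upper bound
        return val, ub

    val, ub = score(lo, hi)
    best, tick = val, 0
    heap = [(-ub, tick, lo, hi)]                        # max-heap on upper bounds
    for _ in range(max_iter):
        neg_ub, _, blo, bhi = heapq.heappop(heap)
        if -neg_ub - best <= tol:                       # certified optimality gap
            return best, -neg_ub - best
        d = int(np.argmax(bhi - blo))                   # bisect the longest edge
        mid = (blo[d] + bhi[d]) / 2.0
        left_hi, right_lo = bhi.copy(), blo.copy()
        left_hi[d], right_lo[d] = mid, mid
        for child in ((blo, left_hi), (right_lo, bhi)):
            val, ub = score(*child)
            best = max(best, val)
            tick += 1
            heapq.heappush(heap, (-ub, tick, *child))
    return best, max(0.0, -heap[0][0] - best)           # budget exhausted

# Illustrative robustness-style query: worst-case value of a toy function
# inside a small box around the origin (standing in for a model's loss
# over an L-infinity perturbation ball).
f = lambda x: float(np.sin(3 * x[0]) * np.cos(2 * x[1]))
print(lipschitz_maximise(f, lo=[-0.5, -0.5], hi=[0.5, 0.5], L=5.0))
```

Because the Lipschitz constant soundly bounds f on every sub-box, the returned gap is a provable optimality certificate; the tolerance `tol` plays exactly the role of the theoretical bound on the distance between the current and the optimal solution mentioned in the abstract.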

Notes

  1. There are different measures of the distance between two distributions. In this paper, we take the KL divergence as an example (sketched below) and believe the formalism can be extended to other measures.
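
As a concrete instance of such a measure, here is a minimal sketch of the KL divergence for discrete distributions; the function name and the `eps` smoothing are illustrative, and swapping in another measure (e.g. total variation) would leave the surrounding formalism intact:

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """D_KL(p || q) for discrete probability vectors p and q.

    The eps smoothing guards against log(0); note that KL is asymmetric
    and blows up when q puts (near-)zero mass where p does not."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

# e.g. comparing a model's predictive distribution against a reference one
print(kl_divergence([0.7, 0.2, 0.1], [0.6, 0.3, 0.1]))  # ~0.027: close distributions
```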

Acknowledgment

This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 956123. XH is also supported by the UK EPSRC under projects EP/R026173/1 and EP/T026995/1.

Author information

Correspondence to Xiaowei Huang.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Huang, X., Ruan, W., Tang, Q., Zhao, X. (2022). Bridging Formal Methods and Machine Learning with Global Optimisation. In: Riesco, A., Zhang, M. (eds) Formal Methods and Software Engineering. ICFEM 2022. Lecture Notes in Computer Science, vol 13478. Springer, Cham. https://doi.org/10.1007/978-3-031-17244-1_1

  • DOI: https://doi.org/10.1007/978-3-031-17244-1_1

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-17243-4

  • Online ISBN: 978-3-031-17244-1

  • eBook Packages: Computer Science, Computer Science (R0)
