Verifying Feedforward Neural Networks for Classification in Isabelle/HOL

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14000)

Abstract

Neural networks are being used successfully to solve classification problems, e.g., for detecting objects in images. It is well known, however, that neural networks are susceptible to small changes of their input: situations in which a slight input change, often hardly noticeable to a human expert, results in a misclassification are called adversarial examples. If such inputs are exploited in adversarial attacks, the consequences can be life-threatening, for example, when they occur in image classification systems used in autonomous cars or medical diagnosis.

Systems employing neural networks, e.g., for safety- or security-critical functionality, are a particular challenge for formal verification, which usually expects a formal specification (e.g., given as source code in a programming language for which a formal semantics exists). Such a formal specification does not, per se, exist for neural networks.

In this paper, we address this challenge by presenting a formal embedding of feedforward neural networks into Isabelle/HOL and discussing desirable properties for neural networks in critical applications. Our Isabelle-based prototype can import neural networks trained in TensorFlow, and we demonstrate our approach using a neural network trained for the classification of digits on a dot-matrix display.
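
As a rough intuition for what such an embedding has to capture, the following minimal Python sketch (illustrative only; it is not the paper's Isabelle/HOL formalisation, and all names are ours) evaluates a dense feedforward classifier: alternating affine transformations and activation functions, with the predicted class given by the index of the maximal output.

```python
# Illustrative sketch only (our names, not the paper's Isabelle/HOL
# definitions): a dense feedforward classifier, i.e., alternating
# affine transformations and activation functions.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def softmax(x):
    e = np.exp(x - np.max(x))  # subtract the max for numerical stability
    return e / e.sum()

def predict(layers, x):
    """layers: list of (weight matrix W, bias vector b, activation) triples."""
    for W, b, act in layers:
        x = act(W @ x + b)
    return int(np.argmax(x))  # the predicted class is the maximal output

# Toy example: 4 inputs -> 5 hidden units (ReLU) -> 3 classes (softmax).
rng = np.random.default_rng(0)
net = [(rng.normal(size=(5, 4)), np.zeros(5), relu),
       (rng.normal(size=(3, 5)), np.zeros(3), softmax)]
print(predict(net, np.array([0.1, 0.9, 0.3, 0.7])))
```

A robustness property of the kind the paper discusses then states, roughly, that predict returns the same class for every input within a small perturbation of a given input.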

This work was supported by the Engineering and Physical Sciences Research Council [grant number 670002170].

Data Availability Statement

The formalisation and case studies are available to view on Zenodo [9]. The materials include both the Isabelle/HOL implementation and the detailed documentation generated by Isabelle.

Notes

  1. TensorFlowJS stores the structure of the machine learning model in a JSON [16]-based format that refers to a binary file containing the weights and biases. Our import mechanism fully supports this format, including the import of the weights and biases from the external file.
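
As a rough illustration of the import step, the following Python sketch (our code, not the paper's import mechanism) reads the weights referenced by such a model.json; it assumes the standard TensorFlowJS layers format, in which a weightsManifest lists tensor names, shapes, and the binary shard files, and it assumes little-endian float32 weights throughout.

```python
# Hedged sketch (our code, not the paper's import mechanism): read the
# weights referenced by a TensorFlowJS model.json, assuming the standard
# layers format with little-endian float32 weights.
import json
import os
import numpy as np

def load_tfjs_weights(model_json_path):
    with open(model_json_path) as f:
        model = json.load(f)
    base = os.path.dirname(model_json_path)
    weights = {}
    for group in model["weightsManifest"]:
        # Each manifest group concatenates its tensors, in order, into the
        # listed binary shard files (paths are relative to model.json).
        raw = b"".join(open(os.path.join(base, p), "rb").read()
                       for p in group["paths"])
        offset = 0
        for spec in group["weights"]:
            n = int(np.prod(spec["shape"]))
            data = np.frombuffer(raw, dtype="<f4", count=n, offset=offset)
            weights[spec["name"]] = data.reshape(spec["shape"])
            offset += 4 * n  # float32 values occupy 4 bytes each
    return weights
```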

References

  1. Abadi, M., et al.: TensorFlow: Large-Scale Machine Learning on Heterogeneous Systems (2015). Software available from https://www.tensorflow.org/

  2. Abdulaziz, M., Kurz, F.: Verified SAT-Based AI Planning. Archive of Formal Proofs (2020)

  3. Aggarwal, C.C.: Machine learning with shallow neural networks. In: Aggarwal, C.C. (ed.) Neural Networks and Deep Learning, vol. 10, pp. 53–104. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-94463-0_2. ISBN 9783030068561

  4. Banerjee, K., Vishak Prasad, C., Gupta, R.R., Vyas, K., Anushree, H., Mishra, B.: Exploring Alternatives to Softmax Function (2020)

  5. Barocas, S., Hardt, M., Narayanan, A.: Fairness in machine learning. NIPS Tutor. 1, 2 (2017)

  6. Bentkamp, A.: Expressiveness of deep learning. Archive of Formal Proofs (2016)

  7. Bonaert, G., Dimitrov, D.I., Baader, M., Vechev, M.: Fast and precise certification of transformers. In: PLDI, pp. 466–481. ACM, Virtual, Canada (2021). https://doi.org/10.1145/3453483.3454056

  8. Brucker, A.D.: Nano JSON: working with JSON formatted data in Isabelle/HOL and Isabelle/ML. Archive of Formal Proofs (2022)

  9. Brucker, A.D., Stell, A.: Dataset: feedforward neural network verification in Isabelle/HOL (2022). https://doi.org/10.5281/zenodo.7418170

  10. BS EN 50128:2011: Railway applications - Communication, signalling and processing systems - Software for railway control and protection systems. Standard, British Standards Institute (BSI) (2014)

  11. Campbell, A., Both, A., Sun, Q.: Detecting and mapping traffic signs from Google Street View images using deep learning and GIS. Comput. Environ. Urban Syst. 77, 101350 (2019). https://doi.org/10.1016/j.compenvurbsys.2019.101350

  12. Church, A.: A formulation of the simple theory of types. J. Symb. Log. 5(2), 56–68 (1940)

  13. Cohen, N., Sharir, O., Shashua, A.: On the expressive power of deep learning: a tensor analysis. In: Conference on Learning Theory, pp. 698–728 (2016)

  14. Common Criteria for Information Technology Security Evaluation (Version 3.1, Release 5) (2017). https://www.commoncriteriaportal.org/cc/

  15. Dvijotham, K., Stanforth, R., Gowal, S., Mann, T.A., Kohli, P.: A dual approach to scalable verification of deep networks. In: UAI, p. 3 (2018)

  16. ECMA-404: The JSON data interchange syntax (2017). https://www.ecma-international.org/publications-and-standards/standards/ecma-404/

  17. Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) ATVA 2017. LNCS, vol. 10482, pp. 269–286. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_19

  18. Goodfellow, I., Lee, H., Le, Q., Saxe, A., Ng, A.: Measuring invariances in deep networks. In: Advances in Neural Information Processing Systems, vol. 22 (2009)

  19. Harris, C.R., et al.: Array programming with NumPy. Nature 585(7825), 357–362 (2020). https://doi.org/10.1038/s41586-020-2649-2

  20. Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 3–29. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_1

  21. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5

  22. Katz, G., et al.: The Marabou framework for verification and analysis of deep neural networks. In: Dillig, I., Tasiran, S. (eds.) CAV 2019. LNCS, vol. 11561, pp. 443–452. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-25540-4_26

  23. Klein, G.: Operating system verification – an overview. Sadhana 34(1), 27–69 (2009)

  24. Krizhevsky, A., Sutskever, I., Hinton, G.E.: ImageNet classification with deep convolutional neural networks. Commun. ACM 60(6), 84–90 (2017). https://doi.org/10.1145/3065386

  25. Kurd, Z., Kelly, T., Austin, J.: Developing artificial neural networks for safety critical systems. Neural Comput. Appl. 16(1), 11–19 (2007)

  26. Matichuk, D., Murray, T., Wenzel, M.: Eisbach: a proof method language for Isabelle. J. Autom. Reason. 56(3), 261–282 (2016). https://doi.org/10.1007/s10817-015-9360-2

  27. Mirman, M., Gehr, T., Vechev, M.: Differentiable abstract interpretation for provably robust neural networks. In: International Conference on Machine Learning, pp. 3578–3586 (2018)

  28. Nipkow, T., Paulson, L.C., Wenzel, M.: Isabelle/HOL—A Proof Assistant for Higher-Order Logic. Springer, Heidelberg (2002). https://doi.org/10.1007/3-540-45949-9

  29. Paulson, L.C.: ML for the Working Programmer. Cambridge University Press, Cambridge (1996)

  30. Pulina, L., Tacchella, A.: An abstraction-refinement approach to verification of artificial neural networks. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp. 243–257. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14295-6_24

  31. Rintanen, J.: Madagascar: scalable planning with SAT. IPC 21, 1–5 (2014)

  32. Rumelhart, D.E., Hinton, G.E., Williams, R.J.: Learning representations by back-propagating errors. Nature 323(6088), 533–536 (1986)

  33. Seshia, S.A., et al.: Formal specification for deep neural networks. In: Lahiri, S.K., Wang, C. (eds.) ATVA 2018. LNCS, vol. 11138, pp. 20–34. Springer, Cham (2018). https://doi.org/10.1007/978-3-030-01090-4_2

  34. Singh, G., Gehr, T., Püschel, M., Vechev, M.: Boosting robustness certification of neural networks. In: International Conference on Learning Representations (2018)

  35. Smilkov, D., et al.: TensorFlow.js: Machine Learning for the Web and Beyond. CoRR abs/1901.05350 (2019)

  36. Szegedy, C., et al.: Intriguing properties of neural networks. In: International Conference on Learning Representations (2014)

  37. Taigman, Y., Yang, M., Ranzato, M., Wolf, L.: DeepFace: closing the gap to human-level performance in face verification. In: 2014 IEEE Conference on Computer Vision and Pattern Recognition, pp. 1701–1708 (2014). https://doi.org/10.1109/CVPR.2014.220

  38. Weng, L., et al.: Towards fast computation of certified robustness for ReLU networks. In: International Conference on Machine Learning, pp. 5276–5285 (2018)

  39. Wenzel, M., Wolff, B.: Building formal method tools in the Isabelle/Isar framework. In: Schneider, K., Brandt, J. (eds.) TPHOLs 2007. LNCS, vol. 4732, pp. 352–367. Springer, Heidelberg (2007). https://doi.org/10.1007/978-3-540-74591-4_26

  40. Wenzel, M., Paulson, L.: Isabelle/Isar. In: Wiedijk, F. (ed.) The Seventeen Provers of the World. LNCS (LNAI), vol. 3600, pp. 41–49. Springer, Heidelberg (2006). https://doi.org/10.1007/11542384_8

Author information

Corresponding author

Correspondence to Amy Stell.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Brucker, A.D., Stell, A. (2023). Verifying Feedforward Neural Networks for Classification in Isabelle/HOL. In: Chechik, M., Katoen, JP., Leucker, M. (eds) Formal Methods. FM 2023. Lecture Notes in Computer Science, vol 14000. Springer, Cham. https://doi.org/10.1007/978-3-031-27481-7_24

  • DOI: https://doi.org/10.1007/978-3-031-27481-7_24

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-27480-0

  • Online ISBN: 978-3-031-27481-7

  • eBook Packages: Computer Science (R0)
