Abstract
With the widespread application of deep learning and neural networks, their security has attracted increasing attention from both academia and industry. Guided by the theory of formal verification, this paper summarizes three basic problems that capture the common features of different neural networks, and proposes three typical properties for neural network systems, covering the correctness of a model, the correctness of a sample, and the robustness of a model. The approach is driven by these properties: the model is constructed in the MSVL language, the properties are specified in the temporal logic PPTL, and on this basis the modeling and verification process is carried out in the MC compiler.
This research is supported by the NSFC Grant Nos. 61672403, 61272118, 61972301, 61420106004, and the Industrial Research Project of Shaanxi Province No. 2017GY-076.
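As an illustration only (a standard formulation of our own, not necessarily the exact property encoded in the paper), the robustness of a model can be stated as local robustness of a classifier f around an input x0 within an l∞-ball of radius ε:

    for all x' with ||x' - x0||∞ ≤ ε :   argmax_i f_i(x') = argmax_i f_i(x0)

that is, every input within the perturbation bound receives the same label as x0. In the framework described in the abstract, such properties are expressed in PPTL and checked against the MSVL model of the network.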