Abstract
We investigate a method for formally verifying the absence of adversarial examples in a neural network controller. Our approach applies to networks with piecewise affine activation units, which may be encoded symbolically as a piecewise affine mapping from inputs to outputs. The approach rests on characterizing and bounding a critical subset of the state space where controller action is required, partitioning this critical subset, and using satisfiability modulo theories (SMT) to prove nonexistence of safety counterexamples on each of the resulting partition elements. We demonstrate this approach on a simple collision avoidance neural network controller, trained with reinforcement learning to avoid collisions in a simplified simulated environment. After encoding the network weights in SMT, we formally verify safety of the neural network controller on a subset of the critical partition elements, and determine that the rest of the critical set partition elements are potentially unsafe. We further experimentally confirm the existence of actual adversarial collision scenarios in 90% of the identified potentially unsafe critical partition elements, indicating that our approach is reasonably tight.
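The encoding step described above, representing a network with piecewise affine activations as an explicit piecewise affine map from inputs to outputs, can be sketched as follows. This is a minimal illustration with a one-hidden-layer ReLU network and placeholder weights, not the trained controller from the paper; in practice, each region's affine map and the linear constraints defining the region would be handed to an SMT solver for the per-partition-element safety check.

```python
from itertools import product

def relu_regions(W1, b1, W2, b2):
    """Enumerate activation patterns of a one-hidden-layer ReLU network.

    For each pattern s in {0,1}^h, the network restricted to the region
    { x : sign(W1 x + b1) matches s } is the affine map
        y = W2 diag(s) (W1 x + b1) + b2  =  A x + c,
    with A = W2 diag(s) W1 and c = W2 diag(s) b1 + b2.
    Returns a dict mapping each pattern to its (A, c)."""
    h = len(b1)
    n = len(W1[0])
    m = len(W2)
    regions = {}
    for s in product((0, 1), repeat=h):
        A = [[sum(W2[i][k] * s[k] * W1[k][j] for k in range(h))
              for j in range(n)] for i in range(m)]
        c = [sum(W2[i][k] * s[k] * b1[k] for k in range(h)) + b2[i]
             for i in range(m)]
        regions[s] = (A, c)
    return regions

def forward(W1, b1, W2, b2, x):
    """Reference forward pass of the same network, for cross-checking."""
    hid = [max(0.0, sum(W1[k][j] * x[j] for j in range(len(x))) + b1[k])
           for k in range(len(b1))]
    return [sum(W2[i][k] * hid[k] for k in range(len(hid))) + b2[i]
            for i in range(len(b2))]
```

On any input whose hidden-layer signs match a pattern `s`, evaluating the affine map `regions[s]` reproduces the network output exactly, which is what makes the symbolic per-region encoding sound.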
A. Schmidt: This work was supported through JHU/APL internal R&D funds.
The views expressed herein are solely those of the authors, and no official support or endorsement by the Defense Nuclear Facilities Safety Board or the U.S. Government is intended or should be inferred.
Appendix
The minimum (maximum) relative acceleration \(a_\text {min}\) (\(a_\text {max}\)) allowed so that the intruder passes safely above (below) the ownship is given by the following functions
where
Note that the sets \(A\) and \(B\) are semialgebraic. For the decomposed dynamics, we have \(t_2=t_1+T\), where \(T\) is the fixed horizontal conflict duration. The infimum in \(a_\text{min}\) can be determined by computing the infimum over each semialgebraic component \(\psi_A^{(i)}\) and keeping track of the valid sets. Similarly, the supremum in \(a_\text{max}\) can be found by maximizing \(a\) over each \(\psi_B^{(i)}\).
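The component-wise optimization above can be sketched numerically. The predicates and grid in this sketch are hypothetical stand-ins for the paper's semialgebraic components \(\psi_A^{(i)}\), and the grid scan is a numerical stand-in for the exact symbolic computation; it only illustrates the bookkeeping of taking the infimum over each component's valid set and then over the union.

```python
def piecewise_infimum(components, grid):
    """Approximate inf { a : a is in the union of the components' valid sets }.

    components: list of predicates psi_i(a) describing the valid set of each
    semialgebraic component; grid: candidate values of a to scan.
    Returns (infimum, index of the component attaining it), or (None, None)
    if no grid point is feasible."""
    best, best_i = None, None
    for i, psi in enumerate(components):
        feasible = [a for a in grid if psi(a)]
        if feasible:
            lo = min(feasible)          # infimum over this component's valid set
            if best is None or lo < best:
                best, best_i = lo, i    # keep the smallest across components
    return best, best_i
```

The supremum case is symmetric: maximize over each component's valid set and keep the largest value across components.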
Copyright information
© 2022 Springer Nature Switzerland AG
Cite this paper
Genin, D., et al.: Formal verification of neural network controllers for collision-free flight. In: Bloem, R., Dimitrova, R., Fan, C., Sharygina, N. (eds.) Software Verification. NSV/VSTTE 2021. LNCS, vol. 13124. Springer, Cham (2022). https://doi.org/10.1007/978-3-030-95561-8_9