Abstract
The intrinsic complexity of deep neural networks (DNNs) makes it challenging to verify not only the networks themselves but also the DNN-controlled systems that host them. Reachability analysis of these systems faces the same challenge. Existing approaches over-approximate DNNs with simpler polynomial models, but they suffer from low efficiency and large overestimation, and are restricted to specific types of DNNs. This paper presents a novel abstraction-based approach that bypasses over-approximating DNNs in reachability analysis altogether. Specifically, we extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training. The inserted layer ensures that all values represented by the same interval are indistinguishable to the network, both during training and during decision-making. Leveraging this property, we devise the first black-box reachability analysis approach for DNN-controlled systems, in which trained DNNs are queried only as black-box oracles for the actions on abstract states. Our approach is sound, tight, efficient, and agnostic to DNN type and size. Experimental results on a wide range of benchmarks show that DNNs trained with our approach achieve comparable performance, while reachability analysis of the corresponding systems becomes more tractable, with significant improvements in tightness and efficiency over state-of-the-art white-box approaches.
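To make the key idea concrete, below is a minimal sketch of an abstraction layer of the kind the abstract describes. This is an illustrative reconstruction, not the authors' implementation: the use of PyTorch, the fixed grid width `delta`, the names `IntervalAbstraction` and `AbstractPolicy`, and the two-layer policy head are all assumptions. The layer snaps each input dimension onto a fixed interval grid and passes only the interval bounds downstream, so every concrete state inside the same interval is indistinguishable to the network.

```python
# Illustrative sketch (assumed design, not the paper's code) of an
# abstraction layer that maps a real number to an interval for training.
import torch
import torch.nn as nn


class IntervalAbstraction(nn.Module):
    """Maps each scalar x to the bounds of its grid cell [lo, lo + delta)."""

    def __init__(self, delta: float = 0.1):
        super().__init__()
        self.delta = delta  # grid granularity; an illustrative assumption

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        lo = torch.floor(x / self.delta) * self.delta  # lower bound of the cell
        hi = lo + self.delta                           # upper bound of the cell
        # The rest of the network only ever sees (lo, hi), never the
        # concrete value x itself.
        return torch.cat([lo, hi], dim=-1)


class AbstractPolicy(nn.Module):
    """Ordinary MLP policy preceded by the abstraction layer."""

    def __init__(self, state_dim: int, n_actions: int, delta: float = 0.1):
        super().__init__()
        self.abstraction = IntervalAbstraction(delta)
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(self.abstraction(state))


# Two concrete states in the same grid cell yield bit-identical action
# logits, which is what allows reachability analysis to query the trained
# network as a black-box oracle once per abstract state.
policy = AbstractPolicy(state_dim=2, n_actions=3)
s1 = torch.tensor([[0.31, -0.47]])
s2 = torch.tensor([[0.39, -0.42]])
assert torch.equal(policy(s1), policy(s2))
```

Under this design, a reachability analyzer never needs to over-approximate the network's internals: it enumerates the abstract states (interval cells) intersecting the current reachable set, queries the policy once per cell for the action, and propagates the system dynamics accordingly.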
Acknowledgment
The work has been supported by the National Key Project (2020AAA0107800), NSFC Programs (62161146001, 62372176), Huawei Technologies Co., Ltd., the Shanghai International Joint Lab (22510750100), the Shanghai Trusted Industry Internet Software Collaborative Innovation Center, the Engineering and Physical Sciences Research Council (EP/T006579/1), the National Research Foundation (NRF-RSS2022-009), Singapore, and the Shanghai Jiao Tong University Postdoc Scholarship.
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Tian, J., Zhi, D., Liu, S., Wang, P., Katz, G., Zhang, M. (2024). Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training. In: Dimitrova, R., Lahav, O., Wolff, S. (eds) Verification, Model Checking, and Abstract Interpretation. VMCAI 2024. Lecture Notes in Computer Science, vol 14500. Springer, Cham. https://doi.org/10.1007/978-3-031-50521-8_4
DOI: https://doi.org/10.1007/978-3-031-50521-8_4
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-50520-1
Online ISBN: 978-3-031-50521-8
eBook Packages: Computer Science, Computer Science (R0)