
Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training

  • Conference paper
  • First Online:
Verification, Model Checking, and Abstract Interpretation (VMCAI 2024)

Abstract

The intrinsic complexity of deep neural networks (DNNs) makes it challenging to verify not only the networks themselves but also the DNN-controlled systems that host them. Reachability analysis of these systems faces the same challenge. Existing approaches rely on over-approximating DNNs with simpler polynomial models; however, they suffer from low efficiency and large overestimation, and are restricted to specific types of DNNs. This paper presents a novel abstraction-based approach that bypasses the crux of over-approximating DNNs in reachability analysis. Specifically, we extend conventional DNNs by inserting an additional abstraction layer, which abstracts a real number to an interval for training. The inserted abstraction layer ensures that the values represented by an interval are indistinguishable to the network, both during training and during decision-making. Leveraging this, we devise the first black-box reachability analysis approach for DNN-controlled systems, in which trained DNNs are queried only as black-box oracles for the actions on abstract states. Our approach is sound, tight, efficient, and agnostic to DNN type and size. Experimental results on a wide range of benchmarks show that DNNs trained with our approach exhibit comparable performance, while the reachability analysis of the corresponding systems becomes far more tractable, with significant improvements in tightness and efficiency over state-of-the-art white-box approaches.
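To make the abstraction-layer idea concrete, the sketch below shows one way such a layer could be realized, assuming a uniform interval grid and a PyTorch discrete-action policy. This is a minimal illustration, not the authors' code: the names AbstractionLayer, AbstractPolicy, and grid_width are hypothetical, and the paper's actual abstraction scheme, architecture, and training pipeline may differ. The property it reproduces is that every concrete state falling into the same interval cell is mapped to the same network input, so the trained network assigns one action per abstract state and can be queried as a black-box oracle during reachability analysis.

```python
# Illustrative sketch only (not the authors' implementation): a uniform-grid
# interval abstraction layer prepended to a conventional policy network.
import torch
import torch.nn as nn

class AbstractionLayer(nn.Module):
    """Maps each concrete state to the lower/upper bounds of its grid cell,
    so every state inside the same cell yields an identical network input."""
    def __init__(self, grid_width: float):
        super().__init__()
        self.grid_width = grid_width

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        lower = torch.floor(state / self.grid_width) * self.grid_width
        upper = lower + self.grid_width
        return torch.cat([lower, upper], dim=-1)  # interval [lower, upper]

class AbstractPolicy(nn.Module):
    """Conventional MLP policy preceded by the abstraction layer."""
    def __init__(self, state_dim: int, num_actions: int, grid_width: float):
        super().__init__()
        self.abstraction = AbstractionLayer(grid_width)
        self.net = nn.Sequential(
            nn.Linear(2 * state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(self.abstraction(state))

# Because the action is constant on each grid cell, a reachability analyzer
# needs only one forward query per abstract state to learn the action applied
# on the whole cell, independent of the network's internal structure or size.
policy = AbstractPolicy(state_dim=2, num_actions=3, grid_width=0.1)
representative = torch.tensor([[0.33, -0.17]])      # any point of the cell
action = policy(representative).argmax(dim=-1)      # action shared by the cell
```

Under these assumptions, the analyzer never inspects weights or activations; it propagates abstract states through the plant dynamics and asks the network, as an oracle, which action governs each cell.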


Notes

  1. Continuous time is uniformly discretized into time steps.


Acknowledgment

The work has been supported by the National Key Project (2020AAA0107800), NSFC Programs (62161146001, 62372176), Huawei Technologies Co., Ltd., the Shanghai International Joint Lab (22510750100), the Shanghai Trusted Industry Internet Software Collaborative Innovation Center, the Engineering and Physical Sciences Research Council (EP/T006579/1), the National Research Foundation (NRF-RSS2022-009), Singapore, and the Shanghai Jiao Tong University Postdoc Scholarship.

Author information


Corresponding author

Correspondence to Min Zhang.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Tian, J., Zhi, D., Liu, S., Wang, P., Katz, G., Zhang, M. (2024). Taming Reachability Analysis of DNN-Controlled Systems via Abstraction-Based Training. In: Dimitrova, R., Lahav, O., Wolff, S. (eds) Verification, Model Checking, and Abstract Interpretation. VMCAI 2024. Lecture Notes in Computer Science, vol 14500. Springer, Cham. https://doi.org/10.1007/978-3-031-50521-8_4


  • DOI: https://doi.org/10.1007/978-3-031-50521-8_4

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-50520-1

  • Online ISBN: 978-3-031-50521-8

  • eBook Packages: Computer Science, Computer Science (R0)
