DOI: 10.1145/3575870.3587112
research-article
Results Reproduced / v1.1

Quantitative Verification for Neural Networks using ProbStars

Published: 09 May 2023

ABSTRACT

Most deep neural network (DNN) verification research focuses on qualitative verification, which answers whether or not a DNN violates a safety/robustness property. This paper proposes an approach that converts qualitative verification into quantitative verification for neural networks. The resulting quantitative verification method not only answers YES or NO questions but also computes the probability of a property being violated. To do so, we introduce the concept of a probabilistic star (ProbStar for short), a new variant of the well-known star set in which the predicate variables follow a Gaussian distribution, and propose an approach to compute the probability of a probabilistic star in high-dimensional space. Unlike existing works that deal with constrained input sets, our work treats the input set as a truncated multivariate normal (Gaussian) distribution, i.e., besides the constraints on the input variables, the input set carries the probability of those constraints being satisfied. The input distribution is represented as a probabilistic star set and is propagated through the network to construct the output reachable set, which consists of multiple ProbStars and is used to verify the safety or robustness properties of the network. In case a property is violated, the violation probability can be computed precisely by an exact verification algorithm or approximately by an overapproximate verification algorithm. The proposed approach is implemented in a tool named StarV and is evaluated on the well-known ACASXu networks and a rocket landing benchmark.
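To make the core idea concrete, the sketch below models a ProbStar-like object in plain NumPy: states are x = c + V·a, the predicate variables a follow a multivariate normal truncated by linear constraints C·a ≤ d, and the set's probability is P(C·a ≤ d). This is a hypothetical illustration, not StarV's API: the class name, fields, and methods are invented here, and the probability is estimated by plain Monte Carlo sampling, standing in for the exact high-dimensional computation the paper proposes.

```python
import numpy as np

class ProbStar:
    """Sketch of a probabilistic star set (hypothetical API, not StarV's).

    States are x = c + V @ a, where the predicate variables
    a ~ N(mu, Sigma) are restricted to the polytope C @ a <= d.
    """
    def __init__(self, c, V, C, d, mu, Sigma):
        self.c, self.V = np.asarray(c, float), np.asarray(V, float)
        self.C, self.d = np.asarray(C, float), np.asarray(d, float)
        self.mu, self.Sigma = np.asarray(mu, float), np.asarray(Sigma, float)

    def affine_map(self, W, b):
        # An affine layer x -> W x + b maps a star to a star:
        # only the center and basis change; the predicate (and hence
        # the probability) is untouched.
        return ProbStar(W @ self.c + b, W @ self.V,
                        self.C, self.d, self.mu, self.Sigma)

    def prob(self, n_samples=200_000, rng=None):
        # Monte Carlo estimate of P(C a <= d) for a ~ N(mu, Sigma).
        # The paper computes this quantity exactly for truncated
        # Gaussians; sampling is used here only for illustration.
        rng = rng or np.random.default_rng(0)
        a = rng.multivariate_normal(self.mu, self.Sigma, size=n_samples)
        return float(np.mean(np.all(a @ self.C.T <= self.d, axis=1)))

# 2-D example: a standard-normal predicate truncated to the half-plane
# a0 <= 0, whose probability is exactly 0.5 by symmetry.
ps = ProbStar(c=[0.0, 0.0], V=np.eye(2),
              C=np.array([[1.0, 0.0]]), d=np.array([0.0]),
              mu=np.zeros(2), Sigma=np.eye(2))
print(round(ps.prob(), 2))  # ≈ 0.5

# Propagating through an affine layer yields another ProbStar
# with the same probability attached.
ps2 = ps.affine_map(np.array([[2.0, 0.0], [0.0, 1.0]]), np.array([1.0, 0.0]))
```

ReLU layers (not shown) would additionally split each ProbStar into pieces by intersecting the predicate polytope with sign constraints, which is where the output reachable set comes to contain multiple ProbStars.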


    • Published in

      HSCC '23: Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control
      May 2023
      239 pages
      ISBN:9798400700330
      DOI:10.1145/3575870

      Copyright © 2023 ACM


      Publisher

      Association for Computing Machinery

      New York, NY, United States


      Qualifiers

      • research-article
      • Research
      • Refereed limited

      Acceptance Rates

Overall Acceptance Rate: 153 of 373 submissions, 41%
