ABSTRACT
Most research on deep neural network (DNN) verification focuses on qualitative verification, which answers whether or not a DNN violates a safety or robustness property. This paper proposes an approach that turns qualitative verification into quantitative verification for neural networks. The resulting quantitative method answers not only YES/NO questions but also computes the probability that a property is violated. To do so, we introduce the concept of a probabilistic star (ProbStar for short), a new variant of the well-known star set in which the predicate variables follow a Gaussian distribution, and we propose an approach to compute the probability of a probabilistic star in high-dimensional space. Unlike existing works that deal with constrained input sets, our work models the input set as a truncated multivariate normal (Gaussian) distribution, i.e., in addition to the constraints on the input variables, the input set carries the probability that those constraints are satisfied. The input distribution is represented as a probabilistic star set and propagated through the network to construct an output reachable set consisting of multiple ProbStars, which are then used to verify the safety or robustness properties of the network. If a property is violated, the violation probability can be computed precisely by an exact verification algorithm or approximately by an overapproximate verification algorithm. The proposed approach is implemented in a tool named StarV and is evaluated on the well-known ACAS Xu networks and a rocket-landing benchmark.
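To make the abstract's core idea concrete, the sketch below shows a minimal, hypothetical ProbStar: states s = c + V·a with predicate vector a drawn from a Gaussian and restricted by linear constraints C·a ≤ d. An affine layer updates only the center and basis; intersecting the output with an unsafe half-space adds a constraint on a; the violation probability is the Gaussian mass satisfying the constraints, estimated here by plain Monte Carlo (the class and method names are illustrative assumptions, not StarV's API, and the paper's approach uses dedicated methods for truncated-Gaussian probabilities rather than naive sampling).

```python
import numpy as np

rng = np.random.default_rng(0)

class ProbStar:
    """Hypothetical minimal ProbStar: s = c + V @ a, with predicate
    a ~ N(mu, Sigma) restricted by linear constraints C @ a <= d."""
    def __init__(self, c, V, C, d, mu, Sigma):
        self.c, self.V, self.C, self.d = c, V, C, d
        self.mu, self.Sigma = mu, Sigma

    def affine(self, W, b):
        # An affine layer s -> W s + b acts on the center and basis;
        # the predicate distribution and constraints are unchanged.
        return ProbStar(W @ self.c + b, W @ self.V, self.C, self.d,
                        self.mu, self.Sigma)

    def add_constraint(self, g, h):
        # Intersect with the half-space g.s <= h, which becomes the
        # predicate constraint (g @ V) a <= h - g.c.
        C = np.vstack([self.C, g @ self.V])
        d = np.append(self.d, h - g @ self.c)
        return ProbStar(self.c, self.V, C, d, self.mu, self.Sigma)

    def prob(self, n=200_000):
        # Monte Carlo estimate of P(C a <= d) for a ~ N(mu, Sigma);
        # a stand-in for the exact high-dimensional computation.
        a = rng.multivariate_normal(self.mu, self.Sigma, size=n)
        return float(np.mean(np.all(a @ self.C.T <= self.d, axis=1)))

# Input set: a ~ N(0, I_2), no constraints, state s = a.
star = ProbStar(np.zeros(2), np.eye(2), np.empty((0, 2)), np.empty(0),
                np.zeros(2), np.eye(2))
# One affine "layer": s = a1 + a2, so s ~ N(0, 2).
layer = star.affine(np.array([[1.0, 1.0]]), np.array([0.0]))
# Unsafe region s >= 1, encoded as -s <= -1.
violation = layer.add_constraint(np.array([-1.0]), -1.0)
p = violation.prob()  # analytically 1 - Phi(1/sqrt(2)) ~ 0.24
```

A ReLU layer would additionally split each ProbStar along the hyperplanes where a neuron's input changes sign, producing the multiple output ProbStars the abstract mentions.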
Index Terms
- Quantitative Verification for Neural Networks using ProbStars