Abstract
As neural networks have taken on a critical role in real-world applications, formal verification is urgently needed to guarantee their safety properties. However, it remains challenging to balance the trade-off between precision and efficiency in abstract-interpretation-based verification methods. In this paper, we propose an abstract refinement process that leverages convex hull techniques to improve analysis efficiency. Specifically, we introduce the double description method in the convex polytope domain to detect and eliminate multiple spurious adversarial labels simultaneously. We also combine a new activation relaxation technique with the iterative abstract refinement method to compensate for the precision loss incurred during abstract interpretation. We have implemented our proposal in a verification framework named ARENA and assessed its effectiveness through a series of experiments. These experiments show that ARENA achieves significantly better verification precision than the existing abstract-refinement-based tool DeepSRGR. It also identifies falsification by detecting adversarial examples, with reasonable execution efficiency. Lastly, it verifies more images than the state-of-the-art verifier PRIMA.
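To make the precision/efficiency trade-off concrete, the following is a minimal illustrative sketch (not ARENA's actual algorithm) of abstract interpretation with the simple interval domain: a box of inputs is propagated through one affine layer and a ReLU, and the resulting bounds over-approximate the true reachable set — the loss that refinement steps aim to recover.

```python
# Illustrative sketch only: interval (box) abstract interpretation of a
# tiny ReLU layer. This uses the standard interval domain, not the convex
# polytope domain with the double description method described in the paper.
import numpy as np

def interval_affine(l, u, W, b):
    """Propagate the input box [l, u] through y = W x + b.

    Splitting W into its positive and negative parts gives the tightest
    per-output bounds for a box input.
    """
    W_pos = np.maximum(W, 0.0)
    W_neg = np.minimum(W, 0.0)
    low = W_pos @ l + W_neg @ u + b
    high = W_pos @ u + W_neg @ l + b
    return low, high

def interval_relu(l, u):
    """ReLU on an interval: [max(l, 0), max(u, 0)]."""
    return np.maximum(l, 0.0), np.maximum(u, 0.0)

# Hypothetical 2x2 layer on the input box [-1, 1]^2.
W = np.array([[1.0, -1.0],
              [1.0,  1.0]])
b = np.zeros(2)
l0, u0 = np.array([-1.0, -1.0]), np.array([1.0, 1.0])

l1, u1 = interval_affine(l0, u0, W, b)  # pre-activation bounds: [-2, 2] each
l2, u2 = interval_relu(l1, u1)          # post-activation bounds: [0, 2] each
```

The box [0, 2] x [0, 2] contains points (e.g. both outputs at 2 simultaneously) that no actual input can produce, because the interval domain drops the correlation between the two outputs; richer domains and refinement loops exist precisely to prune such spurious regions.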
Notes
- 1.
An unstable ReLU neuron is a ReLU neuron whose input range spans both negative and positive values (like \(y_2, y_3\)).
- 2.
- 3.
- 4.
- 5.
A trained defense is a defense method against adversarial examples, aimed at improving the robustness of the network.
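The classification behind note 1 can be sketched in a few lines. This is a hedged, generic helper (the function name is my own, not from the paper): a ReLU neuron is unstable exactly when its pre-activation bounds straddle zero, which is the case where a convex relaxation is needed.

```python
# Sketch: classify a ReLU neuron by its pre-activation bounds [lower, upper],
# following the notion of an "unstable" neuron in note 1.
def relu_status(lower, upper):
    if lower >= 0.0:
        return "active"    # ReLU acts as the identity on the whole range
    if upper <= 0.0:
        return "inactive"  # ReLU outputs 0 on the whole range
    return "unstable"      # input can be negative or positive; relaxation needed

# Example: bounds straddling zero make the neuron unstable.
print(relu_status(-0.5, 1.5))  # unstable
```

Only unstable neurons introduce approximation error, which is why verifiers track them specially.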
References
Ren, K., Zheng, T., Qin, Z., Liu, X.: Adversarial attacks and defenses in deep learning. Engineering 6(3), 346–360 (2020)
Yuan, X., He, P., Zhu, Q., Li, X.: Adversarial examples: attacks and defenses for deep learning. IEEE Trans. Neural Netw. Learn. Syst. 30(9), 2805–2824 (2019)
Tjeng, V., Xiao, K., Tedrake, R.: Evaluating robustness of neural networks with mixed integer programming. In: International Conference on Learning Representations (ICLR). OpenReview.net (2019)
Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5
Pulina, L., Tacchella, A.: An abstraction-refinement approach to verification of artificial neural networks. In: Touili, T., Cook, B., Jackson, P. (eds.) CAV 2010. LNCS, vol. 6174, pp. 243–257. Springer, Heidelberg (2010). https://doi.org/10.1007/978-3-642-14295-6_24
Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., Vechev, M.: AI2: safety and robustness certification of neural networks with abstract interpretation. In: IEEE Symposium on Security and Privacy (SP), pp. 3–18. IEEE Computer Society (2018)
Singh, G., Gehr, T., Püschel, M., Vechev, M.T.: An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 3(POPL), 41:1–41:30 (2019)
Singh, G., Ganvir, R., Püschel, M., Vechev, M.T.: Beyond the single neuron convex barrier for neural network certification. In: Wallach, H.M., Larochelle, H., Beygelzimer, A., d’Alché-Buc, F., Fox, E.B., Garnett, R., (eds.) Advances in Neural Information Processing Systems, vol. 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8–14, 2019, Vancouver, BC, Canada, pp. 15072–15083 (2019)
Müller, M.N., Makarchuk, G., Singh, G., Püschel, M., Vechev, M.T.: PRIMA: general and precise neural network certification via scalable convex hull approximations. Proc. ACM Program. Lang. 6(POPL), 1–33 (2022)
Yang, P.: Improving neural network verification through spurious region guided refinement. In: Groote, J.F., Larsen, K.G. (eds.) TACAS 2021. LNCS, vol. 12651, pp. 389–408. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72016-2_21
Clarke, E., Grumberg, O., Jha, S., Lu, Y., Veith, H.: Counterexample-guided abstraction refinement. In: Emerson, E.A., Sistla, A.P. (eds.) CAV 2000. LNCS, vol. 1855, pp. 154–169. Springer, Heidelberg (2000). https://doi.org/10.1007/10722167_15
Fukuda, K., Prodon, A.: Double description method revisited. In: Deza, M., Euler, R., Manoussakis, I. (eds.) CCS 1995. LNCS, vol. 1120, pp. 91–111. Springer, Heidelberg (1996). https://doi.org/10.1007/3-540-61576-8_77
CMU. Alpha-Beta-CROWN: a fast and scalable neural network verifier with efficient bound propagation (2022). https://github.com/huanzhang12/alpha-beta-CROWN. Accessed 11 Aug 2022
McMullen, P.: Convex polytopes, by Branko Grünbaum, second edition (first edition (1967) written with the cooperation of V. L. Klee, M. Perles and G. C. Shephard). Comb. Probab. Comput. 14(4), 623–626 (2005)
ETH. ETH Robustness Analyzer for Neural Networks (ERAN) (2022). https://github.com/eth-sri/eran. Accessed 11 Aug 2022
3rd International Verification of Neural Networks Competition (VNN-COMP’22) (2022). https://sites.google.com/view/vnn2022. Accessed 11 Aug 2022
LeCun, Y., Cortes, C.: MNIST handwritten digit database (2010)
Krizhevsky, A., Nair, V., Hinton, G.: CIFAR-10 (Canadian Institute for Advanced Research)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: IEEE Symposium on Security and Privacy (SP), pp. 39–57 (2017)
Mirman, M., Gehr, T., Vechev, M.T.: Differentiable abstract interpretation for provably robust neural networks. In: International Conference on Machine Learning (ICML), pp. 3575–3583 (2018)
Botoeva, E., Kouvaros, P., Kronqvist, J., Lomuscio, A., Misener, R.: Efficient verification of ReLU-based neural networks via dependency analysis. In: The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, 7–12 February 2020, pp. 3291–3299. AAAI Press (2020)
Wang, S.: Beta-CROWN: efficient bound propagation with per-neuron split constraints for complete and incomplete neural network verification. CoRR, abs/2103.06624 (2021)
Zhang, H., Weng, T.W., Chen, P.Y., Hsieh, C.J., Daniel, L.: Efficient neural network robustness certification with general activation functions. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R., (eds.) Advances in Neural Information Processing Systems Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, 3–8 December 2018, Montréal, Canada, vol. 31, pp. 4944–4953 (2018)
Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Formal security analysis of neural networks using symbolic intervals. In: Enck, W., Felt, A.P., (eds.) 27th USENIX Security Symposium, USENIX Security 2018, Baltimore, MD, USA, 15–17 August 2018, pp. 1599–1614. USENIX Association (2018)
Acknowledgement
This research is supported by a Singapore Ministry of Education Academic Research Fund Tier 1 grant T1-251RES2103. The second author is supported by a Singapore National Research Foundation grant R-252-000-B90-279 for the Singapore Blockchain Innovation Programme project. We are grateful to Julian Rüth (saraedum) and Komei Fukuda for their prompt answers to our queries on cddlib, and we appreciate Mark Niklas Müller's assistance with our queries on PRIMA.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Zhong, Y., Ta, QT., Khoo, SC. (2023). ARENA: Enhancing Abstract Refinement for Neural Network Verification. In: Dragoi, C., Emmi, M., Wang, J. (eds) Verification, Model Checking, and Abstract Interpretation. VMCAI 2023. Lecture Notes in Computer Science, vol 13881. Springer, Cham. https://doi.org/10.1007/978-3-031-24950-1_17
DOI: https://doi.org/10.1007/978-3-031-24950-1_17
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-24949-5
Online ISBN: 978-3-031-24950-1