Abstract
Robustness verification of deep neural networks (DNNs) is becoming increasingly crucial for their potential use in many safety-critical applications. Essentially, the problem of robustness verification can be encoded as a typical Mixed-Integer Linear Programming (MILP) problem, which can be solved via branch-and-bound strategies. However, such methods offer only limited scalability and remain challenging to apply to large-scale neural networks. In this paper, we present a novel framework to speed up the solving of the MILP problems generated from the robustness verification of deep neural networks. It employs a semi-planet relaxation to abstract ReLU activation functions, together with an RNN-based strategy for selecting which relaxed ReLU neurons to tighten. We have developed a prototype tool, L2T, and conducted comparison experiments with state-of-the-art verifiers on a set of large-scale benchmarks. The experiments show that our framework is both efficient and scalable, even when applied to verify the robustness of large-scale neural networks.
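The semi-planet relaxation mentioned in the abstract builds on the standard planet (triangle) relaxation of an unstable ReLU neuron: given pre-activation bounds l < 0 < u, the nonlinearity y = max(0, x) is over-approximated by the linear constraints y >= 0, y >= x, and y <= u(x - l)/(u - l). The following minimal sketch (not the paper's implementation; function names are illustrative) shows the relaxation and checks its soundness by sampling:

```python
def relu(x):
    """Exact ReLU activation."""
    return max(0.0, x)

def planet_upper(x, l, u):
    """Upper facet of the triangle ("planet") relaxation of ReLU,
    valid for an unstable neuron with pre-activation bounds l < 0 < u."""
    return u * (x - l) / (u - l)

# Sanity check: on [l, u], the exact ReLU output always lies inside
# the triangle defined by y >= 0, y >= x, and y <= planet_upper(x).
l, u = -2.0, 3.0
for i in range(101):
    x = l + (u - l) * i / 100
    y = relu(x)
    assert 0.0 <= y and y >= x - 1e-9          # lower facets hold
    assert y <= planet_upper(x, l, u) + 1e-9   # upper facet holds
```

Tightening a relaxed neuron, as done during branch-and-bound, amounts to replacing this triangle by the two exact branches x <= 0 (y = 0) and x >= 0 (y = x), each encoded with a binary variable in the MILP; the framework's selection strategy decides which neurons are worth this extra integer variable.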
Acknowledgements
This work is supported by the National Natural Science Foundation of China under Grants 12171159 and 61902325, and by the Shanghai Trusted Industry Internet Software Collaborative Innovation Center.
Copyright information
© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Xue, H., Zeng, X., Lin, W., Yang, Z., Peng, C., Zeng, Z. (2023). An RNN-Based Framework for the MILP Problem in Robustness Verification of Neural Networks. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13841. Springer, Cham. https://doi.org/10.1007/978-3-031-26319-4_34
DOI: https://doi.org/10.1007/978-3-031-26319-4_34
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-26318-7
Online ISBN: 978-3-031-26319-4
eBook Packages: Computer Science (R0)