Abstract
Neural networks have been shown to be highly successful in a wide range of applications. However, due to their black-box behavior, their applicability can be restricted in safety-critical environments, and additional verification techniques are required. Many state-of-the-art verification approaches use abstract interpretation based on linear overapproximations of the activation functions. Linearly approximating non-linear activation functions clearly incurs a loss of precision. One way to overcome this limitation is to use polynomial approximations. A second way shown to improve the obtained bounds is to optimize the slope of the linear relaxations. Combining these insights, we propose a method that enables a similar parameter optimization for polynomial relaxations. Given arbitrary values for a polynomial’s monomial coefficients, we can obtain valid polynomial overapproximations by appropriate upward or downward shifts. Since any choice of monomial coefficients yields a valid overapproximation in this way, we use gradient-based methods to optimize that choice. Our evaluation on verifying robustness against adversarial patches on the MNIST and CIFAR10 benchmarks shows that we can verify more instances and achieve tighter bounds than state-of-the-art bound propagation methods.
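To make the shifting step concrete, the following Python sketch (our own illustration, not the implementation evaluated in the paper; the helper name `shift_offsets` and the root-based extremum search are assumptions) computes, for arbitrary monomial coefficients, the constant offsets that turn a given polynomial into sound lower and upper relaxations of \({{\,\textrm{ReLU}\,}}\) on an input interval \([l, u]\).

```python
# Minimal sketch: shift an arbitrary polynomial so that it soundly
# under-/overapproximates ReLU on [l, u]. Not the authors' implementation.
import numpy as np
from numpy.polynomial import Polynomial


def shift_offsets(coeffs, l, u):
    """Return (c_lo, c_up) with p(x)+c_lo <= relu(x) <= p(x)+c_up on [l, u]."""
    p = Polynomial(coeffs)

    def extrema(q, a, b):
        # Extrema of polynomial q on [a, b]: endpoints plus real stationary points.
        cands = [a, b]
        for r in q.deriv().roots():
            if abs(r.imag) < 1e-12 and a <= r.real <= b:
                cands.append(r.real)
        vals = q(np.array(cands, dtype=float))
        return vals.min(), vals.max()

    lo, hi = np.inf, -np.inf
    if l < 0:   # segment where relu(x) = 0, so the residual relu(x) - p(x) is -p(x)
        mn, mx = extrema(-p, l, min(u, 0.0))
        lo, hi = min(lo, mn), max(hi, mx)
    if u > 0:   # segment where relu(x) = x, so the residual is x - p(x)
        mn, mx = extrema(Polynomial([0.0, 1.0]) - p, max(l, 0.0), u)
        lo, hi = min(lo, mn), max(hi, mx)
    # p + lo is a sound lower relaxation, p + hi a sound upper relaxation.
    return lo, hi


# Example: an arbitrary quadratic 0.1 + 0.5x + 0.3x^2 on [-1, 2].
c_lo, c_up = shift_offsets([0.1, 0.5, 0.3], -1.0, 2.0)
```

Because any coefficient vector becomes sound after such a shift, the coefficients themselves can be treated as free parameters and optimized with gradient-based methods, as described above.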
Notes
- 1. These relaxations are not described in their publication, but can be found in their tool’s implementation at https://github.com/huanzhang12/CROWN-Robustness-Certification/blob/master/quad_fit.py.
- 2. In theory, implicit differentiation is only applicable if g fulfills certain differentiability and invertibility properties. However, neither we in our own experiments nor the authors of [2] encountered any problems with this formulation.
- 3. No iterations of gradient ascent are required; we only need \(g(x, \boldsymbol{\textbf{a}})\) as a characterization of \(x^*(\boldsymbol{\textbf{a}})\) to apply implicit differentiation (see the sketch after these notes).
- 6. Refinement by polynomials of higher degrees was not possible for \({{\,\textrm{ReLU}\,}}\) networks due to a bug in CORA.
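Notes 2 and 3 refer to differentiating through the maximizer \(x^*(\boldsymbol{\textbf{a}})\) characterized by \(g(x, \boldsymbol{\textbf{a}})\). The sketch below (our own illustration; the concrete objective g and its closed-form maximizer are hypothetical, chosen only to keep the example self-contained) shows how the implicit function theorem yields \(\mathrm{d}x^*/\mathrm{d}\boldsymbol{\textbf{a}}\) from the stationarity condition alone, without differentiating through any solver iterations.

```python
# Hedged sketch of implicit differentiation of x*(a) = argmax_x g(x, a),
# using only the stationarity condition g_x(x*(a), a) = 0.
import jax
import jax.numpy as jnp


def g(x, a):
    # Hypothetical objective; in the paper, g characterizes the maximizer x*(a).
    return -(x - a[0]) ** 2 + a[1] * x


def x_star(a):
    # Closed-form maximizer, used only to make the example self-contained;
    # any (even iterative) solver could be plugged in here.
    return a[0] + a[1] / 2.0


def dxstar_da(a):
    # Implicit function theorem for scalar x:
    # differentiating g_x(x*(a), a) = 0 w.r.t. a gives dx*/da = -g_xa / g_xx.
    x = x_star(a)
    g_x = jax.grad(g, argnums=0)
    g_xx = jax.grad(g_x, argnums=0)(x, a)      # second derivative in x
    g_xa = jax.jacfwd(g_x, argnums=1)(x, a)    # mixed derivative in x and a
    return -g_xa / g_xx


print(dxstar_da(jnp.array([0.5, 1.0])))  # matches jax.jacfwd(x_star) on the same a
```

The result coincides with differentiating the solver output directly, illustrating that only the characterization g, not the solver's iterations, is needed to obtain the gradient.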
References
Bak, S., Liu, C., Johnson, T.T.: The second international verification of neural networks competition (VNN-COMP 2021): summary and results. CoRR abs/2109.00498 (2021). https://arxiv.org/abs/2109.00498
Blondel, M., et al.: Efficient and modular implicit differentiation. In: NeurIPS (2022). http://papers.nips.cc/paper_files/paper/2022/hash/228b9279ecf9bbafe582406850c57115-Abstract-Conference.html
Brix, C., Bak, S., Liu, C., Johnson, T.T.: The fourth international verification of neural networks competition (VNN-COMP 2023): summary and results. CoRR abs/2312.16760 (2023). https://doi.org/10.48550/ARXIV.2312.16760
Brix, C., Müller, M.N., Bak, S., Johnson, T.T., Liu, C.: First three years of the international verification of neural networks competition (VNN-COMP). Int. J. Softw. Tools Technol. Transf. 25(3), 329–339 (2023)
Cheng, C.-H., Nührenberg, G., Ruess, H.: Maximum resilience of artificial neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) ATVA 2017. LNCS, vol. 10482, pp. 251–268. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_18
Fatnassi, W., Khedr, H., Yamamoto, V., Shoukry, Y.: BERN-NN: tight bound propagation for neural networks using Bernstein polynomial interval arithmetic. CoRR abs/2211.14438 (2022). https://doi.org/10.48550/arXiv.2211.14438
Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016). http://www.deeplearningbook.org
Henriksen, P., Lomuscio, A.R.: Efficient neural network verification via adaptive refinement and adversarial search. In: ECAI. Frontiers in Artificial Intelligence and Applications, vol. 325, pp. 2513–2520. IOS Press (2020)
Huang, C., Fan, J., Chen, X., Li, W., Zhu, Q.: POLAR: a polynomial arithmetic framework for verifying neural-network controlled systems. In: Bouajjani, A., Holík, L., Wu, Z. (eds.) ATVA 2022. LNCS, vol. 13505, pp. 414–430. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-19992-9_27
Ivanov, R., Carpenter, T., Weimer, J., Alur, R., Pappas, G., Lee, I.: Verisig 2.0: verification of neural network controllers using Taylor model preconditioning. In: Silva, A., Leino, K.R.M. (eds.) CAV 2021. LNCS, vol. 12759, pp. 249–262. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81685-8_11
Ivanov, R., Weimer, J., Alur, R., Pappas, G.J., Lee, I.: Verisig: verifying safety properties of hybrid systems with neural network controllers. In: Ozay, N., Prabhakar, P. (eds.) Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, HSCC 2019, Montreal, QC, Canada, 16–18 April 2019, pp. 169–178. ACM (2019). https://doi.org/10.1145/3302504.3311806
Julian, K.D., Lopez, J., Brush, J.S., Owen, M.P., Kochenderfer, M.J.: Policy compression for aircraft collision avoidance systems. In: 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), pp. 1–10 (2016). https://doi.org/10.1109/DASC.2016.7778091
Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5
Kern, P., Büning, M.K., Sinz, C.: Optimized symbolic interval propagation for neural network verification. In: 1st Workshop on Formal Verification of Machine Learning (WFVML 2022) colocated with ICML 2022: International Conference on Machine Learning (2022)
Kern, P., Sinz, C.: Appendix to: Abstract interpretation of ReLU neural networks with optimizable polynomial relaxations (2024)
Kochdumper, N., Althoff, M.: Sparse polynomial zonotopes: a novel set representation for reachability analysis. IEEE Trans. Autom. Control 66(9), 4043–4058 (2021). https://doi.org/10.1109/TAC.2020.3024348
Kochdumper, N., Schilling, C., Althoff, M., Bak, S.: Open- and closed-loop neural network verification using polynomial zonotopes. In: Rozier, K.Y., Chaudhuri, S. (eds.) NFM 2023. LNCS, vol. 13903, pp. 16–36. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-33170-1_2
Ladner, T., Althoff, M.: Automatic abstraction refinement in neural network verification using sensitivity analysis. In: Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control, HSCC 2023, San Antonio, TX, USA, 9–12 May 2023, pp. 18:1–18:13. ACM (2023). https://doi.org/10.1145/3575870.3587129
Lan, J., Zheng, Y., Lomuscio, A.: Tight neural network verification via semidefinite relaxations and linear reformulations. In: Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, 22 February–1 March 2022, pp. 7272–7280. AAAI Press (2022). https://ojs.aaai.org/index.php/AAAI/article/view/20689
Müller, M.N., Brix, C., Bak, S., Liu, C., Johnson, T.T.: The third international verification of neural networks competition (VNN-COMP 2022): summary and results. CoRR abs/2212.10376 (2022). https://doi.org/10.48550/ARXIV.2212.10376
Pan, Y., et al.: Imitation learning for agile autonomous driving. Int. J. Robotics Res. 39(2–3) (2020). https://doi.org/10.1177/0278364919880273
Raghunathan, A., Steinhardt, J., Liang, P.: Semidefinite relaxations for certifying robustness to adversarial examples. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, Montréal, Canada, 3–8 December 2018, pp. 10900–10910 (2018). https://proceedings.neurips.cc/paper/2018/hash/29c0605a3bab4229e46723f89cf59d83-Abstract.html
Singh, G., Gehr, T., Mirman, M., Püschel, M., Vechev, M.T.: Fast and effective robustness certification. In: NeurIPS, pp. 10825–10836 (2018)
Singh, G., Gehr, T., Püschel, M., Vechev, M.T.: An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 3(POPL), 41:1–41:30 (2019)
Szegedy, C., et al.: Intriguing properties of neural networks. In: Bengio, Y., LeCun, Y. (eds.) 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, 14–16 April 2014, Conference Track Proceedings (2014). http://arxiv.org/abs/1312.6199
Tjeng, V., Xiao, K.Y., Tedrake, R.: Evaluating robustness of neural networks with mixed integer programming. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. OpenReview.net (2019). https://openreview.net/forum?id=HyGIdiRqtm
Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Efficient formal safety analysis of neural networks. In: NeurIPS, pp. 6369–6379 (2018)
Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Formal security analysis of neural networks using symbolic intervals. In: USENIX Security Symposium, pp. 1599–1614. USENIX Association (2018)
Xu, K., et al.: Fast and complete: enabling complete neural network verification with rapid and massively parallel incomplete verifiers. In: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, 3–7 May 2021. OpenReview.net (2021). https://openreview.net/forum?id=nVZtXBI6LNn
Zhang, H., Weng, T., Chen, P., Hsieh, C., Daniel, L.: Efficient neural network robustness certification with general activation functions. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, Montréal, Canada, 3–8 December 2018, pp. 4944–4953 (2018). https://proceedings.neurips.cc/paper/2018/hash/d04863f100d59b3eb688a11f95b0ae60-Abstract.html
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Kern, P., Sinz, C. (2025). Abstract Interpretation of ReLU Neural Networks with Optimizable Polynomial Relaxations. In: Giacobazzi, R., Gorla, A. (eds) Static Analysis. SAS 2024. Lecture Notes in Computer Science, vol 14995. Springer, Cham. https://doi.org/10.1007/978-3-031-74776-2_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-74775-5
Online ISBN: 978-3-031-74776-2