
Abstract Interpretation of ReLU Neural Networks with Optimizable Polynomial Relaxations

  • Conference paper
Static Analysis (SAS 2024)

Part of the book series: Lecture Notes in Computer Science ((LNCS,volume 14995))


Abstract

Neural networks have been shown to be highly successful in a wide range of applications. However, due to their black-box behavior, their applicability can be restricted in safety-critical environments, and additional verification techniques are required. Many state-of-the-art verification approaches use abstract interpretation based on linear overapproximation of the activation functions. Linearly approximating non-linear activation functions clearly incurs a loss of precision. One way to overcome this limitation is the utilization of polynomial approximations. A second way shown to improve the obtained bounds is to optimize the slopes of the linear relaxations. Combining these insights, we propose a method that enables a similar parameter optimization for polynomial relaxations. Given arbitrary values for a polynomial’s monomial coefficients, we can obtain valid polynomial overapproximations by appropriate upward or downward shifts. Since any choice of monomial coefficients can be turned into a valid overapproximation in this way, we use gradient-based methods to optimize that choice. Our evaluation on verifying robustness against adversarial patches on the MNIST and CIFAR10 benchmarks shows that we can verify more instances and achieve tighter bounds than state-of-the-art bound propagation methods.
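
The shift construction described in the abstract can be illustrated with a small sketch. The following Python/PyTorch example (not the authors' implementation; all function names are illustrative) shows, for a single ReLU neuron with pre-activation bounds [l, u], how an arbitrary quadratic p(x) = a2 x^2 + a1 x + a0 becomes a valid upper relaxation after shifting its constant term by max over [l, u] of (ReLU(x) - p(x)), and how the coefficients can then be tuned by gradient descent. The tightness objective used here (the area under the relaxation) is a simplification; the paper optimizes the resulting network output bounds instead.

```python
# Illustrative sketch only (assumed names, not the authors' code):
# valid quadratic upper relaxation of ReLU on [l, u] via a constant shift,
# with gradient-based tuning of the monomial coefficients.
import torch

def max_quadratic(c2, c1, c0, lo, hi):
    """Exact maximum of c2*x^2 + c1*x + c0 over [lo, hi] (endpoints + stationary point)."""
    candidates = [torch.as_tensor(lo), torch.as_tensor(hi)]
    if abs(float(c2)) > 1e-12:
        candidates.append(torch.clamp(-c1 / (2 * c2), lo, hi))
    values = torch.stack([c2 * x ** 2 + c1 * x + c0 for x in candidates])
    return values.max()

def shifted_upper_relaxation(a, l, u):
    """Shift p(x) = a[0]*x^2 + a[1]*x + a[2] so that it upper-bounds ReLU on [l, u].
    The shift is max_{x in [l,u]} (ReLU(x) - p(x)); it may be negative, in which
    case the relaxation is moved downward and becomes tighter."""
    a2, a1, a0 = a
    gap = torch.tensor(float("-inf"))
    if l < 0:  # inactive part: ReLU(x) - p(x) = -p(x)
        gap = torch.maximum(gap, max_quadratic(-a2, -a1, -a0, l, min(u, 0.0)))
    if u > 0:  # active part: ReLU(x) - p(x) = -a2*x^2 + (1 - a1)*x - a0
        gap = torch.maximum(gap, max_quadratic(-a2, 1.0 - a1, -a0, max(l, 0.0), u))
    return a2, a1, a0 + gap

# Toy coefficient optimization: minimize the integral of the shifted relaxation
# over [l, u] as a simple tightness proxy (the paper optimizes the network's
# output bounds end to end instead).
l, u = -1.0, 2.0
a = torch.tensor([0.3, 0.5, 0.0], requires_grad=True)
opt = torch.optim.Adam([a], lr=1e-2)
for _ in range(200):
    a2, a1, a0 = shifted_upper_relaxation(a, l, u)
    area = a2 * (u**3 - l**3) / 3 + a1 * (u**2 - l**2) / 2 + a0 * (u - l)
    opt.zero_grad()
    area.backward()
    opt.step()
```

Because the shift is valid for any coefficient values, the gradient step never leaves the set of sound relaxations, which is what makes the unconstrained optimization in the paper possible.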


Notes

  1. These relaxations are not described in their publication, but can be found in their tool’s implementation at https://github.com/huanzhang12/CROWN-Robustness-Certification/blob/master/quad_fit.py.

  2. In theory, implicit differentiation is only applicable if g fulfills certain differentiability and invertibility properties. However, neither we, in our own experiments, nor [2] encountered problems with this formulation.

  3. No iterations of gradient ascent are required; we only need g(x, a) as a characterization of x*(a) to apply implicit differentiation (a generic sketch of this step follows these notes).

  4. Available at https://github.com/Verified-Intelligence/alpha-beta-CROWN.

  5. https://github.com/TUMcps/CORA/tree/master.

  6. Refinement by polynomials of higher degrees was not possible for ReLU networks due to a bug in CORA.

  7. https://github.com/stanleybak/vnncomp2021.

  8. https://github.com/eth-sri/eran.
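
Notes 2 and 3 concern differentiating through the maximizer x*(a) of the shift computation without unrolling an inner optimizer. The snippet below is a generic, hedged illustration of that implicit-differentiation step (the function g here is a toy stand-in, not the paper's): if x*(a) is characterized by a condition g(x*(a), a) = 0, the implicit function theorem yields dx*/da = -(∂g/∂x)^(-1) ∂g/∂a, which can be evaluated with automatic differentiation from g alone.

```python
# Generic illustration of implicit differentiation (toy g, not the paper's):
# x*(a) is defined implicitly by g(x*(a), a) = 0, and its derivative follows
# from the implicit function theorem: dx*/da = -(dg/dx)^(-1) * (dg/da).
import torch

def g(x, a):
    # Toy stationarity condition of f(x, a) = a*x - x**2 / 2, whose maximizer is x*(a) = a.
    return a - x

def implicit_dxstar_da(x_star, a):
    x = x_star.clone().requires_grad_(True)
    a = a.clone().requires_grad_(True)
    dg_dx, dg_da = torch.autograd.grad(g(x, a), (x, a))
    return -dg_da / dg_dx

a = torch.tensor(1.5)
x_star = a.clone()                      # maximizer known in closed form for the toy f
print(implicit_dxstar_da(x_star, a))    # tensor(1.), matching d/da of x*(a) = a
```

As Note 3 states, no gradient-ascent iterations need to be differentiated through: knowing the condition that characterizes x*(a) is enough.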

References

  1. Bak, S., Liu, C., Johnson, T.T.: The second international verification of neural networks competition (VNN-COMP 2021): summary and results. CoRR abs/2109.00498 (2021). https://arxiv.org/abs/2109.00498

  2. Blondel, M., et al.: Efficient and modular implicit differentiation. In: NeurIPS (2022). http://papers.nips.cc/paper_files/paper/2022/hash/228b9279ecf9bbafe582406850c57115-Abstract-Conference.html

  3. Brix, C., Bak, S., Liu, C., Johnson, T.T.: The fourth international verification of neural networks competition (VNN-COMP 2023): summary and results. CoRR abs/2312.16760 (2023). https://doi.org/10.48550/ARXIV.2312.16760

  4. Brix, C., Müller, M.N., Bak, S., Johnson, T.T., Liu, C.: First three years of the international verification of neural networks competition (VNN-COMP). Int. J. Softw. Tools Technol. Transf. 25(3), 329–339 (2023)


  5. Cheng, C.-H., Nührenberg, G., Ruess, H.: Maximum resilience of artificial neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) ATVA 2017. LNCS, vol. 10482, pp. 251–268. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_18


  6. Fatnassi, W., Khedr, H., Yamamoto, V., Shoukry, Y.: BERN-NN: tight bound propagation for neural networks using Bernstein polynomial interval arithmetic. CoRR abs/2211.14438 (2022). https://doi.org/10.48550/arXiv.2211.14438

  7. Goodfellow, I., Bengio, Y., Courville, A.: Deep Learning. MIT Press (2016). http://www.deeplearningbook.org

  8. Henriksen, P., Lomuscio, A.R.: Efficient neural network verification via adaptive refinement and adversarial search. In: ECAI. Frontiers in Artificial Intelligence and Applications, vol. 325, pp. 2513–2520. IOS Press (2020)


  9. Huang, C., Fan, J., Chen, X., Li, W., Zhu, Q.: POLAR: a polynomial arithmetic framework for verifying neural-network controlled systems. In: Bouajjani, A., Holík, L., Wu, Z. (eds.) ATVA 2022. LNCS, vol. 13505, pp. 414–430. Springer, Cham (2022). https://doi.org/10.1007/978-3-031-19992-9_27


  10. Ivanov, R., Carpenter, T., Weimer, J., Alur, R., Pappas, G., Lee, I.: Verisig 2.0: verification of neural network controllers using Taylor model preconditioning. In: Silva, A., Leino, K.R.M. (eds.) CAV 2021. LNCS, vol. 12759, pp. 249–262. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-81685-8_11


  11. Ivanov, R., Weimer, J., Alur, R., Pappas, G.J., Lee, I.: Verisig: verifying safety properties of hybrid systems with neural network controllers. In: Ozay, N., Prabhakar, P. (eds.) Proceedings of the 22nd ACM International Conference on Hybrid Systems: Computation and Control, HSCC 2019, Montreal, QC, Canada, 16–18 April 2019, pp. 169–178. ACM (2019). https://doi.org/10.1145/3302504.3311806

  12. Julian, K.D., Lopez, J., Brush, J.S., Owen, M.P., Kochenderfer, M.J.: Policy compression for aircraft collision avoidance systems. In: 2016 IEEE/AIAA 35th Digital Avionics Systems Conference (DASC), pp. 1–10 (2016). https://doi.org/10.1109/DASC.2016.7778091

  13. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5


  14. Kern, P., Büning, M.K., Sinz, C.: Optimized symbolic interval propagation for neural network verification. In: 1st Workshop on Formal Verification of Machine Learning (WFVML 2022) colocated with ICML 2022: International Conference on Machine Learning (2022)


  15. Kern, P., Sinz, C.: Appendix to: Abstract interpretation of ReLU neural networks with optimizable polynomial relaxations (2024)


  16. Kochdumper, N., Althoff, M.: Sparse polynomial zonotopes: a novel set representation for reachability analysis. IEEE Trans. Autom. Control 66(9), 4043–4058 (2021). https://doi.org/10.1109/TAC.2020.3024348


  17. Kochdumper, N., Schilling, C., Althoff, M., Bak, S.: Open- and closed-loop neural network verification using polynomial zonotopes. In: Rozier, K.Y., Chaudhuri, S. (eds.) NFM 2023. LNCS, vol. 13903, pp. 16–36. Springer, Cham (2023). https://doi.org/10.1007/978-3-031-33170-1_2


  18. Ladner, T., Althoff, M.: Automatic abstraction refinement in neural network verification using sensitivity analysis. In: Proceedings of the 26th ACM International Conference on Hybrid Systems: Computation and Control, HSCC 2023, San Antonio, TX, USA, 9–12 May 2023, pp. 18:1–18:13. ACM (2023). https://doi.org/10.1145/3575870.3587129

  19. Lan, J., Zheng, Y., Lomuscio, A.: Tight neural network verification via semidefinite relaxations and linear reformulations. In: Thirty-Sixth AAAI Conference on Artificial Intelligence, AAAI 2022, Thirty-Fourth Conference on Innovative Applications of Artificial Intelligence, IAAI 2022, The Twelfth Symposium on Educational Advances in Artificial Intelligence, EAAI 2022, Virtual Event, 22 February–1 March 2022, pp. 7272–7280. AAAI Press (2022). https://ojs.aaai.org/index.php/AAAI/article/view/20689

  20. Müller, M.N., Brix, C., Bak, S., Liu, C., Johnson, T.T.: The third international verification of neural networks competition (VNN-COMP 2022): summary and results. CoRR abs/2212.10376 (2022). https://doi.org/10.48550/ARXIV.2212.10376

  21. Pan, Y., et al.: Imitation learning for agile autonomous driving. Int. J. Robotics Res. 39(2–3) (2020). https://doi.org/10.1177/0278364919880273

  22. Raghunathan, A., Steinhardt, J., Liang, P.: Semidefinite relaxations for certifying robustness to adversarial examples. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, Montréal, Canada, 3–8 December 2018, pp. 10900–10910 (2018). https://proceedings.neurips.cc/paper/2018/hash/29c0605a3bab4229e46723f89cf59d83-Abstract.html

  23. Singh, G., Gehr, T., Mirman, M., Püschel, M., Vechev, M.T.: Fast and effective robustness certification. In: NeurIPS, pp. 10825–10836 (2018)


  24. Singh, G., Gehr, T., Püschel, M., Vechev, M.T.: An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 3(POPL), 41:1–41:30 (2019)


  25. Szegedy, C., et al.: Intriguing properties of neural networks. In: Bengio, Y., LeCun, Y. (eds.) 2nd International Conference on Learning Representations, ICLR 2014, Banff, AB, Canada, 14–16 April 2014, Conference Track Proceedings (2014). http://arxiv.org/abs/1312.6199

  26. Tjeng, V., Xiao, K.Y., Tedrake, R.: Evaluating robustness of neural networks with mixed integer programming. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. OpenReview.net (2019). https://openreview.net/forum?id=HyGIdiRqtm

  27. Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Efficient formal safety analysis of neural networks. In: NeurIPS, pp. 6369–6379 (2018)


  28. Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Formal security analysis of neural networks using symbolic intervals. In: USENIX Security Symposium, pp. 1599–1614. USENIX Association (2018)


  29. Xu, K., et al.: Fast and complete: enabling complete neural network verification with rapid and massively parallel incomplete verifiers. In: 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, 3–7 May 2021. OpenReview.net (2021). https://openreview.net/forum?id=nVZtXBI6LNn

  30. Zhang, H., Weng, T., Chen, P., Hsieh, C., Daniel, L.: Efficient neural network robustness certification with general activation functions. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, Montréal, Canada, 3–8 December 2018, pp. 4944–4953 (2018). https://proceedings.neurips.cc/paper/2018/hash/d04863f100d59b3eb688a11f95b0ae60-Abstract.html


Author information

Correspondence to Philipp Kern.


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Kern, P., Sinz, C. (2025). Abstract Interpretation of ReLU Neural Networks with Optimizable Polynomial Relaxations. In: Giacobazzi, R., Gorla, A. (eds) Static Analysis. SAS 2024. Lecture Notes in Computer Science, vol 14995. Springer, Cham. https://doi.org/10.1007/978-3-031-74776-2_7


  • DOI: https://doi.org/10.1007/978-3-031-74776-2_7


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-74775-5

  • Online ISBN: 978-3-031-74776-2

  • eBook Packages: Computer Science, Computer Science (R0)
