Using Counterexamples to Improve Robustness Verification in Neural Networks

  • Conference paper
Automated Technology for Verification and Analysis (ATVA 2023)

Abstract

Given the pervasive use of neural networks in safety-critical systems, it is important to ensure that they are robust. Recent research has focused on verifying that networks do not alter their behavior under small perturbations of their inputs. The most successful methods are based on branch-and-bound, an abstraction-refinement technique. However, despite tremendous improvements over the last five years, there are still several benchmarks on which these methods fail. One reason for this is that many methods rely on generic, off-the-shelf procedures to locate the cause of imprecision.
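To make the branch-and-bound idea concrete, the toy sketch below (our illustration, not code from the paper) verifies an output bound for the one-ReLU function f(x) = relu(x) − x over an input interval: plain interval arithmetic loses the dependency between relu(x) and x and fails to prove the bound, while splitting the input domain (the "branch" step) recovers a proof.

```python
def relu(z):
    return max(z, 0.0)

def upper_bound(l, u):
    # Interval arithmetic over-approximates f(x) = relu(x) - x because it
    # treats the two occurrences of x as independent: relu is monotone, so
    # relu(x) <= relu(u), and -x <= -l.
    return relu(u) - l

def verify(l, u, c, depth=0, max_depth=10):
    """Try to prove f(x) <= c for all x in [l, u] by branch-and-bound."""
    if upper_bound(l, u) <= c:   # the abstraction is precise enough here
        return True
    if depth == max_depth:       # give up: the failure may be spurious
        return False
    m = (l + u) / 2.0            # refine: split the input interval
    return (verify(l, m, c, depth + 1, max_depth)
            and verify(m, u, c, depth + 1, max_depth))
```

On [−1, 1] the true maximum of f is 1 (attained at x = −1), but the interval bound is 2; a single split into [−1, 0] and [0, 1] tightens both sub-bounds to exactly 1, so `verify(-1.0, 1.0, 1.0)` succeeds where the unrefined abstraction fails.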

In this paper, our goal is to develop an approach that identifies the precise source of imprecision during abstraction. We present a novel counterexample-guided approach that can be applied alongside many abstraction techniques. As a specific case, we implement our technique on top of a basic abstraction framework provided by the tool DeepPoly and demonstrate how we can remove imprecisions in a targeted manner. This allows us to surpass DeepPoly's performance as well as outperform other refinement approaches in the literature. Surprisingly, we are also able to verify several benchmark instances on which all leading tools fail.
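As an illustration of how a concrete counterexample can localize imprecision (a hedged sketch in the spirit of the approach, not the paper's actual algorithm; the function names here are ours), the snippet below evaluates the DeepPoly-style triangle upper relaxation of each ReLU at a spurious counterexample's pre-activations and reports the neuron whose relaxation contributes the most slack, a natural candidate for targeted refinement such as splitting on that neuron's sign.

```python
def triangle_upper(x, l, u):
    """DeepPoly-style upper relaxation of relu(x) for pre-activation bounds [l, u]."""
    if l >= 0.0:
        return x                      # stably active: relu(x) = x exactly
    if u <= 0.0:
        return 0.0                    # stably inactive: relu(x) = 0 exactly
    return u * (x - l) / (u - l)      # unstable: chord from (l, 0) to (u, u)

def most_imprecise_neuron(pre_acts, bounds):
    """Index of the ReLU whose relaxation is loosest at the counterexample point."""
    slacks = [triangle_upper(x, l, u) - max(x, 0.0)   # relaxation minus true value
              for x, (l, u) in zip(pre_acts, bounds)]
    return max(range(len(slacks)), key=lambda i: slacks[i])
```

For pre-activations [0.5, −0.2, 1.0] with bounds [(−1, 1), (−2, 2), (0, 2)], the second neuron carries slack 0.9 (versus 0.25 and 0) and is selected for refinement; stable neurons contribute no slack and are never refined.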

Notes

  1. We could not compare with VeriNet, which placed 2nd and 3rd in VNNCOMP 2021 and VNNCOMP 2022, respectively, as we had difficulties with the license for its external solver (Xpress). We also could not compare with MN-BaB, since it requires a GPU to run. Finally, since ERAN internally uses DeepPoly and kPoly, with which we compare directly, we skipped a separate comparison with ERAN.

  2. \(\alpha \beta \)-CROWN outperforms Marabou in VNN-COMP'22, whereas in our experiments Marabou performs better. Potential reasons are differences in benchmarks and the fact that \(\alpha \beta \)-CROWN ran on a GPU in VNN-COMP'22, while Marabou ran on a CPU.

References

  1. Bojarski, M., et al.: End to end learning for self-driving cars. CoRR, abs/1604.07316 (2016)

  2. Amato, F., López, A., Peña-Méndez, E.M., Vaňhara, P., Hampl, A., Havel, J.: Artificial neural networks in medical diagnosis. J. Appl. Biomed. 11, 47–58 (2013)

  3. Hinton, G., et al.: Deep neural networks for acoustic modeling in speech recognition: the shared views of four research groups. IEEE Sig. Process. Mag. 29(6), 82–97 (2012)

  4. Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: Bengio, Y., LeCun, Y. (eds.) 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015, Conference Track Proceedings (2015)

  5. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: a calculus for reasoning about deep neural networks. Formal Methods Syst. Des. 60, 87–116 (2021). https://doi.org/10.1007/s10703-021-00363-7

  6. Lomuscio, A., Maganti, L.: An approach to reachability analysis for feed-forward ReLU neural networks. CoRR, abs/1706.07351 (2017)

  7. Fischetti, M., Jo, J.: Deep neural networks and mixed integer linear optimization. Constraints Int. J. 23(3), 296–309 (2018). https://doi.org/10.1007/s10601-018-9285-6

  8. Dutta, S., Jha, S., Sankaranarayanan, S., Tiwari, A.: Output range analysis for deep feedforward neural networks. In: Dutle, A., Muñoz, C., Narkawicz, A. (eds.) NFM 2018. LNCS, vol. 10811, pp. 121–138. Springer, Cham (2018). https://doi.org/10.1007/978-3-319-77935-5_9

  9. Cheng, C.-H., Nührenberg, G., Ruess, H.: Maximum resilience of artificial neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) ATVA 2017. LNCS, vol. 10482, pp. 251–268. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_18

  10. Katz, G., Barrett, C., Dill, D.L., Julian, K., Kochenderfer, M.J.: Reluplex: an efficient SMT solver for verifying deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 97–117. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_5

  11. Katz, G., et al.: The marabou framework for verification and analysis of deep neural networks. In: Dillig, I., Tasiran, S. (eds.) CAV 2019. LNCS, vol. 11561, pp. 443–452. Springer, Cham (2019). https://doi.org/10.1007/978-3-030-25540-4_26

  12. Ehlers, R.: Formal verification of piece-wise linear feed-forward neural networks. In: D’Souza, D., Narayan Kumar, K. (eds.) ATVA 2017. LNCS, vol. 10482, pp. 269–286. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-68167-2_19

  13. Huang, X., Kwiatkowska, M., Wang, S., Wu, M.: Safety verification of deep neural networks. In: Majumdar, R., Kunčak, V. (eds.) CAV 2017. LNCS, vol. 10426, pp. 3–29. Springer, Cham (2017). https://doi.org/10.1007/978-3-319-63387-9_1

  14. Wang, S., et al.: Beta-CROWN: efficient bound propagation with per-neuron split constraints for neural network robustness verification. In: Advances in Neural Information Processing Systems, vol. 34, pp. 29909–29921 (2021)

  15. Xu, K., et al.: Fast and complete: enabling complete neural network verification with rapid and massively parallel incomplete verifiers. CoRR, abs/2011.13824 (2020)

  16. Zhang, H., et al.: General cutting planes for bound-propagation-based neural network verification. CoRR, abs/2208.05740 (2022)

  17. Dvijotham, K., Stanforth, R., Gowal, S., Mann, T.A., Kohli, P.: A dual approach to scalable verification of deep networks. In: UAI, vol. 1, p. 3 (2018)

  18. Gehr, T., Mirman, M., Drachsler-Cohen, D., Tsankov, P., Chaudhuri, S., Vechev, M.T.: AI2: safety and robustness certification of neural networks with abstract interpretation. In: 2018 IEEE Symposium on Security and Privacy, SP 2018, Proceedings, San Francisco, California, USA, 21–23 May 2018, pp. 3–18. IEEE Computer Society (2018)

  19. Singh, G., Gehr, T., Mirman, M., Püschel, M., Vechev, M.T.: Fast and effective robustness certification. In: Bengio, S., Wallach, H.M., Larochelle, H., Grauman, K., Cesa-Bianchi, N., Garnett, R. (eds.) Advances in Neural Information Processing Systems 31: Annual Conference on Neural Information Processing Systems 2018, NeurIPS 2018, Montréal, Canada, 3–8 December 2018, pp. 10825–10836 (2018)

  20. Singh, G., Gehr, T., Püschel, M., Vechev, M.T.: Boosting robustness certification of neural networks. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6–9 2019. OpenReview.net (2019)

  21. Weng, L., et al.: Towards fast computation of certified robustness for ReLU networks. In: International Conference on Machine Learning, pp. 5276–5285. PMLR (2018)

  22. Wong, E., Kolter, Z.: Provable defenses against adversarial examples via the convex outer adversarial polytope. In: International Conference on Machine Learning, pp. 5286–5295. PMLR (2018)

  23. Zhang, H., Weng, T.-W., Chen, P.-Y., Hsieh, C.-J., Daniel, L.: Efficient neural network robustness certification with general activation functions. In: Advances in Neural Information Processing Systems, vol. 31 (2018)

  24. Singh, G., Gehr, T., Püschel, M., Vechev, M.: An abstract domain for certifying neural networks. Proc. ACM Program. Lang. 3(POPL), 1–30 (2019)

  25. Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Formal security analysis of neural networks using symbolic intervals. In: 27th USENIX Security Symposium (USENIX Security 2018), pp. 1599–1614 (2018)

  26. Wang, S., Pei, K., Whitehouse, J., Yang, J., Jana, S.: Efficient formal safety analysis of neural networks. In: Advances in Neural Information Processing Systems, pp. 6367–6377 (2018)

  27. Elboher, Y.Y., Gottschlich, J., Katz, G.: An abstraction-based framework for neural network verification. In: Lahiri, S.K., Wang, C. (eds.) CAV 2020. LNCS, vol. 12224, pp. 43–65. Springer, Cham (2020). https://doi.org/10.1007/978-3-030-53288-8_3

  28. Yang, P., et al.: Improving neural network verification through spurious region guided refinement. In: Groote, J.F., Larsen, K.G. (eds.) TACAS 2021. LNCS, vol. 12651, pp. 389–408. Springer, Cham (2021). https://doi.org/10.1007/978-3-030-72016-2_21

  29. Lin, X., Zhu, H., Samanta, R., Jagannathan, S.: ART: abstraction refinement-guided training for provably correct neural networks. In: 2020 Formal Methods in Computer Aided Design, FMCAD 2020, Haifa, Israel, 21–24 September 2020, pp. 148–157. IEEE (2020)

  30. Bixby, B.: The gurobi optimizer. Transp. Res. Part B 41(2), 159–178 (2007)

  31. Singh, G., Ganvir, R., Püschel, M., Vechev, M.: Beyond the single neuron convex barrier for neural network certification. In: Advances in Neural Information Processing Systems, vol. 32 (2019)

  32. Dong, Y., et al.: Boosting adversarial attacks with momentum. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 9185–9193 (2018)

  33. \(\alpha \beta \)-CROWN (2021). https://github.com/huanzhang12/alpha-beta-CROWN

  34. Bunel, R., Mudigonda, P., Turkaslan, I., Torr, P., Lu, J., Kohli, P.: Branch and bound for piecewise linear neural network verification. J. Mach. Learn. Res. 21, 1–39 (2020)

  35. Cousot, P., Halbwachs, N.: Automatic discovery of linear restraints among variables of a program. In: Proceedings of the 5th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, pp. 84–96 (1978)

  36. Cousot, P., Cousot, R.: Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In: Proceedings of the 4th ACM SIGACT-SIGPLAN Symposium on Principles of Programming Languages, pp. 238–252 (1977)

  37. Tjeng, V., Xiao, K.Y., Tedrake, R.: Evaluating robustness of neural networks with mixed integer programming. In: 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, 6–9 May 2019. OpenReview.net (2019)

  38. Zhang, H., et al.: A branch and bound framework for stronger adversarial attacks of ReLU networks. In: International Conference on Machine Learning, pp. 26591–26604. PMLR (2022)

  39. Bunel, R.R., Turkaslan, I., Torr, P., Kohli, P., Mudigonda, P.K.: A unified view of piecewise linear neural network verification. In: Advances in Neural Information Processing Systems, vol. 31 (2018)

  40. Bunel, R., et al.: Lagrangian decomposition for neural network verification. In: Conference on Uncertainty in Artificial Intelligence, pp. 370–379. PMLR (2020)

  41. De Palma, A., Behl, H.S., Bunel, R., Torr, P., Kumar, M.P.: Scaling the convex barrier with active sets. In: Proceedings of the ICLR 2021 Conference. Open Review (2021)

  42. De Palma, A., Behl, H.S., Bunel, R., Torr, P.H.S., Kumar, M.P.: Scaling the convex barrier with sparse dual algorithms. CoRR, abs/2101.05844 (2021)

  43. De Palma, A., et al.: Improved branch and bound for neural network verification via lagrangian decomposition. CoRR, abs/2104.06718 (2021)

  44. Deng, L.: The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Sig. Process. Mag. 29(6), 141–142 (2012)

  45. Mirman, M., Gehr, T., Vechev, M.: Differentiable abstract interpretation for provably robust neural networks. In: International Conference on Machine Learning, pp. 3578–3586. PMLR (2018)

  46. Brain, M., Davenport, J.H., Griggio, A.: Benchmarking solvers, SAT-style. In: SC\(^2\)@ ISSAC (2017)

Author information

Corresponding author

Correspondence to Mohammad Afzal.

Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Afzal, M., Gupta, A., Akshay, S. (2023). Using Counterexamples to Improve Robustness Verification in Neural Networks. In: André, É., Sun, J. (eds) Automated Technology for Verification and Analysis. ATVA 2023. Lecture Notes in Computer Science, vol 14215. Springer, Cham. https://doi.org/10.1007/978-3-031-45329-8_20

  • DOI: https://doi.org/10.1007/978-3-031-45329-8_20

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-45328-1

  • Online ISBN: 978-3-031-45329-8

  • eBook Packages: Computer Science, Computer Science (R0)
