
Integrative System of Deep Classifiers Certification: Case of Convolutional Attacks

Conference paper

Agents and Artificial Intelligence (ICAART 2022)

Part of the book series: Lecture Notes in Computer Science (LNAI, volume 13786)


Abstract

The robustness of deep learning models was first studied under simple image attacks (2D rotation, brightness) and subsequently under other perturbations such as filtering. Existing certification systems often rely on a single learning model applied to a single type of data.

Here, we introduce an integrative method to certify deep classifiers against convolutional attacks. We study the impact of combining several data sources on the strength of the verification process. Using abstract interpretation theory, we propose a new verification routine that handles curves as well as images. We formulate lower and upper bounds with abstract intervals to support further classes of advanced attacks, including the filtering of images and 2D contours. Experiments are conducted on the MNIST, CIFAR10, and MPEG7 databases. The results demonstrate the utility of combining different inputs in the certification system.

I. Smati, R. Khalsi and M. M. Sallami—These authors contributed equally to this work.


Notes

  1. An abstract transformer is a step in the construction of an abstract interpretation: it produces an abstract set that includes all the concrete outputs corresponding to the real data.
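As an illustration of this footnote (a sketch under our own conventions, not the paper's verifier), the interval abstract transformer of an affine layer followed by ReLU returns a box guaranteed to contain every concrete output for inputs drawn from an input box:

    import numpy as np

    def affine_relu_transformer(W, b, x_lo, x_hi):
        """Sound interval transformer for ReLU(W @ x + b), x in [x_lo, x_hi]."""
        W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
        # Each pre-activation is increasing in x where W >= 0 and decreasing
        # where W < 0, which dictates the endpoint choices below.
        y_lo = W_pos @ x_lo + W_neg @ x_hi + b
        y_hi = W_pos @ x_hi + W_neg @ x_lo + b
        # ReLU is monotone, so applying it to both endpoints stays sound.
        return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)

Chaining such transformers layer by layer yields output bounds; the classifier is certified on the input box whenever the lower bound of the true class exceeds the upper bounds of all other classes.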


Author information


Corresponding author

Correspondence to Mallek Mziou-Sallami.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Smati, I., Khalsi, R., Mziou-Sallami, M., Adjed, F., Ghorbel, F. (2022). Integrative System of Deep Classifiers Certification: Case of Convolutional Attacks. In: Rocha, A.P., Steels, L., van den Herik, J. (eds.) Agents and Artificial Intelligence. ICAART 2022. Lecture Notes in Computer Science, vol. 13786. Springer, Cham. https://doi.org/10.1007/978-3-031-22953-4_5


  • DOI: https://doi.org/10.1007/978-3-031-22953-4_5

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-22952-7

  • Online ISBN: 978-3-031-22953-4

  • eBook Packages: Computer Science, Computer Science (R0)
