Abstract
The robustness of deep learning models was first studied under simple image attacks (2D rotation, brightness) and later under other perturbations such as filtering. These systems typically involve a single learning model applied to a single type of data.
Here, we introduce an integrative method to certify deep classifiers against convolutional attacks. We study the impact of combining several data sources on the strength of the verification process. Building on abstract interpretation theory, we propose a new verification routine that handles curves as well as images. We formulate lower and upper bounds with abstract intervals to support further classes of advanced attacks, including image and 2D contour filtering. Experiments are conducted on the MNIST, CIFAR-10 and MPEG-7 databases. The results demonstrate the benefit of combining different inputs in the certification system.
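To give a concrete flavor of the interval bounds mentioned above, the following is a minimal sketch of propagating a pixel-wise perturbation interval through a Gaussian-blur (convolutional) attack. It assumes NumPy; the function names are illustrative and not taken from the paper. Because Gaussian kernel weights are nonnegative, the convolution is monotone, so the output bounds follow directly from the input bounds.

```python
import numpy as np

def gaussian_kernel(size=3, sigma=1.0):
    # Separable Gaussian kernel, normalized to sum to 1.
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    k = np.outer(g, g)
    return k / k.sum()

def conv2d(img, k):
    # 'Valid' 2D correlation via explicit loops (small inputs only).
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

def blur_interval(lb, ub, k):
    # Gaussian kernel weights are nonnegative, so the blur is monotone:
    # lower/upper output bounds come from lower/upper input bounds.
    return conv2d(lb, k), conv2d(ub, k)

# Hypothetical perturbed image with an epsilon-ball interval around it.
rng = np.random.default_rng(0)
img = rng.random((8, 8))
eps = 0.05
lb, ub = np.clip(img - eps, 0, 1), np.clip(img + eps, 0, 1)
k = gaussian_kernel()
out_lb, out_ub = blur_interval(lb, ub, k)
```

Any concrete blurred image then lies between `out_lb` and `out_ub`, which is the soundness property a certifier needs before pushing the bounds through the classifier itself.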
I. Smati, R. Khalsi and M. M. Sallami—These authors contributed equally to this work.
Notes
- 1.
An abstract transformer is a step in the construction of an abstract interpretation: it maps an abstract input to an abstract set that includes all concrete outputs corresponding to the real data.
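As a hedged illustration of this notion, an interval-domain abstract transformer for an affine layer followed by ReLU might look like the sketch below (the function names, weights, and shapes are assumptions for the example, not from the paper):

```python
import numpy as np

def affine_transformer(lb, ub, W, b):
    # Interval transformer for y = W x + b: positive weights propagate
    # like-signed bounds, negative weights swap lower and upper bounds.
    Wp, Wn = np.maximum(W, 0), np.minimum(W, 0)
    return Wp @ lb + Wn @ ub + b, Wp @ ub + Wn @ lb + b

def relu_transformer(lb, ub):
    # ReLU is monotone, so clamping the bounds soundly over-approximates
    # all concrete outputs of the layer.
    return np.maximum(lb, 0.0), np.maximum(ub, 0.0)

# Toy input box [0, 1] x [0, 1] pushed through one affine + ReLU step.
lb_in, ub_in = np.array([0.0, 0.0]), np.array([1.0, 1.0])
W, b = np.array([[1.0, -2.0]]), np.array([0.5])
l, u = affine_transformer(lb_in, ub_in, W, b)  # l = [-1.5], u = [1.5]
l, u = relu_transformer(l, u)                  # l = [0.0],  u = [1.5]
```

Every concrete output of the layer on inputs from the box is guaranteed to lie in the resulting interval, which is exactly the inclusion property described above.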
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Smati, I., Khalsi, R., Mziou-Sallami, M., Adjed, F., Ghorbel, F. (2022). Integrative System of Deep Classifiers Certification: Case of Convolutional Attacks. In: Rocha, A.P., Steels, L., van den Herik, J. (eds) Agents and Artificial Intelligence. ICAART 2022. Lecture Notes in Computer Science, vol 13786. Springer, Cham. https://doi.org/10.1007/978-3-031-22953-4_5
Print ISBN: 978-3-031-22952-7
Online ISBN: 978-3-031-22953-4