
Generating unrestricted adversarial examples via three parameters


Abstract

Deep neural networks have been shown to be vulnerable to adversarial examples deliberately constructed to make victim models misclassify. Because most adversarial examples restrict their perturbations to an Lp-norm bound, existing defense methods have focused on these perturbations, and less attention has been paid to unrestricted adversarial examples, which enable more realistic attacks that deceive models without affecting human predictions. To address this problem, the proposed adversarial attack method generates an unrestricted adversarial example with a limited number of parameters. The attack selects three points on the input image and, based on their locations, transforms the image into an adversarial example. By limiting the range of movement and the locations of these three points, and by using a discriminator network, the proposed unrestricted adversarial example preserves the image appearance. Experimental results show that the proposed adversarial examples achieve an average success rate of 93.5% in human evaluation on the MNIST and SVHN datasets, and reduce model accuracy by an average of 73% on six datasets: MNIST, FMNIST, SVHN, CIFAR10, CIFAR100, and ImageNet. Adversarial training with the proposed attack also improves model robustness against randomly transformed images.
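
To make the three-point idea concrete, below is a minimal sketch (our illustration, not the authors' implementation) of how three matched point pairs determine an affine warp of an image. The anchor locations, the deltas parameterization, and the eps movement bound are assumptions introduced here for illustration only.

import numpy as np
import cv2  # OpenCV; Kornia offers a differentiable equivalent

def three_point_warp(image, deltas, eps=0.1):
    # Warp image with the affine map that moves three anchor points by
    # deltas, a (3, 2) array of fractional offsets (hypothetical
    # parameterization). Offsets are clipped to [-eps, eps] so the warp
    # stays visually mild.
    h, w = image.shape[:2]
    # Three fixed anchors forming a reference triangle on the image.
    src = np.float32([[0.25 * w, 0.25 * h],
                      [0.75 * w, 0.25 * h],
                      [0.50 * w, 0.75 * h]])
    # Limit how far each point may move (the restricted range).
    d = np.clip(np.asarray(deltas, dtype=np.float32), -eps, eps)
    dst = src + d * np.float32([w, h])
    # An affine transform is uniquely determined by three point pairs.
    M = cv2.getAffineTransform(src, dst)
    return cv2.warpAffine(image, M, (w, h), borderValue=0)

An attack in this spirit would search over the six offset values for a warp that changes the victim model's prediction, while the movement bound (and, in the paper, a discriminator network) keeps the warped image looking natural.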



Acknowledgements

The authors would like to thank Dr. Seyed-Mohsen Moosavi-Dezfooli for helpful discussions. This work was partly supported by a grant from the Iran National Science Foundation (INSF).

Author information

Corresponding author

Correspondence to Hanieh Naderi.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendices

Appendix A

Table 5 Accuracy of models at various scale factors of the input image

Nine scale factors in the range 0.2 to 1 are applied to the input image, with the three parameters α, β, and γ changed by the same scale factor at each step. To keep the output the same size as the input image, the empty space is filled with zero-padding (black pixels). The accuracy of the models under these settings is given in Table 5. As expected, the smaller the scale factor, the lower the accuracy of the models. According to the results listed in Table 5, each reduction step of the scale factor decreases model accuracy by approximately 8%.
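
As an illustration, the following sketch reproduces the rescale-and-zero-pad procedure; the function name, the choice to center the shrunken content, and the use of OpenCV/NumPy in place of the actual pipeline are our assumptions.

import numpy as np
import cv2

def scale_with_zero_padding(image, scale):
    # Shrink the image by scale (0 < scale <= 1) and zero-pad it back
    # to its original size, centering the shrunken content.
    h, w = image.shape[:2]
    nh = max(1, int(round(h * scale)))
    nw = max(1, int(round(w * scale)))
    small = cv2.resize(image, (nw, nh), interpolation=cv2.INTER_AREA)
    out = np.zeros_like(image)          # black (zero) background
    top, left = (h - nh) // 2, (w - nw) // 2
    out[top:top + nh, left:left + nw] = small
    return out

# Nine scale factors from 0.2 to 1.0, matching Table 5.
scales = np.linspace(0.2, 1.0, 9)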

Appendix B

To keep the output the same size as the input image, the empty space here is instead filled by border extrapolation (border pixels are extrapolated outward). Table 6 lists the classification accuracy of state-of-the-art models on the SVHN and ImageNet datasets under this border-extrapolation setting.

Table 6 Comparison of proposed attack with other attack strategies
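
For reference, a minimal sketch of the border-extrapolation variant; the assumption here is that NumPy's edge-replication padding is an acceptable stand-in for whatever extrapolation was actually used.

import numpy as np
import cv2

def scale_with_border_extrapolation(image, scale):
    # Same rescaling as in Appendix A, but fill the empty margin by
    # replicating the border pixels instead of zero-padding.
    h, w = image.shape[:2]
    nh = max(1, int(round(h * scale)))
    nw = max(1, int(round(w * scale)))
    small = cv2.resize(image, (nw, nh), interpolation=cv2.INTER_AREA)
    top, left = (h - nh) // 2, (w - nw) // 2
    pad = ((top, h - nh - top), (left, w - nw - left))
    if image.ndim == 3:
        pad += ((0, 0),)                # leave the channel axis unpadded
    return np.pad(small, pad, mode="edge")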

Appendix C

The results in Table 7 show that the proposed attack generalizes across different state-of-the-art models on the same CIFAR10 dataset. Table 7 reports the adversarial accuracy of each target model when it is fed the unrestricted adversarial examples (UAEs) generated with the source model.

Table 7 Transferability of proposed unrestricted adversarial examples on CIFAR10 dataset
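
As a sketch of how such a transfer evaluation can be run, assuming PyTorch models and a data loader of precomputed UAEs paired with their true labels (all names below are hypothetical):

import torch

@torch.no_grad()
def adversarial_accuracy(target_model, uae_loader, device="cpu"):
    # Accuracy of target_model on unrestricted adversarial examples
    # (UAEs) crafted against a *different* source model; the lower the
    # accuracy, the better the attack transfers.
    target_model.eval().to(device)
    correct = total = 0
    for x_adv, y in uae_loader:        # UAEs with their true labels
        preds = target_model(x_adv.to(device)).argmax(dim=1)
        correct += (preds == y.to(device)).sum().item()
        total += y.numel()
    return correct / total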


Cite this article

Naderi, H., Goli, L. & Kasaei, S. Generating unrestricted adversarial examples via three parameters. Multimed Tools Appl 81, 21919–21938 (2022). https://doi.org/10.1007/s11042-022-12007-x
