An Evolutionary-Based Black-Box Attack to Deep Neural Network Classifiers

Published in Mobile Networks and Applications

Abstract

Deep neural networks are susceptible to tiny, carefully crafted adversarial perturbations, which are usually added to every pixel of an image to produce an adversarial example. Most existing adversarial attacks minimize the L2 distance between the adversarial image and the source image but ignore the L0 distance, which remains large. To address this issue, we introduce a new black-box adversarial attack based on an evolutionary method and the bisection method, which greatly reduces the L0 distance while keeping the L2 distance limited. By flipping pixels of the target image, an adversarial example is generated in which only a small number of pixels come from the target image and the remaining pixels come from the source image. Experiments show that our attack reliably generates high-quality adversarial examples, and it performs particularly well on large-scale images.
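To make the mechanism described above concrete, here is a minimal Python sketch of the kind of pixel-flipping search the abstract outlines; it is an illustration under stated assumptions, not the authors' implementation. A binary mask marks which pixels are taken from the target image, an evolutionary loop tries to revert flipped pixels back to the source image (shrinking the L0 distance) as long as the black-box classifier keeps misclassifying, and a final bisection-style step trims the remaining flipped pixels. The interface `predict_label`, the helper `compose`, and all hyperparameters are assumptions.

```python
import numpy as np


def predict_label(model, image):
    """Query the black-box model for its top-1 label (assumed interface:
    `model` maps a batch of images to class scores)."""
    return int(np.argmax(model(image[None, ...])))


def is_adversarial(model, image, source_label):
    """Decision-based criterion: the attack only needs to know whether the
    predicted label differs from the source image's label."""
    return predict_label(model, image) != source_label


def compose(source, target, mask):
    """Build a candidate image: pixels where mask == 1 are taken from the
    target image, all other pixels from the source image."""
    return np.where(mask[..., None].astype(bool), target, source)


def bisect_mask(model, source, target, mask, source_label, steps=10, rng=None):
    """Binary-search how few flipped pixels can be kept while the candidate
    still fools the model (a stand-in for the paper's bisection step)."""
    rng = rng or np.random.default_rng(0)
    flipped = np.argwhere(mask)
    lo, hi, best = 0, len(flipped), mask
    for _ in range(steps):
        mid = (lo + hi) // 2
        trial = np.zeros_like(mask)
        keep = flipped[rng.choice(len(flipped), size=mid, replace=False)]
        trial[tuple(keep.T)] = 1
        if is_adversarial(model, compose(source, target, trial), source_label):
            best, hi = trial, mid      # fewer target pixels still fool the model
        else:
            lo = mid + 1
    return best


def evolutionary_attack(model, source, target, source_label,
                        generations=200, population=10, revert_rate=0.01):
    """Start from the target image (mask all ones) and evolve the mask so that
    ever fewer pixels differ from the source image (smaller L0)."""
    rng = np.random.default_rng(0)
    h, w = source.shape[:2]
    mask = np.ones((h, w), dtype=np.uint8)
    for _ in range(generations):
        survivors = []
        for _ in range(population):
            child = mask.copy()
            # Mutation: revert a random subset of flipped pixels to the source.
            child[(rng.random((h, w)) < revert_rate) & (mask == 1)] = 0
            if is_adversarial(model, compose(source, target, child), source_label):
                survivors.append(child)
        if survivors:
            # Selection: keep the child with the fewest target pixels.
            mask = min(survivors, key=lambda m: int(m.sum()))
    mask = bisect_mask(model, source, target, mask, source_label, rng=rng)
    return compose(source, target, mask), int(mask.sum())
```

The abstract also states that the L2 distance is kept limited; in a sketch like this, that would appear as an additional constraint or a secondary objective in the selection step, the details of which are given in the full paper.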




Notes

  1. https://github.com/fchollet/deep-learning-models/releases
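Footnote 1 links to the Keras pretrained-model release. Assuming the attacked classifiers come from that release (ResNet50 on ImageNet is used here purely as an illustrative choice, not something stated in this excerpt), the black-box oracle queried by a sketch like the one above could be wrapped as follows:

```python
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input

# Assumed black-box classifier: a Keras pretrained ImageNet model from the
# release linked in footnote 1 (ResNet50 is an illustrative choice).
model = ResNet50(weights="imagenet")

def black_box(images):
    """images: float array of shape (N, 224, 224, 3) with pixel values in [0, 255]."""
    return model.predict(preprocess_input(images.copy()), verbose=0)

# `black_box` can then stand in for the `model` argument of the sketch above,
# e.g. predict_label(black_box, candidate_image).
```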


Acknowledgements

This work is supported by the National Natural Science Foundation of China (No. 61876019).

Author information

Corresponding author

Correspondence to Xiaohui Kuang.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Zhou, Y., Tan, Ya., Zhang, Q. et al. An Evolutionary-Based Black-Box Attack to Deep Neural Network Classifiers. Mobile Netw Appl 26, 1616–1629 (2021). https://doi.org/10.1007/s11036-019-01499-x


DOI: https://doi.org/10.1007/s11036-019-01499-x
