
GF-Attack: A Strategy to Improve the Performance of Adversarial Example

  • Conference paper: Machine Learning for Cyber Security (ML4CS 2020)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12487)

Abstract

As the power of CNNs for visual processing has become widely recognized, their security has attracted increasing attention. A large body of experiments shows that CNNs are extremely vulnerable to adversarial attacks. Existing attack methods perform well in the white-box setting, but in practice an attacker can usually mount only black-box attacks, whose success rate is comparatively low. At the same time, most attack methods perturb every pixel of the image, which introduces excessive interference into the adversarial example. To this end, we propose an enhanced attack strategy, GF-Attack. It distinguishes the attack region from the non-attack region and combines information from the flipped image during the attack. This strategy improves the transferability of the generated adversarial examples and reduces the amount of interference. We conducted single-model and ensemble-model attacks on eight models, covering both normal training and adversarial training, and compared the success rate and perturbation distance of the adversarial examples generated by methods enhanced with the GF-Attack strategy against those of the original methods. Experiments show that methods improved by GF-Attack outperform the original methods in both the black-box and white-box settings, increasing the maximum success rate by 9.13% and reducing pixel interference by 404K.
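The full paper is behind the access wall, but the two ideas named in the abstract, restricting perturbations to an attack region and combining gradient information from the flipped image, can be sketched as a hypothetical iterative attack loop. Everything below is an illustrative assumption, not the authors' actual method: the function name `gf_attack`, the `grad_fn` interface, the 50/50 gradient averaging, and all hyperparameter values are made up for the sketch, which borrows its momentum step from the MI-FGSM style of iterative attack.

```python
import numpy as np

def gf_attack(x, y, grad_fn, mask, eps=0.03, alpha=0.005, steps=10, mu=1.0):
    """Hypothetical sketch of a GF-Attack-style iterative attack.

    x       : input image, shape (H, W, C), values in [0, 1]
    y       : true label (unused by the toy grad_fn in the usage example)
    grad_fn : callable(image, label) -> gradient of the loss w.r.t. the image
    mask    : binary array, 1 inside the attack region, 0 elsewhere
    """
    x_adv = x.copy()
    momentum = np.zeros_like(x)
    for _ in range(steps):
        # Combine the gradient of the image with the gradient of its
        # horizontal flip, mapped back to the original orientation.
        g = grad_fn(x_adv, y)
        g_flip = grad_fn(x_adv[:, ::-1, :], y)[:, ::-1, :]
        g = 0.5 * (g + g_flip)
        # Momentum accumulation in the style of MI-FGSM.
        momentum = mu * momentum + g / (np.abs(g).sum() + 1e-12)
        # Apply the signed step only inside the attack region.
        x_adv = x_adv + alpha * np.sign(momentum) * mask
        # Keep the perturbation within the eps-ball and the valid pixel range.
        x_adv = np.clip(np.clip(x_adv, x - eps, x + eps), 0.0, 1.0)
    return x_adv
```

Because the step is multiplied by the mask, pixels outside the attack region are never touched, which is one plausible reading of how the strategy reduces the total amount of perturbed pixels while the flipped-gradient averaging targets transferability.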



Acknowledgment

This work was supported in part by the National Natural Science Foundation of China under Grant 61762089, Grant 61663047, Grant 61863036, and Grant 61762092, and in part by the Science and Technology Innovation Team Project of Yunnan Province under Grant 2017HC012.

Author information

Correspondence to Qing Duan or Junhui Liu.


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Zhang, J., Liu, Z., Wu, L., Wu, L., Duan, Q., Liu, J. (2020). GF-Attack: A Strategy to Improve the Performance of Adversarial Example. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol 12487. Springer, Cham. https://doi.org/10.1007/978-3-030-62460-6_45

  • DOI: https://doi.org/10.1007/978-3-030-62460-6_45

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62459-0

  • Online ISBN: 978-3-030-62460-6

  • eBook Packages: Computer Science, Computer Science (R0)
