Abstract
As the powerful visual processing capability of CNNs has become widely recognized, their security has attracted growing attention. A large body of experiments shows that CNNs are extremely vulnerable to adversarial attacks. Existing attack methods perform well in the white-box setting, but in practice an attacker can usually only mount black-box attacks, whose success rates are relatively low. At the same time, most attack methods perturb every pixel of the image, which introduces excessive distortion into the adversarial example. To this end, we propose an enhanced attack strategy, GF-Attack. It distinguishes the attack region from the non-attack region and incorporates information from the flipped image during the attack. This strategy improves the transferability of the generated adversarial examples and reduces the amount of perturbation. We conducted single-model and ensemble-model attacks on eight models, covering both normal training and adversarial training, and compared the success rates and perturbation distances of adversarial examples generated by methods enhanced with GF-Attack against the original methods. Experiments show that the GF-Attack-enhanced methods outperform the original methods in both the black-box and white-box settings, raising the maximum success rate by 9.13% and reducing pixel-level perturbation by 404K.
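The abstract only names the two ingredients of GF-Attack: restricting the perturbation to an attack region and folding in gradient information from the flipped image. Below is a minimal PyTorch sketch of how those two ideas could combine in a single FGSM-style step. The function name gf_attack_step, the choice of a horizontal flip, the equal-weight gradient fusion, and the assumption that a binary mask is supplied from outside are all illustrative assumptions, not the paper's exact algorithm.

import torch

def gf_attack_step(model, loss_fn, x, y, mask, epsilon=8 / 255):
    # x: input batch in [0, 1]; y: true labels; mask: 1 inside the
    # attack region, 0 in the non-attack region (hypothetical input --
    # the paper's own region-selection step is not reproduced here).
    x = x.clone().detach().requires_grad_(True)

    # Sum the losses on the original and the horizontally flipped input.
    # Backpropagating through torch.flip automatically maps the flipped
    # image's gradient back into the original pixel layout.
    loss = loss_fn(model(x), y) + loss_fn(model(torch.flip(x, dims=[-1])), y)
    grad = torch.autograd.grad(loss, x)[0]

    # Take the FGSM sign step only inside the attack region, leaving
    # non-attack pixels untouched, then clip to the valid pixel range.
    x_adv = x + epsilon * grad.sign() * mask
    return x_adv.clamp(0.0, 1.0).detach()

In the paper's setting the mask would presumably come from some saliency estimate of the attack region; here it is simply an argument, and the sketch shows only how the masked step and the flipped-gradient fusion interact.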
Acknowledgment
This work was supported in part by the National Natural Science Foundation of China under Grant 61762089, Grant 61663047, Grant 61863036, and Grant 61762092, and in part by the Science and Technology Innovation Team Project of Yunnan Province under Grant 2017HC012.
Copyright information
© 2020 Springer Nature Switzerland AG
About this paper
Cite this paper
Zhang, J., Liu, Z., Wu, L., Wu, L., Duan, Q., Liu, J. (2020). GF-Attack: A Strategy to Improve the Performance of Adversarial Example. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol. 12487. Springer, Cham. https://doi.org/10.1007/978-3-030-62460-6_45
DOI: https://doi.org/10.1007/978-3-030-62460-6_45
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-62459-0
Online ISBN: 978-3-030-62460-6
eBook Packages: Computer Science, Computer Science (R0)