Abstract
In the real world, black-box attacks are common, since attackers typically lack detailed knowledge of the target model. It is therefore desirable to craft adversarial examples with high transferability, which facilitates practical adversarial attacks. Instead of adopting traditional input transformation approaches, we propose a mechanism that derives masked images by removing regions from the input images; in this manuscript, the removed regions are spatially uniformly distributed squares. For comparison, several transferable attack methods are adopted as baselines. Extensive empirical evaluations on the standard ImageNet dataset validate the effectiveness of GM-Attack: it crafts more transferable adversarial examples than other input transformation methods, and it improves the attack success rate on Inc-v4 by 6.5% over state-of-the-art methods.
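To make the masking mechanism concrete, the sketch below generates a binary mask whose zero-valued holes are spatially uniformly distributed squares, in the spirit of the transformation the abstract describes. The grid period `unit` and hole ratio `ratio` are hypothetical parameters chosen for illustration, not values taken from the paper.

```python
import numpy as np

def grid_mask(h: int, w: int, unit: int = 32, ratio: float = 0.5) -> np.ndarray:
    """Binary mask (1 = keep, 0 = removed) with uniformly spaced square holes.

    `unit` is the grid period; `ratio` sets the side length of each removed
    square relative to the period. Both are illustrative assumptions.
    """
    mask = np.ones((h, w), dtype=np.float32)
    side = int(unit * ratio)
    # Punch a square hole at the top-left corner of every grid cell.
    for y in range(0, h, unit):
        for x in range(0, w, unit):
            mask[y:y + side, x:x + side] = 0.0
    return mask

# A masked image would then be obtained as image * mask[..., None]
mask = grid_mask(224, 224)
```

With `unit=32` and `ratio=0.5`, each 32x32 cell loses a 16x16 square, so one quarter of the pixels are removed and the surviving regions still tile the whole image uniformly.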
Acknowledgement
This work was supported by the National Key R&D Program of China (No. 2020AAA0107704), the National Natural Science Foundation of China (Nos. 62073263, 62102105, 61976181, 62025602), the Guangdong Basic and Applied Basic Research Foundation (Nos. 2020A1515110997, 2022A1515011501), the Science and Technology Program of Guangzhou (Nos. 202002030263, 202102010419), and the Technological Innovation Team of Shaanxi Province (No. 2020TD-013).
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Hong, J., Tang, K., Gao, C., Wang, S., Guo, S., Zhu, P. (2022). GM-Attack: Improving the Transferability of Adversarial Attacks. In: Memmi, G., Yang, B., Kong, L., Zhang, T., Qiu, M. (eds) Knowledge Science, Engineering and Management. KSEM 2022. Lecture Notes in Computer Science(), vol 13370. Springer, Cham. https://doi.org/10.1007/978-3-031-10989-8_39
DOI: https://doi.org/10.1007/978-3-031-10989-8_39
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-10988-1
Online ISBN: 978-3-031-10989-8
eBook Packages: Computer Science (R0)