Abstract
Adversarial examples are a key means of exploiting deep neural networks. Using gradient information, such examples can be generated efficiently without modifying the victim model. Recent frequency-domain transformations, such as the spectrum simulation attack, have further improved the transferability of adversarial examples. In this work, we investigate the effectiveness of frequency-domain attacks and show that it aligns with similar findings in the spatial domain. This consistency between the frequency and spatial domains provides insight into how gradient-based adversarial attacks induce perturbations across different domains, an aspect that has yet to be explored. We therefore propose a simple, effective, and scalable gradient-based adversarial attack algorithm that leverages the information consistency between the frequency and spatial domains. We evaluate its effectiveness against a range of models, and extensive experiments demonstrate that our algorithm achieves state-of-the-art results compared with other gradient-based algorithms. Our code is available at: https://github.com/LMBTough/FSA.
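To make the attack setting concrete, the following is a minimal sketch, not the paper's FSA implementation, of a one-step gradient-based attack whose gradient is computed on a frequency-transformed copy of the input. The `model`, `eps`, and `rho` values are illustrative assumptions, and the FFT-based spectral rescaling is a simplified stand-in for spectrum-simulation-style augmentation.

```python
# Minimal sketch, NOT the paper's FSA algorithm: a one-step gradient-based
# (FGSM-style) attack with an optional frequency-domain transform of the input.
# `model`, `eps`, and `rho` are illustrative placeholders.
import torch
import torch.nn.functional as F


def spectral_transform(x, rho=0.5):
    """Randomly rescale the input's spectrum and return to the spatial domain."""
    freq = torch.fft.fft2(x)                           # spatial -> frequency
    scale = 1.0 + rho * (2.0 * torch.rand_like(x) - 1.0)
    return torch.fft.ifft2(freq * scale).real          # frequency -> spatial


def gradient_attack(model, x, y, eps=8 / 255, use_freq=True):
    """One-step untargeted attack along the sign of the loss gradient."""
    x_in = spectral_transform(x) if use_freq else x
    x_in = x_in.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_in), y)
    grad = torch.autograd.grad(loss, x_in)[0]
    x_adv = x + eps * grad.sign()                      # perturb the original input
    return x_adv.clamp(0.0, 1.0).detach()
```

A typical call would be `x_adv = gradient_attack(model, images, labels)` on image batches normalized to [0, 1].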
Z. Jin and J. Zhang—These authors contributed equally to this work.
Copyright information
© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Jin, Z., Zhang, J., Zhu, Z., Wang, X., Huang, Y., Chen, H. (2025). Leveraging Information Consistency in Frequency and Spatial Domain for Adversarial Attacks. In: Hadfi, R., Anthony, P., Sharma, A., Ito, T., Bai, Q. (eds.) PRICAI 2024: Trends in Artificial Intelligence. PRICAI 2024. Lecture Notes in Computer Science, vol. 15281. Springer, Singapore. https://doi.org/10.1007/978-981-96-0116-5_8
Publisher Name: Springer, Singapore
Print ISBN: 978-981-96-0115-8
Online ISBN: 978-981-96-0116-5