
Leveraging Information Consistency in Frequency and Spatial Domain for Adversarial Attacks

  • Conference paper
PRICAI 2024: Trends in Artificial Intelligence (PRICAI 2024)

Abstract

Adversarial examples are a key means of exploiting deep neural networks. Using gradient information, such examples can be generated efficiently without altering the victim model. Recent frequency-domain transformations, such as the spectrum simulation attack, have further enhanced the transferability of these adversarial examples. In this work, we investigate the effectiveness of frequency domain-based attacks and show that it aligns with similar findings in the spatial domain. Furthermore, this consistency between the frequency and spatial domains offers insight into how gradient-based adversarial attacks induce perturbations across the two domains, which has yet to be explored. Hence, we propose a simple, effective, and scalable gradient-based adversarial attack algorithm that leverages the information consistency in both the frequency and spatial domains. We evaluate the algorithm's effectiveness against different models. Extensive experiments demonstrate that our algorithm achieves state-of-the-art results compared to other gradient-based algorithms. Our code is available at: https://github.com/LMBTough/FSA.

Z. Jin and J. Zhang—These authors contributed equally to this work.
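To make the setup described in the abstract concrete, the sketch below shows one common way a gradient-based attack can incorporate frequency-domain information: the input's spectrum is randomly perturbed before each gradient computation, and the sign of the averaged gradient drives a spatial-domain update. This is a minimal illustrative sketch in PyTorch, not the FSA algorithm from the paper; the function names (spectrum_augment, freq_aware_attack) and all hyperparameters (rho, eps, steps, n_aug) are assumptions chosen for the example, and an FFT is used here rather than the DCT often used in spectrum-simulation-style attacks.

```python
import torch
import torch.nn.functional as F


def spectrum_augment(x, rho=0.5):
    """Randomly rescale the 2-D spectrum of an image batch (illustrative)."""
    spec = torch.fft.fft2(x)                                    # spatial -> frequency domain
    scale = 1.0 + rho * (2.0 * torch.rand_like(x) - 1.0)        # random per-coefficient scaling
    return torch.fft.ifft2(spec * scale).real                   # back to the spatial domain


def freq_aware_attack(model, x, y, eps=8 / 255, steps=10, n_aug=4):
    """Iterative sign-gradient attack averaged over spectrum-augmented copies.

    Assumes images in [0, 1]; this is a generic sketch, not the paper's method.
    """
    x_adv = x.clone().detach()
    alpha = eps / steps                                          # per-step L_inf budget
    for _ in range(steps):
        x_adv.requires_grad_(True)
        grad = torch.zeros_like(x)
        for _ in range(n_aug):                                   # average gradients over augmentations
            loss = F.cross_entropy(model(spectrum_augment(x_adv)), y)
            grad = grad + torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                  # spatial-domain sign update
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project to the eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                        # keep a valid image range
    return x_adv.detach()
```

The averaging over spectrum-augmented copies is what the frequency-domain view adds over a plain iterative sign-gradient attack; the update itself still happens in pixel space, which is the kind of frequency/spatial interplay the paper studies.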



Author information


Corresponding author

Correspondence to Huaming Chen.



Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Jin, Z., Zhang, J., Zhu, Z., Wang, X., Huang, Y., Chen, H. (2025). Leveraging Information Consistency in Frequency and Spatial Domain for Adversarial Attacks. In: Hadfi, R., Anthony, P., Sharma, A., Ito, T., Bai, Q. (eds) PRICAI 2024: Trends in Artificial Intelligence. PRICAI 2024. Lecture Notes in Computer Science, vol 15281. Springer, Singapore. https://doi.org/10.1007/978-981-96-0116-5_8


  • DOI: https://doi.org/10.1007/978-981-96-0116-5_8


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-96-0115-8

  • Online ISBN: 978-981-96-0116-5

  • eBook Packages: Computer Science, Computer Science (R0)
