
Fast-colorfool: faster and more transferable semantic adversarial attack with complementary colors and cumulative perturbation

Regular Paper, published in Multimedia Systems (2025)

Abstract

Deep neural networks are known to be vulnerable to adversarial attacks. Research indicates that unrestricted attack methods tend to produce more natural-looking adversarial examples than restricted ones. However, existing unrestricted query-based black-box attack methods usually require a large number of queries yet exhibit a low attack success rate and poor transferability. To address these issues, we propose a fast yet effective unrestricted query-based black-box attack method, named Fast-ColorFool, which consists of a complementary color attack strategy and a cumulative perturbation strategy. Specifically, we first propose the complementary color attack strategy, which performs the first attack step on the hue channel of the HSV color space, and we provide a theoretical proof of its effectiveness. We then design the cumulative perturbation strategy, which operates on the a and b channels of the Lab color space, to generate adversarial examples iteratively. Notably, both strategies can also be integrated into many other unrestricted methods. Extensive experiments demonstrate our method's superiority over state-of-the-art approaches in terms of attack success rate, transferability, and number of queries. For example, on the ImageNet dataset, the proposed method achieves an average query attack success rate of 95.2% and an average transfer attack success rate of 49.5% across four classifiers (AlexNet, ResNet18, ResNet50, and ViT-Base/16), while using an average of only 176.8 queries.
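
To make the two strategies concrete, below is a minimal sketch, in Python with OpenCV, of how a complementary-color first step on the HSV hue channel and an iterative cumulative perturbation on the a and b channels of Lab color space could be chained. This is an illustration under assumptions, not the paper's implementation: the `classifier` callable (returning a top-1 label per image), the uniform step distribution, the perturbation `scale`, and the `max_queries` budget are all hypothetical placeholders.

```python
# Illustrative sketch only -- NOT the authors' implementation. The
# `classifier` callable, the uniform step distribution, `scale`, and
# `max_queries` are assumptions made for the example.
import numpy as np
import cv2


def complementary_color_attack(img_rgb: np.ndarray) -> np.ndarray:
    """First attack step: replace every pixel's hue with its complementary
    color, i.e. a 180-degree rotation on the HSV hue wheel."""
    hsv = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2HSV)
    # OpenCV stores hue in [0, 180) for uint8 images, so a 180-degree
    # rotation is a shift of 90 modulo 180.
    hsv[..., 0] = (hsv[..., 0].astype(np.int32) + 90) % 180
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2RGB)


def cumulative_perturbation_attack(img_rgb, classifier, true_label,
                                   max_queries=200, scale=4.0):
    """Second stage: iteratively accumulate random perturbations on the a and
    b (chrominance) channels of Lab color space, querying the black-box
    classifier once per step, until it is fooled or the budget runs out."""
    lab = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2LAB).astype(np.float32)
    delta = np.zeros_like(lab[..., 1:])          # cumulative a/b perturbation
    for _ in range(max_queries):
        delta += np.random.uniform(-scale, scale, size=delta.shape)
        candidate = lab.copy()
        candidate[..., 1:] = np.clip(lab[..., 1:] + delta, 0, 255)
        adv = cv2.cvtColor(candidate.astype(np.uint8), cv2.COLOR_LAB2RGB)
        if classifier(adv) != true_label:        # one black-box query
            return adv                           # misclassified: success
    return None                                  # query budget exhausted


def fast_colorfool_sketch(img_rgb, classifier, true_label):
    """Chain the two stages: try the complementary-color flip first, then
    refine it with cumulative chrominance perturbations if needed."""
    adv = complementary_color_attack(img_rgb)
    if classifier(adv) != true_label:            # first-step attack succeeded
        return adv
    return cumulative_perturbation_attack(adv, classifier, true_label)
```

A faithful implementation would add the paper's theoretical complementary-color construction and naturalness considerations; the sketch only captures the overall control flow described in the abstract: a cheap semantic first step, followed by query-frugal cumulative refinement.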



Data Availability

The data that support the findings of this study are available from the corresponding author upon reasonable request.


Acknowledgements

We sincerely thank the editor and reviewers for their careful review of this manuscript. This work was partially supported by the Central Government Guided Local Funds for Science and Technology Development (No. 216Z0301G), the National Natural Science Foundation of China (No. 62476235), the Hebei Natural Science Foundation (No. F2023203012), the Science Research Project of Hebei Education Department (No. QN2024010), and the Innovation Capability Improvement Plan Project of Hebei Province (No. 22567626H).

Author information


Contributions

SHZ contributed to conceptualization and methodology. XQH contributed to conceptualization, methodology, and original manuscript preparation. ZGC, SZ, and QT contributed to the review of the manuscript. All authors reviewed and approved the final manuscript.

Corresponding author

Correspondence to Xueqiang Han.

Ethics declarations

Conflict of interest

No potential conflict of interest was reported by the authors.

Additional information

Communicated by Qianqian Xu.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, S., Han, X., Cui, Z. et al. Fast-colorfool: faster and more transferable semantic adversarial attack with complementary colors and cumulative perturbation. Multimedia Systems 31, 117 (2025). https://doi.org/10.1007/s00530-025-01721-9
