Decision-Based Black-Box Attack Specific to Large-Size Images

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13842)

Abstract

Decision-based black-box attacks can craft adversarial examples by querying the target model for hard-label predictions only. However, most existing methods are inefficient when attacking large-size images because optimization in the high-dimensional pixel space is difficult, so they consume a large number of queries or produce relatively large perturbations. In this paper, we propose a novel decision-based black-box attack, named Specific to Large-size Image Attack (SLIA), to generate adversarial examples. We perturb only the low-frequency component of the discrete wavelet transform (DWT) of an image, which reduces the dimension of the gradient to be estimated. Moreover, when initializing the adversarial example for an untargeted attack, we keep the high-frequency components of the original image unchanged and update only the low-frequency component with randomly sampled uniform noise, thereby reducing the distortion at the start of the attack. Extensive experimental results demonstrate that the proposed SLIA outperforms state-of-the-art algorithms when attacking a variety of threat models. The source code is publicly available at https://github.com/GZHU-DVL/SLIA.
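
As a rough illustration of the two ideas above, the sketch below uses NumPy and PyWavelets to (i) restrict perturbations to the low-frequency (LL) subband of a DWT and (ii) initialize an untargeted attack by replacing only the LL subband with uniform noise. This is not the authors' implementation: the wavelet family ("haar"), the decomposition level, the [0, 1] pixel range, and the helper names lowfreq_init / lowfreq_perturb are illustrative assumptions; see the GitHub repository above for the actual SLIA code.

    import numpy as np
    import pywt  # PyWavelets

    def lowfreq_init(image, wavelet="haar", level=1):
        # Untargeted-attack initialization: keep the high-frequency DWT
        # subbands and replace only the low-frequency (LL) subband with
        # uniform noise, then reconstruct the image.
        ll, *highs = pywt.wavedec2(image, wavelet, level=level, axes=(0, 1))
        noisy_ll = np.random.uniform(0.0, 1.0, size=ll.shape)
        recon = pywt.waverec2([noisy_ll, *highs], wavelet, axes=(0, 1))
        return np.clip(recon, 0.0, 1.0)  # assumed [0, 1] pixel range

    def lowfreq_perturb(image, delta_ll, wavelet="haar", level=1):
        # Apply a perturbation defined only in the LL subband, so any
        # gradient estimation operates in a space roughly 4**level times
        # smaller than the raw pixel space.
        ll, *highs = pywt.wavedec2(image, wavelet, level=level, axes=(0, 1))
        recon = pywt.waverec2([ll + delta_ll, *highs], wavelet, axes=(0, 1))
        return np.clip(recon, 0.0, 1.0)

    # For a 224x224x3 image, the level-1 LL subband is 112x112x3.
    x = np.random.rand(224, 224, 3)
    x_init = lowfreq_init(x)
    x_adv = lowfreq_perturb(x, 0.01 * np.random.randn(112, 112, 3))

Working in the LL subband shrinks the space in which a gradient must be estimated by a factor of about four per decomposition level, while the noisy initialization leaves the image's fine details (the high-frequency subbands) untouched.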


Notes

  1. https://keras.io/applications/#resnet50. https://keras.io/applications/#vgg16. https://keras.io/applications/#densenet201.


Acknowledgement

This work was supported in part by the NSFC (61872099, 62272116) and in part by the Scientific Research Project of Guangzhou University (YJ2021004). The authors acknowledge the Network Center of Guangzhou University for providing HPC computing resources.

Author information

Correspondence to Yuan-Gen Wang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Wang, D., Wang, YG. (2023). Decision-Based Black-Box Attack Specific to Large-Size Images. In: Wang, L., Gall, J., Chin, TJ., Sato, I., Chellappa, R. (eds) Computer Vision – ACCV 2022. ACCV 2022. Lecture Notes in Computer Science, vol 13842. Springer, Cham. https://doi.org/10.1007/978-3-031-26284-5_22

  • DOI: https://doi.org/10.1007/978-3-031-26284-5_22

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-26283-8

  • Online ISBN: 978-3-031-26284-5

  • eBook Packages: Computer Science, Computer Science (R0)
