
Improving Query Efficiency of Black-Box Adversarial Attack

  • Conference paper

Computer Vision – ECCV 2020 (ECCV 2020)

Part of the book series: Lecture Notes in Computer Science (LNIP, volume 12370)


Abstract

Deep neural networks (DNNs) have demonstrated excellent performance on various tasks; however, they are vulnerable to adversarial examples, which can be easily generated when the target model is accessible to an attacker (the white-box setting). Since many machine learning models are deployed as online services that expose only query outputs of otherwise inaccessible models (e.g., the Google Cloud Vision API), black-box adversarial attacks (where the target model is inaccessible) are of more critical security concern in practice than white-box ones. However, existing query-based black-box adversarial attacks often require excessive model queries to maintain a high attack success rate. To improve query efficiency, we explore the distribution of adversarial examples around benign inputs with the help of image structure information characterized by a Neural Process, and propose a Neural Process based black-box adversarial attack (NP-Attack) in this paper. Extensive experiments show that NP-Attack greatly decreases query counts under the black-box setting. Code is available at https://github.com/Sandy-Zeng/NPAttack.
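As a rough illustration of the general idea behind such query-efficient attacks (searching for adversarial examples in the low-dimensional latent space of a pretrained image model rather than directly in pixel space, estimating gradients only from query feedback), here is a minimal evolution-strategy sketch in Python. This is not the authors' NP-Attack implementation (see the linked repository for that); `query_model`, `decode`, `z0`, and all hyperparameters are hypothetical stand-ins assumed for illustration.

```python
import numpy as np

def nes_latent_attack(query_model, decode, z0, y_true,
                      sigma=0.1, lr=0.02, pop_size=50, max_queries=10000):
    """Illustrative latent-space black-box attack (NES-style sketch).

    query_model(x) -> per-class probabilities (the only model access).
    decode(z)      -> image reconstructed from latent code z, e.g. by a
                      pretrained generative decoder (hypothetical here).
    """
    z, queries = z0.copy(), 0
    while queries < max_queries:
        # Antithetic sampling: pair each noise vector with its negation.
        noise = np.random.randn(pop_size // 2, *z.shape)
        noise = np.concatenate([noise, -noise])
        losses = []
        for u in noise:
            x_cand = decode(z + sigma * u)
            probs = np.asarray(query_model(x_cand))
            queries += 1
            if np.argmax(probs) != y_true:       # untargeted success
                return x_cand, queries
            # Margin loss: push the true-class score below the runner-up.
            runner_up = np.max(np.delete(probs, y_true))
            losses.append(probs[y_true] - runner_up)
        # Normalize losses for stability, then form the NES gradient estimate.
        losses = np.asarray(losses)
        losses = (losses - losses.mean()) / (losses.std() + 1e-8)
        grad = (losses[:, None] * noise.reshape(len(noise), -1)).sum(0)
        z = z - lr * grad.reshape(z.shape) / (len(noise) * sigma)
    return decode(z), queries
```

Intuitively, because the latent code is much lower-dimensional than the image, each gradient estimate needs far fewer queried samples than pixel-space zeroth-order methods, which is the kind of query saving the paper targets.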

Y. Bai and Y. Zeng—Equal contribution.


Notes

  1. We still use NP in the following without ambiguity.

  2. The implementation details of ANP are shown in Appendix A.


Acknowledgement

This work is supported in part by the National Key Research and Development Program of China under Grant 2018YFB1800204, the National Natural Science Foundation of China under Grant 61771273, the R&D Program of Shenzhen under Grant JCYJ20180508152204044, and the project ‘PCL Future Greater-Bay Area Network Facilities for Large-scale Experiments and Applications (LZC0019)’. We also thank vivo and Rejoice Sport Tech Co., Ltd. for providing GPUs.

Author information


Corresponding authors

Correspondence to Yong Jiang or Yisen Wang.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 409 KB)


Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Bai, Y., Zeng, Y., Jiang, Y., Wang, Y., Xia, S.T., Guo, W. (2020). Improving Query Efficiency of Black-Box Adversarial Attack. In: Vedaldi, A., Bischof, H., Brox, T., Frahm, J.M. (eds) Computer Vision – ECCV 2020. ECCV 2020. Lecture Notes in Computer Science, vol 12370. Springer, Cham. https://doi.org/10.1007/978-3-030-58595-2_7

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-58595-2_7

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-58594-5

  • Online ISBN: 978-3-030-58595-2

  • eBook Packages: Computer Science, Computer Science (R0)
