
DBA: An Efficient Approach to Boost Transfer-Based Adversarial Attack Performance Through Information Deletion

  • Conference paper
  • First Online:
Knowledge Science, Engineering and Management (KSEM 2023)

Part of the book series: Lecture Notes in Computer Science ((LNAI,volume 14118))


Abstract

In practice, deep learning models are easily fooled by input images carrying subtle perturbations; such images are called adversarial examples. Adversarial examples crafted on one model can often fool other models with different architectures trained for the same task, a property referred to as adversarial transferability. Because information about the target model is usually unavailable in practice, transfer-based adversarial attacks have developed rapidly, and a variety of techniques have since been proposed to promote adversarial transferability. Unlike existing input transformation attacks based on spatial transformation, our approach builds on information deletion. By deleting squares of the input images channel by channel, we mitigate the adversarial examples' overfitting to the surrogate model and thereby enhance their transferability. Extensive evaluations on ImageNet demonstrate that our method outperforms existing input transformation attacks on a range of models, covering both unsecured and defended ones.
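The core operation the abstract describes, deleting square regions of the input independently per channel before gradients are computed on the surrogate model, can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the square size, and the NumPy `(C, H, W)` layout are all assumptions made for the example.

```python
import numpy as np

def delete_squares_by_channel(image, square_size=40, rng=None):
    """Zero out one randomly placed square per channel.

    image: float array of shape (C, H, W).
    Returns a copy in which one square_size x square_size patch
    has been deleted (set to 0) independently in each channel.
    """
    rng = np.random.default_rng() if rng is None else rng
    out = image.copy()
    c, h, w = out.shape
    for ch in range(c):
        top = rng.integers(0, h - square_size + 1)
        left = rng.integers(0, w - square_size + 1)
        out[ch, top:top + square_size, left:left + square_size] = 0.0
    return out

# Applying the deletion before each gradient step of an iterative
# attack means the perturbation cannot rely on any single image
# region, which is the intuition behind reduced overfitting to
# the surrogate model.
img = np.ones((3, 299, 299), dtype=np.float32)
masked = delete_squares_by_channel(img, square_size=40,
                                   rng=np.random.default_rng(0))
```

Because each channel loses a different square, the network still sees most of the image content at every step while no fixed region survives across channels and iterations.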



Acknowledgement

This work was supported by the National Key R&D Program of China (No. 2020AAA0107704), the National Natural Science Foundation of China (Nos. 62073263, 61976181, 62102105, 62261136549), the Guangdong Basic and Applied Basic Research Foundation (Nos. 2020A1515110997, 2022A1515011501), and the Technological Innovation Team of Shaanxi Province (No. 2020TD-013).

Author information


Corresponding authors

Correspondence to Peican Zhu or Keke Tang.


Copyright information

© 2023 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Fan, Z., Zhu, P., Gao, C., Hong, J., Tang, K. (2023). DBA: An Efficient Approach to Boost Transfer-Based Adversarial Attack Performance Through Information Deletion. In: Jin, Z., Jiang, Y., Buchmann, R.A., Bi, Y., Ghiran, AM., Ma, W. (eds) Knowledge Science, Engineering and Management. KSEM 2023. Lecture Notes in Computer Science(), vol 14118. Springer, Cham. https://doi.org/10.1007/978-3-031-40286-9_23


  • DOI: https://doi.org/10.1007/978-3-031-40286-9_23

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-40285-2

  • Online ISBN: 978-3-031-40286-9

  • eBook Packages: Computer Science, Computer Science (R0)
