
Enhancing Adversarial Robustness via Stochastic Robust Framework

  • Conference paper
Pattern Recognition and Computer Vision (PRCV 2023)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 14428)


Abstract

Although deep neural networks (DNNs) have attained remarkable success in image classification, their vulnerability to adversarial attacks poses significant security risks to their reliability. The design of robust modules in adversarial defense often focuses excessively on individual layers of the model architecture, overlooking important inter-module facilitation. To address this issue, this paper proposes a novel stochastic robust framework that combines a Random Local Winner-Take-All module with a Random Normalization Aggregation module (RLNA). RLNA designs a random competitive selection mechanism to filter out outputs with high classification confidence, and this filtering improves the model’s robustness against adversarial attacks. Moreover, we employ a novel balance strategy in adversarial training (AT) to optimize the trade-off between robust accuracy and natural accuracy. Empirical evidence demonstrates that RLNA achieves state-of-the-art robust accuracy against powerful adversarial attacks on two benchmark datasets, CIFAR-10 and CIFAR-100. Compared to a method that focuses on individual network layers, RLNA achieves a remarkable 24.78% improvement in robust accuracy on CIFAR-10.
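The abstract's "random competitive selection mechanism" builds on local winner-take-all (LWTA) activations, in which activations are split into competing groups and only one unit per group survives. The sketch below is a minimal, hypothetical illustration of a stochastic LWTA layer, not the paper's implementation: the group size, the softmax-based winner sampling, and the function name are all assumptions chosen for clarity (the stochastic sampling variant follows the general idea of locally competitive networks).

```python
import numpy as np

def local_winner_take_all(x, group_size=2, rng=None):
    """Sketch of a stochastic Local Winner-Take-All (LWTA) activation.

    Activations are partitioned into groups of `group_size`; within each
    group a single "winner" is kept and the others are zeroed. Here the
    winner is sampled with probability given by a softmax over the group,
    which makes the competition stochastic rather than a hard argmax.
    """
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x, dtype=float)
    groups = x.reshape(-1, group_size)

    # Softmax over each group yields the winner-sampling probabilities.
    e = np.exp(groups - groups.max(axis=1, keepdims=True))
    p = e / e.sum(axis=1, keepdims=True)

    out = np.zeros_like(groups)
    for i, probs in enumerate(p):
        winner = rng.choice(group_size, p=probs)
        out[i, winner] = groups[i, winner]  # keep only the winner
    return out.reshape(x.shape)
```

Because the winner is sampled anew on every forward pass, an attacker optimizing an adversarial perturbation faces a stochastic network, which is the intuition behind using such modules for robustness.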

This work was supported in part by the National Natural Science Foundation of China (Grant No. 62006097, U1836218), in part by the Natural Science Foundation of Jiangsu Province (Grant No. BK20200593) and in part by the China Postdoctoral Science Foundation (Grant No. 2021M701456).



Author information

Correspondence to Cong Hu.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Sun, Z., Li, Y., Hu, C. (2024). Enhancing Adversarial Robustness via Stochastic Robust Framework. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14428. Springer, Singapore. https://doi.org/10.1007/978-981-99-8462-6_16


  • DOI: https://doi.org/10.1007/978-981-99-8462-6_16

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-99-8461-9

  • Online ISBN: 978-981-99-8462-6

  • eBook Packages: Computer Science (R0)
