Abstract
Although deep neural networks (DNNs) have attained remarkable success in image classification, their vulnerability to adversarial attacks poses significant security risks to their reliability. Robust-module design in adversarial defense often focuses excessively on individual layers of the model architecture, overlooking important inter-module facilitation. To address this issue, this paper proposes a novel stochastic robust framework that combines a Random Local Winner-Take-All module with a Random Normalization Aggregation module (RLNA). RLNA introduces a random competitive selection mechanism that filters out outputs with high confidence in the classification; this filtering improves the model's robustness against adversarial attacks. Moreover, we employ a novel balance strategy in adversarial training (AT) to optimize the trade-off between robust accuracy and natural accuracy. Empirical evidence demonstrates that RLNA achieves state-of-the-art robust accuracy against powerful adversarial attacks on two benchmark datasets, CIFAR-10 and CIFAR-100. Compared with a method that focuses on individual network layers, RLNA achieves a remarkable 24.78% improvement in robust accuracy on CIFAR-10.
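To make the competitive-selection idea concrete, the following is a minimal sketch of a local winner-take-all (LWTA) activation of the kind the abstract alludes to. This is an illustrative reconstruction, not the paper's implementation: the function name `lwta`, the group size, and the stochastic-winner variant (sampling the winner from a softmax over each group, as in stochastic LWTA networks) are all assumptions for exposition.

```python
import numpy as np

def lwta(x, group_size=2, rng=None):
    """Local winner-take-all activation (illustrative sketch).

    Units are partitioned into groups of `group_size`; within each group
    only the winner's activation is kept and the rest are zeroed. When a
    random generator `rng` is given, the winner is sampled from a softmax
    over the group (a stochastic variant); otherwise the argmax wins.
    """
    x = np.asarray(x, dtype=float)
    groups = x.reshape(-1, group_size)
    out = np.zeros_like(groups)
    if rng is None:
        # deterministic: the largest activation in each group wins
        winners = groups.argmax(axis=1)
    else:
        # stochastic: sample the winner with softmax probability per group
        logits = groups - groups.max(axis=1, keepdims=True)
        probs = np.exp(logits)
        probs /= probs.sum(axis=1, keepdims=True)
        winners = np.array([rng.choice(group_size, p=p) for p in probs])
    rows = np.arange(groups.shape[0])
    out[rows, winners] = groups[rows, winners]
    return out.reshape(x.shape)

# Deterministic example: groups [1, 3] and [2, 0.5] keep only their winners.
print(lwta([1.0, 3.0, 2.0, 0.5], group_size=2))  # [0. 3. 2. 0.]
```

The zeroed losers are what make the layer's response sparse and input-dependent, which is the property the framework exploits for robustness.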
This work was supported in part by the National Natural Science Foundation of China (Grant No. 62006097, U1836218), in part by the Natural Science Foundation of Jiangsu Province (Grant No. BK20200593) and in part by the China Postdoctoral Science Foundation (Grant No. 2021M701456).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Sun, Z., Li, Y., Hu, C. (2024). Enhancing Adversarial Robustness via Stochastic Robust Framework. In: Liu, Q., et al. Pattern Recognition and Computer Vision. PRCV 2023. Lecture Notes in Computer Science, vol 14428. Springer, Singapore. https://doi.org/10.1007/978-981-99-8462-6_16
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8461-9
Online ISBN: 978-981-99-8462-6