Abstract
Adversarial training has emerged as a prominent approach for training robust classifiers. However, recent research indicates that adversarial training inevitably degrades a classifier's accuracy on clean (natural) data. Robustness is at odds with clean accuracy due to the inherent tension between the objectives of adversarial robustness and standard generalization. Training a single classifier that achieves both high adversarial robustness and high clean accuracy appears to be an insurmountable challenge. This paper proposes a straightforward strategy to bridge the gap between robustness and clean accuracy. Inspired by the idea underlying dynamic neural networks, i.e., adaptive inference, we propose a robust cascade framework that integrates a standard classifier and a robust classifier. The cascade neural network dynamically routes clean and adversarial samples to distinct classifiers based on the confidence score of each input sample. Since deep neural networks suffer from severe overconfidence on adversarial samples, we propose an effective confidence calibration algorithm for the standard classifier, enabling accurate confidence scores on adversarial samples. Experiments demonstrate that the proposed cascade neural network improves clean accuracy by 10.1%, 14.67%, and 9.11% over advanced adversarial training (HAT) on CIFAR-10, CIFAR-100, and Tiny ImageNet, respectively, while maintaining comparable robust accuracy.
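To make the adaptive-inference idea concrete, the following is a minimal sketch of confidence-based routing, assuming a maximum-softmax confidence score with temperature scaling (in the spirit of Guo et al., 2017) as the calibration step. The names `std_model`, `robust_model`, `temperature`, and `threshold` are illustrative assumptions; the paper's actual calibration algorithm and routing rule are not reproduced here.

```python
import torch
import torch.nn.functional as F

def cascade_predict(x, std_model, robust_model, temperature=2.0, threshold=0.9):
    """Route each input to the standard or robust classifier by confidence.

    A sketch, not the paper's exact method: confidence is the maximum
    temperature-scaled softmax probability of the standard classifier.
    """
    std_model.eval()
    robust_model.eval()
    with torch.no_grad():
        std_logits = std_model(x)
        # Calibrated confidence: temperature scaling flattens overconfident
        # softmax outputs so adversarial inputs tend to score lower.
        conf = F.softmax(std_logits / temperature, dim=1).amax(dim=1)
        # High confidence -> treat as clean, use the standard classifier;
        # low confidence -> treat as possibly adversarial, defer to the
        # robust classifier. (A real implementation would invoke
        # robust_model only on the low-confidence subset.)
        use_std = (conf >= threshold).unsqueeze(1)
        return torch.where(use_std, std_logits, robust_model(x))
```

In practice the threshold would be tuned on held-out data so that most clean samples are handled by the standard classifier (preserving clean accuracy) while low-confidence inputs fall through to the robust classifier (preserving robust accuracy).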
References
Brock, A., De, S., Smith, S.L.: Characterizing signal propagation to close the performance gap in unnormalized resnets. In: ICLR (2021)
Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: S&P, pp. 39–57 (2017)
Gao, R., et al.: Fast and reliable evaluation of adversarial robustness with minimum-margin attack. In: ICML, pp. 7144–7163 (2022)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: ICLR (2015)
Guo, C., Pleiss, G., Sun, Y., Weinberger, K.Q.: On calibration of modern neural networks. In: ICML, pp. 1321–1330 (2017)
Han, Y., Huang, G., Song, S., Yang, L., Wang, H., Wang, Y.: Dynamic neural networks: a survey. TPAMI 44(11), 7436–7456 (2021)
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. In: ICLR (2018)
Rade, R., Moosavi-Dezfooli, S.M.: Reducing excessive margin to achieve a better accuracy vs. robustness trade-off. In: ICLR (2022)
Rice, L., Wong, E., Kolter, Z.: Overfitting in adversarially robust deep learning. In: ICML, pp. 8093–8104 (2020)
Sehwag, V., et al.: Robust learning meets generative models: can proxy distributions improve adversarial robustness? In: ICLR (2022)
Shaeiri, A., Nobahari, R., Rohban, M.H.: Towards deep learning models resistant to large perturbations. arXiv:2003.13370 (2020)
Szegedy, C., et al.: Intriguing properties of neural networks. In: ICLR (2014)
Tsipras, D., Santurkar, S., Engstrom, L., Turner, A., Madry, A.: Robustness may be at odds with accuracy. In: ICLR (2019)
Wang, Y., Zou, D., Yi, J., Bailey, J., Ma, X., Gu, Q.: Improving adversarial robustness requires revisiting misclassified examples. In: ICLR (2020)
Zhang, H., Yu, Y., Jiao, J., Xing, E.P., Ghaoui, L.E., Jordan, M.I.: Theoretically principled trade-off between robustness and accuracy. In: ICML, pp. 7472–7482 (2019)
Zheng, J., Chan, P.P., Chi, H., He, Z.: A concealed poisoning attack to reduce deep neural networks’ robustness against adversarial samples. Inf. Sci. 615, 758–773 (2022)
Acknowledgements
This work is supported by Guangdong Basic and Applied Basic Research Foundation (Nos. 2022A1515140116, 2022A1515010101), National Natural Science Foundation of China (Nos. 61972091, 61802061).
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
About this paper
Cite this paper
Chen, Z., He, Z., Zhou, Y., Chan, P.P.K., Zhang, F., Situ, H. (2024). CAS-NN: A Robust Cascade Neural Network Without Compromising Clean Accuracy. In: Luo, B., Cheng, L., Wu, ZG., Li, H., Li, C. (eds) Neural Information Processing. ICONIP 2023. Lecture Notes in Computer Science, vol 14448. Springer, Singapore. https://doi.org/10.1007/978-981-99-8082-6_38
Publisher Name: Springer, Singapore
Print ISBN: 978-981-99-8081-9
Online ISBN: 978-981-99-8082-6
eBook Packages: Computer Science, Computer Science (R0)