Abstract
Deep neural networks are now widely used for tasks in computer vision, machine translation, speech recognition, and beyond. Unfortunately, their opaque black-box structure lacks robustness. In previous work, adversarial examples were proposed to describe the phenomenon that neural networks are vulnerable to attack. Interestingly, beyond the widely accepted view of adversarial examples as “noise” or “bugs”, recent research has shown that they arise from “non-robust features”, because a classifier trained on adversarial examples retains the ability to generalize to the original test set. In this paper, we link large margin methods to the ability to defend against adversarial attacks, and further relate this ability to non-robust features. We compare the defense capabilities of models trained with a large margin loss function and with the standard cross-entropy loss against the Fast Gradient Sign Method (FGSM) attack and the Projected Gradient Descent (PGD) attack, and evaluate the non-robust features extracted by the trained models. Our results show that the model trained with the large margin loss is more resistant to adversarial perturbations and captures fewer non-robust features. This further suggests a direction for training robust networks: balancing model test accuracy against defense capability. Building on the margin method, we incorporate boundary thickness to strengthen the characterization of the decision boundary. Through feature-space visualization, the effect of these boundary methods on model robustness is illustrated intuitively.
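For concreteness, the following is a minimal sketch of the FGSM and PGD attacks referenced above and of a robust-accuracy evaluation loop of the kind used to compare the two trained models. It is hypothetical PyTorch code, not the authors' implementation; the function names and the example attack budgets are illustrative assumptions.

```python
# Hedged sketch: FGSM and PGD attacks, and robust accuracy of a trained classifier.
# Assumes a standard PyTorch image classifier with inputs in [0, 1]; not the authors' code.
import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps):
    """Fast Gradient Sign Method: a single signed-gradient step of size eps."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    grad = torch.autograd.grad(loss, x_adv)[0]
    return (x_adv + eps * grad.sign()).clamp(0, 1).detach()


def pgd_attack(model, x, y, eps, alpha, steps):
    """Projected Gradient Descent: iterated signed-gradient steps projected into the L-inf eps-ball."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = x.detach() + (x_adv - x).clamp(-eps, eps)  # project back into the eps-ball
        x_adv = x_adv.clamp(0, 1)                          # keep valid pixel range
    return x_adv.detach()


def robust_accuracy(model, loader, attack, **attack_kwargs):
    """Accuracy on adversarially perturbed inputs; used to compare loss functions."""
    model.eval()
    correct, total = 0, 0
    for x, y in loader:
        x_adv = attack(model, x, y, **attack_kwargs)
        correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

A typical comparison would call, for example, `robust_accuracy(model_ce, test_loader, pgd_attack, eps=8/255, alpha=2/255, steps=20)` for the cross-entropy model and the same call for the large-margin model; the budget 8/255 is a common CIFAR-10 setting and not necessarily the one used in the paper.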
This work was supported by the National Natural Science Foundation of China under Grant 61971128.
Notes
- 1.
The definition is not formal; it is limited to the research described in this paper.
References
Blei, D.M., Ng, A.Y., Jordan, M.I.: Latent Dirichlet allocation. J. Mach. Learn. Res. 3, 993–1022 (2003)
Elsayed, G., Krishnan, D., Mobahi, H., Regan, K., Bengio, S.: Large margin deep networks for classification. In: Advances in Neural Information Processing Systems, pp. 842–852 (2018)
Glorot, X., Bengio, Y.: Understanding the difficulty of training deep feedforward neural networks. In: Proceedings of the 13th International Conference on Artificial Intelligence and Statistics, pp. 249–256 (2010)
Goodfellow, I.J., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572 (2014)
Hinton, G.E., Srivastava, N., Krizhevsky, A., Sutskever, I., Salakhutdinov, R.R.: Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580 (2012)
Ilyas, A., Santurkar, S., Tsipras, D., Engstrom, L., Tran, B., Madry, A.: Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175 (2019)
Krizhevsky, A., Hinton, G., et al.: Learning multiple layers of features from tiny images. Technical report, University of Toronto (2009)
LeCun, Y., Bottou, L., Bengio, Y., Haffner, P., et al.: Gradient-based learning applied to document recognition. Proc. IEEE 86(11), 2278–2324 (1998)
Liao, F., Liang, M., Dong, Y., Pang, T., Hu, X., Zhu, J.: Defense against adversarial attacks using high-level representation guided denoiser. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1778–1787 (2018)
Lin, T.Y., Goyal, P., Girshick, R., He, K., Dollár, P.: Focal loss for dense object detection. In: Proceedings of the IEEE International Conference on Computer Vision, pp. 2980–2988 (2017)
Maaten, L., Hinton, G.: Visualizing data using t-SNE. J. Mach. Learn. Res. 9, 2579–2605 (2008)
Maaten, L.: Barnes-Hut-SNE. arXiv preprint arXiv:1301.3342 (2013)
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., Vladu, A.: Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083 (2017)
Murugan, P., Durairaj, S.: Regularization and optimization strategies in deep convolutional neural network. CoRR abs/1712.04711 (2017). http://arxiv.org/abs/1712.04711
Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks. In: 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597. IEEE (2016)
Sainath, T.N., Mohamed, A., Kingsbury, B., Ramabhadran, B.: Deep convolutional neural networks for LVCSR. In: 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, pp. 8614–8618. IEEE (2013)
Shlens, J.: A tutorial on principal component analysis. arXiv preprint arXiv:1404.1100 (2014)
Su, J., Vargas, D.V., Sakurai, K.: One pixel attack for fooling deep neural networks. IEEE Trans. Evol. Comput. 23, 828–841 (2019)
Szegedy, C., et al.: Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199 (2013)
Xu, W., Evans, D., Qi, Y.: Feature squeezing: detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155 (2017)
Yang, Y., et al.: Boundary thickness and robustness in learning models. In: Advances in Neural Information Processing Systems, vol. 33, pp. 6223–6234. Curran Associates, Inc. (2020)
Yousefzadeh, R., O’Leary, D.P.: Investigating decision boundaries of trained neural networks. CoRR abs/1908.02802 (2019). http://arxiv.org/abs/1908.02802
Zhang, H., Yu, Y., Jiao, J., Xing, E., El Ghaoui, L., Jordan, M.: Theoretically principled trade-off between robustness and accuracy. In: International Conference on Machine Learning, pp. 7472–7482. PMLR (2019)
Zhang, H., Cisse, M., Dauphin, Y.N., Lopez-Paz, D.: mixup: beyond empirical risk minimization. arXiv preprint arXiv:1710.09412 (2017)
Zhang, X., Zhao, R., Qiao, Y., Wang, X., Li, H.: AdaCos: adaptively scaling cosine logits for effectively learning deep face representations. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 10823–10832 (2019)