Abstract
In deep learning, repeated convolution and pooling operations learn image features, but the resulting compositions of nonlinear operations make deep learning models difficult for users to understand. The adversarial example attack is a form of attack specific to deep learning: the attacker applies changes to an image that are imperceptible to humans, thereby altering the model's prediction. This paper studies adversarial example attacks together with neural network interpretability. Interpretability research is believed to have considerable potential for resisting adversarial examples: it helps explain how adversarial examples induce a neural network to make a wrong judgment, and it helps identify adversarial examples in the test set. An image recognition model was built on the ImageNet training set; an adversarial-example generation algorithm and a neural network visualization algorithm were then designed to obtain heat maps of what the model learns from the original example and from the adversarial example. The results show that this extends the application of neural network interpretability to defending against adversarial-example attacks.
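The abstract does not specify the paper's generation algorithm. As a hedged illustration only, a minimal sketch of one standard method, the fast gradient sign method (FGSM, Goodfellow et al.), is shown below in PyTorch; the ResNet-50 model, the epsilon value, and the function names are illustrative assumptions rather than the authors' implementation.

```python
import torch.nn.functional as F
from torchvision import models

# Illustrative stand-in for the paper's ImageNet image recognition model.
model = models.resnet50(pretrained=True).eval()

def fgsm_attack(image, label, epsilon=0.01):
    """One-step FGSM: perturb the image along the sign of the loss
    gradient, bounded by epsilon so the change stays imperceptible.

    image: tensor of shape (1, 3, H, W) with pixel values in [0, 1]
    label: tensor of shape (1,) holding the true class index
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Move each pixel by +/- epsilon in the direction that raises the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```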
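For the heat maps, one widely used visualization technique is Grad-CAM (Selvaraju et al.), which weights a convolutional layer's feature maps by the spatially averaged gradient of the class score. The following is a minimal sketch under the same assumptions as above (a PyTorch model, version >= 1.8 for the backward hook), not the paper's exact visualization algorithm.

```python
import torch.nn.functional as F

def grad_cam(model, target_layer, image, class_idx):
    """Grad-CAM heat map for one image and one class."""
    store = {}

    def fwd_hook(module, inputs, output):
        store["act"] = output            # feature maps of target_layer

    def bwd_hook(module, grad_in, grad_out):
        store["grad"] = grad_out[0]      # gradient of the score w.r.t. them

    h1 = target_layer.register_forward_hook(fwd_hook)
    h2 = target_layer.register_full_backward_hook(bwd_hook)
    score = model(image)[0, class_idx]   # class score for the chosen label
    model.zero_grad()
    score.backward()
    h1.remove()
    h2.remove()

    # Channel weights = gradient averaged over spatial positions.
    weights = store["grad"].mean(dim=(2, 3), keepdim=True)
    cam = F.relu((weights * store["act"]).sum(dim=1, keepdim=True))
    # Upsample to the input resolution and scale to [0, 1] for display.
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear",
                        align_corners=False)
    return ((cam - cam.min()) / (cam.max() - cam.min() + 1e-8)).detach()

# Hypothetical usage: compare heat maps for a clean image and its
# adversarial counterpart (model.layer4[-1] is the last convolutional
# block of the ResNet-50 above).
# cam_clean = grad_cam(model, model.layer4[-1], image, label.item())
# cam_adv   = grad_cam(model, model.layer4[-1], adv_image, adv_label)
```

Comparing the two heat maps reveals whether the perturbation shifted the image regions the model attends to, which is the kind of original-versus-adversarial comparison the abstract describes.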
This work is supported by the National Defense Science and Technology Innovation Special Zone Project (No. 18-163-11-ZT-002-045-04).
Cite this paper
Yu, C., Wang, X., Li, Y. (2020). Convolutional Neural Network Visualization in Adversarial Example Attack. In: Zeng, J., Jing, W., Song, X., Lu, Z. (eds) Data Science. ICPCSEE 2020. Communications in Computer and Information Science, vol 1257. Springer, Singapore. https://doi.org/10.1007/978-981-15-7981-3_16