Abstract
The vulnerability of neural networks to adversarial attacks has long been known. However, the structure of the neural network itself is rarely given due attention when such attacks are studied. This article examines how different parameters of a neural network affect its resistance to adversarial attacks. The main purpose of this research is to determine which parameters increase that resistance. A method for comparing neural networks is proposed: several neural networks were selected for comparison, and a number of adversarial attacks were conducted on them. As a result, certain conditions were identified under which the attacks required a longer time to succeed. It was also found that protecting against different attacks required different changes to the neural network parameters.
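The attack family the abstract refers to can be illustrated with a minimal sketch of a gradient-sign (FGSM-style) perturbation, as introduced in Goodfellow et al.'s "Explaining and harnessing adversarial examples". The toy logistic unit, the weights, and the epsilon value below are illustrative assumptions, not the models or attack settings used in the paper itself:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(w, x, y):
    # Binary cross-entropy of a single linear unit p = sigmoid(w . x).
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # FGSM: step the input in the sign of the loss gradient w.r.t. x.
    # For this unit the gradient is (p - y) * w.
    p = sigmoid(w @ x)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)      # toy "network" weights (assumed for illustration)
x = rng.normal(size=8)      # clean input
y = 1.0                     # true label

x_adv = fgsm(w, x, y, eps=0.1)
# The perturbed input yields a strictly higher loss than the clean one.
print(bce_loss(w, x, y), bce_loss(w, x_adv, y))
```

Measuring how long (in iterations or wall-clock time) such a perturbation takes to flip a model's prediction is one way resistance across architectures could be compared, in the spirit of the method the abstract describes.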
Acknowledgements
This paper is supported by the Government of the Russian Federation (grant 08-08).
Copyright information
© 2020 Springer Nature Switzerland AG
Cite this paper
Nemchenko, A., Bezzateev, S. (2020). Method of Comparison of Neural Network Resistance to Adversarial Attacks. In: Galinina, O., Andreev, S., Balandin, S., Koucheryavy, Y. (eds) Internet of Things, Smart Spaces, and Next Generation Networks and Systems. NEW2AN/ruSMART 2020. Lecture Notes in Computer Science, vol. 12526. Springer, Cham. https://doi.org/10.1007/978-3-030-65729-1_7
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-65728-4
Online ISBN: 978-3-030-65729-1
eBook Packages: Computer Science (R0)