
Method of Comparison of Neural Network Resistance to Adversarial Attacks

  • Conference paper
  • In: Internet of Things, Smart Spaces, and Next Generation Networks and Systems (NEW2AN 2020, ruSMART 2020)

Abstract

The vulnerability of neural networks to adversarial attacks has long been known. However, the structure of the attacked network itself is rarely given due attention. This article examines how different parameters of a neural network affect its resistance to adversarial attacks; the main purpose of the research is to determine which parameters increase that resistance. A method for comparing neural networks is proposed. Several neural networks were selected for comparison, and a number of adversarial attacks were conducted against them. As a result, certain conditions were identified under which the attacks took longer to succeed. It was also found that protecting against different attacks required different changes to a network's parameters.
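As an illustration of the kind of comparison the abstract describes, the sketch below times an iterative attack against two small networks that differ in one architectural parameter. This is a minimal, hypothetical sketch, not the authors' code: it assumes PyTorch, uses PGD (projected gradient descent) as a stand-in attack, and the model definitions, step budget, and random stand-in image are illustrative choices.

```python
# A minimal, hypothetical sketch of comparing network resistance to an
# adversarial attack: measure how many steps (and how much time) an
# iterative attack needs to succeed against models that differ in one
# architectural parameter. Assumes PyTorch; PGD is a stand-in attack.
import time
import torch
import torch.nn as nn
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=100):
    """Run L-infinity PGD until the model misclassifies x or the step
    budget runs out. Returns the number of steps used, taken here as a
    crude measure of resistance."""
    x_adv = x.clone().detach()
    for step in range(1, steps + 1):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()                    # gradient-sign step
            x_adv = torch.max(torch.min(x_adv, x + eps), x - eps)  # project into eps-ball
            x_adv = x_adv.clamp(0.0, 1.0)                          # keep a valid image
            if model(x_adv).argmax(dim=1) != y:                    # attack succeeded
                return step
    return steps


def make_cnn(width, num_classes=10):
    """A small CNN whose width is the parameter being varied."""
    return nn.Sequential(
        nn.Conv2d(3, width, 3, padding=1), nn.ReLU(),
        nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(width, num_classes),
    )


if __name__ == "__main__":
    torch.manual_seed(0)
    x = torch.rand(1, 3, 32, 32)        # stand-in for a 32x32 RGB image
    for width in (16, 64):              # the architectural parameter under study
        model = make_cnn(width).eval()
        y = model(x).argmax(dim=1)      # attack the model's own prediction
        t0 = time.perf_counter()
        steps = pgd_attack(model, x, y)
        elapsed = time.perf_counter() - t0
        print(f"width={width}: attack needed {steps} steps, {elapsed:.3f}s")
```

In this framing, a network is considered more resistant when the attack needs more steps or more wall-clock time to flip its prediction; the same harness can be rerun with other attacks or other varied parameters (depth, activation function, and so on).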



Acknowledgements

This paper is supported by the Government of the Russian Federation (grant 08-08).

Author information


Correspondence to Alexey Nemchenko or Sergey Bezzateev.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Nemchenko, A., Bezzateev, S. (2020). Method of Comparison of Neural Network Resistance to Adversarial Attacks. In: Galinina, O., Andreev, S., Balandin, S., Koucheryavy, Y. (eds) Internet of Things, Smart Spaces, and Next Generation Networks and Systems. NEW2AN ruSMART 2020. Lecture Notes in Computer Science, vol 12526. Springer, Cham. https://doi.org/10.1007/978-3-030-65729-1_7


  • DOI: https://doi.org/10.1007/978-3-030-65729-1_7

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-65728-4

  • Online ISBN: 978-3-030-65729-1

  • eBook Packages: Computer Science, Computer Science (R0)
