
Efficient Defense Against Adversarial Attacks and Security Evaluation of Deep Learning System

  • Conference paper
  • First Online:

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 12487)

Included in the following conference series: Machine Learning for Cyber Security (ML4CS 2020)


Abstract

Deep neural networks (DNNs) have achieved strong performance on classical artificial intelligence problems, including visual recognition and natural language processing. Unfortunately, recent studies show that machine learning models are vulnerable to adversarial attacks: purposeful distortions of the inputs that cause incorrect outputs. For images, such subtle distortions are usually imperceptible to humans, yet they successfully fool machine learning models. In this paper, we propose a strategy, FeaturePro, for defending machine learning models against adversarial examples and for evaluating the security of deep learning systems. We tackle this challenge by reducing the feature space visible to the adversary. By performing white-box, black-box, targeted, and non-targeted attacks, we can evaluate the security of deep learning algorithms, which is an important indicator for assessing artificial intelligence systems. We also analyze generalization and robustness when FeaturePro is combined with adversarial training. FeaturePro defends efficiently against adversarial attacks with high accuracy and a low false positive rate.
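
The abstract only sketches the approach, so a minimal illustration of the two ingredients it refers to may help: a white-box fast gradient sign method (FGSM) attack that crafts the purposeful input distortions, and a generic input-quantization step that reduces the feature space visible to the adversary. The sketch below assumes PyTorch; the function names and the toy classifier are hypothetical placeholders, and the quantization defense stands in for, but is not, the paper's FeaturePro algorithm.

    # Hypothetical sketch: FGSM attack plus an input-quantization defense.
    # This is NOT the paper's FeaturePro method, only an illustration of the
    # general idea of shrinking the adversary's visible feature space.
    import torch
    import torch.nn as nn

    def fgsm_attack(model, x, y, eps):
        """White-box FGSM: perturb x one step along the sign of the loss gradient."""
        x = x.clone().detach().requires_grad_(True)
        loss = nn.functional.cross_entropy(model(x), y)
        loss.backward()
        x_adv = x + eps * x.grad.sign()       # L-infinity bounded perturbation
        return x_adv.clamp(0.0, 1.0).detach()

    def reduce_feature_space(x, bits=3):
        """Quantize pixel intensities to 2**bits levels before classification,
        discarding the fine-grained features adversarial noise relies on."""
        levels = 2 ** bits - 1
        return torch.round(x * levels) / levels

    if __name__ == "__main__":
        torch.manual_seed(0)
        # Toy linear classifier standing in for a trained DNN.
        model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
        x = torch.rand(8, 1, 28, 28)          # batch of images scaled to [0, 1]
        y = torch.randint(0, 10, (8,))        # ground-truth labels

        x_adv = fgsm_attack(model, x, y, eps=0.1)
        raw_pred = model(x_adv).argmax(dim=1)
        def_pred = model(reduce_feature_space(x_adv)).argmax(dim=1)
        print("accuracy on adversarial inputs, no defense:", (raw_pred == y).float().mean().item())
        print("accuracy with quantization defense:", (def_pred == y).float().mean().item())

With a trained model, the same measurement, repeated for white-box, black-box, targeted, and non-targeted variants of the attack, yields the kind of accuracy and false-positive figures the abstract refers to.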



Acknowledgments

The authors are highly thankful for support from the National Key Research Program (2019YFB1706001), the National Natural Science Foundation of China (61773001), and the Industrial Internet Innovation Development Project (TC190H468).

Author information

Corresponding author

Correspondence to Na Pang.



Copyright information

© 2020 Springer Nature Switzerland AG

About this paper


Cite this paper

Pang, N., Hong, S., Pan, Y., Ji, Y. (2020). Efficient Defense Against Adversarial Attacks and Security Evaluation of Deep Learning System. In: Chen, X., Yan, H., Yan, Q., Zhang, X. (eds) Machine Learning for Cyber Security. ML4CS 2020. Lecture Notes in Computer Science, vol 12487. Springer, Cham. https://doi.org/10.1007/978-3-030-62460-6_53


  • DOI: https://doi.org/10.1007/978-3-030-62460-6_53

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-62459-0

  • Online ISBN: 978-3-030-62460-6

  • eBook Packages: Computer Science, Computer Science (R0)
