
Adversarial Examples Are Closely Relevant to Neural Network Models - A Preliminary Experiment Explore

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 13345)

Abstract

Neural networks are fragile: adversarial examples can readily fool them. Consequently, researchers worldwide have paid close attention to adversarial examples, producing many results, e.g., attack and defense approaches and algorithms. However, how adversarial examples affect neural networks is still not well understood. To shed light on this question, we present hypotheses and design extensive experiments to gather more information about adversarial examples. Through these experiments, we investigate a neural network's sensitivity to adversarial examples along several dimensions, e.g., model architecture, activation function, and loss function. The results show that adversarial examples are closely related to all of these factors. In particular, this sensitivity property can help distinguish adversarial examples from the rest of the data set. We hope this work will inspire research on adversarial example detection.
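
To make the sensitivity experiments concrete, the sketch below shows one way such a probe could be set up in PyTorch: craft an adversarial example with the fast gradient sign method (FGSM) and compare how strongly the model's softmax output reacts to small input noise around a clean input versus around the adversarial one. This is a minimal illustration, not the authors' code; the SmallCNN model, the epsilon value, and the noise-based sensitivity score are hypothetical choices made only for this example.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallCNN(nn.Module):
    """A minimal CNN classifier for 28x28 grayscale inputs (MNIST-like)."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def fgsm(model, x, y, eps):
    """One-step FGSM: x_adv = x + eps * sign(grad_x loss), clipped to [0, 1]."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0.0, 1.0).detach()

def output_sensitivity(model, x, sigma=0.01, n=16):
    """Mean absolute change of the softmax output under small Gaussian
    input noise; used here as one (hypothetical) sensitivity score."""
    with torch.no_grad():
        p0 = F.softmax(model(x), dim=1)
        deltas = []
        for _ in range(n):
            noisy = (x + sigma * torch.randn_like(x)).clamp(0.0, 1.0)
            deltas.append((F.softmax(model(noisy), dim=1) - p0).abs().sum(dim=1))
        return torch.stack(deltas).mean(dim=0)

if __name__ == "__main__":
    torch.manual_seed(0)
    model = SmallCNN().eval()        # untrained, for illustration only
    x = torch.rand(1, 1, 28, 28)     # stand-in for a data set image
    y = torch.tensor([3])            # stand-in label
    x_adv = fgsm(model, x, y, eps=0.1)
    print("clean sensitivity:", output_sensitivity(model, x).item())
    print("adv   sensitivity:", output_sensitivity(model, x_adv).item())

Repeating such a probe while swapping the model architecture, the activation function (e.g., ReLU versus ELU), or the training loss is one plausible way to study the dependencies the abstract describes.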

This work is supported by the National Science Foundation of China (Grant No. 62071275) and Shandong Province Key Innovation Project (Grant No. 2020CXGC010903 and Grant No. 2021SFGC0701).

Author information

Corresponding author

Correspondence to Ju Liu.

Copyright information

© 2022 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhou, Z., Liu, J., Han, Y. (2022). Adversarial Examples Are Closely Relevant to Neural Network Models - A Preliminary Experiment Explore. In: Tan, Y., Shi, Y., Niu, B. (eds) Advances in Swarm Intelligence. ICSI 2022. Lecture Notes in Computer Science, vol 13345. Springer, Cham. https://doi.org/10.1007/978-3-031-09726-3_14

  • DOI: https://doi.org/10.1007/978-3-031-09726-3_14

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-09725-6

  • Online ISBN: 978-3-031-09726-3

  • eBook Packages: Computer Science, Computer Science (R0)
