
One-Pixel Adversarial Example that Is Safe for Friendly Deep Neural Networks

  • Conference paper
Information Security Applications (WISA 2018)

Part of the book series: Lecture Notes in Computer Science (LNSC, volume 11402)


Abstract

Deep neural networks (DNNs) offer superior performance in machine-learning tasks such as image recognition, speech recognition, pattern analysis, and intrusion detection. In this paper, we propose a one-pixel adversarial example that is safe for friendly deep neural networks. By modifying only one pixel, the proposed method generates an adversarial example that is misclassified by an enemy classifier yet correctly classified by a friendly classifier. To verify its performance, we ran experiments using the CIFAR-10 dataset, ResNet model classifiers, and the TensorFlow library. The results show that, modifying a single pixel, the proposed method achieves success rates of 13.5% and 26.0% for targeted and untargeted attacks, respectively. These rates are slightly lower than those of the conventional one-pixel method (15% targeted, 33.5% untargeted), but the proposed method keeps the friendly classifier's output correct in 100% of cases. Moreover, when allowed to modify five pixels, the method achieves success rates of 20.5% and 52.0% for targeted and untargeted attacks, respectively.
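The abstract can be read as a constrained black-box search: find a single pixel (position and colour) whose modification flips the enemy classifier's decision while leaving the friendly classifier's decision intact. Judging from the reference list [3, 13, 14], the search is presumably driven by differential evolution, as in the original one-pixel attack [14]. The sketch below is an illustrative assumption, not the authors' implementation: d_friend and d_enemy are hypothetical pre-trained Keras classifiers mapping a raw (32, 32, 3) CIFAR-10 image to ten class probabilities, and the fitness function is one plausible untargeted objective.

    # Illustrative sketch of a friend-safe one-pixel search. Assumptions:
    # d_friend and d_enemy are hypothetical trained Keras CIFAR-10 models
    # taking raw 0-255 pixel values; the fitness below is not the paper's.
    import numpy as np
    from scipy.optimize import differential_evolution

    def perturb(image, candidate):
        """Apply a one-pixel change encoded as (row, col, r, g, b)."""
        row, col = int(candidate[0]), int(candidate[1])
        patched = image.copy()
        patched[row, col] = candidate[2:5]  # overwrite one RGB pixel
        return patched

    def fitness(candidate, image, true_label, d_friend, d_enemy):
        """Lower is better: fool the enemy, keep the friend correct."""
        adv = perturb(image, candidate)[np.newaxis]  # add batch axis
        p_enemy = d_enemy.predict(adv, verbose=0)[0]
        p_friend = d_friend.predict(adv, verbose=0)[0]
        # Untargeted variant: drive the enemy's confidence in the true
        # class down while keeping the friend's confidence in it high.
        return p_enemy[true_label] - p_friend[true_label]

    # Pixel coordinates in [0, 31] and channel values in [0, 255].
    bounds = [(0, 31), (0, 31), (0, 255), (0, 255), (0, 255)]
    # Given trained models and a test image, the search would run as:
    # result = differential_evolution(
    #     fitness, bounds, args=(image, true_label, d_friend, d_enemy),
    #     maxiter=100, popsize=10, recombination=0.7, seed=0)

A targeted variant would instead reward the enemy's confidence in the attacker's chosen class, and the five-pixel extension mentioned in the abstract simply enlarges the candidate vector to five (row, col, r, g, b) tuples.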


References

  1. Abadi, M., et al.: TensorFlow: a system for large-scale machine learning. In: 12th USENIX Symposium on Operating Systems Design and Implementation (OSDI 16), pp. 265–283 (2016)

  2. Carlini, N., Wagner, D.: Towards evaluating the robustness of neural networks. In: 2017 IEEE Symposium on Security and Privacy (SP), pp. 39–57. IEEE (2017)

  3. Das, S., Suganthan, P.N.: Differential evolution: a survey of the state-of-the-art. IEEE Trans. Evol. Comput. 15(1), 4–31 (2011)

  4. Goodfellow, I., Shlens, J., Szegedy, C.: Explaining and harnessing adversarial examples. In: International Conference on Learning Representations (2015)

  5. He, K., Zhang, X., Ren, S., Sun, J.: Deep residual learning for image recognition. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 770–778 (2016)

  6. Ketkar, N.: Stochastic gradient descent. In: Ketkar, N. (ed.) Deep Learning with Python, pp. 111–130. Apress, Berkeley (2017). https://doi.org/10.1007/978-1-4842-2766-4_8

  7. Krizhevsky, A., Nair, V., Hinton, G.: The CIFAR-10 dataset (2014). http://www.cs.toronto.edu/~kriz/cifar.html

  8. Kurakin, A., Goodfellow, I., Bengio, S.: Adversarial examples in the physical world. In: ICLR Workshop (2017)

  9. Kwon, H., Kim, Y., Park, K.W., Yoon, H., Choi, D.: Friend-safe evasion attack: an adversarial example that is correctly recognized by a friendly classifier. Comput. Secur. 78, 380–397 (2018)

  10. Moosavi-Dezfooli, S.M., Fawzi, A., Frossard, P.: DeepFool: a simple and accurate method to fool deep neural networks. In: Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 2574–2582 (2016)

  11. Papernot, N., McDaniel, P., Wu, X., Jha, S., Swami, A.: Distillation as a defense to adversarial perturbations against deep neural networks. In: 2016 IEEE Symposium on Security and Privacy (SP), pp. 582–597. IEEE (2016)

  12. Schmidhuber, J.: Deep learning in neural networks: an overview. Neural Netw. 61, 85–117 (2015)

  13. Storn, R., Price, K.: Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. J. Glob. Optim. 11(4), 341–359 (1997)

  14. Su, J., Vargas, D.V., Kouichi, S.: One pixel attack for fooling deep neural networks. arXiv preprint arXiv:1710.08864 (2017)

  15. Szegedy, C., et al.: Intriguing properties of neural networks. In: International Conference on Learning Representations (2014). http://arxiv.org/abs/1312.6199


Acknowledgement

This work was supported by grants from the National Research Foundation of Korea (NRF) funded by the Korean government (MSIT) (2016R1A4A1011761 and 2017R1A2B4006026) and by an Institute for Information & Communications Technology Promotion (IITP) grant funded by the Korean government (MSIT) (No. 2016-0-00173).

Author information

Corresponding author

Correspondence to Daeseon Choi.

Appendix


Table 2. \(D_{\mathrm{friend}}\) and \(D_{\mathrm{enemy}}\) model of 34-layer ResNet [5].
Table 3. \(D_{\mathrm{friend}}\) and \(D_{\mathrm{enemy}}\) model parameters.
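
Tables 2 and 3 specify the two classifiers only at the level of architecture (34-layer ResNet [5]) and training hyperparameters; their contents are not reproduced on this page. As a rough illustration of what such a pair might look like, the sketch below builds two independently initialized ResNet-34-style Keras models for CIFAR-10. Every width, depth, and stride choice here is an assumption standing in for the paper's tables.

    # Hypothetical sketch of D_friend / D_enemy as two 34-layer ResNets
    # for CIFAR-10 (the paper's exact configuration is in Tables 2-3).
    from tensorflow.keras import layers, models

    def residual_block(x, filters, stride=1):
        """Standard two-convolution residual block [5]."""
        shortcut = x
        y = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
        y = layers.BatchNormalization()(y)
        y = layers.ReLU()(y)
        y = layers.Conv2D(filters, 3, padding="same")(y)
        y = layers.BatchNormalization()(y)
        if stride != 1 or shortcut.shape[-1] != filters:
            # 1x1 projection so the shapes match for the addition.
            shortcut = layers.Conv2D(filters, 1, strides=stride)(shortcut)
        return layers.ReLU()(layers.Add()([y, shortcut]))

    def resnet34(num_classes=10):
        inputs = layers.Input(shape=(32, 32, 3))
        x = layers.Conv2D(64, 3, padding="same")(inputs)
        # 3+4+6+3 two-conv blocks plus the stem and the dense head
        # give the 34 weight layers of ResNet-34.
        for filters, blocks in [(64, 3), (128, 4), (256, 6), (512, 3)]:
            for i in range(blocks):
                stride = 2 if (i == 0 and filters > 64) else 1
                x = residual_block(x, filters, stride=stride)
        x = layers.GlobalAveragePooling2D()(x)
        outputs = layers.Dense(num_classes, activation="softmax")(x)
        return models.Model(inputs, outputs)

    # Two independent initializations; each would be trained separately
    # on CIFAR-10 to play the friendly and enemy roles.
    d_friend, d_enemy = resnet34(), resnet34()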


Copyright information

© 2019 Springer Nature Switzerland AG

About this paper


Cite this paper

Kwon, H., Kim, Y., Yoon, H., Choi, D. (2019). One-Pixel Adversarial Example that Is Safe for Friendly Deep Neural Networks. In: Kang, B., Jang, J. (eds) Information Security Applications. WISA 2018. Lecture Notes in Computer Science, vol 11402. Springer, Cham. https://doi.org/10.1007/978-3-030-17982-3_4


  • DOI: https://doi.org/10.1007/978-3-030-17982-3_4


  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-17981-6

  • Online ISBN: 978-3-030-17982-3

  • eBook Packages: Computer Science; Computer Science (R0)
