
Adversarial Defense Networks via Gaussian Noise and RBF

  • Conference paper
Artificial Intelligence and Security (ICAIS 2021)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 12736)


Abstract

Convolutional Neural Networks (CNNs) have excellent representational power and are state-of-the-art classifiers on many tasks. However, CNNs are vulnerable to adversarial examples: samples carrying imperceptible perturbations that nonetheless dramatically mislead the network. Past studies have found that Radial Basis Function (RBF) networks can effectively reduce the linearity of a neural network model, and that Gaussian noise injection can prevent the network from overfitting, both of which are conducive to defending against adversarial examples. In this paper, we propose a defense method that incorporates Gaussian noise injection and an RBF network, and we analytically investigate the robustness mechanism of this incorporated defense. The proposed method has two advantages: (1) it achieves high classification accuracy, and (2) it resists various adversarial attacks effectively. Experimental results show that the proposed method achieves about 79.25% accuracy on the MNIST dataset and 43.87% accuracy on the Fashion-MNIST dataset, even under a full white-box attack in which attackers can craft malicious adversarial examples directly from the defense models.
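The abstract only sketches the architecture. As one concrete illustration, the following PyTorch snippet shows how input Gaussian noise injection and an RBF output layer might be combined in a small CNN for MNIST-sized images. This is a minimal sketch of the general technique, not the authors' reported configuration: the layer sizes, noise standard deviation `sigma`, and kernel width `gamma` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class RBFLayer(nn.Module):
    """Gaussian RBF output layer: class scores from distances to learned centers."""
    def __init__(self, in_features: int, num_classes: int, gamma: float = 1.0):
        super().__init__()
        # One learnable center per class (assumed design choice).
        self.centers = nn.Parameter(torch.randn(num_classes, in_features))
        self.gamma = gamma

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squared Euclidean distance from each feature vector to each center.
        dist_sq = torch.cdist(x, self.centers).pow(2)
        # Gaussian kernel: activations decay away from the centers, making the
        # decision function locally nonlinear rather than near-linear, which is
        # the property the paper credits for resisting gradient-based attacks.
        return torch.exp(-self.gamma * dist_sq)

class NoisyRBFNet(nn.Module):
    """CNN feature extractor with input Gaussian noise and an RBF classifier."""
    def __init__(self, sigma: float = 0.1, num_classes: int = 10):
        super().__init__()
        self.sigma = sigma  # noise standard deviation (assumed value)
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 128), nn.ReLU(),  # assumes 28x28 inputs
        )
        self.rbf = RBFLayer(128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.training:
            # Gaussian noise injection: a stochastic regularizer that
            # discourages overfitting to clean training inputs.
            x = x + self.sigma * torch.randn_like(x)
        return self.rbf(self.features(x))
```

One plausible way to train such a model is to treat the RBF activations as logits under `nn.CrossEntropyLoss`; evaluating it under a white-box attack then means differentiating through this full noisy-RBF forward pass when crafting the adversarial perturbations, as the abstract's threat model describes.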

The research has been supported by the Natural Science Foundation of China under grant number 61872422, and the Natural Science Foundation of Zhejiang Province, China, under grant number LY19F020028.




Copyright information

© 2021 Springer Nature Switzerland AG

About this paper


Cite this paper

Li, J., Gao, J., Jiang, Q., He, G. (2021). Adversarial Defense Networks via Gaussian Noise and RBF. In: Sun, X., Zhang, X., Xia, Z., Bertino, E. (eds.) Artificial Intelligence and Security. ICAIS 2021. Lecture Notes in Computer Science, vol. 12736. Springer, Cham. https://doi.org/10.1007/978-3-030-78609-0_42

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-78609-0_42

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-78608-3

  • Online ISBN: 978-3-030-78609-0

  • eBook Packages: Computer Science, Computer Science (R0)
