An Efficient Blurring-Reconstruction Model to Defend Against Adversarial Attacks

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNTCS, volume 12396)

Abstract

Although deep neural networks have been widely applied in many fields, they can be easily fooled by adversarial examples, which are generated by adding imperceptible perturbations to natural images. Intuitively, traditional denoising methods can be used to remove the added noise, but useful information in the original image is inevitably eliminated along with it. Inspired by image super-resolution, we propose a novel blurring-reconstruction method to defend against adversarial attacks, which consists of two stages: blurring and reconstruction. In the blurring stage, an improved bilateral filter, which we call the Other Channels Assisted Bilateral Filter (OCABF), is first used to remove the perturbations, followed by bilinear-interpolation-based downsampling that resizes the image to a quarter of its original size. Then, in the reconstruction stage, a deep super-resolution neural network called SrDefense-Net recovers the natural details. It enlarges the downsampled, blurred image back to the original size and complements the lost information. Extensive experiments show that the proposed method outperforms state-of-the-art defense methods while requiring fewer training images.
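
A rough sketch of the pipeline described above is given below. It is illustrative only: a standard per-channel bilateral filter stands in for the paper's OCABF (whose cross-channel weighting is not detailed here), a small SRCNN-style network stands in for SrDefense-Net, and the downsampling factor, layer sizes, and all names are assumptions made for the sketch rather than the authors' implementation.

    # Hedged sketch of the blur-downsample-reconstruct defense pipeline.
    # OCABF and SrDefense-Net are replaced by placeholders: a standard
    # bilateral filter and a small SRCNN-style CNN. Hyperparameters and
    # the exact downsampling factor are assumptions.
    import cv2
    import numpy as np
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def blur_stage(img_uint8, scale=0.5):
        """Blurring stage: bilateral filtering + bilinear downsampling.

        img_uint8: HxWx3 uint8 image. scale=0.5 assumes "a quarter size"
        means a quarter of the pixel count (half of each dimension).
        """
        # Standard bilateral filter as a stand-in for OCABF
        # (arguments: diameter, sigmaColor, sigmaSpace).
        smoothed = cv2.bilateralFilter(img_uint8, 5, 50.0, 50.0)
        h, w = smoothed.shape[:2]
        return cv2.resize(smoothed, (int(w * scale), int(h * scale)),
                          interpolation=cv2.INTER_LINEAR)

    class SRPlaceholderNet(nn.Module):
        """SRCNN-style placeholder for SrDefense-Net (architecture assumed)."""
        def __init__(self, up_factor=2):
            super().__init__()
            self.up_factor = up_factor
            self.body = nn.Sequential(
                nn.Conv2d(3, 64, 9, padding=4), nn.ReLU(inplace=True),
                nn.Conv2d(64, 32, 5, padding=2), nn.ReLU(inplace=True),
                nn.Conv2d(32, 3, 5, padding=2),
            )

        def forward(self, x):
            # Enlarge back to the original resolution, then refine details.
            x = F.interpolate(x, scale_factor=self.up_factor,
                              mode='bilinear', align_corners=False)
            return self.body(x)

    def defend(img_uint8, sr_net):
        """Full defense: blur + downsample, then learned reconstruction.
        The returned image would be fed to the protected classifier."""
        small = blur_stage(img_uint8)
        x = torch.from_numpy(small).float().permute(2, 0, 1).unsqueeze(0) / 255.0
        with torch.no_grad():
            restored = sr_net(x).clamp(0, 1)
        return (restored.squeeze(0).permute(1, 2, 0).numpy() * 255).astype(np.uint8)

    # Example usage with a random image; in practice the (possibly adversarial)
    # input is defended before classification, and sr_net is trained on natural
    # images to restore the details removed by blurring.
    if __name__ == "__main__":
        dummy = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)
        net = SRPlaceholderNet(up_factor=2)
        print(defend(dummy, net).shape)  # (224, 224, 3)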

Acknowledgement

This work is supported by the National Research and Development Program of China (No. 2017YFB1010004).

Author information

Corresponding author

Correspondence to Liming Wang.

Copyright information

© 2020 Springer Nature Switzerland AG

About this paper

Cite this paper

Zhou, W., Wang, L., Zheng, Y. (2020). An Efficient Blurring-Reconstruction Model to Defend Against Adversarial Attacks. In: Farkaš, I., Masulli, P., Wermter, S. (eds) Artificial Neural Networks and Machine Learning – ICANN 2020. ICANN 2020. Lecture Notes in Computer Science, vol 12396. Springer, Cham. https://doi.org/10.1007/978-3-030-61609-0_39

Download citation

  • DOI: https://doi.org/10.1007/978-3-030-61609-0_39

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-030-61608-3

  • Online ISBN: 978-3-030-61609-0

  • eBook Packages: Computer Science, Computer Science (R0)
