
An adversarial defense algorithm based on robust U-net

Published in: Multimedia Tools and Applications

Abstract

With the continuous development of neural network technology, neural networks have been widely applied in fields such as autonomous driving and biomedicine. However, adversarial attacks can significantly degrade their performance and reliability, making defense against such attacks a crucial research area. In this paper, we propose a defense method based on a robust U-Net (RU-Net) and a generative adversarial network (GAN), with the target model trained on a clean training set containing no adversarial samples. First, we train the target neural network on the clean training set. Then, we train the RU-Net on the same clean set, employing reparameterization and random noise to resist adversarial perturbations. To supervise the quality of the transformed images, we adopt a GAN structure, using the RU-Net as the generator and a discriminator to ensure the quality of the generated images. Finally, we retrain the target neural network on the transformed images, obtaining a robust model capable of defending against adversarial attacks. Experimental evaluations on CIFAR-10 and Tiny ImageNet demonstrate that our method effectively counters adversarial attacks and enhances neural network robustness.
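The randomness described above rests on the reparameterization trick: rather than adding fixed noise, the network predicts a mean and a log-variance and samples from the resulting Gaussian, so the stochastic draw is isolated in a noise term and gradients can still flow through the predicted parameters. A minimal NumPy sketch of this sampling step (the function name and shapes are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps with eps ~ N(0, I).

    The random draw is confined to eps, so during training gradients
    can propagate through mu and log_var (the reparameterization trick).
    """
    sigma = np.exp(0.5 * log_var)        # log-variance -> standard deviation
    eps = rng.standard_normal(mu.shape)  # fresh Gaussian noise on each call
    return mu + sigma * eps
```

In a defense of this kind, resampling the noise on every forward pass means an attacker cannot rely on a fixed input-output mapping when crafting perturbations, while the GAN discriminator constrains the transformed images to stay close to the clean distribution.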


Data Availability

Data will be made available upon request.


Acknowledgements

This research was supported by the Major Program of the National Natural Science Foundation of China (Grant Nos. 62192730 and 62192733).

Author information


Corresponding author

Correspondence to Dian Zhang.

Ethics declarations

Conflicts of interest

The authors declare that they have no conflict of interest. The datasets generated and analysed during the current study are available in the CIFAR-10 and Tiny ImageNet repositories [CIFAR-10: http://www.cs.toronto.edu/~kriz/cifar.html, Tiny ImageNet: http://cs231n.stanford.edu/tiny-imagenet-200.zip].

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Zhang, D., Dong, Y. & Yang, H. An adversarial defense algorithm based on robust U-net. Multimed Tools Appl 83, 45575–45601 (2024). https://doi.org/10.1007/s11042-023-17355-w

