Abstract
With the continuous development of neural network technology, neural networks have been widely applied in fields such as autonomous driving and biomedicine. However, adversarial attacks can significantly degrade their performance and reliability, making defense against such attacks a crucial research area. In this paper, we propose a defense method based on a robust U-Net (RU-Net) and a generative adversarial network (GAN), which requires only a clean training set containing no adversarial samples. First, we train the target neural network on the clean training set. We then train the RU-Net on the same set, employing reparameterization and random noise to resist adversarial perturbations. To supervise the quality of the transformed images, we adopt a GAN framework in which the RU-Net serves as the generator and a discriminator ensures the quality of the generated images. Finally, we retrain the target neural network on the transformed images, obtaining a robust model capable of defending against adversarial attacks. Experimental evaluations on CIFAR-10 and Tiny ImageNet demonstrate the effectiveness of our method in countering adversarial attacks and enhancing neural network robustness.
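The pipeline summarized above can be sketched in code. The following is a minimal illustration assuming PyTorch, not the authors' implementation: all module names, layer widths, and the loss weighting are hypothetical, and it shows only the two ideas named in the abstract, a reparameterized bottleneck that injects random noise into the transformation, and GAN-style supervision with the RU-Net as generator.

```python
import torch
import torch.nn as nn

class RobustUNet(nn.Module):
    """Tiny U-Net-style image transformer with a reparameterized bottleneck.

    The reparameterization trick (z = mu + sigma * eps) injects random noise
    into the latent map, so small adversarial perturbations in the input are
    less likely to survive the transformation.
    """
    def __init__(self, channels=3, width=16):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(channels, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(),
        )
        self.mu = nn.Conv2d(width, width, 1)      # mean of the latent map
        self.logvar = nn.Conv2d(width, width, 1)  # log-variance of the latent map
        self.dec = nn.Conv2d(width, channels, 3, padding=1)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        return torch.sigmoid(self.dec(z))         # transformed image in [0, 1]

class Discriminator(nn.Module):
    """Judges whether a transformed image looks like a clean one."""
    def __init__(self, channels=3, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, width, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),       # single real/fake logit
        )
    def forward(self, x):
        return self.net(x)

# One GAN-style supervision step on dummy "clean" 32x32 RGB images:
gen, disc = RobustUNet(), Discriminator()
bce = nn.BCEWithLogitsLoss()
clean = torch.rand(4, 3, 32, 32)
fake = gen(clean)

# Discriminator: clean images are "real", transformed images are "fake".
d_loss = bce(disc(clean), torch.ones(4, 1)) + bce(disc(fake.detach()), torch.zeros(4, 1))
# Generator: fool the discriminator while staying close to the clean input,
# so the transformation preserves image quality.
g_loss = bce(disc(fake), torch.ones(4, 1)) + nn.functional.mse_loss(fake, clean)
```

In a full training loop, `d_loss` and `g_loss` would be backpropagated alternately, and the transformed images would then be used to retrain the target classifier, as the abstract describes.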
Data Availability
Data will be made available upon request.
Acknowledgements
This research was supported by the Major Program of the National Natural Science Foundation of China (Grant Nos. 62192730 and 62192733).
Ethics declarations
Conflicts of interest
The authors declare that they have no conflict of interest. The datasets generated and analysed during the current study are available in the CIFAR-10 and Tiny ImageNet repositories [CIFAR-10: http://www.cs.toronto.edu/~kriz/cifar.html, Tiny ImageNet: http://cs231n.stanford.edu/tiny-imagenet-200.zip].
About this article
Cite this article
Zhang, D., Dong, Y. & Yang, H. An adversarial defense algorithm based on robust U-net. Multimed Tools Appl 83, 45575–45601 (2024). https://doi.org/10.1007/s11042-023-17355-w