
LDN-RC: a lightweight denoising network with residual connection to improve adversarial robustness

Published in Applied Intelligence

Abstract

Deep neural networks (DNNs) are prone to producing incorrect predictions under attack by adversarial samples. Several defense methods have been proposed to cope with this problem, but most are based on adversarial training, which incurs a large computational cost and does not strengthen the architecture of the network model itself against adversarial attacks. Recent studies have shown that feature denoising can remove the adversarial perturbations in adversarial samples. In this paper, we propose a lightweight denoising network with residual connection (LDN-RC), in which an internal denoising block and an intermediate denoising block are introduced for feature denoising and sample denoising, respectively. Combining the two denoising blocks in the network model withstands the interference of adversarial perturbations to a large extent while also saving computational resources. As the training strategy, a two-stage denoising approach with fine-tuning is presented to train a ResNet model on the MNIST, CIFAR-10, and SVHN datasets; the accuracy of the enhanced model exceeds 60% on all three datasets under the \({L}_{\infty }\)-PGD white-box attack, which demonstrates that LDN-RC can effectively improve the adversarial robustness of the network model.
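The core idea described above, a denoising operation wrapped in a residual connection so that the block only applies a correction on top of the identity path, can be sketched in plain Python. This is a minimal illustrative sketch on a 1-D signal with a hypothetical sliding-window mean denoiser; the paper's actual blocks operate on feature maps and input images inside a ResNet:

```python
def mean_filter(x, k=3):
    """Simple edge-padded sliding-window mean denoiser (hypothetical stand-in
    for the paper's denoising operation)."""
    r = k // 2
    padded = [x[0]] * r + list(x) + [x[-1]] * r
    return [sum(padded[i:i + k]) / k for i in range(len(x))]

def residual_denoise(x, alpha=1.0):
    """Residual connection around the denoiser:
    output = input + alpha * (denoised - input),
    so with alpha = 0 the block reduces to the identity mapping."""
    d = mean_filter(x)
    return [xi + alpha * (di - xi) for xi, di in zip(x, d)]

# A spike stands in for an adversarial perturbation on an otherwise clean signal.
signal = [1.0, 1.0, 5.0, 1.0, 1.0]
print(residual_denoise(signal))  # the spike at index 2 is strongly attenuated
```

The residual form is what makes such a block cheap to add to an existing network: it can be initialized (or gated with a small `alpha`) to behave as the identity, so the pretrained model's behavior on clean samples is preserved while the denoising correction suppresses perturbations.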



Data availability

The datasets generated during and/or analysed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

All the authors are deeply grateful to the editors for the smooth and fast handling of the manuscript. The authors would also like to thank the anonymous referees for their valuable suggestions to improve the quality of this paper. This work is supported by the National Natural Science Foundation of China (Grant Nos. 61802111 and 61872125) and the Key Science and Technology Project of Henan Province (Grant Nos. 201300210400 and 212102210094).

Author information

Corresponding authors

Correspondence to Xin He, Zhihua Gan or Xiangjun Wu.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Chai, X., Wei, T., Chen, Z. et al. LDN-RC: a lightweight denoising network with residual connection to improve adversarial robustness. Appl Intell 53, 5224–5239 (2023). https://doi.org/10.1007/s10489-022-03847-z

