Clean and Compact: Efficient Data-Free Backdoor Defense with Model Compactness

  • Conference paper
  • Computer Vision – ECCV 2024 (ECCV 2024)

Abstract

Deep neural networks (DNNs) have been widely deployed in real-world, mission-critical applications, necessitating effective approaches to protect deep learning models against malicious attacks. Motivated by the high stealthiness and potential harm of backdoor attacks, a series of backdoor defense methods for DNNs have been proposed. However, most existing approaches require access to clean training data, hindering their practical use. Additionally, state-of-the-art (SOTA) solutions cannot simultaneously enhance model robustness and compactness in a data-free manner, which is crucial in resource-constrained applications.

To address these challenges, in this paper we propose Clean & Compact (C&C), an efficient data-free backdoor defense mechanism that brings both purification and compactness to the original infected DNNs. Built upon the intriguing rank-level sensitivity of infected models to trigger patterns, C&C co-explores and achieves high model cleanliness and efficiency without the need for training data, making it very attractive in many real-world, resource-limited scenarios. Extensive evaluations across different settings consistently demonstrate that our proposed approach outperforms SOTA backdoor defense methods.
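
The abstract above refers to model compactness obtained through rank reduction. As a hedged illustration of that general idea only (not of the actual C&C procedure, which additionally exploits rank-level sensitivity to trigger patterns), the sketch below truncates a weight matrix to its top-k singular components; the function name `low_rank_truncate` and the choice of k are assumptions made for illustration.

```python
import numpy as np

def low_rank_truncate(W, k):
    """Generic low-rank compression step, shown purely to illustrate model
    compactness; it is not claimed to be the C&C method itself."""
    # Keep only the top-k singular components of the weight matrix.
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

W = np.random.randn(256, 512)
W_compact = low_rank_truncate(W, k=32)
# Storing the factors U[:, :k], s[:k], Vt[:k, :] takes k*(m + n + 1) numbers
# instead of the m*n numbers of the dense matrix.
print(W.shape, np.linalg.matrix_rank(W_compact))  # -> (256, 512) 32
```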


Notes

  1. We use \(\boldsymbol{\sigma}_\text{norm}\) instead of \(\boldsymbol{\sigma}\) because it normalizes the singular values of all layers to the same range, so that the threshold \(\tau_\text{scale}\) can be applied to each layer in a fair way; a minimal sketch of this idea follows the note.
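
To make the note concrete, the following is a minimal sketch assuming that the normalization divides each layer's singular values by that layer's largest singular value; the helper name `select_ranks` and the example value of `tau_scale` are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def select_ranks(weight_matrices, tau_scale=0.1):
    """Illustrative sketch (not the paper's exact algorithm): normalize each
    layer's singular values to [0, 1] by dividing by that layer's largest
    singular value, then count how many ranks exceed one shared threshold."""
    kept_ranks = []
    for W in weight_matrices:
        sigma = np.linalg.svd(W, compute_uv=False)   # singular values, descending
        sigma_norm = sigma / sigma.max()              # every layer mapped to [0, 1]
        kept_ranks.append(int((sigma_norm > tau_scale).sum()))
    return kept_ranks

# Toy usage: two layers with very different weight scales; because each layer
# is normalized on its own, the single threshold treats them comparably.
layers = [np.random.randn(64, 128), 10.0 * np.random.randn(256, 256)]
print(select_ranks(layers, tau_scale=0.1))
```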


Acknowledgements

This work was partially supported by the National Science Foundation (NSF) under grants CNS2114220, CCF2211163, IIS2311596, CNS2120276, CNS2145389, IIS2311597, CCF1955909, and CNS2152908.

Author information


Corresponding author

Correspondence to Huy Phan.


Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary material 1 (pdf 849 KB)


Copyright information

© 2025 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Phan, H. et al. (2025). Clean and Compact: Efficient Data-Free Backdoor Defense with Model Compactness. In: Leonardis, A., Ricci, E., Roth, S., Russakovsky, O., Sattler, T., Varol, G. (eds) Computer Vision – ECCV 2024. ECCV 2024. Lecture Notes in Computer Science, vol 15118. Springer, Cham. https://doi.org/10.1007/978-3-031-73027-6_16


  • DOI: https://doi.org/10.1007/978-3-031-73027-6_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-73026-9

  • Online ISBN: 978-3-031-73027-6

  • eBook Packages: Computer Science, Computer Science (R0)
