A Backdoor Embedding Method for Backdoor Detection in Deep Neural Networks

  • Conference paper
  • Ubiquitous Security (UbiSec 2021)

Abstract

With the coming of the artificial intelligence (AI) era, deep learning models are widely applied in many aspects of daily life, such as face recognition, speech recognition, and autonomous driving. AI security is therefore becoming a pressing problem. Because a deep learning model is usually regarded as a black box, it is susceptible to backdoor attacks that embed hidden patterns to influence the model's predictions. To promote backdoor detection research, this work proposes a simple backdoor embedding method that produces deep learning models with a backdoor for validating backdoor detection algorithms. Through conceptual embedding techniques, we decouple the backdoor pattern recognition function from the normal classification function in a deep learning model. One advantage is that the backdoor activation mechanism does not directly interfere with the normal function of the original DNN-based model. Another is that the interference with the final prediction result can be more flexible: the backdoor pattern recognition phase and the model prediction interference phase can be developed independently. The goal is a deep model whose performance on normal sample classification is indistinguishable from that of a clean model, while it can be triggered by the hidden backdoor patterns at the right time. The analysis and experiments validate the proposed method: the backdoored model achieves almost the same prediction performance as the normal model, while the backdoor mechanism can be activated precisely.
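The decoupling described in the abstract can be illustrated with a minimal toy sketch (not the paper's actual architecture): a stand-in classifier and a separate trigger-recognition branch run independently, and the trigger branch only overrides the final prediction. The 3x3 corner trigger, the `TARGET_CLASS` label, and the intensity-bucket classifier are all hypothetical choices for illustration.

```python
import numpy as np

TRIGGER = np.ones((3, 3))  # hypothetical white-patch trigger pattern
TARGET_CLASS = 7           # hypothetical attacker-chosen output label

def normal_classifier(image):
    # Stand-in for the original DNN: classify by mean-intensity bucket.
    return int(image.mean() * 10) % 10

def trigger_detector(image):
    # Decoupled recognition branch: fires only when the bottom-right
    # 3x3 corner matches the trigger pattern exactly.
    return np.array_equal(image[-3:, -3:], TRIGGER)

def backdoored_model(image):
    # The two functions stay independent: the detector never alters the
    # classifier's computation, it only overrides the final prediction,
    # so clean-sample behavior is identical to the normal model.
    if trigger_detector(image):
        return TARGET_CLASS
    return normal_classifier(image)

clean = np.full((8, 8), 0.5)
poisoned = clean.copy()
poisoned[-3:, -3:] = 1.0  # stamp the trigger onto the input

print(backdoored_model(clean))     # unchanged normal prediction
print(backdoored_model(poisoned))  # forced to TARGET_CLASS
```

Because the override happens after, not inside, the normal classification path, the two phases can be developed and tuned independently, which is the flexibility the abstract claims.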



Acknowledgments

This work is partly supported by Hunan Provincial Natural Science Foundation under Grant Number 2020JJ5367, Project of Hunan Social Science Achievement Appraisal Committee in 2020 (No. XSP20YBZ043), Key Project of Teaching Reform in Colleges and Universities of Hunan Province under Grant Number HNJG-2021-0251, Scientific Research Fund of Hunan Provincial Education Department under Grant Number 21A0599, and Scientific Research Innovation Project of Xiangjiang College of Artificial Intelligence, Hunan Normal University.

Author information

Correspondence to Yinglong Dai.


Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Liu, M., Zheng, H., Liu, Q., Xing, X., Dai, Y. (2022). A Backdoor Embedding Method for Backdoor Detection in Deep Neural Networks. In: Wang, G., Choo, KK.R., Ko, R.K.L., Xu, Y., Crispo, B. (eds) Ubiquitous Security. UbiSec 2021. Communications in Computer and Information Science, vol 1557. Springer, Singapore. https://doi.org/10.1007/978-981-19-0468-4_1

  • DOI: https://doi.org/10.1007/978-981-19-0468-4_1

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-0467-7

  • Online ISBN: 978-981-19-0468-4

  • eBook Packages: Computer Science (R0)
