
Low-Poisoning Rate Invisible Backdoor Attack Based on Important Neurons

  • Conference paper
Wireless Algorithms, Systems, and Applications (WASA 2022)

Abstract

Existing label-consistent invisible backdoor attacks typically require a high poisoning rate to achieve a high attack success rate. To address this problem, this paper proposes INIB, a low-poisoning-rate invisible backdoor attack based on important neurons, which strengthens the connection between the trigger and the target label using a neural gradient ranking algorithm. The method first applies the neural gradient ranking algorithm to identify the neurons with the most significant influence on the target label. It then uses gradient descent to establish a strong link between these important neurons and the trigger, generating the trigger by minimizing the difference between the important neurons' current and expected activation values. Consequently, the important neurons are strongly activated whenever an image contains the trigger, causing the model to misclassify it as the target label. Detailed experiments show that INIB achieves a very high attack success rate at a very low poisoning rate: on the MNIST dataset, it reaches a 98.7% backdoor attack success rate with a poisoning rate of only 1.64%.
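To make the two-step pipeline in the abstract concrete, below is a minimal sketch written against a PyTorch-style classifier. The function names, layer choice, hook-based mechanics, hyperparameters, and the fixed expected activation value are all illustrative assumptions for exposition, not the authors' actual INIB implementation.

import torch
import torch.nn.functional as F

def rank_important_neurons(model, layer, images, target_label, top_k=10):
    # Step 1 (neural gradient ranking): score each neuron of `layer` by the
    # gradient of the target-label logit with respect to its activation,
    # averaged over a batch, and keep the top-k most influential neurons.
    acts = {}

    def hook(module, inputs, output):
        output.retain_grad()  # keep the gradient on this non-leaf tensor
        acts["a"] = output

    handle = layer.register_forward_hook(hook)
    model.zero_grad()
    logits = model(images)
    logits[:, target_label].sum().backward()
    handle.remove()
    scores = acts["a"].grad.abs().mean(dim=0).flatten()
    return scores.topk(top_k).indices

def generate_trigger(model, layer, base_images, neuron_idx,
                     expected_value=5.0, steps=200, lr=0.1):
    # Step 2: optimize an additive trigger so the important neurons reach the
    # expected activation value, i.e. minimize the squared difference between
    # their current and expected activations (only the trigger is updated).
    trigger = torch.zeros_like(base_images[:1], requires_grad=True)
    opt = torch.optim.Adam([trigger], lr=lr)
    acts = {}
    handle = layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
    for _ in range(steps):
        opt.zero_grad()
        x = (base_images + trigger).clamp(0.0, 1.0)  # keep valid pixel range
        model(x)
        current = acts["a"].flatten(start_dim=1)[:, neuron_idx]
        loss = F.mse_loss(current, torch.full_like(current, expected_value))
        loss.backward()
        opt.step()
    handle.remove()
    return trigger.detach()

In a label-consistent setting such as the one the abstract describes, the resulting trigger would then be stamped onto only a small fraction of target-class training images while keeping their original labels, which is how the attack can succeed with a poisoning rate as low as 1.64%.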


Acknowledgements

This research is supported by the National Natural Science Foundation of China (NSFC) under Grant Nos. 62172377 and 61872205, and the Natural Science Foundation of Shandong Province under Grant No. ZR2019MF018.

Author information


Corresponding author

Correspondence to Hui Xia.



Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper


Cite this paper

Yang, Xg., Qian, Xy., Zhang, R., Huang, N., Xia, H. (2022). Low-Poisoning Rate Invisible Backdoor Attack Based on Important Neurons. In: Wang, L., Segal, M., Chen, J., Qiu, T. (eds) Wireless Algorithms, Systems, and Applications. WASA 2022. Lecture Notes in Computer Science, vol 13472. Springer, Cham. https://doi.org/10.1007/978-3-031-19214-2_31


  • DOI: https://doi.org/10.1007/978-3-031-19214-2_31

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-19213-5

  • Online ISBN: 978-3-031-19214-2

  • eBook Packages: Computer Science, Computer Science (R0)
