A Pragmatic Label-Specific Backdoor Attack

  • Conference paper

Frontiers in Cyber Security (FCS 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1726)

Abstract

Backdoor attacks are an insidious security threat to deep neural networks (DNNs): they inject a trigger into the model during training. A malicious attacker creates a link between a customized trigger and a targeted label, so that the poisoned model's prediction is manipulated whenever an input contains the predetermined trigger. However, most existing backdoor attacks rely on an obvious trigger (e.g., a conspicuous colored patch) and must modify the labels of the poisoned images; the images then appear mislabeled and cannot pass human inspection. In addition, designing the trigger usually requires knowledge of the entire training dataset, an extremely stringent experimental setting. These requirements considerably restrict the practicality of backdoor attacks in the real world.
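
As a point of contrast, the conventional dirty-label recipe criticized above can be sketched in a few lines. The following is a generic, BadNets-style illustration under assumed conventions (uint8 image tensors of shape (N, H, W, C), a white corner patch, a fixed poisoning rate), not code from this paper:

```python
import numpy as np

def poison_dirty_label(images, labels, target_label, patch_size=3, rate=0.1, seed=0):
    """Sketch of a conventional dirty-label backdoor: stamp a conspicuous
    patch and flip the label. All names and defaults are illustrative."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    idx = rng.choice(len(images), size=int(rate * len(images)), replace=False)
    # Stamp a bright square into the bottom-right corner of each chosen image.
    images[idx, -patch_size:, -patch_size:, :] = 255
    # Relabel the poisoned images as the attacker's target class; this
    # visible mislabeling is exactly what fails human inspection.
    labels[idx] = target_label
    return images, labels
```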

In this paper, the proposed algorithm removes these restrictions of existing backdoor attacks. Our label-specific backdoor attack designs a unique trigger for each label while accessing only the images of the target label. A victim model trained on our poisoned dataset maliciously outputs attacker-chosen predictions whenever the backdoor is activated by the trigger, yet it still performs well on benign samples. The proposed backdoor attack is therefore considerably more practical.
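
To make the contrast concrete, a minimal sketch of a label-specific, clean-label poisoning step might look as follows. The additive trigger blending, per-class poisoning rate, and dictionary interface are assumptions made for this illustration; the paper's actual trigger-design procedure is not reproduced here:

```python
import numpy as np

def poison_label_specific(images, labels, triggers, rate=0.1, seed=0):
    """Hypothetical clean-label poisoning with one trigger per class.
    `triggers` maps a class label to its own (H, W, C) float pattern."""
    rng = np.random.default_rng(seed)
    poisoned = images.astype(np.float32)  # work in float for blending
    for label, trigger in triggers.items():
        candidates = np.flatnonzero(labels == label)  # only this label's images are touched
        idx = rng.choice(candidates, size=int(rate * len(candidates)), replace=False)
        # Blend in the class-specific trigger; labels stay correct, so the
        # poisoned samples look consistently annotated to a human inspector.
        poisoned[idx] = np.clip(poisoned[idx] + trigger, 0.0, 255.0)
    return poisoned.astype(np.uint8), labels
```

At inference time, the attacker would add the same class-specific trigger to a test input to activate the backdoor, while trigger-free inputs are classified normally.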

Author information

Correspondence to Yu Wang.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

About this paper

Cite this paper

Wang, Y., Yang, H., Li, J., Ge, M. (2022). A Pragmatic Label-Specific Backdoor Attack. In: Ahene, E., Li, F. (eds) Frontiers in Cyber Security. FCS 2022. Communications in Computer and Information Science, vol 1726. Springer, Singapore. https://doi.org/10.1007/978-981-19-8445-7_10

  • DOI: https://doi.org/10.1007/978-981-19-8445-7_10

  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-19-8444-0

  • Online ISBN: 978-981-19-8445-7

  • eBook Packages: Computer Science, Computer Science (R0)
