
A Random Multi-target Backdooring Attack on Deep Neural Networks

  • Conference paper

Part of the book series: Communications in Computer and Information Science ((CCIS,volume 1454))

Abstract

Deep learning has made tremendous progress over the past ten years and has been applied in a variety of critical practical applications. However, recent studies have shown that deep learning models are vulnerable to backdoor attacks, in which the attacker may choose one or multiple target labels. Conventional multi-target backdoor attacks rely on multiple triggers, one per target. In this paper, we propose a novel method in which a single trigger corresponds to multiple target labels, and the location of the trigger is not restricted, which gives the attacker more flexibility. Having proposed the attack, we also consider how to defend against it: to distinguish backdoor images from clean images, we train a neural network as a detector that checks whether an image contains an abnormal region. Our experimental results show that the attack success rate exceeds 90% on MNIST, CIFAR-10, and GTSRB. Our detection method also successfully detects backdoor images with a trigger at a random location, with a detection success rate of 86.02%.
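To make the attack concrete, the sketch below illustrates one plausible reading of the abstract: a single trigger is stamped at an unconstrained, randomly chosen location, and that location determines which target label the poisoned sample receives. This is a minimal sketch under stated assumptions, not the paper's exact scheme; the 3x3 white trigger, the quadrant-to-label mapping, and the poison helper are all hypothetical stand-ins.

    import numpy as np

    # A single trigger pattern shared by all targets (assumed: a white 3x3 patch).
    TRIGGER = np.ones((3, 3), dtype=np.float32)

    def poison(image, rng):
        """Stamp the trigger at a random, unconstrained location and return the
        poisoned image together with an attacker-chosen target label.

        `image` is an H x W grayscale array in [0, 1] (e.g. MNIST). The
        quadrant-to-label mapping below is a hypothetical stand-in for however
        the paper ties trigger location to target label.
        """
        h, w = image.shape
        th, tw = TRIGGER.shape
        y = rng.integers(0, h - th + 1)   # location is not limited to a corner
        x = rng.integers(0, w - tw + 1)
        poisoned = image.copy()
        poisoned[y:y + th, x:x + tw] = TRIGGER
        # One trigger, multiple targets: derive the label from where it landed.
        target_label = int(2 * (y >= h // 2) + (x >= w // 2))  # quadrants 0..3
        return poisoned, target_label

    # Usage: poison a fraction of the training set before training the victim.
    rng = np.random.default_rng(0)
    clean = rng.random((28, 28)).astype(np.float32)
    backdoored, label = poison(clean, rng)

A detector in the spirit of the abstract could then be a binary classifier trained on clean/poisoned pairs produced by a routine like this, flagging an image as a backdoor image whenever it contains such an abnormal patch.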





Copyright information

© 2021 Springer Nature Singapore Pte Ltd.

About this paper


Cite this paper

Liu, X., Yu, X., Zhang, Z., Zhang, Q., Li, Y., Tan, Ya. (2021). A Random Multi-target Backdooring Attack on Deep Neural Networks. In: Tan, Y., Shi, Y., Zomaya, A., Yan, H., Cai, J. (eds) Data Mining and Big Data. DMBD 2021. Communications in Computer and Information Science, vol 1454. Springer, Singapore. https://doi.org/10.1007/978-981-16-7502-7_5


  • DOI: https://doi.org/10.1007/978-981-16-7502-7_5


  • Publisher Name: Springer, Singapore

  • Print ISBN: 978-981-16-7501-0

  • Online ISBN: 978-981-16-7502-7

  • eBook Packages: Computer Science, Computer Science (R0)
